By Karen Gehrman, Associate Director of Communications, UCSF Helen Diller Family Comprehensive Cancer Center
From chatbots to voice transcription to data review, our centers are already using AI to expedite common processes. Like many of my colleagues, I see AI working its way into my role managing digital communications for the cancer center at UCSF. At this year’s NACCDO/PAMN annual conference in Atlanta, I kept my ears open for mentions of how colleagues are using AI tools like ChatGPT in their MarComms efforts, and I heard some interesting recurring themes and a few surprising thoughts.
- Fighting the blank page. Many peers noted using AI to jump-start content, such as:
- Generating responses to disease-specific questions from patients
- Soliciting headline ideas for articles
- Suggesting social media posts for events or awards
- Creating a suite of communications of different lengths, such as a thank-you letter for a donor, a shorter follow-up email, and a brief version to drop into social media posts
- Identifying hashtag suggestions based on content
- Using AI to proofread and summarize. Peers are leveraging AI, part thesaurus and part encyclopedia, to digest and condense text:
- Dropping in scientific text to more deeply understand content
- Soliciting key takeaways from a scientific article before submitting a final draft, to check whether the piece successfully summarizes the science in lay language
- Putting in a general subject and asking for a succinct synopsis
- Using it as a thesaurus to vary word choice
But of course, some concerns.
- AI requires proofreading. Some prompts may generate untrue embellishments, such as random accolades in an AI-generated faculty biography.
- AI has implicit bias. Because AI pulls from existing biased language, its output needs to be thoroughly reviewed for content and subtext.
- Giving AI credit. Do you attribute the AI as a content-generation resource, and could doing so undermine your authenticity, such as in an AI-generated crisis response email?
- Your privacy. Are submissions to AI generators being collected and archived, or connected to those of us submitting them? (We should assume so!)
The bottom line? It’s a mixed bag. AI can be useful for writers: suggesting first-draft language to jump-start a blank page, serving as a thesaurus to paraphrase, summarize, or simplify important scientific concepts, or acting as a reference source to supply facts or statistics that supplement an informational post. Whatever the content, proofreading is even more critical for AI-generated material, and the scrutiny should be commensurate with its impact: summaries of treatment options or scientific concepts require far more review than a hashtag recommendation.
Lastly, we should consider the meta questions: how are our solicitations of AI-generated content being archived? Is AI a credible source to be attributed? The question of AI usage in cancer communications will most certainly be a session at our next conference in Salt Lake City, and it is both exciting and concerning to think of the complexities AI may be presenting us with by then. To that end, I asked ChatGPT to answer the prompt below, and I will leave you to ponder the usefulness of the response.
—————————————————–
Give me 500 words on the benefits and challenges AI content generation may have for digital communications about cancer research and treatment in the next year
By ChatGPT
AI content generation presents both benefits and challenges for digital communications about cancer research and treatment in the next year. Here are 500 words elaborating on these aspects:
Benefits of AI Content Generation:
- Enhanced efficiency: AI content generation can automate the creation of reports, articles, and summaries related to cancer research and treatment. This automation can significantly reduce the time and effort required for content production, enabling researchers and healthcare professionals to focus on other critical tasks.
- Data-driven insights: AI algorithms can analyze vast amounts of cancer-related data, including research papers, clinical trials, and patient records. By extracting key information and generating insights, AI-powered content generation can facilitate evidence-based decision-making, inform treatment protocols, and highlight emerging trends in cancer research.
- Personalized patient education: AI-generated content can be tailored to the specific needs and characteristics of individual patients. By analyzing patient data, such as medical history and genomic information, AI algorithms can create educational materials that provide personalized information about cancer, treatment options, and support resources. This can empower patients to make informed decisions about their healthcare journey.
- Multilingual and accessible communication: AI content generation can help overcome language barriers by automatically translating cancer-related information into different languages. This enables the dissemination of knowledge to diverse populations, making vital cancer research and treatment information accessible to individuals worldwide.
- Rapid dissemination of research findings: AI can accelerate the process of sharing research findings by automatically generating summaries, abstracts, and key takeaways from scientific papers. This enables researchers to disseminate their work more efficiently, enhancing collaboration and facilitating the translation of research into clinical practice.
Challenges of AI Content Generation:
- Quality assurance: While AI can generate content quickly, ensuring its accuracy, relevance, and quality remains a challenge. Human oversight and validation are necessary to review and verify the generated content before it is shared with healthcare professionals, researchers, and patients. Adequate quality control measures need to be in place to maintain high standards.
- Ethical considerations: AI-generated content may raise ethical concerns, such as the ownership of content, plagiarism, and proper attribution. Ensuring that the generated content complies with ethical guidelines, copyright laws, and respects intellectual property rights will be crucial.
- Lack of context and nuance: AI algorithms may struggle with understanding the contextual nuances of cancer research and treatment. They may produce content that lacks the depth, subtlety, and expert judgment that human professionals can provide. Careful curation and human input are necessary to ensure that the generated content accurately reflects the complexities of cancer-related information.
- Patient trust and comprehension: Patients may have concerns about relying solely on AI-generated content for their cancer-related information. Building and maintaining patient trust will require transparent communication about the role of AI in content generation, its limitations, and the involvement of healthcare professionals in the curation process. Additionally, efforts should be made to ensure that AI-generated content is presented in a format that is easily understandable for patients of varying health literacy levels.
- Bias and diversity: AI algorithms can inadvertently perpetuate biases present in the data used to train them. In the context of cancer research and treatment, this can lead to disparities in information access and treatment recommendations. It is essential to address algorithmic bias and ensure diverse representation in the training data to avoid perpetuating existing inequalities.
In conclusion, AI content generation holds great potential for digital communications about cancer research and treatment. It can enhance efficiency, provide data-driven insights, personalize patient education, and facilitate the dissemination of research findings. However, challenges related to quality assurance, ethical considerations, contextual understanding, patient trust, and bias need to be carefully addressed. By leveraging the benefits and proactively managing the challenges, AI content generation can play a valuable role in advancing cancer communications in the next year and beyond.