Researchers Explore the Promise and Peril of Using AI to Fight Cancer

Artificial intelligence (AI) holds tremendous potential to accelerate advancements across health and medicine, but it also comes with risks if not applied carefully, as evidenced by dueling results from recent cancer treatment studies.

On the one hand, UK-based biotech startup Etcembly just announced that it used generative AI to design a novel immunotherapy that targets hard-to-treat cancers. This represents the first time an immunotherapy candidate has been developed using AI, and Etcembly, a member of Nvidia’s Inception program, created it in just 11 months, twice as fast as conventional methods allow.

Etcembly’s new therapeutic, called ETC-101, is a bispecific T cell engager that targets a protein found in many cancers but not in healthy tissue. It also demonstrates picomolar affinity, making it up to a million times more potent than natural T cell receptors.

The company says it also has a robust pipeline of other immunotherapies for cancer and autoimmune diseases designed by its AI engine, called EMLy.

Image: Etcembly

“Etcembly was born from our desire to bring together two concepts that are ahead of the scientific mainstream—TCRs and generative AI—to design the next generation of immunotherapies,” said CEO Michelle Teng. “I’m excited to take these assets forward so we can make the future of TCR therapeutics a reality and bring transformative treatments to patients.”

Previously, researchers showed that AI could help predict experimental cancer treatment outcomes, improve cancer screening techniques, discover new senolytic drugs, detect signs of Parkinson’s disease, and understand protein interactions to design new compounds.

Dangers of deploying unvalidated AI

On the other hand, significant risks remain. Some individuals are starting to use AI chatbots instead of doctors and therapists, with one person even killing himself after following a chatbot’s harmful advice.

Scientists are also warning that people should not blindly follow AI advice. A new study published in JAMA Oncology suggests that ChatGPT has critical limitations when generating cancer treatment plans, underscoring the risks of deploying AI recommendations clinically without extensive validation.

Researchers at Brigham and Women’s Hospital in Boston found that ChatGPT’s treatment recommendations for various cancer cases contained numerous factual errors and contradictory information.

Out of 104 queries, around one-third of ChatGPT’s responses contained incorrect details.

“All outputs with a recommendation included at least 1 NCCN-concordant treatment, but 35 of 102 (34.3%) of these outputs also recommended 1 or more nonconcordant treatments,” the study found.

Source: JAMA Oncology

Although 98% of the plans included at least some accurate, guideline-concordant treatments, nearly all of them mixed correct and incorrect content.

“We were struck by the degree to which incorrect information was blended with accurate facts, making errors challenging to identify—even for specialists,” said co-author Dr. Danielle Bitterman. 

Specifically, the study found that 12.5% of ChatGPT’s treatment recommendations were completely hallucinated, fabricated by the bot with no factual basis. The AI had particular trouble generating reliable recommendations for localized therapy in advanced cancers and for the appropriate use of immunotherapy drugs.

OpenAI itself cautions that ChatGPT is not meant to provide medical advice or diagnostic services for serious health conditions. Even so, the model’s tendency to respond confidently with contradictory or false information heightens risks if deployed clinically without rigorous validation.

Eating ant poison is an obvious no-no even if a supermarket’s AI chatbot suggests it, but when it comes to complex medical topics and delicate advice, you should still talk to a human.

With careful validation, AI-powered tools could rapidly unlock new lifesaving treatments while avoiding dangerous missteps. But for now, patients are wise to view AI-generated medical advice with a healthy dose of skepticism.
