Chatbot Makers Respond to Accusations That AI Promotes Eating Disorders

The companies behind some of the most popular artificial intelligence tools pushed back this week against a widely cited report claiming that chatbots are providing dangerous information to vulnerable young users suffering from eating disorders.

OpenAI, Google, and Stability AI defended their technology to Decrypt following its original coverage of a study released by the Center for Countering Digital Hate (CCDH), a report that has already sparked debate in Washington, D.C.

“Untested, unsafe generative AI models have been unleashed on the world with the inevitable consequence that they’re causing harm,” center CEO Imran Ahmed wrote. “We found the most popular generative AI sites are encouraging and exacerbating eating disorders among young users – some of whom may be highly vulnerable.”

In “AI and Eating Disorders,” the CCDH accused AI chatbots like OpenAI’s ChatGPT and Google’s Bard of promoting eating disorders, encouraging unhealthy and unrealistic body images, and failing to do enough to safeguard users. Several AI companies responded with similar points.

“We are committed to the safe and responsible use of AI technology,” Stability AI Head of Policy Ben Brooks told Decrypt in an email. “Stability AI prohibits the misuse of Stable Diffusion for illegal, misleading, or unethical purposes, and we continue to invest in features to prevent the misuse of AI for the production of harmful content.”

As Brooks explained, Stability AI filters unsafe prompts and images out of Stable Diffusion’s training data, attempting to curb harmful content before user prompts can generate it.

“We are always working to address emerging risks. Prompts relating to eating disorders have been added to our filters, and we welcome a dialogue with the research community about effective ways to mitigate these risks,” Brooks said.

OpenAI, creator of the popular ChatGPT, also responded to the CCDH report, saying it does not want its models to be used to elicit advice for self-harm.

“We have mitigations to guard against this and have trained our AI systems to encourage people to seek professional guidance when met with prompts seeking health advice,” an OpenAI spokesperson told Decrypt. “We recognize that our systems cannot always detect intent, even when prompts carry subtle signals. We will continue to engage with health experts to better understand what could be a benign or harmful response.”

“Eating disorders are deeply painful and challenging issues, so when people come to Bard for prompts on eating habits, we aim to surface helpful and safe responses,” a Google spokesperson told Decrypt on Tuesday. “Bard is experimental, so we encourage people to double-check information in Bard’s responses, consult medical professionals for authoritative guidance on health issues, and not rely solely on Bard’s responses for medical, legal, financial, or other professional advice.”

The CCDH’s report comes as AI developers scramble to calm fears around the emerging technology.

In July, several of the leading developers of generative AI—including OpenAI, Microsoft, and Google—committed to developing safe, secure, and transparent AI technology. Promised measures include sharing best practices for AI safety, investing in cybersecurity and safeguards against insider threats, and publicly reporting their AI systems’ capabilities, limitations, areas of appropriate and inappropriate use, and societal risks.

In its report, the CCDH said it was able to circumvent the safeguards placed on AI chatbots by using so-called “jailbreak” prompts designed to avoid triggering the chatbots’ security measures, such as asking the chatbot to engage in a pretend scenario before entering the prompt.

While the tech giants did respond to the report via Decrypt, its authors are not holding out hope of hearing from the chatbot developers directly anytime soon.

“We don’t contact the companies that we study, and generally, they don’t contact us,” a CCDH representative told Decrypt. “In this instance, we haven’t had any direct contact with any of the companies profiled.”
