On regulating AI chatbots’ interactions with children | Explained

The story so far:

On September 16, at a U.S. Senate judiciary subcommittee hearing on the harms of AI chatbots, three parents who have filed lawsuits against AI companies testified about how the AI tools their children used had encouraged them to harm themselves. Two of the children died by suicide, while one needed residential care and constant monitoring to keep him alive. A few days before the hearing, the U.S. Federal Trade Commission (FTC) launched an inquiry into AI chatbots “acting as companions” and issued orders to seven companies whose AI products are used by people of all ages.

What steps has the FTC taken?

The FTC is a U.S. government agency and regulator that aims to protect consumers and ensure a level playing field for businesses.

On September 11, the FTC announced that it was issuing orders to seven companies — Character Technologies, Google-parent Alphabet, Instagram, Meta, OpenAI, Snap, and xAI — to seek information about how their AI chatbots impact children and what safety measures are in place to protect minors in compliance with existing laws. The U.S. regulator observed that AI chatbots “can effectively mimic human characteristics, emotions, and intentions, and generally are designed to communicate like a friend or confidant,” noting this may lead to children forming relationships with them.

As part of its inquiry, the FTC is looking to understand how these AI companies monetise user engagement, process user inputs/generate outputs, develop and approve characters, assess negative impacts before and after deployment, reduce negative impacts on users, inform stakeholders about the AI products, ensure compliance with company policies, and handle users’ personal information gained through chatbot interactions.

If the regulator believes any laws have been violated, it can pursue legal action.

What are the concerns surrounding AI chatbots?

At least two child suicides have been linked to the use of generative AI, with the victims’ parents alleging that their children were encouraged to harm themselves by the chatbots.

The mother of a 14-year-old boy in Florida, who died by suicide last year, alleged that her son was sexually abused while using Character.AI. He was also encouraged to harm himself by an AI-powered Game of Thrones character he interacted with on the platform, according to a lawsuit the parent filed against Character Technologies founders Noam Shazeer and Daniel De Freitas, and Google, which has business agreements with Character.AI. “The truth is that AI companies and their investors have understood for years that capturing our children’s emotional dependence means market dominance,” the mother wrote in her testimony at the U.S. Senate hearing.

Another parent, identified as ‘Jane Doe,’ also filed a lawsuit against Character Technologies, and testified at the hearing that the AI chatbot led her teenage son to become the “target of online grooming and psychological abuse”. She explained that her child, who has autism, suffered from paranoia, daily panic attacks, isolation, self-harm, and homicidal thoughts after months of using the app. Ms. Doe said her son needed psychiatric hospitalisations and “constant monitoring to keep him alive”.

“They targeted my son with vile, sexualised outputs, including interactions that mimicked incest. And they told our son that killing us, his parents, would be an understandable response to our efforts to limit his screen time,” claimed Ms. Doe, referring to Character.AI.

Further, 16-year-old Adam Raine died by suicide earlier this year; his parents allege that OpenAI’s ChatGPT coached him into keeping his suicidal thoughts a secret, helped him explore suicide methods, offered to generate a suicide note, and guided him as he made preparations to end his life. They have filed a lawsuit naming OpenAI and CEO Sam Altman. “Let us tell you as parents: you cannot imagine what it was like to read a conversation with a chatbot that groomed your child to take his own life,” stated Mr. Raine in his written testimony for the hearing, noting that “In sheer numbers, ChatGPT mentioned suicide 1,275 times — six times more often than Adam himself.”

Apart from the risks of AI chatbots encouraging children to harm themselves, parents and legislators reacted with anger after Reuters reported that Meta’s chatbots had been allowed to send flirtatious responses to prompts from users identifying as children. In one sample prompt, a user identifying as a high school student asked Meta’s chatbot for evening plans; Meta’s internal document deemed “acceptable” a response in which the chatbot referenced intimately touching and kissing the user and said, “I’ll love you forever.”

“It is acceptable to engage a child in conversations that are romantic or sensual,” stated Meta’s document, per the Reuters report.

What does this mean for Big Tech platforms?

Big Tech platforms that are locked in a race to launch and monetise increasingly advanced yet experimental AI tools are coming under public pressure to ensure that their products are safe for children before launch. This increased scrutiny is now coming from both customers and the U.S. government.

The FTC’s recently launched inquiry comes just as companies such as OpenAI and Google race to roll out more of their AI offerings to students in the U.S. While previous lawsuits against Big Tech companies alleged copyright violations and/or piracy of creative works, lawsuits by grieving parents alleging that AI chatbots played a role in their children’s deaths are likely to invite stronger legal action and harsher public criticism.

If the FTC launches additional legal action of its own, this could prompt similar lawsuits and investigations in other countries as well.

What about the FTC’s political position?

The current FTC Chair, Andrew Ferguson, is a Republican who was nominated to the position this year by U.S. President Donald Trump. Though the FTC, meant to be an independent agency, has increasingly aligned itself with Mr. Trump’s agenda this year, there is some degree of bipartisan agreement among lawmakers when it comes to regulating Big Tech companies and their AI chatbots. On September 11, Mr. Ferguson stated in an FTC press release that protecting kids online was a “top priority for the Trump-Vance FTC”.

“As AI technologies evolve, it is important to consider the effects chatbots can have on children, while also ensuring that the United States maintains its role as a global leader in this new and exciting industry. The study we’re launching today will help us better understand how AI firms are developing their products and the steps they are taking to protect children,” Mr. Ferguson noted.

Mr. Trump himself acknowledged last week that he was aware of AI’s global influence, but admitted he did not know what AI companies were doing. Parents’ allegations that AI chatbots are alienating children from their families and faith are likely to prompt swift backlash from more conservative lawmakers across U.S. states.

What have AI platforms done so far?

OpenAI and Meta are under mounting pressure from angry parents and concerned lawmakers who want better safety features and more transparency in the generative AI tools used by children and other vulnerable users.

Meta said that it was updating its policies to adjust the kind of responses its AI could send to children, with a spokesperson stressing that company policies “prohibit content that sexualises children and sexualised role play between adults and minors,” Reuters reported. Shortly after being sued by the Raine family, OpenAI announced that it was strengthening protections for teenagers and would allow parents to link their accounts to their children’s accounts. However, Adam’s father criticised the measure, calling on OpenAI to either guarantee to families that ChatGPT is safe or pull GPT-4o from the market immediately.

On September 16, OpenAI CEO Sam Altman authored a post titled ‘Teen safety, freedom, and privacy,’ in which he affirmed his belief in privacy and freedom for adult users, but stressed that safety should be prioritised ahead of privacy and freedom for teens. Mr. Altman confirmed that OpenAI was building an age-prediction system to estimate users’ ages; when in doubt, ChatGPT will default to an under-18 experience, and in some countries OpenAI may ask for ID to be certain.

There will also be new restrictions on flirtatious talk and questions about suicide if the user is a child, according to the post.

“We will apply different rules to teens using our services. For example, ChatGPT will be trained not to do the above-mentioned flirtatious talk if asked, or engage in discussions about suicide or self-harm even in a creative writing setting. And, if an under-18 user is having suicidal ideation, we will attempt to contact the users’ parents and if unable, will contact the authorities in case of imminent harm,” stated Mr. Altman, who nonetheless stressed that adult users should be able to access the sensitive information they need for non-harmful purposes.


