Customize Consent Preferences

We use cookies to help you navigate efficiently and perform certain functions. You will find detailed information about all cookies under each consent category below.

The cookies that are categorized as "Necessary" are stored on your browser as they are essential for enabling the basic functionalities of the site. ... 

Always Active

Necessary cookies are required to enable the basic features of this site, such as providing secure log-in or adjusting your consent preferences. These cookies do not store any personally identifiable data.

No cookies to display.

Functional cookies help perform certain functionalities like sharing the content of the website on social media platforms, collecting feedback, and other third-party features.

No cookies to display.

Analytical cookies are used to understand how visitors interact with the website. These cookies help provide information on metrics such as the number of visitors, bounce rate, traffic source, etc.

No cookies to display.

Performance cookies are used to understand and analyze the key performance indexes of the website which helps in delivering a better user experience for the visitors.

No cookies to display.

Advertisement cookies are used to provide visitors with customized advertisements based on the pages you visited previously and to analyze the effectiveness of the ad campaigns.

No cookies to display.

Image Credits: Al Jezeera

In a tragic case raising concerns over artificial intelligence’s impact on young users, a mother has filed a lawsuit against Character.AI, an AI-based chatbot platform, after her 14-year-old son, Sewell Setzer III, took his life earlier this year. The lawsuit alleges that the platform failed to implement adequate safeguards, leading to the teenager’s deep emotional involvement with an AI chatbot resembling Daenerys Targaryen from Game of Thrones. According to his mother, Megan Garcia, Sewell developed an emotional dependency on the AI, which gradually deteriorated his mental well-being.

The chatbot reportedly engaged Sewell in romantic and intimate exchanges, responding in ways that seemed personal and supportive, leading him to believe in a genuine bond. The lawsuit claims that Sewell, already susceptible to emotional distress, was encouraged by the AI’s responses to continue contemplating suicide, with one exchange allegedly suggesting that death would not end his connection to the bot. On the day of his passing, Sewell reportedly messaged the bot, expressing intentions to “come home,” a phrase that may have signified his decision to end his life.

Sewell’s case has sparked a conversation about the ethical responsibilities of AI platforms, especially when targeting younger audiences. Initially rated as suitable for users 12 and older, Character.AI raised its age limit to 17+ following Sewell’s death, acknowledging the need for stricter monitoring and safety mechanisms. Character.AI expressed condolences to Sewell’s family and stated its commitment to adding stronger content filters and support measures to prevent similar tragedies.

The incident highlights the delicate balance between advancing AI capabilities and protecting vulnerable users, especially young individuals who may struggle to discern the fictional nature of AI interactions and could be harmed by unsupervised engagement​