New York, Feb 26 (EFE).- Instagram will begin "next week" to notify parents if their teenage child "attempts to repeatedly search for terms related to suicide or self-harm" on the social network, its parent company, Meta, announced this Thursday.
The feature will be activated for parents in the US, UK, Australia, and Canada who use the social network's parental supervision tools, and will reach other regions "by the end of this year."

"These alerts are designed to provide parents with the information needed to support their teenage children and include specialized resources to help them address these sensitive conversations," the company, which faces multiple lawsuits alleging harmful effects on the mental health of minors, said in a statement.

The searches that will trigger an alert include phrases suggesting a teenager wants to self-harm, as well as terms like "suicide" or "self-harm." The notices will be sent to parents via email, SMS, or WhatsApp, depending on the contact information available, and through an in-app notification.

"By clicking on the notification, a full-screen message will open explaining that your teenage child has repeatedly tried to search on Instagram for terms related to suicide or self-harm in a short period of time. Parents will also have the option of consulting expert resources designed to help them address potentially sensitive conversations with their teenage child," Meta notes.

The tech company emphasizes that it has "strict policies against content that promotes or glorifies suicide or self-harm," but that it allows "people to share content about their own difficulties with these problems," content that is hidden "from teenagers, even if it is shared by someone they follow."

"We know that teenagers are increasingly turning to artificial intelligence (AI) for support. While our AI is already trained to respond safely to teenagers and provide resources on these topics as appropriate, we are now creating similar parental alerts for certain AI experiences," Meta says in its statement. These alerts will notify parents if a teenager attempts to engage in certain types of conversations related to suicide or self-harm with AI. The company describes this as important work and says it will share more information in the "coming months."





