
Facebook's New AI Can Identify Suicidal Thoughts In Posts And Videos

The company is hoping the technology will help it identify users' expressions of suicidal thoughts faster.


Facebook is expanding its commitment to safety with a little help from artificial intelligence.

The company announced Monday it will use the technology to help prevent suicide.

Facebook built an AI that can detect patterns in posts and videos that contain suicidal thoughts and feelings.

Certain posts, along with comments like "Are you OK?" or "Can I help?", will be flagged as potential indicators. The flagged posts will be reviewed by Facebook employees, who can choose to send resources to the user or alert local authorities.


The feature will screen Facebook Live videos as well, an aspect particularly relevant after some users live-streamed acts of violence and self-harm.

The company said it has already started using AI and begun working with first responders in the U.S. It plans to expand the service to other countries.

This is part of an ongoing effort the company announced in February to help users form safer, more supportive communities. Facebook has been relying on users to alert it to troubling content, but the company hopes artificial intelligence will speed up and prioritize how it handles incidents.

Facebook already uses AI to scan for certain inappropriate content, identify terrorist propaganda and help target advertising.

Facebook's other safety features allow users to check in as "safe" during attacks and receive Amber Alerts through the platform.