How can we protect children and teenagers as they explore the online world and gather important experiences on the Internet? How can illegal content on the Internet be effectively combated? We interviewed Dr. Alexander Kleist, Public Policy Lead of Instagram in Germany, Austria and Switzerland, about child safety on Internet platforms like Instagram. He is a keynote speaker at the first pre-summit session of the eco Trust&Safety Summit on 17 August 2021.
How does Instagram as a platform guarantee the protection of children and teenagers, Mr. Kleist?
Kleist: Keeping young people safe on Instagram is one of our most important priorities, and we are continuously working with dedicated teams who are always exploring new tools to protect minors on our platform. This year, we have already launched a number of features to protect teens from unwanted interactions and safeguard their privacy. Recently, we announced that people under 18 will be defaulted into a private account when they join Instagram. We require everyone to be at least 13 to use Instagram, and while many people are honest about their age, we understand that some people do lie about it. To address this challenge, we're focused on developing new artificial intelligence and machine learning technology to help us understand age and apply new age-appropriate features.
This is an important step, but we're going even further to prevent adults from sending messages to people under 18 who don't follow them. For example, when an adult tries to message a teen who doesn't follow them, the adult receives a notification that sending the message isn't allowed.
These features build on our ongoing work — continuing to strengthen our policies around child safety, improve technology to find harmful content, build safety tools to help people control their experience, and work with local experts and organizations.
Why is the cooperation between platforms so important? Do we need international compliance norms and standards?
Kleist: As a company, we take a comprehensive approach to making our platform a better place for everyone. We do this by writing clear policies about what is and isn't allowed on our platform, developing sophisticated technology to detect and prevent abuse, and providing helpful tools and resources for people to control their experience or get help, as well as engaging with over 500 safety partners around the world. Among them are leading Internet safety organizations in our Safety Advisory Board, and our Global Women's Safety Expert Advisors, a group of 12 nonprofit leaders, activists and academic experts who help us develop new policies, products and programmes that better support the women who use our apps.
Beyond our own initiatives, we also collaborate with other companies and engage in cross-industry work to protect kids online. Last year, we joined Google, Microsoft and other tech companies to build Project Protect, a plan to combat online child sexual abuse. Additionally, we made our photo and video-matching technologies open-source, which helps industry partners, developers and non-profits easily identify abusive content and share hashes (digital fingerprints) of different types of harmful content.
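To illustrate the idea behind hash-based matching: a perceptual hash condenses an image into a short bit string, and two images are treated as matches when the number of differing bits (the Hamming distance) between their hashes falls below a threshold. The sketch below is a toy "average hash" for illustration only; it is not Meta's actual open-sourced PDQ or TMK+PQF code, and the pixel values and threshold are invented for the example.

```python
def hamming_distance(h1: int, h2: int) -> int:
    """Number of differing bits between two equal-length hashes."""
    return bin(h1 ^ h2).count("1")

def average_hash(pixels: list[list[int]]) -> int:
    """Toy perceptual hash: each bit is 1 if the pixel exceeds the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

# Hypothetical example: a known harmful image vs. a near-duplicate.
known = average_hash([[10, 200], [30, 220]])
candidate = average_hash([[12, 198], [29, 225]])

MATCH_THRESHOLD = 1  # bits of tolerance; real systems tune this carefully
is_match = hamming_distance(known, candidate) <= MATCH_THRESHOLD
```

Because the hash captures coarse visual structure rather than exact bytes, slightly altered copies of known abusive content can still be flagged without the image itself ever being shared between partners.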
Very strict community guidelines offer strong protection for users. On the other hand, they pose a risk: platforms might delete legal content and opinions as well. How can this conflict be resolved?
Kleist: As a platform, we do see the conflict between freedom of speech and the protection of our users from abuse, hate speech, bullying and harassment. Our Community Standards, which set out what you can and can’t share on Facebook and Instagram, are designed to help us strike that balance.
At the same time, Facebook has stated time and again that private companies should not make far-reaching decisions about content on their own. This is why we have established an independent Oversight Board whose members are experts on different topics from different cultures and backgrounds. The board deals with selected and controversial content decisions on Facebook and Instagram, making binding decisions on content and recommendations that also inform the further development of our guidelines.
Despite all our efforts, it is important to note: in some cases, we make mistakes. Our content review team receives millions of reports a week, and on occasion they remove content in error. When we become aware of errors, we immediately investigate and restore content where warranted.
Thank you for the interview!