Snapchat’s parent company Snap recently unveiled an experimental chatbot dubbed “My AI,” initially available to subscribers of its paid Snapchat+ tier. Powered by natural language processing, the bot can converse with users on a wide range of topics. While the feature is currently in limited beta testing with adults, Snap aims to eventually roll it out to teen Snapchat users as well, sparking controversy over child safety risks.
In the UK, Snap’s plans have already drawn backlash from parents, regulators, and youth advocates. Critics argue that AI chatbots should be treated as high-risk when they handle children’s data, pointing to past instances of AI systems exhibiting inappropriate or biased behavior when left unchecked. Letting an unmonitored AI interact freely with minors, they contend, could normalize or encourage such behavior.
The UK’s data protection watchdog, the Information Commissioner’s Office (ICO), has raised concerns about deploying an untested AI chatbot to young users. Advocacy groups such as the 5Rights Foundation have accused Snap of putting growth ahead of child safeguarding, arguing that subjecting children to an imperfect, still-evolving AI could expose them to harmful content and advice.
Beyond content risks, privacy advocates note that the chatbot raises data collection issues. As minors converse with the AI, their chat data may be fed back into Snap’s machine-learning systems to train the bot’s algorithms, without transparency about how that happens. Critics argue Snap needs to detail how children’s conversations would be used beyond the direct interactions themselves.
In response, Snap has emphasized that the chatbot remains in early-stage testing with adults only. The company says safety measures such as profanity blocking are in place and that it is monitoring issues closely during the invite-only beta trial. While Snap believes the AI could eventually improve teens’ Snapchat experience under the right framework, it says no public rollout is imminent.
Conclusion: Striking a Balance on AI Will Be Challenging But Critical
The situation shines a spotlight on the challenges tech companies face around responsible AI development, especially for products aimed at young users. While conversational AI agents offer educational and entertainment potential, the technology remains imperfect and prone to troubling behavior when not closely governed.
Shielding kids from foreseeable AI harms will require close collaboration between tech firms and regulators as public-facing products are built. Balancing consumer AI safeguards against capabilities will likely be an ongoing tightrope act. But with child welfare as the top priority, policies and frameworks can hopefully emerge that let society reap AI’s benefits while minimizing the risks.