As regulators around the globe start to tackle AI, Big Tech is joining the call to regulate it. Regulators have so far focused on issues such as online privacy and disinformation. AI works through repeated training: continual ingestion of massive quantities of data helps it "learn" and become more capable. That ongoing ingestion, however, creates the risk of bad data entering as input, emerging as output, or both.
Why It Matters
Content regulation is a sticky subject, and AI is likely to prove a challenge all its own. It is highly unusual for large tech companies like Google, many of which are working to commercialize AI, to call for its regulation, and it is a sign of both the power and the potential for harm that unfettered AI could possess.