Online data privacy has always been a contentious issue, but generative AI has turned it into a minefield. As AI becomes more sophisticated, citizens will want to know just how much of their data is available as a training resource. Providing those answers will require private companies and government agencies to cooperate on guidelines as the nascent technology develops. However, expecting AI to handle private data responsibly might be premature, considering that humans themselves often fall short on that count.
Three experts weighed in on this topic at the Techonomy 23: The Promise and Peril of AI conference in Orlando, Florida. Cosmin Andriescu (cofounder of Lumenova AI), Nia Castelly (cofounder of Checks by Google), and Bryan McGowan (U.S. trusted AI lead at KPMG) focused mostly on the ethical growth of AI and the role governments might play in regulating it. Analyzing private user data, for better or worse, is one way AI language models can grow.
Castelly’s platform, Checks, uses AI to help ensure data privacy compliance in the Google Play Store. “I think it’s about information,” she said. “If the consumer knows how you’re going to use the technology, and how it’s going to benefit them, then that’s an informed decision, and an easy tradeoff.”