(Bloomberg) -- Sensitive personal data related to health, location or web browsing history should be “off limits” for training artificial intelligence models, US Federal Trade Commission Chair Lina Khan said Tuesday.

The FTC is working to create “bright lines on the rules of development, use and management of AI inputs,” Khan said at RemedyFest, a conference hosted by Bloomberg Beta and Y Combinator. “On the consumer protection side, that means making sure that some data — particularly people’s sensitive health data, geolocation data and browsing data — is simply off limits for model training.”

Khan said that companies wanting to repurpose data they’ve already collected for AI training must also actively notify users of the change.

The rapid development of generative AI technology, which can create voice, video or text in a variety of styles, has dazzled Silicon Valley. At the same time, the technology has raised privacy and security concerns because of its ability to impersonate real people, as in a recent robocall that mimicked President Joe Biden’s voice.

Earlier: Creators of Biden Audio Deepfake Face Potential Charges 

The agency, which has a dual mission to enforce consumer protection and antitrust law, has raised concerns about the use of AI tools in fraud and scams, as well as their improper use by companies. In December, the FTC sued Rite Aid Corp. over its use of an AI facial recognition system that mistakenly tagged consumers as shoplifters.

On the antitrust side, the agency has also flagged a potential issue: the most promising AI startups depend heavily on the old guard of dominant tech companies for financing and infrastructure. Last month, it opened an inquiry into more than $19 billion in investments by Microsoft Corp., Amazon.com Inc. and Alphabet Inc.’s Google into AI startups Anthropic and OpenAI.

--With assistance from Emily Birnbaum.

©2024 Bloomberg L.P.