FTC chair Lina Khan warns AI could 'turbocharge' fraud and scams

The United States Federal Trade Commission (FTC) cautioned on Tuesday that artificial intelligence (AI) tools such as ChatGPT could intensify consumer harms like fraud and scams. FTC commissioners said the US government has significant authority under existing legislation to address AI-related consumer harms.

FTC Chair Lina Khan told House lawmakers she has serious concerns that these tools could "turbocharge" fraud and scams. A variety of new AI tools have recently drawn attention for their proficiency in generating convincing emails, narratives, essays, and even images, audio, and video. While these tools could transform workflows and creative work, there are concerns that they could be misused to deceive people by impersonating individuals.

While federal policymakers continue to debate whether specific AI rules are needed to address issues such as algorithmic discrimination and privacy, Khan and her fellow commissioners noted that the FTC can already investigate AI-related harms under a range of long-standing statutes.

FTC Commissioner Rebecca Slaughter asserted that the agency has a track record of adapting its enforcement to evolving technology. "The agency remains committed to applying existing tools to these changing technologies, and won't let the novelty of the technology deter us," she said.

Fellow FTC Commissioner Alvaro Bedoya added that companies cannot evade liability simply by claiming their algorithms are a black box. "Our staff consistently maintains that our authority over unfair and deceptive practices applies, as do our civil rights laws, fair credit laws, and the Equal Credit Opportunity Act. Companies will need to follow the law," he said.

The FTC has previously issued detailed public guidance to AI companies. Just last month, the agency received a request to probe OpenAI over allegations that the company behind ChatGPT has misled consumers about the tool's capabilities and limitations.