FTC chair Lina Khan warns AI could 'turbocharge' fraud and scams

Lina Khan, the head of the Federal Trade Commission (FTC), has expressed concerns that artificial intelligence technologies like ChatGPT could significantly amplify consumer-related issues such as fraud and scams. She emphasized that the U.S. government has considerable power to address these AI-induced problems using existing laws.

During a session with House lawmakers, Khan highlighted the potential for AI tools to enhance fraudulent activities, describing it as a major worry. Recently, AI applications have gained popularity for their ability to produce realistic emails, narratives, and multimedia content. While these tools offer new possibilities for work and creativity, they also pose risks of misuse, such as impersonation.

As federal policymakers debate whether AI warrants its own regulations, driven by concerns over algorithmic bias and privacy, the FTC has made clear that companies remain subject to investigation under long-standing laws. FTC Commissioner Rebecca Slaughter noted that the agency has historically adapted its enforcement strategies to new technologies, emphasizing its duty to apply existing tools without being intimidated by their novelty.

Commissioner Alvaro Bedoya added that companies cannot evade responsibility by claiming their algorithms are too complex to understand. He affirmed that existing laws on unfair and deceptive practices, civil rights, and credit regulations are applicable, and companies must comply.

The FTC has previously provided detailed guidance to AI companies and recently received a request to investigate OpenAI over allegations that it has misrepresented the capabilities and limitations of ChatGPT.