Lina Khan, chair of the Federal Trade Commission (FTC), has warned that artificial intelligence tools such as ChatGPT could significantly increase consumer risks like fraud and scams. She emphasized that the agency already has the authority under existing law to address these AI-related consumer harms.
During a discussion with House lawmakers, Khan highlighted the potential for AI tools to supercharge fraudulent activity, a risk she described as serious. AI technologies have recently drawn attention for their ability to produce realistic emails, stories, essays, images, audio, and video. While these tools offer new ways to work and create, they also raise concerns about misuse for impersonation and deception.
As federal policymakers debate whether AI needs its own regulations, particularly around algorithmic bias and privacy, the FTC has made clear that companies can still be investigated under existing laws. FTC Commissioner Rebecca Slaughter noted that the agency has always adapted its enforcement to new technologies and will continue to do so.
Commissioner Alvaro Bedoya added that companies cannot avoid accountability by claiming their algorithms are too complex to understand. He reinforced that existing laws on unfair and deceptive practices, civil rights, and credit apply to AI technologies, and companies must comply.
The FTC has previously provided detailed guidance to AI companies and recently received a request to investigate OpenAI over allegations that it misrepresented the capabilities and limitations of ChatGPT.