As the 2026 midterm elections approach, concerns about the influence of artificial intelligence (AI) on the electoral process are becoming more prominent. While fears about AI’s disruptive potential in the 2024 elections were not fully realized, advances in AI technology have led to renewed scrutiny over its role in campaign communications, election administration, and cybersecurity.
A major topic among lawmakers is the use of AI-generated “deepfakes”—realistic images, videos, or audio clips depicting events that never happened. These deepfakes have been used to spread misinformation about candidates and public officials, though there is little evidence so far that they have significantly influenced actual election outcomes.
In response to these risks, states are enacting regulations on AI use in political communications. Currently, 26 states have laws addressing the issue, up from just five in 2023, and additional legislation is under consideration in New Jersey, Virginia, Maryland, Tennessee, and Vermont. Most state laws require labeling of deceptive AI-generated content rather than banning it outright. Because these measures are so new, their effectiveness remains unclear. Legal challenges are also a concern: a California law prohibiting deceptive deepfakes, for example, was struck down as a violation of free speech rights.
The ongoing expansion of such regulations raises questions about their constitutional validity and practical impact. As campaigns intensify ahead of November 2026, observers will be watching how these rules are enforced and whether new legal challenges emerge.
AI is also being adopted by election officials to improve the efficiency and effectiveness of election administration. Unlike previous cycles, when AI tools were optional add-ons, many now come built into widely used applications. This integration means election offices must develop clear policies for responsible AI use. A key guideline is ensuring human oversight: while AI can draft press releases or training materials for poll workers, final approval should rest with experienced staff.
Organizations like the federal Election Assistance Commission and Arizona State University’s AI & Elections Clinic provide resources to help officials manage these changes responsibly.
Cybersecurity remains another area affected by evolving AI capabilities. During the last presidential cycle, attackers used AI tools to mount an increased number of distributed denial-of-service attacks on election websites, and both major campaigns experienced sophisticated phishing attempts aided by advanced technologies. Although attackers benefit from new tools, so do defenders; cybersecurity professionals are leveraging AI to enhance protections.
Recent shifts at the federal level mean that state and local governments bear greater responsibility for securing election infrastructure—potentially requiring increased funding and attention from those jurisdictions.
Basic cybersecurity practices such as multifactor authentication and careful review of emails continue to be effective defenses against most threats—even as technology evolves.
AI’s growing role across society will shape how elections are run and protected going forward. Flexibility and adaptation by lawmakers and administrators may be essential to realizing AI’s benefits while limiting its harms—without compromising fundamental democratic rights.