Artificial intelligence is the fastest-moving technology wave of our time, touching everything from self-driving cars to delivery apps to medical diagnosis. But with great power comes great responsibility, and concerns about AI safety, security, and fairness are now front and center. The legislative road to AI safety in America has not been easy, and now Donald Trump has made a grand return for a second term in office as of early 2026.
Trump focuses on speed and growth. He wants America to dominate its rivals, chiefly China, in AI, with less bureaucracy and more room for corporations to innovate. But at what cost to safety? Will there be enough guardrails against misuse, from deepfakes to biased algorithms? This article explores the prospects for AI regulation under Trump's policy approach, reviewing how tech lobbying shapes decisions, the push toward a single federal standard, and the challenges ahead. It aims to do so in plain English, because getting the balance right between innovation and safety matters for America's future.
Much is at stake. Get the rules right, and AI could add trillions to the economy. Get them wrong, and it could wipe out jobs, erode privacy, and even endanger national security. The Trump team wants a "light touch" on AI regulation, but the states are ready to push back. Here's how.
The Present Scene of AI Rules
From Biden-Era Guards to a New Age
Before Trump took office in January 2025, the Biden administration had taken cautious first steps toward AI regulation. In 2023, Biden signed an executive order requiring safety reviews for high-risk AI systems, measures to prevent bias in hiring algorithms, and plans to guard against cyberattacks. Enforcement ran through agencies such as the Federal Trade Commission. Put simply, AI should work for people, not against them.
Trump's second term flips the script. On his first full day in office, he reversed Biden's order, throwing the doors open for faster AI development. Trump's tech policy is "America First": in this view, heavy regulation of artificial intelligence holds American companies down. Instead of strong federal control, the plan relies on voluntary standards set by industry leaders.
This has sparked a debate. Proponents say it frees up innovation. Critics warn it leaves enormous holes in the safety net: without stringent federal control, for example, nothing addresses the free-for-all spread of AI-generated misinformation. As of January 2026, no major federal AI safety bill has passed Congress. Executive actions are shifting things around the edges, however, and states are stepping into the gap with their own laws, as California and New York have already done with rules on AI transparency and bias checks.
This shift shows how politics shapes technology. Under the Trump administration, AI governance centers on competition. But as more of daily life is run by smart tools, finding the right balance grows ever more important.
The policy, in short: innovate now, regulate later.
In January 2025, Trump signed Executive Order 14179, aptly titled "Removing Barriers to American Leadership in Artificial Intelligence." It tossed Biden's safety mandates aside and called instead for billions in AI investment, all structured so that US firms could beat China in the global race.
Fast-forward to December 2025. President Trump signed another big order, "Ensuring a National Policy Framework for Artificial Intelligence," which takes direct aim at state-level AI rules. Its argument: a different rulebook in every state slows things down. The order establishes a single federal standard to cut the confusion and creates an AI Litigation Task Force inside the Justice Department to challenge state laws that conflict with national goals. If a state bars certain uses of AI in hiring, for example, the task force could sue to stop it.
This is the heart of Trump policy: deregulate to allow growth. The framework does nod toward safety, including basic protections such as shielding children from harmful AI content, but experts call it too loose, because without enforcement mechanisms a company could simply ignore risks such as data-privacy breaches.
Trump picked David Sacks as his AI czar in early 2025. Sacks, a major tech investor, pushes for light rules; too much regulation, he argues, kills jobs and innovation. Under him, "America's AI Action Plan" came out in July 2025. The plan has three parts: build up infrastructure such as data centers, train workers for the jobs AI will create, and engage globally without heavy federal oversight.
The plan promises $100 billion in federal funding for AI research and calls for global agreements on the safe sharing of technology. Safety is not at the top of the list, though: it speaks of voluntary audits for AI risk, not compulsory ones. That fits the mood of Trump policy: let businesses self-regulate.
By 2026, a few lawmakers are already drafting a bill that would turn these ideas into law, essentially making federal preemption the default approach. For AI safety legislation, that likely means deferring the bigger question of algorithmic fairness while moving fast on issues where quick wins are possible, such as rules on AI in defense.
Tech has never had a better lobby.
In 2025, major firms spent over $1 billion making sure any attempt at regulating AI favored growth over restraint. Under Trump's rules, their voices carry weight. Leaders like Elon Musk and Sam Altman are in regular talks with White House staff to keep federal oversight as loose as possible and to block interference by the states.
Why all the push? AI is a treasure trove. Firms like OpenAI and Google expect to unlock trillions in value, and strict AI regulation would cap their speed. So they lobby for "innovation-friendly" laws. Meta and Amazon, for example, also pushed for the December 2025 executive order, which shields them from a patchwork of state rules on data use.
- Elon Musk (xAI and Tesla): Has poured millions into pro-Trump efforts. He backs deregulation to speed up AI in cars and robots, and warns that heavy federal oversight would hand the lead to China.
- Sam Altman (OpenAI): Was first in line to praise Trump and to lobby for rules that let AI firms largely set their own terms. His company argues that self-reporting on safety is enough, with no need for hard checks.
- Sundar Pichai (Alphabet/Google): Alphabet spent $50 million on tech lobbying. It wants unified federal oversight to ease compliance across states.
- Jensen Huang (Nvidia): The chip giant pushes for export controls on AI technology abroad but light rules at home. Nvidia's influence secured tax breaks in the AI Action Plan.
- Mark Zuckerberg (Meta): Meta lobbied for child-safety carve-outs and exceptions in AI regulation that preserve its ad targeting.
These efforts show how tech lobbying will shape policy under Trump's administration. Firms frame it as patriotism: strong AI for a strong America. Critics call it a dodge that sidesteps risks to the general public. That influence will likely only grow heading into the 2026 midterms.
The grand war is between federal oversight and state power.
The Trump administration argues for one consolidated national plan for AI regulation. States, led by California, argue that local interests take precedence. This tension may define the 2026 legislative season.
Federal oversight guarantees uniformity: one framework means the same rules for firms everywhere, which speeds implementation and cuts costs. But states counter that it ignores regional concerns, such as how AI affects employment in rural areas.
| Aspect | Federal Oversight (Trump Policy) | State Initiatives (e.g., CA, NY) |
| --- | --- | --- |
| Focus | Innovation and global competition | Safety, bias, and local protections |
| Examples | National AI Action Plan; Litigation Task Force | CA's 2026 transparency laws; NY's RAISE Act |
| Strengths | Uniform rules; faster growth | Tailored to communities; quick action |
| Weaknesses | May overlook local risks; slower on safety | Creates patchwork; confuses businesses |
| Enforcement | DOJ task force; voluntary guidelines | State attorneys general; mandatory audits |
| 2026 Outlook | Push for preemption bill in Congress | 38 states planning new AI rules despite pushback |
The table shows the divide. Under Trump policy, the federal government can preempt states by threatening to withhold funds or by suing them. California, home to many tech companies, plans to fight back: its new rules require AI makers to disclose safety risks, and a court battle could drag on for years.
Tech lobbying magnifies the conflict. Big firms crave federal rules because one set of laws is easier to handle. For AI safety law, that likely means the federal government wins on speed while the states win on details like privacy.
AI Safety Challenges and Opportunities
Even with the Trump policy push, the obstacles on the path to AI safety are significant. Here's what stands out:
- Misinformation Risks: With less regulation, deepfakes could do massive harm to elections and public trust.
- Job Displacement: Rapid AI adoption could displace millions of workers without retraining support.
- Security Gaps: Without strong federal oversight, pathways remain open for hacking and even foreign theft of AI technology.
- Bias and Fairness: Without tight regulation, artificial intelligence could further entrench existing inequalities in areas such as lending and policing.
- Global Pressure: China runs fast; the US has to run faster but safer.
On the bright side, opportunities abound. Trump’s plan funds ethical AI research. States’ experiments could inspire federal tweaks. Tech lobbying, while powerful, includes calls for child safety nets. By mid-2026, a bipartisan bill might emerge, blending federal oversight with targeted AI regulation.
Conclusion
Trump offers a bold vision for AI in his second term: lead the world through freedom, not fear. From the executive orders to the AI Action Plan, growth under a single federal framework sits at the heart of his policies. Under pressure from tech lobbying, big firms get protection while real AI regulation stalls. As states resist and the dangers grow, smart safety laws look more essential than ever.