Cadastre-se agora para um orçamento mais personalizado!

NOTÍCIAS QUENTES

How the Trump administration changed AI: A timeline

Jul, 22, 2025 Hi-network.com
white house
Douglas Rissing/Getty Images

The year is more than halfway over, and it's already been a full one for AI. Since President Donald Trump took office in January, the country and the industry have awaited a US AI policy -- including what, if any, regulation it will bring to the technology.

As Trump indicated in his Jan. 23 executive order on AI, the administration will release that policy, called the AI Action Plan, on Wednesday, July 23 alongside rumored additional executive orders. The president will give a speech at a summit hosted by media organizations including The Hill and Valley Forum and the All-In podcast, which will also feature leaders from tech companies including Hadrian, Palantir and YCombinator.

The administration's record thus far has emphasized progress and investment with little regard for safety or responsibility. Insider reporting by Axios last week confirmed that the forthcoming 20-page plan focuses on "promoting innovation, reducing regulatory burdens and overhauling permitting," avoiding highly contested topics like copyright in training data. 

Here are all the actions it has taken on AI so far and what they indicate about Wednesday's announcement.

Trump reverses Biden's AI executive order

Jan. 23

On his first day of his second term, Trump overturned President Biden's executive order on AI, signed in October 2023. Shortly after, Trump released his own executive order, which indicated very little policy?wise -- only that the US must "sustain and enhance America's global AI dominance in order to promote human flourishing, economic competitiveness, and national security."

Also: How much energy does AI really use? The answer is surprising - and a little complicated

Unlike Biden's order, relevant terms including "safety," "consumer," "data," and "privacy" did not appear in the document, setting a tone for the initiatives that have since followed. The administration has framed safety as antithetical to progress in AI, focusing on removing what the White House called "unnecessarily burdensome requirements for companies developing and deploying AI."

The rollback communicated "the Trump administration's willingness to overlook the potential dangers of AI," Peter Slattery, a researcher on MIT's FutureTech team who led its Risk Repository project, told at the time. "This could prove to be shortsighted: a high?profile failure -- what we might call a 'Chernobyl moment' -- could spark a crisis of public confidence, slowing the progress that the administration hopes to accelerate."

That same week, the administration launched Project Stargate, a data center investment initiative in partnership with OpenAI and several foreign investors. The partnership is intended to execute Trump's desire to expand AI infrastructure in the US.

(Disclosure: Ziff Davis, 's parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

Meanwhile, several companies including Anthropic stealthily adjusted their safety language and commitments to reflect the new administration's priorities.

Show more

Request for AI policy feedback

Feb. 6 to March 15

In February, the Trump administration opened a public comment period via the National Science Foundation on what should be its AI policy. This allowed everyone from citizens to corporations to submit feedback on the developing policy and share their concerns.

Also: How these proposed standards aim to tame our AI wild west

Many of the major AI players published statements during this time. OpenAI released an advisory arguing for less copyright enforcement and federal?only regulation to avoid state?by?state adaptations, while Anthropic urged national testing requirements. 

On April 24, the White House reported that more than 10,000 comments had been submitted.

Show more

Trump cuts AI research staff and grant funding

March 3

The US AI Safety Institute (US AISI) was established by former President Biden's executive order. In March, the Department of Government Efficiency (DOGE) wittled down staff at US AISI and the NSF, targeting several AI researchers.

The cuts also impacted grants administered to colleges and universities, alarming experts about the future of the US AI talent pipeline -- an impact that directly contradicts Trump's objective to make the US the unequivocal leader in AI. The Trump administration made the cuts despite having geared the National Institute of Standards and Technology (NIST) -- which houses US AISI -- to focus on the then?emerging technology during Trump's first term.

Show more

Worker and student AI upskilling executive orders

April 23

In the absence of meaningful regulation preserving worker protections in the face of AI, private companies are increasingly offering AI upskilling courses to help professionals stay competitive.

Also: Microsoft is saving millions with AI and laying off thousands - where do we go from here?

In late April, Trump joined this effort with two executive orders: one targeting worker upskilling with apprenticeships -- including those focused on AI -- and another focused on AI in education. The latter urged "educators, industry leaders, and employers who rely on an AI?skilled workforce" to "partner to create educational programs that equip students with essential AI skills and competencies across all learning pathways."

The order established a task force and set a deadline for it to announce public?private partnerships. "While AI education in kindergarten through 12th grade is critical, our nation must also make resources available for lifelong learners to develop new skills for a changing workforce," the order continued, referencing professional skills resources of which there are increasingly many.

Also: The great AI skills disconnect - and how to fix it

At the same time, researchers found that the Department of Government Efficiency had cut several education grants and studies dedicated to ramping up AI in education, according to the Hechinger Report.

Dana? Metaxa, a professor working on an AI literacy initiative for students, said as much in a post on Bluesky. "There is something especially offensive about this EO from April 23 about the need for AI education... Given the termination of my grant on exactly this topic on April 26," they wrote.

It's unclear what criteria determined why some grants were cut despite being ostensibly in line with Trump's priorities. Elsewhere, private companies and research initiatives are progressing AI efforts in schools.

Show more

US AI Safety Institute changes over

June 3

Shortly after Trump reversed Biden's order on his first day in office, the former head of the US AISI, Elizabeth Kelly -- now overseeing Beneficial Deployment at Anthropic -- stepped down in late February. The change appeared due to Trump's dismissal of anything Biden?related and AI safety and responsibility efforts.

The Trump administration notably did not invite members of AISI to join Vice President JD Vance at France's AI Action Summit in February, where he advocated for doing away with safety precautions to the international community.

On June 3, the US Department of Commerce announced that the AISI would become the "pro?innovation, pro?science US Center for AI Standards and Innovation (CAISI)." The release stated that the center would function as the AI industry's primary point of government contact -- much like it did under its previous name, but with a slightly different outlook that appears primarily semantic.

Also: What 'OpenAI for Government' means for US AI policy

"For far too long, censorship and regulations have been used under the guise of national security. Innovators will no longer be limited by these standards," Secretary of Commerce Howard Lutnick wrote in the release. "CAISI will evaluate and enhance US innovation of these rapidly developing commercial AI systems while ensuring they remain secure to our national security standards."

CAISI will develop model standards, conduct testing, and "represent US interests internationally to guard against burdensome and unnecessary regulation of American technologies by foreign governments," the release clarifies. There is no mention of creating a culture of model red?teaming reporting, for example, or requiring companies to publish the results of certain deployment tests, which some local laws like New York's RAISE Act address as safety requirements.

Safety may not be a top priority for policy, which leaves the AI community to check on itself. Just last week, researchers from several major AI companies came together to advocate for the preservation of chain of thought (CoT) monitoring, or the process of observing a reasoning model's CoT response as a way to catch harmful intentions and other safety issues.

While it's encouraging to see AI companies agree on a safety measure, it's not the same as government?enforced regulation. For example, an AI policy organized around advancing progress with adequate safety and civil rights protections in place could adapt that recommendation into a requirement for companies releasing new models.

Show more

US government leaks AI plans

June 10

Multiple reports confirmed that, in early June, evidence leaked via GitHub suggested the government had some kind of AI?for?government initiative in the works, complete with a mocked?up website. 

Also: Are AI subscriptions worth it? Most people don't seem to think so, according to this study

While the finding indicated little in terms of big?picture policy, it demonstrated the administration's effort to prompt AI adoption across multiple branches and agencies. That goal is consistent with what several advisers to Trump, including Elon Musk, have pushed.

Show more

Senate debates state power over AI

June into July

In the lead?up to the recently passed "big, beautiful" tax bill, Congress added and ultimately removed a rule that would restrict AI legislation at the state level for a period of five to 10 years -- at one point withholding broadband funding from states as collateral. The move, though eventually axed, indicated a serious effort on the part of Republicans to concentrate AI regulation at the federal level, which OpenAI had explicitly called for in its March policy advisory.

Also: I found 5 AI content detectors that can correctly identify AI text 100% of the time

Going into Wednesday, states retain the ability to pass AI legislation for now. It's unclear whether Trump plans to restrict that in his official policy or whether another limiting effort on the part of AI companies or Republicans will surface.

Show more

Department of Defense contracts

July 14

Earlier this month, the Department of Defense (DOD) solidified$200?million contracts with Google, OpenAI, xAI, and Anthropic, ushering in a new era of integration between the rapidly growing AI companies and present military objectives.

The announcement wasn't a total surprise; on June 5, Anthropic announced Claude Gov, a version of its chatbot retrofitted for government and cybersecurity use cases. On June 16, OpenAI announced its umbrella government initiative that combined its various contracts, including this one with the Department of Defense.

Also: OpenAI tailored ChatGPT Gov for government use - here's what that means

On one hand, the lack of emphasis on safety regulations or transparency requirements -- a priority for Biden's AISI -- could read as a way to ensure AI tools fast?track a path to approval for lucrative military contracts. On the other hand, the tailoring of pedestrian models for government use may address a significant number of safety concerns given DOD's strict requirements.

Without transparency into what model testing is happening as part of that contract, it's unclear.

Overall, Trump's tenure on AI thus far indicates Wednesday's policy announcement will prioritize American leadership in AI, primarily for private companies, in what has been framed as a global race for advancement with China. 

It remains to be seen what, if any, additional priorities it contains at a moment when companies are replacing human workers with AI, people are using the technology in risky ways, and worries that AI erodes critical thinking are on the rise.

Show more

Trump lifts Nvidia chip ban on China

July 14

Both the Biden and Trump administrations had limited the sale of Nvidia's competitive chips, which provide the computing power needed to develop AI systems, to Chinese companies in various ways, in order to stall competition from companies like DeepSeek.

Also: Open-source skills can save your career when AI comes knocking

In a quick reversal last week, the Trump administration changed that approach and is now allowing Nvidia to sell chips to China once again. The thought behind the shift, led by Nvidia CEO Jensen Huang and AI Czar David Sacks, is that the revenue from sales to China will help fund R&D efforts that will keep the US ahead in the race.

However, experts and researchers like those at the AI Now Institute have pointed out that the desire to stay ahead of China is a convenient argument for deregulating a technology that impacts our environment, economic future, and social reality -- and is tied to at least millions in revenue. 

"The purported AI arms race with China is being leveraged to convince governments that domestic infrastructure is an imperative for national competitiveness and security, encouraging public agencies to throw generous tax exemptions at private companies in order to build massive data centers in communities that may not want them there," AI Now researchers write in "Artificial Power," the Institute's 2025 landscape report, published on June 3. "These exemptions -- totaling billions of dollars -- steal investment from strong public resources that benefit everyone, like investments in more teachers, roads, and libraries."

Show more

Trump invests$92 billion in AI in Pennsylvania

July 15

In keeping with Trump's mission to shore up US AI infrastructure, the president visited Pennsylvania to announce a new$92 billion toward "cutting-edge AI and energy initiatives" in the state. The investment intends to ensure data centers and other key parts of AI development have the energy they need to continue centralizing AI growth (and the jobs Trump hopes it will bring) within the US.

Also: How ChatGPT actually works (and why it's been so game-changing)

Show more

Want more stories about AI?Sign up for AI Leaderboard, our weekly newsletter.

tag-icon Tags quentes : Inteligência artificial Inovação

Copyright © 2014-2024 Hi-Network.com | HAILIAN TECHNOLOGY CO., LIMITED | All Rights Reserved.
Our company's operations and information are independent of the manufacturers' positions, nor a part of any listed trademarks company.