Trump Reportedly Considering Executive Order Aimed at Vetting New AI Models

According to an anonymously sourced story in the New York Times, the president is considering a new oversight scheme for the AI industry. The report cites people “briefed on the conversations” held at meetings last week between executives from Anthropic, Google, and OpenAI and members of the Trump administration.

Trump is reportedly mulling an executive order that would create an “A.I. working group” made of government and tech industry representatives, and this group would discuss possible oversight plans, including what the Times calls “a formal government review process for new A.I. models.”

The Times’ sources apparently claim the working group itself would determine which government agencies to involve, a list that could include the NSA, the White House Office of the National Cyber Director, and the Office of the Director of National Intelligence (currently led by Tulsi Gabbard).

There is also, it should be noted, already an entity under the aegis of the National Institute of Standards and Technology called the Center for A.I. Standards and Innovation (CAISI), created under President Biden specifically for the vetting of AI models. But it appears that CAISI’s mission was changed shortly after Trump took office.

“For far too long, censorship and regulations have been used under the guise of national security. Innovators will no longer be limited by these standards. CAISI will evaluate and enhance U.S. innovation of these rapidly developing commercial AI systems while ensuring they remain secure to our national security standards,” Secretary of Commerce Howard Lutnick said at the time.

Furthermore, a policy document called “A National Policy Framework for Artificial Intelligence,” released by the Trump White House less than two months ago, calls for very soft regulations, an approach that clashes significantly with what Trump now seems to be considering. It mostly prevents regulations, and contains little that’s more burdensome for Big Tech than age verification requirements.

In spirit, that document was the successor of Vice President J.D. Vance’s blistering speech last year at the AI Action Summit in France, the message of which was basically, AI rules; America wins at AI; and there’s nothing any of your mid-tier economies and your European nanny states can do about it.

The U.S. and U.K. refused to sign a statement at that meeting. Perhaps fittingly, the Times’ sources say the vetting plan under consideration has drawn comparisons to “one being developed in Britain,” in which multiple government entities would vet AI models for safety.

That appears to refer to the plan that took shape shortly after British banks and government agencies were given a preview of Anthropic’s as-yet unreleased Claude Mythos Preview model, which Anthropic deemed too dangerous to release, particularly for cybersecurity reasons. As of last month, regulators at the U.K.’s National Cyber Security Centre, its Financial Conduct Authority, its Treasury, and officials from the Bank of England were scrambling to decide on a course of action.
