The US Government Issues New Directives on AI Development
Amid broader speculation about the legal implications of AI development, and in particular, the use of copyright-protected content to train AI models, U.S. President Donald Trump has issued a range of new orders that aim to make the U.S. the leader in global AI development, and to remove restrictions around key aspects of that process.
President Trump has today issued a range of directives related to AI development, including:
- An order that alleviates federal regulations on the use of copyright-protected content to train frontier AI models
- An order that restricts the use of “woke” AI models by federal departments
- An order that revokes Biden-era requirements around the development of energy projects to support AI projects
- An order promoting the export of American AI technology packages to allies and partners worldwide
The combined AI package will form the basis for America’s push to lead the AI race and become the key provider of AI technology in the years to come.
“It is the policy of the United States to preserve and extend American leadership in AI and decrease international dependence on AI technologies developed by our adversaries by supporting the global deployment of United States-origin AI technologies.”
The updated approach could be a blow to copyright holders, but Trump says that AI developers simply can't be expected to build a successful program “when every single article, book, or anything else that you've read or studied, you're supposed to pay for.”
As such, there’ll be new protections for such usage within certain AI models, reducing the capacity for legal recourse in many cases.
Though it’s the comments around “woke” AI that have sparked the most discussion.
As per the White House order on “woke AI”:
“Artificial intelligence will play a critical role in how Americans of all ages learn new skills, consume information, and navigate their daily lives. Americans will require reliable outputs from AI, but when ideological biases or social agendas are built into AI models, they can distort the quality and accuracy of the output.”
This is true, and there’s evidence to suggest that several AI developers are already building in a level of bias based on their owners’ leanings.
And the White House order specifically points out the bias it’s aiming to address:
“For example, one major AI model changed the race or sex of historical figures – including the Pope, the Founding Fathers, and Vikings – when prompted for images because it was trained to prioritize DEI requirements at the cost of accuracy. Another AI model refused to produce images celebrating the achievements of white people, even while complying with the same request for people of other races.”
The inherent challenge here is that there does need to be a level of control over such prompts, in order to stop AI models from going off the rails as users seek to make them say more controversial things. But at the same time, any weighting is going to make their outputs less objective, which leads to potential problems as people become more reliant on these tools for information.
xAI has been looking to correct for this in its tools by using Elon Musk’s own posts as a reference point to check for factual alignment. Using a single person as a moral compass is obviously not the way to go, but there could be a way to build in an approach like X’s Community Notes to facilitate a more representative and accurate view on all topics.
We just don’t have a real answer yet, and with the web flooded with divisive, partisan takes, and that information being used as the reference input for AI tools, it’s hard to see how we can get there without a level of smoothing in the results.
Indeed, it’s the inputs in this respect that remain the problem. AI models are reliant on large-scale databases, primarily based on what they can access online, but those datasets are likely not accurate representations of general opinion or leaning.
For example, Twitter/X has repeatedly noted that only around 20% of its users ever post anything at all, with the vast majority of people using the app in “read only” mode. That’s likely the same for other social platforms as well, and with only a fifth of users actively contributing to conversations, we’re only getting a small sample, largely of the most divisive, argumentative people, who are then informing how AI chatbots “think” about key topics.
Of course, AI bots are also able to access factual information from additional sources, and that should solidify their answers, particularly on technical queries. But when it comes to matters of political debate and/or divisive opinion, reliance on such input could lead to misinformation and misleading responses.
Which could then see these models fall into “woke” territory, and it’ll be interesting to see how the White House plans to test for “wokeness” in AI models as a means of determining AI procurement.
Though any such test may end up benefiting Elon Musk’s xAI models, particularly given that Musk likely has the inside word on how the government’s policies were developed, and therefore what they’ll be looking for in such testing.
These new orders raise a range of interesting considerations, and could have significant implications, particularly as AI use expands into more areas.
And with AI set to play a major role in social media interaction in particular, it’s worth noting how these rules will impact that interaction, and how they relate to the use of AI for moderation, content creation, and more.
Originally published at Social Media Today