AI Isn’t Just Being Built. It’s Being Negotiated

Australia has always punched well above its weight globally, but Australians have always kept an eye on America. Hollywood entertainment, New York style, pro sports leagues: there's an undeniable gravity to it all. Tech is no different. If you're an Australian business owner or entrepreneur, you're likely spending a lot of time watching what happens in Silicon Valley. That instinct makes sense, because most of the frontier breakthroughs in artificial intelligence are still coming out of American labs and companies.


But lately, especially in the midst of the AI boom, the most interesting signals aren't coming from Hollywood, the New York culture scene, or the California tech labs. They're coming from Washington DC.


Artificial intelligence has recently crossed a threshold in the United States. It's no longer just a technology story; it's now firmly (and dramatically) a political one.


You can see the shift in the alliances and arguments starting to form around it. Bernie Sanders, for example, has been openly skeptical about the economic impact of AI, warning that automation could concentrate wealth and wipe out the middle class. He's also pushed for stronger guardrails around how large tech companies deploy automation, arguing that productivity gains should translate into higher wages and shorter working hours rather than mass displacement. His concern isn't just that jobs will change; it's that the economic upside could pool at the top while the middle gets squeezed, and that the benefits will concentrate far faster than the system can adapt.


At the other end of the spectrum, Trump has recently taken aim at companies like Anthropic, questioning how much influence AI developers should have over national infrastructure and public life. He tends to frame advanced AI as a "national asset," arguing that companies building frontier models should be subject to government oversight and stay aligned with U.S. priorities.


The recent Anthropic drama exposed the tension building between politicians and AI companies, because it wasn't a routine policy disagreement; it was a full-blown standoff.
Anthropic drew a line and said it didn't want its models used for things like autonomous weapons or mass civilian surveillance. The Pentagon's response was blunt: remove the restrictions or you're out.


At the same time, reports were emerging that Claude had already been used in real military operations, including the Venezuela raid and ongoing activity in Iran. Anthropic ended up in a difficult middle ground: the company was trying to set ethical boundaries while the technology was already embedded deeply enough that those boundaries became…negotiable. Trump's response was, predictably, to escalate.


Anthropic was labelled a national security "supply chain risk", contracts were cut, and federal agencies were told to stop using its systems. The reasoning: if a private company can decide how these models behave, it can also decide when they don't.


One of the most advanced AI systems in the world was technically banned, yet it remained deeply embedded in the infrastructure it was built for. Military teams still don't want to lose it, replacements aren't ready, and the transition could take well over a year.
This stalemate highlights the real issue.


It's not just noise. AI has already become too integrated and too strategically important to cleanly separate from policy.


Bernie's opinions and Trump's actions come from totally different perspectives, of course. One is concerned with labour and economic disruption; the other with corporate power and national control over influential technology.
But both point to the same underlying reality: artificial intelligence is becoming too important to remain a purely private-sector project.


When technologies start to shape productivity and national competitiveness, politics inevitably moves in. The Biden administration's executive order on AI, for example, required companies building advanced models to share safety testing results with the government before public release. It also set expectations around security and transparency for systems that could impact national infrastructure. It was an early, lower-key signal that the legal rulebook was starting to form.


It happened with railroads. It happened with telecommunications. And it happened with the internet. AI is simply the next chapter of that story.


For business leaders around the world, that shift matters more than it might initially appear. Policy decisions and disagreements in the United States will influence how models are trained, how data can be used, what safety standards emerge, and which companies are allowed to operate globally. Those choices will ripple outward into every other market, including, of course, Australia.


Australian companies often treat American politics as a distant spectacle. It's like a reality show, easy to write off as messy entertainment. But when it comes to artificial intelligence, ignoring the political layer would be a clear strategic mistake.
The rules that emerge from Washington will shape the infrastructure that ambitious businesses everywhere end up building on.


If you're serious about deploying AI in your business, watching the global political debate is no longer optional. It's part of understanding where the technology is actually heading.
The smartest companies won't just follow the breakthroughs coming out of US labs. As mentally draining as it can be, they'll also be paying attention to the political arguments unfolding in the room where it happens.

If you want a sense of how quickly this conversation is evolving, a few podcasts worth listening to include:

The AI Daily Brief

Offline with Jon Favreau

Pivot (with Kara Swisher and Scott Galloway)

The Ezra Klein Show