Welcome back to In the Loop, TIME’s new twice-weekly newsletter about the world of AI. We’re publishing installments both as stories on Time.com and as emails.
If you’re reading this in your browser, you can subscribe to have the next one delivered straight to your inbox.
What to Know
Let’s say, sometime in the next few years, artificial intelligence automates most of the jobs that humans currently do. If that happens, how can we avoid societal collapse? This question, once the stuff of science fiction, is now very real.
In May, Anthropic CEO Dario Amodei warned that AI could wipe out half of all entry-level white-collar jobs in the next one to five years, and push unemployment as high as 20%. In response to such a prediction, you might expect states to begin seriously drawing up contingency plans. Not so. But a growing group of academic economists is working on this question. A new paper, published this Tuesday, suggests a novel way for states to protect their populations in a world of mass AI-enabled job loss.
Sovereign wealth funds — The paper recommends that states invest in AI-related industries via sovereign wealth funds, the same type of financial vehicles that have allowed the likes of Norway and the United Arab Emirates to diversify their oil wealth. This isn’t strictly a new idea. In fact, the UAE has already been investing billions of dollars from its sovereign wealth funds into AI. Nvidia, the semiconductor company, has been urging states to invest in “sovereign AI” for years now. But unlike those examples, which are focused on yielding as big a return on investment as possible, or exerting geopolitical influence over AI, the paper lays out the social reasons that this might be a good idea.
AI welfare state — The paper argues that if transformative AI is around the corner, the ability of states to economically provide for their citizens may be directly tied to how exposed they are to AI’s upside. “Such investments can be seen as part of the state’s responsibility to safeguard public welfare in the face of disruptive technological change,” the paper argues. The returns on investment could be used to fund universal basic income, or a “stabilization fund” that could allow states to “absorb shocks, redistribute benefits, and support displaced workers in real time.”
Reasons to be skeptical — To be sure, this approach has risks. States investing billions in AI could paradoxically accelerate the very job-automation trends that they’re seeking to mitigate. On the flip side, if AI turns out to be less transformative than expected, piling in at the top of the market could bring losses. And as with retail investing, any potential upside is proportional to how much money you have available in the first place. Rich countries will have the opportunity to become richer; poor countries will struggle to participate at all. “As [transformative AI]-generated wealth risks deepening global inequality, it may also be necessary to explore new models for transnational benefit-sharing,” the paper notes, “including internationally coordinated sovereign investment vehicles that allocate a portion of AI-derived returns toward global public goods.”
Who to Know
Person in the news – Elon Musk, owner of xAI
Late Wednesday night, with the launch of Grok 4, the AI race got a new leader: Elon Musk. At least, that’s if you believe the benchmarks, which show Grok trouncing competition from OpenAI, Google and Anthropic on some of the industry’s most difficult tests. On ARC AGI 2, a benchmark designed to be easy for humans but difficult for AIs, Grok 4’s reasoning mode scored 16.2%—nearly double that of its closest contender (Claude Opus 4 by Anthropic).
Unexpected result — Musk’s xAI has not traditionally been seen as a “frontier” AI company, despite its huge cache of GPU chips. Previous releases of Grok delivered middling performance. And just a day before Grok 4’s release, an earlier version of the chatbot had a meltdown on X, repeatedly referring to itself as “MechaHitler” and sharing violent rape fantasies. (The posts were later deleted.) Episodes like this had encouraged hopes, in at least some corners of the AI world, that the far-right billionaire’s attempts to make his bot more “truth-seeking” were actually making it more stupid.
Musk on Grok and AGI — On a livestream broadcast on X on Wednesday night, Musk said Grok 4 had been trained using 10 times as much computing power as Grok 3. “With respect to academic questions, Grok 4 is better than PhD level in every subject, no exceptions,” he said. On the subject of artificial general intelligence, he said: “Will this be bad or good for humanity? I think it’ll be good. Most likely it’ll be good. But I’ve somewhat reconciled myself to the fact that even if it wasn’t gonna be good, I’d at least like to be alive to see it happen.”
What Musk does best — If Grok 4’s benchmark scores are borne out, it would mean that Musk’s core skill — spinning up cracked engineering teams that are blindly focused on a single goal — is applicable to the world of AI. That will worry those in the industry who care about not just developing AI quickly, but also doing so safely. As the MechaHitler debacle showed, neither Musk nor anybody else yet knows how to prevent current AI systems from going badly out of control. “If you can’t prevent your AI from endorsing Hitler,” says expert forecaster Peter Wildeford, “how can we trust you with ensuring far more complex future AGI can be deployed safely?”
AI in Action
Where is AI? Last year, I wrote about a group of researchers who had attempted to answer that question. Now, the team at Epoch AI has gone one step further: they’ve built an interactive map of more than 500 AI supercomputers, to track exactly where the world’s major AI infrastructure is located. The map confirms what I wrote last year: AI compute is concentrated in rich countries, with the vast majority in the US and China, followed by Europe and the Persian Gulf.
As always, if you have an interesting story of AI in Action, we’d love to hear it. Email us at: intheloop@time.com
What We’re Reading
They Asked an A.I. Chatbot Questions. The Answers Sent Them Spiraling, by Kashmir Hill in the New York Times
Large language models are mysterious, shapeshifting artifacts. Their creators train them to adopt helpful personas—but sometimes these personas can slip, revealing a darker side that can lead some vulnerable users down conspiratorial rabbit holes. That tendency was especially stark earlier this year, when OpenAI shipped an update to ChatGPT that inadvertently caused the bot to become more sycophantic—meaning it would egg on almost anything a user said, delusional or not. Kashmir Hill, one of the best reporters in the business, spoke to many users who experienced this behavior, and found some shocking personal stories… including one that turned out to be fatal.