Uncanny Valley: 2025 Year-End Tech & Political Recap
Episode Intro
Love it or hate it, 2025 delivered no shortage of seismic shifts that reshaped everything from the global economy to daily life. This year gave us everything from an AI boom remaking entire industries to Elon Musk’s controversial Department of Government Efficiency (DOGE) exerting unprecedented influence over U.S. federal agencies. In this special year-end episode, host Zoë Schiffer and WIRED executive editor Brian Barrett reflect on the year’s most consequential moments and unpack what they signal for 2026.
Articles Mentioned in This Episode
- The AI Data Center Boom Is Warping the US Economy
- The US Needs an Open Source AI Intervention to Beat China
- DOGE Is the Deep State
- ICE Wants to Build Out a 24/7 Social Media Surveillance Team
- The FBI's Jeffrey Epstein Prison Video Had Nearly 3 Minutes Cut Out
Connect With Us
Follow Zoë Schiffer on Bluesky at @zoeschiffer and Brian Barrett on Bluesky at @brbarrett. Reach the Uncanny Valley team by email at [email protected].
How to Listen
You can stream this week’s episode directly via the audio player on this page. To subscribe for free and get every new episode automatically, follow these steps:
- If you use an iPhone or iPad, open the default Apple Podcasts app, or tap this link to subscribe directly.
- You can also find Uncanny Valley on popular third-party apps like Overcast and Pocket Casts—just search “Uncanny Valley” to subscribe. We’re also available on Spotify.
Full Transcript
Note: This is a lightly edited automated transcript, and may contain minor errors.
Zoë Schiffer: Welcome to WIRED’s Uncanny Valley. I’m Zoë Schiffer, WIRED’s director of business and industry coverage. Today, we’re wrapping up our annual news recap by looking back at the biggest tech and political trends that defined 2025. And there’s no better co-host for this than our executive editor Brian Barrett, who toils tirelessly away from the spotlight, right?
Brian Barrett: Thanks for having me, Zoë. Happy to finally emerge from my shadowy lair for this chat.
Zoë Schiffer: Out of the dark, deep cave. Alright, what a year this has been—and I’m so ready for it to be almost over.
Brian Barrett: Oh, same. No question.
Zoë Schiffer: It’s safe to say this year has been absolutely packed with news, especially at the intersection of tech and politics. It was actually pretty tough to narrow down which trends to cover today, but we landed on five stories that perfectly sum up 2025, and give us clear clues about what will unfold in 2026. The first I want to dive into is near and dear to my beat: AI data centers. We all know how staggering the investment pouring into these facilities is right now—Meta, Google, and Microsoft all tripled down on AI infrastructure spending this year alone. But it’s not just the total dollar amount that matters. How this capital is being deployed is already sending ripple effects across the entire tech sector and the broader U.S. economy.
Brian Barrett: I’m glad we’re starting here, because data centers sound like the most boring topic imaginable. In a perfect world, they’d just be unnoticeable background infrastructure. But right now, they’re at the center of almost every major shift happening in tech. They’re single-handedly propping up large swathes of the U.S. economy, driving major environmental harm, pushing up household energy costs for regular people, and powering every AI tool—some of which are incredibly useful, many of which are not, depending on who’s building and using them. If you’d told anyone five years ago that data centers would be the biggest story in tech, no one would have believed you.
Zoë Schiffer: I mean, if you’d told me five years ago we’d have multiple full-time reporters covering data centers as a core beat, I’d have said that sounds way too boring to bother with. But it turns out we agree with Peter Thiel on this, of all things. Thiel says the U.S. doesn’t have any other big moonshot projects right now—we don’t have a modern Manhattan Project. All we have is artificial intelligence. He frames that as a bad thing, that we should be working on other big initiatives too. But more specifically, all the action in AI right now revolves around data centers. A lot of industry leaders, from Sam Altman on down, are openly talking about an AI bubble that continues to inflate. And data centers are right at the center of that.
Many of these financing deals are structured through special purpose vehicles, so the spending doesn’t show up directly on these companies’ balance sheets. On top of that, 60 percent of the cost of building a data center goes to GPUs, and you have to replace those GPUs every three years. A lot of observers are looking at this setup and warning that the math just doesn’t add up long-term.
Brian Barrett: That’s exactly what Michael Burry is betting on, right? Burry, who became famous for calling the 2008 housing bubble (he was the central figure in The Big Short), has placed a major bet that this AI data center boom is a bubble with funky accounting that’s going to burst. Now, Burry hasn’t been right every time he’s called a bubble.
Zoë Schiffer: That’s what I was going to say.
Brian Barrett: He’s not infallible, but his argument reflects a broader concern circulating right now, and he has a track record of catching bubbles early. Zoë, do you think 2026 is the year this all comes to a head? Even Sam Altman admits this is probably a bubble, that there will be big winners and big losers. He’s obviously betting he’ll end up like Google was after the dot-com bubble: he’ll survive, come out stronger, and dominate the market for decades. Do you think that’s where we’re heading in 2026, or do we have more time before it pops?
Zoë Schiffer: I actually think we have at least one more good year—“good” depending on where you stand in this industry, of course. This is mostly just a gut feeling, so take it with a huge grain of salt, but I think we have until 2027 before this really hits a breaking point. One thing I’m watching closely: I’ve visited several of these new data centers, and I’ve talked to politicians who fall over themselves to attract data centers to their districts. Right now, supporting data centers doesn’t carry much political cost in a lot of places. There’s obviously some local pushback, but most of these new data centers are being built in red states where job opportunities are scarce, so leaders frame them as a huge economic boon that will bring tons of jobs to local communities. But the reality is very different.
First, you need thousands of workers to build a data center, but you only need a tiny fraction of that number to keep it running once it’s built. Most of those construction jobs disappear once construction is done. Second, these facilities are incredibly resource-intensive: they require massive amounts of water for cooling, and they suck up extraordinary amounts of energy. My prediction is that before long, supporting new data centers will become politically untenable, and we’ll see a major push to offshore data center development within the next three years.
Brian Barrett: And even those construction jobs don’t usually go to local workers, because a lot of the work requires specialized crews that travel from site to site. I live in a red state, and there’s a data center being proposed near me—there’s local pushback, but not enough to stop it. It’s still going to get built. But I agree with you: once people start connecting the dots between data centers and their higher energy bills, and realize the permanent jobs they were promised never materialized, we’re going to see a lot more widespread pushback. And after that? We’ll just build data centers in space.
Zoë Schiffer: Literally, I was just going to say on Mars! I live in Santa Barbara right now. We don’t have a big data center here, but we’re really close to a SpaceX rocket launch site. This is a pretty politically apathetic community, but people rally hard against those launches because they scare their dogs and horses. I can only imagine what would happen if someone tried to build a data center here.
Brian Barrett: That would be amazing to watch.
Zoë Schiffer: OK, we’d be remiss not to talk about what all these data centers are actually being built for: chatbots, specifically inference capacity to serve millions of people asking ChatGPT and other tools questions every day. One trend that really stood out this year was the explosion of AI chatbot companions and AI-powered romantic relationships. How do you feel about this trend now, and how has your perspective changed since the start of the year?
Brian Barrett: It’s such an interesting topic. On one hand, we’ve seen so many high-profile cases where chatbot companions have caused real harm: there have been cases where interactions with AI companions have contributed to people dying by suicide, and we’ve seen a lot of really unhealthy attachment patterns develop. But on the other hand, there are also cases where people say these AI companions have made them genuinely happier, that they fill a void that nothing else in their life is filling. Personally, I’ve always been reflexively skeptical of the whole idea, but I’ve started to come around to the fact that just because it’s not for me doesn’t mean it doesn’t serve a real need for some people.
Most of all, this space is still so new. We need so much more research to understand what these relationships do to people, for better or for worse. I just wish AI companies had done more of that research upfront before rolling these tools out and telling people to go have fun with their new AI boyfriends or girlfriends. No one actually knows what the long-term consequences are yet—it’s still too early. I’m glad to see more safeguards being put in place across the industry now, but it still feels like we’re moving way too fast, before we even understand what we’re dealing with.
Zoë Schiffer: I’ve been thinking a lot about this in the context of what’s being called “AI psychosis.” As a tech reporter, I’ve often seen issues like misinformation where the root cause is technical, but everyone looks only to tech companies to fix a problem that impacts all of society. With AI psychosis—when a chatbot validates a user’s unhinged theory about new physics or whatever, and the user comes to believe it—that does feel like a problem with the chatbot itself. But with AI relationships, when there is harm, I wonder if it’s a more complex issue—more a symptom of broader societal disconnection than a cause of it. I absolutely agree we need more safeguards for AI tools, but I also think we need broader societal changes to help people connect with each other more easily, to build more community in daily life.
Brian Barrett: Zoë, we’re just going to bring back bowling leagues. That’s the solution. What could be better than bowling leagues?
Zoë Schiffer: Fair enough. Another trend we covered extensively this year at WIRED is the global race to build cutting-edge frontier AI models. I promise we’ll get to non-AI topics soon, but it would be dishonest to pretend AI wasn’t the defining story for our industry this year. One of the biggest moments came really early in the year: back in January, Chinese AI lab DeepSeek released its open R1 model, and it completely upended the global AI industry.
Brian Barrett: Yeah, it felt like it came out of nowhere, but it shouldn’t have been a surprise. China has been doing really impressive AI work for a while now.
Zoë Schiffer: And it actually moved markets, right? Investors panicked.
Brian Barrett: Right now, Nvidia is the biggest bellwether for the global AI industry. After DeepSeek launched, Nvidia lost nearly $600 billion in market capitalization in a single day on January 27. That’s the largest single-day single-stock market cap loss in history.
Zoë Schiffer: And that was the end of Nvidia, right?
Brian Barrett: Yeah, no one’s heard from them since.
Zoë Schiffer: [laughs] No, they’ve done just fine.
Brian Barrett: They’re still doing great. But that tells you how much room there is for these sky-high AI stocks to swing wildly. It also shows how much impact a single model release can have. There are dozens of new AI models coming out every month, but the fact that one Chinese model could have that big of an impact shows how seriously everyone takes this competition—and they’re right to take it seriously.
The biggest thing about DeepSeek R1 for me was its open licensing: it’s an open-weight model, which means anyone can use it. Before that, only Meta was really pursuing that strategy at scale in the U.S. All of a sudden we had a Chinese model that was competitive with Meta’s Llama, and Llama kind of got pushed aside. Now we’re in a world where China is leading on open-weight models that anyone can access, adapt, and run. That’s a really big deal, because if you have a choice between paying for a closed model or using a free, capable open one, most people are going to pick the open one. These Chinese open models are going to shape how millions of people use AI over the next decade, and I think we’re going to see a huge amount of Chinese influence on global AI development as a result.
Zoë Schiffer: Let me clarify for anyone who doesn’t know the jargon: open-weight means the model’s core parameters (the weights) are published publicly. Anyone can download the model onto their own device, and modify it however they want. You can’t do that with closed models like ChatGPT, but you can with DeepSeek. You can study how it works, and tweak it to fit your specific use case. The reason this model is so attractive for AI developers is that instead of only having your own 300 in-house researchers working on improvements, you release it openly, and suddenly every tinkerer and developer in the world can contribute improvements that you can then integrate into your own model. You get access to a global research community that’s way bigger than anything you could build in-house.
That’s a huge advantage for China, because they’re going all in on open source, open-weight AI, and it’s letting them advance really, really quickly. Meanwhile, the U.S. has shifted hard toward closed-source AI. Even Meta, which was one of the first big U.S. companies to build advanced open source AI, has signaled its next generation of models will likely be proprietary. This creates a real strategic disadvantage for the U.S. On top of that, we’re duplicating so much work: every U.S. lab is using massive amounts of energy, resources, research, and compute to do the exact same training run from scratch, instead of building on the innovations other labs have already made.
Brian Barrett: Zoë, what do you think about the censorship that Chinese models are often subject to? DeepSeek has run into that issue, same as other Chinese models. Do you think that will limit its growth, or do you think most users just won’t care that much? I suspect it’s the latter, but I’m curious what you think.
Zoë Schiffer: I was talking about this with Will Knight, one of our top AI reporters here at WIRED, and from what he’s seen, most users really don’t care. Even U.S. companies that publicly talk about the U.S. vs China AI race are using DeepSeek behind closed doors. It’s cheaper, it’s really advanced, and its capabilities hold up well against U.S. models. This also changes the whole debate around U.S. export controls. The debate has been: do we cut China off from advanced GPUs to slow their progress, or do we let them access our chips so they become dependent on U.S. hardware? When DeepSeek launched, it was a clear signal that cutting China off wasn’t working, because it pushed them to innovate in cheaper, more efficient ways—DeepSeek was trained for far less money than most comparable U.S. models.
Now we’ve seen the Trump administration backtrack, and allow China access to some cutting-edge chips, while China has responded by saying any company operating in China can’t use those U.S. chips. They’re trying to tie Chinese AI development more closely to Chinese-made hardware.
Alright, moving on from AI. The next big trend that defined our coverage this year was the creation and operation of the so-called Department of Government Efficiency, or DOGE. Where do we even start, Brian?
Brian Barrett: Yeah, that’s a lot.
Zoë Schiffer: This story kept giving us new angles all year, and for good reason. We recently learned that DOGE members are still working across the federal government, largely unsupervised from what we can tell. So it’s the perfect time to take stock of why DOGE has been such a big deal this year.
Brian Barrett: I was just looking back at our earliest DOGE reporting this week, and I was reminded what a chaotic few months that was. For anyone who tuned out the first half of the year: I’m jealous, honestly. To recap: DOGE came out of a deal between Elon Musk and Donald Trump. Trump basically gave Musk free rein to operate however he wanted across the federal government—I’m not exaggerating that. Musk’s allies took over key agencies: the Office of Personnel Management, which handles HR for the entire federal government, and the General Services Administration, which runs the federal government’s IT infrastructure. From there, they spread out to almost every agency, and they caused a huge amount of chaos in the first months of the Trump administration: massive layoffs, deep cuts to USAID, widespread rollbacks of regulations, a ridiculous policy requiring every federal worker to send a weekly email listing five things they did that week, which no one ever reads.
DOGE never delivered on its original goal. The plan was to cut $1 trillion from the federal budget, which is literally impossible unless you cut entitlement programs—something DOGE had no control over, and something that would be politically impossible to pull off. So Zoë, what did DOGE actually accomplish, then?
Zoë Schiffer: Yeah, the official goal was to root out fraud, waste, and abuse in the federal government. That sounded good on paper, but it quickly became clear that this was mostly a political project for Elon Musk, and he’s basically said as much. I remember when the executive order officially creating and naming DOGE came out. Before that, everyone was talking about it like it