Month: October 2025

Nvidia recently announced the largest private investment in history: an eye-popping $100 billion into OpenAI. But whatever Sam Altman says, this outlay isn’t about empowering people or enabling breakthroughs; this kind of vertical integration is about money, control, and power. It’s the latest step in a decades-long campaign by Big Tech to capture every layer of the digital economy, from chips to clouds to the apps you use. A few trillion-dollar companies now form an AI oligopoly that poses major risks to competition and to our national security.
They’re building an AI economy where the same companies own the infrastructure, the technology, and its applications—and where no one else gets a fair shot. Nvidia, the most valuable company in the world, has long dominated the design of graphics processing units, or GPUs, the chips that power AI.
Amazon, Microsoft, and Google own two-thirds of cloud computing, where chips are put to work and where AI models are built. Each of these three so-called hyperscalers is among the five most valuable companies in the world. At its core, the cloud is a model analogous to electricity, water, and other utilities: computing is a commodified service, generated at a remote location (in this case, a data center) and delivered through a network (here, the internet). Unlike other utilities, however, hyperscalers are unregulated, allowing them to pick winners and losers among their customers. For most developers, that means lock-in and dependency, a dynamic that became normal before today’s AI boom and is now simply taken for granted.
At first, OpenAI and Anthropic seemed like they were ready to challenge Big Tech. Instead, they fused with the silicon giants. What once looked like healthy competition has become a carousel of Big Tech ownership, with upstarts absorbed before they can become real rivals. OpenAI’s biggest investors are Microsoft and now Nvidia. Anthropic’s biggest owners include Amazon and Google. Each has also acquired or invested in countless AI startups.
Tech companies often dub these dealings “partnerships,” but regulators shouldn’t. These are clear moves for cross-ownership, industry consolidation, and sectoral domination. It’s the same strategy Big Tech has used for decades: Microsoft pioneered it in the 1990s, crushing Netscape to protect Internet Explorer. Facebook bought Instagram to kill competition. Google bought DoubleClick to dominate ads. Amazon used its marketplace data to copy and undercut sellers. Apple used its App Store to tax rivals. The AI playbook is no different, and it’s literally the same companies that have already used these tactics to great success.
When the same few companies own the entire tech stack, they stop competing and start colluding. The Federal Trade Commission found cloud providers prioritizing scarce GPUs for companies in which they invest over independent startups. If you’re a startup, these same few companies can be simultaneously your suppliers, investors, customers, and competitors. This creates unavoidable and intolerable conflicts of interest that poison the competitive dynamics that healthy markets require.
History tells us how this story ends and what to do about it. A century ago, railroads snapped up coal mines and gave preferential treatment to their own shipments; Congress forced the rail conglomerates to divest from coal. Later, telecom companies were required to let competitors interconnect rather than wall off their networks. Banks were structurally separated from commerce to avoid conflicts of interest. In the past, lawmakers stepped in when private empires monopolized essential infrastructure, and it’s long been time to do so in the digital economy. The emergence of Big AI makes the case crystal clear.
The first step to a healthier market is to break up companies that are vertically integrated, so that platforms don’t compete with their customers. Chips must be independent from clouds, and clouds must be independent from AI models. Those models should compete on merit, not on whether they’re tethered to a trillion-dollar sponsor. Regulators should reject Nvidia’s OpenAI investment outright. And Congress should pass legislation to break apart other parts of the Big AI ecosystem—including undoing investments and other partnerships that look like acquisitions designed to evade regulatory scrutiny—before it cements even more control of the digital economy.
Undoubtedly, Big Tech will cry that regulation would stifle innovation. Don’t believe it. The real innovation being stifled today is the innovation that’ll never bloom, suffocated by vertical integration and acquisitions disguised as partnerships. If lawmakers fail to act, the future of AI will be written not by open competition or bold new ideas, but by the same handful of firms that already dominate the products where AI may be most economically beneficial: e-commerce, search, and productivity software.
Americans once broke up railroads, reined in banks, and forced telecom to open its networks. The tools are there, and the precedents are clear. What’s missing is political will to break up Big AI before it breaks us.

Before there was social media, there was gossip. Long before tweets could topple reputations, whispers did the job just fine—sometimes with deadlier precision. Gossip fed the frenzy of the Salem Witch Trials and has been the subtext of one too many fables where mischief masks moral rot. But gossip has also been a lifeline, fueling resistance, stitching together communities, and rallying support for social justice movements the world over.
Historically, gossip has been equal parts social glue and poison. It also has a far more pedestrian side, as the fodder of our mundane daily conversations. By some accounts, more than 65% of our conversations are about other people. The stories we tell about others help to spice up the doldrums of life. This explains why Page Six draws 21 million monthly readers, or why an entire teenage generation (myself included) nourished itself on the dramatic woes of Gossip Girl.
The fact that we all gossip so much without getting caught is a paradox hiding in plain sight: how do we so freely spread sensitive information, often about people we know, without the subject of our gossip ever finding out? In a paper recently published in Nature Human Behaviour, we discovered the answer to this question.
Let’s first consider the scope of the problem. If you want to predict where information will spread, you need to estimate how it might move through your social network—not just among your friends, but your friends’ friends, and so on. Social networks typically include hundreds of people, with tens of thousands of possible connections. To predict where a piece of gossip might travel, you need to calculate which of many paths it might travel along. That is a staggering amount of mental math. And yet, humans appear to do this effortlessly before letting gossip slip.
Intrigued by this puzzle, my team of researchers at Brown University, led by graduate student Alice Xia, ran a set of studies to understand how humans pull off this impressive feat.
We started by designing a series of lab experiments using small, artificial social networks. Participants watched pairs of people interact—each interaction representing a friendship—and pieced together a mental map of the network, similar to how we link different streets together to map out an entire neighborhood. As people gathered information about who was friends with whom, they began to infer who was well-connected, who was further removed, and who was well-liked and popular.
Then we asked participants to share information with others without letting it reach the target of this gossip. What we found was striking. People tracked two key features of the network that aren’t directly visible. First, they noted how far the target was from their conversation partner. Second, they paid attention to how popular their conversation partner was. Participants gossiped the least with people who were close to the target—especially if they were popular—and gossiped the most with those who were both popular and socially distant from the target. In other words, people intuitively used popularity and distance to calculate where gossip might spread.
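As a rough illustration only (this is not the study’s actual model), the two cues participants tracked can be sketched in a few lines of Python over a hypothetical friendship graph: a confidant’s popularity is their number of friends, and their distance from the target is the shortest path through the network. All the names and the scoring formula below are invented for the example.

```python
from collections import deque

# Hypothetical friendship network: each person maps to their friends.
friends = {
    "you": ["ana", "ben", "target"],
    "ana": ["you", "ben", "cleo"],
    "ben": ["you", "ana", "cleo", "target"],
    "cleo": ["ana", "ben", "dev"],
    "dev": ["cleo"],
    "target": ["you", "ben"],
}

def distance(network, start, goal):
    """Shortest path length in hops, via breadth-first search."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        person, hops = queue.popleft()
        if person == goal:
            return hops
        for friend in network[person]:
            if friend not in seen:
                seen.add(friend)
                queue.append((friend, hops + 1))
    return float("inf")  # unreachable

def leak_risk(network, confidant, target):
    """Crude invented score: popular confidants close to the target are riskiest."""
    popularity = len(network[confidant])        # degree = number of friends
    hops = distance(network, confidant, target)
    return popularity / hops

# Rank candidate confidants from safest to riskiest.
candidates = ["ana", "ben", "cleo", "dev"]
ranked = sorted(candidates, key=lambda p: leak_risk(friends, p, "target"))
print(ranked)  # safest confidant first
```

Here the unpopular, socially distant "dev" comes out safest, while "ben", who is both popular and directly connected to the target, comes out riskiest, mirroring the intuition the participants seemed to use.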
Our research reflects how gossip operates in relatively small networks. What happens in the real world, where networks contain hundreds of people? Out in the wild, it’s nearly impossible to know all the relationships around you, and tracking how gossip might travel becomes a serious cognitive challenge. In Brown University’s freshman class, we recorded who was friends with whom, and then asked students to guess which peers might hear a piece of gossip depending on where it originated. Even in these large, real-world networks, students formed mental maps that captured the two key features: how popular someone was and how far they were from the target of gossip. These mental maps helped them estimate where gossip was likely to spread.
It might seem like a lot of effort to keep gossip from getting into the wrong hands, but the cost of getting it wrong is steep. I recently finished Edith Wharton’s novel The House of Mirth, where reputation is currency and gossip is the unsung villain, enforcing the rigid rules of New York high society while destroying its nonconforming heroine, Lily Bart. Lily is a cautionary tale for how a life can be undone by backroom rumors if you don’t stay ahead of the gossip game. The ability to gossip effectively and with such precision is a testament to the mind’s sophistication—a feature, not a bug.
Gossip, for all its bad press, is not a character flaw. Rather, it’s a powerful cognitive tool that allows our minds to weigh social risk like a chess master, many moves ahead. We therefore need to stop treating gossip as a moral failing and start recognizing it as a form of social intelligence, a vital skill for managing relationships, reputations, and the flow of information in our modern world. Gossiping wisely is not just smarter than we think—it’s essential for social survival.
- Driverless cars are spreading to more US cities.
- Many are run by Waymo, both directly and through a partnership with Uber.
- I hailed a ride in a Waymo car in San Francisco to see what it’s like.
Robotaxis are becoming common in some big US cities, but they’re still novelties in many.
In Atlanta and Austin, for instance, companies like Uber and Lyft started offering rides in driverless cars earlier this year, but some riders have told me they’ve had to go out of their way to hail one. Where I live, in Washington, DC, you can spot self-driving cars on the road (Alphabet’s Waymo, for one, is testing them in the market), but they’re not yet available to customers.
In San Francisco, meanwhile, self-driving cars are everywhere.
On a trip there last week, I couldn’t walk around areas like Union Square and Market Street in the city center without seeing a car drive by with no one at the wheel.
It was a strange sight for me, but I shouldn’t have been surprised. Waymo has been operating driverless rides in San Francisco since 2022, and the company opened the service up to anyone who downloads its app in the city last year.
After days of seeing robotaxis pass me on the street, I decided to hail a Waymo car and see what it’s like to ride one in a city where they’ve become so ubiquitous. Here’s what I found.
