
The AI doomers are having their moment

A creative image of AI surrounded by yellow foldable warning signs.
The world’s leading tech companies are in a race to AGI. There is growing evidence that large language models, which underpin the most popular chatbots, might never get there.

  • Top AI companies are in a race to develop artificial general intelligence.
  • The large language models powering popular chatbots, however, are showing their limits.
  • Some researchers say world models or other strategies might be the clearer path to AGI.

The race to build artificial general intelligence is colliding with a harsh reality: Large language models might be maxed out.

For years, the world’s top AI tech talent has spent billions of dollars developing LLMs, which underpin the most widely used chatbots.

The ultimate goal of many of the companies behind these AI models, however, is to develop AGI, a still theoretical version of AI that reasons like humans. And there’s growing concern that LLMs may be nearing their plateau, far from a technology capable of evolving into AGI.

AI thinkers who have long held this belief were once written off as cynical. But since the release of OpenAI’s GPT-5, which, despite improvements, didn’t live up to OpenAI’s own hype, the doomers are lining up to say, “I told you so.”

Perhaps principal among them is Gary Marcus, a longtime AI researcher and best-selling author. Since GPT-5’s release, he’s taken his criticism to new heights.

“Nobody with intellectual integrity should still believe that pure scaling will get us to AGI,” he wrote in a blog post earlier this month, referring to the costly strategy of amassing data and data centers to reach general intelligence. “Even some of the tech bros are waking up to the reality that ‘AGI in 2027’ was marketing, not reality.”

Here’s why some think LLMs are not all they are cracked up to be, and the alternatives some AI researchers believe are the better path to AGI.

The AI bubble

OpenAI is among the most valuable startups on the planet. It has raised about $60 billion, and a discussed secondary share sale could push the company’s valuation over $500 billion, which would make it the most valuable private company in the world.

There are good reasons for the excitement. According to the company, ChatGPT has 700 million weekly users, and OpenAI’s products have largely set the pace of the AI race.

There are a couple of problems, however. First, and perhaps foremost for its investors, OpenAI is not profitable and shows few signs of becoming profitable soon. Second, the company’s founding mission is to develop AGI in a way that benefits all of humanity, yet there’s a growing feeling that this world-changing technology, which props up much of the hype around AI, is much further away than many engineers and investors originally thought.

Other companies, too, have been riding this hype wave. Google, Meta, xAI, and Anthropic are all pouring billions of dollars into scaling their LLMs, which means snapping up talent, buying data, and building vast arrays of data centers.

The mismatch between spending and revenue, and hype and reality, is provoking alarm that the AI industry is a bubble on the verge of bursting. OpenAI CEO Sam Altman himself thinks so.

“Are we in a phase where investors as a whole are overexcited about AI? My opinion is yes. Is AI the most important thing to happen in a very long time? My opinion is also yes,” he told journalists earlier this month.

While other tech leaders, like former Google CEO Eric Schmidt, are less certain, a $1 trillion stock market tech sell-off last week showed the concerns are widespread. The market recovered on Friday after Federal Reserve Chair Jerome Powell said he is considering a rate cut in September.

Now, everyone is eagerly anticipating Wednesday’s earnings report from Nvidia, which makes the chips powering LLMs and is the pick-and-shovel company of the AI rush. If the company’s earnings show signs of slowing and its outlook is more cautious, there will be a whole new round of worry, and the AI doomers will again remind everyone of what they’ve been saying for years: LLMs are not the way.

The problem with LLMs

In June, Apple researchers released a paper called “The Illusion of Thinking.” What they found sounded positively human: Advanced reasoning models give up when faced with more complex tasks.

Their conclusion, however, was that these models rely on pattern recognition rather than logical thinking, and the researchers cautioned against the belief that they could result in AGI. “Claims that scaling current architectures will naturally yield general intelligence appear premature,” the researchers wrote.

The paper was widely mocked online, largely because Apple, despite its size and vast resources, is perceived as far behind in the AI race. For skeptics, however, it was validating.

Andrew Gelman, a professor of statistics and political science at Columbia University, has argued that the level of textual comprehension shown by LLMs falls short of expectations. What LLMs do compared to what humans do is the difference between “jogging and running,” Gelman wrote in a 2023 blog post.

“I can jog and jog and jog, thinking about all sorts of things and not feeling like I’m expending much effort, my legs pretty much move up and down of their own accord … but then if I need to run, that takes concentration,” he wrote.

Geoffrey Hinton, the Nobel Prize winner known to some as the Godfather of AI, disagrees. “By training something to be really good at predicting the next word, you’re actually forcing it to understand,” he told The New Yorker back in 2023.

Another potential problem with LLMs is their tendency to misinterpret the meanings of words, hallucinate, and spread misinformation. This reality is why, for now, most companies adopting AI require a human in the mix.

In a report published earlier this year, a group of academic researchers in Germany specializing in computational linguistics surveyed “in-the-wild” hallucination rates for 11 LLMs across 30 languages. They found that the average hallucination rate ranged from 7% to 12% across languages.

Leading AI companies like OpenAI have, in recent years, operated under the belief that these problems can be mitigated by feeding LLMs more information. The so-called scaling laws, which OpenAI researchers outlined in a 2020 paper, state that “model performance depends most strongly on scale.”
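The scaling laws describe a power-law relationship: loss falls smoothly as parameter count grows, but each multiplicative jump in scale buys a shrinking absolute gain. A minimal sketch of that curve, with constants that loosely follow the 2020 paper's reported fits but should be treated as illustrative placeholders:

```python
# Illustrative power-law scaling curve of the kind the 2020 paper
# describes: loss falls as a power of parameter count, so each 10x
# increase in scale shaves off less loss than the one before.
# The constants are illustrative, not authoritative fits.

def scaling_loss(n_params: float, n_c: float = 8.8e13, alpha: float = 0.076) -> float:
    """Loss under a pure power law: L(N) = (N_c / N) ** alpha."""
    return (n_c / n_params) ** alpha

# Each order-of-magnitude jump in parameters yields a smaller gain,
# which is the "diminishing returns" pattern skeptics point to.
for n in [1e9, 1e10, 1e11, 1e12]:
    print(f"N = {n:.0e}: loss ~ {scaling_loss(n):.3f}")
```

The shape of the curve, not the particular numbers, is the point of contention: a power law never stops improving, but the cost of each increment grows geometrically.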

However, recently, researchers have begun to question whether LLMs have hit a wall and are facing diminishing returns as they scale. Yann LeCun, Meta’s chief AI scientist who heads a lab under the company’s superintelligence unit, is largely focused on next-generation AI approaches instead of LLMs.

“Most interesting problems scale extremely badly,” he said at the National University of Singapore in April. “You cannot just assume that more data and more compute means smarter AI.” Apple’s analysis also found that current LLM-based reasoning models are inconsistent due to “fundamental limitations in how models maintain algorithmic consistency across problem scales.”

Alexandr Wang, the head of Meta’s superintelligence division, appears equally uncertain. He said scaling is “the biggest question in the industry” at the Cerebral Valley conference last year.

Even if scaling worked, access to high-quality data is limited.

The hunt for unique data has been so fierce that leading AI companies are pushing boundaries — sometimes at the risk of copyright violations. Meta once considered acquiring publisher Simon & Schuster as a solution. Anthropic collected and scanned millions of pirated books while training Claude, a practice a district judge ruled in June was not fair use.

Ultimately, some leading AI researchers say language itself is the limiting factor, and that’s why LLMs are not the path to AGI.

“Language doesn’t exist in nature,” Fei-Fei Li, the Stanford professor famous for creating ImageNet, said on an episode of Andreessen Horowitz’s podcast in June. “Humans,” she said, “not only do we survive, live, and work, but we build civilization beyond language.”

LeCun’s gripe is similar.

“We need AI systems that can learn new tasks really quickly. They need to understand the physical world, not just text and language but the real world, have some level of common sense, and abilities to reason and plan, have persistent memory — all the stuff that we expect from intelligent entities,” he said during his talk in April.

New ways to AGI

Researchers like Li and LeCun are pursuing an alternative to LLMs, called world models, that they believe is a better path to AGI.

Unlike large language models, which determine outputs based on statistical relationships between words and phrases, world models make predictions by simulating and learning from the world around them. That approach is more akin to how humans learn; LLMs instead rely on troves of text far larger than anything a human could ever read.

Computer scientist and MIT professor Jay Wright Forrester outlined the value of this kind of model all the way back in a 1971 paper.

“Each of us uses models constantly. Every person in private life and in business instinctively uses models for decision-making. The mental images in one’s head about one’s surroundings are models,” he wrote. “All decisions are taken on the basis of models. All laws are passed on the basis of models. All executive actions are taken on the basis of models.”

Recent research has found that world models not only capture reality as it is, but can also simulate new environments and scenarios.

In a 2018 paper, researchers David Ha and Jürgen Schmidhuber built a simple world model inspired by human cognition. It was used not only to model hypothetical scenarios, but also to train agents.
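Their design has three parts: a vision component (V) that compresses each observation into a small latent vector, a memory component (M) that predicts the next latent state given an action, and a compact controller (C) that picks actions from that latent state. The sketch below shows only this structure; the random linear maps are stand-ins for the trained networks (a VAE, an RNN, and a small controller in the paper):

```python
import numpy as np

# Structural sketch of the V-M-C layout from the 2018 "World Models"
# paper. The weights below are random placeholders, not trained models.

rng = np.random.default_rng(0)
OBS, LATENT, ACT = 64, 8, 2

W_v = rng.normal(size=(LATENT, OBS)) * 0.1          # stand-in for the VAE encoder (V)
W_m = rng.normal(size=(LATENT, LATENT + ACT)) * 0.1  # stand-in for the RNN (M)
W_c = rng.normal(size=(ACT, LATENT)) * 0.1           # stand-in for the controller (C)

def encode(obs):              # V: raw observation -> compact latent state
    return np.tanh(W_v @ obs)

def predict_next(z, action):  # M: (latent, action) -> predicted next latent
    return np.tanh(W_m @ np.concatenate([z, action]))

def act(z):                   # C: latent state -> action
    return np.tanh(W_c @ z)

# "Dreaming": after one real observation, the agent rolls the model
# forward entirely inside latent space -- the trick the paper uses to
# train policies cheaply before transferring them to the real world.
z = encode(rng.normal(size=OBS))
for _ in range(5):
    a = act(z)
    z = predict_next(z, a)
print("latent after 5 imagined steps:", np.round(z, 3))
```

Because the controller operates on the small latent state rather than raw pixels, it can stay tiny, which is what makes training it inside the simulated "dream" cheap.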

“Training agents in the real world is even more expensive,” the authors wrote. “So world models that are trained incrementally to simulate reality may prove to be useful for transferring policies back to the real world.”

In August, Google’s DeepMind released Genie 3, a world model that it says “pushes the boundaries of what world models can accomplish.” It can simulate real-world environments and their physical properties, like volcanic terrain or a dimly lit ocean, which could allow AI to make predictions based on what it learns from these simulations.

There are other ideas in the works, too. Neuroscience models try to mimic the processes of the brain. Multi-agent models operate on the theory that multiple AIs interacting with each other is a better analogy to how humans function in real life. Researchers pursuing multi-agent models believe AGI is more likely to emerge through this kind of social exchange.

Then, there is embodied AI, which adapts world models into physical forms, allowing robots to interpret and train on the world around them. “Robots take in all kinds of forms and shapes,” Li said on the No Priors podcast in June.

The potential of these alternatives, and of world models in particular, gives hope even to Marcus, the premier LLM doomer. He refers to world models as cognitive models and urges AI companies to pivot away from LLMs and focus on these alternatives.

“In some ways, LLMs far exceed humans, but in other ways, they are still no match for an ant,” Marcus said in a June blog post. “Without robust cognitive models of the world, they should never be fully trusted.”

Read the original article on Business Insider

National Guard units begin carrying weapons in Washington

The move comes after Trump deployed hundreds of troops to the US capital in what he portrayed as a crime crackdown.

Australia’s youngest senator describes depression, ‘whack’ responses and a pet-related white lie in first speech

The 21-year-old spoke of being bullied and battling mental health issues, and said she will focus on issues including housing, domestic violence and the climate crisis

Australia’s youngest senator battled depression and bullying before her election, and has dealt with misogyny and Pauline Hanson since.

Labor’s Charlotte Walker, a South Australian who turned 21 on election night, delivered her first speech on Monday night.



Trump Calls For ‘Fake News’ Networks To Have Licenses Revoked by FCC

Trump also asked why networks weren’t paying “millions of dollars” in license fees.

A Harvard professor on why AI ‘evangelism’ is harming students’ career prospects

Student on Harvard campus
Alex Green, a Harvard professor, explains his concerns with AI use in classrooms.

  • Debate is rampant over the best ways to incorporate AI in classrooms.
  • Alex Green, a Harvard professor, said he’s concerned that AI use is damaging students’ communication skills.
  • He said that while there’s a place for AI, intensive teacher training to understand its risks is vital.

You’re a student who spent an entire semester researching and writing a 20-page paper. You’ve poured time and effort into the assignment, and you’re looking forward to hearing your professor’s feedback.

Instead, you get a mediocre grade and three short paragraphs of vague comments, and you wonder: Did ChatGPT grade my essay?

Turns out, it did.

That’s a scenario a student recounted to Alex Green, an author and professor at Harvard’s Kennedy School. Green told Business Insider that the “AI evangelism” push — efforts to use AI across classrooms to make both teaching and learning easier — is doing more harm than good, undermining critical relationships between teachers and students.

Whether teachers or students are using AI, Green said it’s leading to a loss of “so many fundamental communication skills,” like knowledge and reasoning.

Green, who teaches policy communications and op-ed writing, said AI could also harm his students’ career prospects if they’re pursuing fields like communications and rely on AI to build those skills.

“My job, in part, is to help prepare them to go get jobs,” Green said. He added that he heard from some of his students that their prospective employers required them to share their screens while they take writing tests to ensure they’re not using AI.

“And so what would I be doing for them if I said to them, ‘No, no, you can just use these indiscriminately, and how you write and how you think and how you synthesize ideas doesn’t really matter?'” Green said.

Over the past decade, tech leaders and educators have been pushing initiatives to incorporate AI in classrooms. While some surveys have shown that AI use has helped teachers save time and provide higher-quality lessons, there’s minimal evidence that using AI to learn is effective. Additionally, AI is already starting to impact young people’s job prospects, with some tech leaders saying that it will decrease white-collar job openings.

Green said he’s not against AI — he has used it himself for his work, and he allows it in his classroom, to an extent. But it’s not a replacement for teachers, and heavily relying on it is a waste of a school’s resources, he said.

“You’re here now, and you’re in a class, and you have someone who is a total nerd for this and has devoted their life to every aspect of this. And you have me fully at every moment of every hour for the next eight weeks and beyond,” Green said. “Why on earth would you take all of that sacrifice and all of that dedication and give that over to a machine?”

‘The bible salesman version of AI’

There’s no shortage of efforts to incorporate AI in education. Take Khan Academy — the online tutoring organization established in 2008, which gradually started using AI to create lessons that personalized students’ experiences.

Khan Academy continues to enroll students, but other efforts have failed. AltSchool, which was backed by tech billionaires including Mark Zuckerberg, opened in 2013 and began to shutter four years later, as parents saw that their kids weren’t excelling with technology-based education.

Green said the problem is that many of these initiatives are focused on making learning as easy as possible, and that shouldn’t be the goal.

“These people have reframed the idea of learning as something where any struggle to wrestle with a concept or think really hard about something is a sign that the education is bad, and that what we need is for things to be as seamless and easy as possible,” Green said.

That’s not to say there isn’t a place for AI. Green said that he used a large language model, or LLM, to comb through materials for his research and found it helpful. In his classroom, he said that after five weeks of “intensive non-technological use,” he starts incorporating AI to help his students prepare for the political communications landscape, which includes dealing with chatbots and identifying falsely generated images.

Some colleges are putting AI at the forefront. In February, California State University announced its initiative to become the “nation’s first and largest AI-empowered university system” through public-private partnerships to train students and teachers on AI technology, including offering all students and faculty access to a version of ChatGPT.

On the federal level, the Trump administration is establishing a task force to promote AI in K-12 classrooms and look into redirecting funds toward AI efforts.

Some critics have warned that the US should tread carefully. South Korea recently rolled back its initiative to place AI textbooks in classrooms due to backlash from parents and teachers over a lack of preparation on how to best use the tech.

Green said that if colleges want to adopt AI in classrooms, they should mandate intensive training for faculty to understand “the very serious risks” the technology poses to learning. He also suggested restricting how teachers can use the technology, including limits on using it to grade work and to communicate with students.

“We need actual committed educators who are not the bible salesman version of AI at the front of the room, opening up space for ideas about its judicious use in the classroom,” Green said. “We could really end up with some incredible amounts of junk here, and at the expense of our young people actually learning skills that you do need in the real world.”

Read the original article on Business Insider

Will Labor’s major expansion to first-home guarantee scheme ‘drive prices higher’?

Treasury estimates new policy will only add 0.5% to home prices over six years, but some experts disagree

Labor’s major new expansion to the home guarantee scheme will help first-time buyers who would have bought anyway, experts say, and is likely to push prices higher.

The scheme, which has been around since 2020, helps eligible first-time buyers get into the market with a deposit of as little as 5%, sparing them tens of thousands of dollars in lender’s mortgage insurance and taking years off the time required to save for a deposit.



Former Wiggles CEO sues children’s entertainment group over alleged bonus and fair work breaches

Luke O’Neill claims blue Wiggle Anthony Field ‘undermined him’ and was responsible for ‘unnecessary costs’ and budget overruns

The former Wiggles CEO has claimed in federal court documents that the blue Wiggle “undermined him” in front of other staff and excluded him from a meeting with Kmart to discuss selling their toys.

According to documents provided by the court to Guardian Australia, the former CEO Luke O’Neill is suing Wiggles Holdings Pty Ltd, blue Wiggle Anthony Field and general counsel Matthew Salgo for not being paid a bonus relating to his work and for multiple alleged breaches of the Fair Work Act.



Four young suspects charged in Bronx shooting that killed one, injured four including innocent teen bystander

Four suspects have been charged in the wild shooting battle that blasted a 17-year-old girl in the face with a stray bullet, killed a 32-year-old man, and injured three others during a basketball tournament at a Bronx park, according to authorities.

Law graduate Leticia Paul dies at 22 after routine CT scan

A young, thriving Brazilian lawyer tragically died after a severe allergic reaction during a routine CT scan.

Indonesia hosts annual US-led combat drills with Indo-Pacific allies
