Categories
Selected Articles

Budapest meeting update

There will not be an imminent meeting between U.S. President Donald Trump and Russian leader Vladimir Putin in Budapest. A senior Russian official stated that no agreement had ever been made for the meeting, and preparatory talks between U.S. Secretary of State Marco Rubio and Russian Foreign Minister Sergey Lavrov have been postponed indefinitely.

Timeline of the proposed Budapest summit

October 17, 2025: Donald Trump and Vladimir Putin discuss a ceasefire in Ukraine during a phone call, with Trump announcing afterward that a summit would take place in Budapest.

October 20, 2025: CNN reports that the preparatory meeting between Rubio and Lavrov was postponed.


Taylor Swift Makes Rare Social Media Move

The pop star doesn’t usually comment on other people’s Instagram and TikTok posts, but she recently made an exception.

JPMorgan Chase unveils new 60-story headquarters, reshaping New York City’s skyline


Warner Bros. Discovery says it’s open to a sale after ‘unsolicited offers,’ stock surges 8%

CEO David Zaslav said Warner Bros. Discovery, which owns HBO, CNN and the Warner Bros. studio, has been preparing to split into two companies next year.

Dodgers’ Shohei Ohtani Has MVP Award Changed Before World Series

The Los Angeles Dodgers’ two-way superstar saw his latest Most Valuable Player Award trophy altered.

Fox News host Laura Ingraham joins business venture with Donald Trump Jr

Host often criticized Hunter Biden’s business on her show and said he capitalized on his father’s presidency for deals

Fox News host Laura Ingraham is joining a business venture that includes Donald Trump Jr – after she repeatedly criticized the business dealings of another president’s son: Hunter Biden.

Ingraham, Trump Jr and Chamath Palihapitiya, a business associate of Donald Trump’s eldest son and namesake, were all listed as board members of a new venture seeking to go public on the stock market, according to Bloomberg. It purports to “fund the next chapter of American Exceptionalism and help Make America Grow Again”.



Taxis, mileage, and car hire: RTÉ racks up €800k travel tab in six months

Just under €268,000 was spent in the first half of this year on taxi fares for staff and guests, both in Ireland and on trips overseas.

Zelensky and European leaders accuse Putin of stalling for time on peace talks

Russia occupies about one fifth of Ukraine but carving up their country in return for peace is unacceptable to Kyiv officials.

Meta Poaches Key Google AI Researcher

Welcome back to In the Loop, TIME’s new twice-weekly newsletter about AI. If you’re reading this in your browser, why not subscribe to have the next one delivered straight to your inbox?

What to Know: Meta poaches key Google researcher

Meta poached a key Google DeepMind researcher last month, in a sign that competition for talent in a relatively new area of AI development is heating up. Tim Brooks, who co-led the Sora team at OpenAI before moving to Google DeepMind in 2024, has been working at Meta’s Superintelligence Labs since September, according to his LinkedIn profile. The move indicates that Meta may be doubling down on an effort to build “world models,” a type of AI that both OpenAI and Google believe will be a key stepping-stone to artificial general intelligence.

Brooks did not respond to a request for comment, and his compensation could not be learned. (Meta has reportedly lured some top researchers from rival companies with pay packets worth over $1 billion.) “I am an AI researcher at Meta Superintelligence Labs where I make multimodal generative models,” Brooks’ personal website reads. Meta did not respond to a request for comment.

The background — Upon its release earlier this month, OpenAI’s Sora 2 model took the Internet by storm, thanks to its ability to generate realistic videos from just a text prompt. But Sora is about more than just capturing eyeballs with viral content. “On the surface, Sora, for example, does not look like it is AGI-relevant,” OpenAI CEO Sam Altman said on a podcast earlier this month. “But I would bet that if we can build really great world models, that will be much more important to AGI than people think.”

Altman was speaking to a growing belief inside the AI industry at large: that if you can simulate the world with enough accuracy, you could drop AI agents into those simulations. There, they could learn more skills than they currently can from just text, photos, and videos—because they could interact with a simulated world. That form of training could be highly efficient, in part because simulated time can be accelerated, and because many simulations can be run in parallel.

World models — When Google hired Brooks from OpenAI this time last year, DeepMind CEO Demis Hassabis personally welcomed him with a post on X, saying that he was “so excited to be working together to make the long-standing dream of a world simulator a reality.” The company has become increasingly bullish on the idea that world models are key to developing AGI. The company recently announced Genie 3, a 3D world simulator that allows the user to navigate around an environment generated by a prompt. “World models are a key stepping stone on the path to AGI, since they make it possible to train AI agents in an unlimited curriculum of rich simulation environments,” the company said in the model’s announcement. That announcement included Brooks’ name in its list of acknowledgements.

Meta’s hire — Neither Meta nor Brooks responded to questions about his new role at Meta Superintelligence Labs. But Brooks’ hiring is especially notable because his expertise appears to conflict with Meta’s previous approach to world models. Like Google and OpenAI, the company believes world models will be a vital step toward AGI. But until now, it has built them in a fundamentally different way to the ones Brooks built at OpenAI and Google. Rather than generating realistic videos pixel-by-pixel like Sora and Genie, Meta’s models predict outcomes in abstract space, without rendering video.

The main cheerleader for this approach inside Meta has been chief AI scientist Yann LeCun, who has been highly critical of Sora. “Sora is trained to generate pixels,” LeCun wrote on X in 2024. “There is nothing wrong with that if your purpose is to actually generate videos. But if your purpose is to understand how the world works, it’s a losing proposition.” Brooks’ arrival suggests that Meta may now be exploring that very approach. That might represent a loss for LeCun, whose influence has waned since Meta announced its new Superintelligence Labs division, which has eclipsed LeCun’s Fundamental AI Research team as the center of gravity for AI within Meta.

If you have a minute, please take our quick survey to help us better understand who you are and which AI topics interest you most.

Who to Know: Kevin Weil, VP of Science at OpenAI

It was an embarrassing weekend for Kevin Weil, OpenAI’s VP of science. He tweeted on Friday that GPT-5 had found solutions to 10 “previously unsolved” mathematical problems that “have all been open for decades.” Coming hot on the heels of OpenAI and DeepMind models beating human experts at the International Math Olympiad, the post seemed to show OpenAI had at long last achieved a tantalizing goal: pushing the frontier of mathematics beyond what any human could achieve.

There was just one problem: Weil had got it wrong. The mathematical problems had already been solved by humans—and GPT-5 had simply uncovered the existing solutions to those problems. Demis Hassabis, leader of OpenAI’s rival DeepMind, weighed in with an uncharacteristically brutal post on X: “This is embarrassing.”

In fairness to OpenAI, what GPT-5 did is still pretty cool. It unearthed a mathematical proof from a forgotten 1960s paper, which had been written in German, and identified it as the correct solution to a problem that had been (erroneously) described online as “open.” That’s not the same as making a novel breakthrough, to be sure, but it’s still a potential superpower for mathematicians and scientists working on hard problems. As OpenAI researcher Sebastien Bubeck wrote on X: “It’s not about AIs discovering new results on their own, but rather how tools like GPT-5 can help researchers navigate, connect, and understand our existing body of knowledge in ways that were never possible before (or at least much much more time consuming).”

AI in Action

The U.K. government said last week it used an AI tool to analyse and sort more than 50,000 responses to a consultation in just two hours, surpassing human accuracy at the same task. The government said it hoped the rollout of similar tools would eventually save officials 75,000 days of work on rote tasks per year—the equivalent of £20 million ($27 million) in staffing costs. Rather than replacing workers, the AI is intended to free up government officials to focus on more important matters, digital government minister Ian Murray said in a statement. “This shows the huge potential for technology and AI to deliver better and more efficient public services for the public and provide better value for the taxpayer.”

As always, if you have an interesting story of AI in Action, we’d love to hear it. Email us at: intheloop@time.com

What We’re Reading

Technological Optimism and Appropriate Fear, by Jack Clark in Import AI

Anthropic’s co-founder and policy chief Jack Clark published a sobering essay last week, describing the terror that he sometimes feels about our AI trajectory, even as he remains a technological optimist. David Sacks, the White House AI czar, seized on this essay as evidence that Anthropic is supposedly running a “sophisticated regulatory capture strategy based on fear-mongering.” Another way of looking at it is that Clark is motivated not by greed, but by genuine fear—which from where I’m standing looks pretty well-founded. The whole piece is worth a read, but here’s an excerpt:

I am also deeply afraid. It would be extraordinarily arrogant to think working with a technology like this would be easy or simple.

My own experience is that as these AI systems get smarter and smarter, they develop more and more complicated goals. When these goals aren’t absolutely aligned with both our preferences and the right context, the AI systems will behave strangely.

[…] These AI systems are already speeding up the developers at the AI labs via tools like Claude Code or Codex. They are also beginning to contribute non-trivial chunks of code to the tools and training systems for their future systems.

To be clear, we are not yet at “self-improving AI”, but we are at the stage of “AI that improves bits of the next AI, with increasing autonomy and agency”. And a couple of years ago we were at “AI that marginally speeds up coders”, and a couple of years before that we were at “AI is useless for AI development”. Where will we be one or two years from now?

And let me remind us all that the system which is now beginning to design its successor is also increasingly self-aware and therefore will surely eventually be prone to thinking, independently of us, about how it might want to be designed.

Of course, it does not do this today. But can I rule out the possibility it will want to do this in the future? No.