
Map Shows F-35 Bases and Jet Fighter Carriers Near China

The U.S. military said the F-35 fleet supports the combined combat power of the U.S. and its allies and partners in the Indo-Pacific region.

Netanyahu says Israel could withdraw from Lebanon if Hezbollah is disarmed


Your next real estate agent may be a teenager

A young woman livestreams on her phone in front of a for-sale sign marked "SOLD," surrounded by social media likes.

Toni Marmo says she studied for her real estate license only to handle one quick task: helping her boyfriend buy a house. But she quickly realized she loved the subject. Selling homes seemed like a better fit for her unfiltered, energetic demeanor than the three years she spent studying international business before dropping out, or the time she spent as a hair stylist, where she preferred selling products over styling. After leaving both behind, she moved into real estate last year. “A true entrepreneur doesn’t need to go to college for entrepreneurship,” Marmo says. “You’ve just got it. You just have it in your body.”

The typical American Realtor is a 55-year-old white woman who went to college and owns a home, according to the National Association of Realtors’ (NAR) 2024 member profile. That, Marmo is not. She’s a 24-year-old with 40,000 TikTok followers who pokes fun at stereotypes of her generation and leverages her following to get leads in her home state of New Jersey. She loves it, she tells me, because, like her fellow Gen Zers, she is so over the 9-to-5. “You’re working for someone else to make all the money, but you’re putting the most of the work in,” Marmo says of the corporate path. “If you could put that much work into yourself, why not do it?”

Some Zoomers like Marmo are ditching four-year degrees in favor of work that unchains them from a desk, puts money in their bank accounts sooner, and — they hope — will survive the artificial intelligence boom that is already starting to change once-hot professions like software engineering, consulting, and marketing. Some are turning to blue collar work like HVAC servicing and wind turbine installation. Others are trying to start their own ventures via influencing and side hustles. And some see the lure in the licensed white-collar job, including working in real estate or insurance.

That shift in licensed jobs is slow, but growing. The share of Realtors younger than 30 grew from 1% to 4% in 2024, according to NAR’s member profile, and sits at 3% in 2025. Among insurance agents, the median age of an insurance principal who owns 20% or more of their agency is 55, with 22% of principals over the age of 66, according to a 2024 study of agencies conducted by the Big “I,” an association for independent insurance agents. Many are likely eyeing retirement, which could open up significant demand for young people to take up the trade.

Several Gen Zers I spoke to for this story told me they find appeal in working in real estate because there’s no ceiling on what they can earn. Rather than invest tens or hundreds of thousands of dollars in a four-year degree, they can spend a few weeks or months training to receive licenses and start working in fields where their hustle correlates to their payday.

That’s refreshing for a generation that has watched the white-collar jobs they grew up aspiring to land dry up. Young workers have found themselves facing mass layoffs despite being high performers, now that the era of workplace loyalty is dying.

“Autonomy is such a big piece of it, and something we hear consistently from students that there’s the idea of: Be your own boss, make your own way. It’s you. You’ll sink or swim based on your effort,” says Blake Garrett, CEO of Aceable, an online platform that focuses on licensing in industries like real estate, mortgage, and insurance. “There is a direct correlation to your effort and your earning potential, and that doesn’t happen with many white collar jobs.”


Half of the people who take courses with Aceable are under 30, says Garrett. Those industries fit with Gen Z’s priorities: flexibility, autonomy, and high salaries — of all generations, Gen Zers pin financial security to the highest average salary, about $200,000. Gen Z is more exposed to high spending habits thanks to influencers who show off their hauls, and many recall growing up in households affected by the 2008 financial crisis and its lingering economic uncertainty. They crave higher salaries to feel stable. (The average insurance agent makes about $60,000 a year, per the US Bureau of Labor Statistics, and the median Realtor makes nearly $58,000, per NAR. The median pay for a Gen Z college graduate is $60,000, and $40,000 for a high school graduate, according to the Federal Reserve Bank of New York.)

Americans are losing trust in the ROI of a college degree. A 2025 Gallup poll found that only 42% of respondents had a “great deal” or “quite a lot” of confidence in higher education, compared to 57% in 2015. As once-reliable roles in Big Tech have started to dwindle, young people who studied computer engineering and computer science had higher unemployment rates than those who studied art history, journalism, and performing arts, according to 2023 data from the Federal Reserve Bank of New York. Student enrollment at colleges dropped 8.5% by 2024 from its peak in 2010, according to the Education Data Initiative. The federal government is freezing funding for research, or even stripping it from some universities, as it attacks colleges in an all-out culture war. To top it all off, the average Gen Zer who invested in a four-year degree is working to pay off around $23,000 in student debt, according to Experian.

Whitney Harvey, a 30-year-old Realtor in Tennessee, tells me she dreamed about being a nuclear engineer, but got to college and realized that she and chemical equations just didn’t vibe. She dropped out after about a year, and at 18 got her license to become a real estate agent. She took her first job with a brokerage firm where they called her “the baby,” she says. Harvey estimates there wasn’t another person under 50 working there, and tells me she first learned about property taxes when studying for her license. But she soon started posting listings to Facebook, something none of her coworkers were doing, and made her first sale. Not long after, the older agents were coming to her for advice on how to use social media for their listings. “I really was able to capture like a whole new set of first-time homebuyers,” Harvey says.

Those early career years did set her apart from her friends, who were still in college. “It was mind boggling,” she tells me. “I was making at that point $80,000, they had accrued $80,000 in debt.”

Because it’s still something of a rarity to see a baby-faced real estate agent or a teenager selling life insurance, the young people in licensure jobs I spoke to say that succeeding means not just learning the trade but competing against ageist stereotypes. The median age of a first-time home buyer has risen to an all-time high of 38, according to NAR. That’s up from an average age of 33 a decade ago, according to a Zillow analysis. The idea of having a newly minted, 18-year-old real estate agent guide you through the biggest financial decision of your life is jarring. Katie Kenny, a 24-year-old Realtor in Chicago’s suburbs, says people who meet her are surprised, as they “expect the real estate agent to be like double my age,” she tells me. “They’re like, ‘oh, you’re young.’ And then when I open my mouth and start talking, they’re actually surprised because I do know a lot more, and I sound a lot more mature than what a normal 24-year-old would sound like.”

Social media is also giving these industries more cachet among young people. Insurance brokers post videos to TikTok detailing how much money they made each day of the week. The agents, typically young women, film themselves typing on their computers as their earnings flash on the screen: $0 on Wednesday, but $3,000 on Thursday. Some brag about making thousands of dollars a month from home at just 19, and others use sound memes and TikTok trends to mock the differences between themselves and the older agents in their offices.

The same goes for real estate agents, who film themselves donning matching sets and making their way to pristine, open-house-ready homes. Marmo posts videos of herself walking through kitchens that she calls “so aesthetic” or noting that a prospective buyer could film their “get ready with me” videos in the large, well-lit bathroom. Some older real estate agents comment on her videos from time to time and criticize her candid nature as unprofessional, but Marmo says her clients don’t mind. She says she will tell them that a kitchen “eats,” and that they might be confused, but typically aren’t put off. Marmo says it’s part of what makes her seem authentic, and as long as she can back up what she says with knowledge, her clients trust her. “Making time for clients and going an extra mile, it really does give you a lot of credibility,” Marmo says.

A corporate thirst for AI is upending slices of the white-collar job market, and Gen Z is nervous. More than 60% of the college class of 2025 who were familiar with AI tools said they were at least somewhat concerned that AI would affect their career prospects, up from 44% in 2023, according to a survey from the early-career site Handshake.

For now, generative AI tools are less of a threat to workers in fields like real estate and insurance than they are in tech. Insurance brokerage is a “very people focused” field, says Jamie Behymer, the director of diversity, inclusion, and young agents at the Big “I.” “Relationships are what drive the industry,” she says. While agents might be using AI to help with paperwork or tedious tasks, “the industry is really focusing on relationships over AI.”

Kenny says she uses AI to help stage houses and give on-the-fence clients a feel for how they could use spaces or map routes for her showings. She sometimes drafts tricky emails or social media posts with ChatGPT — but it’s not about to replace her. Broker bots aren’t going to stage and host an open house with fresh-baked cookies or be your preferred first phone call when disaster strikes. “It can’t open the door, it can’t negotiate with a seller’s agent,” Kenny tells me. “I like the hustle. I like that you’re not capped with how hard you can work.”


Amanda Hoover is a senior correspondent at Business Insider covering the tech industry. She writes about the biggest tech companies and trends.

Read the original article on Business Insider

Israeli airstrike on southern Gaza hospital kills 8, health ministry says


Sunkist’s political affiliations under scrutiny amid concerns over support for Israel

Concerns regarding corporate affiliations and social responsibility are at the forefront of consumer consciousness as global debates on human rights and political engagement intensify. Questions have arisen about whether Sunkist, a major player in the beverage and fruit industry, is financially or politically invested in Israel amidst the broader discourse surrounding conflicts in the Middle East and corporate ethics, reports 24brussels.

This article examines Sunkist’s corporate history, partnerships, political contributions, and public statements to assess its stance on Israel. It also explores the broader implications for activists and consumers, particularly how the boycott, divestment, and sanctions (BDS) movement affects businesses across similar sectors.

Background on Sunkist

Sunkist Growers, Inc. is the premier citrus marketing cooperative in America, founded in 1893. Originally established as the Southern California Fruit Exchange, it was created by struggling citrus farmers seeking collective strength against powerful industry middlemen. This cooperative model enabled growers to pool resources, enhancing profitability and influence.

Initially focused on orange farmers in Southern California, the cooperative soon expanded to lemon producers and growers from neighboring regions like Arizona. By the late 1920s, Sunkist dominated, selling nearly 75 percent of California’s citrus produce. It launched significant marketing initiatives early on, including one of the first major ad campaigns for perishable goods in 1907, and officially registered the Sunkist trademark in 1926.

The cooperative comprises mostly small to medium-sized family farms, many cultivating fewer than 40 acres. With processing facilities in the western United States, Sunkist produces juices, oils, pulp, and peels, while maintaining a strong position in the fresh fruit market. Recent years have seen Sunkist diversify its reach through licensing agreements for products like orange soda and juices, aiming for greater global brand recognition.

Sunkist has sought to expand its international sales, maintaining a significant presence in American markets, particularly California and Arizona. The cooperative’s headquarters has relocated multiple times, with the latest move to Valencia, California, in 2014.

Why is support of Israel important?

Israel, recognized as the Start-Up Nation, boasts a strong technological economy, particularly in software development, AI, and cybersecurity. Numerous international corporations, including major IT firms, have established R&D centers in Israel to tap into its innovative ecosystem. Operating in Israel allows businesses access to skilled labor, cutting-edge technologies, and market gateways across Europe, Asia, and Africa, making it a strategic hub for multinational corporations.

Countries like the United States and other Western nations typically express strong support for Israel through political and diplomatic relations. Consequently, companies often align their operations with the geopolitical priorities of their home governments, leading to perceived endorsements of Israel’s policies. Corporations might also engage in pro-Israel initiatives as part of corporate social responsibility (CSR) efforts, strengthening brand perceptions among certain consumer demographics.

The Israel-Palestine conflict, often described as a significant geopolitical challenge, is rife with controversy and human rights concerns. Businesses’ affiliations with Israel are scrutinized by activists who interpret these relationships as tacit approval of political and military actions. The rise of social media amplifies these sentiments, with boycott movements promoting corporate accountability for alleged human rights violations while positive portrayals can enhance a business’s image as a promoter of innovation and democracy.

Investing or operating in Israel poses reputational risks, with potential backlash from anti-Zionist groups and consumers. Conversely, businesses that withdraw support for Israel may alienate other partners or customer segments, complicating their international strategies.

Political donations and lobbying of Sunkist

Sunkist Growers, Inc. has established a political action committee registered with the Federal Election Commission. In the 2023-2024 election cycle, the Sunkist PAC contributed just over $41,700, with approximately 53 percent going to Republican candidates and 40 percent to Democrats. Notable recipients include Michelle Steel (R-CA-45), David Valadao (R-CA-22), and Jimmy Panetta (D-CA-19). There is currently no evidence linking any of these contributions to pro-Israel political activities.

Federal records indicate that Sunkist Growers did not engage in lobbying efforts during early 2024 and similar periods in 2025. The focus of its political contributions appears to prioritize American agricultural interests rather than broader geopolitical concerns.

Broader implications for businesses operating internationally

Businesses engaging with Israel, particularly in connection with occupied territories, face legal risks. Various international bodies, including the United Nations and the International Court of Justice, have labeled Israel’s activities in certain areas as violations of international law.

Companies may face sanctions or penalties for supporting occupations and must ensure compliance with increasing trade restrictions, including tariffs or bans on goods produced in Israeli settlements. This necessitates thorough due diligence regarding supply chains and regional affiliations.

Final words

The importance of public perception is underscored in an era of heightened human rights awareness and social media influence. Companies implicated in supporting or profiting from Israeli activities face significant backlash, social pressure, and damaged reputations, which can adversely affect sales and investor confidence. To maintain transparency and uphold ethical standards, corporations must carefully navigate the political dimensions of their global operations.


The AI doomers are having their moment

A creative image of AI surrounded by yellow foldable warning signs.
The world’s leading tech companies are in a race to build AGI. There is growing evidence that large language models, which underpin the most popular chatbots, might never get there.

  • Top AI companies are in a race to develop artificial general intelligence.
  • The large language models powering popular chatbots, however, are showing their limits.
  • Some researchers say world models or other strategies might be the clearer path to AGI.

The race to build artificial general intelligence is colliding with a harsh reality: Large language models might be maxed out.

For years, the world’s top AI companies have poured billions of dollars, and their best technical talent, into developing LLMs, which underpin the most widely used chatbots.

The ultimate goal of many of the companies behind these AI models, however, is to develop AGI, a still theoretical version of AI that reasons like humans. And there’s growing concern that LLMs may be nearing their plateau, far from a technology capable of evolving into AGI.

AI thinkers who have long held this belief were once written off as cynical. But since the release of OpenAI’s GPT-5, which, despite improvements, didn’t live up to OpenAI’s own hype, the doomers are lining up to say, “I told you so.”

Perhaps the most prominent is Gary Marcus, an AI researcher and best-selling author. Since GPT-5’s release, he’s taken his criticism to new heights.

“Nobody with intellectual integrity should still believe that pure scaling will get us to AGI,” he wrote in a blog post earlier this month, referring to the costly strategy of amassing data and data centers to reach general intelligence. “Even some of the tech bros are waking up to the reality that ‘AGI in 2027’ was marketing, not reality.”

Here’s why some think LLMs are not all they are cracked up to be, and the alternatives some AI researchers believe are the better path to AGI.

The AI bubble

OpenAI is now the most valuable startup on the planet. It has raised about $60 billion, and a discussed secondary share sale could push the company’s valuation over $500 billion. That would make OpenAI the most valuable private company in the world.

There are good reasons for the excitement. According to the company, ChatGPT has 700 million weekly users, and OpenAI’s products have largely set the pace of the AI race.

There are a couple of problems, however. First, and perhaps foremost for its investors, OpenAI is not profitable and shows few signs of becoming profitable soon. Second, the company’s founding mission is to develop AGI in a way that benefits all of humanity, yet there’s a growing feeling that this world-changing technology, which props up much of the hype around AI, is much further away than many engineers and investors originally thought.

Other companies, too, have been riding this hype wave. Google, Meta, xAI, and Anthropic are all attracting billions of dollars and pouring them into scaling their LLMs, which means snapping up talent, buying data, and building vast arrays of data centers.

The mismatch between spending and revenue, and hype and reality, is provoking alarm that the AI industry is a bubble on the verge of bursting. OpenAI CEO Sam Altman himself thinks so.

“Are we in a phase where investors as a whole are overexcited about AI? My opinion is yes. Is AI the most important thing to happen in a very long time? My opinion is also yes,” he told journalists earlier this month.

While other tech leaders, like former Google CEO Eric Schmidt, are less certain, a $1 trillion stock market tech sell-off last week showed the concerns are widespread. The market recovered on Friday after Federal Reserve Chair Jerome Powell said he is considering a rate cut in September.

Now, everyone is eagerly anticipating Wednesday’s earnings report from Nvidia, which makes the chips powering LLMs and is the pick-and-shovel company of the AI rush. If the company’s earnings show signs of slowing and its outlook is more cautious, there will be a whole new round of worry, and the AI doomers will again remind everyone of what they’ve been saying for years: LLMs are not the way.

The problem with LLMs

In June, Apple researchers released a paper called “The Illusion of Thinking.” What they found sounded positively human: Advanced reasoning models give up when faced with more complex tasks.

Their conclusion, however, was that these models rely on pattern recognition rather than logical thinking, and the researchers cautioned against the belief that they could result in AGI. “Claims that scaling current architectures will naturally yield general intelligence appear premature,” the researchers wrote.

The paper was widely mocked online, largely because Apple, despite its size and vast resources, is perceived as far behind in the AI race. For skeptics, however, it was validating.

Andrew Gelman, a professor of statistics and political science at Columbia University, has argued that the level of textual comprehension shown by LLMs falls short of expectations. What LLMs do compared to what humans do is the difference between “jogging and running,” Gelman wrote in a 2023 blog post.

“I can jog and jog and jog, thinking about all sorts of things and not feeling like I’m expending much effort, my legs pretty much move up and down of their own accord … but then if I need to run, that takes concentration,” he wrote.

Geoffrey Hinton, the Nobel Prize winner known to some as the Godfather of AI, disagrees. “By training something to be really good at predicting the next word, you’re actually forcing it to understand,” he told The New Yorker in 2023.

Another potential problem with LLMs is their tendency to misinterpret the meanings of words, hallucinate, and spread misinformation. This reality is why, for now, most companies adopting AI require a human in the mix.

In a report published earlier this year, a group of academic researchers in Germany specializing in computational linguistics surveyed “in-the-wild” hallucination rates for 11 LLMs across 30 languages. They found that the average hallucination rate across all languages ranged from 7% to 12%.

Leading AI companies like OpenAI have, in recent years, operated under the belief that these problems can be mitigated by feeding LLMs more information. The so-called scaling laws, which OpenAI researchers outlined in a 2020 paper, state that “model performance depends most strongly on scale.”
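
For readers curious what those laws actually say, the 2020 paper fits simple power laws to its experiments. The sketch below illustrates the general form of that result rather than restating it precisely; the reference constants and exponents are empirical quantities fit in the paper:

L(N) ≈ (N_c / N)^(α_N)    (test loss as a function of parameter count N)
L(D) ≈ (D_c / D)^(α_D)    (test loss as a function of dataset size D)

In plain terms, each additional order of magnitude of parameters or data buys a predictable, and steadily shrinking, drop in loss, which is what made “just keep scaling” such an attractive strategy.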

However, recently, researchers have begun to question whether LLMs have hit a wall and are facing diminishing returns as they scale. Yann LeCun, Meta’s chief AI scientist who heads a lab under the company’s superintelligence unit, is largely focused on next-generation AI approaches instead of LLMs.

“Most interesting problems scale extremely badly,” he said at the National University of Singapore in April. “You cannot just assume that more data and more compute means smarter AI.” Apple’s analysis also found that current LLM-based reasoning models are inconsistent due to “fundamental limitations in how models maintain algorithmic consistency across problem scales.”

Alexandr Wang, the head of Meta’s superintelligence division, appears equally uncertain. He said scaling is “the biggest question in the industry” at the Cerebral Valley conference last year.

Even if scaling worked, access to high-quality data is limited.

The hunt for unique data has been so fierce that leading AI companies are pushing boundaries — sometimes at the risk of copyright violations. Meta once considered acquiring publisher Simon & Schuster as a solution. Anthropic collected and scanned millions of pirated books while training Claude, which a district judge ruled in June did not constitute fair use.

Ultimately, some leading AI researchers say language itself is the limiting factor, and that’s why LLMs are not the path to AGI.

“Language doesn’t exist in nature,” Fei-Fei Li, the Stanford professor famous for creating ImageNet, said on an episode of Andreessen Horowitz’s podcast in June. “Humans,” she said, “not only do we survive, live, and work, but we build civilization beyond language.”

LeCun’s gripe is similar.

“We need AI systems that can learn new tasks really quickly. They need to understand the physical world, not just text and language but the real world, have some level of common sense, and abilities to reason and plan, have persistent memory — all the stuff that we expect from intelligent entities,” he said during his talk in April.

New ways to AGI

Researchers like Li and LeCun are pursuing an alternative to LLMs, called world models, that they believe is a better path to AGI.

Unlike large language models, which determine outputs based on statistical relationships between words and phrases, world models make predictions by simulating and learning from the world around them. That feels more akin to how humans learn, whereas LLMs rely on troves of data vaster than any human could ever absorb.

Computer scientist and MIT professor Jay Wright Forrester outlined the value of this kind of model all the way back in a 1971 paper.

“Each of us uses models constantly. Every person in private life and in business instinctively uses models for decision-making. The mental images in one’s head about one’s surroundings are models,” he wrote. “All decisions are taken on the basis of models. All laws are passed on the basis of models. All executive actions are taken on the basis of models.”

Recent research has found that world models not only capture reality as it is, but can also simulate new environments and scenarios.

In a 2018 paper, researchers David Ha and Jürgen Schmidhuber built a simple world model inspired by humans’ cognitive systems. It was used not only to model hypothetical scenarios but also to train agents.

“Training agents in the real world is even more expensive,” the authors wrote. “So world models that are trained incrementally to simulate reality may prove to be useful for transferring policies back to the real world.”

In August, Google’s DeepMind released Genie 3, a world model that it says “pushes the boundaries of what world models can accomplish.” It can model physical properties of the real world, like volcanic terrain or a dimly lit ocean. This could allow AI to make predictions based on what it learns from these real-world simulations.

There are other ideas in the works, too. Neuroscience models try to mimic the processes of the brain. Multi-agent models operate on the theory that multiple AIs interacting with each other is a better analogy to how humans function in real life. Researchers pursuing multi-agent models believe AGI is more likely to emerge through this kind of social exchange.

Then, there is embodied AI, which adapts world models into physical forms, allowing robots to interpret and train on the world around them. “Robots take in all kinds of forms and shapes,” Li said on the No Priors podcast in June.

The potential of these alternatives, and in particular world models, gives hope to even Marcus, the premier LLM doomer. He refers to world models as cognitive models and urges AI companies to pivot from LLMs and focus on these alternatives.

“In some ways, LLMs far exceed humans, but in other ways, they are still no match for an ant,” Marcus said in a June blog post. “Without robust cognitive models of the world, they should never be fully trusted.”

Read the original article on Business Insider

National Guard units begin carrying weapons in Washington

The move comes after Trump deployed hundreds of troops to the US capital in what he portrayed as a crime crackdown.

Australia’s youngest senator describes depression, ‘whack’ responses and a pet-related white lie in first speech

The 21-year-old spoke of being bullied and battling mental health issues, and said she will focus on issues including housing, domestic violence and the climate crisis

Australia’s youngest senator battled depression and bullying before her election, and has dealt with misogyny and Pauline Hanson since.

Labor’s Charlotte Walker, a South Australian who turned 21 on election night, delivered her first speech on Monday night.



Trump Calls For ‘Fake News’ Networks To Have Licenses Revoked by FCC

Trump also asked why networks weren’t paying “millions of dollars” in license fees.

A Harvard professor on why AI ‘evangelism’ is harming students’ career prospects

Student on Harvard campus
Alex Green, a Harvard professor, explains his concerns with AI use in classrooms.

  • Debate is rampant over the best ways to incorporate AI in classrooms.
  • Alex Green, a Harvard professor, said he’s concerned that AI use is damaging students’ communication skills.
  • He said that while there’s a place for AI, intensive teacher training to understand its risks is vital.

You’re a student who spent an entire semester researching and writing a 20-page paper. You’ve poured time and effort into the assignment, and you’re looking forward to hearing your professor’s feedback.

Instead, you get a mediocre grade and three short paragraphs of vague comments, and you wonder: Did ChatGPT grade my essay?

Turns out, it did.

That’s a scenario a student recounted to Alex Green, an author and professor at Harvard’s Kennedy School. Green told Business Insider that the “AI evangelism” push — efforts to use AI across classrooms to make both teaching and learning easier — is doing more harm than good, undermining critical relationships between teachers and students.

Whether teachers or students are using AI, Green said it’s leading to a loss of “so many fundamental communication skills,” like knowledge and reasoning.

Green, who teaches policy communications and op-ed writing, said AI could also harm his students’ career prospects if they’re pursuing fields like communications and rely on AI to build those skills.

“My job, in part, is to help prepare them to go get jobs,” Green said. He added that he heard from some of his students that their prospective employers required them to share their screens while they take writing tests to ensure they’re not using AI.

“And so what would I be doing for them if I said to them, ‘No, no, you can just use these indiscriminately, and how you write and how you think and how you synthesize ideas doesn’t really matter?'” Green said.

Over the past decade, tech leaders and educators have been pushing initiatives to incorporate AI in classrooms. While some surveys have shown that AI use has helped teachers save time and provide higher-quality lessons, there’s minimal evidence that using AI to learn is effective. Additionally, AI is already starting to impact young people’s job prospects, with some tech leaders saying that it will decrease white-collar job openings.

Green said he’s not against AI — he has used it himself for his work, and he allows it in his classroom, to an extent. But it’s not a replacement for teachers, and heavily relying on it is a waste of a school’s resources, he said.

“You’re here now, and you’re in a class, and you have someone who is a total nerd for this and has devoted their life to every aspect of this. And you have me fully at every moment of every hour for the next eight weeks and beyond,” Green said. “Why on earth would you take all of that sacrifice and all of that dedication and give that over to a machine?”

‘The bible salesman version of AI’

There’s no shortage of efforts to incorporate AI in education. Take Khan Academy — the online tutoring organization established in 2008, which gradually started using AI to create lessons that personalized students’ experiences.

Khan Academy continues to enroll students, but other efforts have failed. AltSchool, which was backed by tech billionaires including Mark Zuckerberg, began to shutter four years after opening in 2013, as parents saw that their kids weren’t excelling using technology-based education.

Green said the problem is that many of these initiatives are focused on making learning as easy as possible, and that shouldn’t be the goal.

“These people have reframed the idea of learning as something where any struggle to wrestle with a concept or think really hard about something is a sign that the education is bad, and that what we need is for things to be as seamless and easy as possible,” Green said.

That’s not to say there isn’t a place for AI. Green said that he used a large language model, or LLM, to comb through materials for his research and found it helpful. In his classroom, he said that after five weeks of “intensive non-technological use,” he starts incorporating AI to help his students prepare for the political communications landscape, which includes dealing with chatbots and identifying falsely generated images.

Some colleges are putting AI at the forefront. In February, California State University announced its initiative to become the “nation’s first and largest AI-empowered university system” through public-private partnerships to train students and teachers on AI technology, including offering all students and faculty access to a version of ChatGPT.

On the federal level, the Trump administration is establishing a task force to promote AI in K-12 classrooms and look into redirecting funds toward AI efforts.

Some critics have warned that the US should tread carefully. South Korea recently rolled back its initiative to place AI textbooks in classrooms due to backlash from parents and teachers over a lack of preparation on how to best use the tech.

Green said that if colleges want to adopt AI in classrooms, they should mandate intensive training for faculty to understand “the very serious risks” the technology poses to learning. He also suggested restrictions on how teachers can use the technology, including using it to grade and communicate with students.

“We need actual committed educators who are not the bible salesman version of AI at the front of the room, opening up space for ideas about its judicious use in the classroom,” Green said. “We could really end up with some incredible amounts of junk here, and at the expense of our young people actually learning skills that you do need in the real world.”

Read the original article on Business Insider