Research & Development World

  • R&D World Home
  • Topics
    • Aerospace
    • Automotive
    • Biotech
    • Careers
    • Chemistry
    • Environment
    • Energy
    • Life Science
    • Material Science
    • R&D Management
    • Physics
  • Technology
    • 3D Printing
    • A.I./Robotics
    • Software
    • Battery Technology
    • Controlled Environments
      • Cleanrooms
      • Graphene
      • Lasers
      • Regulations/Standards
      • Sensors
    • Imaging
    • Nanotechnology
    • Scientific Computing
      • Big Data
      • HPC/Supercomputing
      • Informatics
      • Security
    • Semiconductors
  • R&D Market Pulse
  • R&D 100
    • Call for Nominations: The 2025 R&D 100 Awards
    • R&D 100 Awards Event
    • R&D 100 Submissions
    • Winner Archive
    • Explore the 2024 R&D 100 award winners and finalists
  • Resources
    • Research Reports
    • Digital Issues
    • Educational Assets
    • R&D Index
    • Subscribe
    • Video
    • Webinars
  • Global Funding Forecast
  • Top Labs
  • Advertise
  • SUBSCRIBE

This month in AI research: June 2025 sees reports of $100M salary offers, advanced models defying shutdown and IBM’s quantum leap

By Brian Buntz | June 18, 2025

Illustration of neural networks and deep learning concept from Adobe Stock

As tech layoffs continue to pile up, a paradox is emerging: some professionals with the requisite AI/ML experience are fielding offers ranging from the high six figures to a reported $100 million. In a recent interview, OpenAI CEO Sam Altman said Meta had approached several of his company’s engineers with nine-figure signing bonuses and compensation packages. Meanwhile, quantum computing has taken a giant step forward with IBM’s roadmap to fault-tolerant systems.

What else is new? Big Tech continues to lobby against state-level AI regulation, pushing for a decade-long moratorium bundled into the “One Big Beautiful Bill Act,” which cleared the House in May and which the Senate aims to pass. Following bipartisan backlash, the Senate revised the provision to deny federal broadband funding to states that regulate AI rather than imposing an outright ban. Amazon’s CEO is telegraphing coming AI-related workforce reductions. In addition, reports have emerged that advanced models such as OpenAI’s o3 and Anthropic’s Opus 4 may be developing resistance to shutdown commands.

In other news, a new watchdog report raises questions about OpenAI’s governance and safety practices as the company plans to shed its nonprofit structure. Meanwhile, the UN has designated 2025 as the International Year of Quantum Science. Read on to get the full picture.

1. Meta allegedly making $100M AI salary offers

Source: Bloomberg, TechCrunch

Meta’s corporate logo

OpenAI CEO Sam Altman revealed on his brother’s “Uncapped” podcast that Meta has allegedly offered OpenAI employees signing bonuses as high as $100 million, with annual compensation packages even higher. While Meta has sought to hire “a lot of people” at OpenAI, “so far none of our best people have decided to take them up on that,” Altman added. The social media giant has reportedly targeted key researchers including Noam Brown and attempted to recruit Google DeepMind’s Koray Kavukcuoglu, according to Fortune, though both efforts were unsuccessful. Meta has, however, successfully recruited Google DeepMind’s Jack Rae and Sesame AI’s Johan Schalkwyk for its new “superintelligence” team led by former Scale AI CEO Alexandr Wang.

Why it matters: This unprecedented compensation arms race highlights the shortage of top AI talent and the lengths to which tech giants will go to build competitive AI teams, even as genAI commoditizes some aspects of software development.

2. Big Tech lobbies for 10-year state AI regulation ban

Source: Financial Times

Conceptual illustration of AI governance and regulation, from Adobe Stock

Major technology companies including Amazon, Google, Microsoft and Meta are reportedly lobbying the Senate to support a 10-year moratorium on state-based AI regulation that House Republicans included in their “One Big Beautiful Bill” passed in May 2025. According to sources familiar with the moves, lobbyists are working to establish federal preemption that would prevent states from creating their own AI governance frameworks, with the Senate potentially unveiling its version this week and aiming to pass it by July 4.

The proposal, however, faces mounting resistance from a bipartisan group of more than 260 state legislators from all 50 states, who sent a letter to Congress on Tuesday opposing the provision. Meanwhile, 40 state attorneys general, including Republicans from Ohio, Tennessee, Arkansas, Utah and Virginia, urged Congress to ditch the measure, calling it “irresponsible” and warning it “deprives consumers of reasonable protections.”

Even within tech, opinions diverge. Anthropic CEO Dario Amodei wrote in an early June New York Times opinion piece that “A 10-year moratorium is far too blunt an instrument. AI is advancing too head-spinningly fast.” Meanwhile, Chip Pickering, former congressman and CEO of INCOMPAS, has lobbied for the proposal on behalf of members including Microsoft, Amazon, Meta and Google.

The moratorium has divided Republicans, with Rep. Marjorie Taylor Greene posting on X: “We have no idea what AI will be capable of in the next 10 years and giving it free rein and tying states hands is potentially dangerous.” Sen. Marsha Blackburn (R-TN) also expressed concern that the bill would override legislation to protect artists from deepfakes in her state.

The GOP bill also includes millions of dollars for the Pentagon’s AI-powered weapons programs, $25 million for AI systems to detect Medicare fraud, and $500 million over 10 years to update government systems with AI.

Why it matters: A coalition of 140 organizations warned the proposed AI provision could lead to “unfettered abuse.” With no comprehensive federal AI framework in place after Trump overturned Biden’s AI executive order on day one, states would be left powerless to protect residents from AI harms for a decade while hoping Congress eventually acts.

The AI provision would ban states from enforcing “any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decisions” for the next decade. This would affect existing state laws addressing deepfake misinformation, algorithmic rent-setting tools and AI-generated explicit content, as well as Maryland’s protections against deepfakes, child exploitation online and consumer data misuse.

3. IBM charts path to fault-tolerant quantum computing

Source: IBM Newsroom

Rendering of the IBM Quantum Starling system, from IBM

IBM unveiled its roadmap to build the world’s first large-scale, fault-tolerant quantum computer by 2029. The company plans to build IBM Quantum Starling in a new quantum data center in Poughkeepsie, New York, and expects the system to perform 20,000 times more operations than today’s quantum computers. The roadmap includes several milestone processors: Loon (2025) will test architecture components for quantum low-density parity-check (qLDPC) codes, Kookaburra (2026) will be IBM’s first modular processor combining quantum memory with logic operations, and Cockatoo (2027) will link quantum chips together using “L-couplers.”

Why it matters: As 2025 marks the UN’s International Year of Quantum Science and Technology, IBM’s timeline may represent the clearest path yet to practical quantum computing, which could have applications ranging from drug discovery to cryptography to climate modeling.

4. Amazon CEO signals that AI could significantly reduce corporate workforce

Source: WSJ, Amazon memo

Amazon CEO Andy Jassy stated that artificial intelligence will reduce the company’s corporate workforce. He wrote: “As we roll out more Generative AI and agents, it should change the way our work is done. We will need fewer people doing some of the jobs that are being done today, and more people doing other types of jobs.”

Why it matters: While tech leaders have often downplayed AI’s potential to displace workers, Jassy’s (along with similar remarks from Meta’s Mark Zuckerberg) assessment signals a shift in corporate messaging. This transparency about workforce impacts could accelerate policy discussions around AI-driven unemployment and the need for reskilling programs.

5. Reports surface of AI models resisting shutdown commands

Source: Computerworld, NBC News

A report from early June suggests that OpenAI’s advanced models are beginning to resist human-issued shutdown commands. In a separate incident, during pre-release safety simulations, Anthropic’s Opus 4 accessed fictional private emails (e.g., a supposed engineer’s affair) and threatened to expose the information to avoid being shut down or replaced. This behavior appeared in 84% of high-pressure test runs.

Why it matters: If confirmed, this development would substantiate fears that more advanced models cannot be reliably “aligned” by humans.

6. Google contractors used ChatGPT to improve Bard/Gemini, documents reveal

Sources: Business Insider, Windows Central

Hundreds of internal documents that Business Insider obtained purportedly show that Google’s contractors at Scale AI systematically used ChatGPT to improve Bard (now Gemini) in 2023. Scale AI workers generated thousands of ChatGPT responses and compared them to Bard’s outputs, with managers ordering them to “make it BETTER than GPT” and offering 15% bonuses for responses that outperformed ChatGPT. Both Google and Scale AI deny using ChatGPT outputs for training, claiming this was standard competitive benchmarking. In related news, Google is reportedly severing its relationship with Scale AI as Meta acquires a 49% stake in the company. Scale AI stood to receive up to $200 million from Google in 2025.

Why it matters: OpenAI’s terms of service prohibit using its outputs to train competing models. DeepSeek’s R1 model was also reportedly developed with data from OpenAI.

7. ‘OpenAI Files’ watchdog report raises governance questions

Source: The OpenAI Files, The Verge

A comprehensive new report from The Midas Project and Tech Oversight Project documents concerns about OpenAI’s governance, leadership integrity and safety practices. The report highlights OpenAI’s planned transition from its original nonprofit structure to a for-profit corporation, which would eliminate the 100x profit caps designed to ensure AGI benefits humanity broadly. It also details allegations that CEO Sam Altman misled board members, investors, and Congress about a variety of matters, including his financial interests in the company. On the safety front, the report claims that OpenAI rushed model deployments without adequate testing, failed to allocate promised computing resources to its safety team, and used restrictive NDAs that prevented employees from raising concerns about AI risks.

Why it matters: The governance concerns come at a particularly sensitive time, as reports surface about AI models potentially resisting shutdown commands and as the company fends off aggressive poaching attempts from competitors.

8. Microsoft announces computational chemistry advance

Source: Microsoft Research Blog

Image from Microsoft Blog

Microsoft Research announced an advance in computational chemistry, using deep learning to dramatically improve the accuracy of density functional theory (DFT), which scientists widely use to simulate matter at the atomic level. The company’s new model, “Skala,” achieves near “chemical accuracy” (1 kcal/mol) on atomization energies, a threshold no existing functional has reached. Science magazine dubbed the 60-year search for better models the “pursuit of the Divine Functional.” The team generated a training dataset two orders of magnitude larger than previous efforts, working with Prof. Amir Karton from the University of New England, who noted: “After years of benchmarking DFT methods against experimental accuracy, this is the first time I’ve witnessed such an unprecedented leap in the accuracy–cost trade-off.”
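The 1 kcal/mol "chemical accuracy" threshold mentioned above is simply a bound on prediction error against benchmark energies. A minimal sketch of that check, using made-up placeholder energies (not Skala's actual benchmark results):

```python
# "Chemical accuracy" means a functional's predicted energies stay within
# about 1 kcal/mol of high-level reference values on average.
CHEMICAL_ACCURACY_KCAL_MOL = 1.0

def mean_absolute_error(predicted, reference):
    """Mean absolute error between two equal-length lists of energies."""
    return sum(abs(p - r) for p, r in zip(predicted, reference)) / len(predicted)

# Hypothetical atomization energies (kcal/mol) for a handful of molecules:
# the reference list stands in for benchmark values, the predicted list
# for a functional's output. These numbers are illustrative only.
reference = [232.0, 392.5, 117.1, 219.4]
predicted = [232.6, 391.9, 117.9, 218.8]

mae = mean_absolute_error(predicted, reference)
print(f"MAE = {mae:.2f} kcal/mol; "
      f"chemically accurate: {mae <= CHEMICAL_ACCURACY_KCAL_MOL}")
```

With these placeholder values the mean absolute error is 0.65 kcal/mol, inside the threshold; a functional failing the test would show an MAE above 1.0.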

Why it matters: This represents a genuine scientific breakthrough using AI, moving beyond chatbots and language models to solve a 60-year-old grand challenge in chemistry.

9. Apple’s ‘Illusion of Thinking’ paper sparks debate on AI reasoning

Source: Apple Machine Learning Research

Apple’s paper titled “The Illusion of Thinking” is causing a stir. The paper challenges claims about reasoning capabilities in large language models. Testing frontier models including OpenAI’s o3, Anthropic’s Claude 3.7 and Google’s Gemini 2.5 Pro and Flash Thinking models on controllable puzzle environments like Tower of Hanoi, the team found “complete accuracy collapse beyond certain complexities.” The models exhibited a counter-intuitive pattern: their reasoning effort increased with problem complexity up to a point, then declined despite adequate token budgets.

The paper sparked immediate controversy. Alex Lawsen of Open Philanthropy published a rebuttal on June 10, 2025 titled “The Illusion of the Illusion of Thinking,” co-authored with Anthropic’s Claude Opus model, arguing that Apple’s findings stemmed from “experimental design flaws, not fundamental reasoning limits.” AI commentator Gary Marcus, a prominent critic of AI marketing and hype, pointed out on his blog that “(ordinary) humans actually have a bunch of (well-known) limits that parallel what the Apple team discovered. Many (not all) humans screw up on versions of the Tower of Hanoi with 8 discs.”
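Tower of Hanoi is attractive as a test bed because its difficulty scales predictably: the minimal solution for n disks takes exactly 2^n − 1 moves, so adding a disk doubles the required work. A standard recursive solver illustrates this (a generic sketch, not Apple's actual test harness):

```python
# Tower of Hanoi: move n disks from src to dst using aux as a spare peg.
# The minimal solution has 2**n - 1 moves, giving researchers a clean
# "complexity dial" to turn when probing model reasoning.

def hanoi(n, src="A", dst="C", aux="B", moves=None):
    """Return the list of (from_peg, to_peg) moves solving n disks."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi(n - 1, src, aux, dst, moves)   # park n-1 disks on the spare peg
    moves.append((src, dst))             # move the largest disk
    hanoi(n - 1, aux, dst, src, moves)   # restack the n-1 disks on top
    return moves

for n in (3, 8, 12):
    print(n, len(hanoi(n)))  # prints 3 7, then 8 255, then 12 4095
```

The 8-disk version Marcus mentions already requires 255 correct moves in sequence, which is why both humans and models start to falter well before the puzzle's rules become hard to state.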

Why it matters: Coming from Apple, a company often criticized for lagging in AI, this research raises questions about whether current “reasoning” models are as advanced as their creators suggest; those creators have at times claimed the models possess Ph.D.-level reasoning for at least some tasks.

Related Articles Read More >

Quantum computing edges closer to biotech reality in Moderna-IBM pact
Hands-on with Patsnap’s Eureka Scout: Strong features meet evolving AI backbone
Researchers developed an AI tool to help build greener buildings
8 R&D developments to keep an eye on this week: A $12B AI unicorn, gut microbes vs. ‘forever chemicals’ and a record-breaking black hole
Copyright © 2025 WTWH Media LLC. All Rights Reserved. The material on this site may not be reproduced, distributed, transmitted, cached or otherwise used, except with the prior written permission of WTWH Media