Research & Development World


Google Creates Fail-Safe for Stopping Dangerous A.I.

By Ryan Bushey | June 6, 2016

Companies have started to explore how artificial intelligence (A.I.) and robotics could be useful to their customers.

Facebook deployed an algorithm that sorts through thousands of posts per second to deliver the best content to its users. Apple, meanwhile, acquired an artificial intelligence startup named Emotient back in January, potentially to use Emotient’s technology for facial recognition features in a future version of the iPhone.

However, what would happen if these helpful programs went rogue?

Google DeepMind, the tech giant’s subsidiary specializing in A.I. research, published a study in conjunction with the Future of Humanity Institute explaining that stopping a malfunctioning program would take more than just unplugging a computer.

Essentially, researchers designing these algorithms would need to build in something called an “interruption policy,” according to Popular Science: a proprietary signal that only the researchers who created the hypothetical A.I. program can activate by remote control, a “big red button,” so to speak.

An example of how an advanced A.I. system could learn to bypass traditional commands occurred in 2013, when an algorithm quickly learned that it would never lose at Tetris if it simply paused the game.

The trigger would essentially cause the machine to stop what it’s doing, because the signal emitted from the control would trick it into believing it’s making this decision on its own.
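The idea sketched above comes from reinforcement learning: an overseer can force a safe action, but the agent's learning update treats the forced step as if it were the agent's own choice, so the agent never learns to resist the button. The following is a minimal toy sketch of that idea under those assumptions; all names here are hypothetical illustrations, not DeepMind's actual code or algorithm.

```python
import random


class InterruptibleAgent:
    """Toy value-learning agent with a 'big red button' (hypothetical sketch).

    When the overseer interrupts, a fixed safe action overrides the policy.
    The learning update is the same whether or not the action was forced,
    so interruption introduces no penalty signal the agent could learn
    to avoid or work around.
    """

    def __init__(self, actions, safe_action, epsilon=0.1, alpha=0.5):
        self.q = {}                # (state, action) -> estimated value
        self.actions = actions
        self.safe_action = safe_action
        self.epsilon = epsilon     # exploration rate
        self.alpha = alpha         # learning rate

    def choose(self, state, interrupted=False):
        # Button pressed: the overseer's safe action overrides the policy.
        if interrupted:
            return self.safe_action
        # Otherwise, ordinary epsilon-greedy action selection.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q.get((state, a), 0.0))

    def learn(self, state, action, reward):
        # Identical update for forced and chosen actions: from the agent's
        # perspective, stopping looks like its own decision.
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (reward - old)
```

With `epsilon=0.0` the agent acts greedily on its learned values, yet `choose(state, interrupted=True)` always yields the safe action regardless of what the agent has learned.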

Rest assured, this remains a speculative scenario: as Popular Science noted, there is no specific architecture for developing generalized A.I. programs just yet.

It’s an intriguing notion to consider, though, as studies on A.I. slowly shift from science fiction to a potentially practical service.

 


Copyright © 2025 WTWH Media LLC. All Rights Reserved. The material on this site may not be reproduced, distributed, transmitted, cached or otherwise used, except with the prior written permission of WTWH Media
