Research & Development World


Information Theory Holds Surprises for Machine Learning

By Santa Fe Institute | January 24, 2019

Examples from the MNIST handwritten digits database. (Credit: Josef Steppan)

New research challenges a popular conception of how machine learning algorithms “think” about certain tasks.

The conception goes something like this: because of their ability to discard useless information, a class of machine learning algorithms called deep neural networks can learn general concepts from raw data—like identifying cats generally after encountering tens of thousands of images of different cats in different situations. This seemingly human ability is said to arise as a byproduct of the networks’ layered architecture. Early layers encode the “cat” label along with all of the raw information needed for prediction. Subsequent layers then compress the information, as if through a bottleneck. Irrelevant data, like the color of the cat’s coat, or the saucer of milk beside it, is forgotten, leaving only general features behind. Information theory provides bounds on just how optimal each layer is, in terms of how well it can balance the competing demands of compression and prediction.
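
The "bounds" referred to here come from the information bottleneck framework usually credited to Tishby, Pereira, and Bialek. As a rough sketch in our own notation (not taken from the article): write X for the raw input, Y for the label, and T for a layer's internal representation; each candidate representation is then scored by the objective

    \min_{p(t \mid x)} \; I(X;T) \;-\; \beta\, I(T;Y), \qquad \beta > 0,

where I(X;T) counts the bits of raw input the layer retains (compression), I(T;Y) counts the bits it keeps about the label (prediction), and sweeping β traces out the optimal trade-off curve against which a trained layer can be compared.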

“A lot of times when you have a neural network and it learns to map faces to names, or pictures to numerical digits, or amazing things like French text to English text, it has a lot of intermediate hidden layers that information flows through,” says Artemy Kolchinsky, an SFI Postdoctoral Fellow and the study’s lead author. “So there’s this long-standing idea that as raw inputs get transformed to these intermediate representations, the system is trading prediction for compression, and building higher-level concepts through this information bottleneck.”

However, Kolchinsky and his collaborators Brendan Tracey (SFI, MIT) and Steven Van Kuyk (University of Wellington) uncovered a surprising weakness when they applied this explanation to common classification problems, where each input has one correct output (e.g., each picture is either of a cat or of a dog). In such cases, they found that classifiers with many layers generally do not give up any prediction in exchange for improved compression. They also found that there are many “trivial” representations of the inputs which are, from the point of view of information theory, optimal in terms of their balance between prediction and compression.

“We found that this information bottleneck measure doesn’t see compression in the same way you or I would. Given the choice, it is just as happy to lump ‘martini glasses’ in with ‘Labradors’ as it is to lump them in with ‘champagne flutes,’” Tracey explains. “This means we should keep searching for compression measures that better match our notions of compression.”
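
To make the point concrete, here is a small toy calculation of our own (hypothetical labels, not code from the study). In a deterministic task where every input has exactly one correct label, lumping “martini glasses” in with “Labradors” and lumping them in with “champagne flutes” produce exactly the same compression I(X;T) and prediction I(T;Y), so the bottleneck measure has no reason to prefer either grouping:

    import numpy as np

    def mutual_information(joint):
        # I(A;B) in bits from a joint probability table p(a, b).
        pa = joint.sum(axis=1, keepdims=True)
        pb = joint.sum(axis=0, keepdims=True)
        nz = joint > 0
        return float(np.sum(joint[nz] * np.log2(joint[nz] / (pa @ pb)[nz])))

    # Deterministic task: four equally likely inputs, each with one correct label.
    # Labels: 0 = martini glass, 1 = champagne flute, 2 = Labrador, 3 = poodle.
    def ib_scores(grouping):
        # The representation T lumps labels together: T = grouping[label(x)].
        n_x, n_y, n_t = 4, 4, max(grouping) + 1
        p_xt = np.zeros((n_x, n_t))   # joint distribution p(x, t)
        p_ty = np.zeros((n_t, n_y))   # joint distribution p(t, y)
        for x in range(n_x):
            y = x                     # each input has exactly one correct label
            t = grouping[y]
            p_xt[x, t] += 1 / n_x
            p_ty[t, y] += 1 / n_x
        return mutual_information(p_xt), mutual_information(p_ty)

    semantic  = [0, 0, 1, 1]   # {glassware}, {dogs}
    arbitrary = [0, 1, 0, 1]   # {martini glass, Labrador}, {champagne flute, poodle}
    print(ib_scores(semantic))   # (1.0, 1.0)
    print(ib_scores(arbitrary))  # (1.0, 1.0) -- identical to the "semantic" grouping

Both lumpings compress the input by one bit and keep one bit about the label, so they sit at the same point on the optimal trade-off curve; nothing in the measure favors the semantically sensible grouping.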

While the idea of compressing inputs may still play a useful role in machine learning, this research suggests it is not sufficient for evaluating the internal representations used by different machine learning algorithms.

At the same time, Kolchinsky says that the concept of a trade-off between compression and prediction will still hold for less deterministic tasks, like predicting the weather from a noisy dataset. “We’re not saying that information bottleneck is useless for supervised [machine] learning,” Kolchinsky stresses. “What we’re showing here is that it behaves counter-intuitively on many common machine learning problems, and that’s something people in the machine learning community should be aware of.”
