Research & Development World


Online Game Reveals Something Fishy about Mathematical Models

By R&D Editors | December 3, 2015

How can you tell whether your mathematical model is good enough? In a new study, researchers implemented a Turing test in the form of an online game, played by more than 1,700 people, to assess how well their models reproduced the collective motion of real fish schools.

Mathematical models allow us to understand how patterns and processes in the real world are generated, and how complex behavior, such as the collective movement of animal groups, can emerge from simple individual-level rules. Fitting models to the large-scale properties of the data is one way to choose between candidate models, but can we be satisfied once such a fit has been achieved? What other methods can we use to judge how good the fit really is?
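The article does not give the authors' model, but the idea of "simple individual-level rules" producing collective motion can be illustrated with a minimal Vicsek-style alignment model (a standard textbook example, not the fitted model from the study): each agent repeatedly adopts the average heading of its neighbours, plus a little noise.

```python
import math
import random

def vicsek_step(positions, angles, speed=0.03, radius=0.25, noise=0.1, box=1.0):
    """One update of a minimal Vicsek-style model: each agent adopts the
    mean heading of all neighbours within `radius` (plain Euclidean
    distance, ignoring the periodic wrap), perturbed by uniform noise.
    Illustrative sketch only -- not the model fitted in the study."""
    new_angles = []
    for xi, yi in positions:
        sx = sy = 0.0
        for j, (xj, yj) in enumerate(positions):
            if (xi - xj) ** 2 + (yi - yj) ** 2 <= radius ** 2:
                sx += math.cos(angles[j])   # neighbours include self
                sy += math.sin(angles[j])
        mean_heading = math.atan2(sy, sx)
        new_angles.append(mean_heading + random.uniform(-noise, noise))
    # Move each agent forward along its new heading, wrapping at the box edges
    new_positions = [
        ((x + speed * math.cos(a)) % box, (y + speed * math.sin(a)) % box)
        for (x, y), a in zip(positions, new_angles)
    ]
    return new_positions, new_angles

# Demo: 50 agents with random initial positions and headings
random.seed(1)
pos = [(random.random(), random.random()) for _ in range(50)]
ang = [random.uniform(-math.pi, math.pi) for _ in range(50)]
for _ in range(200):
    pos, ang = vicsek_step(pos, ang)
```

From such simulated trajectories one can compute the same large-scale statistics (polarisation, nearest-neighbour distances, speed distributions) as from tracked video of real fish, which is the kind of model-to-data comparison the study goes beyond.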

James Herbert-Read, a researcher at the Department of Mathematics at Uppsala University, and his colleagues highlight this problem and propose a solution: a Turing test to assess how well their models reproduce collective motion.

They designed an online game where members of the public and a small group of experts were asked to differentiate between the collective movements of real fish schools and those simulated by a model.

“By putting the game online, and through crowdsourcing this problem, the public have not only become engaged in science, they have also helped our research,” says James Herbert-Read.

Even though the statistical properties of the model matched those of the real data, both experts and members of the public could tell simulated fish from real ones. The researchers asked the online players who answered all six questions correctly how they had distinguished the real schools from the simulated ones.
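Why does above-chance discrimination matter? If the model truly passed the Turing test, players should do no better than guessing. A standard one-sided binomial test (illustrative numbers below, not the study's analysis) shows how unlikely a perfect score is under pure guessing:

```python
from math import comb

def p_above_chance(correct, trials, p_chance=0.5):
    """One-sided binomial tail: the probability of scoring at least
    `correct` out of `trials` if a player guesses with success
    probability `p_chance` on each question."""
    return sum(
        comb(trials, k) * p_chance**k * (1 - p_chance) ** (trials - k)
        for k in range(correct, trials + 1)
    )

# A player answering all 6 real-vs-simulated questions correctly by
# chance alone: probability 0.5**6
print(round(p_above_chance(6, 6), 6))  # 0.015625
```

With more than 1,700 players, even a modest excess over 50% accuracy across the pooled answers would be strong evidence that the simulated schools remain distinguishable from the real ones.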

“These players commonly suggested that the spatial organization of the groups and smoothness of the trajectories appeared different between the simulated and real schools. These are aspects of the model we can try to improve in the future,” says James Herbert-Read.

“Our results highlight that we can use ourselves as Mechanical Turks through ‘citizen science’ to improve and refine model fitting.”

The results are published in Biology Letters.

Citation: Herbert-Read JE, Romenskyy M, Sumpter DJT. A Turing test for collective motion. Biol. Lett., 2015. DOI: 10.1098/rsbl.2015.0674

  • See the online game used in the study: http://www.collective-behavior.com/apps/fishgame/
  • See the authors’ new game: http://www.collective-behavior.com/apps/fishindanger/webgl

Turing test

Alan Turing proposed a means of assessing whether a machine’s behavior is indistinguishable from that of a human. In the Turing test, if a human observer cannot tell which of two interacting players is the machine (the other being a human), the machine has passed the test and is deemed to exhibit intelligent behavior. Here the test is adapted to assess the ability of a model (the machine) to reproduce the real world (the human behavior of the original test).
