The Islamic State group destroyed a sixth-century Christian monastery in Iraq in 2014, a fact confirmed last week through analysis of satellite images. The cultural loss is significant and widely lamented.
Remotely sensed images can be valuable information sources for the public, including journalists and their readers. High-resolution imagery of places in the news has been used extensively to bring world events to the public's doorstep.
The monastery's destruction highlights a key aspect of modern satellite image interpretation: the enduring importance of experts, despite the rise of computer analysis and crowdsourcing.
In my own research concerning expert interpretation and as a trained image analyst, I have not only studied the transition from novice to expert but also lived it. A key component of training is moving from the superficial identification of objects to understanding the nuanced patterns that emerge from imagery.
How those images and patterns are interpreted, and by whom (or what), has varied through history. A current trend toward including nonexperts in the interpretation of satellite imagery risks marginalizing the role of expert image analysts, like the person who confirmed the destruction of St. Elijah’s monastery in Mosul.
Early image interpretation
Aerial photography dates back to the mid-1800s, when the first images of Paris and Boston were captured from balloons. With the invention of the airplane and the onset of World War I, reconnaissance air photography was born.
As the method gained popularity, the need grew for highly trained experts who could turn photographs into intelligence. Viewing Earth from high above was foreign to most people until the United Kingdom’s Ordnance Survey released the first air photo map in 1919; before then, lack of experience was seen as a major barrier to nonexpert interpretation of overhead images. Initially, interpreters worked mainly to identify objects of military importance.
The first photographs from space, captured by cameras aboard rockets in 1946, opened a new era that would lead to civilian satellite imaging. From the beginning, computers were seen as a way to improve image interpretation. And indeed, today computers can outperform humans at some tasks, such as sorting images into categories based on what they picture.
However, early attempts to model the human photo interpretation process fell short: computerized systems failed to replicate the flexibility of human creativity and inference from abstraction. Think about the last time you saw an animal in a cloud passing overhead. Humans are adept at seeing the meaningful patterns in the mundane.
Experts’ ability to recall the meaningful patterns relevant to a given situation in their specialty, and to quickly judge which patterns matter, gives them an advantage over both computers and nonexperts.
The rise of computing
Early knowledge-based systems led the way for modern computer-aided image analysis. Neural networks, for example, are computer systems that can learn from multiple data inputs how to analyze complex data.
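The learning principle can be sketched in a few lines. The toy "network" below is a single perceptron, an illustration only (not any system used in real image analysis), that learns from labeled examples to classify a hypothetical two-pixel "image" as bright or dark:

```python
# A minimal single-neuron "network" (a perceptron) learning a toy rule:
# call a two-pixel image "bright" only when both pixel values are high.
# Illustration only -- real image-analysis networks have millions of weights.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Adjust two weights and a bias whenever the neuron answers wrongly."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            # predict 1 ("bright") if the weighted sum crosses the threshold
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - pred
            # nudge the weights toward the correct answer
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# toy training data: (pixel pair, label); label 1 means "bright"
data = [((1, 1), 1), ((1, 0), 0), ((0, 1), 0), ((0, 0), 0)]
w, b = train_perceptron(data)
print([predict(w, b, x) for x, _ in data])  # prints [1, 0, 0, 0]
```

The neuron updates its weights only when it makes a mistake, which is the same error-driven principle, scaled up enormously, behind the networks used for modern image analysis.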
The popularity of computational methods has begun to push the expert out of visual interpretation, with potentially serious consequences in cases where creativity and mental flexibility matter.
Computer automation has failed in a number of domains, including aviation and image recognition.
Recent advances in computer vision now make it possible for computers to “learn” from their past mistakes and improve their performance over time.
In 1994, French cultural theorist Paul Virilio predicted a futuristic vision machine, a system where machines both capture and perceive imagery. Could his vision be coming true?
When nonexperts get involved
Another major recent change in the field of image interpretation is the rise of crowdsourcing, which is rooted in the idea of collective intelligence. Collective intelligence arises when individuals act as a group to perform intelligent tasks — in the aggregate, the group is better at some tasks than any of the individuals would be.
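That aggregation principle can be simulated directly. The sketch below uses invented voters and accuracies, assuming each volunteer independently labels a scene correctly 70% of the time; a majority vote over eleven such voters is right more often than any one of them:

```python
# Collective intelligence in miniature: majority voting over many
# independent, individually unreliable judgments beats a single judge.
# Voter count and accuracy are invented for illustration.
import random
from collections import Counter

def majority(votes):
    """Return the most common label among a crowd's votes."""
    return Counter(votes).most_common(1)[0][0]

def simulate(n_voters, accuracy, trials=10000, seed=0):
    """Fraction of trials in which the majority vote recovers the truth."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        votes = ["right" if rng.random() < accuracy else "wrong"
                 for _ in range(n_voters)]
        if majority(votes) == "right":
            correct += 1
    return correct / trials

solo = simulate(1, 0.7)    # one 70%-accurate voter
crowd = simulate(11, 0.7)  # majority of eleven such voters
print(f"one voter: {solo:.2f}, crowd of 11: {crowd:.2f}")
```

With an odd number of voters there are no ties, and the crowd's accuracy climbs well above 70%, a statistical effect sometimes traced back to Condorcet's jury theorem. The same logic, of course, only helps when the individual errors are independent.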
Existing crowdsourcing campaigns have suffered from several pitfalls of relying on nonexperts, returning results that are inaccurate, incomplete or, in some cases, entirely wrong.
Many of the errors stem from overestimating nonexperts’ skills, such as an untrained person’s ability to accurately measure or estimate distance, or even to interpret task instructions.
Other errors can be as simple as mistaken identity, as when an experienced Wikimapia user mislabeled one golf course as another nearby course. Despite the crowd’s potential to correct it, the error persisted in a publicly editable online map for two years.
To best capitalize on the advantages and capabilities of both computer-aided and crowdsourced image analysis, we will still need a highly skilled expert community with experience and training in photo interpretation.
These experts can do significant work of their own, such as confirming extremists’ destruction of historic sites. They can also help characterize the errors that arise in computer and crowdsourcing efforts, and develop new methods to improve those techniques.
Raechel A. Bianchetti, Assistant Professor of Geography, Michigan State University. This article was originally published on The Conversation. Read the original article.