
Meet The Technology Tackling Poverty From Space

September 8, 2016
Posted by Terry Turner

Since 1990, extreme poverty has been cut in half worldwide. But there are still more than 700 million people living on less than two dollars a day around the globe.

The United Nations has made ending poverty the first of its 17 Sustainable Development Goals, to be achieved by 2030. That challenge may sound impossible, but a team of Stanford University researchers believes it could be well within reach.

That’s because they’ve come up with a cost-effective way to clear one of the most difficult, and most obvious, hurdles to reaching that goal on time.

“Poverty is this big problem that everybody wants to solve, but you can’t even begin to work on it if you don’t know where the poor are,” said Neal Jean, a doctoral student and lead author of the Stanford research.

Jean’s common-sense observation isn’t rocket science, but his solution was. He and the rest of the Stanford research team collected stacks of high-resolution satellite photos of the Earth, then taught computers how to use those photos to find areas of poverty.

“We have a limited number of surveys conducted in scattered villages across the African continent, but otherwise we have very little local-level information on poverty,” study co-author Marshall Burke, an assistant professor of Earth system science at Stanford and a fellow at the Center on Food Security and the Environment, said in a statement. “At the same time, we collect all sorts of other data in these areas, like satellite imagery, constantly.”

Traditionally, researchers measure poverty levels by going house-to-house, conducting surveys, expending shoe leather instead of rocket fuel. But the Stanford solution, which relies on billions of dollars’ worth of already existing satellites, turns out to be more cost-effective. That’s because space has gotten pretty crowded with mapping satellites in the last 20 years, and companies like Google have photographed virtually every square foot of the planet.


The team then “taught” a computer to look for signs of wealth and poverty through a process called “machine learning.” A branch of artificial intelligence, machine learning does not rely on programmers hand-coding rules for what a wealthy or poor area looks like; instead, it uses statistics to build algorithms that learn those patterns directly from data.
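The idea of an algorithm that learns from statistics rather than hand-coded rules can be made concrete with a toy sketch. The snippet below is purely illustrative and far simpler than the team’s actual models: it fits a straight line by ordinary least squares so that a single hypothetical image-derived feature predicts a made-up wealth score. The “learning” is nothing more than estimating the line’s slope and intercept from examples.

```python
# Toy illustration of statistical learning: fit y = a*x + b from examples.
# All numbers below are hypothetical, not real survey or satellite data.

def fit_line(xs, ys):
    """Ordinary least squares for a single feature."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var            # slope, estimated from the data
    b = mean_y - a * mean_x  # intercept
    return a, b

# Hypothetical training data: image feature -> asset-wealth score.
features = [0.1, 0.4, 0.5, 0.8, 0.9]
wealth = [1.0, 2.1, 2.4, 3.2, 3.6]

a, b = fit_line(features, wealth)

def predict(x):
    """Predict a wealth score for a new feature value."""
    return a * x + b

print(round(predict(0.6), 2))  # prediction for an unseen region
```

Nothing about wealth or satellites is written into the code; the relationship comes entirely from the example data, which is the essence of the approach.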

“As a human, if you were to look at these images and I asked you to predict whether the people living there were rich or poor, you would probably look for some indicators that make sense to you,” Jean said. “Looking at U.S. suburbs, you might look for evidence of swimming pools or sidewalks.”

Since computers don’t have that human intuition, the team effectively taught the computer what to look for. They had it compare the difference in light intensities between day and night in hundreds of thousands of photos of the same places: darker places at night would indicate less electricity and therefore more poverty, for example. They also identified visible daytime indicators, like farm roads and villages, that the computer could use to make judgments.
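The core trick described above, using night-time light intensity as a stand-in label so that daytime features can be scored without survey data, can be sketched in miniature. The snippet below is a hypothetical illustration of that general proxy-label idea, not the team’s code: regions are split into “dark” and “lit” groups by an assumed night-light threshold, and a new region is labeled by whichever group’s average daytime features (here, made-up road and building densities) it most resembles.

```python
# Toy sketch of the proxy-label idea. All data and the threshold are
# hypothetical; the Stanford team used far richer imagery and models.

NIGHT_THRESHOLD = 10.0  # assumed cutoff between "dark" and "lit" at night

# Each record: (road_density, building_density) daytime features,
# paired with a night-time light intensity acting as the proxy label.
regions = [
    ((0.1, 0.2), 2.0),
    ((0.2, 0.1), 4.0),
    ((0.7, 0.8), 30.0),
    ((0.9, 0.6), 45.0),
]

def centroid(vectors):
    """Average feature vector for a group of regions."""
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n for i in range(2))

dark = [f for f, light in regions if light < NIGHT_THRESHOLD]
lit = [f for f, light in regions if light >= NIGHT_THRESHOLD]
c_dark, c_lit = centroid(dark), centroid(lit)

def predict(features):
    """Label a new region by its closest group centroid."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    if dist(c_dark) < dist(c_lit):
        return "likely poorer (dark at night)"
    return "likely wealthier (lit at night)"

print(predict((0.15, 0.15)))  # sparse roads and buildings
```

The payoff is the same as in the article: once the link between daytime features and night-time light is learned, new regions can be assessed from daytime imagery alone.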

The method, applied to five African countries, outperformed existing approaches to predicting poverty distribution.

The Stanford team published its results in the August 19 edition of the journal Science.

“Our paper demonstrates the power of machine learning in this context,” said study co-author Stefano Ermon, assistant professor of computer science and a fellow by courtesy at Stanford Woods Institute for the Environment. “And since it’s cheap and scalable, requiring only satellite images, it could be used to map poverty around the world in a very low-cost way.”

The Stanford model replaces expensive, time-consuming house-to-house surveys with readily available images and computers limited only by the speed of electrons passing through their circuits.


“The marginal cost of using those satellites is basically nothing,” Jean said. “Given that those images already exist, all we have to do is process them.”

The images, taken from Google Maps, are free, and so is the code the researchers used to crunch their numbers: it’s open source, so anyone can run it on their own computers without paying. Compare that to the expense and time required to send hundreds of people across a country or a continent to do the same job.

Unlike traditional surveys, the Stanford model is also scalable, allowing NGOs and governments to target whole regions or small villages, increasing efficiency and reducing costs.

The full cost of eliminating extreme poverty is an elusive target. Earlier this year, the Brookings Institution estimated it would cost $80 billion to eliminate extreme poverty around the world. Other estimates have placed it as high as $175 billion.
