While the concept of “deepfakes,” or AI-generated synthetic imagery, has been decried primarily in connection with nonconsensual depictions of people, the technology is dangerous (and interesting) in other ways as well. For instance, researchers have shown that it can be used to manipulate satellite imagery to produce realistic-looking, but totally fake, overhead maps of cities.
The study, led by Bo Zhao from the University of Washington, is not intended to alarm anyone, but rather to show the risks and opportunities involved in applying this rather infamous technology to cartography. In fact, their approach has as much in common with “style transfer” techniques, which redraw images in impressionistic, crayon, or other arbitrary styles, as it does with deepfakes as they are commonly understood.
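For readers unfamiliar with style transfer, the sketch below is a generic, Gatys-style example in PyTorch that repaints one image using the textures of another. It is illustrative only: the file names are placeholders and nothing in it is taken from the study.

```python
# A generic Gatys-style neural style transfer sketch (not from the study).
# "content.jpg" and "style.jpg" are placeholder file names.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

def load(path, size=256):
    tf = transforms.Compose([transforms.Resize((size, size)), transforms.ToTensor()])
    return tf(Image.open(path).convert("RGB")).unsqueeze(0).to(device)

content, style = load("content.jpg"), load("style.jpg")

# Pretrained VGG19 as a frozen feature extractor.
vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

MEAN = torch.tensor([0.485, 0.456, 0.406], device=device).view(1, 3, 1, 1)
STD = torch.tensor([0.229, 0.224, 0.225], device=device).view(1, 3, 1, 1)
LAYERS = {1, 6, 11, 20, 29}  # a handful of early-to-late ReLU outputs

def features(x):
    x = (x - MEAN) / STD
    feats = []
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in LAYERS:
            feats.append(x)
    return feats

def gram(f):  # channel-to-channel correlations capture "style"
    b, c, h, w = f.shape
    f = f.view(c, h * w)
    return f @ f.t() / (c * h * w)

style_targets = [gram(f).detach() for f in features(style)]
content_target = features(content)[2].detach()  # one mid-level layer for content

img = content.clone().requires_grad_(True)  # optimize the pixels directly
opt = torch.optim.Adam([img], lr=0.02)

for step in range(300):
    opt.zero_grad()
    feats = features(img)
    style_loss = sum(F.mse_loss(gram(f), g) for f, g in zip(feats, style_targets))
    content_loss = F.mse_loss(feats[2], content_target)
    (1e5 * style_loss + content_loss).backward()
    opt.step()
    with torch.no_grad():
        img.clamp_(0, 1)
```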
The team trained a machine learning system on satellite images of three different cities: Seattle, nearby Tacoma, and Beijing. Each has its own distinctive look, just as any painter or medium does. For instance, Seattle tends to have more overhanging greenery and narrower streets, while Beijing is more monochrome, and in the images used for the study, its taller buildings cast long, dark shadows. The system learned to associate the details of a street map (like Google’s or Apple’s) with those of the corresponding satellite view.
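The article doesn’t name the architecture, but learning a mapping from paired street-map and satellite tiles is the kind of task usually handled by pix2pix-style image-to-image translation. The sketch below is a minimal, assumed PyTorch version: a tiny encoder-decoder generator trained with a pixel-wise L1 loss on placeholder tensors standing in for aligned tile pairs from one city (a full setup would typically add an adversarial discriminator). None of the names, sizes, or hyperparameters come from the study.

```python
# Minimal, assumed sketch of learning a street-map -> satellite-tile mapping
# from paired tiles (pix2pix-style translation, without the adversarial part).
import torch
import torch.nn as nn

class MapToSatGenerator(nn.Module):
    """Tiny encoder-decoder: 3-channel map tile in, 3-channel satellite-style tile out."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

gen = MapToSatGenerator()
opt = torch.optim.Adam(gen.parameters(), lr=2e-4)

# Placeholder batch: in practice, aligned (street map, satellite) tile pairs
# for one city, e.g. Seattle, Tacoma, or Beijing, scaled to [0, 1].
map_tiles = torch.rand(8, 3, 256, 256)
sat_tiles = torch.rand(8, 3, 256, 256)

for step in range(100):  # a real run would loop over a full tiled dataset
    fake_sat = gen(map_tiles)
    loss = nn.functional.l1_loss(fake_sat, sat_tiles)  # pixel-wise reconstruction
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Training one such generator per city is what would let each model absorb that city’s look, from Seattle’s overhanging greenery to the long shadows of Beijing’s taller buildings.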
The resulting machine learning model, when given a street map, returns a realistic-looking faux satellite image of what that area would look like if it were in any of those cities.
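Continuing the assumed sketch above, inference might look like the following: load a per-city generator (here a hypothetical TorchScript checkpoint named map2sat_seattle.pt) and feed it a street-map tile to get back a synthetic satellite-style tile. All file names are placeholders.

```python
# Hypothetical inference with a per-city generator like the one sketched above,
# exported to TorchScript. File names are placeholders.
import torch
from torchvision import transforms
from torchvision.utils import save_image
from PIL import Image

gen = torch.jit.load("map2sat_seattle.pt")  # assumed per-city checkpoint
gen.eval()

to_tensor = transforms.Compose([transforms.Resize((256, 256)), transforms.ToTensor()])
map_tile = to_tensor(Image.open("street_map_tile.png").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    fake_sat = gen(map_tile)  # synthetic "satellite" view of the mapped area

save_image(fake_sat, "fake_satellite_tile.png")
```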