Researchers at the universities of Binghamton and Los Angeles have developed a new way to detect “deep fakes,” videos that almost perfectly simulate the actions and words of a human being.
Called “FakeCatcher,” their method is based on photoplethysmography, an optical technique that measures vascular function, and which is used in particular by the Apple Watch’s heart rate monitor.
The principle is simple: our blood flow constantly creates tiny variations in the color of our skin. These are invisible to the naked eye, but they can be captured by image processing software. Building on this, the researchers trained a classifier that distinguishes videos of real people from deep fakes with an accuracy of 91 to 96%.
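The signal the researchers exploit can be sketched in a few lines. The snippet below is a minimal, illustrative remote-photoplethysmography (rPPG) extraction, not FakeCatcher’s actual pipeline: it averages the green channel over a skin region in each frame (green correlates best with blood-volume changes in RGB video) and recovers the dominant pulse frequency. The frame stack, region of interest, and synthetic “pulse” are all stand-ins for real face video.

```python
import numpy as np

def rppg_signal(frames, roi):
    """Crude rPPG: mean green-channel intensity inside a skin region, per frame."""
    y0, y1, x0, x1 = roi
    means = np.array([f[y0:y1, x0:x1, 1].mean() for f in frames])
    # Remove the DC component so only the pulse-driven variation remains.
    return means - means.mean()

# Synthetic demo: 60 frames at 30 fps of a flat gray "face" with a faint
# sinusoidal green flicker standing in for the pulse (1.5 Hz, i.e. 90 bpm).
fps, n = 30, 60
t = np.arange(n) / fps
frames = np.full((n, 32, 32, 3), 128.0)
frames[:, :, :, 1] += 2.0 * np.sin(2 * np.pi * 1.5 * t)[:, None, None]

signal = rppg_signal(frames, roi=(8, 24, 8, 24))
freqs = np.fft.rfftfreq(n, 1 / fps)
peak_hz = freqs[np.argmax(np.abs(np.fft.rfft(signal)))]
print(f"dominant frequency: {peak_hz:.2f} Hz")  # 1.50 Hz
```

A detector like the one described would then feed statistics of such signals (and their spatial consistency across the face) into a classifier, since deep-fake generators do not reproduce this physiological flicker.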
Obviously, this victory will probably be temporary: sooner or later, the creators of deep fakes will manage to simulate these blood-flow color variations as well.
Source: IEEE Spectrum