Technology can't rescue us from AI-generated fake news
Digital forensics experts can still easily tell computer-generated fakes apart from real images, but they're no good at stopping them going viral in the first place
By Matthew Reynolds
09 Jan 2018
After two decades as a digital forensics expert, Hany Farid has come to know the telltale signs of a fake image. The shadows are often a dead giveaway. “They tell you a lot about the scene,” he says. “The nature of the light in the scene, where it was coming from.” Forgers often get them wrong, putting shadows in improbable locations or omitting them altogether.
For most of Farid’s career, digital forensics has boiled down to a simple question. “It’s asking whether the video, image or audio recording has been manipulated since it was recorded,” he says. But now the question is no longer “has this blemish been removed or this scene altered,” it’s “did this scene ever exist in the first place?”
“There’s a whole new host of ways of manipulating photographs now,” Farid says. Photoshop has made it easier for anyone to edit images convincingly enough to fool untrained eyes into thinking they’re seeing the real deal. The kind of image that Farid finds himself working on for courts or news agencies has changed too. Now that everyone has a camera in their pocket, he’s often presented with blurry footage taken at an unknown time and place and uploaded to the web anonymously.
Recent advances in machine learning have got Farid particularly worried. 2017 was a bumper year for image-faking technology. In July, researchers at the University of Washington trained an AI that converts audio into realistic mouth movements, creating a video of someone saying words they never actually uttered. A paper from the graphics card company Nvidia released in October showed that you can use machine learning to automatically change the weather in a photograph, turning a summer’s day into a wintry scene with snow piled up on pavements and the trees leafless.
Another group from Nvidia trained a machine learning algorithm to generate images of unknown celebrities. Trained on a database of 30,000 photos for 20 days, the software learned to create high-resolution 'photographs' that looked almost indistinguishable from the real thing. But those celebrities don’t exist in the real world – they are simply an algorithm’s idea of what a celebrity looks like.

read on: http://www.wired.co.uk/article/fake-images-video-nvidia-news-online-twitter-facebook-digital-forensics
Love Always
mudra