Film Video Digital



Fast Company: How to spot fake photos online

Advances in artificial intelligence have made it easier to create compelling and sophisticated fake images, videos, and audio recordings. Meanwhile, misinformation proliferates on social media, and a polarized public may have become accustomed to being fed news that conforms to their worldview.

All of this contributes to a climate in which it is increasingly difficult to believe what you see and hear online.

There are some things that you can do to protect yourself from falling for a hoax. As the author of the upcoming book Fake Photos, to be published in August, I’d like to offer a few tips.

Read More

ABA Journal: As deepfakes make it harder to discern truth, lawyers can be gatekeepers

Deepfakes, also called “AI synthesized fakes,” are rapidly evolving and proliferating. While many websites have banned the use of the technology, new forensic tools are being developed to root out fakes. Meanwhile, lawmakers are pushing for new regulations, while many lawyers argue that the law is already able to manage the illegal use of the emerging technology.

Read More

MIT Technology Review: Once hailed as unhackable, blockchains are now getting hacked

More and more security holes are appearing in cryptocurrency and smart contract platforms, and some are fundamental to the way they were built.

Blockchains are particularly attractive to thieves because fraudulent transactions can’t be reversed as they often can be in the traditional financial system. Besides that, we’ve long known that just as blockchains have unique security features, they have unique vulnerabilities. Marketing slogans and headlines that called the technology “unhackable” were dead wrong.

Read More

Fast Company: This image-authentication startup is combating faux social media accounts, doctored photos, deep fakes, and more

Truepic’s technology is already used by the U.S. State Department and others. The startup now wants to get social media companies on board.

Truepic was founded in 2015 by Craig Stack, a Goldman Sachs alum who saw an opportunity in making it harder for Craigslist scammers and dating-site lurkers to deceive people. “It hit me that there were all these apps that deal with image manipulation or spoofing location and time settings,” says Stack, who now serves as COO. But today the company’s primary mission is to use image-verification tools to identify and battle more formidable forms of disinformation—from the faux social media accounts that the Kremlin used to manipulate the 2016 U.S. presidential election to the doctored photos that travel the back roads of WhatsApp and catalyze violence in places like Myanmar and India.

Read More

The Daily Dot: New website will endlessly generate fake faces thanks to AI

A new website that utilizes artificial intelligence can endlessly generate the faces of people who don’t actually exist.

The site, simply titled thispersondoesnotexist.com, is possible thanks to machine learning algorithms recently released by technology company Nvidia.

The AI works by analyzing countless photos of the human face in order to generate realistic ones of its own. While creating such images initially required advanced computer hardware and specific knowledge, the process is now widely available thanks to the site.

Philip Wang, a software engineer at Uber and creator of the website, told Motherboard that the new service is designed to “dream up a random face every two seconds.”

Read More

WIRED: A New Tool Protects Videos From Deepfakes and Tampering

Video has become an increasingly crucial tool for law enforcement, whether it comes from security cameras, police-worn body cameras, a bystander's smartphone, or another source. But a combination of "deepfake" video manipulation technology and security issues that plague so many connected devices has made it difficult to confirm the integrity of that footage. A new project suggests the answer lies in cryptographic authentication.

Called Amber Authenticate, the tool is meant to run in the background on a device as it captures video. At regular, user-determined intervals, the platform generates "hashes"—cryptographically scrambled representations of the data—that then get indelibly recorded on a public blockchain. If you run that same snippet of video footage through the algorithm again, the hashes will be different if anything has changed in the file's audio or video data—tipping you off to possible manipulation.
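Amber's actual implementation is not described in detail here, but the interval-hashing idea it rests on is simple to illustrate. The following is a minimal, hypothetical Python sketch (the segment size and file contents are invented for demonstration): hash fixed-size chunks of the footage, and later re-hash the same chunks to see whether any segment has been altered.

```python
import hashlib

def segment_hashes(data: bytes, segment_size: int) -> list[str]:
    """Hash fixed-size segments of a byte stream, mimicking
    interval-based hashing of a video file."""
    return [
        hashlib.sha256(data[i:i + segment_size]).hexdigest()
        for i in range(0, len(data), segment_size)
    ]

# Simulated "footage" and a tampered copy with one byte flipped.
original = bytes(range(256)) * 40          # 10,240 bytes of stand-in data
tampered = bytearray(original)
tampered[5000] ^= 0xFF                     # alter one byte deep in the file

orig_hashes = segment_hashes(original, 1024)
new_hashes = segment_hashes(bytes(tampered), 1024)

# Any mismatch flags the altered segment; matching hashes elsewhere
# show the rest of the file is untouched.
altered = [i for i, (a, b) in enumerate(zip(orig_hashes, new_hashes))
           if a != b]
print(altered)  # segment index containing the flipped byte
```

In the real system the hashes would be anchored on a public blockchain rather than kept locally, so that a verifier can trust the reference hashes were not rewritten along with the footage.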

Read More

The Drum: Anatomy of a deepfake: how Salvador Dalí was brought back to life

Burger King’s Super Bowl spot featuring Andy Warhol eating a Whopper led many viewers to question whether what they were witnessing was real. Warhol, after all, has been dead for nearly 32 years, so how was he available to shoot a commercial?

It transpired that the Warhol in the film was ‘real’ – Burger King had repurposed Danish filmmaker Jørgen Leth’s 1982 study of the Factory don. Yet in the same week another artist really was being brought back to life using the same deepfake technology that placed Steve Buscemi’s face onto Jennifer Lawrence in a rather disturbing viral video.

Dalí Lives, the latest project from the Dalí Museum in Florida, is being marketed as an ‘avant-garde experience’ designed to celebrate the 30th anniversary of the surrealist’s death. Goodby Silverstein & Partners were brought on board to commemorate the occasion; the San Franciscan agency decided resurrection was the route to go down.

Read More

Variety: Hollywood PR Trio Launches Reputation-Management Cyber Firm to Vet Celebrities, Corporate Giants

Paul Pflug, Melissa Zukerman, and Hans-Dieter Kopal of Principal Communications have teamed with leading cyber research and security firm Edgeworth to form Foresight Solutions Group — a “reputation-management” entity that will use advanced technology, former FBI data analysts, and old-fashioned crisis-management skills to advise individuals and companies in an age where old tweets can bring down an Oscar host or jeopardize a billion-dollar superhero franchise.

Foresight’s tech team will also work proactively to squash erroneous and damaging social media content in the public sphere and on the dark web — like the unsettling rise of fake videos (or “deepfakes”) that have targeted stars like “Game of Thrones” lead Emilia Clarke with manufactured, but photo-realistic pornographic images.

Read More

Georgetown Security Studies Review: Policy Options for Fighting Deepfakes

Advances in machine learning are making it easy to create fake videos—popularly known as “deepfakes”—where people appear to say and do things they never did. For example, a faked video of Barack Obama went viral in April in which he appears to warn viewers about misinformation. Falsehoods already spread farther than the truth, and deepfakes are making it cheap and easy for anyone to create fake videos. When convincing fakes become commonplace, the public will also start to distrust real video evidence, especially when it does not match their biases. Unfortunately, the technology that enables deepfakes is advancing rapidly. Deepfakes will become easier to create, and humans will increasingly struggle to distinguish fake videos from real ones. Luckily, there is some hope that algorithms may be able to automatically detect deepfakes. Computer scientists have generally struggled to automate fact checking. However, early research suggests that fake videos may be an exception.

Read More

Reason: Should Congress Pass A "Deep Fakes" Law?

Axios reports that several important legislators have proposed new criminal laws banning the creation or distribution of so-called "deepfakes," computer generated videos that make it seem like someone did something they didn't actually do. The technological ability to create deepfakes has caused a lot of justifiable concern. But I wanted to express some skepticism about the current round of proposed new criminal laws.

Read More

Techdirt: Deep Fakes: Let's Not Go Off The Deep End

In just a few short months, "deep fakes" are striking fear in technology experts and lawmakers. Already there are legislative proposals, a law review article, national security commentaries, and dozens of opinion pieces claiming that this new deep fake technology — which uses artificial intelligence to produce realistic-looking simulated videos — will spell the end of truth in media as we know it.

But will that future come to pass?

Read More