Film Video Digital

603-643-2627

MIT Technology Review: Once hailed as unhackable, blockchains are now getting hacked

More and more security holes are appearing in cryptocurrency and smart contract platforms, and some are fundamental to the way they were built.

Blockchains are particularly attractive to thieves because fraudulent transactions can’t be reversed as they often can be in the traditional financial system. Besides that, we’ve long known that just as blockchains have unique security features, they have unique vulnerabilities. Marketing slogans and headlines that called the technology “unhackable” were dead wrong.

Read More

Fast Company: This image-authentication startup is combating faux social media accounts, doctored photos, deep fakes, and more

Truepic’s technology is already used by the U.S. State Department and others. The startup now wants to get social media companies on board.

Truepic was founded in 2015 by Craig Stack, a Goldman Sachs alum who saw an opportunity in making it harder for Craigslist scammers and dating-site lurkers to deceive people. “It hit me that there were all these apps that deal with image manipulation or spoofing location and time settings,” says Stack, who now serves as COO. But today the company’s primary mission is to use image-verification tools to identify and battle more formidable forms of disinformation—from the faux social media accounts that the Kremlin used to manipulate the 2016 U.S. presidential election to the doctored photos that travel the back roads of WhatsApp and catalyze violence in places like Myanmar and India.

Read More

The Daily Dot: New website will endlessly generate fake faces thanks to AI

A new website that utilizes artificial intelligence can endlessly generate the faces of people who don’t actually exist.

The site, simply titled thispersondoesnotexist.com, is possible thanks to machine learning algorithms recently released by technology company Nvidia.

The AI works by analyzing countless photos of the human face in order to generate realistic ones of its own. While creating such images initially required advanced computer hardware and specific knowledge, the process is now widely available thanks to the site.

Philip Wang, a software engineer at Uber and creator of the website, told Motherboard that the new service is designed to “dream up a random face every two seconds.”

Read More

WIRED: A New Tool Protects Videos From Deepfakes and Tampering

Video has become an increasingly crucial tool for law enforcement, whether it comes from security cameras, police-worn body cameras, a bystander's smartphone, or another source. But a combination of "deepfake" video manipulation technology and security issues that plague so many connected devices has made it difficult to confirm the integrity of that footage. A new project suggests the answer lies in cryptographic authentication.

Called Amber Authenticate, the tool is meant to run in the background on a device as it captures video. At regular, user-determined intervals, the platform generates "hashes"—cryptographically scrambled representations of the data—that then get indelibly recorded on a public blockchain. If you run that same snippet of video footage through the algorithm again, the hashes will be different if anything has changed in the file's audio or video data—tipping you off to possible manipulation.
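The detection step described above boils down to hashing a file in fixed-size segments and comparing the fresh hashes to the ones recorded earlier. Below is a minimal sketch of that idea in Python; it is not Amber's actual code, the function names are hypothetical, a fixed byte-size segment stands in for the user-determined interval, and the blockchain recording step is omitted (the recorded hashes are just passed in as a list).

```python
import hashlib

SEGMENT_SIZE = 1024 * 1024  # stand-in for Amber's user-determined interval: 1 MiB per segment


def segment_hashes(path, segment_size=SEGMENT_SIZE):
    """Return a SHA-256 hex digest for each fixed-size segment of a file."""
    hashes = []
    with open(path, "rb") as f:
        while chunk := f.read(segment_size):
            hashes.append(hashlib.sha256(chunk).hexdigest())
    return hashes


def verify(path, recorded_hashes):
    """Re-hash the file and report which segments no longer match the record."""
    current = segment_hashes(path)
    if len(current) != len(recorded_hashes):
        # File grew or shrank: treat every segment index as suspect.
        return False, list(range(max(len(current), len(recorded_hashes))))
    mismatches = [i for i, (a, b) in enumerate(zip(current, recorded_hashes))
                  if a != b]
    return not mismatches, mismatches
```

Because a cryptographic hash changes completely when even one byte of input changes, any edit to the audio or video data flips the digest of the segment containing it, which is what "tips you off to possible manipulation."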

Read More

The Drum: Anatomy of a deepfake: how Salvador Dalí was brought back to life

Burger King’s Super Bowl spot featuring Andy Warhol eating a Whopper led many viewers to question whether what they were witnessing was real. Warhol, after all, has been dead for nearly 32 years, so how was he available to shoot a commercial?

It transpired that the Warhol in the film was ‘real’ – Burger King had repurposed Danish filmmaker Jørgen Leth’s 1982 study of the Factory don. Yet in the same week another artist really was being brought back to life using the same deepfake technology that placed Steve Buscemi’s face onto Jennifer Lawrence in a rather disturbing viral video.

Dalí Lives, the latest project from the Dalí Museum in Florida, is being marketed as an ‘avant-garde experience’ designed to celebrate the 30th anniversary of the surrealist’s death. Goodby Silverstein & Partners were brought on board to commemorate the occasion; the San Franciscan agency decided resurrection was the route to go down.

Read More

Variety: Hollywood PR Trio Launches Reputation-Management Cyber Firm to Vet Celebrities, Corporate Giants

Paul Pflug, Melissa Zukerman, and Hans-Dieter Kopal of Principal Communications have teamed with leading cyber research and security firm Edgeworth to form Foresight Solutions Group — a “reputation-management” entity that will use advanced technology, former FBI data analysts, and old-fashioned crisis-management skills to advise individuals and companies in an age where old tweets can bring down an Oscar host or jeopardize a billion-dollar superhero franchise.

Foresight’s tech team will also work proactively to squash erroneous and damaging social media content in the public sphere and on the dark web — like the unsettling rise of fake videos (or “deepfakes”) that have targeted stars like “Game of Thrones” lead Emilia Clarke with manufactured, but photo-realistic pornographic images.

Read More

Georgetown Security Studies Review: Policy Options for Fighting Deepfakes

Advances in machine learning are making it easy to create fake videos—popularly known as “deepfakes”—where people appear to say and do things they never did. For example, a faked video of Barack Obama went viral in April in which he appears to warn viewers about misinformation. Falsehoods already spread farther than the truth, and deepfakes are making it cheap and easy for anyone to create fake videos. When convincing fakes become commonplace, the public will also start to distrust real video evidence, especially when it does not match their biases. Unfortunately, the technology that enables deepfakes is advancing rapidly. Deepfakes will become easier to create, and humans will increasingly struggle to distinguish fake videos from real ones. Luckily, there is some hope that algorithms may be able to automatically detect deepfakes. Computer scientists have generally struggled to automate fact checking. However, early research suggests that fake videos may be an exception.

Read More

Reason: Should Congress Pass A "Deep Fakes" Law?

Axios reports that several important legislators have proposed new criminal laws banning the creation or distribution of so-called "deepfakes," computer generated videos that make it seem like someone did something they didn't actually do. The technological ability to create deepfakes has caused a lot of justifiable concern. But I wanted to express some skepticism about the current round of proposed new criminal laws.

Read More

VentureBeat: Google releases dataset to help AI systems spot fake audio recordings

When Google announced the Google News Initiative in March 2018, it pledged to release datasets that would help “advance state-of-the-art research” on fake audio detection — that is, clips generated by AI intended to mislead or fool voice authentication systems. Today, it’s making good on that promise.

The Google News team and Google’s AI research division, Google AI, have teamed up to produce a corpus of speech containing “thousands” of phrases spoken by the Mountain View company’s text-to-speech models. Phrases drawn from English newspaper articles are spoken by 68 different synthetic voices, which cover a variety of regional accents.

Read More

Techdirt: Deep Fakes: Let's Not Go Off The Deep End

In just a few short months, "deep fakes" are striking fear in technology experts and lawmakers. Already there are legislative proposals, a law review article, national security commentaries, and dozens of opinion pieces claiming that this new deep fake technology — which uses artificial intelligence to produce realistic-looking simulated videos — will spell the end of truth in media as we know it.

But will that future come to pass?

Read More

CNET: Deepfakes, disinformation among global threats cited at Senate hearing

At this year's Worldwide Threats hearing before the US Senate's Select Committee on Intelligence, the leaders of the country's top intelligence agencies, including the National Security Agency, the CIA and the FBI, again pointed at tech issues as their biggest worry.

The Tuesday hearing covered issues like weapons of mass destruction, terrorism, and organized crime, but technology's problems took center stage. That echoes last year's hearing, when officials flagged cybersecurity as their greatest concern, after major hacks like the NotPetya attack, which cost billions in damages. But concerns over technology aren't limited to cyberattacks: Lawmakers also brought up deepfakes, artificial intelligence, disinformation campaigns on social media, and the vulnerability of internet of things devices.

Read More

Carnegie Endowment for International Peace: How Should Countries Tackle Deepfakes?

WHAT KINDS OF DAMAGE COULD DEEPFAKES CAUSE IN GLOBAL MARKETS OR INTERNATIONAL AFFAIRS?

Deepfakes could incite political violence, sabotage elections, and unsettle diplomatic relations. Earlier this year, for instance, a Belgian political party published a deepfake on Facebook that appeared to show U.S. President Donald Trump criticizing Belgium’s stance on climate change. The unsophisticated video was relatively easy to dismiss, but it still provoked hundreds of online comments expressing outrage that the U.S. president would interfere in Belgium’s internal affairs.

Read More

CNN: When seeing is no longer believing

Inside the Pentagon’s race against deepfake videos

Advances in artificial intelligence could soon make creating convincing fake audio and video – known as “deepfakes” – relatively easy. Making a person appear to say or do something they did not has the potential to take the war of disinformation to a whole new level.

Read More

Buzzfeed: It's Not Fake Video We Should Be Worried About — It's Real Video

The big mood these days is waiting on the tech apocalypse. All it takes is a video of a humanoid robot displaying the motor skills of a 6-year-old to have people preparing for Skynet to kill us all. The same goes, perhaps even more so, for fears of “deepfakes”: software getting good enough that anybody with an iPhone can doctor fake videos that can spark a riot. Seeing computers convincingly putting words in the mouths of presidents is scary, and once a Macedonian teenager can do it in minutes it’s game over, so the thinking goes.

But if the last few years — and yes, the particularly hellish last few days — have taught us anything, it’s that fake video isn’t going to destroy our ability to see the truth. It’s the real video we need to worry about, and our true problem is that we can all see the very same thing and disagree on what it was.

Read More

Interview with Hany Farid on "Everything in Your Archive is Now Fake"

John Tariot interviews Hany Farid, professor at the Berkeley School of Information, where he focuses on digital forensics, image analysis, and human perception. He is one of the subjects in the New Yorker article, “In the Age of A.I., Is Seeing Still Believing?” Hany and John Tariot have been having an ongoing conversation about the impact deepfakes will have on archives, and he and John continue the discussion here on some of the issues raised at the Association of Moving Image Archivists conference session “Everything in Your Archive is Now Fake.”

Read More

Fortune: Fake Porn Videos Are Terrorizing Women. Do We Need a Law to Stop Them?

In the darker corners of the Internet, you can now find celebrities like Emma Watson and Salma Hayek performing in pornographic videos. The clips are fake, of course—but it’s distressingly hard to tell. Recent improvements in artificial intelligence software have made it surprisingly easy to graft the heads of stars, and ordinary women, to the bodies of X-rated actresses to create realistic videos.

These explicit movies are just one strain of so-called “deepfakes,” which are clips that have been doctored so well they look real. Their arrival poses a threat to democracy; mischief makers can, and already have, used them to spread fake news. But another great danger of deepfakes is their use as a tool to harass and humiliate women.

Read More

Washington Post: Fake-porn videos are being weaponized to harass women: ‘Everybody is a potential target’

“Deepfake” creators are making disturbingly realistic, computer-generated videos with photos taken from the Web, and ordinary women are suffering the damage.

How fake-porn opponents are fighting back: The best hope for fighting computer-generated fake-porn videos might come from a surprising source: the artificial intelligence software itself.

Technical experts and online trackers say they are developing tools that could automatically spot these “deepfakes” by using the software’s skills against it, deploying image-recognition algorithms that could help detect the ways their imagery bends belief.

Read More

CNBC: Anti-election meddling group makes A.I.-powered Trump impersonator to warn about ‘deepfakes'

A political organization endorsed by former U.S. Vice President Joe Biden is concerned so-called “deepfakes” could be a threat to democracy.

It developed an online quiz to see whether people found an AI-generated Trump impersonator more convincing than actors and comedians.

The next step for the foundation is building deepfake-detection software, rolling it out to journalists, and educating the public about the technology.

Read More