‘Deepfake’ videos: to believe or not believe?


Queen Elizabeth had never before included a dance routine in her annual Christmas message, nor had North Korean dictator Kim Jong Un previously warned Americans that “democracy is fragile”, but that is what they appeared to do in videos that went viral last year.

Both videos — and thousands like them — were “deepfakes” that manipulate the speech and actions of politicians and celebrities using synthetic faces generated by artificial intelligence.

Although many deepfakes are produced for their comedic or shock value with no intention of misleading viewers, such videos have become a tool for spreading misinformation.

[Image: A viral deepfake video featuring Kim Jong Un was used to encourage Americans to vote © RepresentUs/YouTube]

“What is distinctive about deepfakes is that the audiovisual element has a more powerful effect on our psychology than other types of media,” says Jon Bateman, a researcher in the Cyber Policy Initiative at the Carnegie Endowment for International Peace.

In principle, AI could automate media editing and manipulation and make them accessible to anyone, but this method of deception requires huge amounts of training data and technical skill. Other, more straightforward methods of attacking targets or influencing public opinion already offer a simpler pay-off.

Manipulated videos that do not use deep learning algorithms — referred to as “cheapfakes” or “shallowfakes” — are widespread on social media and can alter opinion even after they are debunked. A video that made Nancy Pelosi, the US House Speaker, appear drunk and slurring her words was created simply by slowing down genuine footage, yet it went viral despite being an obvious fake.

“Deepfakes are a reality but they are impractical to create,” says Andy Patel, an AI researcher at F-Secure, a cyber security group. “Shallowfakes are quickly debunked but people still believe them — so why would you put more effort into it?”

Beyond the eventual potential for anyone to create a convincing fake video using out-of-the-box AI tools, deepfakes do offer new opportunities in high-stakes, highly targeted contexts. Mr Bateman calls these “narrowcast” deepfakes, as opposed to deepfakes broadcast widely online. Criminal and state-sponsored hacking groups can use them to impersonate specific people for fraudulent purposes, he says.

Such attacks usually rely on email communication or human impersonation over a phone call, dubbed “vishing”. Attacks using voice calls tap into the emotive power of direct human interaction, and AI can make such impersonators harder to detect. In 2019, an unnamed UK energy company transferred £200,000 to fraudsters who used deepfake audio to impersonate the chief executive of its German parent company on a phone call.

Ed Bishop, co-founder and chief technology officer of cyber security company Tessian, says that while a deep learning model requires huge amounts of training data, it only needs a limited sample of audio or video data of the target to generate a personalised deepfake — about one minute of audio or 20 to 40 minutes of video.

Deepfake written communication — AI-generated text that mimics believable, casual correspondence — is a big focus for Tessian, since Twitter, LinkedIn and public blogs offer readily available data sets of personal communication that hackers could harness.
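As a toy illustration of how scraped public posts could seed a style-mimicking generator (a deliberately simple sketch, not Tessian's method; real attackers would use large language models), even a word-level Markov chain built from a target's posts can produce short, passable messages:

```python
# Toy sketch: a word-level Markov chain "trained" on scraped posts.
# Purely illustrative; real attacks would use large language models.
import random
from collections import defaultdict

def build_chain(corpus: str) -> dict:
    """Map each word to the list of words observed to follow it."""
    chain = defaultdict(list)
    words = corpus.split()
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain: dict, seed: str, length: int = 20) -> str:
    """Walk the chain from a seed word, sampling a successor each step."""
    word, output = seed, [seed]
    for _ in range(length):
        successors = chain.get(word)
        if not successors:
            break
        word = random.choice(successors)
        output.append(word)
    return " ".join(output)

# Scraped tweets, posts or blog text would stand in for this corpus.
corpus = ("thanks for the update can we sync on the numbers tomorrow "
          "thanks for the call let me know about the numbers")
print(generate(build_chain(corpus), seed="thanks"))
```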

Social media platforms are leading the fight against malicious deepfakes, although researchers and security experts see targeted attacks as a bigger threat than viral misinformation.

Both Facebook and Google have generated and shared huge data sets of their own deepfake audio and video clips, created for training AI-based deepfake detection models. But the deep learning models used to create deepfakes — generative adversarial networks (GANs) — work by pitting a generator, which produces fakes, against a discriminator, which tries to detect them, refining the fakes until they evade detection. Developing AI tools that can beat these models at their own game is difficult.
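The dynamic can be sketched in a few lines of PyTorch. This is a generic toy example on random feature vectors, not Facebook's or Google's actual models: the discriminator learns to score samples as real or fake, and the generator is updated to make its fakes harder to detect.

```python
# Generic toy GAN (not any company's production code).
# D learns to separate real from generated samples; G is trained
# to produce samples that D scores as real.
import torch
import torch.nn as nn

dim = 16  # dimensionality of the toy "media" feature vectors
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, dim))
D = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(64, dim) + 2.0   # stand-in for real media features
    fake = G(torch.randn(64, 8))        # generator's current fakes

    # Discriminator step: label real samples 1, generated samples 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: refine fakes so the discriminator labels them 1.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Each side's improvement becomes the other's training signal, which is why a detector trained today tends to lag the next generation of fakes.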

Accenture’s Cyber Fusion Center, the US consultancy’s cyber security R&D lab, used Facebook’s deepfake data set to develop a detection tool that runs several AI models, each analysing different deepfake features, and weights their indicators to produce an estimate of authenticity.

Malek Ben Salem, a cyber researcher who worked on the project, says detection tools analyse metadata and technical features, physical and physiological integrity, and audio and semantic irregularities.
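In rough outline, such a weighted ensemble might look like the following sketch. The indicator names and weights are invented for illustration and are not taken from Accenture's actual tool:

```python
# Illustrative weighted ensemble; indicators and weights are made up.
def authenticity_score(indicators: dict, weights: dict) -> float:
    """Each indicator is a model's probability that the sample is fake;
    return a weighted estimate of authenticity in [0, 1]."""
    total = sum(weights.values())
    fake_prob = sum(weights[k] * indicators[k] for k in weights) / total
    return 1.0 - fake_prob

indicators = {
    "metadata_anomaly": 0.10,      # technical features and metadata
    "blink_irregularity": 0.70,    # physiological integrity
    "audio_video_mismatch": 0.40,  # audio and semantic irregularities
}
weights = {"metadata_anomaly": 0.2,
           "blink_irregularity": 0.5,
           "audio_video_mismatch": 0.3}

print(f"estimated authenticity: {authenticity_score(indicators, weights):.2f}")
# prints: estimated authenticity: 0.51
```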

Further advances could be made by embedding a “proof of origin” functionality at the source of media creation — phones and other devices — that would encode a digital stamp specifying when and where the media was recorded.

This could enable social media and news platforms automatically to verify and label the authenticity of media they publish. Microsoft has partnered with media organisations on Project Origin to develop tools that create and detect “digital fingerprints” on media samples.
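A minimal sketch of the idea, assuming a device-held key pair and an Ed25519 signature scheme (Project Origin's actual protocol and formats differ): the capture device signs a hash of the media plus the recording metadata, and a platform later verifies that signature before labelling the content.

```python
# Assumed design for illustration only; not Project Origin's protocol.
# Device side signs a hash of the media plus recording metadata;
# platform side verifies the signature before labelling the media.
import hashlib, json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

device_key = Ed25519PrivateKey.generate()  # would live in secure hardware
public_key = device_key.public_key()       # published by the manufacturer

def stamp(media: bytes, when: str, where: str) -> bytes:
    """Device side: sign the media hash and the recording metadata."""
    payload = json.dumps({
        "sha256": hashlib.sha256(media).hexdigest(),
        "when": when,
        "where": where,
    }).encode()
    return payload + device_key.sign(payload)  # Ed25519 sigs are 64 bytes

def verify(media: bytes, stamped: bytes) -> bool:
    """Platform side: check the signature, then check the hash matches."""
    payload, signature = stamped[:-64], stamped[-64:]
    try:
        public_key.verify(signature, payload)
    except InvalidSignature:
        return False
    claimed = json.loads(payload)["sha256"]
    return claimed == hashlib.sha256(media).hexdigest()

video = b"raw camera bytes"
proof = stamp(video, when="2021-01-26T03:00Z", where="51.5N,0.1W")
print(verify(video, proof))            # True
print(verify(b"edited bytes", proof))  # False: the hash no longer matches
```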

Yet the looming challenge is to strike a balance between appropriate awareness and radical scepticism, says Mr Bateman. Deepfakes are unlikely to be both effective and universally undetected in spreading misinformation, but they do not need to be: the mere threat of convincing deepfakes — alongside viral shallowfakes and unconvincing deepfakes — is enough to undermine traditional sources of authority.

John Conwell, principal data scientist at internet forensics group DomainTools, emphasises the need for public education and the use of tools such as reverse media searches, but also notes their limitations. “Relying on individuals to verify media authenticity doesn’t work when the fake information matches their perception of reality, which is why political memes are so effective.”

Published at Tue, 26 Jan 2021 03:00:00 +0000

BlackBerry Expands Partnership with Baidu to Power Next Generation Autonomous Driving …

WATERLOO, ON and BEIJING, Jan. 25, 2021 /PRNewswire/ — BlackBerry Limited (NYSE: BB; TSX: BB) today announced an expansion of its strategic partnership with Baidu, whose high-definition maps will run on the QNX® Neutrino® Real-time Operating System (RTOS) and will be mass-produced in the forthcoming GAC New Energy Aion models from the EV arm of GAC Group (Guangzhou Automobile Group Co., Ltd.).

The milestone builds on the companies’ January 2018 agreement to make BlackBerry QNX’s industry-leading ISO 26262 ASIL D certified operating system (OS) the foundation for Baidu’s ‘Apollo’ autonomous driving open platform.

Baidu is one of the few high-definition map vendors with an Automotive SPICE® certification from TÜV Rheinland, an industry certification that sets stringent requirements for the software development processes of Tier 1 and Tier 2 automotive suppliers. With world-leading levels of data granularity, Baidu’s high-definition maps provide a critical component for global automakers looking to launch next generation connected and autonomous vehicles in China.

The QNX Neutrino RTOS, the foundation for Baidu’s high-definition maps, is a robust real-time microkernel operating system that provides deterministic performance as well as the flexibility to work within the limited resources of embedded systems.

“With BlackBerry QNX’s embedded software as its foundation, Baidu has made significant progress as part of its Apollo platform in establishing a commercial ecosystem for innovative technologies that OEMs can leverage for their next generation vehicles,” said Dhiraj Handa, VP, Channel, Partners and APAC, BlackBerry Technology Solutions. “We look forward to continuing to work closely with Baidu to help develop and deploy leading edge autonomous driving and connected vehicle technologies to meet the ever increasing mission-critical and security requirements of the automotive industry.”

“We aim to provide car manufacturers with a clear and fast path to the production of autonomous vehicles, with safety and security as the top priority. The BlackBerry QNX software performs well in functional safety, network security and reliability, while Baidu has achieved long-term development in artificial intelligence and deep learning. Together, we can help car manufacturers quickly produce safe autonomous vehicles and collaboratively promote the development of the intelligent networked automobile industry,” said Wang Yunpeng, Senior Director of the Technology Department of Baidu’s Intelligent Driving Group.

As the leader in safe, secure, and reliable software for critical embedded systems, BlackBerry QNX provides OEMs and Tier 1s around the world with state-of-the-art foundational software and cybersecurity technologies. BlackBerry QNX technology is used in more than 175 million vehicles on the road, in advanced driver assistance systems (ADAS), digital instrument clusters, connectivity modules, hands-free systems and infotainment systems.

About BlackBerry

BlackBerry (NYSE: BB; TSX: BB) provides intelligent security software and services to enterprises and governments around the world. The company secures more than 500M endpoints, including over 175M cars on the road today. Based in Waterloo, Ontario, the company leverages AI and machine learning to deliver innovative solutions in the areas of cybersecurity, safety and data privacy, and is a leader in endpoint security management, encryption, and embedded systems. BlackBerry’s vision is clear: to secure a connected future you can trust.

BlackBerry. Intelligent Security. Everywhere. 

For more information, visit BlackBerry.com and follow @BlackBerry.  

Trademarks, including but not limited to BLACKBERRY, EMBLEM Design and QNX are the trademarks or registered trademarks of BlackBerry Limited, its subsidiaries and/or affiliates, used under license, and the exclusive rights to such trademarks are expressly reserved. All other trademarks are the property of their respective owners. BlackBerry is not responsible for any third-party products or services.

Media Contact:
BlackBerry Media Relations
+1 (519) 597-7273
[email protected]

SOURCE BlackBerry Limited

Related Links

https://www.blackberry.com

Published at Tue, 26 Jan 2021 01:52:30 +0000