McAfee Unveils Tool to Identify Potential Deepfakes
McAfee today added a deepfake detection tool to its portfolio that will initially be made available on Lenovo PCs optimized to run artificial intelligence (AI) applications.
In addition, McAfee is launching a Smart AI Hub through which it will provide education and training to help end users better identify deepfakes created using AI technologies.
Earlier this year, McAfee revealed it was working with Intel to develop McAfee Deepfake Detector, which makes use of the neural processing units (NPUs) embedded in a forthcoming series of PCs designed to run AI applications.
McAfee CTO Steve Grobman said McAfee Deepfake Detector deploys an AI inference engine on the NPUs found in Lenovo Copilot+ PCs to detect audio that has been altered using AI technologies. That capability makes it possible to detect, in seconds, scams perpetrated using deepfakes, without having to upload files from the PC to an external service, he added.
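McAfee has not published Deepfake Detector's model or APIs, but the general pattern Grobman describes, running an audio classifier locally through an NPU-backed runtime rather than a cloud endpoint, can be sketched with an open inference runtime such as ONNX Runtime. The model file name, input layout and class labels below are illustrative assumptions, not McAfee's implementation:

```python
# Hypothetical sketch of on-device audio deepfake scoring. McAfee has not
# published Deepfake Detector's model or APIs; the ONNX model file, input
# layout and class labels below are illustrative assumptions only.
import numpy as np
import soundfile as sf
import onnxruntime as ort

# Prefer an NPU-backed execution provider when one is available (for
# example, the Qualcomm QNN provider on Copilot+ hardware), falling back
# to the CPU so the sketch still runs on ordinary machines.
preferred = ("QNNExecutionProvider", "CPUExecutionProvider")
providers = [p for p in preferred if p in ort.get_available_providers()]

# "audio_deepfake_detector.onnx" is a placeholder name, not a real artifact.
session = ort.InferenceSession("audio_deepfake_detector.onnx", providers=providers)

def score_clip(path: str) -> float:
    """Return an assumed 0..1 probability that a clip's audio is AI-generated."""
    audio, sample_rate = sf.read(path, dtype="float32")
    if audio.ndim > 1:                 # downmix stereo to mono
        audio = audio.mean(axis=1)
    window = audio[: sample_rate * 4]  # analyze the first four seconds
    inputs = {session.get_inputs()[0].name: window[np.newaxis, :]}
    (scores,) = session.run(None, inputs)
    return float(scores[0][1])         # index 1 assumed to be the "synthetic" class

# Flag a local recording without uploading it anywhere.
print(f"Deepfake likelihood: {score_clip('call_recording.wav'):.0%}")
```

The design point is the one Grobman makes: because inference runs through a local execution provider, the recording never leaves the machine.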
McAfee plans to eventually add the ability to analyze video files, but most deepfake scams today involve audio that has been altered either to spread misinformation or to compromise a business workflow; an accounting team, for example, might be directed to pay a fraudulent invoice.
The NPU enables audio and video files to be analyzed in near real time without McAfee having to collect or secure them on behalf of the end user, noted Grobman.
Of course, not every audio or video file created using AI is malicious. However, end users should be made aware when audio or video files have been altered, because it is becoming increasingly easy for cybercriminals to create deepfakes that, for example, impersonate the chief financial officer (CFO) of an organization, he added.
Most employees don’t interact much with the CFO of their organization, so it’s much easier to deceive them than many cybersecurity teams may appreciate, said Grobman.
It’s not clear how many deepfake attacks are being launched, but several cyberattacks involving deepfakes have already led, in some instances, to millions of dollars in losses. As such, an argument can be made for replacing legacy PCs with next-generation AI PCs that have the capabilities required to thwart these types of attacks.
Each organization will naturally need to determine to what degree it might terminate employees who fall victim to deepfake attacks. Arguably, it’s difficult to justify firing an employee if the tools and training required to thwart these attacks were not provided in the first place. The challenge now is assessing the total cost of providing that capability in an era when the cost of a single breach could far exceed the cost of a PC upgrade.
Regardless of who is ultimately accountable, one thing is certain: In the event of a breach involving a deepfake, it will be already short-handed cybersecurity teams that are tasked with cleaning up the mess.