Last year was full of cybersecurity disasters: security flaws revealed in billions of microchips, massive data breaches, and ransomware attacks that lock down computer systems until a payment is made, usually in an untraceable digital currency.
We’re going to see more mega-breaches and ransomware attacks in 2019. Planning for these and other established risks, like threats to web-connected consumer devices and to critical infrastructure such as electrical grids and transport systems, will be a top priority for security teams. But cyber-defenders should be paying attention to new threats, too. Here are some that should be on their watch lists:
Exploiting AI-generated fake video and audio
Thanks to advances in artificial intelligence, it’s now possible to create fake video and audio messages that are incredibly difficult to distinguish from the real thing. These “deepfakes” could be a boon to hackers in a couple of ways. AI-generated “phishing” e-mails that aim to trick people into handing over passwords and other sensitive data have already been shown to be more effective than ones generated by humans. Now hackers will be able to throw highly realistic fake video and audio into the mix, either to reinforce instructions in a phishing e-mail or as a standalone tactic.
Cybercriminals could also use the technology to manipulate stock prices by, say, posting a fake video of a CEO announcing that a company is facing a financing problem or some other crisis. There’s also the danger that deepfakes could be used to spread false news during elections and to stoke geopolitical tensions.
Such ploys would once have required the resources of a big movie studio, but now they can be pulled off by anyone with a decent computer and a powerful graphics card. Startups are developing technology to detect deepfakes, but it’s unclear how effective their efforts will be. In the meantime, the only real line of defense is security awareness training to sensitize people to the risk.