IBM presented the results of an innovative project exploring the future of malware at the Black Hat USA cybersecurity conference in August 2018.
The team, which included three scientists from the computing giant's vaunted research division, managed to create a never-before-seen form of artificial intelligence-driven malware called DeepLocker.
The program uses neural network technology to pinpoint specific targets and deploy tailored strikes, lying dormant and undetected until the moment of attack.
The DeepLocker reveal sent shock waves through the data security community, whose members had long worried over the notion of weaponized AI and its potential use in the wild. While IBM's lab-generated bug was relegated to the digital test tube in which it was conceived, its mere existence seemed to indicate the age of AI-infused malware was near.
In the months since DeepLocker wooed white hats across the globe, the clamor has calmed.
However, there are grounds for concern. AI-guided malware could prove catastrophic to businesses in almost every sector, including those navigating the financial services space, many of which already struggle to protect their digital assets. In fact, almost 40 percent of financial entities do not have an overall data security strategy in place, according to research from PricewaterhouseCoopers.
Business leaders, particularly those in the financial services arena, would be wise to collaborate with internal information technology stakeholders to explore and understand burgeoning cyberthreats like DeepLocker, which could do great harm to enterprise infrastructure if replicated in the real world.
AI-driven malware is coming, and organizations should begin preparing.
The end of spray-and-pray
Hackers deploying traditional malware leverage what IBM Principal Researcher and DeepLocker development leader Marc Ph. Stoecklin characterizes as the spray-and-pray methodology, which involves launching a large volume of attacks indiscriminately in hopes that some strikes will hit high-level targets.
This approach has proven effective. After all, cybercriminals managed to conduct more than 53,000 successful attacks in 2018 alone, according to research from Verizon. Unfortunately, AI-directed bugs like DeepLocker would give nefarious actors the power to conduct more targeted attacks that inflict more damage.
IBM's bug can enter enterprise networks by concealing its destructive contents within legitimate applications. However, these damaging digital assets remain locked away until certain preconfigured trigger conditions are met.
These values define an intended target and can include anything from audiovisual cues to geolocation characteristics. When DeepLocker latches onto a viable victim, its embedded neural network goes to work, deriving from the target's attributes a unique key capable of unlocking and releasing the sinister payload that lies within.
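IBM has described the basic mechanics of this scheme publicly: the payload ships encrypted, and the decryption key is never stored in the malware itself; instead, it is re-derived from attributes observed on the victim's machine. The sketch below illustrates the concept from a defender's perspective, using a hash function as a simplified stand-in for the neural network's one-way mapping. The attribute strings and the toy XOR cipher are illustrative assumptions, not details of IBM's implementation.

```python
import hashlib

def derive_key(observed_attributes: bytes) -> bytes:
    # DeepLocker uses a deep neural network to map target attributes to a
    # key; SHA-256 serves here as a simplified stand-in for that one-way mapping.
    return hashlib.sha256(observed_attributes).digest()

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher: XOR each byte against the repeating key stream.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Before deployment, the payload is locked under a key derived from the
# INTENDED target's attributes (a hypothetical trigger condition here).
intended_target = b"hostname=finance-01;locale=en_GB"
locked_payload = xor_cipher(b"malicious payload", derive_key(intended_target))

# After deployment, the malware re-derives the key from whatever it observes.
# Only a machine matching the trigger reproduces the key, so analysts holding
# the binary see neither the key nor the plaintext payload.
unlocked = xor_cipher(locked_payload, derive_key(b"hostname=finance-01;locale=en_GB"))
wrong = xor_cipher(locked_payload, derive_key(b"hostname=dev-lab-07;locale=en_US"))
```

Because the key exists only transiently on a matching target, recovering it from the binary means searching the space of all possible trigger attributes, which is precisely what makes post-hoc dissection so difficult.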
This attack methodology would not only prove extremely effective if deployed in the wild but also supremely difficult to counter. Why?
For one, the adaptive AI model at the center of DeepLocker allows it to move through enterprise networks undetected. Infected assets work as intended up to the moment of attack, meaning internal and external IT teams would be unaware that the mission-critical systems under their purview were compromised until it was too late.
Additionally, DeepLocker, and quite possibly its AI-infused descendants, is nearly impossible to dissect following deployment. The potential trigger conditions are so numerous that even the most seasoned and well-equipped white hats cannot reverse engineer an attack and recover the payload that propelled it.
Unpacking an extensive digital history
While AI-driven malware variants like DeepLocker might seem wholly unprecedented on the surface, these malicious programs actually hail from an extensive line of evasive viruses. In fact, hackers began leveraging metamorphic and polymorphic bugs during the 1980s and 1990s - assets that could mutate to evade antivirus software, according to Stoecklin.
Even as data security firms upped their defenses during the late 1990s and early aughts, cybercriminals struck back, rolling out malware with encrypted payloads and bugs that could leverage environment analysis modules to avoid detection.
DeepLocker merely represents the next generation of evasive malware.
Addressing weaponized AI
In November 2018, the white hats at Malwarebytes published their annual list of cybersecurity predictions. Perhaps the most startling item to make the roundup was the appearance of AI-boosted malware. Yes, the organization has predicted that viruses similar to DeepLocker may enter the online environment as soon as 2019.
That said, these assets may not be as advanced as IBM's entirely self-contained creation. Malwarebytes envisions an immediate future in which hackers leverage remote AI controllers to manipulate current malware variants, lending them only a fraction of the elusiveness associated with DeepLocker.
Still, these bugs pose a serious threat to modern financial services firms, many of which are ill-prepared to address even low-level cyberthreats.
With this in mind, organizations in this space must work quickly to optimize their data security infrastructure and roll out defenses that might mitigate threats posed by the early-stage AI-driven malware that could appear in 2019. Luckily, Stoecklin and his team provided some guidance on this front during the presentation at Black Hat USA.
While the fixes they prescribed for addressing the virus concealment angle are perhaps too ambitious to consider at this point, those centered on payload execution are doable for many financial services businesses. For instance, the DeepLocker team at IBM recommended the use of code analysis tools, which are widely available on the market.
Banks and other financial entities interested in taking action to prepare for the encroachment of weaponized AI would be wise to look into the available options, lest they risk being the first victims of DeepLocker's emancipated siblings.
Here at buguroo, we develop and deploy online financial fraud prevention solutions centered on deep learning technology. Our bugFraud platform leverages system monitoring, behavioral biometrics and other identity verification tools to separate legitimate account holders from fraudsters executing account takeovers and social engineering campaigns.
From run-of-the-mill remote access Trojans and form grabbers to code injection tools and keyloggers, the solution provides deep back-end visibility, lending internal and external technical specialists the insight they need to move forward with mitigation tactics.
Want to learn more about our bugFraud solution and how it might prepare your bank or financial organization for operating in a post-DeepLocker world? Connect with buguroo today.