The Ponemon Institute recently published one of its recurring reports on the cybersecurity industry. The report draws on interviews with client companies that use Threat Intelligence services, exploring which components they consider most important and whether they actually find them useful. It compares the years 2015, 2016 and 2017.
The conclusion is not very promising for the industry: almost 70% of these services’ clients believe that alerts do not arrive in time, and that they are either incomprehensible or unusable for making decisions and mitigating a threat’s impact.
A result like this should prompt deep reflection. Although on other occasions we have remarked on how vigorously cybercrime evolves, our industry usually focuses more on a threat’s “harshness” or “innovation”.
This report serves as a reminder that it is time to underscore, once again, two crucial components of Crime-as-a-Service: speed and adaptability.
Where we have come from
At an earlier stage, cybercrime lacked the skills and resources needed to create threats that were both complex and able to adapt to security measures. It didn’t need them!
The same malware would circle the planet for weeks on end and reap its rewards. This enabled cybersecurity firms to adopt a black-box testing approach, thus giving them time to identify the threat and implement a solution to prevent it from spreading.
Furthermore, in this context, Threat Intelligence can quite quickly compile a series of Indicators of Compromise (IOCs), usually based on device information, IPs, e-mails, etc., in addition to supplying information that proves very useful for blocking the threat. In fact, in this model, collective intelligence has a great deal to offer, as it enables repositories to be created and shared across different industries and/or among companies in the same industry.
To sum up, what happens to one firm can be used to stop the threat in another.
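This sharing model can be pictured with a minimal sketch. Everything here is illustrative: the class, indicator types and sample values are assumptions for the sake of the example, not a real threat-feed format such as those used by commercial platforms.

```python
# Hypothetical sketch of a shared IOC repository: one firm publishes
# indicators observed in an attack, another firm checks its own events
# against them. Indicator types and values are illustrative only.

from dataclasses import dataclass, field


@dataclass
class IOCRepository:
    """A minimal shared store of Indicators of Compromise, keyed by type."""
    indicators: dict = field(default_factory=dict)

    def add(self, kind: str, value: str) -> None:
        """Publish an indicator (e.g. a malicious IP or sender address)."""
        self.indicators.setdefault(kind, set()).add(value)

    def match(self, kind: str, value: str) -> bool:
        """Check an observed value against the shared indicators."""
        return value in self.indicators.get(kind, set())


# Firm A publishes what it saw during an incident...
repo = IOCRepository()
repo.add("ip", "203.0.113.7")            # documentation-range IP, illustrative
repo.add("email", "phish@example.com")

# ...and Firm B can block on the same indicators.
print(repo.match("ip", "203.0.113.7"))    # True  -> block
print(repo.match("ip", "198.51.100.1"))   # False -> not (yet) a known IOC
```

The limitation the article goes on to describe is visible even in this toy: the repository can only answer for indicators someone has already observed and shared.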
Nevertheless, the cybercriminal behavior we are used to has changed and, although the old models are still in use, they will gradually disappear.
Where we are today
This new stage has seen the release of source code for a variety of threats and the sale of exploit kits that make it possible to run new malware campaigns without much technical knowledge. The latest malware is smart and mutates on the fly in order to break through any protection measure.
New stakeholders have emerged in the cybercriminal value chain, such as cybercriminal payment platforms, mule management, etc., which support cybercriminal groups in the preparation of fast-moving campaigns that quickly adapt to the mitigation measures implemented by companies.
In this new climate, and with a growing cybercriminal black market, phishing or malware campaigns may last for only a few days, or cause devastating impact in just a few hours.
In this new context, previous defense models based on analyzing the known are drifting further and further from what companies really need, and the gap is widening. In short, Threat Intelligence provides extremely valuable support but, on its own, it cannot be the core component; this is what the study reveals.
The industry’s current challenge is crystal clear: how to be prepared for the unknown.
And it is here that new narratives and arguments come into play, speaking of Artificial Intelligence-based technologies such as machine learning and deep learning and, for example, of the success of behavioral biometrics solutions.
Where we are headed
Although considerable attention is still paid to threats, the current trend is to protect the user by adopting all kinds of measures. In short, if the user is who they say they are and is not being manipulated, it does not matter which threat has infected them.
In this respect, the challenges issued during authentication are only useful at the moment the user begins the session, not while they remain logged on. Threats such as Remote Access Trojans (RATs) and Account Takeover (ATO) are used to manipulate users once they are inside their session so that they do what the cybercriminal wants.
Artificial Intelligence (AI) enables techniques such as behavioral biometric analysis to realize their full potential, identifying the user throughout the entire session in real time in order to confirm two sacrosanct cornerstones of anti-fraud:
That the user is who they say they are. That the user is not being manipulated.
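The idea of checking the user continuously, rather than only at login, can be sketched in a few lines. This is an assumption-laden toy, not a production algorithm: the feature (inter-keystroke timing), the z-score comparison and the threshold are all illustrative choices.

```python
# Illustrative sketch of continuous behavioral-biometric checking: compare
# live keystroke timing against a profile built at enrollment, for the
# whole session. Features, values and the threshold are assumptions.

from statistics import mean, stdev


def build_profile(samples: list[float]) -> tuple[float, float]:
    """Enrollment: mean and standard deviation of inter-keystroke intervals (ms)."""
    return mean(samples), stdev(samples)


def session_score(profile: tuple[float, float], live: list[float]) -> float:
    """Z-score of the live session's mean interval against the stored profile."""
    mu, sigma = profile
    return abs(mean(live) - mu) / sigma


def is_same_user(profile: tuple[float, float], live: list[float],
                 threshold: float = 3.0) -> bool:
    """Flag the session if live behavior drifts beyond the threshold."""
    return session_score(profile, live) < threshold


# Profile built from the legitimate user's enrollment data (illustrative).
profile = build_profile([120, 130, 125, 118, 127, 122])

print(is_same_user(profile, [121, 126, 124, 119]))  # True: consistent with the owner
print(is_same_user(profile, [45, 50, 48, 44]))      # False: e.g. scripted RAT input
```

Because the check runs throughout the session, a RAT or ATO that takes over after a legitimate login would still be exposed by the change in behavior.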