The TDA Cyber team recently presented two remarkable papers at renowned international conferences: in October at the IEEE International Conference on Data Science and Advanced Analytics (DSAA), and in November at the Symposium on Electronic Crime Research (eCrime).
At DSAA, Albert Calvo, a researcher on i2CAT’s Artificial Intelligence team, and Nil Ortiz, a researcher on the Cybersecurity team, presented the article A Data-driven Approach for Risk Exposure Analysis in Enterprise Security. The paper was written by Albert, Nil, Santiago Escuder from the AI team, and Jordi Guijarro, Cybersecurity Innovation Director at the i2CAT Foundation. For years, Security Operation Centers (SOCs) have relied on tools such as Security Information and Event Management (SIEM) and Intrusion Detection Systems (IDS) for reactive threat detection and risk management. However, these tools struggle to keep pace with the current threat landscape, which is continuously growing in volume and variety and increasingly targets the most vulnerable component in the kill chain: the human actor. The article presents a novel data-driven approach that models user and entity behaviour in the early stages of the kill chain. The proposed system estimates the probability of an entity being exposed to a threat actor during the delivery stage, providing greater anticipation time and allowing the end user to focus mitigation efforts on specific entities. Moreover, the framework has been tested in real scenarios through realistic phishing simulations, with successful results.
At eCrime, Albert and Nil presented the paper Achieving High-fidelity Explanations for Risk Exposition Assessment in the Cybersecurity Domain. Understanding AI-driven systems has become fundamental, especially when these systems are employed for critical decision-making, as in cybersecurity. In this regard, explainability has been widely advocated as a cornerstone for comprehending a model, enhancing trust and accountability in data-driven systems. The paper focuses on an explainable proxy founded on the systematic generation of evaluations of explanations, offering a swift and dependable method for assessing explanations tailored to the cybersecurity domain. A successful use case of a risk exposure assessment framework for reducing an organisation’s attack surface was used to illustrate the approach. This article was written by Albert, Nil, and Santiago Escuder from the Artificial Intelligence team; Jordi Guijarro, Cybersecurity Innovation Director; Xavier Marrugat, Cybersecurity researcher; and Josep Escrig, manager of the Artificial Intelligence area at i2CAT.