The overwhelming trend for 2025 cybersecurity predictions is the rise of Artificial Intelligence (AI) as an integral operational component of both offensive and defensive capabilities.
From an offensive perspective, AI serves two primary objectives: generating content that appears to be of human origin for phishing and information operations (fake news), and adapting advanced attack Tactics, Techniques, and Procedures (TTPs) to exploit new vulnerabilities faster than ever. Due to the rise of AI, the technical prowess required to leverage complex vulnerabilities, which has historically set a high bar for exploitation, has been lowered to that of the novice attacker. AI puts highly technical capabilities in the hands of actors with limited cybersecurity skills, leading to an unavoidable increase in complex attacks. It also dramatically shortens the window between vulnerability identification and in-the-wild exploitation: what has historically taken weeks may now take days or even hours.
Social engineering is another primary offensive application of AI, as model responses become ever harder to distinguish from human ones. Social media posts, emails, instant messages or any other communication stream can be generated from AI prompts to mimic human responses or, in advanced cases, impersonate a specific person, increasing the risk of human error in everyday interactions. Human risk remains one of the primary concerns for security operators, as most breaches still stem from human error. The rising quality of AI-generated content will only amplify this risk and will likely lead to more breaches in which end users are manipulated into leaking credentials, disclosing sensitive information or downloading malware such as ransomware, which remains the most prominent malware variant.
Defensive operators will also leverage AI systems to counter many of these threats, enabling real-time analysis of environment data to detect anomalies. These systems can monitor for anomalous activity and behaviour as part of Endpoint Detection and Response (EDR) solutions, and can assist during Digital Forensics and Incident Response (DFIR). AI is also anticipated to help close the cybersecurity skills gap by helping security operators analyse, assess and remediate incidents faster. This technology will likely be applied to both perimeter and internal monitoring solutions.
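To illustrate the kind of anomaly detection such monitoring systems build on, the sketch below flags event counts that deviate sharply from a historical baseline. It is a minimal z-score example under assumed data and a hypothetical threshold, not any vendor's EDR logic, which is typically far more sophisticated.

```python
from statistics import mean, stdev

def flag_anomalies(event_counts, threshold=3.0):
    """Return indices of counts more than `threshold` standard
    deviations from the mean (a simple z-score test)."""
    mu = mean(event_counts)
    sigma = stdev(event_counts)
    if sigma == 0:
        return []  # perfectly flat baseline: nothing stands out
    return [i for i, c in enumerate(event_counts)
            if abs(c - mu) / sigma > threshold]

# Hypothetical hourly event counts: a steady baseline with one burst
counts = [12, 10, 11, 13, 12, 11, 95, 12, 10, 11, 13, 12]
print(flag_anomalies(counts))  # → [6]
```

Real deployments would use richer features (process lineage, network flows, user behaviour) and learned models rather than a single statistic, but the principle of baselining normal activity and surfacing deviations is the same.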
Zero-trust: a baseline security requirement
As a result of the ever-increasing AI threat, zero-trust is predicted to become a baseline requirement for any network. Because human risk is a primary vector for AI-driven attacks, zero-trust is effective at reducing, if not eliminating, the potentially severe breaches that follow from it. Additionally, zero-trust enforcing technologies are anticipated to be implemented into infrastructure as standard, rather than through policy alone.
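To make the principle concrete, the sketch below shows a zero-trust style authorisation check in which every request must prove identity (via MFA), device posture and role, regardless of where on the network it originates. All names here (`authorize`, `device_registry`, the request fields) are hypothetical illustrations, not any particular product's API.

```python
def authorize(request, device_registry, required_role):
    """Zero-trust style check: verify identity, device posture and
    role on every request -- never trust by network location alone."""
    device = device_registry.get(request.get("device_id"))
    return (
        request.get("mfa_verified", False)            # identity proven via MFA
        and device is not None                        # device must be known
        and device.get("compliant", False)            # device posture checked
        and required_role in request.get("roles", ()) # least-privilege role
    )

# Hypothetical registry of managed, compliant devices
registry = {"laptop-42": {"compliant": True}}

ok = {"device_id": "laptop-42", "mfa_verified": True, "roles": ["finance"]}
bad = {"device_id": "byod-7", "mfa_verified": True, "roles": ["finance"]}
print(authorize(ok, registry, "finance"))   # → True
print(authorize(bad, registry, "finance"))  # → False (unknown device)
```

The design point is that every condition is evaluated per request; a request from inside the corporate network fails just as quickly as one from outside if any check is not met.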