Our 2025 predictions
-
CISA and Partners Release Advisory on Black Basta Ransomware
Black Basta ransomware poses as IT support on Microsoft Teams to breach networks
SOCI-ally aware: More Security of Critical Infrastructure Act reforms on the way
Demystifying Australia's Security of Critical Infrastructure Regime
Novel phishing techniques to evade detection: ASCII-based QR codes and ‘Blob’ URIs
-
For the first newsletter of 2025, we have decided to share our predictions for the year ahead. We will cover new laws that will deeply affect us as cybersecurity professionals in the coming year, a new trend in AI that could give any company's security a new edge, and the new threat vectors we should all be aware of and look out for.
So - what are our predictions? Let’s get into it.
Agentic AI - A New Frontier for AI and Security
To start off – what is Agentic AI?
Agentic AI is a new frontier in AI, with some key differences from the traditional AI we all know and use. Those differences put this technology on a completely different plane in terms of the opportunities it brings for our security.
We’re now entering the so called Third Wave of AI - the focus is going to be shifting from creating content with generative AI, to developing fully autonomous intelligent agents capable of making decisions and performing tasks. These agents will be designed for specific roles with guardrails (also known as trust boundaries) in place to control their scope and ensure safety.
So, what makes it different? First, it is an entirely autonomous system: once the AI is given a goal, it needs no further human input to reach it. Moreover, it learns from its own experience as it goes, adapting its behaviour to reach that goal more efficiently.
Salesforce predicts that over 1 billion AI agents will enter the workforce next year, performing a wide range of tasks. This may lead to lighter Security Operations Centers (SOCs), where AI agents handle routine tasks, freeing up human analysts to focus on more critical issues.
Another great feature of Agentic AI is that it can navigate complex situations, weighing multiple variables and the potential outcomes of each. Crucially, it can function in dynamic, unpredictable situations without needing anyone to oversee it.
There is also a key difference in how the AI makes these decisions. Traditional AI makes decisions according to predetermined rules written by a human, with no capacity for complexity: give it a task that requires weighing several outcomes, or that falls outside the rules it has been given, and it cannot complete the task.
Agentic AI has no predetermined rules. Once the system is given a goal, it simply works towards reaching it, adapting and learning over time to improve its decision-making and handle a broader range of tasks and challenges. This is especially important in environments that are constantly changing.
This eliminates the need to write extensive rules for the AI, or to constantly step in with new input when the situation changes.
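The goal-driven loop described above can be sketched in a few lines. This is an illustrative toy, not a real agent framework: the agent is handed a goal, then repeatedly perceives, decides, acts, and adapts its own behaviour, with no rule table and no human input after the goal is set.

```python
# A minimal sketch of an agentic loop (illustrative names, not a real API):
# given a goal, the agent perceives, decides, acts, and adapts its own
# step size from experience until the goal is reached.

class SimpleAgent:
    def __init__(self, goal_value: int):
        self.goal_value = goal_value   # the goal it was given
        self.state = 0
        self.step_size = 1             # behaviour the agent adapts over time

    def perceive(self) -> int:
        # In a real agent this would read sensors, logs, or API responses.
        return self.goal_value - self.state

    def decide(self, gap: int) -> int:
        # No predetermined rule table: just move towards the goal.
        return min(self.step_size, gap)

    def learn(self, gap: int) -> None:
        # Adapt: take bigger steps while far from the goal, smaller when close.
        self.step_size = max(1, gap // 2)

    def run(self, max_steps: int = 100) -> int:
        steps = 0
        while self.state != self.goal_value and steps < max_steps:
            gap = self.perceive()
            self.state += self.decide(gap)
            self.learn(self.goal_value - self.state)
            steps += 1
        return steps

agent = SimpleAgent(goal_value=50)
steps_taken = agent.run()
print(steps_taken)  # reaches the goal in far fewer than 50 fixed-size steps
```

The point of the sketch is the shape of the loop: perceive, decide, act, learn, with no human in it once the goal is set.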
There are also high-level AI agents being developed to manage other agents, creating a hierarchical structure. However, the enterprise space must ensure that these systems are trustworthy, as a wrong decision by an AI agent could have catastrophic consequences. It’s essential to maintain a balance, with humans at the helm to oversee important decisions and prevent dependency on large tech companies.
Agentic AI is also fully capable of integrating with its environment to gather data and information, whether from sensors, video feeds or any other input device. It can extract meaningful features from that data, recognise the objects it sees, and identify any relevant entities in the environment it is perceiving.
Once it has processed everything, the AI can carry out tasks towards its goal, again fully autonomously, and in even more depth if integrations with external tools and software are set up. And if there are concerns that the AI will do this incorrectly, guardrails can be set up to ensure nothing goes wrong.
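One simple way to picture guardrails (the trust boundaries mentioned earlier) is a check wrapped around every action the agent proposes. The action names and policy below are illustrative assumptions, not taken from any specific product:

```python
# A minimal sketch of guardrails: each proposed action is checked against an
# allow-list before it runs; out-of-scope actions are escalated to a human
# instead of executed. Action names and the policy are illustrative.

ALLOWED_ACTIONS = {"quarantine_file", "block_ip", "notify_analyst"}
REQUIRES_HUMAN_APPROVAL = {"shut_down_server", "delete_account"}

def execute_with_guardrails(action: str, target: str) -> str:
    if action in ALLOWED_ACTIONS:
        # within the trust boundary: the agent may act autonomously
        return f"executed {action} on {target}"
    if action in REQUIRES_HUMAN_APPROVAL:
        # outside the boundary: escalate rather than act
        return f"escalated {action} on {target} for human approval"
    # anything unknown is refused outright
    return f"refused unknown action {action}"

print(execute_with_guardrails("block_ip", "203.0.113.7"))
print(execute_with_guardrails("shut_down_server", "web-01"))
```

The design choice here is that the default is refusal: anything not explicitly allowed either goes to a human or does not happen at all.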
So, what does this mean for Cyber Security?
For cybersecurity, this is a game-changer. While current tools rely on predefined rules and human oversight, Agentic AI operates entirely autonomously. It doesn't just detect anomalies and flag them for a human to fix; it adapts to evolving threats, learns from each situation and responds in real time without waiting for human input.
A good example of this is unusual traffic. The traditional tools we're used to might flag it for someone else to review, but Agentic AI goes one step further: it identifies the issue, isolates the related systems and contains the threat before any compromise occurs, all without needing any intervention or review. It's a fully self-sufficient system, capable of keeping pace with anything that happens.
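The detect-and-contain flow just described can be reduced to a toy example. The baseline, threshold and host data below are all made up for illustration; a real system would learn the baseline rather than hard-code it:

```python
# A toy sketch of autonomous containment: flag hosts whose outbound
# connection rate is far above baseline, then "isolate" them immediately
# instead of waiting for a human review. All numbers are illustrative.

BASELINE_CONNECTIONS = 20   # assumed normal outbound connections per minute

events = [
    ("10.0.0.5", 12),
    ("10.0.0.8", 18),
    ("10.0.0.9", 450),      # anomalous: an order of magnitude above baseline
]

isolated = []
for host, connections in events:
    if connections > BASELINE_CONNECTIONS * 10:   # crude anomaly rule
        isolated.append(host)                      # contain before compromise spreads

print(isolated)  # ['10.0.0.9']
```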
Our prediction is the increased usage and development of this AI in the coming months of the new year.
It can also handle security alerts before they ever reach a human analyst, replicating the human SOC workflow and decision-making. This takes the pressure off situations where a lot of alerts are coming in.
The AI will take the alerts, deal with the simple ones, categorise them and give a basic rundown, leaving the SOC free to focus only on the ones that are truly important. It can also remove false positives and ensure no alerts are missed, giving the team more time to spend on what is high priority.
On top of that, it can create playbooks for remediation; all a SOC analyst has to do is review the content.
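The triage flow above can be sketched as a simple filter. The alert fields, severity levels and rules here are illustrative assumptions, not a real SOC schema:

```python
# A sketch of AI-assisted alert triage: drop known false positives,
# auto-handle routine alerts with a short summary, and pass only
# high-priority alerts to the human analysts. Data and rules are made up.

KNOWN_FALSE_POSITIVES = {"scheduled_backup_login"}

alerts = [
    {"id": 1, "type": "scheduled_backup_login", "severity": "low"},
    {"id": 2, "type": "failed_login_burst", "severity": "medium"},
    {"id": 3, "type": "ransomware_signature", "severity": "critical"},
]

def triage(alerts):
    for_humans, auto_handled = [], []
    for alert in alerts:
        if alert["type"] in KNOWN_FALSE_POSITIVES:
            auto_handled.append({**alert, "summary": "known false positive, closed"})
        elif alert["severity"] in {"low", "medium"}:
            auto_handled.append({**alert, "summary": "routine alert, auto-remediated"})
        else:
            for_humans.append(alert)   # only truly important alerts reach the SOC
    return for_humans, auto_handled

humans, handled = triage(alerts)
print([a["id"] for a in humans])   # [3]
```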
So what is the drawback?
As always with new technology, it is not all sunshine and roses. There are several ethical considerations we need to weigh as this technology develops:
· How can we ensure safety, especially in high-stakes situations?
· Who is accountable if the AI makes a harmful decision?
· How can we guarantee that the AI stays fair and does not develop a bias?
· How can we guarantee that regulation keeps up with the development of these systems and the unique challenges we will face?
But even with these questions open, there is real value in Agentic AI and in implementing it. A completely autonomous system that handles security alerts at inhuman speeds is absolutely something to look into.
Managing the Tsunami of Cyber Security Laws and Regulations
A whole new wave of cybersecurity and data protection laws is set to take effect in 2025, and it will make keeping up ever harder for companies, especially those operating across multiple countries. Each country has its own rules and laws, and for any company working in several of them, navigating this maze of regulations is a challenge the coming year will only make more difficult.
Take, for example, an Australian company working with a company in the EU. It has to keep up not only with the Australian Cyber Security Act and its rules and timelines, but also with the EU's Digital Operational Resilience Act (DORA). Doing so is not easy, but it is necessary to avoid costly missteps.
This has spearheaded the creation of tools like RuleUp, which track every one of these changes and tell the companies affected exactly what is happening, where it is happening and how it affects them.
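At its core, this kind of tracking is a mapping from regulations to jurisdictions and key dates, filtered by where a company operates. The data structure below is an illustrative assumption (not RuleUp's actual design); the two dates are the ones discussed in this newsletter:

```python
from datetime import date

# A sketch of regulation tracking: map each regulation to its jurisdiction
# and key date, then list the ones relevant to the jurisdictions a company
# operates in. The schema is illustrative, not any real tool's design.

REGULATIONS = [
    {"name": "DORA", "jurisdiction": "EU", "applies_from": date(2025, 1, 17)},
    {"name": "SOCI amendments", "jurisdiction": "AU", "applies_from": date(2025, 5, 30)},
]

def applicable(regs, operating_in):
    # keep only regulations from jurisdictions the company operates in
    return [r for r in regs if r["jurisdiction"] in operating_in]

for reg in applicable(REGULATIONS, {"AU", "EU"}):
    print(f"{reg['name']} ({reg['jurisdiction']}) applies from {reg['applies_from']}")
```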
Although we are not lawyers and this isn't legal advice, here are some of the key pieces of legislation changing or arriving around the world that companies should pay attention to.
EU’s Digital Operational Resilience Act (DORA)
This entered into force on the 16th of January 2023 and applies as of the 17th of January 2025. Importantly, it covers ICT risk management, third-party ICT risk management, digital operational resilience testing, ICT-related incidents, information sharing and oversight of critical third-party providers.
This all sits within the financial sector and is important for anyone dealing with ICT, cybersecurity, technology and finance in Europe, or anyone with clients in this area. It sets out the measures financial companies must take to stay safe from cyber-attacks and incidents.
The new draft Digital Personal Data Protection Rules (DPDP Rules), 2025
These new rules sit under India's Digital Personal Data Protection Act, 2023. The draft rules are not in force yet, but they have been made available for public review until February 18th, 2025.
The purpose of these rules is to introduce a framework to safeguard personal data and privacy rights. This does not deal directly with cybersecurity, but it heavily affects it nonetheless: the whole purpose of cybersecurity is to protect data, whether personal or corporate.
There will be new rules for issuing clear, standalone notices whenever personal data is collected, specific criteria for consent managers, data breach notification requirements, and changes that ensure accountability and compliance for data fiduciaries.
Enhancements to the 2018 Security of Critical Infrastructure Act in Australia
In the face of increasing threats, Australia set out its 2023–2030 Cyber Security Strategy, underpinned by the Cyber Security Legislative Package 2024. This includes enhancements to the 2018 SOCI Act through the new Security of Critical Infrastructure and Other Legislation Amendment (Enhanced Response and Prevention) Bill 2024 (SOCI Bill).
The purpose of the original act was to govern Australia's critical infrastructure by designating certain assets as 'critical infrastructure assets' and defining obligations for the entities responsible for those assets, including the registration of assets, mandatory reporting and risk management programs.
The new bill updates the regime: it expands the definition of critical infrastructure assets, gives the government enhanced intervention powers for incidents, and introduces a revised definition of 'protected information'.
Further changes are being made, and anyone working within the critical infrastructure sector in Australia benefits from being well aware of the imminent changes; they are expected to take effect on May 30th, 2025.
AI and New Legislation
There are also plenty of changes being made to existing legislation, along with entirely new laws and policies, around AI. NIST released NIST-AI-600-1, the Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile, on July 26, 2024, and it will become ever more important as AI is used more within cybersecurity.
There is also the EU AI Act, the first major regulation relating to AI. Its purpose is to ensure that AI systems within the EU are safe, transparent, traceable and non-discriminatory. There is also a section on ensuring the technology stays environmentally friendly.
So as you can see, there are a lot of regulations that either came out within the last year or take effect in the coming one. That is why it is important for organisations to keep an eye on the legislation that matters to them in the jurisdictions most applicable to them. Some of these laws require months of preparation to ensure compliance and that everything is done correctly; the right time to start is now.
Adversaries and new threat vectors
And finally, we predict entirely new threat vectors being used by adversaries. This was already a rising trend over the past year, and we expect it to be something to keep an eye out for in the coming one. We will give examples of some adversaries that have started ramping up new tactics, techniques and procedures, and we highly encourage companies to be aware of these to keep themselves secure.
Before we do, why are new threat vectors being leveraged at all? Simply because defenders have become more effective at blocking attacks, forcing adversaries to find new vectors; that is the trend we are seeing now.
Black Basta and the new Microsoft Teams Attack
Black Basta is not an entirely new name. Since their emergence in 2022 they have been a constant, attacking over 500 private industry and critical infrastructure entities, including healthcare.
They operate as a ransomware-as-a-service variant, meaning the ransomware is developed by one group and sold as a ready-to-use tool for other attackers to launch their own cyber-attacks.
Their attacks heavily rely on social engineering, designed to confuse and create anxiety in their victims. For example, they start by flooding the victim’s inbox with harmless but overwhelming spam - email bombing - which not only clutters the inbox but makes it harder to spot genuine communications. This creates a sense of urgency and distraction.
Then comes the next phase: a message via Microsoft Teams, where the attacker poses as the IT manager or help desk. Leveraging the anxiety and confusion from the spam, they gain the victim’s trust and convince them to download remote access software or malware. Once installed, the attacker gains access to the corporate device, allowing them to steal data, spread laterally through the network, and ultimately deploy ransomware to encrypt the system.
Unless you are 100% sure who is contacting you, never download software without verifying who is asking you to install it and what the software is.
Novel Phishing Techniques – ASCII-Based QR Codes and Blob URIs
Attackers are always finding new ways to bypass the security set up by websites and applications, and two of the latest known tricks are ASCII-based QR codes and Blob URI phishing attacks.
For example, to bypass security set up to detect malicious QR codes, attackers instead build the QR code out of ASCII characters inside the email. This slips past email security filters, but when scanned it leads to a phishing website that steals login credentials or other sensitive information.
The second trick uses Blob URIs, which often bypass security systems. These are embedded directly in the email, and when clicked they take users to phishing pages or install malware on their devices.
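Both techniques leave simple textual traces a filter can look for. The sketch below is a crude illustration of such heuristics, not a production detector: long runs of Unicode block characters suggest a QR code drawn as text art, and a `blob:` scheme in a link is unusual in email. The patterns and thresholds are our own assumptions:

```python
import re

# Crude, illustrative heuristics for the two techniques discussed above:
# flag long runs of Unicode block characters (a sign of a text-art QR code)
# and any blob: URI in the message body. Thresholds are assumptions.

BLOCK_CHARS = re.compile(r"[█▄▀▌▐]{8,}")     # 8+ consecutive block characters
BLOB_URI = re.compile(r"blob:", re.IGNORECASE)

def suspicious_indicators(body: str) -> list:
    hits = []
    if BLOCK_CHARS.search(body):
        hits.append("possible ASCII-art QR code")
    if BLOB_URI.search(body):
        hits.append("blob: URI present")
    return hits

print(suspicious_indicators("████▄▄▀▀████▄▄▀▀ scan me"))
print(suspicious_indicators('<a href="blob:https://example.com/abc">login</a>'))
```

Real filters would combine signals like these with sender reputation and rendering-based analysis; on their own they are easy to evade.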
These techniques show, more than ever, how phishing is constantly changing to become more deceptive and ever harder for the security systems in place to catch.
The use of AI as a threat vector
There are more and more ways AI is being used for attacks, and as AI develops, so will threat actors' methods. This is one of the ever-increasing negatives of AI: as much as it can be used for good and for our benefit, it is just as accessible to anyone wanting to use it for nefarious purposes.
One example is deepfake phishing, a method that mixes social engineering with advanced deepfake technology, whether that is using AI to write better personalised emails and messages, or going as far as imitating voices or faces. With AI developing every day, it is something everyone needs to be aware of.