Country Manager, Microsoft Nigeria, Ola Williams, has said that new research shows AI is amplifying the abilities of security teams, transforming them into ‘super defenders’ who are faster and more effective than ever before, even as AI has, over the past few years, reshaped the battlefield for both cybercriminals and defenders.
According to her, the latest edition of Microsoft’s Cyber Signals research shows that, regardless of their expertise level, security analysts are around 44 per cent more accurate and 26 per cent faster when using Copilot for Security.
Ola stated that history has taught us that prevention is key to combating all cyber threats, whether traditional or AI-enabled.
The report offers four additional recommendations for local businesses looking to better defend themselves against the backdrop of a rapidly evolving cybersecurity landscape.
“The key is to ensure the organisation’s data remains private and controlled from end to end. Conditional access policies can provide clear, self-deploying guidance to strengthen the organisation’s security posture and will automatically protect tenants based on risk signals, licensing, and usage. These policies are customisable and will adapt to the changing cyber threat landscape.
“Enabling multifactor authentication for all users, especially for administrator functions, can also reduce the risk of account takeover by more than 99 per cent.”
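For illustration, conditional access policies of the kind Williams describes can be managed programmatically through the Microsoft Graph API. The Python sketch below (using the requests library) creates a report-only policy that requires multifactor authentication for sign-ins rated medium or high risk; the GRAPH_TOKEN environment variable stands in for whatever token acquisition the organisation already uses, and the policy values are illustrative assumptions rather than Microsoft guidance.

```python
import os
import requests

GRAPH_URL = "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies"

# Assumption: an access token with the Policy.ReadWrite.ConditionalAccess scope
# has already been acquired (e.g. via MSAL) and exported as GRAPH_TOKEN.
token = os.environ["GRAPH_TOKEN"]

# Risk-based policy: require MFA whenever a sign-in is rated medium or high risk.
policy = {
    "displayName": "Require MFA for risky sign-ins",
    # Start in report-only mode; switch to "enabled" once the impact is reviewed.
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
        "users": {"includeUsers": ["All"]},
        "applications": {"includeApplications": ["All"]},
        "signInRiskLevels": ["medium", "high"],
    },
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
}

resp = requests.post(
    GRAPH_URL,
    headers={"Authorization": f"Bearer {token}"},
    json=policy,
    timeout=30,
)
resp.raise_for_status()
print("Created policy:", resp.json()["id"])
```

Starting in report-only mode lets administrators observe which sign-ins the policy would have blocked before enforcing it tenant-wide.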
The Country Manager explained that, aside from educating employees to recognise phishing emails and social engineering attacks, IT leaders can proactively share and amplify their organisations’ policies on the use and risks of AI.
This, she stated, includes specifying which designated AI tools are approved for enterprise use and providing points of contact for access and information.
Also, proactive communications can help keep employees informed and empowered while reducing their risk of bringing unmanaged AI into contact with enterprise IT assets.
She further noted that, through clear and open practices, IT leaders should assess all areas where AI can come into contact with their organisation’s data, including through third-party partners and suppliers.
Security teams should assess the relevant vendors’ built-in features to understand what access the AI has and which employees and teams are using the technology. This will help foster secure and compliant AI adoption.
“Finally, it’s important to implement strict input validation for user-provided prompts to AI. Context-aware filtering and output encoding can help prevent prompt manipulation.
“Cyber risk leaders should also regularly update and fine-tune large language models (LLMs) to improve the models’ understanding of malicious inputs and edge cases. This includes monitoring and logging LLM interactions to detect and analyse potential prompt injection attempts.”
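To make those last two recommendations concrete, the Python sketch below illustrates one way such guardrails might look: a length limit and pattern-based filtering on user prompts, HTML-escaping of model output, and logging of every interaction so suspected injection attempts can be reviewed later. The pattern list, limits, and call_llm stub are illustrative assumptions, not Microsoft guidance.

```python
import html
import logging
import re

logging.basicConfig(filename="llm_interactions.log", level=logging.INFO)

# Illustrative deny-list of phrasings commonly seen in prompt-injection attempts.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all |any )?(previous|prior) instructions", re.I),
    re.compile(r"(reveal|print).{0,40}(system prompt|password|secret|api key)", re.I),
]

MAX_PROMPT_CHARS = 4_000


def call_llm(prompt: str) -> str:
    # Placeholder: swap in the organisation's real model client here.
    return f"Echo: {prompt}"


def validate_prompt(prompt: str) -> str:
    """Reject oversized prompts and known injection phrasing before the model sees them."""
    if len(prompt) > MAX_PROMPT_CHARS:
        raise ValueError("prompt exceeds maximum allowed length")
    for pattern in INJECTION_PATTERNS:
        if pattern.search(prompt):
            # Log first so analysts can review attempted injections later.
            logging.warning("possible prompt injection blocked: %r", prompt[:200])
            raise ValueError("prompt rejected by input validation")
    return prompt


def ask_model(prompt: str) -> str:
    safe_prompt = validate_prompt(prompt)
    response = call_llm(safe_prompt)
    # Log every interaction so injection attempts can be detected and analysed.
    logging.info("prompt=%r response=%r", safe_prompt[:200], response[:200])
    # Output encoding: escape the response so it cannot smuggle markup downstream.
    return html.escape(response)


if __name__ == "__main__":
    print(ask_model("Summarise today's alerts"))
```

In practice, a static deny-list is only a first line of defence; the logged interactions give security teams the raw material to refine filters and, as the report suggests, fine-tune models against malicious inputs over time.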