Cyberattacks are becoming ever more sophisticated. The new waves of attacks are outstripping human defenders, and cybercriminals are increasingly relying on artificial intelligence (AI). AI is being used both as an offensive weapon and as a defence against cyberattacks. This dual role presents significant challenges to the cybersecurity landscape but also offers opportunities to strengthen defences and identify attackers more effectively.
AI is rapidly reshaping cybersecurity, acting as both a powerful shield and a sharp sword. While it offers unprecedented opportunities to bolster defences and combat threats proactively, it also hands malicious actors new tools and techniques. The concept of AI vs. AI is now at the forefront of cybersecurity discussions. AI defends against hackers by automating threat detection and response mechanisms, making systems smarter and more resilient; at the same time, cybercriminals leverage it to create sophisticated attacks, ranging from deepfakes to advanced phishing campaigns, that put personal and corporate data at risk.
This paradox brings us to a critical conversation about digital rights and data protection. As AI technology continues to evolve, we face complex questions: How do we protect individuals’ rights in a world where AI can be weaponised? How can we ensure that AI, which has the potential to safeguard data, does not inadvertently undermine the very privacy it is meant to protect?
The Nigerian perspective on digital rights and data protection
As the author of Nigeria’s Digital Rights and Freedom Bill, now enshrined in Nigeria’s Data Protection Act, I have witnessed firsthand the urgent need for clear policy frameworks to address these challenges. This bill, a foundational part of Nigeria’s legal landscape, focuses on securing digital rights, ensuring privacy, and enforcing transparency in AI systems. It responds to the growing need for legislation that can keep pace with the rapid advancements in AI while safeguarding individuals’ data and freedoms.
Nigeria’s Data Protection Act (NDPA) serves as a cornerstone in securing the country’s digital future. Modelled after global standards such as the EU’s General Data Protection Regulation (GDPR), the NDPA represents a bold step in protecting citizens’ data while encouraging innovation. It addresses concerns by setting strict guidelines for the collection, use, and storage of personal data, balancing the need for technological innovation with the protection of individuals’ privacy. This act establishes transparency and accountability in managing data, ensuring Nigeria aligns with global standards while addressing local realities.
The significance of this cannot be overstated. Nigeria, as Africa’s largest economy and a leading tech hub, is poised to leverage AI-driven innovation to solve pressing societal challenges. Yet, the country also faces a surge in cyber threats, with financial institutions, startups, and public systems increasingly targeted. Thus, robust cybersecurity measures and a resilient data protection framework are non-negotiable.
The dual role of AI in cybersecurity: Shield vs. sword
AI as a shield
AI fortifies defences against emerging threats by processing and analysing vast datasets at unprecedented speed. This capability enables the rapid identification of vulnerabilities and real-time mitigation of risks, such as malware or unauthorised access. A report by the Capgemini Research Institute found that 69 percent of organisations believe AI is indispensable in detecting and responding to cybersecurity breaches. AI’s predictive capabilities enhance accuracy and significantly reduce response times, making it an invaluable tool in securing digital ecosystems.
For instance, Nigerian financial institutions are leveraging AI to combat fraud and secure transactions. By analysing customer behaviours and detecting anomalies, AI-driven tools have significantly reduced incidents of financial crimes. Furthermore, the telecommunications sector in Nigeria uses AI to monitor networks for vulnerabilities, ensuring service integrity and consumer data safety.
Artificial intelligence can rapidly analyse events, pick out threats, and even create and implement a response. Moreover, the more data AI analyses, the better it becomes at spotting threats: over time it can build an accurate picture of employees’ security behaviours, an organisation’s cybersecurity posture, and the security of its systems, devices, and networks, with little or no human input.
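The anomaly-spotting behaviour described above can be sketched in miniature. The example below flags unusual spikes in daily login volume with a simple z-score test; real AI-driven security tools use far richer models, and the data, function name, and threshold here are purely illustrative.

```python
from statistics import mean, stdev

def flag_anomalies(daily_logins, threshold=2.5):
    """Flag days whose login volume deviates sharply from the norm."""
    mu = mean(daily_logins)
    sigma = stdev(daily_logins)
    if sigma == 0:
        return []  # no variation, nothing stands out
    # A z-score above the threshold marks the day as anomalous.
    return [i for i, count in enumerate(daily_logins)
            if abs(count - mu) / sigma > threshold]

# Typical traffic with one suspicious spike (e.g., a credential-stuffing burst).
traffic = [102, 98, 110, 105, 97, 103, 980, 101, 99, 104]
print(flag_anomalies(traffic))  # flags the spike at index 6
```

The design point the article makes is visible even at this scale: the detector learns "normal" from the data itself, so more history sharpens the baseline without human tuning.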
AI as a sword
On the flip side, AI has become a dangerous tool in the hands of cybercriminals. Since 2021, cybercrime incidents have surged worldwide, with data breaches increasing by 72 percent between 2021 and 2023. Automated tools used in attacks and extortion, such as chatbots, can rely on AI to become more sophisticated and believable. AI can increase the scale of some types of cyberattacks, like distributed denial of service (DDoS) attacks, where massive amounts of web traffic are used to overwhelm the target’s servers. AI’s application can also extend beyond the initial cyberattack itself. When criminals succeed in a data breach, they can use AI tools to comb through terabytes of data and identify the most sensitive information, like personal information, trade secrets, and financial data.
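The post-breach data-sifting step described above can be illustrated with a toy sketch: scanning leaked text for sensitive patterns. The categories and regular expressions below are hypothetical simplifications; real tools (on both the attacker and defender side) use far richer, often machine-learned detectors.

```python
import re

# Hypothetical, deliberately simplified patterns for two sensitive categories.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card_number": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def scan_for_sensitive(text):
    """Return the categories of sensitive data found in a blob of text."""
    return sorted(label for label, pattern in PATTERNS.items()
                  if pattern.search(text))

leaked = "Contact ada@example.com, card 4111 1111 1111 1111 on file."
print(scan_for_sensitive(leaked))  # ['card_number', 'email']
```

Run over terabytes rather than one string, this is how stolen archives get triaged for the most valuable records, and equally how defenders audit their own stores for exposed personal data.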
For example, in Nigeria, there has been an increase in sophisticated phishing campaigns targeting SMEs and individuals, exploiting the lack of cybersecurity awareness. Additionally, data breaches in the healthcare and public sectors highlight the critical need for AI-driven cybersecurity strategies to prevent and mitigate such attacks. According to the World Economic Forum, “Only AI can play AI at its own game.” And so, the world finds itself in a situation where AI is fighting AI. It may not look like the sci-fi vision, but it is nonetheless extremely dangerous.
Evolving threats require new frameworks
As technology has advanced, so has the sophistication of cyberattacks. The digital era, characterised by the proliferation of cloud computing, mobile devices, AI, and the Internet of Things (IoT), has created a vastly expanded attack surface.
Modern cyber threats are no longer the work of lone hackers but are often orchestrated by state-sponsored groups, organised crime syndicates, and hacktivists with various motivations, including financial gain, political objectives, and social causes.
In recent years, the world has witnessed the rise of advanced persistent threats (APTs), which involve prolonged, targeted attacks aimed at stealing sensitive information from high-profile targets such as governments, corporations, and critical infrastructure. Cyber espionage, intellectual property theft, and attacks on national infrastructure have become common, with significant implications for national security and global stability. Additionally, the advent of zero-day exploits, supply chain attacks, and deepfake technologies has further complicated the cybersecurity landscape, making it increasingly difficult to detect and defend against sophisticated cyberattacks.
As cyber threats continue to evolve, the integration of cutting-edge technologies, including AI, into cybersecurity strategies has become crucial. AI can enhance the ability to detect and respond to threats in real time, analyse large datasets to identify patterns and anomalies, and predict potential vulnerabilities before they can be exploited.
The need for these advanced cybersecurity measures is driven by the increasing frequency and severity of cyberattacks, as well as the growing dependence on digital technologies in critical sectors such as finance, healthcare, and national defence. As the digital landscape continues to expand, so must the strategies and technologies used to defend it, ensuring that cybersecurity remains a step ahead of evolving cyber threats.
The Data Protection Act and Nigeria’s focus on digital rights are steps in the right direction, but more must be done. By now, we are all aware that traditional cybersecurity laws struggle to keep pace with the rapid advancements in AI, making existing legal and regulatory structures outdated. Nigeria must therefore develop frameworks tailored to AI-driven threats, prioritising transparency, accountability, and ethics. Such policies should address:
• AI-Specific Legal Structures: Establishing governance mechanisms to ensure ethical AI deployment and mitigate risks associated with its misuse.
• Public Awareness: Educating citizens and businesses about the implications of AI-powered cyber threats and the importance of proactive measures.
• Collaboration: Strengthening partnerships between public and private sectors to share intelligence and best practices in combating AI-driven cybercrime.
The intersection of AI, digital rights, and data protection
AI’s potential in industries such as healthcare, finance, and education also raises critical concerns about data privacy. As organisations increasingly rely on AI systems, safeguarding personal information becomes both a challenge and a responsibility. A report by McKinsey highlights that by 2030, AI-powered systems will handle over 70 percent of customer interactions, necessitating a balance between innovation and privacy.
Nigeria’s NDPA is a significant step in this direction, establishing legal safeguards to protect citizens’ data in an AI-driven world. However, its implementation requires stronger enforcement mechanisms and public education to ensure compliance. Transparency in AI systems, clear governance structures, and the ethical collection and use of data are essential to maintaining trust and protecting digital rights.
The NDPA’s emphasis on transparency and accountability ensures that AI technologies align with ethical standards. By embedding data privacy into every stage of AI development, Nigerian organisations can build trust, comply with regulations, and develop sustainable solutions. This approach strengthens Nigeria’s position as a leader in digital innovation while safeguarding citizens’ rights.
The ethical imperative of AI usage
The promise of AI is vast, but it carries significant ethical responsibilities. As AI systems are integrated into critical sectors, they must reflect fairness, transparency, and accountability. A key concern is bias in AI algorithms, which often stems from unrepresentative or prejudiced training data. This bias can lead to discriminatory outcomes, particularly in areas such as hiring, lending, and law enforcement, where AI-driven decisions impact individuals directly.
For Nigeria, this challenge is particularly relevant as the country increasingly adopts AI technologies across various sectors. In recruitment processes, several organisations have started using AI for the preliminary screening of job applicants. However, with training data skewed towards urban demographics, applicants from rural areas or less-developed regions may face unintended biases. Similarly, AI-driven credit scoring systems used by fintech companies may disadvantage individuals without extensive digital footprints, disproportionately affecting the unbanked population in Nigeria.
In the healthcare sector, AI tools are being tested for diagnosis and treatment recommendations. However, without sufficient representation of Nigerian genetic, cultural, and environmental factors in training datasets, there is a risk of misdiagnoses or suboptimal recommendations. For instance, AI tools designed for dermatological diagnosis often struggle with dark skin tones due to underrepresentation in global training data.
Ensuring the ethical deployment of AI means implementing robust frameworks that mandate fairness in algorithm design, regular audits to detect biases, and transparency in AI decision-making processes. Organisations must also prioritise diversity in data collection to reflect Nigeria’s rich demographic and cultural complexity.
Bias in AI training data is not only an ethical concern but also a practical one. When bias is introduced into training datasets, it perpetuates inequalities and undermines the reliability of AI systems. For instance, in facial recognition technologies, under-represented groups often face higher error rates, leading to potentially harmful consequences. Addressing this requires a multi-faceted approach:
• Ensuring diversity and representation in data collection to build unbiased AI models.
• Implementing regular audits and independent reviews to identify and correct biases.
• Establishing clear guidelines and standards for ethical AI development.
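One concrete form the audits above can take is a demographic parity check: comparing a system’s approval rates across groups and measuring the gap. The sketch below uses hypothetical urban/rural screening outcomes, echoing the recruitment example earlier; real fairness audits apply many metrics beyond this one.

```python
def approval_rates(decisions):
    """decisions: list of (group, approved) pairs; returns rate per group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Largest difference in approval rate between any two groups."""
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical screening outcomes for urban vs rural applicants.
sample = ([("urban", True)] * 8 + [("urban", False)] * 2
          + [("rural", True)] * 4 + [("rural", False)] * 6)
print(parity_gap(sample))  # 0.8 urban vs 0.4 rural: a gap of 0.4
```

A regulator or internal reviewer would set a tolerance for this gap and require investigation when a deployed model exceeds it, which is exactly the kind of measurable standard the guidelines above call for.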
Public education is another vital component of ethical AI use. Citizens need to understand how AI impacts their rights and how they can advocate for accountability. A notable example is the Civic Hive programme, which hosts workshops to educate Nigerians on digital rights and the implications of AI technologies. Meanwhile, encouraging collaboration between government agencies, tech companies, and civil society will be critical to developing guidelines that align with international best practices while addressing local realities.
Ethical AI deployment not only builds public trust but also ensures sustainable innovation. By embedding ethical considerations into every stage of AI development, Nigeria can set a benchmark for responsible AI usage in Africa and beyond, paving the way for technologies that empower rather than marginalise.
Hon. Dr. Chukwuemeka Ujam, mni