Introduction
As artificial intelligence (AI) continues to shape industries worldwide, the need for ethical frameworks governing its development and deployment becomes ever more critical. To address the challenges that arise with AI, governments and organizations must navigate the balance between developing guidelines for ethical AI and enacting enforceable regulations. This article explores the importance of both guidelines and regulations in fostering responsible AI practices, with a particular focus on Nigeria’s evolving AI landscape.
The Ethical Issues with AI Development and Deployment
Ethical AI refers to the design, development, and deployment of artificial intelligence systems in a way that prioritizes fairness, transparency, accountability, privacy, and the well-being of individuals and society. The integration of AI across industries such as manufacturing, healthcare, and education presents significant opportunities for economic growth and improvements in quality of life. Similarly, in other sectors, generative AI (“GenAI”) enhances productivity and efficiency while introducing innovative approaches to service delivery. The growing appetite for the benefits of AI is reflected in the scale of investment in AI development, both globally and in Nigeria. Reports show that total global corporate investment in AI has grown 150% since 2019 to nearly $180 billion (about ₦240 trillion). In Nigeria, the AI market is projected to reach $434.4 million by 2026, consistent with the country’s 70% AI adoption rate, which surpasses the global average of 48%.
However, despite the benefits of AI integration, challenges such as bias, discrimination, job loss, and intellectual property infringement highlight the ethical concerns in its development and deployment.
Guidelines for Ethical AI
Guidelines for ethical AI are frameworks that direct the responsible development and use of AI technologies, ensuring fairness, transparency, and accountability. Whilst these guidelines can be tailored to the specific needs of a particular organization or country, it should be borne in mind that there are universally accepted principles for the development and deployment of AI, such as UNESCO’s ‘Recommendation on the Ethics of Artificial Intelligence’, which applies to all 194 member states. At the national level, these guidelines may take the form of a national AI strategy document or a national AI policy.
National AI Strategy/AI Presidential Executive Order and the Implications for AI Ethics
In Nigeria, the National AI Strategy (NAIS) document was released in 2024 and sets the path for the country to achieve its vision of being “…a global leader in harnessing the transformative power of AI through responsible, ethical, and inclusive innovation, fostering sustainable development through collaborative efforts.” This strategy document highlights the country’s objectives in leveraging AI. First is using AI as a tool for economic growth and competitiveness; second, for social development and inclusion; and third, for technological advancement and leadership. In achieving these objectives and the overall national AI vision, the document outlines five pillars that will be instrumental. The pillars which are relevant to this article are: “Ensuring Responsible and Ethical AI Development” (Pillar 4) and “Developing a Robust AI Governance Framework” (Pillar 5). These Pillars further outline specific strategies to be implemented such as establishing national AI principles and implementing legislative reforms. The establishment of national AI principles remains a key strategy in aligning AI development and deployment to critical principles such as fairness, transparency, accountability, privacy, and human well-being.
Other countries with national AI strategies include Norway, Germany, and France. The United States of America adopts a multipronged approach: it established a National AI Initiative via Executive Order 13859 in February 2019, which was codified into law as part of the National AI Initiative Act of 2020. As opposed to first enacting an Act with general provisions that direct the use of AI, define liabilities, and establish principles for AI development and use, the executive order empowers various agencies to formulate their own AI-targeted plans and strategies to drive the integration of AI into various sectors. There are also distinct AI policy documents to guide the ethical use of AI in various U.S. government agencies and departments. For instance, the Department of State has an Enterprise AI Strategy to empower diplomacy through responsible AI and increase management efficiency within the Department, and the U.S. Intelligence Community has published its Principles of Artificial Intelligence Ethics. While this approach addresses the unique AI needs of the various institutions, it might not work best in the Nigerian setting, which already struggles with a multiplicity of laws and duplication of roles. Rather, the Nigerian AI community will benefit more from a unified directional document that can create certainty as to the government’s position on the subject. As innovation occurs, more specific guidelines can be created to address unique areas.
AI Guidelines vs. Regulations
Whilst the establishment of guidelines is a critical first step, governments and policymakers must go a step further and enact binding laws and regulations to mandate compliance. AI guidelines are a means to an end, not the end itself. They serve as a manifesto of what AI means to each government and how AI will be developed and deployed to serve the greater good. And whilst developers, organizations, and citizens (AI stakeholders) may have recourse to the ethical guidelines under those documents, can anyone be sanctioned for failure to comply?
Indeed, as more conversations are had about the responsible, ethical, and transparent use of AI, various stakeholders are aligning their development and deployment of the technology accordingly. It is also noteworthy that many of these ethical principles are tied to binding laws, so a lack of adherence to the principles may amount to a breach of the law. For instance, where AI developers are not transparent about how their systems are built, doubt is cast on the source of their training data and on whether any privacy or data protection laws were broken in the process. Likewise, a disregard for ethical AI might mean that source data and GenAI outputs have infringed intellectual property rights.
It is only a matter of time before existing laws, such as IP laws, are amended to cover AI-related issues, or before AI-specific regulation is enacted by more countries. It is therefore a good starting point for the various stakeholders to begin a mindset shift toward embracing ethical AI practices. It also means that an effective approach to AI requires a two-layered system: a synergy between principles and laws.
Laws Driving Ethical AI
In Nigeria, there is currently no law on AI. While the NAIS does not include the enactment of an AI-specific law among its objectives, it mentions implementing legislative reforms to identify and address emerging legal and ethical challenges that AI poses. For instance, the Copyright Act might need to address questions such as who will be liable where a GenAI tool produces output that is the same as, or substantially similar to, an existing copyrighted work. Will it be the AI model developer? The user? Both? Or neither? Specific policies or laws must also be drafted to address the risks inherent in the use of AI, such as job loss, which is a major fear among employees: reports from Goldman Sachs suggest that AI could replace 300 million jobs, a figure representing 9.1% of jobs worldwide. It is expected that the proposed National AI Policy Framework and the National AI Risk Management Framework will both address these challenges by establishing guidelines and protocols to mitigate risks.
While the government seeks to address AI-related issues from various angles such as the development of National AI Principles, development of a National AI Policy Framework, establishment of an AI Regulatory Body, and a National AI Risk Management Framework, thought must be given to the enforceability of these approaches. Hence, the recommendation of a two-layered approach of guidelines and regulations.
An AI-specific legislation should establish key definitions and the scope of AI use; specify the regulator; codify the national AI ethical principles; define the responsibilities of AI developers and address their liability, as well as that of operators and users; establish mechanisms for redress; provide tax benefits to AI startups and companies; and mandate periodic inspection and audit of AI systems by AI developers, in addition to other critical provisions. Seeking input from key stakeholders, as was done with the NAIS, can help produce holistic legislation.
As AI systems and applications become more advanced, newer challenges will arise, requiring urgent legislative reform. The slow pace of legislative amendment in the Nigerian setting might delay the necessary changes. However, the following strategies are recommended: having the regulator issue guidelines and rules in response to emerging technologies and challenges; periodic reporting by the regulator to advise the Government on AI developments and regulatory needs; and creating an AI innovation sandbox where developers can test new technologies in a controlled environment without the risk of non-compliance. The sandbox can also serve as an educational tool to guide legislative reform based on real use cases. Other strategies include providing regular training to lawmakers on AI technologies and their risks, to aid understanding and encourage a quicker rate of legislative reform.
Cue from the European Union
The first set of rules under the European Union Artificial Intelligence Act (the EU AI Act) became enforceable on 2 February 2025. Before this law, however, the EU relied on a combination of existing laws, ethics guidelines, and strategic frameworks to regulate AI, such as the General Data Protection Regulation (GDPR) and the 2019 Ethics Guidelines for Trustworthy Artificial Intelligence. Together, these instruments focused on privacy, data protection, ethical AI development, and liability. These efforts laid the foundation for the more formalized and comprehensive approach that the AI Act represents, aiming to provide clearer regulatory standards for AI technologies while promoting trust and innovation.
On the path to enacting an AI Act in Nigeria, existing laws that impact the development of AI can be amended. For instance, the Nigeria Data Protection Act (NDPA) provides a solid foundation for the protection of personal data, which fuels the development of AI models. By incorporating provisions that address consent, transparency, and accountability in the use of data for AI, a more solid path can be laid for the eventual enactment of an AI Act.
Conclusion
As Nigeria prepares to be a global leader in harnessing the transformative power of AI, adopting both guidelines and laws will accelerate progress toward this goal. Policymakers should strive to align national AI guidelines with global best practices. Additionally, while AI laws should mitigate risks and set boundaries, lawmakers should be careful not to stifle innovation. Finally, government must continue to adopt a proactive stance to direct ethical AI development and deployment.
CONTRIBUTORS
Uche Wigwe – Managing Partner Wigwe & Partners
Tilewa Oyefeso – Partner
Emaediong Lawrence – Associate
DISCLAIMER
This article is for informational purposes only and does not constitute legal advice or establish a lawyer-client relationship.