Artificial intelligence (AI) is currently revolutionizing numerous aspects of society, including healthcare, financial services, energy, transportation, education, security, employment, and legal services. As AI becomes increasingly integrated into daily life, it is crucial to evaluate the potential effects of this technology on society as a whole.
To begin with, it is important to define AI and understand its capabilities. AI refers to the development of computer systems that can perform tasks that normally require human intelligence, such as learning, problem-solving, and decision-making. There are several different types of AI, ranging from narrow AI, which is designed to perform a specific task, to general AI, which has the ability to perform any intellectual task that a human can.
Artificial intelligence is one of the transformative technologies of the Fourth Industrial Revolution. Since 1955, when the term was first coined, AI has developed into systems that can learn from large amounts of data and make predictions based on what they have learned. One application of AI is machine learning, in which computer systems are not explicitly programmed for a specific outcome but instead learn from examples. In other words, machine learning involves training AI models on numerous examples of the correct solution to a specific problem.
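The idea of learning from examples rather than from explicit rules can be illustrated with a toy sketch. The nearest-neighbour rule and the study-hours data below are purely illustrative assumptions for this article, not a description of any real system:

```python
# A minimal illustration of "learning from examples": a 1-nearest-neighbour
# classifier. It is never given an explicit pass/fail rule; it only sees
# labelled examples and answers new queries by analogy to the closest one.

def nearest_neighbour_predict(examples, query):
    """Return the label of the training example whose value is closest to `query`."""
    return min(examples, key=lambda ex: abs(ex[0] - query))[1]

# Training data: (hours of study, outcome) pairs -- invented numbers for illustration.
examples = [(1, "fail"), (2, "fail"), (6, "pass"), (8, "pass")]

print(nearest_neighbour_predict(examples, 7))    # -> pass
print(nearest_neighbour_predict(examples, 1.5))  # -> fail
```

Real machine-learning systems replace this single comparison with models trained on millions of examples, but the principle is the same: the behaviour comes from the data, which is why biased data produces biased predictions.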
Widely used AI applications include image-generation apps such as Lensa and DALL-E, which can generate new images from a set of inputs or in a specific style. These apps have a variety of practical uses, including generating realistic images for creative work. Another key use case is text generation (for example ChatGPT, developed by OpenAI), which can be used to improve the accuracy and sophistication of written content. The academic sector, legal services, the creative industry, and the software development industry will be significantly impacted. For example, an author may use a text-generation app to generate ideas for their next novel, or a musician may use one to generate lyrics for their next song.
AI is impacting society in many ways, including the disruption of business models, but the focus of this piece is on the ethical and legal implications. As AI becomes more advanced, it is capable of making decisions that have significant consequences for individuals and society. For example, AI systems are increasingly being used in healthcare to diagnose and treat patients, and in the criminal justice system to predict the likelihood of reoffending. However, there are concerns that these systems may be biased, leading to unfair treatment of certain groups and individuals. It is therefore important to consider the ethical and legal implications of AI and ensure that appropriate measures are in place to prevent bias and ensure fairness. The overarching question is: can we trust these AIs? The answer lies in the norms and principles that guide their design.
As Big Tech companies (such as Google’s DeepMind) ramp up heavy investments in AI, are they being guided by the principles of fostering responsible and beneficial AI? What do strong ethical AI principles look like? We can look to the Organisation for Economic Co-operation and Development (OECD) AI Principles for guidance. They propose universal ethical principles: AI must be robust, secure, safe, and transparent. Microsoft says it is guided by the following principles in designing its AI: fairness, reliability and safety, privacy and security, transparency, accountability, and inclusiveness. These are good starting points, but a lot more needs to be done.
Governments and businesses ought to work collaboratively to propose and adopt strong AI policies and strategies guided by ethical principles. Recognising that data becomes the AI (and biased data becomes biased AI), it is imperative that adherence to strong data governance and security standards becomes the norm. We have seen early efforts in this regard, such as Norway’s National Strategy for Artificial Intelligence and the UK’s National AI Strategy. As technologists, innovators, policymakers, academics, and civil societies come together in global and national conversations to build an ethical, AI-enabled society, it behoves us all to be guided by the principles of responsible use of AI to accelerate innovation.
While AI will no doubt be instrumental in solving world problems such as poverty, access to affordable healthcare, education, transportation, access to capital, access to justice, and clean energy, we must be mindful of its dark side. We have seen the potential of some image-generating and editing apps to abuse user data privacy and sexualize minors, not to mention other concerning applications of AI, such as its inappropriate use in law enforcement and warfare or by bad actors and criminals. It is a collective societal responsibility to advocate for responsible AI: artificial intelligence that is not biased but fair, explainable, and ultimately subject to human control and review.
While it is natural to have concerns about the potential impact of new technologies like AI, it is important to approach these issues with a measured and informed perspective. Fears about demonic manipulation or the “Anti-Christ” are unfounded and should not concern those interested in the responsible and ethical use of AI; what is needed is a rational, evidence-based understanding of the technology’s capabilities and limitations. AI is simply a tool designed to perform tasks based on a set of predetermined rules or algorithms. By carefully considering its potential risks and benefits, and working to ensure that it is developed and used in an ethical and responsible manner, we can harness its power to improve our lives and solve some of the most pressing challenges facing humanity.
Currently, there is no specific legal framework for regulating AI at the international level. However, a number of existing laws and regulations apply to the use of AI, such as data protection laws, and several initiatives are working to develop guidelines and best practices for the ethical development and use of AI (the World Economic Forum, for example, is working on a global ethical framework for AI). Nigeria’s National Information Technology Development Agency (NITDA), like its counterparts in many other countries, is currently taking steps to develop a National Artificial Intelligence Policy. It is hoped that these interventions will design AI policies and legal frameworks in a manner that promotes and encourages the ethical and responsible use of data, in addition to securing protections on data use.
AI is still in its early stages of development, and as it continues to develop and become more prevalent, it will be important to stay informed and engage in ongoing discussions about its impact on society. In the meantime, as we continue to discover the capabilities of AI, we must remain equally focused on the phenomenal capability of the human mind. While AI will significantly improve efficiency, accuracy, and productivity across a spectrum of human endeavours, the human mind retains the unique capability of figuring out what problems to tackle and what opportunities to pursue. This means that entrepreneurs, innovators, scientists, creators, and all kinds of thinkers will continue to be essential. AI will always look to human ingenuity for leadership. The future belongs to those who are able to put AI to use in collaboration with human intelligence.
Rotimi Ogunyemi is a Technology Attorney writing from Lagos