Sunday, April 28, 2024

BusinessDay

The legal and business risks in the application of AI systems


Riding on the back of the internet and digital transformation technologies, artificial intelligence (AI) systems are providing new paradigms for innovation, creativity, and speedy, efficient problem-solving for businesses across a myriad of sectors. In procurement, for example, an AI system can track an order from anywhere in the world to its final destination in another geographical location.

In the healthcare sector, AI systems can be trained on available expert medical data to carry out certain medical diagnoses. In the criminal justice space, AI systems can speed up access to relevant precedents for a case in court and even generate evidence for use in such cases. However, like all beneficial technologies, AI systems are crying out for policy, legal, and ethical safeguards.

Benefits of AI systems:

AI refers to the ability of trained computer systems to mimic humans in performing simple or complex tasks that would ordinarily require human effort and intelligence. For instance, Google Search and ChatGPT, owned by Google and OpenAI respectively, are trained to use existing internet data and information to search for and present relevant information in response to queries input by users across various sectors of human endeavour. In the recent past, this activity would have required a visit to a library stocking relevant books on the user’s preferred subject matter and poring over a tonne of encyclopaedias from different publishers. The expected global AI market value by 2027 is in the region of $267 billion, and by 2030 it may catapult to $15.7 trillion, representing an average annual global growth rate of about 37.3 percent. This makes AI systems very impactful for the global economy. Other benefits of AI systems include task automation, smart decision-making, time management, and efficiency in production.

Downsides of AI systems:

Isaac Asimov, a professor of biochemistry at Boston University and a prolific science fiction writer who is also reputed to be the father of robotics and, by extension, AI, proposed the three famous laws of robotics: (i) a robot may not injure a human being or, through inaction, allow a human being to come to harm; (ii) a robot must obey orders given to it by human beings, except where such orders would conflict with the first law; and (iii) a robot must protect its own existence as long as such protection does not conflict with the first and second laws. This premise arguably lays the foundation for ethical considerations in the use of AI systems.

Some of the drawbacks of AI systems include the potential for job losses, data privacy risks, cybersecurity risks, uncertainty as to what the future portends, and the unregulated use of deepfakes, chatbots, and digital assistants.

The way forward on these drawbacks may reside in the development of legal, policy, technical, and internal ethical frameworks or mechanisms for the use and application of AI systems. Since 2021, the EU has been working on a regulatory regime for the use of AI within its territory, and on March 13, 2024, it finally delivered the world’s first comprehensive attempt at regulating AI systems with the enactment of the EU AI Act, which has wide, multi-jurisdictional implications for the global economy.

Nigerian businesses within the reach of the EU AI Act could face fines of up to 35 million euros, or 7 percent of their global revenues, a measure of how seriously data privacy and AI governance are now treated internationally. The Nigerian Communications Commission (NCC) and the National Information Technology Development Agency (NITDA) should act urgently to ensure compliance by developing clear data protection regulations and providing consultative guidance to the various industry sectors. This proactive approach could position Nigeria as a leader in data privacy and foster a thriving digital economy that respects user privacy.

Adetokunbo ONIKOSI, Partner, AOCATERNIZ LAW SOLUTIONS