
Integrating Ethical AI Practices by AI Stakeholders – Part 1

Introduction

Artificial intelligence (AI) has seamlessly integrated into many aspects of our daily lives, becoming a driving force behind significant advancements across various industries. The rapid development of AI technology has not only transformed the way we live but also revolutionized the way we work. According to research, AI is one of the fastest-growing sectors worldwide, with an expected annual growth rate of 33.2% between 2020 and 2027. As AI continues to be woven into the fabric of human progress, it is fueling both innovation and economic growth.

In education, AI powers virtual assistants, dropout prediction systems, and automated grading. In media, it reshapes content creation with AI-generated music and deepfakes for lifelike characters. The legal industry is not immune to the AI revolution either. For example, PwC recently announced that it is deploying AI chatbots to assist over 4,000 of its lawyers across 100 countries with tasks such as contract analysis, regulatory compliance, due diligence, and other legal advisory services. Overall, these innovations are enhancing human productivity, reducing errors, and driving efficiency across a wide range of sectors.

However, AI also presents significant risks, including privacy violations, deepfakes, bias, job displacement, and the erosion of human skills. A 2024 Stanford report highlighted data privacy as the top global concern, while 38% of workers fear AI will replace their jobs within three years, with manufacturing facing the largest impact.

While the risks associated with AI are considerable, they can be mitigated through the adoption of ethical AI practices. Ethical AI development focuses on minimizing negative impacts while ensuring that AI systems align with human values, rights, and societal well-being. By prioritizing ethical guidelines and regulatory frameworks, we can harness the transformative power of AI while safeguarding against its potential harms.

This article, the first in a series on AI ethics, will explore the ethical challenges in developing and deploying AI faced by various stakeholders, including AI developers, organizations deploying AI, end-users, and government. It will also offer recommendations for developers to ensure responsible and fair AI development.

Integrating Ethical AI Practices: A Focus on AI Developers

The first group of stakeholders to consider are the developers of AI systems and technologies. This includes major players such as OpenAI, Apple, Amazon Web Services (AWS), IBM Watson, and others. Integrating ethical AI principles into the development process is crucial in shaping how AI impacts human lives. Key ethical issues surrounding AI development include algorithmic bias, privacy concerns, and intellectual property infringement.

a. Algorithmic Bias

Algorithmic bias refers to the presence of systematic and unfair discrimination in the outcomes generated by AI systems. This bias can arise due to biased data, flawed design, or unintended consequences in the algorithm’s decision-making process. Bias often stems from training datasets that are unrepresentative or imbalanced, which can lead to the exclusion of certain groups or races from the AI system’s outputs. As a result, AI may reinforce existing prejudices or perpetuate inequalities, creating significant ethical challenges in its application.

By prioritizing fairness and inclusivity in AI design, developers can help mitigate these risks, ensuring that AI systems work to benefit all users equally and without discrimination.

Best Practices to Prevent Bias:

Diverse and Representative Training Data:
AI systems should be trained on datasets that are diverse and representative of the population they will impact. This includes ensuring that the data accounts for a wide range of demographics, including race, gender, age, socioeconomic status, geographic location, and other factors. For instance, if a hiring system for a manager role is trained on data from past managers who are predominantly male, female applicants may be unfairly flagged or disqualified. To address this, it’s crucial to intentionally diversify the datasets used by such systems, employing strategies like data augmentation or synthetic data generation.
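
By way of illustration, the short Python sketch below (using hypothetical column names and toy data) shows one simple way a development team might rebalance an under-represented group in a training dataset by oversampling before a model is trained. It is an illustrative outline of the idea, not a production technique.

```python
# Minimal sketch (hypothetical column names): rebalancing a hiring dataset
# so that an under-represented group is not drowned out during training.
import pandas as pd

def rebalance_by_group(df: pd.DataFrame, group_col: str, seed: int = 42) -> pd.DataFrame:
    """Oversample smaller groups so every value of `group_col` is equally represented."""
    target_size = df[group_col].value_counts().max()
    balanced_parts = []
    for _, group_df in df.groupby(group_col):
        # Sample with replacement until the group matches the size of the largest group.
        balanced_parts.append(
            group_df.sample(n=target_size, replace=True, random_state=seed)
        )
    return pd.concat(balanced_parts).sample(frac=1, random_state=seed)  # shuffle rows

# Toy applicant dataset in which male records dominate.
applicants = pd.DataFrame({
    "gender": ["male"] * 8 + ["female"] * 2,
    "years_experience": [5, 7, 3, 8, 6, 4, 9, 2, 6, 7],
    "hired": [1, 1, 0, 1, 1, 0, 1, 0, 1, 1],
})
balanced = rebalance_by_group(applicants, "gender")
print(balanced["gender"].value_counts())  # both groups now equally represented
```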

Bias Audits and Regular Testing:
AI models and the data behind them should be tested for biased or disparate outcomes at regular intervals, not only before launch. In addition to internal testing, independent third-party audits can provide an unbiased review of the data and models. External organizations with expertise in ethics and fairness can conduct assessments to ensure that the AI system is operating equitably.
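
The sketch below illustrates, with hypothetical data, the kind of check a recurring bias audit might automate: comparing selection rates across groups and flagging large gaps. The four-fifths ratio used here is only an example threshold, not a legal standard.

```python
# Illustrative bias-audit check: compare selection rates across groups and
# flag any group whose rate falls well below the best-served group's rate.
import pandas as pd

def selection_rate_report(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Share of positive outcomes per group."""
    return df.groupby(group_col)[outcome_col].mean()

def flag_disparity(rates: pd.Series, threshold: float = 0.8) -> bool:
    """Flag if the lowest group rate is below `threshold` times the highest group rate."""
    return (rates.min() / rates.max()) < threshold

audit_data = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0,   0],
})
rates = selection_rate_report(audit_data, "group", "approved")
print(rates)
print("Disparity flagged:", flag_disparity(rates))  # True for this toy data
```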

Algorithm Transparency and Explainability:
Developers should be transparent about how AI models are built, what data they are trained on, and what algorithms are used. Where feasible, systems should also be able to explain individual outputs in terms that affected users and regulators can understand.
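
One common transparency practice is publishing a "model card" alongside the system. The sketch below shows, with entirely hypothetical details, the kind of structured summary such documentation might contain.

```python
# A minimal, hypothetical "model card": a structured record of what the model
# is for, what data it was trained on, and its known limitations, published
# alongside the system so users and auditors can understand how it works.
import json

model_card = {
    "model_name": "loan-screening-v2 (hypothetical)",
    "intended_use": "First-pass screening of loan applications; final decisions remain human.",
    "training_data": {
        "source": "Internal applications 2018-2023, de-identified",
        "known_gaps": ["Applicants under 21 are under-represented"],
    },
    "evaluation": {
        "overall_accuracy": 0.87,
        "accuracy_by_group": {"group_A": 0.89, "group_B": 0.84},
    },
    "limitations": ["Not validated outside Nigeria", "Sensitive to missing income data"],
}

print(json.dumps(model_card, indent=2))
```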

Human-in-the-Loop Systems:
Even with advanced AI systems, human oversight is essential. By including a “human-in-the-loop” approach, humans can intervene and correct biased or unfair decisions before they affect individuals or groups. This is particularly important for high-stakes decisions, such as hiring, lending, or law enforcement.
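
A minimal sketch of how such a gate might work is shown below; the confidence threshold and field names are hypothetical and would depend on the system and its risk profile.

```python
# Human-in-the-loop routing sketch (hypothetical thresholds): automated
# decisions are released only when the model is confident; everything else is
# queued for a human reviewer, and high-stakes cases always go to a human.
from dataclasses import dataclass

@dataclass
class Decision:
    label: str          # e.g. "approve" / "reject"
    confidence: float   # model confidence, 0.0 to 1.0
    high_stakes: bool   # e.g. hiring, lending, law-enforcement use

def route(decision: Decision, confidence_threshold: float = 0.9) -> str:
    if decision.high_stakes or decision.confidence < confidence_threshold:
        return "human_review"   # a person must confirm or override
    return "auto_release"

print(route(Decision("approve", 0.97, high_stakes=False)))  # auto_release
print(route(Decision("reject", 0.97, high_stakes=True)))    # human_review
print(route(Decision("approve", 0.62, high_stakes=False)))  # human_review
```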

b. Privacy Concerns

In 2024, consumers in Germany, Australia, the United Kingdom, and the United States expressed concerns about privacy risks associated with artificial intelligence. Only 56% believed that retailers could ensure data privacy when implementing AI-powered tools. Privacy risks remain one of the most significant ethical challenges in AI development. AI and machine learning systems rely on vast amounts of data to function, which often includes sensitive personal information that may have been collected or used without consent. Additional risks, such as data leakage and exfiltration, further complicate privacy concerns.

A notable example of these risks occurred in 2020 when Clearview AI, a facial recognition company, faced backlash for scraping billions of images from social media platforms without user consent. The AI system, used by law enforcement to identify individuals by matching these images with publicly available data, highlighted the challenges of regulating AI companies operating across borders and handling vast amounts of personal data without consent. This case underscores the pressing need for stronger regulations to protect privacy in the age of AI.

Best Practices for Mitigating Privacy Risks in AI:

Data Minimization:
Data minimization, a principle outlined in various data protection laws including the Nigeria Data Protection Act 2023, mandates that data controllers and processors collect and retain only the minimum amount of data necessary for processing. For AI, this means collecting and storing only the personal data essential for training and operation.
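
In practice, data minimization can be as simple as whitelisting the fields a model genuinely needs before data enters the training pipeline. The sketch below uses hypothetical field names to illustrate the idea.

```python
# Data-minimisation sketch (hypothetical field names): only fields strictly
# needed for the model's purpose are retained; everything else is dropped
# before the data ever reaches the training pipeline.
REQUIRED_FIELDS = {"age_band", "transaction_amount", "merchant_category"}

def minimise(record: dict) -> dict:
    """Keep only the fields the model actually needs."""
    return {k: v for k, v in record.items() if k in REQUIRED_FIELDS}

raw_record = {
    "full_name": "Ada Obi",          # not needed for the model -> dropped
    "phone": "+234-800-000-0000",    # not needed -> dropped
    "age_band": "25-34",
    "transaction_amount": 15200.50,
    "merchant_category": "groceries",
}
print(minimise(raw_record))
```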

Anonymization and Pseudonymization of Data:
Anonymization refers to the process of removing or modifying personal data so that individuals can no longer be identified, directly or indirectly, from the data. Pseudonymization involves replacing identifiable information in a dataset with pseudonyms or artificial identifiers. Techniques such as data masking, encryption, and de-identification help ensure that sensitive information is not directly linked to individuals, enhancing security and fostering greater public trust in AI systems.
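
The sketch below illustrates one common pseudonymization technique: replacing a direct identifier with a salted hash so records can still be linked for analysis without revealing who they belong to. The field names are hypothetical, and in practice the salt or key would be stored separately under strict access controls.

```python
# Pseudonymisation sketch (hypothetical fields): a direct identifier is
# replaced with a salted hash. If the salt is exposed alongside the data,
# re-identification becomes easier, so it must be managed as a secret.
import hashlib

SALT = b"store-me-in-a-secrets-manager"  # illustrative only

def pseudonymise(value: str) -> str:
    return hashlib.sha256(SALT + value.encode("utf-8")).hexdigest()[:16]

record = {"email": "ada@example.com", "city": "Lagos", "purchase": "laptop"}
safe_record = {**record, "email": pseudonymise(record["email"])}
print(safe_record)  # email replaced by a pseudonymous identifier
```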

Privacy Audits:
A data privacy audit is a thorough review process conducted by an organization to evaluate how it handles personal information. Periodic privacy audits are crucial to assess risks associated with AI system development, pinpoint areas for improvement, and ensure compliance with privacy regulations.

c. Intellectual Property Infringement

Intellectual property (IP) infringement is an emerging and complex ethical issue in AI development. As AI systems increasingly rely on vast datasets to generate content—such as text, images, and videos—important questions arise regarding the origins of the data used to train these models. Concerns about whether this data includes copyrighted material or other forms of protected intellectual property are growing. Addressing these concerns is essential for balancing innovation with legal and ethical considerations.

For example, in Raw Story Media v. OpenAI, the plaintiffs claimed OpenAI violated the Digital Millennium Copyright Act (DMCA) by removing Copyright Management Information (CMI) from their articles before using them to train ChatGPT. The Southern District of New York dismissed the case, holding that the plaintiffs lacked standing because they failed to show tangible harm or instances of ChatGPT reproducing their content without attribution. This ruling underscores that such claims require concrete evidence of actual or imminent harm to succeed.

Best Practices for Safeguarding Intellectual Property in the Age of AI:

Obtain Authorization from Copyright Holders:
AI developers should secure explicit authorization from copyright holders for any use of protected content unless specific exceptions apply, such as those for text and data mining.

Promote Transparency in Training Data:
AI developers should be transparent about the data used to train their models. This includes publicly disclosing detailed summaries of the datasets, outlining the types of content incorporated, and clarifying how this data was obtained.

Conclusion
The ethical development of AI is crucial for ensuring that its benefits are maximized while minimizing potential harms. By adopting best practices around fairness, privacy, and intellectual property, AI developers can play a vital role in fostering responsible, human-centered AI technologies that align with societal values and legal standards.

Contributors

Tilewa Oyefeso – Partner at Wigwe & Partners ([email protected])

Emaediong Lawrence – Associate at Wigwe & Partners ([email protected])

Disclaimer:
This article is for informational purposes only and does not constitute legal advice or establish a lawyer-client relationship.
