
Exploring AI Bias: An Introduction to What You Need to Know




What is AI Bias?


Artificial intelligence (AI) has taken the world by storm.  Most recognizably, tech companies have been vying for the public's attention with their own AI chatbots, such as OpenAI's ChatGPT, Google's Bard, and Microsoft's Copilot.  These chatbots have been used to compose marketing copy, reports, contracts, and software code.  AI is also being developed and used for other functions, such as screening resumes, recognizing faces, taking notes, and creating images.


AI bias occurs when an AI system produces biased responses or outputs.  The topic is also discussed under the related labels of ethical AI and responsible AI, with some distinctions among the terms.  A common perception is that avoiding discrimination is only a matter of hiring diverse employees in diversity and inclusion efforts or reacting to direct employee actions.  Contrary to that perception, AI itself can produce biased or discriminatory outputs, and those outputs can lead the individuals and companies who rely on AI to make biased or discriminatory decisions.

For example, AI-powered recruiting software may screen out applicants over 40 because it learned from who landed those positions in the past, and none were over 40.  Facial recognition may correctly identify white male faces but misidentify female faces or the faces of people of color.  AI photo editors may replace Asian faces with white faces, and AI image generators may produce only stereotyped content, even when a user explicitly wants an image that counteracts those stereotypes.  AI may also favor responses from a particular cultural perspective, such as that of its creator or of a majority group, while disfavoring the perspectives of other cultures or minority groups.
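The resume-screening example above can be made concrete.  One common rule of thumb used in US employment-discrimination analysis is the "four-fifths rule": if one group's selection rate is less than 80% of the most-favored group's rate, the tool may be flagged for adverse impact.  The sketch below is a hypothetical illustration with invented numbers, not a legal test or a real audit tool:

```python
# Hypothetical illustration: checking an automated resume screen's
# selection rates by age group against the four-fifths (80%) rule.
# All applicant counts below are invented for demonstration only.

def selection_rate(selected: int, total: int) -> float:
    """Fraction of applicants in a group who passed the screen."""
    return selected / total

# Invented outcomes of the screen for two age groups
under_40 = selection_rate(selected=60, total=100)  # 0.60
over_40 = selection_rate(selected=15, total=100)   # 0.15

# Four-fifths rule of thumb: a selection rate below 80% of the
# highest group's rate is often treated as evidence of adverse impact.
impact_ratio = over_40 / under_40                  # 0.15 / 0.60 = 0.25
flagged = impact_ratio < 0.8

print(f"Impact ratio: {impact_ratio:.2f}, flagged: {flagged}")
```

Here the over-40 group is selected at only a quarter of the under-40 rate, well below the 80% threshold, so the screen would warrant closer review.  A real assessment would involve statistical significance testing and legal counsel, not just this ratio.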


Also contrary to common perception, conscious and willful hate, malice, or hostility toward a group of people is not the only cause of bias and discrimination.  AI is known to hallucinate (make up untrue or fictional content) and can learn to produce hateful or discriminatory content, but AI bias does not require the kind of malice people usually associate with discrimination.  Among other causes, bias can arise simply from favoring one group over another, without any specific hostility toward the disfavored group.


Why should I care about AI bias and do something about it?


When the disfavored group falls within a protected category, such as race, gender, sex, religion, or national origin, that bias can create legal problems for companies, both the creators and the users of such AI.  Some AI issues are novel, and it will take time for courts and adjudicative bodies to sort out outcomes and for legislatures to draft new laws to address them.  Other AI issues, however, such as discrimination, are old problems applied in new ways.  Bodies such as the US Equal Employment Opportunity Commission (EEOC) are already issuing guidance on AI bias.


What can be done about AI bias? Who can help?


Companies can learn more about the topic and develop a strategy for dealing with AI bias, as well as a larger AI strategy that addresses other consequences of AI, such as copyright and privacy issues.  Involving a lawyer or HR can help determine how the law or corporate policy, respectively, applies to your company's use of AI, but lawyers and HR cannot solve every problem related to AI bias.  They typically handle issues where there is precedent (existing law or existing policy) and deal with only a narrow scope of issues.

Hiring a facilitator can bring together various parts of the company, such as legal, HR, policy, and engineering, to discuss what to do about the issue.  Hiring a coach can help companies figure out what to do and how to do it in a way that aligns with the company's overall strategy, values, risk tolerance, and other considerations beyond the legal ones.  While many corporate diversity, equity, and inclusion (DEI) initiatives focus on hiring, DEI initiatives can also help prevent bias or discrimination in the products and services that companies provide to their clients or customers, as well as in the products and services companies themselves use.  Through coaching, companies can start to redefine what DEI means to them to improve not only corporate culture but also growth, profit, and sustainability.


Whereas lawyers, consultants, psychologists, and other subject-matter experts excel where there is existing external knowledge to draw on, coaches excel at empowering clients to make decisions and take action in uncharted territory, on novel topics, and in novel, tailored ways.  Coaches also draw answers out of clients when the answer lies within and when clients want to weigh all of the relevant factors, not just one area of expertise, in making a decision.  Just as people cannot easily perform surgery on themselves, clients find it difficult to uncover their own blind spots without a coach's assistance.  Coaches and other process experts empower clients to reach their own outcome by guiding the how (the process), whereas subject-matter experts tell clients what the outcome should be based on the experts' own values and knowledge.

 

To learn more about AI workshops, facilitation, and coaching services offered by Big Little Insights, read more here: https://www.biglittleinsights.com/ai
