Mental Models of Artificial Intelligence Part 4

Let's explore some of the problems in AI

Jessica Ezemba

12/7/2023 · 9 min read


Okay, back to gender discrimination in hiring

Think back to the Amazon hiring-discrimination case: the algorithm learned that phrases such as "women's soccer" or "women's chess club captain" were strong indicators that a candidate was female. At the same time, the model learned that women were underrepresented in technology and that most top-performing employees were male, since men account for roughly 70% of the technology workforce on average [6].

Each of these patterns was true on its own, but combined, the model concluded that male candidates were preferable: men appeared more successful in tech because men were more represented in tech. Think about it: if you wanted to know which candidates in tech were successful, you would train an algorithm on existing employees in the field (mostly male), pick the most successful ones, learn their traits, and compare new candidates against them.

The problem is that if you are picking top candidates from a majority-male pool, you will mostly pick men, not women. Amazon tried to remove explicit gender signals because of this, but the algorithm found other features highly correlated with being female and was still able to discriminate against women.
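To see how a skewed pool alone produces skewed picks, here is a minimal, hypothetical simulation. The 70% male fraction mirrors the statistic above; the pool size, scores, and everything else are invented for illustration, and both genders are deliberately given the *same* skill distribution:

```python
import random

random.seed(42)

def build_pool(n, male_fraction):
    """Create a toy candidate pool: each candidate has a gender and a
    'skill' score drawn from the SAME distribution for both genders."""
    pool = []
    for _ in range(n):
        gender = "M" if random.random() < male_fraction else "F"
        skill = random.gauss(50, 10)  # identical skill distribution
        pool.append((gender, skill))
    return pool

def top_candidates(pool, k):
    """Pick the k highest-skill candidates."""
    return sorted(pool, key=lambda c: c[1], reverse=True)[:k]

# A pool that mirrors the ~70% male tech workforce mentioned above.
pool = build_pool(10_000, male_fraction=0.7)
top = top_candidates(pool, 100)
male_share = sum(1 for g, _ in top if g == "M") / len(top)
print(f"Male share of top 100: {male_share:.0%}")
```

Even though skill is identical across genders here, roughly 70% of the "top performers" come out male, so an algorithm trained to imitate past top performers learns maleness as a proxy for success.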

You could go deep into the theory of why algorithms pay attention to different things, but for our purposes: that attention can be positive, because it helps you discover something you never considered before, or negative, because the algorithm is weighting a part of the input that is not actually important. This is where biases in artificial intelligence algorithms occur.

But first, let's talk about the good!

Artificial intelligence can process enormous amounts of data, correlate input-output pairs, and produce usable solutions, and current technology does this very well. The ability to search Google with an image, or hum a tune into Spotify and find the song, was unimaginable 20 years ago. Most of the benefits of artificial intelligence come from letting people work more efficiently and from personalizing products and services. It is behind most recommendation systems, such as TikTok, YouTube, and Netflix, and it shapes the information we have access to. As you interact with the technology, its weights and biases get updated to your preferences, which eliminates the need for you to store that information in your own mind.
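As a toy illustration of that updating loop (the topics, numbers, and update rule here are all invented for illustration), a recommender can be thought of as nudging per-topic weights toward whatever you click:

```python
def update_preference(weights, clicked_topic, lr=0.1):
    """Nudge each topic weight toward 1 if it was clicked, toward 0 if
    not -- a toy stand-in for how recommenders personalise over time."""
    return {topic: w + lr * ((topic == clicked_topic) - w)
            for topic, w in weights.items()}

# Start with no particular preference.
weights = {"cooking": 0.33, "sports": 0.33, "music": 0.33}

for _ in range(10):  # the user keeps clicking music videos
    weights = update_preference(weights, "music")

print(weights)  # "music" now dominates the profile
```

After ten clicks on one topic, the profile has drifted heavily toward it, which is why the feed you see diverges from your neighbour's so quickly.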

A great limitation of the human mind is that it is bounded by personal experience. It cannot generate information it has never encountered or connect data it has never interacted with. Artificial intelligence is not limited to a single data source (i.e., one human's experience); it can draw on multiple streams of information, generate novel ideas and concepts, and connect the dots between information from varying sources. This is why technologies that surface new insights from existing knowledge are popular. Generating video content from a sentence of input, or understanding a document through an in-depth, well-written summary, gives people access to connections that would have been almost impossible to tie together.

This is also beneficial across industries where understanding more data leads to better outcomes, such as health care. Artificial intelligence has the potential to tie symptoms to diseases with far greater accuracy than humans can, because it is not limited to any one person's experience. People who use artificial intelligence are more likely to discover novel ideas, products, and services than those who do not.

Okay, now the bad

I could go on and on with specific examples of how AI solves problems across industries by connecting dots from multiple sources more efficiently and repeatably, but that does not mean the technology is free of challenges. These challenges are what keep some users polarized and actively avoiding the technology. Artificial intelligence faces many challenges, as any technology does, but the ones touched on in this article are generalizability, bias, accessibility, and privacy.


Generalizability

In the artificial intelligence community, generalizability refers to an algorithm's ability to take inputs from different areas and still produce usable outputs. For example, imagine an algorithm trained to extract text from legal documents and summarize it so a layperson can read it (which, if you have ever tried to read terms and conditions or an apartment lease agreement, would be awesome to have). That same algorithm may not work when asked to summarize a medical document.

This is because, again, it all comes down to data. The data used to train the legal-document summarizer to impressive accuracy may have excluded other fields in order not to confuse the input-output correlations. The majority of work published right now is based on specific, narrow applications that work really well on those specific tasks. Artificial intelligence is bounded by the data presented to it at training time.

Some technologies now bridge these gaps in specificity because they are trained on large (really large) amounts of data, giving the algorithm enough input-output examples to relate many adjacent concepts. But these models require so much data to begin with that only the AI powerhouses (Amazon, Google, Meta, Microsoft, and IBM) have been able to produce them, which raises further issues of bias and accessibility.


Bias

The term bias here refers not to a number but to a preference for something. All humans have biases ingrained in them, formed by their life experiences. Bias is not always bad: it helped us survive by steering us toward things that brought benefit (pleasure) and away from things that did not (pain). Humans can find it difficult to recognize exactly what they are biased toward, but when a bias is pointed out, there is typically a recognizable reason it exists. Unfortunately, with computers, it is hard to point out and explain why a bias exists in an algorithm, though a lot of research is going into ways to prevent biases from slipping into algorithms.

Good data does not just mean data free of technical errors; it means data representative enough for the prediction task. Because biases exist in the human world, they can easily be transferred to a computer. For example, data gathered from text conversations on the internet can be biased against non-English speakers, people without internet access, or people who rarely contribute to platforms that have historically skewed toward one gender. Calling such data a good representation of human communication is not accurate, and it can leave the program trained from a limited perspective.

These are the issues that typically surface in popular media when a company's algorithm takes input from a user and produces racist output, as happened with Google [7]. The algorithm had learned to associate images of Black men with criminality (mug shots), most likely because text on the internet made the same association. This is one reason some researchers are calling for a more diverse workforce of machine learning scientists, who could pinpoint these issues at data-gathering time and reduce some of these negative effects.

Discrimination in Algorithms

While Google's association of a Black man with a prisoner was offensive, algorithms have the potential to do far more harm in higher-stakes settings such as medical and mental-health diagnostics, court decisions, and loan approvals.


Accessibility

Accessibility is another consideration, and a motivation behind this article. There is a lack of accessible knowledge, data, and transparency in how artificial intelligence algorithms are created. You may hear the buzz of artificial intelligence everywhere, but in reality only a few people control how these algorithms are developed and used. Only about 8% of companies have dedicated artificial intelligence algorithms [8]. The lack of accessibility becomes even more apparent when you think about who develops these algorithms: typically well-off individuals with high socio-economic status and advanced education (typically white and male), creating technology for the entire world. This is not unique to artificial intelligence, but it can become a problem when people use an algorithm blindly and it is applied to people who do not know it is being used on them.

This limitation on who makes and maintains the technology is so large because the education gap has many prerequisites an individual needs a core foundation in. And they aren't easy ones either: math (linear algebra, calculus, probability), statistics, and programming. These are mostly taught in higher education, and sometimes in well-funded high schools, which cuts off the portion of the population that cannot access that instruction. And you can keep adding other factors that shrink the number of people who can reach this information and these resources.

Data Privacy

Data privacy is, as the name implies, about who owns your data. This is another big research area, not only in machine learning but in other communities and government agencies, so this too will be a summary. Anyone can make a copy of their favorite movie and sell it, right? Sure, if you want prison to be your next house. Taking or copying something that doesn't belong to you is nothing new, and it has long been condemned by society and the law, but something about data on the internet is different.

Data on the internet is free for anyone to use, right? Sure, if you abide by the website's terms and conditions and copyright rules… which everyone does, of course. Taking information from the internet has become so commonplace that it hardly seems illegal.

What does this have to do with data privacy, you might ask? Any data you put on the internet is free for use… to those who can afford to use it. Companies are not bound by the same regulations that govern copyright or piracy, so they can take, store, use, and sell data that is not theirs. Before the boom of artificial intelligence, the data everyone shared was simply too vast to be understood at scale. With artificial intelligence, companies can learn your preferences and your buying power. For example, a company might know you love buying video games every month but are currently a student (low purchasing power), so it can show you ads for affordable games or buy-one-get-one-free promotions. A grandparent buying games for their grandchildren, with higher purchasing power, might only ever be shown the full price. This is why a privacy-versus-personalization debate is underway across industries and governments, and it is another important consideration in artificial intelligence applications.

Okay, that’s a wrap!

Artificial intelligence is on the rise, with more of the technology being integrated into our everyday lives. Because of this expansion, this article dove into what artificial intelligence does under the hood, to better inform the everyday person about the benefits and limitations of these algorithms and how to start thinking about them. Artificial intelligence creates patterns from input-output combinations and uses those patterns to classify new, unseen examples. Technology such as deep learning draws its power from data, which can surface new opportunities or uncover existing biases in our world. Information, as the popular saying goes, is power. Knowing, even at a high level, what happens when an input is given to an AI algorithm to produce an output is enough to get more people asking questions about AI outputs and to build a sufficient level of trust when using these algorithms.
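To make "patterns from input-output combinations" concrete, here is a minimal sketch of one of the simplest such algorithms, a nearest-neighbour classifier; the screen-time data and labels are invented for illustration:

```python
def nearest_neighbour(train, query):
    """Classify a new point by copying the label of the closest
    training example -- pattern matching on input-output pairs."""
    best = min(train, key=lambda pair: abs(pair[0] - query))
    return best[1]

# Input-output pairs the "algorithm" has seen before:
# (hours of daily screen time, label)
train = [(1, "light user"), (2, "light user"),
         (8, "heavy user"), (9, "heavy user")]

print(nearest_neighbour(train, 1.5))  # -> light user
print(nearest_neighbour(train, 7.0))  # -> heavy user
```

A new, unseen input gets the label of whatever past example it most resembles: that is the whole trick, and it is also why the examples the algorithm was shown matter so much.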


[1] Chiang, C. W., & Yin, M. (2022, March). Exploring the Effects of Machine Learning Literacy Interventions on Laypeople’s Reliance on Machine Learning Models. In 27th International Conference on Intelligent User Interfaces (pp. 148-161).

[2] Buçinca, Z., Malaya, M. B., & Gajos, K. Z. (2021). To trust or to think: cognitive forcing functions can reduce overreliance on AI in AI-assisted decision-making. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW1), 1-21.

[3] Ehsan, U., Liao, Q. V., Muller, M., Riedl, M. O., & Weisz, J. D. (2021, May). Expanding explainability: Towards social transparency in ai systems. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (pp. 1-19).

[4] Dealing With Bias in Artificial Intelligence | The New York Times

[5] What are Mental Models? | IxDF

[6] Amazon scraps secret AI recruiting tool that showed bias against women | Reuters

[7] The ‘three black teenagers’ search shows it is society, not Google, that is racist | Antoine Allen | The Guardian

[8] AI Is All the Rage. So Why Aren't More Businesses Using It? | WIRED
