Floorish Newsletter: AI & DEI

Welcome to the second edition of the Floorish newsletter, dedicated to providing you with insightful data, ideas and views on diversity, equity and inclusion. In no more than 3 minutes of your time, this newsletter aims to keep you informed and inspired with thought-provoking content, practical tips and real-world stories.


AI can have diverging effects on diversity, equity and inclusion. Let’s explore artificial intelligence’s (AI) potential in healthcare, education, recruitment, the legal sector, finance, retail and the public sector.

1. Healthcare

 Improved access: AI in healthcare, specifically through telemedicine and remote monitoring, enhances access to medical services. By leveraging AI-powered technologies, individuals in underserved areas can receive remote consultations and continuous monitoring, eliminating the need for physical visits and expanding healthcare reach to those who face limited access.

 Biased outcomes: AI algorithms are trained on vast amounts of data, and if that training data is biased or incomplete, it can lead to biased outcomes in healthcare. For example, if historical patient data used to train AI systems primarily represents certain demographics or underrepresents specific populations, the algorithms may produce biased recommendations or diagnoses. This can result in unequal treatment, misdiagnosis or inadequate care for certain patient groups.

2. Education

 Personalised learning: AI can adapt learning materials and provide tailored recommendations to meet diverse student needs, accommodating different learning styles and abilities.

 Reduced human support: Over-reliance on AI systems and automated processes may lead to a loss of human connection, which is essential for fostering inclusive and supportive learning environments. Some students, particularly those from marginalised backgrounds or with diverse learning needs, may require individualised attention, guidance and emotional support that AI systems may not adequately provide.

3. Recruitment

 Fair hiring: AI can mitigate biases in hiring by standardising the evaluation process, focusing on skills and qualifications rather than demographic factors. AI can help identify and eliminate gender-biased language in job postings and provide suggestions for inclusive language. AI can also test skills, aptitudes and readiness in a way that helps candidates break through glass ceilings to reach senior positions.

 Reinforced inequalities: AI systems in recruitment may inadvertently perpetuate socioeconomic inequalities. If the historical data used to train AI algorithms reflects biased patterns, such as privileging candidates from certain educational or professional backgrounds, the algorithms may reinforce these inequalities. This can result in overlooking talented individuals from disadvantaged backgrounds who may possess valuable skills and potential.
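For readers curious about the mechanics, this failure mode can be sketched in a few lines of Python. This is a minimal, hypothetical illustration (all names, schools and scores are made up): a screening rule "trained" only to resemble past hires will reject a stronger candidate from a background the historical data never rewarded.

```python
# Toy illustration (hypothetical data): a naive screening rule that favours
# whatever features past hires share will replicate historical bias.

# Hypothetical historical hires: all from the same two "target" schools.
past_hires = [
    {"school": "Elite U", "skill": 7},
    {"school": "Elite U", "skill": 6},
    {"school": "Big Tech Academy", "skill": 8},
    {"school": "Elite U", "skill": 5},
]

# "Training": simply learn which schools appear among past hires.
preferred_schools = {h["school"] for h in past_hires}

def screen(candidate):
    # Naive rule: shortlist candidates who resemble past hires,
    # regardless of their actual skill level.
    return candidate["school"] in preferred_schools

applicants = [
    {"name": "A", "school": "Elite U", "skill": 5},
    {"name": "B", "school": "State College", "skill": 9},  # stronger, but unlike past hires
]

shortlist = [a["name"] for a in applicants if screen(a)]
print(shortlist)  # only "A" is shortlisted, despite B's higher skill
```

Real recruitment models are far more complex, but the principle is the same: if the training data encodes who was hired before, the model optimises for resemblance to the past, not for merit.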

4. Legal

 Improved access: AI can enhance access to legal services by providing automated assistance and self-help tools. This can benefit marginalised communities who may face barriers in accessing legal advice due to cost or geographical limitations.

 Reduced trust: The use of AI in legal processes raises concerns about accountability, transparency and fairness. AI systems may be unable to explain the reasoning behind their decisions, making it difficult to challenge or contest their outcomes, which can lead to potential injustices and reduced trust in the legal system.

5. Finance

 Reduced bias in decision-making: AI algorithms have the potential to reduce bias in financial decision-making processes. By relying on data-driven analysis rather than subjective judgments, AI can help mitigate human biases that can influence lending decisions, investment choices or credit assessments. This can promote fairness and equal opportunities by ensuring that decisions are based on objective criteria rather than subjective factors.

 Limited accessibility: AI technologies in the finance sector rely on digital infrastructure and access to technology. However, marginalised communities, such as low-income individuals or those in rural areas, may have limited access to digital resources, creating a digital divide. This can further marginalise these communities and hinder their ability to benefit from AI-driven financial services and opportunities, deepening existing inequalities.

6. Retail

 Improved access: AI can improve accessibility for individuals with disabilities through features such as AI-powered chatbots and virtual assistants that provide real-time assistance, making retail platforms and services more inclusive.

 Misused information: AI in retail can raise concerns about privacy and data discrimination, because algorithms collect and analyse vast amounts of customer data. There is a risk of unauthorised access to or misuse of personal information, and discriminatory outcomes may arise if AI systems disproportionately target certain demographic groups.

7. Public sector

 Enhanced decision-making: By leveraging AI algorithms, governments and public institutions can analyse large datasets to identify patterns and trends related to social inequalities. This can help inform evidence-based policies and interventions that address the specific needs of marginalised communities, promoting fairness and inclusivity in public sector initiatives.

 Undermined trust: AI implementation in the public sector may lead to a lack of transparency and accountability. Complex AI algorithms may make decisions that are difficult to explain or understand, making it hard to challenge or question those decisions. This lack of transparency can undermine trust in public institutions and create concerns about potential biases or discriminatory outcomes that may disproportionately affect marginalised groups.


1. Microsoft’s chatbot “Tay”

In 2016, Microsoft’s chatbot “Tay” quickly became a prime example of AI going wrong for diversity, equity and inclusion. Tay was designed to interact with users on Twitter and learn from their conversations to improve its responses. Trolls and malicious individuals quickly exploited this feature and bombarded Tay with racist, sexist and hateful messages. Within hours of its launch, Tay started posting offensive and inflammatory tweets, reflecting the hateful and racist messages it had received.

2. Amazon’s recruitment tool

In 2018, it was reported that Amazon had developed an AI-powered recruiting tool to automate the screening of job applicants. However, the algorithm was discovered to be biased against women, systematically downgrading female candidates. The algorithm had been trained on historical resumes, mostly from male applicants, leading to gender-based discrimination.

3. Apple’s virtual assistant

In 2020, Apple came under scrutiny for the gender bias exhibited by its virtual assistant, Siri. It was observed that Siri’s responses to certain gender-related questions perpetuated stereotypes and demonstrated a lack of understanding of gender diversity. For example, when asked questions like “Are you a feminist?” or “What do you think about gender equality?”, Siri provided evasive or dismissive responses that failed to acknowledge the importance of gender equality or feminist movements.


In the realm of diversity, equity and inclusion, AI holds immense potential to make a positive impact. However, it also gives rise to significant concerns. One such concern revolves around the lack of diversity within AI development teams. When teams lack diversity, they may inadvertently overlook or underestimate the perspectives, needs and experiences of marginalised communities. This oversight can result in AI products that fail to adequately address those communities’ diverse requirements. Moreover, AI algorithms, being products of their training data, have the potential to learn and perpetuate biases, leading to discriminatory outcomes.

While the possibilities offered by AI are undeniably promising, it is crucial to remember that this technology relies on human-collected and curated data. Since the advancement of AI tools hinges on human choices, it becomes imperative to conduct regular testing of both the data and the models. Through continuous evaluation and adjustments over time, we can ensure that this technology effectively fulfils its intended purpose. The journey towards an inclusive AI future is ongoing and there is still much work to be done!

I hope these insights have sparked your curiosity and I invite you to share any data, ideas or views you believe should be highlighted in future newsletters. Stay tuned for the next edition which will explore diversity, equity and inclusion and diversity washing.

Warm regards,

Floor Martens


© 2023 Floorish. All rights reserved.