Abstract
This chapter discusses the growing importance of artificial intelligence (AI) systems across different areas of human life and the attendant concerns about these new technologies exhibiting biased or even discriminatory behaviour, especially in contexts where multiple dimensions of diversity, such as gender, colour, religion or ethnicity, overlap. Using Kimberlé Crenshaw's (1989) concept of intersectionality as our main framework and adding perspectives from current legal and social science discourses, we argue that discriminatory AI is a human-made problem and can therefore only be tackled through a human-centred approach. This approach includes discussing protected attributes and their (in)stability, vulnerability, and essentialist versus non-essentialist attributions of group identity, as well as focusing on human-made inequalities and power imbalances, with special attention to minority women, as the sources of biased AI systems. AI models are biased and discriminatory because our societal structures are as well; solutions that address only technological challenges therefore fall short of tackling the underlying issue of inequality. The chapter analyses the EU AI Act and the European Centre for Algorithmic Transparency as possible strategies for mitigating discriminatory effects through AI governance and concludes that successfully creating fair AI will not be possible without addressing the societal roots of its discriminatory behaviour.