ISBN-13: 9786209538230 / English / Paperback / 2026 / 60 pp.
Through a systematic literature review, this paper examines how certain AI algorithms exhibit bias, a major ethical and social risk given that automated systems now take part in a wide range of decisions. It also discusses how a lack of diversity in AI development teams and in training datasets increases the likelihood of bias, and it highlights the ethical and social implications of these biases in different contexts: in the workplace, where automated hiring tools have discriminated against female candidates, and in medicine, where diagnoses are less accurate for women owing to a lack of representative data. From a regulatory standpoint, the paper notes the insufficiency of binding legal frameworks: although efforts exist, such as the European Union's proposed AI Act and UNESCO's ethical recommendations, many of these regulations lack effective enforcement mechanisms. By contrast, countries such as Canada have moved forward with mandatory algorithmic impact assessment tools, which represent a more effective model.