This interdisciplinary chapter, the product of a collaboration between international lawyers and artificial intelligence (AI) experts, introduces the various phases of the AI life cycle in light of the human rights violations that may arise at each of them. It identifies the root causes of the risks to human rights and analyses possible remedies common to all AI systems, despite their great diversity and the range of domains in which they are used today. The risk of human rights violations arises notably from unbalanced or biased data, insufficiently identified system boundary conditions or a modified deployment context, the black-box character of many systems, and the malicious use or abuse of AI. The central proposal is to carry out human rights risk assessment throughout the whole AI life cycle and to integrate it into the user requirements and system specifications at the initial phase. This ensures, inter alia, that the AI system will be developed, tested, and monitored in light of the applicable human rights constraints. Requirements relating to transparency, explainability, certification, and the selection of development data are all highly relevant to the protection of human rights.