About
This skill provides a comprehensive framework for identifying and addressing algorithmic bias in machine learning models and LLMs. It enables developers to run systematic fairness evaluations across protected categories, detect hidden proxy variables, and apply mitigation strategies at every stage of the AI lifecycle, from pre-processing training data to post-processing model outputs. It is essential for teams building ethical, compliant, and transparent AI systems that must adhere to international fairness standards and to legal frameworks such as the U.S. Civil Rights Act and the Americans with Disabilities Act (ADA).
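As a concrete illustration of the kind of fairness evaluation described above, the sketch below computes a demographic parity difference: the gap in positive-outcome rates between groups defined by a protected attribute. This is a minimal, assumption-laden example (the function name, sample data, and group labels are hypothetical, not part of the skill's actual API):

```python
# Illustrative fairness check: demographic parity difference.
# A gap of 0 means all groups receive positive outcomes at the same rate.
# Group labels and sample data below are hypothetical.

def demographic_parity_difference(outcomes, groups):
    """Max gap in positive-outcome rate across groups.

    outcomes: iterable of 0/1 model decisions
    groups:   iterable of group labels (a protected attribute)
    """
    counts = {}  # group -> (total, positives)
    for y, g in zip(outcomes, groups):
        n, pos = counts.get(g, (0, 0))
        counts[g] = (n + 1, pos + y)
    rates = {g: pos / n for g, (n, pos) in counts.items()}
    return max(rates.values()) - min(rates.values())

outcomes = [1, 0, 1, 1, 0, 1, 0, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_difference(outcomes, groups)
# Group A rate = 3/4, group B rate = 1/4, so gap = 0.5
```

In practice, an evaluation like this would be run per protected category and paired with proxy-variable checks, since a model can reproduce bias through correlated features even when the protected attribute itself is withheld.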