Code and open initiatives
21 resources
Open source AI governance platform for managing AI compliance, risk assessments, and documentation. Supports EU AI Act, ISO 42001, and NIST AI RMF compliance workflows.
Comprehensive toolkit for detecting and mitigating bias in machine learning models. Includes over 70 fairness metrics and 10 bias mitigation algorithms with Python and R support.
Open source toolkit for training interpretable models and explaining black-box systems. Includes Explainable Boosting Machines (EBM) and various explanation methods like SHAP and LIME.
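Explanation methods like SHAP and LIME are, at their core, perturbation-based: they attribute a prediction to features by observing how the output changes when inputs are altered. As a minimal illustrative sketch (not the toolkit's actual implementation), an occlusion-style attribution can be computed in plain Python:

```python
# Perturbation-based attribution in the spirit of SHAP/LIME-style
# explainers: score each feature by how much the model output drops
# when that feature is replaced with a baseline value.
# All names here are illustrative, not a real toolkit API.

def occlusion_attributions(predict, x, baseline):
    """Per-feature drop in prediction when the feature is occluded."""
    full = predict(x)
    attributions = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] = baseline[i]  # occlude feature i
        attributions.append(full - predict(perturbed))
    return attributions

# Toy linear "model": for a linear model, occlusion against a zero
# baseline recovers each weight times the input value.
weights = [2.0, -1.0, 0.5]
predict = lambda x: sum(w * v for w, v in zip(weights, x))

print(occlusion_attributions(predict, [1.0, 1.0, 1.0], [0.0, 0.0, 0.0]))
# [2.0, -1.0, 0.5]
```

Real explainers refine this idea, e.g. SHAP averages such contributions over feature coalitions, but the toy example shows the shared intuition.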
Open source platform for managing the ML lifecycle including experimentation, reproducibility, deployment, and model registry. Provides foundation for ML governance workflows.
Open source framework for testing ML models including LLMs. Provides automated vulnerability detection, bias testing, and quality evaluation for AI systems.
AI Fairness 360 (AIF360) is a comprehensive open-source toolkit that provides metrics to detect unwanted bias in datasets and machine learning models. It includes state-of-the-art algorithms to mitigate identified bias, helping developers build fairer and more equitable AI systems.
An open-source toolkit designed to help detect and mitigate bias in machine learning models across various domains including finance, healthcare, and education. The platform provides practical tools in both Python and R to translate fairness research into real-world applications.
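One of the classic bias metrics implemented by toolkits in this space is disparate impact: the ratio of favorable-outcome rates between an unprivileged and a privileged group (a ratio below roughly 0.8 is often flagged). A plain-Python sketch, with illustrative names rather than any toolkit's real API:

```python
# Disparate impact: P(favorable | unprivileged) / P(favorable | privileged).
# A value near 1.0 means similar favorable-outcome rates across groups.
# Illustrative sketch only; real toolkits such as AIF360 wrap this in
# richer dataset and metric abstractions.

def disparate_impact(outcomes, groups, unprivileged, privileged):
    """Ratio of favorable-outcome rates between two groups."""
    def rate(g):
        selected = [o for o, grp in zip(outcomes, groups) if grp == g]
        return sum(selected) / len(selected)
    return rate(unprivileged) / rate(privileged)

# Toy data: 1 = favorable outcome (e.g. loan approved).
outcomes = [1, 0, 1, 1, 0, 1, 1, 1]
groups   = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(disparate_impact(outcomes, groups, "a", "b"))  # 0.75 / 0.75 = 1.0
```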
An extensible open source toolkit designed to help users understand how machine learning models make predictions. The toolkit provides various methods for explaining AI model behavior throughout the entire AI application lifecycle.
The Microsoft Responsible AI Toolbox is a collection of integrated tools and functionalities designed to help organizations operationalize responsible AI principles in practice. It provides practical resources and capabilities to implement responsible AI approaches across AI development and deployment workflows.
Microsoft's collection of responsible AI tools and practices including open-source packages for assessing AI system fairness and mitigating bias. The platform provides toolkits for understanding both glass-box and black-box ML models to support responsible AI development.
An open-source suite of tools providing model and data exploration interfaces and libraries for better understanding of AI systems. It includes visualization widgets and a responsible AI dashboard that help developers and stakeholders build, assess, and monitor AI systems responsibly and make informed, data-driven decisions.
TensorFlow's Responsible AI Toolkit is a collection of open-source resources and tools designed to help machine learning practitioners develop AI systems responsibly. The toolkit provides practical guidance and technical implementations to support responsible AI development practices within the ML community.
TensorFlow's collection of Responsible AI tools designed to help developers build fairness, interpretability, privacy, and security into AI systems. The tools provide practical implementation guidance for responsible AI development within the TensorFlow ecosystem.
A collection of tutorials and tools provided by TensorFlow to help developers implement Responsible AI practices in machine learning development. The resource builds on Google's AI principles introduced in 2018 and provides practical guidance for ethical AI development.
Fairlearn is an open-source toolkit designed to help assess and improve fairness in machine learning models. It provides metrics, algorithms, and visualizations to identify and mitigate bias in AI systems, built collaboratively by contributors with diverse backgrounds and expertise.
Fairlearn is an open-source Python package that enables developers to assess and mitigate fairness issues in artificial intelligence systems. It provides mitigation algorithms and metrics for evaluating model fairness across different demographic groups.
Fairlearn is an open-source Python library designed to help practitioners assess and improve the fairness of artificial intelligence systems. The tool provides capabilities for evaluating model outputs across different affected groups and implementing fairness improvements in AI systems.
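A representative group-fairness metric from this family is the demographic parity difference: the largest gap in positive-prediction rate between any two groups defined by a sensitive feature. A minimal plain-Python sketch (illustrative names, not Fairlearn's actual implementation):

```python
# Demographic parity difference: max gap in selection rate across
# groups. A value of 0 means every group receives positive predictions
# at the same rate. Sketch only; Fairlearn exposes a richer API for
# this kind of metric.

from collections import defaultdict

def demographic_parity_difference(y_pred, sensitive_features):
    """Largest difference in positive-prediction rate between groups."""
    by_group = defaultdict(list)
    for pred, group in zip(y_pred, sensitive_features):
        by_group[group].append(pred)
    rates = [sum(v) / len(v) for v in by_group.values()]
    return max(rates) - min(rates)

# Toy predictions split across two groups.
y_pred = [1, 1, 0, 1, 0, 0, 1, 0]
group  = ["f", "f", "f", "f", "m", "m", "m", "m"]
print(demographic_parity_difference(y_pred, group))  # 0.75 - 0.25 = 0.5
```

Mitigation algorithms in such toolkits then adjust training or post-process predictions to shrink this gap subject to accuracy constraints.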
A comprehensive analysis of leading AI-powered open-source data governance tools, featuring projects such as Egeria, hosted under the Linux Foundation. The report covers automated metadata synchronization, context-aware search capabilities, and governance zone support for improved data visibility and interoperability.
An open-source interactive toolkit designed for analyzing the internal workings of Transformer-based language models. The tool provides transparency capabilities to help researchers and practitioners understand how large language models operate internally, supporting AI governance through enhanced model interpretability.
A comprehensive guide that evaluates and ranks leading open source AI models including LLaMA 4, Mixtral, and Gemma based on performance, speed, and licensing criteria. The report serves as a resource for selecting appropriate open source AI tools with transparency considerations for 2026.
An open source project focused on providing supply chain security for machine learning models. The tool aims to enhance transparency and trust in ML model distribution and deployment through cryptographic signing and verification mechanisms.
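The tool itself relies on cryptographic signing and verification; as a minimal sketch of the underlying integrity-check idea only (not the project's actual mechanism), a model artifact can be verified against a pinned SHA-256 digest before it is loaded:

```python
# Minimal integrity check for an ML model artifact: compare the file's
# SHA-256 digest against a pinned expected value before loading.
# Real supply-chain tooling goes further, using asymmetric signatures
# so the digest itself cannot be tampered with. Names are illustrative.

import hashlib

def artifact_digest(path, chunk_size=8192):
    """Stream a file and return its hex SHA-256 digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path, expected_digest):
    """Raise if the model file's digest does not match the pinned value."""
    actual = artifact_digest(path)
    if actual != expected_digest:
        raise ValueError(f"digest mismatch for {path}: {actual}")
    return True
```

In practice the pinned digest would be distributed alongside (or inside) a signed manifest, so consumers can detect a model swapped out anywhere between publication and deployment.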