
Open source governance projects

Open source code and community initiatives for AI governance.

21 resources

Tool • VerifyWise • 2024

VerifyWise - Open Source AI Governance Platform

Open source AI governance platform for managing AI compliance, risk assessments, and documentation. Supports EU AI Act, ISO 42001, and NIST AI RMF compliance workflows.

Governance platforms
Tool • IBM Research • 2018

AI Fairness 360 (AIF360)

Comprehensive toolkit for detecting and mitigating bias in machine learning models. Includes over 70 fairness metrics and 10 bias mitigation algorithms with Python and R support.
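
A minimal usage sketch, assuming a small pandas DataFrame with a binary label column "hired" and a protected attribute "sex" (all column names and data are illustrative, not from the toolkit's documentation):

# Detect and mitigate group bias with AIF360 on hypothetical data.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

df = pd.DataFrame({
    "experience": [1, 5, 3, 8, 2, 7],
    "sex":        [0, 0, 0, 1, 1, 1],   # protected attribute, 1 = privileged
    "hired":      [0, 1, 0, 1, 1, 1],   # binary label
})
dataset = BinaryLabelDataset(df=df, label_names=["hired"],
                             protected_attribute_names=["sex"])
privileged, unprivileged = [{"sex": 1}], [{"sex": 0}]

# One of the fairness metrics: difference in positive-outcome rates between groups.
metric = BinaryLabelDatasetMetric(dataset, privileged_groups=privileged,
                                  unprivileged_groups=unprivileged)
print("Statistical parity difference:", metric.statistical_parity_difference())

# One of the mitigation algorithms: reweigh training instances before model fitting.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
dataset_transf = rw.fit_transform(dataset)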

Risk assessment tools
Tool • Microsoft Research • 2019

InterpretML - Machine Learning Interpretability

Open source toolkit for training interpretable models and explaining black-box systems. Includes Explainable Boosting Machines (EBM) and various explanation methods like SHAP and LIME.
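
A minimal sketch of the glass-box workflow, assuming scikit-learn is installed for the sample data (the dataset choice is illustrative):

# Train an Explainable Boosting Machine and inspect global and local explanations.
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)

show(ebm.explain_global())                       # which features matter overall
show(ebm.explain_local(X_test[:5], y_test[:5]))  # why these rows were scored this way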

Transparency tooling
Tool • Databricks • 2018

MLflow - ML Lifecycle Management

Open source platform for managing the ML lifecycle including experimentation, reproducibility, deployment, and model registry. Provides foundation for ML governance workflows.
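
A minimal tracking sketch, assuming a local tracking store and a scikit-learn model (the experiment name and hyperparameters are illustrative):

# Log parameters, metrics, and the trained model for later review and audit.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    *load_breast_cancer(return_X_y=True), random_state=0)

mlflow.set_experiment("credit-risk-model")
with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100, max_depth=5).fit(X_train, y_train)
    mlflow.log_param("n_estimators", 100)
    mlflow.log_param("max_depth", 5)
    mlflow.log_metric("test_accuracy", model.score(X_test, y_test))
    mlflow.sklearn.log_model(model, "model")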

Governance platforms
Tool • Giskard • 2022

Giskard - ML Testing & Quality Framework

Open source framework for testing ML models including LLMs. Provides automated vulnerability detection, bias testing, and quality evaluation for AI systems.
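
A minimal scan sketch for a tabular classifier (written against Giskard's Python API; the dataset and label names are illustrative):

# Wrap a model and dataset, then run Giskard's automated vulnerability scan.
import giskard
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression

data = load_breast_cancer(as_frame=True)
clf = LogisticRegression(max_iter=5000).fit(data.data, data.target)

g_model = giskard.Model(
    model=lambda df: clf.predict_proba(df),   # prediction function over a DataFrame
    model_type="classification",
    classification_labels=[0, 1],
    feature_names=list(data.data.columns),
)
g_dataset = giskard.Dataset(df=data.frame, target="target")

report = giskard.scan(g_model, g_dataset)     # performance, robustness, bias detectors
report.to_html("giskard_scan.html")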

Evaluation tooling
Tool • IBM Research • 2018

AI Fairness 360

AI Fairness 360 (AIF360) is a comprehensive open-source toolkit that provides metrics to detect unwanted bias in datasets and machine learning models. It includes state-of-the-art algorithms to mitigate identified bias, helping developers build fairer, more equitable AI systems.

Governance platforms
Tool • IBM/Linux Foundation AI • 2018

AI Fairness 360

An open-source toolkit designed to help detect and mitigate bias in machine learning models across various domains including finance, healthcare, and education. The platform provides practical tools in both Python and R to translate fairness research into real-world applications.

Governance platforms
Tool • IBM • 2024

AI Explainability 360 Toolkit

An extensible open source toolkit designed to help users understand how machine learning models make predictions. The toolkit provides various methods for explaining AI model behavior throughout the entire AI application lifecycle.

Governance platforms
Tool • Microsoft • 2024

Microsoft Responsible AI Toolbox

The Microsoft Responsible AI Toolbox is a collection of integrated tools designed to help organizations operationalize responsible AI principles, providing practical resources for implementing responsible AI approaches across development and deployment workflows.

Governance platforms
Tool • Microsoft • 2024

Responsible AI Tools and Practices

Microsoft's collection of responsible AI tools and practices including open-source packages for assessing AI system fairness and mitigating bias. The platform provides toolkits for understanding both glass-box and black-box ML models to support responsible AI development.

Governance platforms
Tool • Microsoft • 2024

Responsible AI Toolbox

An open-source suite of tools providing model and data exploration interfaces and libraries for better understanding of AI systems. It includes visualization widgets and a responsible AI dashboard that enables developers and stakeholders to develop, assess, and monitor AI systems more responsibly while making informed data-driven decisions.
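
A minimal sketch of the dashboard workflow, assuming the responsibleai and raiwidgets packages and a scikit-learn classifier (the dataset and column names are illustrative):

# Collect explanations and error analysis, then launch the Responsible AI dashboard.
from responsibleai import RAIInsights
from raiwidgets import ResponsibleAIDashboard
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

frame = load_breast_cancer(as_frame=True).frame
train, test = train_test_split(frame, test_size=0.2, random_state=0)
model = RandomForestClassifier().fit(train.drop(columns="target"), train["target"])

rai_insights = RAIInsights(model, train, test, target_column="target",
                           task_type="classification")
rai_insights.explainer.add()       # model explanations
rai_insights.error_analysis.add()  # where the model makes mistakes
rai_insights.compute()

ResponsibleAIDashboard(rai_insights)  # interactive widget for assessment and review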

Governance platforms
Tool • Google • 2024

Responsible AI Toolkit

TensorFlow's Responsible AI Toolkit is a collection of open-source resources and tools designed to help machine learning practitioners develop AI systems responsibly, pairing practical guidance with technical implementations for the ML community.

Governance platforms
Tool • Google • 2020

Responsible AI with TensorFlow

TensorFlow's collection of Responsible AI tools designed to help developers build fairness, interpretability, privacy, and security into AI systems. The tools provide practical implementation guidance for responsible AI development within the TensorFlow ecosystem.

Governance platforms
Tool • Google • 2024

Responsible AI Tutorials

A collection of tutorials and tools provided by TensorFlow to help developers implement Responsible AI practices in machine learning development. The resource builds on Google's AI principles introduced in 2018 and provides practical guidance for ethical AI development.

Governance platforms
Tool • Open Source Community • 2020

Fairlearn

Fairlearn is an open-source toolkit designed to help assess and improve fairness in machine learning models. It provides metrics, algorithms, and visualizations to identify and mitigate bias in AI systems, built collaboratively by contributors with diverse backgrounds and expertise.

Governance platforms
Tool • Fairlearn Community • 2020

Fairlearn: A Python Package to Assess and Improve Fairness of Machine Learning Models

Fairlearn is an open-source Python package that enables developers to assess and mitigate fairness issues in artificial intelligence systems. It provides mitigation algorithms and metrics for evaluating model fairness across different demographic groups.
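
A minimal assessment sketch using Fairlearn's MetricFrame (the labels, predictions, and sensitive feature below are illustrative):

# Break a metric down by demographic group and quantify the disparity.
import pandas as pd
from fairlearn.metrics import MetricFrame, demographic_parity_difference
from sklearn.metrics import accuracy_score

y_true = pd.Series([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = pd.Series([1, 0, 0, 1, 0, 1, 1, 0])
sex    = pd.Series(["F", "F", "F", "F", "M", "M", "M", "M"])  # sensitive feature

mf = MetricFrame(metrics=accuracy_score, y_true=y_true, y_pred=y_pred,
                 sensitive_features=sex)
print(mf.by_group)      # accuracy per group
print(mf.difference())  # largest gap between groups

# Disparity in selection rates between groups (demographic parity).
print(demographic_parity_difference(y_true, y_pred, sensitive_features=sex))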

Governance platforms
Tool • arXiv • 2023

Fairlearn: Assessing and Improving Fairness of AI Systems

Fairlearn is an open source Python library and project designed to help practitioners assess and improve the fairness of artificial intelligence systems. The tool provides capabilities for evaluating model outputs across different affected groups and implementing fairness improvements in AI systems.

Governance platforms
Report • OvalEdge • 2024

Top 5 AI-Powered Open-Source Data Governance Tools in 2026

A comprehensive analysis of leading AI-powered open-source data governance tools, featuring projects like Egeria under the Linux Foundation. The report covers automated metadata synchronization, context-aware search capabilities, and governance zone support for improved data visibility and interoperability.

Governance platforms
Tool • Meta AI Research • 2024

LLM Transparency Tool (LLM-TT)

An open-source interactive toolkit designed for analyzing the internal workings of Transformer-based language models. The tool provides transparency capabilities to help researchers and practitioners understand how large language models operate internally, supporting AI governance through enhanced model interpretability.

Transparency tooling
Report • Cake.ai • 2024

The 6 Best Open Source AI Tools for 2026

A comprehensive guide that evaluates and ranks leading open source AI models including LLaMA 4, Mixtral, and Gemma based on performance, speed, and licensing criteria. The report serves as a resource for selecting open source AI tools for 2026, with attention to transparency considerations.

Transparency tooling
Tool • Sigstore • 2024

Model Transparency: Supply Chain Security for ML

An open source project focused on providing supply chain security for machine learning models. The tool aims to enhance transparency and trust in ML model distribution and deployment through cryptographic signing and verification mechanisms.

Transparency tooling