# Risk scoring
Understand the AI Governance Risk Score, LLM-enhanced analysis, and suggested risks.
## Overview
The AI Governance Risk Score (AGRS) evaluates scan findings across multiple risk dimensions to produce a single numeric score (0–100) and letter grade (A–F). The score helps your team quickly assess the governance risk posture of a scanned repository.
Risk scores appear on the scan details page after a scan completes. You can calculate the score manually or enable LLM-enhanced analysis for deeper insights including narrative summaries, recommendations, and suggested risks.
## Score cards
Once calculated, four cards are displayed across the top of the scan details page:
- Overall score: Numeric score from 0 to 100 with a risk level label — Low risk (80+), Moderate risk (60–79), or High risk (below 60)
- Grade: Letter grade from A (Excellent) to F (Critical) with the calculation timestamp
- Dimensions at risk: Count of dimensions scoring below the 70-point threshold
- Dimension breakdown: Horizontal progress bars showing the score for each risk dimension
## Risk dimensions
The overall score is composed of five weighted dimensions. Each dimension starts at 100 and receives penalties based on the types of findings detected:
- Data sovereignty: Penalized when data is sent to external cloud APIs. High-risk library imports, API calls to external providers, and hardcoded secrets contribute to penalties.
- Transparency: Penalized when AI usage is poorly documented or hard to audit. Undocumented model references, missing licenses, and low-confidence findings increase risk.
- Security: Penalized by model file vulnerabilities, hardcoded credentials, and critical security findings. Severity levels (Critical, High, Medium, Low) determine penalty weights.
- Autonomy: Penalized when autonomous AI agents are detected. Agent frameworks, MCP servers, and tool-using agents increase this dimension's risk.
- Supply chain: Penalized by external dependencies and third-party AI components. Libraries with restrictive licenses, numerous external providers, and RAG components contribute.
Hover over any dimension bar in the breakdown card to see the top contributors to that dimension's penalties.
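The exact penalty weights are internal to the scoring engine, but the shape of the calculation can be illustrated. The following sketch uses hypothetical weights and penalty values, not the product's actual numbers — it only shows how per-dimension penalties and dimension weights combine into a single 0–100 score:

```python
# Illustrative sketch of weighted dimension scoring. The weights and penalty
# values below are hypothetical, not the engine's actual configuration.
WEIGHTS = {  # fractions of the overall score; must sum to 1.0 (i.e., 100%)
    "data_sovereignty": 0.25,
    "transparency": 0.15,
    "security": 0.30,
    "autonomy": 0.15,
    "supply_chain": 0.15,
}

def dimension_score(penalties):
    """Each dimension starts at 100, loses points per finding, floors at 0."""
    return max(0, 100 - sum(penalties))

def overall_score(penalties_by_dimension):
    """Weighted average of the five dimension scores."""
    return round(sum(
        weight * dimension_score(penalties_by_dimension.get(dim, []))
        for dim, weight in WEIGHTS.items()
    ))

# Example: heavy security penalties drag the overall score down the most
# because security carries the largest (hypothetical) weight.
score = overall_score({
    "security": [25, 10],      # e.g. a critical finding and a medium finding
    "data_sovereignty": [15],  # e.g. an API call to an external provider
})
```

A repository with no findings would score 100 in every dimension and therefore 100 overall; each finding subtracts from exactly one dimension, so the breakdown card shows where the loss came from.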
## Grade scale
| Grade | Score range | Label |
|---|---|---|
| A | 90–100 | Excellent — minimal governance risk |
| B | 75–89 | Good — low governance risk |
| C | 60–74 | Moderate — some areas need attention |
| D | 40–59 | Poor — significant governance gaps |
| F | 0–39 | Critical — immediate action required |
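The grade boundaries above, together with the risk-level labels on the overall score card, amount to simple threshold lookups. A minimal sketch (helper names are illustrative, not the product's code):

```python
# Map a numeric score to its letter grade and risk-level label, following the
# thresholds documented in the table and score cards above (illustrative code).
def grade(score):
    for threshold, letter in ((90, "A"), (75, "B"), (60, "C"), (40, "D")):
        if score >= threshold:
            return letter
    return "F"

def risk_level(score):
    if score >= 80:
        return "Low risk"
    return "Moderate risk" if score >= 60 else "High risk"
```

Note that the labels are not perfectly aligned: a score of 78 is a "B" on the grade scale but still shows as "Moderate risk" on the overall score card, because the two use different thresholds.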
## Calculating the score
On the scan details page, click Calculate risk score (or Recalculate score if a score already exists) to generate the AGRS. A progress dialog shows each step of the calculation process. The score is stored with the scan and displayed on future visits.
You can recalculate at any time — for example, after enabling LLM analysis in settings or after adjusting dimension weights. The previous score is replaced with the new calculation.
## LLM-enhanced analysis
When enabled in AI Detection → Settings → Risk scoring, the scoring engine sends anonymized finding summaries to your configured LLM for deeper analysis. The LLM provides:
- Narrative summary: A written analysis of the repository's risk posture, highlighting key areas of concern with important findings in bold
- Recommendations: Actionable steps to improve the governance score
- Dimension adjustments: Fine-tuned score adjustments based on contextual analysis that rule-based scoring alone may miss
- Suggested risks: Structured risk suggestions that can be added to your risk register
The AI analysis section appears below the score cards as a collapsible panel. Click the chevron to expand or collapse it.
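The four outputs listed above arrive as a structured result. The shape can be pictured roughly as follows — the field names here are illustrative, not the product's actual response schema:

```python
# Hypothetical shape of an LLM analysis result (field names are illustrative,
# not the actual API schema). Dimension adjustments are applied on top of the
# rule-based dimension scores.
llm_analysis = {
    "summary": "The repository sends prompts to an **external provider** ...",
    "recommendations": [
        "Document which models are used and under what licenses.",
        "Move the hardcoded API key into a secrets manager.",
    ],
    "dimension_adjustments": {"transparency": -5, "security": -10},
    "suggested_risks": [],  # structured entries (see "Suggested risks" below)
}
```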
## Suggested risks
When LLM analysis is enabled, the system may suggest concrete risks based on the scan findings. These appear in a collapsible "Suggested risks" section below the AI analysis.
Each suggestion includes:
- Risk name: A concise title describing the risk
- Risk dimension: Which AGRS dimension this risk relates to
- Risk level: Likelihood and severity assessment
- Description: Explanation of the risk and its potential impact
- Risk categories: Classification tags (e.g., Cybersecurity risk, Compliance risk)
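Put together, a single suggestion carries the fields above. A hypothetical data-structure sketch (illustrative only, not the product's schema):

```python
from dataclasses import dataclass, field

# Hypothetical structure of one suggested risk (illustrative field names).
@dataclass
class SuggestedRisk:
    name: str          # concise title describing the risk
    dimension: str     # related AGRS dimension, e.g. "Security"
    likelihood: str    # e.g. "Likely"
    severity: str      # e.g. "Major"
    description: str   # explanation of the risk and its potential impact
    categories: list[str] = field(default_factory=list)  # e.g. ["Cybersecurity risk"]
```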
### Adding a suggestion to the risk register
Click Add to risk register on any suggestion to open the risk creation form with pre-filled values. The form includes the suggested risk name, description, category, lifecycle phase, likelihood, severity, impact, and mitigation plan. Review and adjust the values as needed, then save to add the risk to your organization's risk register.
The review notes field is automatically populated with a reference to the scan and the specific findings that prompted the suggestion.
### Dismissing suggestions
Click Ignore on a suggestion to dismiss it. You can choose a reason — "Not relevant" or "Already mitigated" — from the dropdown menu. Dismissed suggestions are hidden from the current view but do not affect the underlying score.
## Customizing dimension weights
Navigate to AI Detection → Settings → Risk scoring to adjust how much each dimension contributes to the overall score. Use the sliders to increase or decrease the weight of each dimension. The total weight across all dimensions must equal 100%. Click Save to apply your changes, then recalculate existing scores to reflect the new weights.
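The sum-to-100% constraint is easy to express. A minimal sketch of the kind of check the settings form enforces (hypothetical helper, not the product's code):

```python
# Illustrative validation that dimension weights total exactly 100% before
# saving (hypothetical helper; the actual UI enforces this via the sliders).
def validate_weights(weights):
    total = sum(weights.values())
    if total != 100:
        raise ValueError(f"Dimension weights must total 100%, got {total}%")
    return weights

validate_weights({
    "data_sovereignty": 25,
    "transparency": 15,
    "security": 30,
    "autonomy": 15,
    "supply_chain": 15,
})
```

Because the weights are relative, raising one dimension's weight means lowering another's — for example, weighting security at 40% necessarily shrinks the influence of the remaining four dimensions.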