AI Detection

AI Detection settings

Configure GitHub tokens, LLM analysis, and dimension weights.

The Settings page has two tabs: GitHub integration for configuring repository access tokens, and Risk scoring for enabling LLM-enhanced analysis and customizing dimension weights.

GitHub integration

To scan private repositories, you need a GitHub Personal Access Token. Without a token, AI Detection can only scan public repositories.

Creating a token

Click the Create a new token on GitHub link to open GitHub's token creation page with the recommended scopes pre-selected:

  • repo: Full access to private and public repositories. Required for scanning private repos.
  • public_repo: Access to public repositories only. Use this if you only need to scan public repos.

Saving your token

Paste your token into the Personal access token field. Optionally give it a descriptive name (e.g., "VerifyWise Scanner Token") to help identify it later. Click Test token to verify it works, then Save token to store it.
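The Test token button verifies the token against GitHub itself, but a quick format check can catch paste errors even earlier. The sketch below is a hypothetical pre-check (not part of VerifyWise); it relies only on the documented fact that GitHub classic tokens begin with `ghp_` and fine-grained tokens with `github_pat_`:

```python
# Hypothetical client-side sanity check of a GitHub Personal Access Token.
# The real "Test token" button validates the token against the GitHub API;
# this only catches obvious paste errors (truncation, wrong field) first.
KNOWN_PREFIXES = ("ghp_", "github_pat_")  # classic and fine-grained PATs

def looks_like_github_token(token: str) -> bool:
    token = token.strip()
    return token.startswith(KNOWN_PREFIXES) and len(token) >= 20

print(looks_like_github_token("ghp_abcdefghijklmnop1234"))  # well-formed classic token
print(looks_like_github_token("not-a-token"))               # clearly not a PAT
```

A check like this never replaces the live test, since a well-formed token can still be expired or lack the needed scopes.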

Managing your token

Once a token is configured, you'll see a status indicator showing it's active. You can update the token at any time by entering a new one and clicking Update token. To remove the token entirely, click the delete button.

Tokens are stored securely on the server. They are never exposed in the browser after being saved.
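A common way to satisfy "never exposed in the browser" is to return only a masked form of the token for the status indicator. This is an illustrative sketch of that pattern, not VerifyWise's actual implementation; the `mask_token` helper is hypothetical:

```python
def mask_token(token: str, visible: int = 4) -> str:
    """Return a display-safe form such as 'ghp_****1234' (hypothetical helper).

    Keeps the token-type prefix and the last few characters so a user can
    recognize which token is saved, while the full value stays server-side.
    """
    prefix = token.split("_")[0] + "_" if "_" in token else ""
    return f"{prefix}****{token[-visible:]}"

print(mask_token("ghp_abcdefghijklmnop1234"))  # ghp_****1234
```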

Risk scoring

The Risk scoring tab controls how the AI Governance Risk Score (AGRS) is calculated for your scans.

LLM-enhanced analysis

Toggle LLM-enhanced analysis to enable AI-powered scoring. When enabled, the risk scoring engine sends anonymized finding summaries to your configured LLM to produce a narrative analysis, actionable recommendations, and suggested risks.
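The guide states that finding summaries are anonymized before being sent to the LLM, but does not describe how. As a purely illustrative sketch, one plausible approach is a redaction pass that replaces repository URLs and file paths with placeholders; the patterns and placeholders below are assumptions, not the product's actual rules:

```python
import re

# Hypothetical anonymization pass: redact repository URLs and file paths
# from a finding summary before it leaves the server. The real rules used
# by the risk scoring engine are not documented here.
PATTERNS = [
    (re.compile(r"https://github\.com/\S+"), "<repo-url>"),
    (re.compile(r"(?:/[\w.-]+){2,}"), "<path>"),  # crude Unix-path matcher
]

def anonymize(summary: str) -> str:
    for pattern, placeholder in PATTERNS:
        summary = pattern.sub(placeholder, summary)
    return summary

print(anonymize("Hardcoded key in /src/app/config.py of https://github.com/acme/ml-svc"))
# Hardcoded key in <path> of <repo-url>
```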

Select which LLM key to use from the dropdown. LLM keys are managed in Settings → LLM keys at the organization level. If no keys are configured, the dropdown shows a message directing you to set one up.

Without LLM enhancement, risk scores are calculated using rule-based analysis only. Scores remain reliable, but the results will not include narrative summaries, recommendations, or suggested risks.

Dimension weights

Adjust the sliders to control how much each risk dimension contributes to the overall score. The five dimensions are:

  • Data sovereignty: Weight for external data exposure and cloud API usage
  • Transparency: Weight for documentation quality and audit readiness
  • Security: Weight for vulnerabilities and credential exposure
  • Autonomy: Weight for autonomous AI agent detection
  • Supply chain: Weight for third-party dependencies and licensing

The total weight across all dimensions must equal 100%; a validation message appears if the total is above or below that. Click Reset to defaults to restore the original weight distribution. After changing weights, click Save, then recalculate any existing scores so the new weights take effect.
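To see why the weights must total 100%, it helps to think of the overall score as a weighted combination of the five per-dimension scores. The sketch below assumes a simple weighted average and illustrative default weights; the actual AGRS formula and default distribution are not documented here:

```python
# Hypothetical illustration of how dimension weights could combine
# per-dimension risk scores into one overall score. A plain weighted
# average is assumed; the real AGRS formula may differ.
DEFAULT_WEIGHTS = {          # percentages; must total exactly 100
    "data_sovereignty": 25,
    "transparency": 15,
    "security": 30,
    "autonomy": 15,
    "supply_chain": 15,
}

def overall_score(scores: dict[str, float], weights: dict[str, int]) -> float:
    total = sum(weights.values())
    if total != 100:  # mirrors the UI's validation message
        raise ValueError(f"weights must total 100%, got {total}%")
    return sum(scores[d] * w / 100 for d, w in weights.items())

scores = {"data_sovereignty": 80, "transparency": 40, "security": 90,
          "autonomy": 20, "supply_chain": 50}
print(overall_score(scores, DEFAULT_WEIGHTS))  # 63.5
```

Raising one slider without lowering another breaks the 100% invariant, which is exactly what the validation message guards against.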
