AI governance

Intake forms

Collect structured AI project requests through branded public forms with built-in risk scoring.

Overview

Intake forms let anyone in your organization request a new AI use case or register a model without needing a VerifyWise account. You publish a public form, share the link, and submissions land in your governance queue for review. When you approve a submission, VerifyWise creates the use case or model inventory entry and carries over the risk score from the intake.

Without this, the gap between "someone wants to launch an AI project" and "the governance team finds out" usually gets filled by ad hoc emails or a spreadsheet that nobody updates. Intake forms replace that with a repeatable path: request, score, review, approve or reject.

Intake forms are publicly accessible and do not require authentication. Submitters fill out the form in their browser and receive email updates as their submission moves through review.

How it works

  1. Build: Create a form in the drag-and-drop builder. Choose field types, set validation rules, and map fields to entity properties.
  2. Brand: Customize the look with your organization's colors, logo, and typography.
  3. Publish: One click generates a public URL. Share it internally, embed it in a wiki, or send it via email.
  4. Collect: Submissions arrive with automatic risk scoring. If an LLM key is connected, the system also runs an LLM-based risk analysis on top.
  5. Review: Approve or reject each submission. Approvals create governed entities instantly. Rejections send the submitter a pre-filled resubmission link.

When to use intake forms

A few scenarios where intake forms tend to pay off:

New AI project requests

A product team wants to build a recommendation engine. Rather than filing a ticket or sending a Slack message that gets lost in a thread, they fill out the intake form: what the system does, what data it uses, who it affects, what it decides. The governance team gets the request with a risk score already attached.

Vendor AI tool registration

Marketing buys a third-party content generation tool. The intake form captures the vendor name, what data the tool touches, whether it makes decisions on its own, and how many people are affected. Governance reviews it before the tool goes live across the company.

Model inventory registration

Data science teams ship models regularly. An intake form set up for model inventory collects the model name, version, training data description, intended use, and provider. Approved submissions go straight into the model inventory with risk metadata carried over.

Department self-service

HR is piloting AI resume screening. Finance is testing automated fraud detection. Every department fills out the same intake form, so the governance team can compare risk and compliance exposure on equal terms instead of parsing different request formats from each group.

External partner intake

A consulting firm or system integrator is deploying AI on your behalf. Since intake forms require no login, external partners can submit their project details directly without getting access to your internal systems.

Compliance pre-screening

Before a team spends months building an AI system, an intake form can flag whether the project falls into a high-risk category under the EU AI Act or triggers GDPR obligations. The risk scoring gives teams early feedback on what compliance work lies ahead.

Creating a form

Navigate to the Intake forms page from the sidebar and click Create form. You will choose:

  • Form name: Descriptive title shown to submitters at the top of the form
  • Description: Optional context about what the form is for
  • Entity type: Whether approved submissions create a Use case or a Model inventory entry

After creation, the form builder opens with your chosen entity type's default fields pre-loaded.

Default fields

New use case forms start with six fields that are already mapped to entity properties:

  1. Use case name (mapped to project title)
  2. What does this AI system do? (mapped to description)
  3. What business goal does this serve? (mapped to goal)
  4. AI risk classification (mapped to risk classification)
  5. Does this system make autonomous decisions?
  6. What type of personal data does this process?

You can keep all of these, remove the ones you don't need, or add your own. Model inventory forms have their own set of defaults (model name, version, provider, intended use, and risk level).

The form builder

The builder has three areas: a field palette on the left, the form canvas in the center, and a settings panel on the right.

Field types

Drag any field type from the palette onto the canvas:

| Type | Description | Example use |
| --- | --- | --- |
| Text | Single-line text input | Project name, owner name |
| Textarea | Multi-line text for longer responses | System description, business justification |
| Email | Email address with format validation | Technical contact email |
| URL | Web address with URL validation | Repository link, documentation URL |
| Number | Numeric value with min/max bounds | Number of affected users, budget |
| Date | Date picker | Target launch date, review date |
| Select | Single-choice dropdown | Risk classification, deployment status |
| Multi-select | Multiple-choice list with checkboxes | Applicable regulations, data categories |
| Checkbox | True/false toggle | DPIA completed, Terms accepted |

Field configuration

Click any field on the canvas to open its configuration in the right panel. Every field supports:

  • Label: The question text shown to the submitter
  • Placeholder: Hint text inside the input (disappears on focus)
  • Help text: Appears below the field label as secondary guidance
  • Guidance text: Detailed instructions shown below the input field
  • Required: Whether the field must be filled to submit
  • Default value: Pre-filled value when the form loads
  • Entity field mapping: Which entity property this field populates on approval (e.g., project_title, description, risk_classification)

Text and textarea fields also support min length, max length, and regex pattern validation. Number fields support min and max bounds. Select and multi-select fields have an options editor where you add label/value pairs.
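As a rough illustration, the validation rules described above could be applied server-side along these lines. This is a sketch only: the key names, field structure, and error messages are illustrative, not VerifyWise's actual schema.

```python
import re

def validate_field(field: dict, value) -> list[str]:
    """Apply a field's configured validation rules to one submitted value."""
    errors = []
    label = field["label"]
    if value in (None, ""):
        if field.get("required"):
            errors.append(f"{label} is required")
        return errors  # nothing else to check on an empty optional field
    if field["type"] in ("text", "textarea"):
        if "min_length" in field and len(value) < field["min_length"]:
            errors.append(f"{label} is too short")
        if "max_length" in field and len(value) > field["max_length"]:
            errors.append(f"{label} is too long")
        if "pattern" in field and not re.fullmatch(field["pattern"], value):
            errors.append(f"{label} does not match the expected pattern")
    elif field["type"] == "number":
        if "min" in field and value < field["min"]:
            errors.append(f"{label} is below the minimum")
        if "max" in field and value > field["max"]:
            errors.append(f"{label} is above the maximum")
    return errors
```

The same checks run in the browser for immediate feedback, but validating again on the server is what actually protects the review queue from malformed submissions.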

Entity field mapping

Each field can optionally map to a property on the entity that gets created when the submission is approved. For use case forms, available mappings include:

  • project_title — use case name
  • description — system description
  • goal — business justification
  • owner — responsible person
  • start_date — planned start date
  • ai_risk_classification — risk tier
  • type_of_high_risk_role — high-risk role category

For model inventory forms: name, description, modelVersion, provider, owner, modelType, intendedUse, and riskLevel.

Fields without a mapping are still captured in the submission data — they just don't auto-populate entity properties. The reviewing admin can always edit entity data before confirming approval.
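The mapping step can be pictured as a simple merge at approval time: each mapped field copies its answer onto the entity property, and unmapped answers stay behind in the submission record. A minimal sketch, with hypothetical field IDs and key names:

```python
def build_entity_preview(fields: list[dict], submission: dict) -> dict:
    """Assemble entity properties from mapped fields.

    Unmapped fields are simply skipped here; their answers remain
    part of the stored submission data.
    """
    entity = {}
    for field in fields:
        mapping = field.get("entity_mapping")
        if mapping and field["id"] in submission:
            entity[mapping] = submission[field["id"]]
    return entity

# Hypothetical form: two mapped fields, one unmapped
fields = [
    {"id": "f1", "entity_mapping": "project_title"},
    {"id": "f2", "entity_mapping": "description"},
    {"id": "f3"},  # captured in submission data, not copied to the entity
]
submission = {"f1": "Resume screener", "f2": "Ranks applicants by fit", "f3": "extra notes"}
preview = build_entity_preview(fields, submission)
```

This is the preview the reviewing admin sees and can edit before confirming approval.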

Field operations

Each field on the canvas has a toolbar with:

  • Move up / Move down: Reorder fields in the form
  • Duplicate: Copy the field with all its settings (gets a new ID)
  • Delete: Remove the field after confirmation

Design settings

Switch to the Design tab in the builder to change how the public form looks.

| Setting | Description | Default |
| --- | --- | --- |
| Format | Form width: narrow (620px) or wide (820px) | Narrow |
| Alignment | Horizontal position: left, center, or right | Center |
| Color theme | Primary color used for the banner gradient, focused inputs, and the submit button | #13715B |
| Background color | Page background behind the form card | #fafafa |
| Logo URL | Organization logo displayed on the form | None |
| Font family | Typography for all form text | Inter |

The form canvas updates in real time as you change design settings, so you see exactly what submitters will see.

Form settings

The right panel in Edit mode has several settings beyond individual field configuration.

Notification recipients

Select which team members receive an email when a new submission arrives. Recipients are chosen from your organization's user list. If no recipients are configured, submissions still appear in the review queue but no email alerts are sent.

Risk tier system

Choose how submissions are classified by risk:

  • Generic: a four-tier system (Low, Medium, High, Critical)
  • EU AI Act: a four-tier system aligned with the regulation (Minimal, Limited, High, Unacceptable)

The tier system determines the labels and thresholds used when scoring submissions. Both systems use the same scoring dimensions under the hood.
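Mapping an overall score onto a tier then reduces to comparing it against a set of thresholds. The cut-offs below are purely illustrative (the documentation does not publish VerifyWise's actual thresholds); only the labels come from the two systems described above:

```python
GENERIC_TIERS = ["Low", "Medium", "High", "Critical"]
EU_AI_ACT_TIERS = ["Minimal", "Limited", "High", "Unacceptable"]

def score_to_tier(score: float, system: str = "generic") -> str:
    """Map an overall 1-10 risk score to a tier label.

    The 3.5 / 6.0 / 8.5 cut-offs are invented for illustration.
    """
    labels = EU_AI_ACT_TIERS if system == "eu_ai_act" else GENERIC_TIERS
    if score < 3.5:
        return labels[0]
    if score < 6.0:
        return labels[1]
    if score < 8.5:
        return labels[2]
    return labels[3]
```

Because both systems share the same underlying dimensions, switching systems only changes which label set a given score lands in.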

LLM-enhanced analysis

If you have an LLM key configured in your organization settings, you can connect it to the form. The LLM analyzes each submission and adds explanations to the risk scoring, not just numbers. Without an LLM key, risk scoring still runs but uses rule-based analysis only.

Suggested questions

Toggle on Suggested questions to show a panel of pre-built governance questions in the builder. They are organized by category (risks, compliance, operations, vendors, models) and you can add any of them to your form with one click. Each question comes with the field type, validation rules, and guidance text already set up.

Collect contact information

Toggle Collect contact information to control whether submitters are asked for their name and email. When enabled, the public form shows a "Your contact information" section at the top with name (optional) and email (required). When disabled, submissions are anonymous.

When to go anonymous
Disable contact collection when the form is used internally by teams who are already identified through other channels, or when you want to lower the barrier for reporting shadow AI usage.

Publishing a form

Forms start in Draft status. Click Publish to generate a public URL you can copy and share. The form status changes to Active and submissions start showing up in your review queue.

You can still edit an active form. Changes take effect for new visitors right away. To stop accepting submissions, archive the form. Archived forms can be deleted later if you no longer need them.

| Status | Accepts submissions | Editable | Can delete | Can transition to |
| --- | --- | --- | --- | --- |
| Draft | No | Yes | Yes | Active |
| Active | Yes | Yes | No | Archived |
| Archived | No | Yes | Yes | Active |
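These rules amount to a small state machine. A minimal sketch using the statuses from the table (function and variable names are illustrative):

```python
# Allowed status transitions: draft -> active, active -> archived, archived -> active
FORM_TRANSITIONS = {
    "draft": {"active"},
    "active": {"archived"},
    "archived": {"active"},
}

def can_transition(current: str, target: str) -> bool:
    """Return True if a form may move from `current` to `target`."""
    return target in FORM_TRANSITIONS.get(current, set())

def accepts_submissions(status: str) -> bool:
    """Only active forms accept submissions."""
    return status == "active"
```

Note that archiving is reversible (an archived form can be re-activated), while deleting is not, which is why deletion is restricted to draft and archived forms.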

The public form experience

When someone opens the public link, they see the form page with your color theme, the form title in a gradient banner, and an optional description. No login required.

Submitting a form

The submitter fills out the required fields, optionally enters their contact information, solves a math CAPTCHA (something like "7 + 4 = ?"), and clicks Submit. The CAPTCHA blocks automated spam but is simple enough that it won't slow anyone down.

After submission, the submitter sees a success page with their reference number. If contact information was collected, they also receive a confirmation email.

Spam protection

Public forms are protected by two layers:

  • Math CAPTCHA: A simple arithmetic question that changes on each page load. The answer is verified server-side using a time-limited cryptographic token that expires after 5 minutes.
  • IP-based rate limiting: Limits the number of submissions from a single IP address within a time window. Exceeding the limit returns a "Too many submissions" error.

Reviewing submissions

All submissions appear in the Submissions tab on the intake forms page. Each row shows the submitter's name (if provided), the form name, submission status, risk tier, and date.

Click Review on any submission to open the review modal, which shows:

  • All submitted field values
  • The calculated risk assessment with dimension-level scores
  • An entity data preview built from field mappings
  • Approve and Reject action buttons

Risk scoring

Every submission is scored across multiple risk dimensions. Each dimension produces a score from 1 to 10, a weight, and a set of signals (short explanations of what drove the score).

| Dimension | What it measures |
| --- | --- |
| Impact | How severe the harm could be if the system fails or behaves incorrectly |
| Likelihood | How probable a failure or unintended outcome is |
| Scope | How many people or processes the system affects |
| Reversibility | Whether the consequences of a mistake can be undone |
| Contestability | Whether affected individuals can challenge the system's decisions |
| Transparency | How explainable the system's behavior is to stakeholders |

The weighted dimension scores produce an overall score that maps to a tier in your chosen risk system (Generic or EU AI Act). With an LLM key connected, each dimension also gets a written explanation of what drove the score.
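Conceptually, the aggregation is a weighted average of the dimension scores. A minimal sketch; the weights in the example are invented for illustration, since the documentation does not specify them:

```python
def overall_score(dimensions: list[dict]) -> float:
    """Weighted average of dimension scores (each 1-10).

    Each dimension carries a `score` and a `weight`; weights need
    not sum to 1 because the result is normalized.
    """
    total_weight = sum(d["weight"] for d in dimensions)
    weighted = sum(d["score"] * d["weight"] for d in dimensions)
    return weighted / total_weight

# Illustrative weights only
dims = [
    {"name": "impact", "score": 8, "weight": 2},
    {"name": "likelihood", "score": 4, "weight": 1},
]
score = overall_score(dims)  # impact counts twice as much as likelihood
```

The resulting overall score is what gets compared against the tier thresholds of your chosen risk system.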

Overriding the risk tier

Reviewers can override the calculated risk tier during approval. You need to provide a justification, and the override is logged with your identity and timestamp. This matters when the automatic scoring misses context you have, like a system that scores "medium" but affects a protected population.

Approving a submission

When you approve a submission:

  1. Review the entity data preview — these are the field values that will populate the new entity. You can edit them before confirming.
  2. Optionally override the risk tier with a justification.
  3. Click Approve. VerifyWise creates the entity (use case or model inventory entry) with the confirmed data.
  4. If the submitter provided an email, they receive an approval notification.

The new entity starts in its default lifecycle state ("Under review" for use cases, "Pending" for models) and shows up in your governance dashboards right away.

Rejecting a submission

When you reject a submission:

  1. Enter a rejection reason explaining what needs to change.
  2. Click Reject. The submission status changes to "Rejected."
  3. If the submitter provided an email, they receive a rejection notification that includes the reason and a resubmission link.

Resubmission

The rejection email includes a link that opens the form with all previous answers pre-filled. The submitter edits what needs changing and resubmits. These links expire after 7 days and are cryptographically signed so they can't be tampered with.

Resubmissions create a new submission record linked to the original. The original submission is preserved for audit purposes.

Email notifications

Emails go out at four points in the workflow. Submitter-facing emails are only sent when an email address was collected.

| Event | Recipient | What it includes |
| --- | --- | --- |
| Submission received | Submitter | Confirmation with reference number and form name |
| New submission alert | Form recipients | Submitter name/email, form name, submission ID |
| Submission approved | Submitter | Approval confirmation, entity type created |
| Submission rejected | Submitter | Rejection reason and resubmission link (7-day expiry) |

Anonymous submissions
When contact collection is off, there is no way to email the submitter. Approval and rejection notifications get skipped. Admin alerts still go out to configured recipients, showing "Anonymous" as the submitter.

Managing forms

The intake forms list page shows all your forms with their status, entity type, submission count, and creation date.

Form actions

  • Edit: Open the form builder to modify fields, settings, or design
  • Preview: See how the form looks to submitters without publishing
  • Copy link: Copy the public URL to clipboard (active forms only)
  • Archive: Stop accepting submissions while preserving the form and its data
  • Delete: Permanently remove the form (draft and archived forms only)

Submission statuses

| Status | Meaning | Possible transitions |
| --- | --- | --- |
| Pending | Waiting for admin review | Approved or Rejected |
| Approved | Entity created from submission data | None (final) |
| Rejected | Returned to submitter with reason | Submitter can resubmit (creates new Pending submission) |

Best practices

Form design

  • Keep forms under 15 fields. Longer forms have higher abandonment rates.
  • Use guidance text on complex questions. A two-sentence explanation prevents misinterpretation.
  • Map fields to entity properties wherever possible. This saves the reviewer from manual data entry on approval.
  • Use select and multi-select fields for classification questions (risk level, data categories). Structured answers are easier to score and compare.
  • Start with the default fields and remove what you don't need rather than starting blank.

Review workflow

  • Configure at least one notification recipient so submissions don't sit unreviewed.
  • Use the EU AI Act tier system if your organization reports under that regulation. The generic system works well for internal governance programs.
  • Connect an LLM key if you want richer risk explanations. The rule-based scoring still works without it, but the LLM adds reasoning context.
  • When rejecting, be specific in the rejection reason. The submitter sees your text verbatim.
  • Review entity data before confirming approval. The field mapping builds a reasonable starting point, but a quick check prevents errors downstream.

Security

  • Archive forms as soon as you stop accepting submissions. This prevents stale links from collecting data.
  • Enable contact collection when you need to communicate decisions back to submitters. Disable it when anonymity is more important than follow-up.
  • The CAPTCHA and rate limiting run automatically — no configuration needed.