
Managing models

View and manage the AI models used across your evaluation experiments.

The Models page is your central registry for the AI models used across your evaluation experiments. It shows which models have been tested, their providers, and how they've been accessed (via API, locally through Ollama, or through HuggingFace).

Model list

Each model entry shows the model name, provider, and access method. Models are automatically added to this list when they are used in experiments. You don't need to manually register models before running evaluations.

Supported providers

VerifyWise supports a wide range of model providers:

  • OpenAI: GPT-4, GPT-4 Turbo, GPT-3.5 Turbo, and newer models.
  • Anthropic: Claude 3 Opus, Sonnet, and Haiku.
  • Google Gemini: Gemini Pro and Ultra.
  • xAI: Grok models.
  • Mistral: Mistral Large and Medium.
  • HuggingFace: Open-source models; some can be used without an API key.
  • Ollama: Locally-hosted models running on your own hardware.
  • Local / Custom API: Any endpoint with an OpenAI-compatible API.
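"OpenAI-compatible" means the endpoint accepts the same request shape as OpenAI's chat completions API: a POST to `/v1/chat/completions` with a JSON body containing `model` and `messages`. A minimal sketch of such a request (the base URL, model name, and helper function are illustrative, not part of VerifyWise):

```python
import json


def build_chat_request(base_url, model, prompt, api_key=None):
    """Build an OpenAI-compatible chat completion request.

    Any endpoint that accepts this request shape can be registered
    as a Local / Custom API provider.
    """
    url = base_url.rstrip("/") + "/v1/chat/completions"
    headers = {"Content-Type": "application/json"}
    if api_key:  # many local servers skip authentication entirely
        headers["Authorization"] = f"Bearer {api_key}"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, headers, body


# Example: a hypothetical local server on port 8000
url, headers, body = build_chat_request(
    "http://localhost:8000", "my-local-model", "Hello"
)
print(url)
```

If your endpoint responds to this request with the standard `choices[0].message.content` structure, it should work as a custom provider.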

API key configuration

API keys for cloud providers are configured in the Settings tab of your evals project. Keys are stored securely and shared across all experiments in the project. You can add, update, or remove keys at any time.

For local models (Ollama), no API key is needed. Just make sure your Ollama instance is running and accessible from the server.
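One quick way to confirm that Ollama is reachable is to query its `/api/tags` endpoint, which lists the models pulled locally. A small sketch (the helper name and default host are assumptions; `11434` is Ollama's default port):

```python
import json
import urllib.error
import urllib.request


def list_ollama_models(host="http://localhost:11434"):
    """Return the names of models available on a local Ollama instance.

    Queries Ollama's /api/tags endpoint; returns an empty list if the
    server is unreachable, which usually means Ollama is not running.
    """
    try:
        with urllib.request.urlopen(host.rstrip("/") + "/api/tags", timeout=5) as resp:
            data = json.load(resp)
        return [m["name"] for m in data.get("models", [])]
    except (urllib.error.URLError, OSError):
        return []


models = list_ollama_models()
print(models if models else "Ollama is not reachable")
```

An empty result means either no models have been pulled yet (`ollama pull <model>`) or the Ollama server is not running on that host.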