AI Settings

Configure Gemini-backed AI and validate connectivity for the v2 UI.

AI configuration status

The backend AI stack (Gemini + ATOMIC kernel) is configured entirely via environment variables on the Dev VM. Use this page to see current status and test candidate keys and models without persisting secrets.

API key
Not detected

Reads GOOGLE_GEMINI_API_KEY inside the backend container. Managed via the VM's .env, not this UI.
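As a rough sketch, the Dev VM's .env might contain entries like the following (the values shown are placeholders; only the variable names come from this page):

```shell
# .env on the Dev VM -- read by the backend container, never by this UI
GOOGLE_GEMINI_API_KEY=your-api-key-here
# Optional: override the default model (falls back to gemini-2.5-flash if unset)
GOOGLE_GEMINI_MODEL=gemini-2.5-flash
```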

Active model
gemini-2.5-flash (default)

Resolved from GOOGLE_GEMINI_MODEL when set; otherwise defaults to gemini-2.5-flash.
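The resolution rule above can be sketched in Python as follows (a minimal illustration, not the backend's actual code; the function name is hypothetical):

```python
import os

DEFAULT_MODEL = "gemini-2.5-flash"

def resolve_model(env=None):
    """Return GOOGLE_GEMINI_MODEL when set and non-empty, else the default."""
    env = os.environ if env is None else env
    value = env.get("GOOGLE_GEMINI_MODEL", "").strip()
    return value or DEFAULT_MODEL

print(resolve_model({}))  # gemini-2.5-flash
print(resolve_model({"GOOGLE_GEMINI_MODEL": "gemini-2.5-pro"}))  # gemini-2.5-pro
```

Treating a blank or whitespace-only value the same as an unset variable avoids sending an empty model name to the API.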

Health
Checking…

When configured, the same Gemini stack powers both the floating AI chat widget and AI Reports page.

Gemini API key

Use this section to quickly test a Gemini API key and model combination. Keys are never stored; they are used only for the one-off validation call.

How to obtain a Gemini API key
  1. Visit Google AI Studio
  2. Click "Get API key" and choose a project
  3. Copy the generated key
  4. Configure it as GOOGLE_GEMINI_API_KEY on Dev VM 220

This field is used only to call the /api/v1/ai/test endpoint. It is never written to disk.
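A one-off validation call could be assembled along these lines. Note the payload field names (api_key, model) and the base URL are assumptions for illustration; only the endpoint path /api/v1/ai/test comes from this page:

```python
import json
import urllib.request

def build_test_request(api_key, model=""):
    """Build a POST request for the /api/v1/ai/test validation endpoint.

    The payload schema ("api_key", "model") is assumed here; check the
    backend's actual request shape before relying on it.
    """
    payload = {"api_key": api_key}
    if model:
        payload["model"] = model  # blank -> backend falls back to its default
    return urllib.request.Request(
        "http://localhost:8000/api/v1/ai/test",  # base URL is an assumption
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_test_request("example-key", "gemini-2.5-flash")
# urllib.request.urlopen(req)  # uncomment to send the one-off validation call
```

Because the key travels only in this single request body and is never written anywhere, it matches the page's "never stored" guarantee.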

If left blank, the backend will fall back to its default model (typically gemini-2.5-flash).

Model comparison

For most portfolio analysis workflows, Gemini 2.5 Flash is the recommended choice: fast, cost-efficient, and capable of handling long context. Pro variants are ideal for the heaviest reasoning workloads or narrative-heavy reports.

  • 2.5 Flash: primary model for chat + AI Reports. Great balance of speed and quality.
  • 2.5 Pro: most capable model, best for deep analysis and complex “what-if” questions.
  • 3 Pro Preview: early access to next-gen capabilities. Expect occasional rough edges.
  • Flash/Pro Latest: tracks Google's latest releases without code changes.

Currently testing with gemini-2.5-flash (Gemini 2.5 Flash, the recommended model).