

Overview

Full on-premises deployment runs all Navigara components in your infrastructure. Source code, analysis results, and all metadata stay within your network.

Architecture

A full deployment consists of four components:
  1. Linux VM — runs the Navigara application (backend, frontend, collector) via Docker Compose
  2. Managed PostgreSQL — stores all application data including vector embeddings (requires pgvector extension)
  3. LLM API endpoint — Anthropic, OpenAI, or Google Vertex AI API for AI-powered commit analysis
  4. Navigara license key — JWT issued by Navigara that unlocks analysis features (see License Key)

Prerequisites

Linux VM

The VM runs all Navigara application containers via Docker Compose.
Operating system: Ubuntu 24.04 LTS or Debian 13+ (other systemd-based Linux distributions may work but are not officially supported)
Required software:
  • Docker Engine 24+
  • Docker Compose v2+

Managed PostgreSQL

A managed PostgreSQL instance with the pgvector extension enabled. All major cloud providers support this:
  • AWS: Amazon RDS for PostgreSQL with pgvector
  • GCP: Cloud SQL for PostgreSQL with pgvector
  • Azure: Azure Database for PostgreSQL with pgvector
  • Self-managed: PostgreSQL 18+ with pgvector extension installed
PostgreSQL version: 18 (required for native UUIDv7 support via uuidv7())
Required extension: pgvector — vector similarity search for knowledge graph analysis
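Before installing, you can confirm the instance meets these requirements with a quick query, run via psql or your provider's query console (a sketch; output formatting varies by provider):

```sql
-- Expect 18 or later
SHOW server_version;

-- Expect one row if pgvector can be enabled on this instance
SELECT name, default_version
FROM pg_available_extensions
WHERE name = 'vector';
```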

License Key

A full on-premises deployment requires a license key issued by Navigara. Contact your Navigara account representative with the domain you intend to host Navigara on (e.g. navigara.yourcompany.com) to obtain one. Set it as LICENSE_KEY in your .env.
Treat LICENSE_KEY as a secret. Store it alongside your other credentials (vault or secret manager) and do not commit it to source control.

LLM API Endpoint

Navigara requires an LLM API endpoint for AI-powered commit analysis. Supported providers:
| Provider | Model | Notes |
|---|---|---|
| Anthropic | Claude Sonnet / Claude Haiku | Recommended — best quality/cost ratio |
| Google Vertex AI | Gemini 2.5 Flash | Good cost/performance ratio |
| OpenAI | GPT-5.4 | Widely available |
The endpoint must be reachable from the VM. For air-gapped environments, a locally hosted model with an OpenAI-compatible API (e.g. vLLM, Ollama) can be used — contact support for guidance.
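A self-hosted endpoint is configured through the same LLM variables described in the environment file (step 4 of Installation). A sketch of the .env overrides, assuming a vLLM server exposing an OpenAI-compatible API; the host, port, path, and model name below are placeholders, and the exact URL shape depends on your server:

```
# Air-gapped: local OpenAI-compatible server (e.g. vLLM) — hypothetical values
LLM_PROVIDER=openai
LLM_MODEL=<your-local-model-name>
LLM_API_URL=http://llm.internal:8000/v1   # base URL of the OpenAI-compatible API
LLM_API_KEY=<token-if-your-server-requires-one>
```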

Network Requirements

The VM must have outbound access to the following services. Ensure your firewall rules allow these connections:
| Service | Purpose | Example endpoints |
|---|---|---|
| Git provider | Repository cloning and commit fetching | github.com, gitlab.com, or your self-hosted instance |
| Task management | Alignment scoring via issue/task data | api.linear.app, *.atlassian.net, or your self-hosted instance |
| LLM API | AI-powered commit analysis | api.anthropic.com, api.openai.com, or Vertex AI endpoints |
If any of these services are unreachable, the corresponding Navigara features will not function. Git provider access is required for core functionality.
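A quick way to verify these rules from the VM is a plain TCP connectivity check. A minimal sketch (the hostnames listed are examples; substitute the endpoints you actually use):

```python
import socket

def reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example endpoints; adjust to match your Git, task-management, and LLM providers.
for host in ("github.com", "api.anthropic.com"):
    status = "ok" if reachable(host, 443) else "UNREACHABLE"
    print(f"{host}:443 {status}")
```

Note this only proves the TCP path is open; TLS interception by a corporate proxy can still break API calls, so follow up with a real request if a feature misbehaves.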

Hardware Requirements

Suitable for small to mid-size engineering teams.
| Component | CPU | Memory | Disk |
|---|---|---|---|
| Linux VM | 16 vCPU | 16 GB | 500 GB SSD |
| PostgreSQL | 16 vCPU | 32 GB | 500 GB SSD |
Disk requirements are primarily driven by knowledge graph data and vector embeddings. SSD storage is required for acceptable query performance.

Installation

1. Prepare the VM

# Install Docker
curl -fsSL https://get.docker.com | sh
sudo usermod -aG docker $USER

# Install Docker Compose plugin (if not included)
sudo apt-get install docker-compose-plugin

# Verify
docker compose version

2. Configure the deployment

Create the deployment directory structure:
mkdir -p /opt/navigara/deployment/caddy && cd /opt/navigara
Then save the following as docker-compose.yml in /opt/navigara:
services:
  backend:
    image: ${BACKEND_IMAGE:-europe-docker.pkg.dev/navigara-images/public/vision-be:${NAVIGARA_VERSION}}
    container_name: navigara-backend
    init: true
    env_file:
      - .env
    environment:
      DATABASE_MAX_CONNS: ${DATABASE_MAX_CONNS:-10}
      DATABASE_MIN_CONNS: ${DATABASE_MIN_CONNS:-2}
      GRPC_PORT: ${GRPC_PORT:-9090}
      HTTP_PORT: ${HTTP_PORT:-8080}
      LOG_LEVEL: ${LOG_LEVEL:-INFO}
      LLM_PROVIDER: ${LLM_PROVIDER:-anthropic}
      LLM_MODEL: ${LLM_MODEL:-claude-sonnet-4-20250514}
      LLM_MAX_TOKENS: ${LLM_MAX_TOKENS:-8192}
      MIGRATIONS_PATH: ${MIGRATIONS_PATH:-db/migrations}
      COLLECTOR_API_KEY: ${COLLECTOR_API_KEY:-navigara-collector-key}
      LICENSE_KEY: ${LICENSE_KEY}
      FRONTEND_URL: https://${DOMAIN}
      PUBLIC_API_URL: https://${DOMAIN}/api
    ports:
      - "127.0.0.1:8080:8080"
      - "127.0.0.1:9090:9090"
    # Uncomment if using Vertex AI with a service account key file
    # volumes:
    #   - ./vertex-ai-key.json:/etc/navigara/vertex-ai-key.json:ro
    restart: unless-stopped
    networks:
      - navigara-network

  collector:
    image: ${COLLECTOR_IMAGE:-europe-docker.pkg.dev/navigara-images/public/vision-collector:${NAVIGARA_VERSION}}
    container_name: navigara-collector
    init: true
    env_file:
      - .env
    environment:
      SERVER_ADDR: backend:9090
      COLLECTOR_ID: ${COLLECTOR_ID:-collector-1}
      COLLECTOR_API_KEY: ${COLLECTOR_API_KEY:-navigara-collector-key}
      MAX_WORKERS: ${COLLECTOR_MAX_WORKERS:-10}
      LLM_PROVIDER: ${LLM_PROVIDER:-anthropic}
      LLM_MODEL: ${LLM_MODEL:-claude-sonnet-4-20250514}
      WORK_DIR: /tmp/git-analysis
    depends_on:
      backend:
        condition: service_started
    # Uncomment if using Vertex AI with a service account key file
    # volumes:
    #   - ./vertex-ai-key.json:/etc/navigara/vertex-ai-key.json:ro
    restart: unless-stopped
    networks:
      - navigara-network

  frontend:
    image: ${FRONTEND_IMAGE:-europe-docker.pkg.dev/navigara-images/public/vision-fe:${NAVIGARA_VERSION}}
    container_name: navigara-frontend
    env_file:
      - .env
    environment:
      BACKEND_API_URL: ${BACKEND_API_URL:-http://backend:8080}
      NODE_ENV: ${NODE_ENV:-production}
    ports:
      - "127.0.0.1:3000:3000"
    depends_on:
      - backend
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://127.0.0.1:3000"]
      interval: 15s
      timeout: 5s
      retries: 3
      start_period: 15s
    networks:
      - navigara-network

  caddy:
    image: caddy:2-alpine
    container_name: navigara-caddy
    ports:
      - "80:80"
      - "443:443"
      - "443:443/udp"
    volumes:
      - ./deployment/caddy/Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy_data:/data
      - caddy_config:/config
    depends_on:
      - frontend
    restart: unless-stopped
    networks:
      - navigara-network

volumes:
  caddy_data:
    name: navigara-caddy-data
  caddy_config:
    name: navigara-caddy-config

networks:
  navigara-network:
    name: navigara-network
    driver: bridge

3. Generate Root Private Key

Generate an Ed25519 key pair for admin root access. You will use the private key to generate login links in Product Setup.
# Generate an Ed25519 key, then extract the raw 32-byte public key and seed
PRIVKEY_PEM=$(openssl genpkey -algorithm Ed25519)
PUBKEY=$(echo "$PRIVKEY_PEM" | openssl pkey -pubout -outform DER | tail -c 32 | base64)
SEED=$(echo "$PRIVKEY_PEM" | openssl pkey -outform DER | tail -c 32 | base64)
# admin_key.pem wraps the raw seed in PEM markers; it is read back by the
# impersonation script in Product Setup, not by openssl.
printf -- "-----BEGIN PRIVATE KEY-----\n%s\n-----END PRIVATE KEY-----\n" "$SEED" > admin_key.pem
echo "Public key (use below): $PUBKEY"
echo "Private key saved to: admin_key.pem"
The admin_key.pem file is a credential for root access to your Navigara instance. Store it in a secure location (e.g. a vault or password manager) and restrict access to authorized administrators only.

4. Prepare Environment File

Create a .env file with your configuration (paste the public key from step 3 into ADMIN_PUBLIC_KEY):
# Version
NAVIGARA_VERSION=0.10.16

# Container images (override to use your own registry)
# BACKEND_IMAGE=your-registry.example.com/navigara/vision-be:${NAVIGARA_VERSION}
# COLLECTOR_IMAGE=your-registry.example.com/navigara/vision-collector:${NAVIGARA_VERSION}
# FRONTEND_IMAGE=your-registry.example.com/navigara/vision-fe:${NAVIGARA_VERSION}

# Database
DATABASE_URL=postgresql://navigara:<password>@<postgres-host>:5432/navigara?sslmode=require

# LLM Configuration — see "LLM API Endpoint" section above for all providers
LLM_PROVIDER=anthropic             # anthropic (recommended) | openai | genai
LLM_MODEL=claude-sonnet-4-20250514 # Model name for your provider
LLM_API_KEY=<your-api-key>        # API key for the LLM provider
LLM_API_URL=                       # Custom endpoint URL (optional, for self-hosted models)

# OpenAI (uncomment if using openai provider)
# LLM_PROVIDER=openai
# LLM_MODEL=gpt-5.4
# LLM_API_KEY=<your-openai-api-key>

# Google Vertex AI (uncomment if using genai provider)
# LLM_PROVIDER=genai
# LLM_MODEL=gemini-2.5-flash
# GOOGLE_PROJECT=<your-gcp-project>
# GOOGLE_LOCATION=global
# GOOGLE_APPLICATION_CREDENTIALS=/etc/navigara/vertex-ai-key.json  # Required if host is not authenticated via gcloud

# Application
DOMAIN=navigara.yourcompany.com  # Used by Caddy and to derive FRONTEND_URL / PUBLIC_API_URL

# Navigara License (JWT issued by Navigara, tied to your DOMAIN — see "License Key" section above)
LICENSE_KEY=<your-license-jwt>

# Admin Root Access
ADMIN_PUBLIC_KEY=<paste-public-key-from-step-3>

# Authentication (enable password auth if not using OAuth/SSO)
# ENABLE_PASSWORD_AUTH=true

# Google OAuth (uncomment to enable Google sign-in)
# GOOGLE_CLIENT_ID=<your-google-oauth-client-id>
# GOOGLE_CLIENT_SECRET=<your-google-oauth-client-secret>

# Microsoft OAuth (uncomment to enable Microsoft / Entra ID sign-in)
# MICROSOFT_CLIENT_ID=<your-microsoft-oauth-client-id>
# MICROSOFT_CLIENT_SECRET=<your-microsoft-oauth-client-secret>

# Collector
COLLECTOR_API_KEY=<generate-a-random-key>
At least one authentication method must be configured for users to log in to Navigara. Enable one or more of: password auth (ENABLE_PASSWORD_AUTH=true), Google OAuth (GOOGLE_CLIENT_ID + GOOGLE_CLIENT_SECRET), or Microsoft OAuth (MICROSOFT_CLIENT_ID + MICROSOFT_CLIENT_SECRET). Without at least one of these, only the root admin can access the instance via the impersonation link in Product Setup.
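COLLECTOR_API_KEY just needs to be a long random secret; any cryptographically secure generator works. For example:

```shell
# Generate a random value for COLLECTOR_API_KEY (64 hex characters)
openssl rand -hex 32
```

Paste the output into COLLECTOR_API_KEY in .env; the backend and collector must use the same value.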

5. Configure PostgreSQL

Connect to your PostgreSQL instance as a superuser and run the following:
-- Create database and application user
CREATE DATABASE navigara;
CREATE USER navigara WITH PASSWORD '<strong-password>';

-- Connect to the navigara database
\c navigara

-- Enable required extensions
CREATE EXTENSION IF NOT EXISTS vector;

-- Grant permissions to the application user
GRANT CONNECT, CREATE, TEMPORARY ON DATABASE navigara TO navigara;
GRANT CREATE, USAGE ON SCHEMA public TO navigara;
GRANT SELECT, INSERT, UPDATE, DELETE, REFERENCES, TRIGGER, TRUNCATE ON ALL TABLES IN SCHEMA public TO navigara;
GRANT SELECT, UPDATE, USAGE ON ALL SEQUENCES IN SCHEMA public TO navigara;
ALTER DEFAULT PRIVILEGES FOR ROLE navigara IN SCHEMA public GRANT SELECT, INSERT, UPDATE, DELETE, REFERENCES, TRIGGER, TRUNCATE ON TABLES TO navigara;
ALTER DEFAULT PRIVILEGES FOR ROLE navigara IN SCHEMA public GRANT SELECT, UPDATE, USAGE ON SEQUENCES TO navigara;
Then set DATABASE_URL in your .env to use the navigara user:
DATABASE_URL=postgresql://navigara:<password>@<postgres-host>:5432/navigara?sslmode=require
Navigara runs database migrations automatically on startup — no manual schema setup is needed.
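A malformed DATABASE_URL is a common source of startup failures. The expected shape can be sanity-checked with the Python standard library; a sketch using a hypothetical host and password:

```python
from urllib.parse import urlsplit, parse_qs

# Placeholder values for illustration; substitute your real host and password.
url = "postgresql://navigara:s3cret@pg.internal.example:5432/navigara?sslmode=require"

parts = urlsplit(url)
assert parts.scheme == "postgresql"            # driver scheme
assert parts.username == "navigara"            # application user from step 5
assert parts.hostname == "pg.internal.example" # your PostgreSQL host
assert parts.port == 5432
assert parts.path == "/navigara"               # database name
assert parse_qs(parts.query)["sslmode"] == ["require"]
print("DATABASE_URL is well-formed")
```

Remember to URL-encode the password if it contains characters such as @, :, or /.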

6. Start Navigara

# Pull the latest images
docker compose pull

# Start all services. Create deployment/caddy/Caddyfile (step 7) first,
# otherwise the caddy container will not have a valid configuration.
docker compose up -d
Verify all services are running:
docker compose ps

7. Configure Reverse Proxy (Caddy)

Navigara ships with Caddy as the reverse proxy. Caddy handles TLS termination, automatic certificate provisioning via Let’s Encrypt, and routing between the frontend, backend API, and gRPC services. Create the Caddyfile at deployment/caddy/Caddyfile, replacing navigara.yourcompany.com with your own domain:
navigara.yourcompany.com {
    header {
        Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
        X-Frame-Options "SAMEORIGIN"
        X-Content-Type-Options "nosniff"
        X-XSS-Protection "1; mode=block"
        Referrer-Policy "strict-origin-when-cross-origin"
    }

    handle_path /api/* {
        encode gzip

        header {
            Access-Control-Allow-Origin "https://navigara.yourcompany.com"
            Access-Control-Allow-Methods "GET, POST, PUT, DELETE, OPTIONS"
            Access-Control-Allow-Headers "Authorization, Content-Type, X-Api-Key"
            Access-Control-Max-Age "86400"
        }

        @options method OPTIONS
        respond @options 204

        reverse_proxy backend:8080
    }

    handle {
        encode gzip
        reverse_proxy frontend:3000
    }

    log {
        output stdout
        format json
    }
}
Caddy automatically provisions and renews TLS certificates via Let’s Encrypt. No manual certificate management is required.

8. Product Setup

After all services are running, log in as root to create the first organization and invite users, using the admin_key.pem you generated in step 3.

Generate a Root Login Link

Use the following Python script to generate a one-time login link. The link expires after 60 seconds. Requirements: pip install PyJWT cryptography
#!/usr/bin/env python3
"""Generate a root login link for Navigara (on-prem)."""
import argparse
import base64
import time
import urllib.parse

import jwt
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives import serialization

parser = argparse.ArgumentParser()
parser.add_argument("--key", required=True, help="Path to Ed25519 private key PEM file")
parser.add_argument("--domain", required=True, help="Your Navigara domain (e.g. navigara.yourcompany.com)")
args = parser.parse_args()

with open(args.key) as f:
    lines = [l.strip() for l in f if not l.startswith("-----")]
seed = base64.b64decode("".join(lines))
key = Ed25519PrivateKey.from_private_bytes(seed)
pem = key.private_bytes(serialization.Encoding.PEM, serialization.PrivateFormat.PKCS8, serialization.NoEncryption())

host = args.domain.split(":")[0]
now = int(time.time())
token = jwt.encode({"sub": "root", "aud": host, "iat": now, "exp": now + 60, "impersonated": True}, pem, algorithm="EdDSA")

print(f"https://{args.domain}/impersonate?token={urllib.parse.quote(token)}")
Save this as impersonate.py and run it:
python3 impersonate.py --key admin_key.pem --domain navigara.yourcompany.com
Open the printed URL in your browser within 60 seconds. You will be logged in as the root admin and can create your first organization and invite users.
You can re-run this script anytime you need root access — for example, to create additional organizations or manage platform settings.
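If a link appears not to work, it has usually just expired. The token's claims (including exp) can be inspected with the standard library alone, since JWT payloads are unencrypted base64url JSON. A sketch; the sample token below is built by hand purely to demonstrate the helper, real tokens come from impersonate.py:

```python
import base64
import json
import time

def jwt_claims(token: str) -> dict:
    """Decode a JWT's payload segment without verifying the signature."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64url padding
    return json.loads(base64.urlsafe_b64decode(payload))

def b64url(obj) -> str:
    """Serialize a dict as an unpadded base64url JSON segment."""
    return base64.urlsafe_b64encode(json.dumps(obj).encode()).rstrip(b"=").decode()

# Hypothetical token: header.payload.signature (signature is a dummy here)
now = int(time.time())
claims = {"sub": "root", "aud": "navigara.yourcompany.com", "iat": now, "exp": now + 60}
sample = f'{b64url({"alg": "EdDSA", "typ": "JWT"})}.{b64url(claims)}.signature'

decoded = jwt_claims(sample)
print(decoded["sub"], decoded["exp"] - decoded["iat"])  # -> root 60
```

This decodes without verification, so use it only for debugging, never for authentication decisions.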

Backup and Restore

Database backup

Back up your PostgreSQL database regularly using pg_dump:
pg_dump -h <postgres-host> -U navigara -d navigara -Fc > navigara_backup_$(date +%Y%m%d).dump
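To automate this, a nightly cron entry can run the dump and prune old files. A sketch, assuming backups go to /backups and credentials come from ~/.pgpass (both are assumptions; adjust paths and retention to your policy):

```
# m h dom mon dow  command
0 2 * * * pg_dump -h <postgres-host> -U navigara -d navigara -Fc > /backups/navigara_$(date +\%Y\%m\%d).dump && find /backups -name 'navigara_*.dump' -mtime +14 -delete
```

Note the escaped \% characters: cron treats a bare % as a newline.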

Database restore

pg_restore -h <postgres-host> -U navigara -d navigara --clean navigara_backup.dump
The database contains all application state. Ensure backups are stored securely and tested regularly.

Monitoring and Health Checks

The backend exposes a health check endpoint on the HTTP port:
| Endpoint | Description |
|---|---|
| GET /health | Application and database health (returns gRPC health status) |
The frontend container also has a built-in health check via wget on port 3000. Use these with your monitoring system (Prometheus, Datadog, etc.) to track service availability.
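A monitoring probe only needs to treat HTTP 200 from /health as healthy and anything else (including timeouts) as unhealthy. A minimal sketch using the standard library:

```python
from urllib.request import urlopen
from urllib.error import URLError

def healthy(base_url: str, timeout: float = 3.0) -> bool:
    """Return True if GET <base_url>/health answers with HTTP 200."""
    try:
        with urlopen(f"{base_url}/health", timeout=timeout) as resp:
            return resp.status == 200
    except (URLError, OSError):
        return False

# On the VM itself, the backend HTTP port is bound to loopback:
print("backend healthy:", healthy("http://127.0.0.1:8080"))
```

The same check works through the reverse proxy (https://your-domain/api/health) if you want an end-to-end probe, subject to how your Caddy routes are configured.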

Upgrades

To upgrade Navigara, first set NAVIGARA_VERSION in .env to the target release — the images are tagged with this version, so pulling alone will not fetch a newer release. Then:
cd /opt/navigara

# Pull the images for the new version
docker compose pull

# Restart with new images
docker compose up -d
Database migrations run automatically on startup. Always back up your database before upgrading.

Troubleshooting

| Issue | Solution |
|---|---|
| Analysis or collector features disabled | Verify LICENSE_KEY is set and matches your DOMAIN. Contact Navigara if your license has expired or needs to be reissued. |
| Database connection refused | Verify DATABASE_URL and that the PostgreSQL instance allows connections from the VM’s IP |
| pgvector extension missing | Run CREATE EXTENSION IF NOT EXISTS vector; as a superuser on the database |
| LLM analysis failing | Verify LLM_API_KEY and that the VM can reach the LLM API endpoint |
| High disk usage | Check knowledge graph data growth; consider scaling to a larger instance tier |
| Migrations stuck | Run the migration lock cleanup script: docker compose exec backend /app/scripts/clear-migration-locks.sh |