Real-time Google Meet, Microsoft Teams, and Zoom transcription API. Get up and running in minutes. Note: Zoom joins require additional configuration (Zoom Meeting SDK + OAuth/OBF). See Zoom Integration Setup Guide.
💡 For end users: If you want quick deployment on your platform of choice with user interfaces, see Vexa Lite Deployment Guide and vexa-lite-deploy repository.
💻 For developers: This guide covers the full Docker Compose setup for development.

Quick Start

TL;DR - Try these in order:

1. If you have an established development machine

Docker Compose setup for development - All services wrapped in docker-compose.yml with Makefile convenience commands. Try running directly - this might work instantly:
git clone https://github.com/Vexa-ai/vexa.git && cd vexa
make all  # Default profile: remote transcription (GPU-free)
What make all does:
  • Builds all Docker images (takes some time on the first run)
  • Spins up all containers (API, bots, transcription services, database)
  • Runs database migrations (if necessary)
  • Starts a simple test to verify everything works
If you change code later, just run make all again - it rebuilds what’s needed and skips the rest.
💡 Note: This is the development setup. For production deployments, see Vexa Lite Deployment Guide.

2. If you’re on a fresh GPU VM in the cloud

Automated setup - sets up everything for you on a fresh VM. Tested on a Vultr vcg-a16-6c-64g-16vram instance:
git clone https://github.com/Vexa-ai/vexa.git && cd vexa
sudo ./fresh_setup.sh --gpu    # or --cpu for CPU-only hosts
make all

3. Manual setup (if the above don’t work)

For fresh GPU virtual machines or custom setups. On Ubuntu/Debian:
# Prerequisites
sudo apt update && sudo apt install -y \
  python3 python3-pip python-is-python3 python3-venv \
  make git curl jq ca-certificates gnupg

# Docker Engine + Compose v2
sudo apt remove -y docker.io docker-doc docker-compose docker-compose-v2 podman-docker containerd runc || true
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
  sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
  https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo $VERSION_CODENAME) stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update && sudo apt install -y \
  docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
sudo systemctl enable --now docker

# GPU only (requires NVIDIA drivers: nvidia-smi must work)
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | \
  sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
  sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
  sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt update && sudo apt install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
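Before deploying, it can help to confirm that Docker can actually reach the GPU. This is an optional sanity check, not part of the official setup; it is safe on CPU-only hosts, where it just reports and moves on:

```shell
# Optional: verify the NVIDIA runtime is wired into Docker.
# On CPU-only hosts this prints a notice and exits cleanly.
if command -v nvidia-smi >/dev/null 2>&1; then
  # If this fails, recheck the nvidia-ctk configuration above.
  docker run --rm --gpus all ubuntu nvidia-smi
else
  echo "No NVIDIA driver found; continuing with CPU-only setup"
fi
```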

# Deploy
git clone https://github.com/Vexa-ai/vexa.git && cd vexa
make all              # on CPU-only hosts the tiny model is used
macOS (CPU only): Install Docker Desktop, then:
git clone https://github.com/Vexa-ai/vexa.git && cd vexa
make all

4. Vexa Lite - For Users (Production Deployment)

For end users who want quick deployment on their platform of choice. Vexa Lite is a single-container deployment that:
  • Runs as a single Docker container (no multi-service orchestration)
  • Requires no GPU (transcription runs externally)
  • Perfect for serverless providers and production deployments
  • Works with a variety of open-source user interfaces
Quick start with hosted transcription:
# 1. Set up database (PostgreSQL)
docker network create vexa-network
docker run -d \
  --name vexa-postgres \
  --network vexa-network \
  -e POSTGRES_USER=postgres \
  -e POSTGRES_PASSWORD=your_password \
  -e POSTGRES_DB=vexa \
  -p 5432:5432 \
  postgres:latest

# 2. Run Vexa Lite container
docker run -d \
  --name vexa \
  --network vexa-network \
  -p 8056:8056 \
  -e DATABASE_URL="postgresql://postgres:your_password@vexa-postgres:5432/vexa" \
  -e ADMIN_API_TOKEN="your-secret-admin-token" \
  -e TRANSCRIBER_URL="https://transcription.vexa.ai/v1/audio/transcriptions" \
  -e TRANSCRIBER_API_KEY="your-api-key" \
  vexaai/vexa-lite:latest
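The DATABASE_URL passed above is a standard PostgreSQL connection string. If you change the Postgres credentials or container name, assembling the URL from its parts helps avoid quoting mistakes (values below mirror the example commands):

```shell
# Build the connection string from the values used in the docker run commands.
DB_USER=postgres
DB_PASS=your_password
DB_HOST=vexa-postgres   # the Postgres container name on vexa-network
DB_NAME=vexa
DATABASE_URL="postgresql://${DB_USER}:${DB_PASS}@${DB_HOST}:5432/${DB_NAME}"
echo "$DATABASE_URL"
```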
API available at: http://localhost:8056

Testing

Once deployed, verify everything with a live meeting test:
make test MEETING_ID=abc-defg-hij  # Use your Google Meet ID (xxx-xxxx-xxx format)
What to expect:
  1. Bot joins your Google Meet
  2. Admit the bot when prompted
  3. Start speaking to see real-time transcripts
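A malformed meeting ID is a common cause of test failures. A quick pre-check of the xxx-xxxx-xxx format before running `make test` (the pattern here is an assumption based on the format shown above):

```shell
# Check a Google Meet ID against the xxx-xxxx-xxx pattern before testing.
MEETING_ID="abc-defg-hij"
if echo "$MEETING_ID" | grep -Eq '^[a-z]{3}-[a-z]{4}-[a-z]{3}$'; then
  echo "Meet ID looks valid"
else
  echo "Not a Google Meet ID (expected xxx-xxxx-xxx)"
fi
```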

Tested Environments

  • CPU: Mac Pro (Docker Desktop)
  • GPU: Fresh Vultr A16 VM (vcg-a16-6c-64g-16vram)

Management Commands

make ps        # Show container status
make logs      # View logs
make down      # Stop services
make test-api  # Quick API connectivity test

Managing Self-Hosted Vexa

For detailed guidance on managing users and API tokens in your self-hosted deployment, see the Self-Hosted Management Guide. It covers:
  • Creating and managing users
  • Generating and revoking API tokens
  • Updating user settings (bot limits, etc.)
  • Complete workflow examples with curl and Python
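As a rough sketch of what such a workflow looks like, here is a hedged curl example for creating a user via the admin API. The endpoint path, port, and header name below are assumptions, not documented API, so confirm all three against the Self-Hosted Management Guide before relying on them:

```shell
# Sketch only: /admin/users and X-Admin-API-Key are illustrative
# placeholders - verify them in the Self-Hosted Management Guide.
ADMIN_API_TOKEN="your-secret-admin-token"   # matches the value set at deploy time
curl -s -X POST "http://localhost:8056/admin/users" \
  -H "X-Admin-API-Key: ${ADMIN_API_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"email": "dev@example.com", "name": "Dev User"}'
```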
For complete API documentation and usage guides, see the API Documentation.

Troubleshooting

GPU Issues:
  • “unknown device” error: Ensure NVIDIA drivers work (nvidia-smi) and Container Toolkit is configured
  • Bot creation fails: Check docker-compose.yml has correct device_ids (usually "0")
Test Issues:
  • JSON parsing errors: Use valid Google Meet ID format (xxx-xxxx-xxx) and admit bot to meeting
  • Bot doesn’t join: Check firewall settings and meeting permissions
Environment Issues:
  • WSL2 GPU issues: Docker Compose GPU startup may fail with nvidia-container-cli: device error on WSL2 even when nvidia-smi works. This is a known compatibility issue with certain GPU/driver combinations on WSL2. (#76)
  • NGINX reverse proxy: If you run Vexa behind an NGINX reverse proxy, base URL configuration is not yet documented. See #75 for community discussion.

Need help? Join our Discord Community | Video tutorial: 3-minute setup guide