Overview
Jan is an open-source ChatGPT alternative designed to run completely offline on your VPS, giving you full privacy and control over AI interactions. Unlike cloud-based AI services, Jan processes everything locally, so sensitive data never leaves your infrastructure. It ships with a modern interface, supports LLaMA 2, Mistral, Mixtral, Phi-2, and other popular open-source models, and exposes a fully OpenAI-compatible API.
Key Features
Complete Offline Operation
Run AI models entirely on your VPS without internet dependency. Zero data leakage and complete privacy for sensitive conversations.
Multi-Model Support
Compatible with LLaMA 2, Mistral, Mixtral, Phi-2, Code LLaMA, and other popular open-source models.
OpenAI API Compatibility
Drop-in replacement for OpenAI API with support for completions and chat endpoints.
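Because the server speaks the OpenAI wire format, existing clients can usually be pointed at Jan by changing only the base URL. A minimal sketch with the official `openai` Python client, assuming the API server from the installation guide is listening on port 1337 with the usual `/v1` path, and that a model named `mistral-ins-7b-q4` has been downloaded (both the path and the model ID should be checked against your install):

```python
from openai import OpenAI

# Point the standard OpenAI client at the local Jan API server.
# The API key is not used locally, but the client requires a value.
client = OpenAI(base_url="http://localhost:1337/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="mistral-ins-7b-q4",  # assumed model ID; use whatever model you downloaded
    messages=[{"role": "user", "content": "Summarize our VPS backup policy."}],
    temperature=0.2,
)
print(response.choices[0].message.content)
```

Because no request leaves the VPS, the same snippet works with internet access disabled once the model files are in place.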
Hardware Optimization
GPU acceleration with efficient CPU fallback. Supports quantized GGUF models for resource-constrained VPS.
Privacy-First Architecture
Local conversation storage with no telemetry. Full data export and encryption support.
Cross-Platform UI
Modern Electron-based interface with web UI option for remote VPS access.
Common Use Cases
- Enterprise AI without cloud dependency for internal teams
- Healthcare applications maintaining HIPAA compliance through local processing
- Legal document analysis preserving attorney-client privilege
- Development coding assistants for proprietary codebases
- Education AI tutors with full control over data
- Financial services maintaining regulatory compliance
Installation Guide
1. Download the Jan Linux binary from GitHub releases.
2. Install the required dependencies, plus the CUDA toolkit if you want GPU acceleration.
3. Enable the built-in API server on port 1337 for remote access.
4. Put an Nginx reverse proxy with SSL in front of the API server.
5. Create a systemd service so Jan starts automatically on boot.
6. Download models through the Jan interface, or pre-cache GGUF files directly (see the sketch after this list).
7. Prefer quantized models (Q4_K_M or Q5_K_M) for the best balance of quality and resource usage.
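If you prefer to pre-cache model weights rather than downloading them through the UI, a short script can pull a quantized GGUF file onto the VPS ahead of time. A minimal sketch using the `huggingface_hub` library; the repository, filename, and destination directory below are illustrative assumptions and should be adjusted to the model you actually want and to your Jan data folder layout:

```python
from pathlib import Path
from huggingface_hub import hf_hub_download  # pip install huggingface_hub

# Destination inside the Jan data folder. Adjust to your JAN_DATA_FOLDER and
# to the model directory layout of your Jan version (assumed here).
models_dir = Path.home() / "jan" / "models" / "mistral-7b-instruct-q4"
models_dir.mkdir(parents=True, exist_ok=True)

# Example quantized model (Q4_K_M); swap in the repo and file you need.
hf_hub_download(
    repo_id="TheBloke/Mistral-7B-Instruct-v0.2-GGUF",
    filename="mistral-7b-instruct-v0.2.Q4_K_M.gguf",
    local_dir=models_dir,
)
print(f"Model cached under {models_dir}")
```

Pre-caching is useful when the VPS sits behind a restrictive firewall: you can download once, verify the file, and then run Jan fully offline.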
Configuration Tips
- Configuration is stored in the ~/.jan directory; edit settings.json to adjust API behavior.
- Set environment variables for GPU selection (CUDA_VISIBLE_DEVICES) and the data folder (JAN_DATA_FOLDER).
- Enable CORS on the API server if browser-based integrations will call it.
- Customize system prompts via templates.
- Back up conversations.db regularly (see the sketch after this list).
- Monitor GPU memory and use a process manager for automatic restarts.
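For the recommended backups, a small script run from cron works well. A minimal sketch; the exact location of conversations.db inside the Jan data folder is an assumption here and may differ between Jan versions, so verify the path on your install:

```python
import os
import shutil
from datetime import datetime
from pathlib import Path

# Jan data folder: honor JAN_DATA_FOLDER if set, otherwise fall back to ~/jan
# (the default location may differ between Jan versions).
data_dir = Path(os.environ.get("JAN_DATA_FOLDER", Path.home() / "jan"))
db_path = data_dir / "conversations.db"  # assumed filename/location per the tips above

backup_dir = Path.home() / "backups" / "jan"
backup_dir.mkdir(parents=True, exist_ok=True)

# Timestamped copy so older backups are retained.
stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
shutil.copy2(db_path, backup_dir / f"conversations-{stamp}.db")
print(f"Backed up {db_path} to {backup_dir}")
```

Schedule it while Jan is idle (for example, overnight) to avoid copying the database in the middle of a write.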
Technical Requirements
System Requirements
- Memory: 8GB
- CPU: 4 cores
- Storage: 20GB
Dependencies
- Node.js 18.x+
- CUDA toolkit (for GPU acceleration)
- Model files (3GB-20GB)