Immich is the best self-hosted Google Photos alternative available right now. If you’ve read our comparison of self-hosted photo tools and decided Immich is the one (smart choice), this guide walks you through the entire setup — from zero to a fully working photo server with mobile app sync.

I’ve set this up on dozens of machines at this point. Here’s exactly what works.

What You’ll Need

Hardware Requirements

  • CPU: Any modern x86_64 processor. Intel 10th gen or newer is ideal because of Quick Sync (hardware video transcoding). ARM64 works too — Immich runs fine on Raspberry Pi 4/5.
  • RAM: 4GB absolute minimum. 8GB recommended. The machine learning container alone wants 2-3GB when processing.
  • Storage: SSD for the database and application (even 50GB is plenty). Separate HDD/NAS storage for your actual photo library.
  • GPU (optional): NVIDIA GPU dramatically speeds up face recognition and object detection. Not required, but nice to have.

Software Requirements

  • Linux server (Ubuntu 22.04/24.04, Debian 12, or any distro with Docker support)
  • Docker and Docker Compose v2
  • A domain name (optional, but recommended for remote access)

Quick Docker Install

If you don’t have Docker yet:

# Install Docker using the official convenience script
curl -fsSL https://get.docker.com | sudo sh

# Add your user to the docker group (log out and back in after)
sudo usermod -aG docker $USER

# Verify
docker --version
docker compose version

Step 1: Create the Directory Structure

Pick a location for Immich’s configuration files and your photo library. I use /opt/immich for the app and a separate mount for photos:

# Create Immich config directory
sudo mkdir -p /opt/immich

# Create photo storage directory (adjust to your storage setup)
sudo mkdir -p /mnt/photos/immich

# Set ownership
sudo chown -R $USER:$USER /opt/immich /mnt/photos/immich

cd /opt/immich

If your photo storage is on a different drive or NAS mount, adjust /mnt/photos/immich accordingly. The key thing is that photos belong on your largest/cheapest storage, while the Immich config and database stay on an SSD for performance.
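
If the photo drive isn’t mounted automatically yet, an /etc/fstab entry keeps it attached across reboots. A minimal sketch, assuming an ext4 drive you’ve labeled photos (adjust the label and mount point to your setup):

# /etc/fstab entry: mount the drive labeled "photos" at /mnt/photos
# "nofail" lets the machine boot even if the drive is absent
LABEL=photos  /mnt/photos  ext4  defaults,nofail  0  2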

Step 2: Create the Environment File

Create /opt/immich/.env:

cat > /opt/immich/.env << 'EOF'
# Immich version: 'release' tracks the latest stable tag; pin a specific version for repeatable upgrades
IMMICH_VERSION=release

# Database credentials
DB_PASSWORD=CHANGE_ME_to_a_strong_password_123
DB_USERNAME=postgres
DB_DATABASE_NAME=immich

# Upload location — where your photos are stored
UPLOAD_LOCATION=/mnt/photos/immich

# Optional: set timezone
TZ=Europe/Amsterdam
EOF

Important: Change that DB_PASSWORD to something actually secure. Use openssl rand -base64 32 to generate one.
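
A quick sketch for doing that in one go (the sed pattern assumes the DB_PASSWORD line from the template above; the tr strips characters that tend to cause quoting headaches):

# Generate a random password and substitute it into .env
DB_PASS=$(openssl rand -base64 32 | tr -d '/+=')
sed -i "s|^DB_PASSWORD=.*|DB_PASSWORD=${DB_PASS}|" /opt/immich/.env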

Step 3: Create the Docker Compose File

Create /opt/immich/docker-compose.yml:

services:
  immich-server:
    container_name: immich_server
    image: ghcr.io/immich-app/immich-server:${IMMICH_VERSION}
    volumes:
      - ${UPLOAD_LOCATION}:/usr/src/app/upload
      - /etc/localtime:/etc/localtime:ro
    env_file:
      - .env
    ports:
      - "2283:2283"
    depends_on:
      - redis
      - database
    restart: always
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:2283/api/server/ping || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3

  immich-machine-learning:
    container_name: immich_machine_learning
    image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION}
    volumes:
      - model-cache:/cache
    env_file:
      - .env
    restart: always
    healthcheck:
      test: ["CMD-SHELL", "python3 -c 'import urllib.request; urllib.request.urlopen(\"http://localhost:3003/ping\")'"]
      interval: 30s
      timeout: 10s
      retries: 3

  redis:
    container_name: immich_redis
    image: docker.io/redis:6.2-alpine
    healthcheck:
      test: redis-cli ping || exit 1
      interval: 10s
      timeout: 5s
      retries: 5
    restart: always

  database:
    container_name: immich_postgres
    image: docker.io/tensorchord/pgvecto-rs:pg14-v0.2.0
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_USER: ${DB_USERNAME}
      POSTGRES_DB: ${DB_DATABASE_NAME}
      POSTGRES_INITDB_ARGS: "--data-checksums"
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: pg_isready --dbname='${DB_DATABASE_NAME}' --username='${DB_USERNAME}' || exit 1
      interval: 10s
      timeout: 5s
      retries: 5
    command:
      [
        "postgres",
        "-c", "shared_preload_libraries=vectors.so",
        "-c", "search_path=\"$$user\", public, vectors",
        "-c", "logging_collector=on",
        "-c", "max_wal_size=2GB",
        "-c", "shared_buffers=512MB",
        "-c", "wal_compression=on",
      ]
    restart: always

volumes:
  pgdata:
  model-cache:

This is a clean, production-ready setup. Each service has health checks, proper restart policies, and sensible defaults.
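
Before launching, it’s worth confirming the file parses and the .env interpolation resolves. docker compose config does both:

cd /opt/immich

# Validates the compose file; prints nothing on success
docker compose config --quiet && echo "compose file OK"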

Step 4: Launch Immich

cd /opt/immich
docker compose up -d

Watch the logs to make sure everything starts cleanly:

docker compose logs -f

You’re looking for the Immich server to report that it’s listening on port 2283. The machine learning container takes a minute or two to download its models on first start — that’s normal.

Once you see something like Immich Server is listening on 0.0.0.0:2283, you’re in business.
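
You can also check from the shell by hitting the same endpoint the compose health check polls:

# Should return a pong response once the server is ready
curl -fsS http://localhost:2283/api/server/ping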

Step 5: Initial Web Setup

Open your browser and go to http://YOUR_SERVER_IP:2283.

You’ll see a welcome screen prompting you to create the first admin account:

  1. Enter your name, email, and password — this becomes the admin account
  2. Click through the setup wizard
  3. You’ll land on the main timeline view (empty for now)

Key Settings to Configure

Head to Administration (gear icon → Administration) and configure these:

Storage Template (under Settings → Storage Template): Enable this and set a template like:

{{y}}/{{y}}-{{MM}}-{{dd}}/{{filename}}

This organizes uploaded photos into year/date folders instead of dumping everything flat. A photo taken on 2024-06-15, for example, lands in 2024/2024-06-15/. Far easier to manage and back up.

Machine Learning (under Settings → Machine Learning):

  • Ensure facial recognition is enabled
  • Smart search should be on by default
  • If you have limited RAM, you can switch to smaller ML models

Map (under Settings → Map):

  • Enable the map feature
  • It uses OpenStreetMap tiles by default — no API key needed

Trash (under Settings → Trash):

  • Set trash retention to 30 days (default)
  • This gives you a safety net before photos are permanently deleted

Step 6: Set Up the Mobile App

This is where Immich really shines. The mobile apps are excellent.

iOS

  1. Download Immich from the App Store (it’s free)
  2. Open the app
  3. Enter your server URL: http://YOUR_SERVER_IP:2283
  4. Log in with the account you just created
  5. Go to Settings → Backup in the app
  6. Enable Background Backup
  7. Select which albums to back up (Camera Roll at minimum)
  8. Choose backup settings:
    • Wi-Fi only: Recommended unless you have unlimited data
    • Background backup: Enable this — it uses iOS background app refresh to upload photos even when the app isn’t open
    • Upload original files: Keep this ON to preserve full quality

Android

Same process, but from the Play Store. Android’s background backup is even more reliable than iOS since there are fewer restrictions on background processes.

Pro Tip: External Libraries

If you already have a photo collection on your server (maybe from a previous tool or a Google Takeout export), you don’t need to re-upload through the app. Use Immich’s External Libraries feature:

  1. Mount your existing photo directory into the immich-server container by adding an extra volume (see the snippet below), then run docker compose up -d so the container picks it up:
# Add to the immich-server volumes section:
volumes:
  - ${UPLOAD_LOCATION}:/usr/src/app/upload
  - /path/to/existing/photos:/mnt/existing-photos:ro
  - /etc/localtime:/etc/localtime:ro
  2. In the web UI, go to Administration → External Libraries
  3. Add a new library pointing to the mounted path (/mnt/existing-photos in this example)
  4. Scan the library — Immich will index everything without copying or moving files

Step 7: Set Up GPU Acceleration (Optional)

If you have an NVIDIA GPU, you can dramatically speed up Immich’s machine learning processing.

Prerequisites

# Install NVIDIA Container Toolkit
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg

curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
  sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
  sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

Update Docker Compose

Replace the immich-machine-learning service with:

  immich-machine-learning:
    container_name: immich_machine_learning
    image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION}-cuda
    volumes:
      - model-cache:/cache
    env_file:
      - .env
    restart: always
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]

Note the image tag change to ${IMMICH_VERSION}-cuda and the GPU resource reservation. After updating, run:

docker compose up -d

Verify GPU access:

docker exec immich_machine_learning nvidia-smi

With a GPU, face recognition goes from processing ~100 photos/minute to 500+ photos/minute. Worth it if you have the hardware.

Step 8: Reverse Proxy Setup

You’ll want a reverse proxy for HTTPS and remote access. Here are two options: a raw Nginx config and a Caddyfile. (If you use Nginx Proxy Manager, the most common choice for homelabbers, apply the same settings through its UI, especially the upload size limit.)

Option A: Nginx Config

server {
    listen 443 ssl http2;
    server_name photos.yourdomain.com;

    ssl_certificate /etc/letsencrypt/live/photos.yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/photos.yourdomain.com/privkey.pem;

    client_max_body_size 50000M;  # Important for large video uploads

    location / {
        proxy_pass http://localhost:2283;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # WebSocket support (needed for real-time updates)
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}

server {
    listen 80;
    server_name photos.yourdomain.com;
    return 301 https://$server_name$request_uri;
}

The critical setting here is client_max_body_size 50000M. Without it, Nginx will reject large video uploads. Set it high — Immich handles its own upload limits.

Option B: Caddy (Simpler)

If you prefer Caddy (automatic HTTPS, minimal config):

photos.yourdomain.com {
    reverse_proxy localhost:2283
    request_body {
        max_size 50GB
    }
}

That’s it. Caddy handles Let’s Encrypt certificates automatically.

Step 9: Backup Strategy

Your photos are precious. Immich stores data in two places that both need backing up:

1. The Photo Library

This is the directory you set as UPLOAD_LOCATION. It contains all your original photos and generated thumbnails. Back this up with whatever you normally use:

# Example: rsync to a backup drive
rsync -avz --progress /mnt/photos/immich/ /mnt/backup-drive/immich-photos/

# Example: restic to a remote backup
restic -r s3:s3.amazonaws.com/my-backup-bucket backup /mnt/photos/immich/

2. The Database

The PostgreSQL database contains all your metadata, face data, album organization, and user accounts. Losing this means losing all your organizational work (the photos themselves would be fine, but you’d need to re-index everything).

# Create a database dump
docker exec immich_postgres pg_dumpall -U postgres | gzip > /mnt/backup-drive/immich-db-$(date +%Y%m%d).sql.gz

Set up a cron job for daily database backups:

# Add to crontab (crontab -e)
0 3 * * * docker exec immich_postgres pg_dumpall -U postgres | gzip > /mnt/backup-drive/immich-db-$(date +\%Y\%m\%d).sql.gz && find /mnt/backup-drive/ -name "immich-db-*.sql.gz" -mtime +30 -delete

This dumps the database at 3 AM daily and cleans up backups older than 30 days.
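
An untested backup is just a hope. At minimum, verify the newest dump is a valid gzip archive (a quick sketch using the paths from the cron job above):

# Test the integrity of the most recent database dump
ls -t /mnt/backup-drive/immich-db-*.sql.gz | head -1 | xargs gunzip -t && echo "dump OK"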

Restoring from Backup

If disaster strikes:

  1. Redeploy Immich with a fresh docker compose up -d, starting from an empty database volume (remove the old pgdata volume first so Postgres initializes cleanly)
  2. Restore the database:
# Stop Immich server first
docker compose stop immich-server immich-machine-learning

# Restore the dump
gunzip < /mnt/backup-drive/immich-db-20260201.sql.gz | docker exec -i immich_postgres psql -U postgres

# Restart everything
docker compose up -d
  3. Make sure your photo directory is mounted in the same location
  4. Immich will reconnect to the restored database and everything should be back

Updating Immich

Immich releases frequently. To update:

cd /opt/immich

# Pull latest images
docker compose pull

# Restart with new images
docker compose up -d

# Clean up old images
docker image prune -f

Tip: Check the Immich release notes before updating. Occasionally there are breaking changes that require extra migration steps (rare, but it happens).
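
If you pinned IMMICH_VERSION in .env instead of tracking release, bump the tag before pulling (the version string below is a placeholder; take the real one from the release notes):

# Point .env at the new release, then pull and restart
sed -i 's/^IMMICH_VERSION=.*/IMMICH_VERSION=v1.x.y/' /opt/immich/.env
docker compose pull && docker compose up -d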

Migrating from Google Photos

If you’re coming from Google Photos:

  1. Request your data from Google Takeout. Select only Google Photos, choose .zip format, and pick the largest file size available (50GB).
  2. Download all the ZIP files to your server
  3. Extract them: for f in *.zip; do unzip "$f" -d google-photos-export; done
  4. Import with immich-go, a community-built CLI that reunites the metadata JSON files Google annoyingly separates from the actual photos:
# Run immich-go (community migration tool) via Docker
docker run -it --rm \
  -v /path/to/google-photos-export:/import:ro \
  ghcr.io/simulot/immich-go:latest \
  upload \
  -server http://YOUR_SERVER_IP:2283 \
  -key YOUR_API_KEY \
  /import

Generate the API key from the Immich web UI under Account Settings → API Keys.
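
To confirm the key works before kicking off a long import, hit an authenticated endpoint with the x-api-key header. A sketch, with the caveat that the exact path has moved between Immich releases:

# A JSON response with your user details means the key is valid
curl -fsS -H "x-api-key: YOUR_API_KEY" http://YOUR_SERVER_IP:2283/api/users/me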

Troubleshooting

“Machine learning container keeps restarting”
Usually a RAM issue. Check with docker logs immich_machine_learning. If you see OOM kills, either add more RAM or switch to smaller ML models in the admin settings.
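
To confirm it’s the ML container ballooning, snapshot per-container memory use and check the kernel log for the OOM killer:

# One-shot view of each container’s CPU and memory usage
docker stats --no-stream

# Recent out-of-memory kills, if any
sudo dmesg | grep -i 'killed process'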

“Uploads are slow”
Check your reverse proxy settings. The client_max_body_size limit in Nginx is the most common culprit. Also ensure you’re not running the database on a slow HDD.
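
To rule out slow database storage, a crude sequential-write test on the disk holding /opt/immich (writes a 1GB temp file, then removes it):

# Rough write-throughput check; a SATA SSD should manage well over 100 MB/s
dd if=/dev/zero of=/opt/immich/ddtest bs=1M count=1024 oflag=direct status=progress
rm /opt/immich/ddtest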

“Face recognition isn’t working”
It takes time. Immich processes faces in the background after photos are uploaded. Check Administration → Jobs to see the queue progress. If it’s stuck, try restarting the machine learning container.
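
Restarting just the ML container, without touching the rest of the stack:

cd /opt/immich
docker compose restart immich-machine-learning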

“Photos show wrong date”
Immich reads EXIF data for dates. If your photos lack EXIF (screenshots, downloads), they’ll use the file’s modification date. You can manually adjust dates in the photo detail view.
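
If you’d rather fix dates in bulk before uploading, exiftool can copy each file’s modification time into its EXIF date field. A sketch, assuming exiftool is installed and you’re comfortable rewriting files in place:

# Stamp DateTimeOriginal from the filesystem modification time, recursively
exiftool '-DateTimeOriginal<FileModifyDate' -overwrite_original -r /path/to/photos/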

What’s Next

Once Immich is running, you’ll probably want to:

  • Set up partner sharing so your family can see your photos
  • Configure external libraries to import existing collections
  • Enable OAuth/SSO if you run Authelia or Authentik
  • Explore the API for automation (it’s well-documented)

Welcome to owning your photos again. It feels good, doesn’t it?