This runbook describes how to run the project locally (developer machine) and on the internal server.
The goal is a clean separation between:
- local development on a developer machine (fake NAS tree under `./.local_nas`)
- the internal server (real NAS mounted at `/mnt/niederlassungen`)

Compose files:

- `docker-compose.yml` — base file; mounts `/mnt/niederlassungen:/mnt/niederlassungen:ro`
- `docker-compose.local.yml` — local override; mounts `./.local_nas:/mnt/niederlassungen:ro`
- `docker-compose.server-tools.yml` (optional)

Compose files are combined via `COMPOSE_FILE` or `-f` flags.

Env files:

- Committed templates: `.env.docker.example`, `.env.local.example`
- Local runtime env (not committed): `.env.docker`
- Server runtime env (not committed): `.env.server`

The compose setup uses `ENV_FILE` to select which env file is loaded into the app container.

Copy the template:
```bash
cp .env.docker.example .env.docker
```

Then edit `.env.docker`:

- `SESSION_SECRET` must be a random string of at least 32 characters.

For local HTTP testing, add:

```env
SESSION_COOKIE_SECURE=false
```
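A quick way to generate a value that satisfies the length requirement (a sketch; any sufficiently long random string works):

```bash
# 32 random bytes, hex-encoded -> 64 characters
openssl rand -hex 32
```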
The Search backend supports two providers:
- `SEARCH_PROVIDER=fs` (default)
- `SEARCH_PROVIDER=qsirch`
If you set `SEARCH_PROVIDER=qsirch`, these env variables are required:

```env
SEARCH_PROVIDER=qsirch
QSIRCH_BASE_URL=http://<nas-ip>:8080
QSIRCH_ACCOUNT=<qsirch-user>
QSIRCH_PASSWORD=***
QSIRCH_PATH_PREFIX=/Niederlassungen
```
Optional Qsirch tuning:

```env
# Allowed: modified | created (case-insensitive)
QSIRCH_DATE_FIELD=modified

# Allowed: sync | async | auto (case-insensitive)
# Current implementation is sync-first.
# - "auto" currently behaves like "sync" (future-proof placeholder).
QSIRCH_MODE=sync
```
Notes:

- `scripts/validate-env.mjs` enforces these keys when `SEARCH_PROVIDER=qsirch`.
- Values are normalized at runtime (trim and lowercase for `QSIRCH_DATE_FIELD` / `QSIRCH_MODE`, trim for `QSIRCH_PATH_PREFIX`) so behavior matches env validation.

Create a minimal NAS tree:
```bash
mkdir -p ./.local_nas/NL01/2024/10/23
printf "dummy" > ./.local_nas/NL01/2024/10/23/test.pdf
```
Run with the local override:
```bash
docker compose -f docker-compose.yml -f docker-compose.local.yml up --build
```
If you prefer running in the background:
```bash
docker compose -f docker-compose.yml -f docker-compose.local.yml up -d --build
```
> Note: The `app` container runs `node scripts/validate-env.mjs` automatically before `npm run start` (see the `command:` in `docker-compose.yml`). If required env vars are missing/invalid, the container fails fast and logs the validation error.
Check the health endpoint:

```bash
curl -s http://localhost:3000/api/health
```
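If `jq` is installed on the host, the same response can be pretty-printed (optional):

```bash
curl -s http://localhost:3000/api/health | jq .
```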
Expected (example):
- `db` is `ok`
- `nas.entriesSample` contains `NL01`

Generate a bcrypt hash (local):
node -e "const bcrypt=require('bcryptjs'); console.log(bcrypt.hashSync('secret-password', 10))"
Open Mongo shell inside the DB container:
```bash
docker exec -it rhl-lieferscheine-db mongosh -u root -p supersecret --authenticationDatabase admin
```
In mongosh:
```
use rhl-lieferscheine

db.users.insertOne({
  username: "branchuser",
  email: "nl01@example.com",
  passwordHash: "<PASTE_HASH_HERE>",
  role: "branch",
  branchId: "NL01",
  createdAt: new Date(),
  updatedAt: new Date()
})
```
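To confirm the user was created (optional; a sketch run from the host via `mongosh --eval`):

```bash
docker exec -it rhl-lieferscheine-db mongosh -u root -p supersecret --authenticationDatabase admin \
  --quiet --eval 'db.getSiblingDB("rhl-lieferscheine").users.findOne({ username: "branchuser" }, { passwordHash: 0 })'
```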
Login (stores cookie in cookies.txt):
```bash
curl -i -c cookies.txt \
  -H "Content-Type: application/json" \
  -d '{"username":"branchuser","password":"secret-password"}' \
  http://localhost:3000/api/auth/login
```
Call endpoints with cookie:
```bash
curl -i -b cookies.txt http://localhost:3000/api/branches
curl -i -b cookies.txt http://localhost:3000/api/branches/NL01/years
curl -i -b cookies.txt http://localhost:3000/api/branches/NL01/2024/months
curl -i -b cookies.txt http://localhost:3000/api/branches/NL01/2024/10/days
curl -i -b cookies.txt "http://localhost:3000/api/files?branch=NL01&year=2024&month=10&day=23"
```
RBAC negative test (expected 403):
```bash
curl -i -b cookies.txt http://localhost:3000/api/branches/NL02/years
```
Logout:
```bash
curl -i -b cookies.txt -c cookies.txt http://localhost:3000/api/auth/logout
```
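To verify the session is really gone (optional; `/api/auth/me` is the endpoint referenced in section 4.1 and returns `{ user: null }` when unauthenticated):

```bash
curl -s -b cookies.txt http://localhost:3000/api/auth/me
```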
The repo contains a manual smoke-test script that exercises the main API flow end to end.
Script: `scripts/manual-api-client-flow.mjs`

Run it locally from your host machine:
```bash
node scripts/manual-api-client-flow.mjs \
  --baseUrl=http://localhost:3000 \
  --username=<user> \
  --password=<pw> \
  --branch=NL01
```
If your host machine does not have Node, you can also run it inside the app container:
```bash
docker compose exec app node scripts/manual-api-client-flow.mjs \
  --baseUrl=http://127.0.0.1:3000 \
  --username=<user> \
  --password=<pw> \
  --branch=NL01
```
> Note: You may see a Node warning about ESM detection (`MODULE_TYPELESS_PACKAGE_JSON`). The script still works and the warning is harmless.
Git Bash may rewrite paths like `/mnt/niederlassungen`.
If you need to run `docker exec ... ls /mnt/niederlassungen`, disable MSYS path conversion:
```bash
MSYS_NO_PATHCONV=1 docker exec -it rhl-lieferscheine-app ls -la /mnt/niederlassungen
```
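An alternative that often works in Git Bash is doubling the leading slash, which MSYS leaves untouched (a workaround, not guaranteed for every setup; Linux treats `//mnt/...` the same as `/mnt/...`):

```bash
docker exec -it rhl-lieferscheine-app ls -la //mnt/niederlassungen
```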
Log in to the server:

```bash
ssh administrator@192.168.0.23
```

Prerequisites on the server:

- Docker and Docker Compose installed.
- The real NAS share is mounted at `/mnt/niederlassungen`.
- The mount is readable by Docker.
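A quick way to confirm the mount before starting the stack (a sketch; command availability may vary by distro):

```bash
# Is the share actually mounted?
findmnt /mnt/niederlassungen

# Are the branch folders visible?
ls -la /mnt/niederlassungen
```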
On the server (in the project folder), create `.env.server` based on the template:

```bash
cp .env.docker.example .env.server
```
Edit `.env.server`:

- Set `SESSION_SECRET` (>= 32 characters).
- `NODE_ENV=production`.
- Leave `SESSION_COOKIE_SECURE` unset (or `true`).

If the app is currently served over plain HTTP (no TLS), you may temporarily set:

```env
SESSION_COOKIE_SECURE=false
```

This is required because most clients will not send Secure cookies over HTTP.
> If the application is served over plain HTTP (no TLS), many clients will not send `Secure` cookies back. In that case, logins will appear to “work” (Set-Cookie is present), but subsequent requests will still be unauthenticated.
If the server should use Qsirch for search:
```env
SEARCH_PROVIDER=qsirch
QSIRCH_BASE_URL=http://<nas-ip>:8080
QSIRCH_ACCOUNT=<qsirch-user>
QSIRCH_PASSWORD=***
QSIRCH_PATH_PREFIX=/Niederlassungen

# Optional:
QSIRCH_DATE_FIELD=modified
QSIRCH_MODE=sync
```
Operational notes:

- `QSIRCH_BASE_URL` must be reachable from inside the app container (see the reachability check after the startup commands below).

Use the base compose file only (no local override):
```bash
ENV_FILE=.env.server docker compose -f docker-compose.yml up -d --build
```
If you configured a server-local `.env` file (see 3.4.1 / 3.4.2), you can use the short form:

```bash
docker compose up -d --build
```

Optional (recommended): validate env inside the container after updating env files:

```bash
docker compose exec app node scripts/validate-env.mjs
```
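If Qsirch is enabled, you can also confirm that `QSIRCH_BASE_URL` is reachable from inside the running app container (a sketch; it assumes the image runs Node 18+ so the global `fetch` is available):

```bash
docker compose exec app node -e 'fetch(process.env.QSIRCH_BASE_URL).then(r => console.log("reachable, HTTP", r.status)).catch(e => console.error("unreachable:", e.message))'
```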
### 3.4.1 Optional: Persist ENV_FILE selection via `.env`

If you want a simpler startup command (and to avoid forgetting `ENV_FILE=...`), you can create a small `.env` file **on the server only** that defines which env file Compose should use.

Create `./.env` in the project root:

```bash
printf "ENV_FILE=.env.server\n" > .env
```
After that, you can start the stack with:

```bash
docker compose -f docker-compose.yml up -d --build
```
Notes:

- Keep `.env` server-local (do not commit it).
- `.env.server` still contains secrets and must not be committed.
- Always run `docker compose` from the project root so Compose picks up the correct `.env` file.

### 3.4.2 Optional: Persist ENV_FILE + COMPOSE_FILE selection via `.env`

If your server setup uses multiple compose files (e.g. base + server tooling), you can also persist the compose file selection in the same **server-local** `.env` file.

Example `./.env` (server only, do not commit):

```env
ENV_FILE=.env.server
COMPOSE_FILE=docker-compose.yml:docker-compose.server-tools.yml
```
Notes:

- `COMPOSE_FILE` supports multiple files separated by `:` (common on Linux servers).
- Keep `.env` and `.env.server` server-local (do not commit).

After that, you can start/update the stack with:

```bash
docker compose up -d --build
```
Optional: verify which configuration Compose is actually using:

```bash
docker compose config
```
### 3.5 Verify

On the server:

```bash
curl -s http://127.0.0.1:3000/api/health
```
Expected:

- `db` is `ok`
- `nas.entriesSample` contains real branch folders (`NLxx`)

> Note: On some Linux servers, `localhost` resolves to IPv6 (`::1`).
> If `curl http://localhost:3000/api/health` fails, use `127.0.0.1` or `curl -4`.

### 3.6 Manual end-to-end flow (recommended inside container)

Run the manual smoke test inside the app container (no Node installation required on the host):

```bash
docker compose exec app node scripts/manual-api-client-flow.mjs \
  --baseUrl=http://127.0.0.1:3000 \
  --username=<user> \
  --password=<pw> \
  --branch=NL01
```
### 3.7 Logs and troubleshooting

Tail app logs:

```bash
docker compose -f docker-compose.yml logs --tail=200 app
```
Check container status:

```bash
docker compose ps
```
Common healthy state:

- `app` is `Up`
- `db` is `Up (healthy)`

> If you see the Node warning `MODULE_TYPELESS_PACKAGE_JSON`, it is currently expected and does not break runtime.

---

## 4. HTTPS Note (Future)

For real users, the application should be served over **HTTPS** (reverse proxy / TLS termination).

### 4.1 Why HTTPS matters here

- The session cookie (`auth_session`) is typically `Secure` in production.
- Browsers and many clients will **not send Secure cookies over HTTP**.
- Without HTTPS you may see:
  - Login response returns `200` and `Set-Cookie` appears
  - But subsequent API calls act unauthenticated (`/api/auth/me` returns `{ user: null }`)

### 4.2 Recommended setup

Run the Next.js container behind a reverse proxy that terminates TLS. Common internal options:

- Nginx
- Caddy
- Traefik

Key requirements:

- External URL is HTTPS
- Proxy forwards requests to the app container (typically `http://127.0.0.1:3000`)

### 4.3 Cookie configuration rules

- Preferred production behavior:
  - `NODE_ENV=production`
  - `SESSION_COOKIE_SECURE` unset (or `true`)
- Temporary plain HTTP (not recommended):
  - set `SESSION_COOKIE_SECURE=false`

> Security note: Disabling Secure cookies on real networks is a risk.
> Use it only as a temporary workaround while HTTPS is being introduced.

### 4.4 Operational tip: host consistency for cookies

Cookies are scoped to the **host**.

- If you authenticate against `http://localhost:3000`, open links (including PDFs) on `http://localhost:3000`.
- If you authenticate against `http://127.0.0.1:3000`, also use that host.

Mixing hosts can look like “random 401s” because the browser will not send the cookie to a different host.

---

## 5. Appendix: Quick smoke checklist

### 5.1 Local

```bash
docker compose -f docker-compose.yml -f docker-compose.local.yml up -d --build
curl -s http://localhost:3000/api/health
npx vitest run
npm run build
```
### 5.2 Server

```bash
docker compose up -d --build
curl -s http://127.0.0.1:3000/api/health
docker compose exec app node scripts/validate-env.mjs
```