Compare commits: `df27af1f10...update-fla` (28 commits)
| Author | SHA1 | Date |
|---|---|---|
| | f7d85cb4f7 | |
| | a226d68df7 | |
| | 8e6cfa47b8 | |
| | 9d702285b6 | |
| | ef128b240e | |
| | f66c8792c4 | |
| | dde6dc3246 | |
| | 0d3d41c8f9 | |
| | 8c566ed869 | |
| | eb09fdf8fd | |
| | 650ae4ef16 | |
| | cac57806b5 | |
| | 6e8246672d | |
| | b80322f58a | |
| | 936df3b7fc | |
| | 38809d7c3c | |
| | ea60876ce9 | |
| | 5880242e06 | |
| | 61502c1490 | |
| | c20b77a6b8 | |
| | 3442744803 | |
| | 87ee0849f9 | |
| | 732d29eea3 | |
| | 6ee3eaf9d3 | |
| | 352b589ad1 | |
| | e3c43a2d8a | |
| | 11061e7692 | |
| | c969f66c60 | |
32 .continue/agents/ollama.yaml Normal file

@@ -0,0 +1,32 @@

```yaml
# This is an example configuration file
# To learn more, see the full config.yaml reference: https://docs.continue.dev/reference
name: ollama
version: 1.0.0
schema: v1

# Define which models can be used
# https://docs.continue.dev/customization/models
models:
  - name: StarCoder2 Local
    provider: ollama
    model: starcoder2:7b
    modelTimeout: "5s"
    roles:
      - autocomplete
    autocompleteOptions:
      useCache: true
      useImports: true
      useRecentlyEdited: true

  - name: Nomic Embed Local
    provider: ollama
    model: nomic-embed-text:latest
    roles:
      - embed

  - name: Autodetect
    provider: ollama
    model: AUTODETECT
    defaultCompletionOptions:
      contextLength: 64000

# MCP Servers that Continue can access
# https://docs.continue.dev/customization/mcp-tools
mcpServers:
  - uses: anthropic/memory-mcp
```
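As a quick sanity check before pointing Continue at a config like the one above, the `models` section can be validated structurally after YAML parsing. A minimal sketch (stdlib only; the dict below mirrors the YAML, and the role checks reflect only the roles this particular config uses):

```python
# Structural checks for the "models" list of a Continue agent config,
# applied after YAML parsing (the dict mirrors the YAML file above).
SAMPLE = {
    "name": "ollama",
    "version": "1.0.0",
    "schema": "v1",
    "models": [
        {"name": "StarCoder2 Local", "provider": "ollama",
         "model": "starcoder2:7b", "roles": ["autocomplete"]},
        {"name": "Nomic Embed Local", "provider": "ollama",
         "model": "nomic-embed-text:latest", "roles": ["embed"]},
        {"name": "Autodetect", "provider": "ollama", "model": "AUTODETECT"},
    ],
}

def check_models(cfg: dict) -> list[str]:
    """Return a list of structural problems found in the models section."""
    problems = []
    for entry in cfg.get("models", []):
        # Every model entry needs a name, provider, and model identifier.
        for key in ("name", "provider", "model"):
            if key not in entry:
                problems.append(f"{entry.get('name', '?')}: missing '{key}'")
    # This config expects one autocomplete model and one embedding model.
    roles = {r for e in cfg.get("models", []) for r in e.get("roles", [])}
    for needed in ("autocomplete", "embed"):
        if needed not in roles:
            problems.append(f"no model has role '{needed}'")
    return problems

print(check_models(SAMPLE))  # an empty list means the checks pass
```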
125 .github/agents/dependency-auditor.agent.md vendored Normal file

@@ -0,0 +1,125 @@

---
description: |
  Use when auditing NixOS flake inputs or installed modules for known CVEs,
  checking pinned revisions against security advisories, scanning repo code for
  vulnerabilities, or running IaC/SCA audits on the nix-dotfiles repo. Use this
  agent whenever flake.lock is updated or a new input/module is added.
tools: [read, 'io.snyk/mcp/*', search, web, 'nixos/*']
---

# Dependency Security Auditor

You are a dependency security auditor for this NixOS flake repository. Your job
is to identify known CVEs, security advisories, and vulnerable package versions
across flake inputs, NixOS modules, and repo code — without interacting with any
hosted infrastructure or live services.

## Scope

- Read `flake.lock` to enumerate all pinned inputs.
- Read `flake.nix` and system/module configs to identify which NixOS packages
  and services are in active use.
- Use the nixos MCP and Snyk MCP to cross-reference versions against known
  vulnerabilities.
- Use the web tool only to look up public CVE/advisory databases (NVD, GitHub
  Security Advisories, NixOS security tracker). Do NOT connect to any hosted
  service in this infrastructure.

## Constraints

- DO NOT edit, create, or delete any files.
- DO NOT run terminal commands.
- DO NOT connect to or probe any live service (Gitea, Mattermost, Nextcloud,
  HAProxy, etc.).
- DO NOT authenticate to Snyk on behalf of the user without confirming first
  — call `snyk_auth_status` and report back if auth is missing.
- ONLY report findings grounded in real CVE/advisory data with a reference URL
  or ID.

## Audit Steps

Work through these steps in order. Show a summary of what you checked at the end
of each step.

### Step 1: Enumerate Flake Inputs

Read `flake.lock` and extract for each node:

- Owner, repo, rev (commit hash), lastModified date
- Whether it is a `github`, `git`, or `tarball` type

Flag any inputs that:

- Have not been updated in > 180 days (stale pinning risk)
- Use a mutable `ref` without a fixed `rev` (reproducibility risk)
- Are fetched over plain HTTP (not HTTPS)
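The Step 1 checks amount to a mechanical pass over `flake.lock`. A sketch of that logic (illustrative only, since the agent itself must not run commands; field names follow the standard flake.lock node schema, and the 180-day threshold repeats the rule above):

```python
import json
import time

STALE_SECONDS = 180 * 24 * 3600  # inputs older than 180 days are flagged

def audit_flake_lock(text: str, now: float) -> dict[str, list[str]]:
    """Flag stale, mutable, or insecurely fetched inputs in a flake.lock."""
    lock = json.loads(text)
    findings: dict[str, list[str]] = {}
    for name, node in lock.get("nodes", {}).items():
        if name == "root" or "locked" not in node:
            continue  # the root node only lists input edges
        flags = []
        locked = node["locked"]
        original = node.get("original", {})
        if now - locked.get("lastModified", now) > STALE_SECONDS:
            flags.append("stale: not updated in > 180 days")
        if "ref" in original and "rev" not in original:
            flags.append("mutable ref without fixed rev")
        if str(original.get("url", "")).startswith("http://"):
            flags.append("fetched over plain HTTP")
        if flags:
            findings[name] = flags
    return findings

# Minimal sample lock with one deliberately stale, ref-pinned input.
SAMPLE = json.dumps({
    "nodes": {
        "root": {"inputs": {"nixpkgs": "nixpkgs"}},
        "nixpkgs": {
            "locked": {"owner": "NixOS", "repo": "nixpkgs",
                       "rev": "abc123", "lastModified": 0, "type": "github"},
            "original": {"owner": "NixOS", "ref": "nixos-unstable",
                         "repo": "nixpkgs", "type": "github"},
        },
    },
})

print(audit_flake_lock(SAMPLE, now=time.time()))
```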
### Step 2: Look Up Active Package Versions via nixos MCP

For the pinned nixpkgs revision, use the nixos MCP (`nixos_search`,
`nixos_info`) to:

- Look up key security-sensitive packages in use across palatine-hill:
  `mattermost`, `gitea`, `nextcloud`, `postgresql`, `hydra`, `attic`,
  `ollama`, `loki`, `minio`, `haproxy`, `samba`.
- Note the package version returned.
- Search for any known vulnerabilities associated with that version using the
  nixos MCP and the web tool (NVD: `https://nvd.nist.gov/vuln/search`, GitHub
  advisory DB: `https://github.com/advisories`).

### Step 3: Run Snyk Code Scan

Before running, call `snyk_auth_status` to confirm authentication. If
unauthenticated, report that and skip this step.

Run `snyk_code_scan` on the absolute repo path
(`/home/alice/.gitprojects/nix-dotfiles`) with `severity_threshold: medium`.
Report all findings with:

- Rule ID and CWE
- Affected file and line
- Severity
- Suggested fix

### Step 4: Run Snyk IaC Scan

Run `snyk_iac_scan` on the absolute repo path
(`/home/alice/.gitprojects/nix-dotfiles`) with `severity_threshold: medium`.
While Snyk IaC does not natively parse Nix, it will catch any Kubernetes, Docker
Compose, or YAML configs present in `systems/palatine-hill/docker/` and similar
paths.

Report all findings with:

- Issue title and severity
- Affected file and line
- Impact description
- Suggested fix

### Step 5: Cross-Check NixOS Security Tracker

Use the web tool to check `https://github.com/NixOS/nixpkgs/issues?q=CVE` and
`https://discourse.nixos.org/c/security` for any open CVEs affecting:

- The pinned nixpkgs revision (from `flake.lock`)
- Any of the key packages identified in Step 2

### Step 6: Summarise

Produce a final report with:

1. **Critical / High CVEs** — packages with active, unpatched CVEs in the
   pinned revision
2. **Stale Inputs** — inputs not updated in > 180 days
3. **Snyk Code Findings** — medium+ severity SAST issues
4. **Snyk IaC Findings** — medium+ severity misconfigurations in non-Nix
   config files
5. **Clean** — categories with no findings (list explicitly so the report is
   complete)

Each finding must include:

- Severity
- CVE ID or Snyk rule ID (with reference URL)
- Affected package/file/input
- Recommended action (upgrade nixpkgs pin, patch config, etc.)
140 .github/agents/security-researcher.agent.md vendored Normal file

@@ -0,0 +1,140 @@

---
description: |
  Use when auditing NixOS server configurations for security issues, checking
  for secrets in the Nix store, exposed ports, weak authentication, missing
  service hardening, overly permissive firewall rules, SSH misconfiguration,
  Docker socket exposure, or SOPS secrets mishandling. Read-only. Does NOT
  interact with any live infrastructure or hosted resources.
tools: [read, search, 'nixos/*']
---

# Security Researcher

You are a security researcher auditing this NixOS flake repository for potential
vulnerabilities and misconfigurations. Your job is to read the configuration
as-written and identify security issues an attacker or misconfiguration could
exploit.

## Scope

- Inspect server systems only (`server = true`; currently **palatine-hill**).
- Work entirely from repository source files. DO NOT interact with any live
  system, hosted service, URL, or external resource.
- Use the nixos MCP tool to look up option defaults and known behaviours — not
  to reach external hosts.

## Constraints

- DO NOT edit, create, or delete any files.
- DO NOT run terminal commands.
- DO NOT fetch URLs or browse the web.
- DO NOT attempt to connect to, probe, or fingerprint any live service.
- ONLY report issues that are grounded in the actual content of the repository
  files.

## Audit Checklist

Work through these categories in order. For each, read the relevant files before
reporting.

### 1. Secrets in the Nix Store

- Are any passwords, tokens, or API keys hardcoded in `.nix` files (not behind
  SOPS)?
- Are `password = "..."` fields used in NixOS module options that end up
  world-readable in `/nix/store`?
- Check service DB password fields, `initialScript`, environment variables, and
  `settings` blocks.
- Use the nixos MCP tool to confirm whether a given option value lands in the
  store.
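The category 1 checks are essentially a pattern search over `.nix` sources. A rough sketch of the kind of match the auditor looks for (the regexes are illustrative and will produce false positives; every hit still needs manual review against SOPS usage):

```python
import re

# Patterns that suggest a literal secret in a .nix file. Anything matching
# here lands world-readable in /nix/store unless it goes through SOPS.
# Excluding '$' skips interpolations like "${config.sops.secrets...}".
SUSPECT = [
    re.compile(r'password\s*=\s*"[^"$]+"'),
    re.compile(r'(token|apiKey|secret)\s*=\s*"[^"$]+"', re.IGNORECASE),
]

def find_hardcoded_secrets(nix_source: str) -> list[str]:
    """Return the lines of a .nix file that look like hardcoded secrets."""
    hits = []
    for line in nix_source.splitlines():
        if any(p.search(line) for p in SUSPECT):
            hits.append(line.strip())
    return hits

SAMPLE = '''
services.gitea.database.password = "hunter2";
sops.secrets."gitea/db-password".owner = "gitea";
'''
print(find_hardcoded_secrets(SAMPLE))
```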
### 2. SOPS Secrets Hygiene

- Do `sops.secrets` entries have the correct `owner` set to the service user
  (not `root` unless necessary)?
- Is `defaultSopsFile` scoped correctly, or could one system's secrets bleed
  into another?
- Are any secrets referenced in config that are not declared in `sops.secrets`?

### 3. Firewall and Attack Surface

- Which TCP/UDP ports are exposed in `firewall.nix`? Are all of them
  intentional and documented?
- Are `trustedInterfaces` entries broader than necessary (e.g., `br+` covering
  all bridge interfaces)?
- Does `extraCommands` insert raw iptables rules that bypass the NixOS firewall
  abstraction in a dangerous way?
- Are any high-risk ports (22, 80, 443, 5432, 6379, 27017) exposed directly?
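The port review in category 3 can be mechanised as a first pass. A sketch (string-based, so it only catches literal `allowedTCPPorts` lists; the high-risk set repeats the ports named above):

```python
import re

HIGH_RISK = {22, 80, 443, 5432, 6379, 27017}

def exposed_high_risk_ports(nix_source: str) -> set[int]:
    """Extract literal allowedTCPPorts entries and intersect with HIGH_RISK."""
    ports: set[int] = set()
    for m in re.finditer(r"allowedTCPPorts\s*=\s*\[([^\]]*)\]", nix_source):
        # Nix lists are whitespace-separated: [ 80 443 8080 ]
        ports.update(int(p) for p in re.findall(r"\d+", m.group(1)))
    return ports & HIGH_RISK

SAMPLE = """
networking.firewall.allowedTCPPorts = [ 80 443 8080 ];
"""
print(sorted(exposed_high_risk_ports(SAMPLE)))
```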
### 4. SSH Configuration

- What port is SSH running on? Is password authentication disabled?
- Are `PermitRootLogin`, `PasswordAuthentication`, and `PubkeyAuthentication`
  set explicitly?
- Check `modules/openssh.nix` and any system-level overrides.

### 5. PostgreSQL Authentication

- Does `authentication` (pg_hba) use `trust` for any user or database?
- Are `scram-sha-256` or `peer` used consistently rather than `md5` or
  `password`?
- Does any service connect over TCP with a plaintext password that ends up in
  the Nix store?
- Are `ensureUsers` entries scoped correctly (no unnecessary `superuser` or
  `createdb` grants)?
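For category 5, the `trust`/`md5` check reduces to scanning the method column of the pg_hba-style `authentication` lines. A sketch (pg_hba fields are type, database, user, address, method; only the final method field matters here):

```python
def weak_auth_lines(pg_hba: str) -> list[str]:
    """Return pg_hba lines whose auth method is trust, md5, or password."""
    weak = {"trust", "md5", "password"}
    hits = []
    for line in pg_hba.splitlines():
        fields = line.split()
        # Skip blank lines and comments; the method is the last field.
        if fields and not line.lstrip().startswith("#") and fields[-1] in weak:
            hits.append(line.strip())
    return hits

SAMPLE = """\
# TYPE  DATABASE  USER      ADDRESS       METHOD
local   all       postgres                peer
host    gitea     gitea     127.0.0.1/32  scram-sha-256
host    all       all       0.0.0.0/0     trust
"""
print(weak_auth_lines(SAMPLE))
```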
### 6. Service Isolation and Hardening

- Do systemd services set `DynamicUser`, `PrivateTmp`, `NoNewPrivileges`,
  `ProtectSystem`, or similar hardening options where applicable?
- Check custom `systemd.services` blocks for missing or weak sandboxing.
- Are services running as root that should run as a dedicated user?

### 7. Docker and Container Security

- Is the Docker socket (`/var/run/docker.sock`) mounted into any container? If
  so, flag it as a privilege escalation vector.
- Are any containers run with `--privileged` or `network_mode: host`?
- Are Docker compose files in the repo using hardcoded secrets or environment
  variables that land in the store?
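The Docker socket check in category 7 can be sketched over parsed compose data (stdlib-only sketch; the dict stands in for a parsed `docker-compose.yml`, and the service names are hypothetical):

```python
def socket_mounts(compose: dict) -> list[str]:
    """List services in a parsed compose file that mount the Docker socket.

    Mounting /var/run/docker.sock gives the container control of the Docker
    daemon, which is effectively root on the host.
    """
    offenders = []
    for name, svc in compose.get("services", {}).items():
        for vol in svc.get("volumes", []):
            if str(vol).startswith("/var/run/docker.sock"):
                offenders.append(name)
    return offenders

SAMPLE = {
    "services": {
        "watchtower": {"volumes": ["/var/run/docker.sock:/var/run/docker.sock"]},
        "gitea": {"volumes": ["./data:/data"]},
    }
}
print(socket_mounts(SAMPLE))
```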
### 8. Web-Facing Services

- Do reverse-proxied services (Gitea, Mattermost, Nextcloud, etc.) set
  `siteUrl`/`ROOT_URL` to HTTPS?
- Is there any service that could be accessed over plain HTTP internally?
- Are ACME/TLS certs scoped correctly and not shared across unrelated services?

### 9. Module Defaults That Are Security-Sensitive

- For each enabled service, use the nixos MCP tool to check if the default
  values for security-relevant options (e.g., `database.password`,
  `openFirewall`, `enableAdminCreateUser`) are safe, and confirm whether
  defaults are overridden in the repo.

### 10. Broad Permission Grants

- Are any users granted `wheel`, `docker`, or other privileged groups without
  clear justification?
- Does any non-human service account have `superuser`, `replication`, or
  `createrole` PostgreSQL clauses?

## Output Format

Report findings as a numbered list grouped by severity:

- **Critical** — direct path to credentials exposure, RCE, or privilege
  escalation
- **High** — exploitable misconfiguration or data exposure under realistic
  conditions
- **Medium** — weak default, unnecessary privilege, or defence-in-depth gap
- **Low / Info** — hardening improvement or minor noise

Each finding must include:

- Severity label
- Exact file path and line (as a markdown link)
- One-sentence explanation of the risk
- Concrete suggested remediation

If a category is clean, state that explicitly so the report is complete.
81 .github/agents/server-architect.agent.md vendored Normal file

@@ -0,0 +1,81 @@

---
description: |
  Use when reviewing server infrastructure, auditing NixOS server
  configurations, planning how new services or modules integrate into
  palatine-hill, checking for missing imports, DB/user alignment, firewall
  gaps, module argument signatures, or reverse proxy routing. DO NOT use for
  making changes or for desktop/workstation systems.
tools: [read, search, 'nixos/*']
---

# Infrastructure Architect

You are an infrastructure architect for this NixOS flake repository. Your job is
to review the existing server architecture and analyse how proposed or recently
added changes integrate with it.

## Scope

You only inspect **server** machines. In this repository that means systems where
`server = true` in their `default.nix` — currently **palatine-hill**. Do NOT
inspect or opine on desktop systems such as `artemision` or `selinunte` unless
explicitly asked.

## Constraints

- DO NOT edit, create, or delete any files.
- DO NOT run terminal commands.
- DO NOT make assumptions — read the actual files.
- ONLY report concrete, actionable findings with exact file and line references.

## Approach

When asked to review a change or audit the server state, work through these
checkpoints in order:

1. **Module registration** — Is the new `.nix` file imported in
   `systems/<host>/configuration.nix`? Check the `imports` list.
2. **Module argument signature** — Does every module accept `{ ..., ... }:` to
   absorb `specialArgs` (`system`, `server`, `inputs`, `outputs`)? A missing
   `...` causes "unexpected argument" eval errors.
3. **Service dependencies** — Does the new service depend on another (e.g.
   PostgreSQL, Redis, S3/Minio)? If so:
   - Is the dependency service enabled and imported on this host?
   - Are the required DB names and users present in `ensureDatabases` /
     `ensureUsers`?
   - Is the user name in `ensureUsers` consistent with what the service module
     defaults to? (Use the nixos MCP tool to check default values.)
   - Are authentication rules (`pg_hba`, `authentication` block) present for
     the new user?
4. **Secrets alignment** — If the service uses SOPS secrets, are they declared
   in `sops.secrets` with the correct `owner`? Does the secrets key exist in
   `secrets.yaml`?
5. **Firewall exposure** — Is the service port opened in `firewall.nix`? If
   traffic is reverse-proxied (e.g. via external HAProxy), no direct port
   exposure in the NixOS firewall is needed — confirm which model applies.
6. **Reverse proxy / TLS** — Is a proxy rule (HAProxy, nginx, Caddy) defined
   for the new vhost? If the proxy is managed externally, note that explicitly.
   Check that `siteUrl` / `ROOT_URL` / equivalent matches the actual domain.
7. **Upgrade / backup plumbing** — If the service has stateful data, is it
   listed in `postgresql.upgrade.stopServices`? Is it covered by
   `postgresqlBackup`?
8. **Module provisioning conflicts** — Does the NixOS module have a
   `create`/`createLocally` option that auto-provisions a DB/user? If manual
   provisioning also exists, flag potential ownership drift.
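Checkpoint 1 (module registration) is essentially a comparison between the `.nix` files on disk and the `imports` list. A rough sketch (string-based, so it misses imports composed dynamically; file names are hypothetical):

```python
import re

def unregistered_modules(config_nix: str, module_files: list[str]) -> list[str]:
    """Return system-local .nix files not referenced by a relative import."""
    # Collect every ./<path>.nix reference appearing in configuration.nix.
    imported = set(re.findall(r"\./([\w./-]+\.nix)", config_nix))
    return [f for f in module_files if f not in imported]

CONFIG = """
{ config, pkgs, ... }:
{
  imports = [ ./hardware.nix ./gitea.nix ];
}
"""
print(unregistered_modules(CONFIG, ["hardware.nix", "gitea.nix", "minio.nix"]))
```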
## Output Format
|
||||||
|
|
||||||
|
Report findings as a numbered list grouped by severity:
|
||||||
|
|
||||||
|
- **High** — will cause a build failure, service crash, or security issue
|
||||||
|
- **Medium** — will cause silent misconfiguration or future breakage
|
||||||
|
- **Low / Info** — style, redundancy, or optional improvements
|
||||||
|
|
||||||
|
Each finding must include:
|
||||||
|
|
||||||
|
- The severity label
|
||||||
|
- The exact file path and line (as a markdown link)
|
||||||
|
- A one-sentence explanation of the problem
|
||||||
|
- A concrete suggested fix
|
||||||
|
|
||||||
|
If everything checks out, say so explicitly and summarise what you verified.
|
||||||
698 .github/copilot-instructions.md vendored Normal file

@@ -0,0 +1,698 @@

# Nix Dotfiles Repository Guide

This repository contains NixOS configurations for personal infrastructure. The
setup is organized around a flake-based structure with per-system configurations
and user-specific settings.

## Project Structure

- `flake.nix` - Main flake definition with inputs and outputs
- `systems/` - Per-system configurations (e.g., `artemision`, `palatine-hill`)
- `users/` - Per-user configurations using home-manager
- `modules/` - Reusable Nix modules for common services
- `lib/` - Custom Nix library functions
- `hydra/` - Hydra CI/CD configuration
- `secrets/` - SOPS-encrypted secrets

## Key Concepts

### System Configuration

Each system has its own directory under `systems/` containing:

- `configuration.nix` - Main system configuration
- Component modules (`audio.nix`, `desktop.nix`, etc.)
- Hardware-specific configurations

### User Configuration

User configurations live in `users/<username>/`:

- `home.nix` - Home-manager configuration using `home.packages` and imports
- `secrets.yaml` - SOPS-encrypted secrets using age encryption
- `non-server.nix` - Desktop-specific configuration

### Nix Patterns

1. **Module-based approach**: Uses Nix modules for organizing configuration
1. **Home-manager integration**: User environment managed via home-manager
1. **SOPS secrets**: Secrets managed with SOPS and age encryption
1. **Flake-based**: Uses flakes for reproducible builds and development environments
1. **Multi-system support**: Supports multiple machines with different configurations
1. **Dynamic configuration generation**: Modules in the `modules/` directory are automatically imported into all systems (and can be overridden per system). New systems are automatically discovered by `genSystems`

### Modern Nix Features

This repository uses modern Nix features including:

- **Flakes**: Enabled via the `flakes` experimental feature
- **Nix Command**: Enabled via the `nix-command` experimental feature
- **Blake3 Hashes**: Enabled via the `blake3-hashes` experimental feature
- **Git Hashing**: Enabled via the `git-hashing` experimental feature
- **Verified Fetches**: Enabled via the `verified-fetches` experimental feature

### Key Commands

- `nh os switch` - Apply system configuration (using nix-community/nh)
- `nh home switch` - Apply user configuration (using nix-community/nh)
- `nh os build` - Build a system configuration (using nix-community/nh)
- `nix build .#<system>` - Build a specific system
- `nix run .#<system>` - Run a specific system
- `nix flake update` - Update flake inputs

### Development Workflow

1. Make changes to system or user configuration
1. Test with `nh os switch` or `nh home switch`
1. For CI/CD, Hydra automatically builds and tests changes
1. Secrets are managed with SOPS and age keys

### Important Files

- `flake.nix` - Main entry point for the flake
- `systems/artemision/configuration.nix` - Example system configuration
- `users/alice/home.nix` - Example user configuration
- `modules/base.nix` - Base module with common settings
- `hydra/jobsets.nix` - Hydra CI configuration

### External Dependencies

- NixOS unstable channel
- Nixpkgs unstable channel
- SOPS for secrets management
- age for encryption
- home-manager for user environments
- nh (nix-community/nh) for simplified Nix operations

### Nix MCP Server

- Use the nix MCP server for looking up package names and options
- Specify the `unstable` channel where a channel can be specified (e.g., for `pkgs.<package-name>`)

## Dynamic Configuration System (lib/systems.nix)

This repository automatically generates NixOS system configurations based on the folder structure. Understanding how `constructSystem` and `genSystems` work is essential when adding new systems or global modules.

### How Configuration Generation Works

The process happens in three stages:

**Stage 1: Discovery** (`flake.nix` → `genSystems`)

- `flake.nix` calls `genSystems inputs outputs src (src + "/systems")`
- `genSystems` scans the `systems/` directory and lists all subdirectories
- Each subdirectory name becomes a system hostname (e.g., `artemision`, `palatine-hill`)

**Stage 2: Parameter Loading** (`genSystems` reads `default.nix`)

- For each discovered system, `genSystems` imports `systems/<hostname>/default.nix`
- This file exports parameters for `constructSystem`, such as:
  - `users = [ "alice" ]` — which users to create
  - `home = true` — enable home-manager
  - `sops = true` — enable secret decryption
  - `server = true/false` — machine role
  - `modules = [ ... ]` — additional system-specific modules

**Stage 3: Assembly** (`constructSystem` assembles the full config)

- Loads essential system files: `hardware.nix`, `configuration.nix`
- Auto-imports all `.nix` files from the `modules/` directory via `lib.adev.fileList`
- Conditionally loads home-manager, SOPS, and user configs based on parameters
- Merges everything into a complete NixOS system configuration
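The three stages can be pictured with a small Python analogue (illustrative only; the real logic is Nix in `lib/systems.nix`, and the defaults shown are the ones documented for `default.nix` parameters):

```python
# Defaults mirror the documented default.nix parameters (home/sops default to
# true, system defaults to "x86_64-linux").
DEFAULTS = {"home": True, "sops": True, "server": False, "system": "x86_64-linux"}

def gen_systems(discovered: dict[str, dict]) -> dict[str, dict]:
    """Analogue of the three stages: `discovered` maps each systems/
    subdirectory name (Stage 1) to the parameters its default.nix exports
    (Stage 2); assembly (Stage 3) merges those over the defaults."""
    return {host: {**DEFAULTS, **params} for host, params in discovered.items()}

# Mirrors systems/artemision (desktop) and systems/palatine-hill (server).
configs = gen_systems({
    "artemision": {"users": ["alice"]},
    "palatine-hill": {"users": ["alice"], "server": True},
})
print(configs["palatine-hill"]["server"], configs["artemision"]["system"])
```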
### Key Functions in lib/systems.nix
|
||||||
|
|
||||||
|
| Function | Purpose | Called By |
|
||||||
|
|----------|---------|-----------|
|
||||||
|
| `genSystems` | Scans `systems/` directory and creates configs for each subdirectory | `flake.nix` |
|
||||||
|
| `constructSystem` | Assembles a single NixOS system with all modules and configs | `genSystems` |
|
||||||
|
| `genHome` | Imports home-manager configs for specified users | `constructSystem` |
|
||||||
|
| `genSops` | Imports SOPS-encrypted secrets for users | `constructSystem` |
|
||||||
|
| `genUsers` | Imports user account configs from `users/<username>/` | `constructSystem` |
|
||||||
|
| `genHostName` | Creates hostname attribute set | `constructSystem` |
|
||||||
|
| `genWrapper` | Conditionally applies generator functions | `constructSystem` |
|
||||||
|
|
||||||
|
### Special Arguments Passed to All Configs
|
||||||
|
|
||||||
|
These are available in `configuration.nix`, `hardware.nix`, and all modules:
|
||||||
|
|
||||||
|
```nix
|
||||||
|
{ config, pkgs, lib, inputs, outputs, server, system, ... }:
|
||||||
|
```
|
||||||
|
|
||||||
|
- `config` — NixOS configuration options
|
||||||
|
- `pkgs` — Nix packages (nixpkgs)
|
||||||
|
- `lib` — Nix library functions (extended with `lib.adev`)
|
||||||
|
- `inputs` — Flake inputs (nixpkgs, home-manager, sops-nix, etc.)
|
||||||
|
- `outputs` — Flake outputs (for Hydra and other tools)
|
||||||
|
- `server` — Boolean: true for servers, false for desktops
|
||||||
|
- `system` — System architecture string (e.g., `"x86_64-linux"`)
|
||||||
|
|
||||||
|
## Adding a New NixOS System
|
||||||
|
|
||||||
|
### Step 1: Create the Directory Structure
|
||||||
|
|
||||||
|
```bash
|
||||||
|
mkdir -p systems/<new-hostname>
|
||||||
|
cd systems/<new-hostname>
|
||||||
|
```
|
||||||
|
|
||||||
|
### Step 2: Create `default.nix` (System Parameters)
|
||||||
|
|
||||||
|
This file is automatically discovered and loaded by `genSystems`. It exports the parameters passed to `constructSystem`.
|
||||||
|
|
||||||
|
**Minimal example:**
|
||||||
|
|
||||||
|
```nix
|
||||||
|
{ inputs }:
|
||||||
|
{
|
||||||
|
# Required: List of users to create (must have entries in users/ directory)
|
||||||
|
users = [ "alice" ];
|
||||||
|
|
||||||
|
# Optional: Enable home-manager (default: true)
|
||||||
|
home = true;
|
||||||
|
|
||||||
|
# Optional: Enable SOPS secrets (default: true)
|
||||||
|
sops = true;
|
||||||
|
|
||||||
|
# Optional: Is this a server? Used to conditionally enable server features
|
||||||
|
server = false;
|
||||||
|
|
||||||
|
# Optional: System architecture (default: "x86_64-linux")
|
||||||
|
system = "x86_64-linux";
|
||||||
|
|
||||||
|
# Optional: System-specific modules (in addition to global modules/)
|
||||||
|
modules = [
|
||||||
|
# ./custom-service.nix
|
||||||
|
];
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
**See `systems/palatine-hill/default.nix` for a complex example with all options.**
|
||||||
|
|
||||||
|
### Step 3: Create `hardware.nix` (Hardware Configuration)
|
||||||
|
|
||||||
|
Generate this via:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
sudo nixos-generate-config --show-hardware-config > systems/<new-hostname>/hardware.nix
|
||||||
|
```
|
||||||
|
|
||||||
|
This file typically includes:
|
||||||
|
|
||||||
|
- Boot configuration and bootloader
|
||||||
|
- Filesystem mounts and ZFS/LVM settings
|
||||||
|
- Hardware support (CPU, GPU, network drivers)
|
||||||
|
- Device-specific kernel modules
|
||||||
|
|
||||||
|
### Step 4: Create `configuration.nix` (System Configuration)
|
||||||
|
|
||||||
|
This is the main NixOS configuration file. Structure:
|
||||||
|
|
||||||
|
```nix
|
||||||
|
{ config, pkgs, lib, inputs, server, system, ... }:
|
||||||
|
{
|
||||||
|
# System hostname (usually matches directory name)
|
||||||
|
networking.hostName = "new-hostname";
|
||||||
|
|
||||||
|
# Desktop/desktop specific config
|
||||||
|
services.xserver.enable = !server;
|
||||||
|
|
||||||
|
# System packages
|
||||||
|
environment.systemPackages = with pkgs; [
|
||||||
|
# ...
|
||||||
|
];
|
||||||
|
|
||||||
|
# Services to enable
|
||||||
|
services.openssh.enable = server;
|
||||||
|
|
||||||
|
# System-specific settings override global defaults
|
||||||
|
boot.kernelParams = [ "nomodeset" ];
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
### Step 5: Add Optional Secrets
|
||||||
|
|
||||||
|
If the system has sensitive data:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Create and encrypt secrets file
|
||||||
|
sops systems/<new-hostname>/secrets.yaml
|
||||||
|
|
||||||
|
# This will be automatically loaded by genSops if sops = true
|
||||||
|
```
|
||||||
|
|
||||||
|
### Step 6: Add Optional System-Specific Modules
|
||||||
|
|
||||||
|
For system-specific functionality that shouldn't be global, create separate `.nix` files in the system directory:
|
||||||
|
|
||||||
|
```text
|
||||||
|
systems/<new-hostname>/
|
||||||
|
├── configuration.nix # Main config
|
||||||
|
├── default.nix
|
||||||
|
├── hardware.nix
|
||||||
|
├── secrets.yaml # (optional)
|
||||||
|
├── custom-service.nix # (optional) System-specific modules
|
||||||
|
├── networking.nix # (optional)
|
||||||
|
└── graphics.nix # (optional)
|
||||||
|
```
|
||||||
|
|
||||||
|
Reference these in `default.nix`:
|
||||||
|
|
||||||
|
```nix
|
||||||
|
{ inputs }:
|
||||||
|
{
|
||||||
|
users = [ "alice" ];
|
||||||
|
modules = [
|
||||||
|
./custom-service.nix
|
||||||
|
./networking.nix
|
||||||
|
./graphics.nix
|
||||||
|
];
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
### Step 7: Deploy the New System
|
||||||
|
|
||||||
|
The system is now automatically registered! Deploy with:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Build the new system
|
||||||
|
nix build .#<new-hostname>
|
||||||
|
|
||||||
|
# Or if you want to switch immediately
|
||||||
|
nh os switch
|
||||||
|
```
|
||||||
|
|
||||||
|
## Adding a Global Module to modules/
|
||||||
|
|
||||||
|
Global modules are automatically imported into all systems. No registration needed.
|
||||||
|
|
||||||
|
### Create a Module File
|
||||||
|
|
||||||
|
Add a new `.nix` file to the `modules/` directory. Example: `modules/my-service.nix`
|
||||||
|
|
||||||
|
### Module Structure
|
||||||
|
|
||||||
|
```nix
{ config, pkgs, lib, inputs, server, ... }:
{
  # Define configuration options for this module
  options.myService = {
    enable = lib.mkEnableOption "my service";
    port = lib.mkOption {
      type = lib.types.int;
      default = 3000;
      description = "Port for the service";
    };
  };

  # Actual configuration (conditional on the enable option)
  config = lib.mkIf config.myService.enable {
    environment.systemPackages = [ pkgs.my-service ];

    systemd.services.my-service = {
      description = "My Service";
      wantedBy = [ "multi-user.target" ];
      serviceConfig = {
        ExecStart = "${pkgs.my-service}/bin/my-service";
        Restart = "always";
      };
    };
  };
}
```

### Using mkIf, mkDefault, and mkForce

- **`mkIf`** — Conditionally apply config based on a boolean

  ```nix
  config = lib.mkIf config.myService.enable { ... };
  ```

- **`mkDefault`** — Provide a default value that other modules can override

  ```nix
  boot.kernelParams = lib.mkDefault [ "quiet" ];
  ```

- **`mkForce`** — Force a value, preventing other modules from overriding it

  ```nix
  services.openssh.enable = lib.mkForce true;
  ```

- **`mkEnableOption`** — Define an `enable` option with a standard description

  ```nix
  options.myService.enable = lib.mkEnableOption "my service";
  ```

### Disable a Global Module for a Specific System

To disable a module for one system, override it in that system's `configuration.nix`:

```nix
{ config, lib, ... }:
{
  # Disable the module entirely
  myService.enable = false;

  # Or override specific options (note: openssh exposes `ports`, a list)
  services.openssh.ports = [ 2222 ];
}
```

### Module Loading Order in constructSystem

Modules are applied in this order (later modules override earlier ones):

1. `inputs.nixos-modules.nixosModule` (SuperSandro2000's convenience functions)
1. `inputs.nix-index-database.nixosModules.nix-index`
1. Hostname attribute from `genHostName`
1. `hardware.nix` (hardware-specific config)
1. `configuration.nix` (main system config)
1. **System-specific modules** from the `modules` parameter in `default.nix` (e.g., `custom-service.nix`)
1. **All `.nix` files from the global `modules/` directory** (features enabled across all systems)
1. SOPS module (if `sops = true`)
1. Home-manager module (if `home = true`)
1. User configurations (if `users = [...]` and `home = true`)

Important: global modules (step 7) are applied after system-specific configs, so they cannot override those values unless they use `mkForce`. System-specific modules take precedence over global ones.

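This precedence rule can be seen in a small sketch (the option shown is illustrative, not a module this repo necessarily defines): a global module only beats a system-specific definition of the same option by raising its priority.

```nix
# modules/ssh-hardening.nix (hypothetical global module)
{ lib, ... }:
{
  # Without mkForce, a system-specific definition of this option would
  # take precedence (or conflict); mkForce raises this value's priority
  # so the global module wins everywhere.
  services.openssh.settings.PasswordAuthentication = lib.mkForce false;
}
```
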
## Common Tasks

### Enable a Feature Across All Systems

1. Create `modules/my-feature.nix` with `options.myFeature.enable`
1. Enable the feature in the `configuration.nix` of systems that need it:

   ```nix
   myFeature.enable = true;
   ```

1. Or enable it globally (give the option `default = true`) and disable selectively:

   ```nix
   # In modules/my-feature.nix
   config = lib.mkIf config.myFeature.enable {
     # ...enabled by default
   };

   # In a system's configuration.nix
   myFeature.enable = false; # Disable just for this system
   ```

### Add a New User to the System

1. Create the user config: `users/<username>/default.nix` and `users/<username>/home.nix`
1. Update the system's `default.nix`:

   ```nix
   users = [ "alice" "newuser" ];
   ```

1. Create secrets: `sops users/<username>/secrets.yaml`
1. Redeploy: `nh os switch`

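As a rough sketch, a minimal `users/newuser/default.nix` might look like the following. The exact shape depends on how this repo's user loader consumes these files, so treat every field here as illustrative:

```nix
# Hypothetical minimal user definition; adapt to this repo's conventions
{ pkgs, ... }:
{
  users.users.newuser = {
    isNormalUser = true;
    extraGroups = [ "wheel" ];
    shell = pkgs.zsh;
  };
}
```
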
### Override a Module's Default Behavior

In any system's `configuration.nix`:

```nix
{
  # Disable a service that's enabled by default in a module
  services.openssh.enable = false;

  # Override module options
  boot.kernelParams = [ "nomodeset" ];

  # Add to existing lists
  environment.systemPackages = [ pkgs.custom-tool ];
}
```

### Check Which Modules Are Loaded

```bash
# Confirm the whole configuration evaluates (prints the top-level derivation)
nix eval .#nixosConfigurations.<hostname>.config.system.build.toplevel.drvPath

# Evaluate a specific config value
nix eval .#nixosConfigurations.<hostname>.config.services.openssh.enable
```

### Validate Configuration Before Deploying

```bash
# Check syntax and evaluate
nix flake check

# Build without switching
nix build .#<hostname>

# Preview what would change (nvd compares the two system closures)
nix build .#<hostname> && nvd diff /run/current-system ./result
```

## Secrets Management

SOPS (Secrets OPerationS) manages sensitive data like passwords and API keys. This repository uses age encryption with SOPS to encrypt secrets per system and per user.

### Directory Structure

Secrets are stored alongside their respective configs:

```text
systems/<hostname>/secrets.yaml   # System-wide secrets
users/<username>/secrets.yaml     # User-specific secrets
```

### Creating and Editing Secrets

**Create or edit a secrets file:**

```bash
# For a system
sops systems/<hostname>/secrets.yaml

# For a user
sops users/<username>/secrets.yaml
```

SOPS will open your `$EDITOR` with the decrypted content. When you save and exit, it automatically re-encrypts the file.

**Example secrets structure for a system:**

```yaml
# systems/palatine-hill/secrets.yaml
acme:
  email: user@example.com
  api_token: "secret-token-here"
postgresql:
  password: "db-password"
```

**Example secrets for a user:**

```yaml
# users/alice/secrets.yaml
# The user password is required
user-password: "hashed-password-here"
```

### Accessing Secrets in Configuration

Secrets are exposed via `config.sops.secrets` in modules and configurations. Note that `<name>.path` evaluates to the runtime path of the decrypted file (under `/run/secrets/` by default), not the secret's plaintext, so reference the file at runtime rather than interpolating secret values into the Nix store:

```nix
# In a module or configuration.nix
{ config, lib, ... }:
{
  # Point a service at the decrypted file via an environment file
  systemd.services.my-app.serviceConfig = {
    EnvironmentFile = [ config.sops.secrets."api-token".path ];
  };

  # Or read the file at runtime from a script; ${...} interpolates only
  # the path string, and `cat` runs when the service starts
  systemd.services.my-app.script = ''
    token="$(cat ${config.sops.secrets."api-token".path})"
  '';
}
```

### Merging Secrets Files

When multiple branches modify the same encrypted file, a custom Git merge driver can merge the decrypted content instead of the ciphertext:

```bash
# Register the merge driver (single quotes keep Git's %O %A %B placeholders
# from being expanded by the shell; adjust arguments to the script's interface)
git config merge.sopsmergetool.driver 'utils/sops-mergetool.sh %O %A %B'

# Then during a merge conflict
git merge branch-name

# Git will use the driver to intelligently merge the encrypted files
```

The driver also has to be mapped to the secrets files in `.gitattributes`. The repository includes helper scripts: `utils/sops-mergetool.sh` and `utils/sops-mergetool-new.sh`

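For completeness, the `.gitattributes` mapping might look like this; the driver name must match whatever name was used in `git config`, and the path patterns here are assumptions to adapt:

```text
# .gitattributes (hypothetical patterns)
systems/**/secrets.yaml merge=sopsmergetool
users/**/secrets.yaml merge=sopsmergetool
```
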
### Adding a New Machine's Age Key

When adding a new system (`systems/<new-hostname>/`), you need to register its age encryption key:

1. Generate the key on the target machine (for an existing deployment) or during initial setup
1. Add the public key to `.sops.yaml`:

   ```yaml
   keys:
     - &artemision <age-key-for-artemision>
     - &palatine-hill <age-key-for-palatine-hill>
     - &new-hostname <age-key-for-new-hostname>

   creation_rules:
     - path_regex: 'systems/new-hostname/.*'
       key_groups:
         - age: *new-hostname
   ```

1. Re-encrypt existing secrets with the new key:

   ```bash
   sops updatekeys systems/new-hostname/secrets.yaml
   ```

## Real-World Examples

### Example 1: Adding a Feature to All Desktop Machines

Using `artemision` (desktop) as an example:

**Create `modules/gpu-optimization.nix`:**

```nix
{ config, lib, server, ... }:
{
  options.gpu.enable = lib.mkEnableOption "GPU optimization";

  config = lib.mkIf (config.gpu.enable && !server) {
    # Desktop-only GPU settings
    hardware.nvidia.open = true;
    services.xserver.videoDrivers = [ "nvidia" ];
  };
}
```

**Enable in `systems/artemision/configuration.nix`:**

```nix
{
  gpu.enable = true;
}
```

**Deploy:**

```bash
nix build .#artemision
nh os switch
```

### Example 2: Adding a Server Service to One System

Using `palatine-hill` (server) as an example:

**Create `systems/palatine-hill/postgresql-backup.nix`:**

```nix
{ config, pkgs, lib, ... }:
{
  systemd.timers.postgres-backup = {
    description = "PostgreSQL daily backup";
    wantedBy = [ "timers.target" ];
    timerConfig = {
      OnCalendar = "03:00";
      Persistent = true;
    };
  };

  systemd.services.postgres-backup = {
    description = "Backup PostgreSQL database";
    # gzip is referenced by store path so the service does not rely on PATH
    script = ''
      ${pkgs.postgresql}/bin/pg_dumpall | ${pkgs.gzip}/bin/gzip > /backups/postgres-$(date +%Y%m%d).sql.gz
    '';
  };
}
```

**Reference in `systems/palatine-hill/default.nix`:**

```nix
{ inputs }:
{
  users = [ "alice" ];
  server = true;
  modules = [
    ./postgresql-backup.nix
  ];
}
```

**Deploy:**

```bash
nix build .#palatine-hill
```

### Example 3: Disabling a Global Module for a Specific System

To disable `modules/steam.nix` on a server (`palatine-hill`) while it stays enabled on desktops:

**In `systems/palatine-hill/configuration.nix`:**

```nix
{
  steam.enable = false; # Override the module option
}
```

The module in `modules/steam.nix` should use:

```nix
config = lib.mkIf config.steam.enable {
  # steam configuration only if enabled
};
```

## Debugging & Validation

### Check Module Evaluation

```bash
# Inspect what modules contribute to a value (package names only)
nix eval .#nixosConfigurations.artemision.config.environment.systemPackages --apply 'map (p: p.name)'

# Validate that a module option exists and read its final value
nix eval .#nixosConfigurations.artemision.options.myService.enable.value
```

### Debug SOPS Secrets

```bash
# View encrypted secrets (you must have the age key)
sops systems/palatine-hill/secrets.yaml

# Check that the SOPS integration is working
nix eval .#nixosConfigurations.palatine-hill.config.sops.secrets --json
```

### Test Configuration Without Deploying

```bash
# Evaluate the top-level derivation without building it
nix eval .#nixosConfigurations.artemision.config.system.build.toplevel.drvPath

# Build (but don't activate)
nix build .#artemision

# Inspect the resulting store path
nix path-info ./result
```

18 .github/instructions/ai-doc-attribution.instructions.md vendored Normal file
@@ -0,0 +1,18 @@

---
description: "Use when writing or updating documentation (Markdown, README, docs pages, guides). Require explicit top-of-document labeling when a document is fully AI-generated."
name: "AI Documentation Attribution"
applyTo: "**/*.md"
---

# AI Documentation Attribution

- When documentation is fully AI-generated, include an explicit attribution note.
- The attribution must be visible in the document body and easy for readers to find.
- Acceptable labels include:
  1. "AI-generated documentation"
- Place the attribution at the top of the document by default.
- If only parts are AI-assisted, attribution is optional unless you want to disclose assistance.
- Do not imply fully human authorship for content produced by AI.

Example attribution lines:

- `> Note: This document was AI-generated and reviewed by a maintainer.`

5 .vscode/extensions.json vendored Normal file
@@ -0,0 +1,5 @@

{
  "recommendations": [
    "davidanson.vscode-markdownlint"
  ]
}

1 .vscode/mcp.json vendored Normal file
@@ -0,0 +1 @@
{}

@@ -10,6 +10,8 @@ This repo supports `x86_64-linux` and (theoretically) `aarch64-linux`.
Please see [our setup guide](./docs/setting-up.md) for more information on how
to onboard a new user or system.

For the media request stack on palatine-hill, see [the media stack guide](./docs/media-stack.md).

## For Those Interested

Although we are not actively looking for new members to join in on this repo,

@@ -56,7 +56,9 @@ forEachSystem (
#!/usr/bin/env ruby

all
rule 'MD013', :tables => false
rule 'MD013', :tables => false, :line_length => 220
exclude_rule 'MD029' # ordered list items separated by blank lines
exclude_rule 'MD041' # YAML frontmatter triggers false positives
'').outPath;
};

422 docs/media-stack.md Normal file
@@ -0,0 +1,422 @@

# Media Request Stack Setup

> Note: This is AI-generated documentation and was reviewed by a maintainer.

This page documents the setup needed to make media requests flow from Jellyseerr to the Starr apps to qBittorrent and finally into a Jellyfin library.

It is based on the services defined for palatine-hill in:

- `systems/palatine-hill/docker/arr.nix`
- `systems/palatine-hill/docker/torr.nix`
- `systems/palatine-hill/postgresql.nix`
- `systems/palatine-hill/vars.nix`

The guidance here follows the same hardlink principles used by TRaSH Guides: keep downloads and library folders separate, but make sure they live on the same filesystem and appear under the same container path.

## What Exists In This Repo

The media-request side currently defines these containers on palatine-hill:

- Jellyseerr on port `5055`
- Prowlarr on port `9696`
- Radarr on port `7878`
- Sonarr on port `8989`
- Lidarr on port `8686`
- Bazarr on port `6767`
- qBittorrent variants in `docker/torr.nix`

Related supporting details:

- The Starr apps and qBittorrent both mount `/data` from `vars.primary_torr`.
- PostgreSQL is enabled locally and used by the arr stack.

Two caveats matter before expecting the flow to work:

1. Jellyfin is not currently defined on palatine-hill in this repo, so this guide treats Jellyfin as the destination media server you will point at the finished library.
2. qBittorrent uses host-exposed or gluetun-attached networking rather than `arrnet`, so the Starr apps should connect to qBittorrent through the host and its published port.

## Required Hardlink Layout

For hardlinks and atomic moves to work reliably, these rules need to be true:

- qBittorrent and the Starr apps must see the same underlying host filesystem and the same ZFS dataset.
- qBittorrent and the Starr apps should use the same in-container prefix, ideally `/data`.
- Downloads and the final library must be separate directories.
- Jellyfin should only read the final media library, not the download directories.

For ZFS specifically, sibling child datasets in the same pool are not enough. Hardlinks do not cross dataset boundaries, so `/data/torrents` and `/data/media` must be directories inside the same dataset.

Recommended logical layout inside containers:

```text
/data
├── torrents
│   ├── movies
│   ├── music
│   └── tv
└── media
    ├── movies
    ├── music
    └── tv
```

This repo draft uses one shared host root from `vars.primary_torr` and mounts that as `/data` for qBittorrent, Radarr, Sonarr, Lidarr, Bazarr, Unpackerr, and Notifiarr.

### What Matters

The exact host path matters less than this invariant:

```text
same host filesystem + same container path prefix + separate downloads/media folders
```

If you split torrents and media across different datasets, imports can still be made to work via copies or path fixes, but hardlinks and instant moves will not be dependable.

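A quick way to check the invariant on the host is a hardlink smoke test between the two trees. The sketch below uses a throwaway directory so it is safe to run as-is; substitute your real `/data` subtrees when testing the actual layout:

```shell
# Hardlink smoke test: the ln succeeds only if both paths share one filesystem
root=$(mktemp -d)                       # stand-in for the shared /data root
mkdir -p "$root/torrents/movies" "$root/media/movies"
echo sample > "$root/torrents/movies/file.mkv"
ln "$root/torrents/movies/file.mkv" "$root/media/movies/file.mkv"
# A link count of 2 means both names share one inode, so no copy was made
stat -c %h "$root/media/movies/file.mkv"
rm -rf "$root"
```

If `ln` fails with "Invalid cross-device link", the two trees are on different filesystems or datasets and the layout needs fixing.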
## Suggested Host Layout

Once you choose a shared host root, create a structure like this beneath it:

```text
data/
├── torrents/
│   ├── movies/
│   ├── music/
│   └── tv/
└── media/
    ├── movies/
    ├── music/
    └── tv/
```

In this repo draft, the shared host root is `vars.primary_torr`, with container mounts set to `"${vars.primary_torr}/data:/data"`.

The matching container paths should then be:

- qBittorrent download root: `/data/torrents`
- Radarr root folder: `/data/media/movies`
- Sonarr root folder: `/data/media/tv`
- Lidarr root folder: `/data/media/music`
- Jellyfin library roots: `/data/media/movies`, `/data/media/tv`, `/data/media/music`

Do not point any Starr app root folder at `/data/torrents`.

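The host-side skeleton can be created in one pass. In the sketch below the root defaults to a temporary directory so it is safe to run as-is; point `MEDIA_ROOT` at the real shared root (e.g. the `${vars.primary_torr}/data` path from above) when actually provisioning:

```shell
# Create the downloads/media skeleton under one shared root
root="${MEDIA_ROOT:-$(mktemp -d)}"      # substitute the real host root
mkdir -p "$root"/torrents/{movies,music,tv} "$root"/media/{movies,music,tv}
ls "$root"/torrents "$root"/media
```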
## Service Roles

### Jellyseerr

Jellyseerr is the user-facing request layer. It should:

- connect to Jellyfin for users, authentication, and media availability
- connect to Radarr for movies
- connect to Sonarr for series

Jellyseerr does not talk directly to qBittorrent in the normal request flow.

### Prowlarr

Prowlarr should be the single source of indexers. Configure indexers there, then sync them to:

- Radarr
- Sonarr
- Lidarr

This avoids duplicating indexer setup in every Starr app.

### Radarr, Sonarr, Lidarr

These apps should:

- receive requests from Jellyseerr
- search indexers via Prowlarr
- send downloads to qBittorrent
- import completed downloads from `/data/torrents/...` into `/data/media/...`

### qBittorrent

qBittorrent should only download into `/data/torrents/...` and should not write directly into `/data/media/...`.

### Jellyfin

Jellyfin should only read the final library under `/data/media/...`.

## Configuration Order

Set the stack up in this order:

1. Shared path layout
2. qBittorrent
3. Prowlarr
4. Radarr, Sonarr, Lidarr
5. Jellyfin
6. Jellyseerr
7. Bazarr

That order keeps each layer pointing at services that already exist.

## qBittorrent Setup

The repo defines these Web UI ports:

- `8082` for `qbit`
- `8081` for `qbitVPN`
- `8083` for `qbitPerm`

Choose one instance for the Starr apps to use and keep that choice consistent.

Recommended qBittorrent settings:

- Default save path: `/data/torrents`
- Category mode: enabled
- Automatic torrent management: enabled
- Incomplete directory: optional, but avoid a different filesystem if you want cheap moves
- Listening port: use the instance-specific torrent port if applicable

Recommended categories:

- `radarr` -> `/data/torrents/movies`
- `sonarr` -> `/data/torrents/tv`
- `lidarr` -> `/data/torrents/music`

This matches the TRaSH pattern and keeps imports predictable.

## Prowlarr Setup

In Prowlarr:

1. Add your indexers.
2. Add app connections for Radarr, Sonarr, and Lidarr.
3. Sync indexers from Prowlarr into each Starr app.

Use the container hostnames from the repo when apps share the `arrnet` network:

- `http://radarr:7878`
- `http://sonarr:8989`
- `http://lidarr:8686`

If you are configuring through host-exposed ports in a browser from outside Docker, use the server host and published ports instead.

## Radarr Setup

In Radarr:

1. Add a root folder: `/data/media/movies`
2. Add qBittorrent as the download client
3. Set the category to `radarr`
4. Keep completed download handling on
5. Do not use a movie root inside the downloads tree

For qBittorrent, use the chosen instance endpoint.

Examples:

- preferred for this repo draft: `http://<server>:8082`
- VPN-backed alternative if you intentionally use that instance: `http://<server>:8081`

The important part is that the path qBittorrent writes to must still be visible to Radarr as `/data/torrents/movies`.

## Sonarr Setup

In Sonarr:

1. Add a root folder: `/data/media/tv`
2. Add qBittorrent as the download client
3. Set the category to `sonarr`
4. Enable completed download handling

Keep the same shared-path rule: Sonarr must be able to see qBittorrent output directly at `/data/torrents/tv`.

## Lidarr Setup

In Lidarr:

1. Add a root folder: `/data/media/music`
2. Add qBittorrent as the download client
3. Set the category to `lidarr`
4. Enable completed download handling

## Jellyfin Setup

Jellyfin should be pointed only at the final library paths:

- Movies: `/data/media/movies`
- TV: `/data/media/tv`
- Music: `/data/media/music`

Do not add `/data/torrents` as a Jellyfin library.

If Jellyfin runs in Docker, mount only the media sub-tree if you want a tighter boundary:

- `host-shared-root/media:/data/media`

If Jellyfin runs directly on the host, point it at the equivalent host paths.

## Jellyseerr Setup

Jellyseerr in this repo runs on port `5055` and joins both `arrnet` and `haproxy-net`.

Configure it with:

1. Jellyfin server URL
2. Jellyfin API key
3. Radarr server URL and API key
4. Sonarr server URL and API key

Suggested internal URLs when services share `arrnet`:

- Radarr: `http://radarr:7878`
- Sonarr: `http://sonarr:8989`

Jellyseerr request defaults should map:

- Movies -> Radarr root `/data/media/movies`
- Series -> Sonarr root `/data/media/tv`

After that, the user flow is:

1. User requests media in Jellyseerr
2. Jellyseerr hands the request to Radarr or Sonarr
3. The Starr app searches via Prowlarr indexers
4. The Starr app sends the download to qBittorrent with its category
5. qBittorrent writes into `/data/torrents/...`
6. The Starr app imports into `/data/media/...`
7. Jellyfin scans or detects the new item in the final library

## Bazarr Setup

Bazarr is optional for the request-to-library path, but it fits after Radarr and Sonarr are stable.

Point Bazarr at:

- Radarr
- Sonarr
- the final media library visible under `/data/media`

It does not need the download tree for ordinary subtitle management.

## Remote Path Mappings

If you align the mounts properly, you should not need remote path mappings. That is the preferred setup.

Only use remote path mappings if the downloader and the importing app see different absolute paths for the same files. In a Docker-only setup with a shared `/data`, that is a sign the mounts are wrong rather than a feature you should rely on.

## ZFS Notes

For a hardlink-safe media layout on ZFS:

- Keep `/data/torrents` and `/data/media` in the same dataset.
- Do not split them into separate child datasets if you want hardlinks.
- It is fine to keep qBittorrent config, Jellyfin metadata, and other appdata in separate datasets because those do not need hardlinks with payload files.

For `ZFS-primary/torr`, a better baseline for bulk media than a small-record, high-compression profile is:

- `recordsize=1M`
- `compression=zstd-3` or `lz4`
- `sync=standard`
- `logbias=throughput`
- `primarycache=metadata`
- `dnodesize=auto`

These are new-write behavior settings. `recordsize` only affects newly written data.

## Repo-Specific Notes

- Arr containers use `PUID=600` and `PGID=100`.
- qBittorrent containers also use `PUID=600` and `PGID=100`.
- The arr stack uses the local PostgreSQL service via `/var/run/postgresql`.
- `jellyseerr` stores config under `${vars.primary_docker}/overseerr` even though the container is Jellyseerr.
- The hardlink draft in this repo chooses `vars.primary_torr` as the shared `/data` root.
- `systems/palatine-hill/docker/default.nix` imports `torr.nix`, so the downloader stack is part of the host configuration.

## Deployment Checklist (Exact Values)

Use this checklist when configuring the stack so every app matches the current draft.

### Shared Paths

- Shared container path for arr + downloader: `/data`
- Download root: `/data/torrents`
- Media roots:
  - Movies: `/data/media/movies`
  - TV: `/data/media/tv`
  - Music: `/data/media/music`

### qBittorrent (Primary Instance)

- Web UI URL for Starr apps: `http://<server>:8082`
- Web UI port: `8082`
- Torrent port: `29432` (TCP/UDP)
- Default save path: `/data/torrents`
- Category save-path mode: enabled
- Automatic torrent management: enabled

Category paths:

- `radarr` -> `/data/torrents/movies`
- `sonarr` -> `/data/torrents/tv`
- `lidarr` -> `/data/torrents/music`

### Radarr

- URL: `http://radarr:7878` (inside arr network)
- Root folder: `/data/media/movies`
- Download client: qBittorrent at `http://<server>:8082`
- qBittorrent category: `radarr`
- Completed download handling: enabled

### Sonarr

- URL: `http://sonarr:8989` (inside arr network)
- Root folder: `/data/media/tv`
- Download client: qBittorrent at `http://<server>:8082`
- qBittorrent category: `sonarr`
- Completed download handling: enabled

### Lidarr

- URL: `http://lidarr:8686` (inside arr network)
- Root folder: `/data/media/music`
- Download client: qBittorrent at `http://<server>:8082`
- qBittorrent category: `lidarr`
- Completed download handling: enabled

### Prowlarr

- URL: `http://prowlarr:9696` (inside arr network)
- App sync targets:
  - `http://radarr:7878`
  - `http://sonarr:8989`
  - `http://lidarr:8686`

### Jellyseerr Values

- URL: `http://jellyseerr:5055` (internal) or via your reverse proxy externally
- Radarr target: `http://radarr:7878`
- Sonarr target: `http://sonarr:8989`
- Request defaults:
  - Movies root: `/data/media/movies`
  - Series root: `/data/media/tv`

### Jellyfin Values

- Library roots only:
  - `/data/media/movies`
  - `/data/media/tv`
  - `/data/media/music`
- Do not add `/data/torrents` as a library.

## Validation Checklist
|
||||||
|
|
||||||
|
Use this after setup:
|
||||||
|
|
||||||
|
1. qBittorrent can create files in `/data/torrents/movies`, `/data/torrents/tv`, and `/data/torrents/music`.
|
||||||
|
2. Radarr, Sonarr, and Lidarr can browse both `/data/torrents/...` and `/data/media/...`.
|
||||||
|
3. A test download lands in the expected category folder.
|
||||||
|
4. The corresponding Starr app imports the item into `/data/media/...` without copy-delete behavior.
|
||||||
|
5. Jellyfin can see the imported file in the final library.
|
||||||
|
6. Jellyseerr shows the item as available after import and scan.
|
||||||
|
|
||||||
|
If imports fail or hardlinks do not work, check the mount design before changing app logic.
|
||||||
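The hardlink caveat in that last line can be checked directly: hardlink imports only work when the torrent and media trees live on one filesystem. A minimal sketch, assuming a POSIX shell and GNU `stat` inside the container (the helper name is ours, not from any of the apps):

```shell
#!/bin/sh
# Prints a verdict and returns 0 when both paths share one filesystem,
# which is what Starr hardlink imports require.
check_hardlink_path() {
  src_dev=$(stat -c %d "$1") || return 2
  dst_dev=$(stat -c %d "$2") || return 2
  if [ "$src_dev" = "$dst_dev" ]; then
    echo "same filesystem: hardlink imports will work"
  else
    echo "different filesystems: imports fall back to copy+delete"
    return 1
  fi
}

# Inside any arr container, per the checklist paths:
#   check_hardlink_path /data/torrents /data/media
```

If this reports different filesystems, fix the mount design (one shared `/data` bind mount) rather than the app settings, as the note above says.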
flake.lock (generated, 72 changed lines)

@@ -76,11 +76,11 @@
       },
       "locked": {
         "dir": "pkgs/firefox-addons",
-        "lastModified": 1775534587,
-        "narHash": "sha256-OLAoGTTwPVTH13C1e2Vcdff4WigTsk6hO5Y3sEcwl/s=",
+        "lastModified": 1777348977,
+        "narHash": "sha256-9aKuCI5TKHKnP073B1VzBdLRLAQJE7R9rbJWaSFXr3M=",
         "owner": "rycee",
         "repo": "nur-expressions",
-        "rev": "9f1e4b7f5443c50cb4ccc2a376ba1058231e64b4",
+        "rev": "a314975f42bfa9665bf77d1586ee0e123790ed27",
         "type": "gitlab"
       },
       "original": {
@@ -240,11 +240,11 @@
         ]
       },
       "locked": {
-        "lastModified": 1775556024,
-        "narHash": "sha256-j1u/859OVS54rGlsvFqJdwKPEnFYCI+4pyfTiSfv1Xc=",
+        "lastModified": 1777349711,
+        "narHash": "sha256-PGKgo2dO6fK603QGI+DWXdKmS09pbJjjTxwRHdhkGZA=",
         "owner": "nix-community",
         "repo": "home-manager",
-        "rev": "4bdfeff1d9b7473e6e58f73f5809576e8a69e406",
+        "rev": "c1140540536d483e2730320100f6835d62c94fdf",
         "type": "github"
       },
       "original": {
@@ -281,11 +281,11 @@
         ]
       },
       "locked": {
-        "lastModified": 1774778246,
-        "narHash": "sha256-OX9Oba3/cHq1jMS1/ItCdxNuRBH3291Lg727nHOzYnc=",
+        "lastModified": 1776426061,
+        "narHash": "sha256-3rROoGl8xBsIOM+5m+qZS4GJnsdQPAH3NJJe1OUfJ5o=",
         "owner": "hyprwm",
         "repo": "contrib",
-        "rev": "ca3c381df6018e6c400ceac994066427c98fe323",
+        "rev": "1f71628d86a7701fd5ba0f8aeabe15376f4c6afc",
         "type": "github"
       },
       "original": {
@@ -335,11 +335,11 @@
         ]
       },
       "locked": {
-        "lastModified": 1775365369,
-        "narHash": "sha256-DgH5mveLoau20CuTnaU5RXZWgFQWn56onQ4Du2CqYoI=",
+        "lastModified": 1777181277,
+        "narHash": "sha256-yVJbd07ortDRAttDFmDV5p220aOLTHgVAx//0nW/xW8=",
         "owner": "Mic92",
         "repo": "nix-index-database",
-        "rev": "cef5cf82671e749ac87d69aadecbb75967e6f6c3",
+        "rev": "b8eb7acee0f7604fe1bf6a5b3dcf5254369180fa",
         "type": "github"
       },
       "original": {
@@ -415,11 +415,11 @@
     },
     "nixos-hardware": {
       "locked": {
-        "lastModified": 1775490113,
-        "narHash": "sha256-2ZBhDNZZwYkRmefK5XLOusCJHnoeKkoN95hoSGgMxWM=",
+        "lastModified": 1776983936,
+        "narHash": "sha256-ZOQyNqSvJ8UdrrqU1p7vaFcdL53idK+LOM8oRWEWh6o=",
         "owner": "NixOS",
         "repo": "nixos-hardware",
-        "rev": "c775c2772ba56e906cbeb4e0b2db19079ef11ff7",
+        "rev": "2096f3f411ce46e88a79ae4eafcfc9df8ed41c61",
         "type": "github"
       },
       "original": {
@@ -438,11 +438,11 @@
         ]
       },
       "locked": {
-        "lastModified": 1775331627,
-        "narHash": "sha256-przIxCbTrNgLzcBlNPGZRfZbiPLzUkLUtNS05Ekcogk=",
+        "lastModified": 1776036369,
+        "narHash": "sha256-TxBJY5IwDu3peDIK3b9+A7pwqBaFRCAIllaRSfYMQtI=",
         "owner": "NuschtOS",
         "repo": "nixos-modules",
-        "rev": "b4cc33254b872b286b9fe481e60e3fc2abc78072",
+        "rev": "2bea807180b3931cf8765078205fd9171dbfd2b5",
         "type": "github"
       },
       "original": {
@@ -500,11 +500,11 @@
     },
     "nixpkgs_2": {
       "locked": {
-        "lastModified": 1775423009,
-        "narHash": "sha256-vPKLpjhIVWdDrfiUM8atW6YkIggCEKdSAlJPzzhkQlw=",
+        "lastModified": 1776877367,
+        "narHash": "sha256-EHq1/OX139R1RvBzOJ0aMRT3xnWyqtHBRUBuO1gFzjI=",
         "owner": "nixos",
         "repo": "nixpkgs",
-        "rev": "68d8aa3d661f0e6bd5862291b5bb263b2a6595c9",
+        "rev": "0726a0ecb6d4e08f6adced58726b95db924cef57",
         "type": "github"
       },
       "original": {
@@ -550,11 +550,11 @@
         ]
       },
       "locked": {
-        "lastModified": 1775036584,
-        "narHash": "sha256-zW0lyy7ZNNT/x8JhzFHBsP2IPx7ATZIPai4FJj12BgU=",
+        "lastModified": 1776796298,
+        "narHash": "sha256-PcRvlWayisPSjd0UcRQbhG8Oqw78AcPE6x872cPRHN8=",
         "owner": "cachix",
         "repo": "git-hooks.nix",
-        "rev": "4e0eb042b67d863b1b34b3f64d52ceb9cd926735",
+        "rev": "3cfd774b0a530725a077e17354fbdb87ea1c4aad",
         "type": "github"
       },
       "original": {
@@ -594,11 +594,11 @@
         ]
       },
       "locked": {
-        "lastModified": 1775531562,
-        "narHash": "sha256-G83GDxQo6lqO5aeTSD5RFLhnh2g6DzJpSvSju2EjjrQ=",
+        "lastModified": 1777346187,
+        "narHash": "sha256-oVxyGjpiIsrXhWTJVUOs38fZQkLjd0nZGOY9K7Kfot8=",
         "owner": "oxalica",
         "repo": "rust-overlay",
-        "rev": "d8b1b209203665924c81eabf750492530754f27e",
+        "rev": "146e7bf7569b8288f24d41d806b9f584f7cfd5b5",
         "type": "github"
       },
       "original": {
@@ -614,11 +614,11 @@
         ]
       },
       "locked": {
-        "lastModified": 1775365543,
-        "narHash": "sha256-f50qrK0WwZ9z5EdaMGWOTtALgSF7yb7XwuE7LjCuDmw=",
+        "lastModified": 1777338324,
+        "narHash": "sha256-bc+ZZCmOTNq86/svGnw0tVpH7vJaLYvGLLKFYP08Q8E=",
         "owner": "Mic92",
         "repo": "sops-nix",
-        "rev": "a4ee2de76efb759fe8d4868c33dec9937897916f",
+        "rev": "8eaee5c45428b28b8c47a83e4c09dccec5f279b5",
         "type": "github"
       },
       "original": {
@@ -647,11 +647,11 @@
         "tinted-zed": "tinted-zed"
       },
       "locked": {
-        "lastModified": 1775429060,
-        "narHash": "sha256-wbFF5cRxQOCzL/wHOKYm21t5AHPH2Lfp0mVPCOAvEoc=",
+        "lastModified": 1776893932,
+        "narHash": "sha256-AFD5cf9eNqXq1brHS63xeZy2xKZMgG9J86XJ9I2eLn8=",
        "owner": "danth",
         "repo": "stylix",
-        "rev": "d27951a6539951d87f75cf0a7cda8a3a24016019",
+        "rev": "84971726c7ef0bb3669a5443e151cc226e65c518",
         "type": "github"
       },
       "original": {
@@ -767,11 +767,11 @@
         ]
       },
       "locked": {
-        "lastModified": 1775531246,
-        "narHash": "sha256-sbVYa4TS2Q1pkSjs8CvHsPGYFM5w4d9od4ltzIGV/bA=",
+        "lastModified": 1777064547,
+        "narHash": "sha256-hssXWvyy6bzaGi9FuZQPGxVBLzQKRPDht13O0Y+Qxmo=",
         "owner": "Toqozz",
         "repo": "wired-notify",
-        "rev": "4fd4283803f198302af1a6a75b2225568004b343",
+        "rev": "95edd8613b1636639857a3fba403155cef82eb5d",
         "type": "github"
       },
       "original": {
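Each bump above changes only `lastModified`, `narHash`, and `rev` for an input, which is the usual output of `nix flake update`. The `lastModified` values are Unix timestamps, so an input's freshness can be read off with GNU `date`; the value below is the updated `home-manager` timestamp from the second hunk:

```shell
# Render a flake.lock lastModified epoch as a calendar date (UTC).
date -u -d @1777349711 '+%Y-%m-%d'   # -> 2026-04-28
```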
@@ -13,6 +13,7 @@
     enable = lib.mkDefault true;
     flags = [ "--accept-flake-config" ];
     randomizedDelaySec = "1h";
+    runGarbageCollection = true;
     persistent = true;
     flake = "git+ssh://nayeonie.com/ahuston-0/nix-dotfiles.git";
   };
@@ -54,6 +54,12 @@
       "starcoder2:7b"
     ];
   };
+  avahi = {
+    enable = true;
+    #publish.enable = true;
+    nssmdns4 = true;
+    openFirewall = true;
+  };
   flatpak.enable = true;
   calibre-web = {
     # temp disable this
@@ -6,7 +6,6 @@
     attic-client
     amdgpu_top
     android-tools
-    bat
     bitwarden-cli
     bfg-repo-cleaner
     brightnessctl
@@ -19,7 +18,6 @@
     #claude-code
     croc
     deadnix
-    direnv
     easyeffects
     eza
     fanficfare
@@ -107,4 +105,13 @@
     zoom-us
     zoxide
   ];
+  programs = {
+    appimage = {
+      enable = true;
+      binfmt = true;
+    };
+    bat.enable = true;
+    direnv.enable = true;
+    kdeconnect.enable = true;
+  };
 }
@@ -1,10 +1,4 @@
 { pkgs, ... }:
-# let
-# randWallpaper = pkgs.runCommand "stylix-wallpaper" { } ''
-# numWallpapers =
-# $((1 + $RANDOM % 10))
-
-# in
 {
   stylix = {
     enable = true;
@@ -14,6 +14,7 @@
     ./haproxy
     ./hardware-changes.nix
     ./hydra.nix
+    ./mattermost.nix
     ./minio.nix
     ./networking.nix
     ./nextcloud.nix
@@ -33,8 +34,7 @@
     loader.grub.device = "/dev/sda";
     useSystemdBoot = true;
     kernelParams = [
-      "i915.force_probe=56a5"
-      "i915.enable_guc=2"
+      "xe.force_probe=56a5"
     ];
     kernel.sysctl = {
       "vm.overcommit_memory" = lib.mkForce 1;
@@ -5,6 +5,7 @@
 }:
 let
   vars = import ../vars.nix;
+  shared_data_path = "${vars.primary_torr}/data";
   arr_postgres_config =
     container_type:
     let
@@ -62,7 +63,7 @@ in
       ];
       volumes = [
         "${vars.primary_docker}/bazarr:/config"
-        "${vars.primary_plex_storage}/data:/data"
+        "${shared_data_path}:/data"
         "/var/run/postgresql:/var/run/postgresql"
       ];
       extraOptions = [
@@ -110,7 +111,7 @@ in
       ];
       volumes = [
         "${vars.primary_docker}/radarr:/config"
-        "${vars.primary_plex_storage}/data:/data"
+        "${shared_data_path}:/data"
         "/var/run/postgresql:/var/run/postgresql"
       ];
       extraOptions = [
@@ -134,7 +135,7 @@ in
       ];
       volumes = [
         "${vars.primary_docker}/sonarr:/config"
-        "${vars.primary_plex_storage}/data:/data"
+        "${shared_data_path}:/data"
         "/var/run/postgresql:/var/run/postgresql"
       ];
       extraOptions = [
@@ -158,7 +159,7 @@ in
       ];
       volumes = [
         "${vars.primary_docker}/lidarr:/config"
-        "${vars.primary_plex_storage}/data:/data"
+        "${shared_data_path}:/data"
         "/var/run/postgresql:/var/run/postgresql"
       ];
       extraOptions = [
@@ -176,7 +177,7 @@ in
       };
       volumes = [
         "${vars.primary_docker}/unpackerr:/config"
-        "${vars.primary_plex_storage}:/data"
+        "${shared_data_path}:/data"
         "/var/run/postgresql:/var/run/postgresql"
       ];
       extraOptions = [ "--network=arrnet" ];
@@ -194,7 +195,7 @@ in
       environmentFiles = [ config.sops.secrets."docker/notifiarr".path ];
       volumes = [
         "${vars.primary_docker}/notifiarr:/config"
-        "${vars.primary_plex_storage}:/data"
+        "${shared_data_path}:/data"
         "/var/run/postgresql:/var/run/postgresql"
       ];
       extraOptions = [ "--network=arrnet" ];
@@ -1,9 +1,4 @@
-{
-  config,
-  lib,
-  pkgs,
-  ...
-}:
-
+{ ... }:
 {
   imports = [
@@ -20,7 +15,7 @@
     ./nextcloud.nix
     # ./postgres.nix
     # ./restic.nix
-    #./torr.nix
+    ./torr.nix
     # ./unifi.nix
   ];
 
@@ -58,6 +58,7 @@ in
       volumes = [ "${nextcloud_path}/nc_data:/var/www/html:ro" ];
       extraOptions = [
         "--device=/dev/dri:/dev/dri"
+        "--network=nextcloud_default"
       ];
     };
     collabora-code = {
@@ -48,6 +48,9 @@
 
     # torr
     29432
+
+    # mattermost
+    8065
   ];
 
   allowedUDPPorts = [
@@ -57,6 +57,7 @@ in
     minimumDiskFree = 50;
     minimumDiskFreeEvaluator = 100;
     extraConfig = ''
+      allow_import_from_derivation = true
       <git-input>
       timeout = 3600
       </git-input>
systems/palatine-hill/mattermost.nix (new file, 20 lines)

@@ -0,0 +1,20 @@
+{
+  config,
+  ...
+}:
+let
+  vars = import ./vars.nix;
+in
+{
+  services.mattermost = {
+    enable = true;
+    siteUrl = "https://mattermost.nayeonie.com"; # Set this to the URL you will be hosting the site on.
+    database = {
+      peerAuth = true; # This allows Mattermost to connect to the database without a password, which is more secure when both are on the same machine.
+      create = true;
+      driver = "postgres";
+    };
+    dataDir = "${vars.primary_mattermost}/mattermost";
+    host = "0.0.0.0";
+  };
+}
@@ -40,6 +40,7 @@ in
     ensureDatabases = [
       "atticd"
       "alice"
+      "mattermost"
     ];
     ensureUsers = [
       {
@@ -171,6 +172,7 @@ in
         "hydra-server"
         "atticd"
         "gitea"
+        "mattermost"
       ];
     };
   };
@@ -20,4 +20,5 @@ rec {
   primary_plex = "${zfs_primary}/plex";
   primary_plex_storage = "${zfs_primary}/plex_storage";
   primary_ollama = "${zfs_primary}/ollama";
+  primary_mattermost = "${zfs_primary}/mattermost";
 }
@@ -12,6 +12,107 @@
     options zfs zfs_arc_min=82463372083
     options zfs zfs_arc_max=192414534860
   '';
+
+  initrd.systemd.services = {
+    zfs-import-zfs-primary = {
+      description = "Import ZFS-primary pool in initrd";
+      wantedBy = [ "initrd-root-fs.target" ];
+      wants = [ "systemd-udev-settle.service" ];
+      after = [ "systemd-udev-settle.service" ];
+      before = [
+        "sysroot.mount"
+        "initrd-root-fs.target"
+      ];
+      unitConfig.DefaultDependencies = "no";
+      serviceConfig = {
+        Type = "oneshot";
+        RemainAfterExit = true;
+      };
+      path = with pkgs; [
+        coreutils
+        gawk
+        zfs
+      ];
+      script = ''
+        ZFS_FORCE="-f"
+        msg=""
+
+        for o in $(cat /proc/cmdline); do
+          case "$o" in
+            zfs_force|zfs_force=1|zfs_force=y)
+              ZFS_FORCE="-f"
+              ;;
+          esac
+        done
+
+        pool_ready() {
+          pool="$1"
+          state="$(zpool import -d /dev/disk/by-id/ 2>/dev/null | awk '/pool: '"$pool"'/ { found = 1 }; /state:/ { if (found == 1) { print $2; exit } }; END { if (found == 0) { print "MISSING" } }')"
+          if [ "$state" = "ONLINE" ]; then
+            return 0
+          fi
+          echo "Pool $pool in state $state, waiting"
+          return 1
+        }
+
+        pool_imported() {
+          pool="$1"
+          zpool list "$pool" >/dev/null 2>/dev/null
+        }
+
+        pool_import() {
+          pool="$1"
+          zpool import -d /dev/disk/by-id/ -N $ZFS_FORCE "$pool"
+        }
+
+        echo -n 'importing root ZFS pool "ZFS-primary"...'
+        # Loop until import succeeds, because by-id devices may not be discovered yet.
+        if ! pool_imported "ZFS-primary"; then
+          trial=1
+          while [ "$trial" -le 60 ]; do
+            if pool_ready "ZFS-primary" >/dev/null && msg="$(pool_import "ZFS-primary" 2>&1)"; then
+              break
+            fi
+            sleep 1
+            echo -n .
+            trial=$((trial + 1))
+          done
+          echo
+          if [ -n "$msg" ]; then
+            echo "$msg"
+          fi
+          pool_imported "ZFS-primary" || pool_import "ZFS-primary" # Try one last time, e.g. to import a degraded pool.
+        fi
+      '';
+    };
+
+    zfs-load-nix-key = {
+      description = "Load ZFS key for ZFS-primary/nix in initrd";
+      wantedBy = [ "initrd-fs.target" ];
+      requires = [
+        "sysroot.mount"
+        "zfs-import-zfs-primary.service"
+      ];
+      after = [
+        "sysroot.mount"
+        "zfs-import-zfs-primary.service"
+      ];
+      before = [
+        "initrd-fs.target"
+        "sysroot-nix.mount"
+      ];
+      unitConfig.DefaultDependencies = "no";
+      serviceConfig = {
+        Type = "oneshot";
+        RemainAfterExit = true;
+      };
+      path = with pkgs; [ zfs ];
+      script = ''
+        key_file="/sysroot/crypto/keys/zfs-nix-store-key"
+        zfs load-key -L "file://$key_file" "ZFS-primary/nix"
+      '';
+    };
+  };
 };
 
 services = {
@@ -31,6 +132,7 @@
       "ZFS-primary/docker".useTemplate = [ "production" ];
       "ZFS-primary/hydra".useTemplate = [ "nix-prod" ];
       "ZFS-primary/nextcloud".useTemplate = [ "production" ];
+      "ZFS-primary/mattermost".useTemplate = [ "production" ];
       # all docker containers should have a bind mount if they expect lasting zfs snapshots
       "ZFS-primary/vardocker".useTemplate = [ "nix-prod" ];
       "ZFS-primary/minio".useTemplate = [ "nix-prod" ];
@@ -81,69 +183,4 @@
     };
   };
-
-  # hack to make sure pool is imported before keys are loaded,
-  # and also keys are imported before things get mounted
-  # note to self: move zfs encryption over to luks lol
-  boot.initrd.postResumeCommands = ''
-    ZFS_FORCE="-f"
-
-    for o in $(cat /proc/cmdline); do
-      case $o in
-        zfs_force|zfs_force=1|zfs_force=y)
-          ZFS_FORCE="-f"
-          ;;
-      esac
-    done
-    poolReady() {
-      pool="$1"
-      state="$("zpool" import -d "/dev/disk/by-id/" 2>/dev/null | "awk" "/pool: $pool/ { found = 1 }; /state:/ { if (found == 1) { print \$2; exit } }; END { if (found == 0) { print \"MISSING\" } }")"
-      if [[ "$state" = "ONLINE" ]]; then
-        return 0
-      else
-        echo "Pool $pool in state $state, waiting"
-        return 1
-      fi
-    }
-    poolImported() {
-      pool="$1"
-      "zpool" list "$pool" >/dev/null 2>/dev/null
-    }
-    poolImport() {
-      pool="$1"
-      "zpool" import -d "/dev/disk/by-id/" -N $ZFS_FORCE "$pool"
-    }
-
-    echo -n "importing root ZFS pool \"ZFS-primary\"..."
-    # Loop across the import until it succeeds, because the devices needed may not be discovered yet.
-    if ! poolImported "ZFS-primary"; then
-      for trial in `seq 1 60`; do
-        poolReady "ZFS-primary" > /dev/null && msg="$(poolImport "ZFS-primary" 2>&1)" && break
-        sleep 1
-        echo -n .
-      done
-      echo
-      if [[ -n "$msg" ]]; then
-        echo "$msg";
-      fi
-      poolImported "ZFS-primary" || poolImport "ZFS-primary" # Try one last time, e.g. to import a degraded pool.
-    fi
-
-    # let root mount and everything, then manually unlock stuff
-    load_zfs_nix() {
-      local device="/dev/disk/by-uuid/8bfaa32b-09dd-45c8-831e-05e80be82f9e"
-      local mountPoint="/"
-      local options="x-initrd.mount,noatime,nodiratime"
-      local fsType="ext4"
-
-      echo "manually mounting key location, then unmounting"
-      udevadm settle
-
-      mountFS "$device" "$(escapeFstab "$mountPoint")" "$(escapeFstab "$options")" "$fsType"
-
-      zfs load-key -L "file://$targetRoot/crypto/keys/zfs-nix-store-key" "ZFS-primary/nix"
-      umount "$targetRoot/"
-    }
-
-    load_zfs_nix
-  '';
 }
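The `pool_ready` helper in the new initrd service decides whether the pool is importable by scraping `zpool import` output with awk. That extraction can be exercised standalone; the sample output below is fabricated for illustration and only its shape matches real `zpool import` output:

```shell
#!/bin/sh
# Fabricated `zpool import` output, used only to exercise the awk
# state-extraction from pool_ready above.
sample='   pool: ZFS-primary
     id: 1234567890
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.'

pool="ZFS-primary"
state="$(printf '%s\n' "$sample" | awk '/pool: '"$pool"'/ { found = 1 }; /state:/ { if (found == 1) { print $2; exit } }; END { if (found == 0) { print "MISSING" } }')"
echo "$state"   # -> ONLINE
```

When the named pool never appears, `found` stays 0 and the `END` block prints `MISSING`, so the import loop keeps waiting instead of importing blindly.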
@@ -1,10 +1,4 @@
 { pkgs, ... }:
-# let
-# randWallpaper = pkgs.runCommand "stylix-wallpaper" { } ''
-# numWallpapers =
-# $((1 + $RANDOM % 10))
-
-# in
 {
   stylix = {
     enable = true;
@@ -194,21 +194,23 @@ in
|
|||||||
onSave = true;
|
onSave = true;
|
||||||
forwardSearchAfter = true;
|
forwardSearchAfter = true;
|
||||||
};
|
};
|
||||||
forwardSearch = {
|
#forwardSearch = {
|
||||||
executable = "zathura";
|
# #executable = "zathura";
|
||||||
args = [
|
# args = [
|
||||||
"--synctex-forward"
|
# "--synctex-forward"
|
||||||
"%l:1:%f"
|
# "%l:1:%f"
|
||||||
"-x"
|
# "-x"
|
||||||
"zed %%{input}:%%{line}"
|
# "zed %%{input}:%%{line}"
|
||||||
"%p"
|
# "%p"
|
||||||
];
|
# ];
|
||||||
};
|
#};
|
||||||
};
|
};
|
||||||
};
|
};
|
||||||
};
|
};
|
||||||
};
|
};
|
||||||
|
|
||||||
|
services.gnome-keyring.enable = true;
|
||||||
|
|
||||||
home.packages = with pkgs; [
|
home.packages = with pkgs; [
|
||||||
cmake
|
cmake
|
||||||
shellcheck
|
shellcheck
|
||||||
@@ -268,7 +270,6 @@ in
|
|||||||
bitwarden-cli
|
bitwarden-cli
|
||||||
bitwarden-menu
|
bitwarden-menu
|
||||||
wtype
|
wtype
|
||||||
zathura
|
|
||||||
obsidian
|
obsidian
|
||||||
libreoffice-qt-fresh
|
libreoffice-qt-fresh
|
||||||
wlr-randr
|
wlr-randr
|
||||||
@@ -282,5 +283,8 @@ in
|
|||||||
|
|
||||||
pdf4qt
|
pdf4qt
|
||||||
masterpdfeditor4
|
masterpdfeditor4
|
||||||
|
|
||||||
|
gitea-mcp-server
|
||||||
|
tea
|
||||||
];
|
];
|
||||||
}
|
}
|
||||||
|
|||||||
@@ -28,6 +28,8 @@
       "plugdev"
       "uaccess"
       "ydotool"
+      "video"
+      "render"
     ]
     ++ groups;
 }