1 Commit

Author SHA1 Message Date
github-actions[bot] f7d85cb4f7 automated: Update flake.lock
Check flake.lock / Check health of `flake.lock` (pull_request) Successful in 9s
Check Nix flake / Perform Nix flake checks (pull_request) Successful in 3m38s
Auto-generated by [update.yml][1] with the help of
[create-pull-request][2].

[1]: https://nayeonie.com/ahuston-0/nix-dotfiles/src/branch/main/.github/workflows/flake-update.yml
[2]: https://forgejo.stefka.eu/jiriks74/create-pull-request
2026-04-28 12:11:33 +00:00
31 changed files with 57 additions and 1039 deletions
@@ -1,32 +0,0 @@
---
description: "Use when working with SOPS secrets files (secrets.yaml). Never modify secrets.yaml files directly — always prompt the user to make changes using sops edit."
applyTo: "**"
---
# SOPS Secrets Files — Read-Only
Never modify any `secrets.yaml` file in this repository. These files are SOPS-encrypted and editing them directly (without `sops edit`) will corrupt the encryption and make the secrets unrecoverable.
## Rules
- **Do NOT edit `secrets.yaml` files** using file editing tools, even for renaming keys, restructuring blocks, or adding new entries.
- **Do NOT suggest patches or diffs** that target `secrets.yaml` files.
- **Always prompt the user** to make the change themselves using:
```bash
sops edit <path-to-secrets.yaml>
```
- When a new secret key is needed (e.g., for a new SOPS reference in Nix code), tell the user the exact key name and value to add, and ask them to add it via `sops edit`.
- You may **read** `secrets.yaml` files (e.g., with grep to check key names) — reading is safe. Only writing is forbidden.
## Example
Instead of editing `systems/palatine-hill/secrets.yaml` directly, say:
> Please run `sops edit systems/palatine-hill/secrets.yaml` and add the following under the `kanidm:` block:
>
> ```yaml
> kanidm:
> gitea_oidc_client_secret: "<your-generated-secret>"
> ```
-169
@@ -1,169 +0,0 @@
name: "Update claurst"

on:
  repository_dispatch:
  workflow_dispatch:
  schedule:
    - cron: "00 14 * * 1" # Every Monday at 14:00 UTC

concurrency:
  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
  cancel-in-progress: true

jobs:
  update_claurst:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
      - name: Install nix
        uses: https://github.com/DeterminateSystems/nix-installer-action@main
      - name: Setup Attic cache
        uses: ryanccn/attic-action@v0
        with:
          endpoint: ${{ secrets.ATTIC_ENDPOINT }}
          cache: ${{ secrets.ATTIC_CACHE }}
          token: ${{ secrets.ATTIC_TOKEN }}
          skip-push: "true"
      - name: Get current claurst version
        id: current
        run: |
          VERSION=$(grep 'version = ' pkgs/claurst/default.nix | head -1 | sed 's/.*version = "\(.*\)".*/\1/')
          echo "version=$VERSION" >> "$GITHUB_OUTPUT"
          echo "Current version: $VERSION"
      - name: Get latest claurst release
        id: latest
        uses: actions/github-script@v7
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
          script: |
            const release = await github.rest.repos.getLatestRelease({
              owner: 'Kuberwastaken',
              repo: 'claurst',
            });
            const tag = release.data.tag_name.replace(/^v/, '');
            core.setOutput('version', tag);
            core.info(`Latest release: ${tag}`);
      - name: Check if update needed
        id: check_update
        run: |
          CURRENT="${{ steps.current.outputs.version }}"
          LATEST="${{ steps.latest.outputs.version }}"
          if [ "$CURRENT" = "$LATEST" ]; then
            echo "No update needed (current: $CURRENT, latest: $LATEST)"
            echo "update_needed=false" >> "$GITHUB_OUTPUT"
          else
            echo "Update needed (current: $CURRENT, latest: $LATEST)"
            echo "update_needed=true" >> "$GITHUB_OUTPUT"
          fi
      - name: Update claurst if new version available
        if: steps.check_update.outputs.update_needed == 'true'
        id: update
        run: |
          NEW_VERSION="${{ steps.latest.outputs.version }}"
          # Backup original file
          cp pkgs/claurst/default.nix pkgs/claurst/default.nix.bak
          # Update the version string in place
          sed -i "s/version = \"[^\"]*\"/version = \"$NEW_VERSION\"/" pkgs/claurst/default.nix
          # Try to fetch the new src hash
          echo "Computing src hash for v$NEW_VERSION..."
          SRC_HASH=$(nix-prefetch-url --unpack "https://github.com/Kuberwastaken/claurst/archive/refs/tags/v$NEW_VERSION.tar.gz" 2>/dev/null | tail -1 || echo "")
          if [ -z "$SRC_HASH" ]; then
            echo "Failed to compute src hash, reverting"
            mv pkgs/claurst/default.nix.bak pkgs/claurst/default.nix
            exit 1
          fi
          SRC_HASH="sha256-$SRC_HASH"
          echo "New src hash: $SRC_HASH"
          # Update src hash
          sed -i "s|hash = \"sha256-[^\"]*\"|hash = \"$SRC_HASH\"|" pkgs/claurst/default.nix
          # Compute cargoHash - this requires building (the stale hash makes the
          # build fail with a "got: <hash>" mismatch message that we capture)
          echo "Computing cargo hash..."
          CARGO_HASH=$(nix build \
            --impure \
            --no-eval-cache \
            --expr "((import ./pkgs/default.nix { nixpkgs = import <nixpkgs> { }; }).mkPkgs \"x86_64-linux\").claurst" \
            2>&1 | grep -oP 'got:\s*\K[^"]+' | head -1 || echo "")
          if [ -z "$CARGO_HASH" ]; then
            echo "Failed to compute cargo hash, trying with attribute substitution..."
            CARGO_HASH=$(nix eval \
              --impure \
              --expr "
                let
                  pkgs = import <nixpkgs> { config.allowUnsupportedSystem = true; };
                  claurst = import ./pkgs/claurst { inherit pkgs; };
                in claurst.cargoHash
              " 2>&1 | tail -1)
          fi
          if [ -n "$CARGO_HASH" ]; then
            echo "New cargo hash: $CARGO_HASH"
            sed -i "s|cargoHash = \"[^\"]*\"|cargoHash = \"$CARGO_HASH\"|" pkgs/claurst/default.nix
          fi
          rm -f pkgs/claurst/default.nix.bak
          echo "version=$NEW_VERSION" >> "$GITHUB_OUTPUT"
      - name: Validate nix flake
        if: steps.check_update.outputs.update_needed == 'true'
        run: |
          echo "Running nix flake check..."
          nix flake check --show-trace || true
      - name: Build claurst to verify changes
        if: steps.check_update.outputs.update_needed == 'true'
        run: |
          echo "Building updated claurst package..."
          nix build ".#artemision.config.environment.systemPackages" --no-eval-cache 2>&1 | tail -20 || true
      - name: Generate PR body
        if: steps.check_update.outputs.update_needed == 'true'
        id: pr_body
        run: |
          cat > pr_body.md << 'EOF'
          # Claurst Update
          Automated claurst package update.
          **Changes:**
          - Version: `${{ steps.current.outputs.version }}` → `${{ steps.update.outputs.version }}`
          - Source hash updated
          - Cargo hash updated
          Auto-generated by [update-claurst.yml][1].
          [1]: https://nayeonie.com/ahuston-0/nix-dotfiles/src/branch/main/.github/workflows/update-claurst.yml
          EOF
          cat pr_body.md
      - name: Create Pull Request
        if: steps.check_update.outputs.update_needed == 'true'
        uses: https://nayeonie.com/ahuston-0/create-pull-request@main
        with:
          token: ${{ secrets.GH_TOKEN_FOR_UPDATES }}
          add-paths: pkgs/claurst/default.nix
          body-path: pr_body.md
          author: '"github-actions[bot]" <github-actions[bot]@users.noreply.github.com>'
          title: "automated: Update claurst to ${{ steps.update.outputs.version }}"
          commit-message: |
            automated: Update claurst to ${{ steps.update.outputs.version }}
            - Bumped version from ${{ steps.current.outputs.version }} to ${{ steps.update.outputs.version }}
            - Updated src and cargo hashes
            Auto-generated by [update-claurst.yml][1].
            [1]: https://nayeonie.com/ahuston-0/nix-dotfiles/src/branch/main/.github/workflows/update-claurst.yml
          branch: update-claurst
          delete-branch: true
          pr-labels: |
            dependencies
            automated
      - name: Print PR result
        if: steps.check_update.outputs.update_needed == 'true'
        run: |
          echo "Pull request created successfully"
          echo "Version updated: ${{ steps.current.outputs.version }} → ${{ steps.update.outputs.version }}"
    permissions:
      pull-requests: write
      contents: write
-10
@@ -9,10 +9,6 @@ keys:
- &artemision-home age1t29a6z6cfy8m3cnc8uva0ey833vhcppue8psyumts7mtyf0zufcqvfshuc
- &palatine-hill age1qw5k8h72k3fjg5gmlxx8q8gwlc2k6n6u08d8hdzpm2pk9r0fnfxsmw33nh
- &selinunte age1jd2dcpykagz20kpk2kkchte3augqncwfn6nywursx0dkfyze6feqdzxkq2
# argiletum: replace placeholder after first boot with:
# nix-shell -p ssh-to-age --run 'cat /etc/ssh/ssh_host_ed25519_key.pub | ssh-to-age'
# then run: sops updatekeys systems/argiletum/secrets.yaml
- &argiletum age1aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
# cspell:enable
# add new users by executing: sops users/<user>/secrets.yaml
# then have someone already in the repo run the below
@@ -59,9 +55,3 @@ creation_rules:
- *admin_alice
age:
- *palatine-hill
- path_regex: systems/argiletum/secrets.*\.yaml$
key_groups:
- pgp:
- *admin_alice
age:
- *argiletum
+4 -4
@@ -1,5 +1,5 @@
{
  "recommendations": [
    "davidanson.vscode-markdownlint"
  ]
}
+1 -13
@@ -1,13 +1 @@
{
  "servers": {
    "honeycomb": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "https://mcp.honeycomb.io/mcp"
      ],
      "type": "stdio"
    }
  },
  "inputs": []
}
{}
-105
@@ -1,105 +0,0 @@
> Note: This document was AI-generated and reviewed by a maintainer.
# AGENTS Guide for nix-dotfiles
This file is the quick-start map for coding agents working in this repository.
Use this first, then follow the linked source files for full detail.
## Purpose and Scope
- Repository type: flake-based NixOS + Home Manager dotfiles/infrastructure.
- Primary goals: safe system/user config edits, reproducible builds, and clean secrets handling.
- Default assumption: preserve existing module patterns and avoid broad refactors unless requested.
## Source of Truth
Read these files before substantial changes:
- `.github/copilot-instructions.md`: Full repository guide for structure, workflows, dynamic system generation, module patterns, and SOPS handling.
- `.github/instructions/ai-doc-attribution.instructions.md`: Markdown rule for top-of-document attribution when docs are fully AI-generated.
- `flake.nix`: Flake inputs/outputs entrypoint; system generation begins here.
- `lib/systems.nix`: Core dynamic config assembly (`genSystems`, `constructSystem`, and wrapper generators).
- `systems/<hostname>/default.nix`: Per-host parameters (users, home, sops, server role, extra modules).
- `systems/<hostname>/configuration.nix`: Main host config.
- `modules/*.nix`: Global modules automatically imported into all systems.
- `users/<username>/home.nix` and `users/<username>/default.nix`: Home Manager and user account configuration.
- `hydra/jobs.nix` and `hydra/jobsets.nix`: CI/build orchestration details.
## Repo Mental Model
- `systems/` contains host-specific configs.
- `modules/` contains global modules applied across hosts.
- `users/` contains user and home-manager configs.
- `lib/systems.nix` auto-discovers hosts and composes final configs.
- SOPS secrets are colocated with hosts/users via `secrets.yaml` files.
## Dynamic Configuration Rules
- Hosts are auto-discovered from subdirectories in `systems/`.
- Each host's `default.nix` feeds `constructSystem` parameters.
- Effective module merge order matters. High-level order is: 1) base external
modules, 2) host essentials (`hardware.nix`, `configuration.nix`), 3)
host-specific modules from `systems/<host>/default.nix`, 4) global
`modules/*.nix`, 5) optional SOPS and Home Manager/user layers.
- Global modules load after host config, so explicit overrides may require `lib.mkForce` depending on target option.
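The override mechanics above can be sketched as follows. This is an illustration only; the option and module names are hypothetical examples, not taken from this repository:

```nix
# Hypothetical example of the merge-order consequence described above.
# Suppose a global module in modules/ sets a soft default:
#   services.openssh.enable = lib.mkDefault true;   # priority 1000
# Because modules/*.nix load after host config, a host that needs the
# option off must out-prioritize the later assignment explicitly:
{ lib, ... }:
{
  services.openssh.enable = lib.mkForce false; # priority 50, wins
}
```

A plain assignment (priority 100) would also beat `lib.mkDefault`, but `lib.mkForce` is what guarantees the host value survives regardless of what a later module assigns.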
## Editing Conventions
- Keep changes minimal and scoped to the requested behavior.
- Preserve existing Nix style and option naming patterns.
- Prefer module options + `lib.mkIf` toggles over hard-coded behavior.
- Use `lib.mkDefault` for soft defaults and `lib.mkForce` only when necessary.
- Do not commit plaintext secrets.
- Update docs when behavior/workflow changes.
## Validation and Workflow
Typical local sequence:
1. Make targeted edits.
2. Evaluate and build with `nix flake check` and `nix build .#<hostname>`.
3. Optionally deploy/apply with `nh os switch` or `nh home switch`.
4. For secrets-related changes, edit with `sops .../secrets.yaml` and validate expected `config.sops.secrets` evaluation paths.
## Secrets and Safety
- Secrets live in `systems/<hostname>/secrets.yaml` and `users/<username>/secrets.yaml`.
- Use SOPS for create/edit/rekey operations.
- During merge conflicts in encrypted files, prefer repository SOPS merge tooling (`utils/sops-mergetool.sh`, `utils/sops-mergetool-new.sh`).
## Agent and Tool Routing
When a specialized agent is available, route work by intent:
- `Explore`: Fast read-only repository exploration and Q&A.
- `dependency-auditor`: Flake/module dependency security and CVE-oriented audits.
- `security-researcher`: Read-only server security configuration audits.
- `server-architect`: Server integration/review planning for `palatine-hill` style infra changes.
Use Nix lookup tooling for package/options discovery; prefer `unstable` channel when channel selection is available.
## Where To Look Next (By Task)
- Add a new host: see `.github/copilot-instructions.md` sections on "Adding a New NixOS System", plus `systems/<new-host>/default.nix`, `hardware.nix`, and `configuration.nix`.
- Add/modify a global capability: see `modules/*.nix` and the `.github/copilot-instructions.md` section "Adding a Global Module to modules/".
- Change user/home-manager behavior: see `users/<username>/home.nix` and `users/<username>/default.nix`.
- Modify build/release automation: see `hydra/jobs.nix` and `hydra/jobsets.nix`.
- Work with secrets: see `.sops.yaml`, `systems/*/secrets.yaml`, `users/*/secrets.yaml`, and the `.github/copilot-instructions.md` section "Secrets Management".
- Validate module composition/debug evaluation: see `lib/systems.nix` and `nix eval .#nixosConfigurations.<host>...`.
## Documentation Attribution Rule
For Markdown docs (`**/*.md`):
- If a document is fully AI-generated, include explicit attribution near the top.
- Accepted label includes "AI-generated documentation" wording.
- Do not imply fully human authorship for fully AI-authored content.
## Quick Command Reference
- `nh os build`
- `nh os switch`
- `nh home switch`
- `nix build .#<hostname>`
- `nix flake check`
- `nix eval .#nixosConfigurations.<hostname>.config.<path>`
-206
@@ -1,206 +0,0 @@
> Note: This document was AI-generated and reviewed by a maintainer.
# ADR 0001 — ZFS Native Encryption: Non-Interactive initrd Key Loading
| | |
|---|---|
| **Status** | Accepted |
| **Date** | 2026-05-03 |
| **Deciders** | Alice Huston |
| **Affects** | `systems/palatine-hill/hardware-changes.nix`, `systems/palatine-hill/zfs.nix` |
---
## Context
`palatine-hill` uses ZFS native encryption for the `/nix` dataset (`ZFS-primary/nix`). The ZFS encryption key was stored on a separate LVM volume (`/crypto/keys/zfs-nix-store-key`) inside the same LUKS container as root.
This created a forced ordering dependency: the `/nix` dataset could not be unlocked until root (`/`) and `/crypto` were both mounted, even though logically they are independent. Two custom initrd units worked around this:
- `zfs-import-zfs-primary` — polling import loop (duplicates NixOS-native logic)
- `zfs-load-nix-key` — reads key from `/sysroot/crypto/keys/zfs-nix-store-key` after `sysroot.mount`
Additionally, `boot.zfs.requestEncryptionCredentials` was forced off entirely, and a `postBootCommands` fallback ran
`zfs load-key -a` after stage 2 as a belt-and-suspenders measure. LUKS unlock was also interactive, requiring manual
passphrase entry at boot.
### Current initrd dependency graph (before this ADR)
```mermaid
flowchart TD
A([initrd start]) --> B[systemd-udev-settle]
A --> C["LUKS unlock nixos-pv\n⚠ interactive"]
C --> D[LVM activate]
D --> E["sysroot.mount\n/ on ext4"]
D --> F["sysroot-crypto.mount\n/crypto on LVM volume"]
B --> G["zfs-import-zfs-primary\n(custom polling loop, 60s timeout)"]
E --> H["zfs-load-nix-key\n(reads /sysroot/crypto/keys/zfs-nix-store-key)"]
F --> H
G --> H
H --> I["sysroot-nix.mount\nZFS-primary/nix"]
I --> J([initrd-fs.target])
E --> J
J --> K([stage 2])
K --> L["postBootCommands:\nzfs load-key -a"]
```
### Problems with the old approach
1. **Cross-filesystem key dependency**: `/nix` unlock depends on root mount, coupling two logically independent operations.
2. **Duplicated pool import logic**: the custom unit reimplements a polling loop that NixOS already generates natively; upstream fixes don't apply automatically.
3. **Native credential handling fully disabled**: `requestEncryptionCredentials = false` makes the configuration opaque to NixOS module evaluation.
4. **Double key load**: `postBootCommands` is a workaround indicating the initrd path is not reliable.
5. **Interactive LUKS unlock**: manual passphrase entry required at every boot — defeats unattended operation.
---
## Options Considered
### Option A — Key embedded in initrd (`boot.initrd.secrets`)
Store the ZFS key directly inside the initrd cpio archive. The key is available from the very start of stage 1 without mounting anything.
**Pro**: Eliminates the cross-mount dependency; re-enables native NixOS ZFS handling; zero new infrastructure.
**Con**: Key lives in the initrd on `/boot`, which is an unencrypted vfat partition. Anyone with physical or boot-partition read access has the key. Does not solve interactive LUKS unlock.
### Option B — Tang network key fetch (Clevis) ✅ Chosen
Encrypt both secrets (LUKS passphrase and ZFS key) as Clevis JWE blobs. At boot, the initrd reaches a Tang server
on the LAN to decrypt them. NixOS's `boot.initrd.clevis` module natively supports `luks`, `zfs`, and `bcachefs`;
**no custom unit is needed for ZFS**.
**Pro**: Key never present on disk in plaintext; unified unlock surface for both LUKS and ZFS; no cross-mount dependency; JWE blobs on disk are useless without the Tang server.
**Con**: Adds Tang server as a boot dependency; server won't boot if Tang is unreachable.
---
## Decision
**Option B (Tang/Clevis) is adopted** for both the LUKS root device and the ZFS `/nix` dataset.
`boot.initrd.clevis.devices` handles both unlock targets natively. The custom `zfs-load-nix-key` unit is deleted
entirely. The `zfs-import-zfs-primary` unit is retained — the pool must still be imported before Clevis can load the
dataset key.
Static networking is configured in the initrd using systemd-networkd with a static IP (`192.168.76.2/24`). DNS
resolution (`192.168.76.1`, the OPNsense router running Unbound) allows the Tang URL to be `http://tang.lan`.
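A minimal sketch of the resulting configuration, using the NixOS `boot.initrd.clevis` and initrd systemd-networkd options. The JWE file paths and the network unit name here are illustrative, not copied from the actual `hardware-changes.nix`:

```nix
{
  boot.initrd.clevis = {
    enable = true;
    # JWE blobs committed to the repo; decrypted via the Tang server at boot
    devices."nixos-pv".secretFile = ./nixos-pv.jwe;
    devices."ZFS-primary/nix".secretFile = ./nix-store.jwe;
  };

  # Static networking in stage 1 so the initrd can reach tang.lan
  boot.initrd.systemd.network = {
    enable = true;
    networks."10-lan" = {
      matchConfig.Name = "eno1";
      address = [ "192.168.76.2/24" ];
      dns = [ "192.168.76.1" ];
    };
  };
}
```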
### New initrd dependency graph
```mermaid
flowchart TD
A([initrd start]) --> N["initrd-networkd\neno1: 192.168.76.2/24\nDNS: 192.168.76.1"]
A --> B[systemd-udev-settle]
N --> T["Tang server\ntang.lan"]
T -->|"boot.initrd.clevis\n.devices.nixos-pv"| C["LUKS unlock nixos-pv\n(Clevis/Tang — unattended)"]
T -->|"boot.initrd.clevis\n.devices.ZFS-primary/nix"| Z["ZFS-primary/nix key load\n(Clevis/Tang — unattended)"]
C --> D[LVM activate]
D --> E["sysroot.mount\n/ on ext4"]
B --> G["zfs-import-zfs-primary\n(custom polling loop — retained)"]
G --> Z
Z --> I["sysroot-nix.mount\nZFS-primary/nix"]
E --> J([initrd-fs.target])
I --> J
J --> L([stage 2 — fully unattended])
```
### Files changed
| File | Change |
|---|---|
| `systems/palatine-hill/hardware-changes.nix` | Removed `requestEncryptionCredentials = mkForce false`, removed `postBootCommands`, added `boot.initrd.clevis` block for both devices, added `boot.initrd.systemd.network` with static IP + DNS, removed `/crypto` from `/nix` depends |
| `systems/palatine-hill/zfs.nix` | Removed `zfs-load-nix-key` unit, added `boot.zfs.requestEncryptionCredentials = false` |
### Comparison
| | Before | After |
|---|---|---|
| Custom initrd units | 2 (import + key load) | 1 (import only; key load is native Clevis) |
| Key source | `/crypto` LVM volume (disk) | Tang server (network) |
| Disk-based key exposure | Key on LVM volume inside LUKS | `.jwe` blob only; useless without Tang |
| Cross-mount dependency | Yes | No |
| LUKS interactive unlock | Yes | No (Clevis/Tang) |
| Unattended boot | No | Yes (when Tang reachable) |
---
## Consequences
- Boot requires Tang server to be reachable on `tang.lan`. If Tang is down, boot stalls at the Clevis timeout. Maintain Tang server uptime accordingly.
- The `.jwe` files are safe to commit to the repository — they are encrypted blobs that are useless without the Tang server's private key.
- Rolling back to a generation without Clevis (pre-ADR) requires manual LUKS passphrase entry at the console; ensure prior generations remain in the bootloader during initial cutover.
---
## Implementation Notes
### Prerequisites
1. Deploy a Tang server on the LAN and create a DNS host override in OPNsense:
- Services → Unbound DNS → Host Overrides → `tang` / `lan` / `<tang IP>`
2. Verify DNS from palatine-hill before rebooting:
```bash
resolvectl query tang.lan
```
### Create the JWE files
Run from the repository root on a machine that has the LUKS passphrase and access to the running `/crypto` volume:
```bash
# LUKS passphrase JWE — substitute your actual passphrase
echo -n "your-luks-passphrase" | \
clevis encrypt tang '{"url":"http://tang.lan"}' \
> systems/palatine-hill/nixos-pv.jwe
# ZFS dataset key JWE — key file from the running system
clevis encrypt tang '{"url":"http://tang.lan"}' \
< /crypto/keys/zfs-nix-store-key \
> systems/palatine-hill/nix-store.jwe
```
### Commit and build
```bash
git add systems/palatine-hill/nixos-pv.jwe systems/palatine-hill/nix-store.jwe
git commit -m "feat(palatine-hill): add Clevis JWE files for Tang-based boot unlock"
nix build .#palatine-hill # verify build succeeds
```
### Deploy
```bash
nh os switch # keep previous generation in bootloader for rollback
```
### Verify after reboot
```bash
# Confirm ZFS dataset was unlocked automatically
zfs get keystatus ZFS-primary/nix
# Expected: keystatus = available
# Check Clevis log output
journalctl -b | grep -i clevis
# Confirm Tang was reached during initrd
journalctl -b | grep -i tang
```
### Rollback procedure (if needed)
Select the previous generation from the systemd-boot menu at boot. You will be prompted interactively for the LUKS passphrase — this is expected for the old generation.
-21
@@ -68,26 +68,6 @@
        "type": "github"
      }
    },
    "disko": {
      "inputs": {
        "nixpkgs": [
          "nixpkgs"
        ]
      },
      "locked": {
        "lastModified": 1777713215,
        "narHash": "sha256-8GzXDOXckDWwST8TY5DbwYFjdvQLlP7K9CLSVx6iTTo=",
        "owner": "nix-community",
        "repo": "disko",
        "rev": "63b4e7e6cf75307c1d26ac3762b886b5b0247267",
        "type": "github"
      },
      "original": {
        "owner": "nix-community",
        "repo": "disko",
        "type": "github"
      }
    },
    "firefox-addons": {
      "inputs": {
        "nixpkgs": [
@@ -585,7 +565,6 @@
    },
    "root": {
      "inputs": {
        "disko": "disko",
        "firefox-addons": "firefox-addons",
        "flake-compat": "flake-compat",
        "flake-parts": "flake-parts",
+4 -11
@@ -38,11 +38,6 @@
    systems.url = "github:nix-systems/default";
    # flake inputs with dependencies (in alphabetic order)
    disko = {
      url = "github:nix-community/disko";
      inputs.nixpkgs.follows = "nixpkgs";
    };
    firefox-addons = {
      url = "gitlab:rycee/nur-expressions?dir=pkgs/firefox-addons";
      inputs = {
@@ -169,23 +164,19 @@
        lib = self;
      }
    );
    packageSetup = import ./pkgs/default.nix { inherit nixpkgs; };
    inherit (packageSetup) localPackagesOverlay;
    inherit (lib.adev.systems) genSystems getImages;
    inherit (self) outputs; # for hydra
  in
  rec {
    inherit lib; # for allowing use of custom functions in nix repl
    overlays.default = localPackagesOverlay;
    hydraJobs = import ./hydra/jobs.nix { inherit inputs outputs systems; };
    formatter = forEachSystem (system: nixpkgs.legacyPackages.${system}.nixfmt);
    nixosConfigurations = genSystems inputs outputs src (src + "/systems");
    homeConfigurations = {
      "alice" = inputs.home-manager.lib.homeManagerConfiguration {
        pkgs = packageSetup.mkPkgs "x86_64-linux";
        pkgs = import nixpkgs { system = "x86_64-linux"; };
        modules = [
          inputs.stylix.homeModules.stylix
          inputs.sops-nix.homeManagerModules.sops
@@ -212,7 +203,9 @@
      qcow = getImages nixosConfigurations "qcow";
    };
    packages = forEachSystem packageSetup.mkPackages;
    packages.x86_64-linux.lego-latest =
      nixpkgs.legacyPackages.x86_64-linux.callPackage ./pkgs/lego-latest/default.nix
        { };
    checks = import ./checks.nix { inherit inputs forEachSystem formatter; };
    devShells = import ./shell.nix { inherit inputs forEachSystem checks; };
-2
@@ -167,13 +167,11 @@ rec {
        outputs
        server
        system
        home
        ;
    };
    modules = [
      inputs.nixos-modules.nixosModule
      inputs.nix-index-database.nixosModules.nix-index
      { nixpkgs.overlays = [ outputs.overlays.default ]; }
      (genHostName hostname)
      (configPath + "/hardware.nix")
      (configPath + "/configuration.nix")
+1 -4
@@ -3,7 +3,6 @@
  inputs,
  outputs,
  server,
  home,
  system,
  ...
}:
@@ -23,9 +22,6 @@
    mutableUsers = lib.mkDefault false;
  };
  networking.firewall.enable = lib.mkDefault true;
}
// lib.optionalAttrs home {
  home-manager = {
    useGlobalPkgs = true;
    useUserPackages = true;
@@ -38,4 +34,5 @@
    };
  };
  networking.firewall.enable = lib.mkDefault true;
}
-2
@@ -19,7 +19,6 @@
  libnotify,
}:
let
  maintainers = import ../maintainers.nix;
  bins = [
    jq
    bitwarden-cli
@@ -65,7 +64,6 @@ stdenv.mkDerivation {
    description = "Wrapper for Bitwarden and Rofi";
    homepage = "https://github.com/mattydebie/bitwarden-rofi";
    license = licenses.gpl3;
    maintainers = [ maintainers.alice ];
    platforms = platforms.linux;
  };
-52
@@ -1,52 +0,0 @@
{
  lib,
  fetchFromGitHub,
  rustPlatform,
  pkg-config,
  openssl,
  alsa-lib,
  dbus,
  libxkbcommon,
  libxcb,
}:
let
  maintainers = import ../maintainers.nix;
in
rustPlatform.buildRustPackage rec {
  pname = "claurst";
  version = "0.0.9";

  src = fetchFromGitHub {
    owner = "Kuberwastaken";
    repo = "claurst";
    rev = "v${version}";
    hash = "sha256-bTQHtZGZxhEAki0JxSC8smAC3w+otm8ubHvZ9MvwDaE=";
  };

  cargoRoot = "src-rust";
  cargoHash = "sha256-6+B43spqmUZ983YMl5UBH5647DcUOS2ngw5ChMIPFFo=";
  buildAndTestSubdir = "src-rust";
  doCheck = false;

  nativeBuildInputs = [
    pkg-config
  ];

  buildInputs = [
    openssl
    alsa-lib
    dbus
    libxkbcommon
    libxcb
  ];

  meta = with lib; {
    description = "Terminal coding agent written in Rust";
    homepage = "https://github.com/Kuberwastaken/claurst";
    license = licenses.gpl3Only;
    mainProgram = "claurst";
    maintainers = [ maintainers.alice ];
    platforms = platforms.linux;
  };
}
-33
@@ -1,33 +0,0 @@
{ nixpkgs }:
let
  localPackagesOverlay = final: _prev: {
    lego-latest = final.callPackage ./lego-latest/default.nix { };
    claurst = final.callPackage ./claurst/default.nix { };
  };

  mkPkgs =
    system:
    import nixpkgs {
      inherit system;
      overlays = [ localPackagesOverlay ];
    };

  mkPackages =
    system:
    let
      pkgs = mkPkgs system;
    in
    {
      inherit (pkgs)
        lego-latest
        claurst
        ;
    };
in
{
  inherit
    localPackagesOverlay
    mkPkgs
    mkPackages
    ;
}
-8
@@ -1,8 +0,0 @@
{
  alice = {
    name = "Alice Huston";
    email = "aliceghuston@gmail.com";
    github = "ahuston-0";
    githubId = 43225907;
  };
}
-37
@@ -1,37 +0,0 @@
{ lib, ... }:
{
  imports = [ ./disk.nix ];

  time.timeZone = "America/New_York";

  networking = {
    hostId = "c3798ccc";
    firewall = {
      enable = true;
      allowedTCPPorts = [ 80 ];
    };
    useNetworkd = true;
  };

  # Raspberry Pi 4 uses U-Boot / extlinux — disable both GRUB and systemd-boot
  # TPM 2.0 HAT: systemd initrd required for tpm2-device auto-unlock
  # After first install, enroll with:
  #   systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=0+7 --recovery-key /dev/mmcblk0p3
  boot = {
    useSystemdBoot = lib.mkForce false;
    loader.grub.enable = lib.mkOverride 0 false;
    initrd = {
      systemd.enable = true;
      luks.devices."cryptroot".crypttabExtraOpts = [ "tpm2-device=auto" ];
    };
  };

  sops = {
    defaultSopsFile = ./secrets.yaml;
    age.sshKeyPaths = [ "/etc/ssh/ssh_host_ed25519_key" ];
  };

  services.tang.enable = true;

  system.stateVersion = "26.11";
}
-12
@@ -1,12 +0,0 @@
{ inputs, ... }:
{
  system = "aarch64-linux";
  server = true;
  home = false;
  sops = true;
  users = [ "alice" ];
  modules = [
    inputs.nixos-hardware.nixosModules.raspberry-pi-4
    inputs.disko.nixosModules.disko
  ];
}
-56
@@ -1,56 +0,0 @@
{
  disko.devices = {
    disk = {
      # SD card — change device to /dev/sda if booting from USB instead
      main = {
        type = "disk";
        device = "/dev/mmcblk0";
        content = {
          type = "gpt";
          partitions = {
            # Raspberry Pi firmware partition — must be vfat and first
            firmware = {
              size = "256MiB";
              type = "EF00";
              priority = 1;
              content = {
                type = "filesystem";
                format = "vfat";
                mountpoint = "/boot/firmware";
                mountOptions = [
                  "fmask=0077"
                  "dmask=0077"
                ];
              };
            };
            # NixOS boot partition — holds kernels/initrds for each generation
            boot = {
              size = "1GiB";
              priority = 2;
              content = {
                type = "filesystem";
                format = "ext4";
                mountpoint = "/boot";
              };
            };
            # Root filesystem — LUKS-encrypted, unlocked via TPM 2.0 HAT
            root = {
              size = "100%";
              priority = 3;
              content = {
                type = "luks";
                name = "cryptroot";
                settings.allowDiscards = true;
                content = {
                  type = "filesystem";
                  format = "ext4";
                  mountpoint = "/";
                };
              };
            };
          };
        };
      };
    };
  };
}
-7
@@ -1,7 +0,0 @@
# TODO: after first boot, regenerate with:
#   sudo nixos-generate-config --no-filesystems
# (disko owns fileSystems; do not add them here)
{ ... }:
{
  swapDevices = [ ];
}
-3
@@ -40,9 +40,6 @@
    dbus = {
      enable = true;
      implementation = "broker";
      packages = with pkgs; [
        gcr
      ];
    };
  };
-2
@@ -9,7 +9,6 @@
    ./acme.nix
    ./attic
    ./docker
    ./garage.nix
    ./gitea.nix
    ./firewall.nix
    ./haproxy
@@ -19,7 +18,6 @@
    ./minio.nix
    ./networking.nix
    ./nextcloud.nix
    ./otel.nix
    #./plex
    ./postgresql.nix
    ./samba.nix
-6
@@ -50,7 +50,6 @@ frontend ContentSwitching
    acl host_minio hdr(host) -i minio.alicehuston.xyz
    acl host_minio_console hdr(host) -i minio-console.alicehuston.xyz
    acl host_attic hdr(host) -i attic.nayeonie.com
    acl host_s3 hdr(host) -i s3.nayeonie.com
    acl host_minio hdr(host) -i minio.nayeonie.com
    acl host_minio_console hdr(host) -i minio-console.nayeonie.com
    #acl host_nextcloud_vol hdr(host) -i nextcloud-vol.alicehuston.xyz
@@ -68,7 +67,6 @@ frontend ContentSwitching
    use_backend nextcloud_nodes if host_nextcloud
    use_backend hydra_nodes if host_hydra
    use_backend attic_nodes if host_attic
    use_backend garage_nodes if host_s3
    #use_backend nextcloud_vol_nodes if host_nextcloud_vol
    # use_backend collabora_nodes if host_collabora
    use_backend prometheus_nodes if host_prometheus
@@ -144,10 +142,6 @@
    mode http
    server server 192.168.76.2:8501

backend garage_nodes
    mode http
    server server 192.168.76.2:8502

# backend foundry_nodes
#     timeout tunnel 50s
#     mode http
-48
@@ -1,48 +0,0 @@
{
  config,
  pkgs,
  ...
}:
let
  vars = import ./vars.nix;
  basePath = "${vars.primary_minio}/garage";
in
{
  services.garage = {
    enable = true;
    package = pkgs.garage;
    logLevel = "info";
    settings = {
      metadata_dir = "${basePath}/meta";
      data_dir = "${basePath}/data";
      db_engine = "sqlite";
      replication_factor = 1;
      rpc_bind_addr = "127.0.0.1:8504";
      rpc_public_addr = "127.0.0.1:8504";
      rpc_secret_file = config.sops.secrets."garage/rpc-secret".path;
      s3_api = {
        api_bind_addr = "127.0.0.1:8502";
        s3_region = "us-east-1";
        root_domain = ".s3.nayeonie.com";
      };
      admin = {
        api_bind_addr = "127.0.0.1:8503";
        admin_token_file = config.sops.secrets."garage/admin-token".path;
      };
    };
  };
  systemd.tmpfiles.rules = [
    "d ${basePath}/meta 0750 garage garage -"
    "d ${basePath}/data 0750 garage garage -"
  ];
  sops.secrets = {
    "garage/rpc-secret" = { };
    "garage/admin-token" = { };
  };
}
+9 -24
@@ -1,7 +1,11 @@
-{ lib, ... }:
+{ lib, pkgs, ... }:
 {
   boot = {
+    zfs.requestEncryptionCredentials = lib.mkForce false;
+    postBootCommands = ''
+      ${pkgs.zfs}/bin/zfs load-key -a
+    '';
     initrd = {
       services.lvm.enable = true;
       luks.devices = {
@@ -12,28 +16,6 @@
         };
       };
-      clevis = {
-        enable = true;
-        useTang = true;
-        devices = {
-          # Unlock LUKS root device via Tang
-          "nixos-pv".secretFile = ./nixos-pv.jwe;
-          # Unlock ZFS native-encrypted dataset via Tang
-          "ZFS-primary/nix".secretFile = ./nix-store.jwe;
-        };
-      };
-      # Static networking needed in initrd so Tang is reachable before any disk mounts
-      systemd.network = {
-        enable = true;
-        networks."10-initrd-eno1" = {
-          matchConfig.Name = "eno1";
-          address = [ "192.168.76.2/24" ];
-          routes = [ { Gateway = "192.168.76.1"; } ];
-          dns = [ "192.168.76.1" ];
-          linkConfig.RequiredForOnline = "routable";
-        };
-      };
     };
   };
@@ -55,7 +37,10 @@
       "dmask=0077"
     ];
-    "/nix".depends = [ "/" ];
+    "/nix".depends = [
+      "/"
+      "/crypto"
+    ];
   };
 }
+4 -31
@@ -6,8 +6,6 @@
 let
   hydra_notify_prometheus_port = "9199";
   hydra_queue_runner_prometheus_port = "9200";
-  postgres_exporter_port = 9187;
-  zfs_exporter_port = 9134;
 in
 {
   systemd.services.hydra-notify.serviceConfig.EnvironmentFile =
@@ -98,23 +96,10 @@ in
       enable = true;
       webExternalUrl = "https://prom.alicehuston.xyz";
       port = 9001;
-      exporters = {
-        node = {
-          enable = true;
-          enabledCollectors = [ "systemd" ];
-          port = 9002;
-        };
-        postgres = {
-          enable = true;
-          listenAddress = "127.0.0.1";
-          port = postgres_exporter_port;
-          runAsLocalSuperUser = true;
-        };
-        zfs = {
-          enable = true;
-          listenAddress = "127.0.0.1";
-          port = zfs_exporter_port;
-        };
-      };
+      exporters.node = {
+        enable = true;
+        enabledCollectors = [ "systemd" ];
+        port = 9002;
+      };
       scrapeConfigs = [
         {
@@ -134,18 +119,6 @@ in
             }
           ];
         }
-        {
-          job_name = "postgres-local";
-          static_configs = [
-            { targets = [ "127.0.0.1:${toString config.services.prometheus.exporters.postgres.port}" ]; }
-          ];
-        }
-        {
-          job_name = "zfs-local";
-          static_configs = [
-            { targets = [ "127.0.0.1:${toString config.services.prometheus.exporters.zfs.port}" ]; }
-          ];
-        }
         {
           job_name = "hydra-external";
           scheme = "https";
-117
@@ -1,117 +0,0 @@
{
  config,
  ...
}:
{
  # node_exporter (port 9002) and Prometheus (port 9001) are already configured
  # in hydra.nix — we just scrape the existing exporter here.
  services.opentelemetry-collector = {
    enable = true;
    settings = {
      receivers = {
        # Host-level system metrics
        hostmetrics = {
          collection_interval = "60s";
          scrapers = {
            cpu = { };
            memory = { };
            disk = { };
            filesystem = { };
            network = { };
            load = { };
            processes = { };
          };
        };
        # Scrape node_exporter for per-systemd-unit service state
        prometheus = {
          config = {
            scrape_configs = [
              {
                job_name = "node-exporter";
                scrape_interval = "60s";
                static_configs = [
                  {
                    targets = [ "localhost:${toString config.services.prometheus.exporters.node.port}" ];
                    labels = {
                      host = "palatine-hill";
                    };
                  }
                ];
              }
              {
                job_name = "postgres-exporter";
                scrape_interval = "60s";
                static_configs = [
                  {
                    targets = [ "localhost:${toString config.services.prometheus.exporters.postgres.port}" ];
                    labels = {
                      host = "palatine-hill";
                    };
                  }
                ];
              }
              {
                job_name = "zfs-exporter";
                scrape_interval = "60s";
                static_configs = [
                  {
                    targets = [ "localhost:${toString config.services.prometheus.exporters.zfs.port}" ];
                    labels = {
                      host = "palatine-hill";
                    };
                  }
                ];
              }
            ];
          };
        };
      };
      processors = {
        batch = { };
        # Attach hostname and other system resource attributes
        "resourcedetection/system" = {
          detectors = [ "system" ];
          system.hostname_sources = [ "os" ];
        };
      };
      exporters = {
        "otlp/honeycomb" = {
          endpoint = "api.honeycomb.io:443";
          headers = {
            # Expanded at runtime from the environment file
            "x-honeycomb-team" = "\${HONEYCOMB_API_KEY}";
          };
        };
      };
      service = {
        pipelines = {
          metrics = {
            receivers = [
              "hostmetrics"
              "prometheus"
            ];
            processors = [
              "resourcedetection/system"
              "batch"
            ];
            exporters = [ "otlp/honeycomb" ];
          };
        };
      };
    };
  };
  # Inject the Honeycomb API key at runtime — never stored in the Nix store
  systemd.services.opentelemetry-collector.serviceConfig.EnvironmentFile =
    config.sops.secrets."honeycomb/api-key".path;
  sops.secrets = {
    "honeycomb/api-key".owner = "root";
  };
}
+3 -8
@@ -42,11 +42,6 @@ server-validation:
   webhook: ENC[AES256_GCM,data:Lwqy4UhyFutpXjai7EJPKp8MDlI+ayDna4T8jluvC6qkeJ7o1UaaDCOsgLy4Fw7LC77tXhJtkcmep9w37JaiHp2CoDOfy2iAaq8o9CCSi/a0zqMJx+HdZYZNemvmpc6E/be0K+JDrFZLbjr3unSpCidQ3whccC6XyY013R12swN3bFZIu1gtzXCgUZ4U,iv:pVbrRwH3ziu4+R5BfimPV7N71QmyerJEc9M5K4eofOc=,tag:zNrCXrIioQWPEPVz/wMDpQ==,type:str]
 typhon:
   hashedPassword: ENC[AES256_GCM,data:gMyY8gxUn3HzycQRu2cminqRFWghqWcjzZzTxAQZ5PJqn604iSwDiVdr7icHB7drJfCAfsE7L4oKRJgxaIAE32043oOkb2T7DDH8y2jxMzqmZCfbvrfMI4wdfRTHGqzxb6X/aZ5ai2rr1Q==,iv:4EsTo/lQld0o9iktDX9gobMlPUCitx1i9wn8EL16sIs=,tag:FgVDRHk2glDwpC/mprrPqQ==,type:str]
-garage:
-  rpc-secret: ENC[AES256_GCM,data:Q2ZaAXcntD3yK6DynEpxab2TITByMZ7ECVrq1pb0ZU7hXOZnhaBmjdty/Os6len8l+GBl6+WaC0An6cFkhQTlQ==,iv:E8C4bnxMLXK9fky+KC7q8sHpmrEU5un0TEAwxVUBiLk=,tag:PiSiU+9NpyilH2aMs2Qc/Q==,type:str]
-  admin-token: ENC[AES256_GCM,data:Xjm8Xq99aDseR0jN50Uj3gLpeDaq2IGXzJCS0o1H0RgKX9LGdP8w508nWWE=,iv:+L9T3TEUSbIz+jo08ykjGHVhuz5ecmzrlhzD2iv48HE=,tag:7P2rY4F8cWFdG4Lm9n/etQ==,type:str]
-honeycomb:
-  api-key: ENC[AES256_GCM,data:k+Z3tmF8pYwD6RokdZauQ/fMlhD5GbW1ekxzRnj0gEpQFlfGB8gQ6BPwbd7qk12ZhsCA4XdqmvsysAWJldYWrPo=,iv:dNAK/vbQYL7ir4UXhZMTWraZF1E6ps9EOF3skYe5wOk=,tag:5UJfCji/RmB5DIuR4179uw==,type:str]
 sops:
   age:
     - recipient: age1qw5k8h72k3fjg5gmlxx8q8gwlc2k6n6u08d8hdzpm2pk9r0fnfxsmw33nh
@@ -58,8 +53,8 @@ sops:
         cXNZWmZqd0R0SmhINExscHBKWmxvblUKEFEQvt/zQFARba4S8vHz/1SoKdKg69At
         LZ58XQGOmlGbBhPr7EzYQ2XSY4flWbnnD174cmCR8DNFm15DsNA5fw==
         -----END AGE ENCRYPTED FILE-----
-  lastmodified: "2026-05-03T16:28:04Z"
-  mac: ENC[AES256_GCM,data:dJ00o+Ny6btbOn7Pt5dc4iLx6FHJiTTx7onZDhjxya7Ywg2qAwHTiNP2q2aP6348w5uenlEGrgnV0Dc8xyHfApNCkJwj4G6UnI17jxEGn3lc0ZFNzJJ7jO2CJcwHir0E2G4XdPjmbZUdB3aKmM34dI9EEUWWhNFXdps4X0dNQcM=,iv:vOU3kHq0axRBrkNfVicQ/8H77nF0DGIJlpoDuJmwRGQ=,tag:uvoFkmaaTM8zpr1g2FpCMA==,type:str]
+  lastmodified: "2026-01-17T01:50:50Z"
+  mac: ENC[AES256_GCM,data:8TGSqwEcfmrW1PjuzTVNyDTNs6s3oWbT0tI+rg7u2w5Dcw1EEU+SjJ6VpNY06AZHTjSD6E0O7NzUxybtMpslHUGitOGWwQCk+sbqRJuUseFe7bWFboEVoJpEoYGN5pnn52opMT+NeHGkXumaxjhDjCxfwn1RBHR7TgD4ZHEH6pE=,iv:szBUnn3HL/osWhmTwYmHrUghobWdBR60Lc6uUD/eGMY=,tag:6vgdJeJjL4ZYKc8WjixClg==,type:str]
   pgp:
     - created_at: "2024-11-28T18:56:39Z"
       enc: |-
@@ -74,4 +69,4 @@ sops:
         -----END PGP MESSAGE-----
       fp: 5EFFB75F7C9B74EAA5C4637547940175096C1330
   unencrypted_suffix: _unencrypted
-  version: 3.12.2
+  version: 3.11.0
+27 -1
@@ -7,7 +7,6 @@
 {
   boot = {
     zfs.extraPools = [ "ZFS-primary" ];
-    zfs.requestEncryptionCredentials = false;
     filesystem = "zfs";
     extraModprobeConfig = ''
       options zfs zfs_arc_min=82463372083
@@ -86,6 +85,33 @@
         fi
       '';
     };
+    zfs-load-nix-key = {
+      description = "Load ZFS key for ZFS-primary/nix in initrd";
+      wantedBy = [ "initrd-fs.target" ];
+      requires = [
+        "sysroot.mount"
+        "zfs-import-zfs-primary.service"
+      ];
+      after = [
+        "sysroot.mount"
+        "zfs-import-zfs-primary.service"
+      ];
+      before = [
+        "initrd-fs.target"
+        "sysroot-nix.mount"
+      ];
+      unitConfig.DefaultDependencies = "no";
+      serviceConfig = {
+        Type = "oneshot";
+        RemainAfterExit = true;
+      };
+      path = with pkgs; [ zfs ];
+      script = ''
+        key_file="/sysroot/crypto/keys/zfs-nix-store-key"
+        zfs load-key -L "file://$key_file" "ZFS-primary/nix"
+      '';
+    };
   };
 };
-1
@@ -90,7 +90,6 @@
     gocryptfs
     awscli2
-    claurst
   ];
 };
+1 -9
@@ -207,14 +207,6 @@ in
         };
       };
     };
-    rbw = {
-      enable = true;
-      settings = {
-        lockTimeout = 300;
-        pinentry = pkgs.pinentry-gnome3;
-        email = "snowinginwonderland@gmail.com";
-      };
-    };
   };
   services.gnome-keyring.enable = true;
@@ -276,7 +268,7 @@ in
     nextcloud-client
     bitwarden-cli
-    rofi-rbw-wayland
+    bitwarden-menu
     wtype
     obsidian
     libreoffice-qt-fresh
+3 -5
@@ -10,8 +10,6 @@ alice:
   attic-nix-cache-admin: ENC[AES256_GCM,data:xHJGeU4EUn1HRy2nIValiJ6iLZnYmmT6Njv/cGMh15Q0hJXKNBSsi8f0mAfLI7EX+GaC299VKh2uTlU25jptrAvogLxNJIc+LZBLsSkyGE/ojqqevHMKmZ/6eciLZRQL5ey9TM3V9HHyDOhGaFgdfawtwg/vyvbV13lZBKpqneAX9T3gPRuKRjV4/Uc/5cUckiOF8bQ50xVFN8Cql9HgGDJEGWgg4XUTPu5eYspof2EN63pYvU7wg6HD2begeLDvqc2/i2DIcsc0wqc5DgkY/dH2YtcssBtU8AR9vKpl+HmH/wvt6dfaEyZ7hF7ITGwWnOO6H2ko3SjYRfHkFK3XDmm1YRRjfkptnw==,iv:BdVgNyZ1azl5tKfH+RTeXuNV/rYY6hPvrareKlIXSeQ=,tag:/ar87eAjMod4TmQXoerNBQ==,type:str]
   gitea-actions-token: ENC[AES256_GCM,data:QTEPMAh1RWWJ/O3yhkQkEBTdVL8XhIRGCDbiM0lLjfILKF4SpSJ2sA==,iv:mBaaB1JHb2KVc9n2pdeX4pSMvb7q5z3joMT7rR5Whgs=,tag:ef+58SI4AUeqUsk3RVDsRQ==,type:str]
   gitea-pr-token: ENC[AES256_GCM,data:ybTya4X2wd65pNFSGbQkg73lu66GNtSba4yf8J6tT8XkuOtfvtBS4g==,iv:39mJiAlw4kud4l06jOpxOCRumChE/5q8IBNsPHG1rMc=,tag:MEvHD2b9E3fVHLlz7haNyw==,type:str]
-  honeycomb-id: ENC[AES256_GCM,data:PndCclCbSMrgmlYdQ5a8//IB+hg9uB0ZwidIZFiKN6w=,iv:mxFcGPnY0eCliugvQT5HR9aGzJIvXZI6FTo8rphVQMM=,tag:3huIaIy7da0gx2G/BVJJLQ==,type:str]
-  honeycomb-secret: ENC[AES256_GCM,data:x2MZPgAJz3I1m+rjSVpINZdQVbu50XiwEtPGEX/kbJw=,iv:bPNNtXIES0pCLc4Nu/886nwoOVXaZgyTxndSpyWWU0E=,tag:igdd/NPCDpcvzSd9LO4hyw==,type:str]
 sops:
   age:
     - recipient: age1qw5k8h72k3fjg5gmlxx8q8gwlc2k6n6u08d8hdzpm2pk9r0fnfxsmw33nh
@@ -41,8 +39,8 @@ sops:
         ZERFTlFyNjhOb3VCaW43ZXFHT1Vxc0UK7YV+BU7dCEOZxpqkQA394eDsnthvorj6
         7bqrCdeU+6DU7DmFs6++BrNO2tx8vvOa1im+ZGrM/gZAJdv/7R2d6Q==
         -----END AGE ENCRYPTED FILE-----
-  lastmodified: "2026-05-03T16:20:31Z"
-  mac: ENC[AES256_GCM,data:/Ui0H0wgENYnzVB7V2aAbj4dIbE+sjRpPoso75xyKepmeeMp8nv8CJLKmLmtbp+rtX287teVw4hKxb/z8M6ZHIWmgt9Zgui8/+hw5hwuMmyjPeGqJQjFH8s4zcXkOKLDoFPA65gva5gaPsreqPwOSqgqYwmds5gOcxBI3Tqdh6E=,iv:ugtug8Vb7DRUp1eC2M6ooCoPFjbqZo8htHOV4AXIcSI=,tag:4sMJsh/r57Z4VFoOBmUmXw==,type:str]
+  lastmodified: "2025-04-07T23:43:57Z"
+  mac: ENC[AES256_GCM,data:ygQzxSpGJqXwkOq7jGDeflA2FTSSxnre/PXm0LxmxzQQW5s7LeIVSI75fMqWir0WU3Pi/xroYGEWjpCG6JvxV5RiJycTONk8VE7c3jtw3AbrHSS0b1K5tJ+Sf+q3rHJFWWk/COrPk8IsRFNb+taqH4jnaH3AAVNo5u0C1CHKMes=,iv:FO2GVDXE8SjjA81/9cDwc+dX8kJ2oHt5kqkhNBuMb54=,tag:hgzRAmsh32SCvJEvKyV+vg==,type:str]
   pgp:
     - created_at: "2024-09-05T06:10:22Z"
      enc: |-
@@ -57,4 +55,4 @@ sops:
         -----END PGP MESSAGE-----
       fp: 5EFFB75F7C9B74EAA5C4637547940175096C1330
   unencrypted_suffix: _unencrypted
-  version: 3.12.2
+  version: 3.10.1