Compare commits

..

1 commit

Author SHA1 Message Date
3648b4d535
meta: add AI agent rules and skills
Some checks failed
Build / build (Testing) (pull_request) Has been cancelled
Build / build (Development) (pull_request) Has been cancelled
Build / Determining hosts to build (pull_request) Failing after 10m10s
Build / Determining hosts to build (push) Failing after 11m10s
Build / build (Testing) (push) Failing after 13m36s
Build / build (Development) (push) Failing after 15m18s
Create a modular, context-aware style guide for AI code assistants.

- Add nixos-architecture skill for .nix file generation and networking patterns
- Add dns-management rule to enforce Bind9 SOA serial increments
- Add cicd-networking rule for direct-IP runner authentication
- Add git-workflow rule to enforce conventional and atomic commits
2026-03-17 22:52:15 +01:00
38 changed files with 194 additions and 788 deletions


@@ -1,39 +0,0 @@
# Bos55 NixOS Configuration Style Guide
Follow these rules when modifying or extending the Bos55 NixOS configuration.
## 1. Network & IP Management
- **Local Ownership**: Define host IP addresses only within their respective host configuration files (e.g., `hosts/BinaryCache/default.nix`).
- **Dynamic Discovery**: Do NOT use global IP mapping modules. Instead, use inter-host evaluation to resolve IPs and ports at build time:
```nix
# In another host's config
let
  bcConfig = inputs.self.nixosConfigurations.BinaryCache.config;
  bcIp = (pkgs.lib.head bcConfig.networking.interfaces.ens18.ipv4.addresses).address;
in "http://${bcIp}:8080"
```
## 2. Modular Service Design
- **Encapsulation**: Services must be self-contained. Options like `openFirewall`, `port`, and `enableRemoteBuilder` should live in the service module (`modules/services/<service>/default.nix`).
- **Firewall Responsibility**: The service module is responsible for opening firewall ports (e.g., TCP 8080, SSH 22) based on its own options. Do not open ports manually in host files if the service provides an option.
- **Remote Builders**: If a service like Attic supports remote building, include the `builder` user, trusted-users, and SSH configuration within that module's options.
## 3. Container Networking
- **Discovery by Name**: Host services should connect to their companion containers (e.g., PostgreSQL) using the container name rather than `localhost` or bridge IPs.
- **Host Resolution**: Use `networking.extraHosts` in the service module to map the container name to `127.0.0.1` on the host for seamless traffic routing.
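
The two rules above can be sketched as follows. This is a minimal illustration, assuming an Attic service with a PostgreSQL companion container; the container name `attic-db` and the module shape are illustrative, not the project's actual module:

```nix
{ config, ... }:
{
  # Companion database container; "attic-db" is an illustrative name.
  containers.attic-db = {
    autoStart = true;
    config = { ... }: {
      services.postgresql.enable = true;
    };
  };

  # Map the container name to loopback so the host service can reach
  # the container as "attic-db" instead of localhost or a bridge IP.
  networking.extraHosts = ''
    127.0.0.1 attic-db
  '';
}
```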
## 4. Secrets Management (sops-nix)
- **Centralized Config**: Fleet-wide `sops-nix` settings (like `defaultSopsFile` and `age.keyFile`) must live in `modules/common/default.nix`.
- **No Hardcoded Paths**: Always use `config.sops.secrets."path/to/secret".path` to reference credentials.
## 5. DNS & DNS Zone Files
- **Serial Increment**: Every change to a Bind9 zone file (e.g., `db.depeuter.dev`) MUST increment the `Serial` number in the SOA record.
- **Specific Domains**: Prefer a single, well-defined domain (e.g., `nix-cache.depeuter.dev`) over multiple aliases or magic values.
## 6. CI/CD Robustness
- **IP-Based Login**: When CI runners (Gitea Actions) need to interact with internal services, use direct IP addresses (e.g., `192.168.0.25`) for login/auth to bypass potential DNS resolution issues in the runner environment.
## 7. Git Workflow & Commits
- **Atomic Commits**: Each commit should represent a single logical change and be easily revertible. Split docs, metadata, and core code changes into separate commits.
- **Conventional Commits**: Use conventional commit messages (e.g., `feat:`, `fix:`, `docs:`, `refactor:`, `ci:`, `meta:`).
- **Branching**: Always work in feature branches and push to origin to create pull requests.


@@ -0,0 +1,13 @@
---
name: cicd-networking
description: Networking constraints for CI/CD workflow files (Gitea/GitHub Actions).
globs: [".github/workflows/*.yml", ".github/workflows/*.yaml", ".gitea/workflows/*.yml", ".gitea/workflows/*.yaml"]
---
# Bos55 CI/CD Networking Constraints
When generating or modifying CI/CD workflows, strictly follow these networking practices:
1. **IP-Based Login for Reliability**
- When CI runners (like Gitea Actions) need to interact with internal services for authentication or deployment, always use direct IP addresses (e.g., `192.168.0.25`) for machine-to-machine login steps.
- **Why?** This bypasses potential DNS resolution issues or delays within the isolated runner environment, ensuring maximum robustness during automated CI/CD runs.
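
Applied to a workflow, the login step looks like the pattern below (the cache name, token secret, and IP are examples from this homelab's existing Build workflow, not requirements):

```yaml
- name: "Push to Attic"
  run: |
    nix profile install nixpkgs#attic-client
    # Authenticate against the direct IP, not the DNS name.
    attic login homelab http://192.168.0.25:8080 "${{ secrets.ATTIC_TOKEN }}"
    attic push homelab result
```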


@@ -0,0 +1,14 @@
---
name: dns-management
description: Hard constraints for modifying Bind9 DNS zone files.
globs: ["db.*", "*.zone"]
---
# Bos55 DNS Management Constraints
When modifying or generating Bind9 zone files, you MUST strictly adhere to the following rules:
1. **Serial Increment (CRITICAL)**
- Every single time you modify a Bind9 zone file (e.g., `db.depeuter.dev`), you MUST increment the Serial number in the SOA record. Failure to do so will cause DNS propagation to fail.
2. **Domain Name Specificity**
- Prefer a single, well-defined explicit domain (e.g., `nix-cache.depeuter.dev`) instead of creating multiple aliases or using magic values. Keep records clean and explicit.
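
For reference, a hedged sketch of an SOA record with the serial bumped on edit — the timer values and nameserver are illustrative, using the common `YYYYMMDDNN` serial convention:

```
$ORIGIN depeuter.dev.
@   IN  SOA ns1.depeuter.dev. hostmaster.depeuter.dev. (
        2026031702 ; Serial — was 2026031701 before this edit; bump on EVERY change
        7200       ; Refresh
        3600       ; Retry
        1209600    ; Expire
        3600 )     ; Negative caching TTL
nix-cache   IN  A   192.168.0.25
```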


@@ -0,0 +1,21 @@
---
name: git-workflow
description: Rules for generating Git commit messages and managing branch workflows.
globs: ["COMMIT_EDITMSG", ".git/*"]
---
# Git Workflow Constraints
When generating commit messages, reviewing code for a commit, or planning a branch workflow, strictly follow these standards:
1. **Commit Formatting**
- **Conventional Commits**: You MUST format all commit messages using conventional prefixes: `feat:`, `fix:`, `docs:`, `refactor:`, `ci:`, `meta:`.
- **Clarity**: Ensure the message clearly explains *what* changed and *why*.
2. **Atomic Commits**
- Group changes by a single logical concern.
- NEVER mix documentation updates, core infrastructure code, and style guide changes in the same commit.
- Ensure that the generated commit is easily revertible without breaking unrelated features.
3. **Branching Workflow**
- Always assume changes will be pushed to a feature branch to create a Pull Request.
- Do not suggest or generate commands that push directly to the main branch.
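
A small shell sketch of the prefix check an assistant (or a `commit-msg` hook) could apply — the function name is illustrative, and scoped prefixes like `feat(scope):` are deliberately out of scope here:

```shell
# Accept only the conventional prefixes listed in this rule.
check_commit_msg() {
  case "$1" in
    feat:*|fix:*|docs:*|refactor:*|ci:*|meta:*) echo ok ;;
    *) echo "invalid: expected feat:/fix:/docs:/refactor:/ci:/meta: prefix" >&2; return 1 ;;
  esac
}

check_commit_msg "feat: add attic binary cache module"   # prints "ok"
```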


@@ -1,51 +0,0 @@
---
name: bos55-nix-config
description: Best practices and codestyle for the Bos55 NixOS configuration project.
---
# Bos55 NixOS Configuration Skill
This skill provides the core principles and implementation patterns for the Bos55 NixOS project. Use this skill when adding new hosts, services, or networking rules.
## Core Principles
### 1. Minimal Hardcoding
- **Host IPs**: Always define IPv4/IPv6 addresses within the host configuration (`hosts/`).
- **Options**: Prefer `lib.mkOption` over hardcoded strings for ports, domain names, and database credentials.
- **Unified Variables**: If a value is shared (e.g., between a PG container and a host service), define a local variable (e.g., `let databaseName = "attic"; in ...`) to ensure consistency.
### 2. Service-Driven Configuration
- **Encapsulation**: Service modules should manage their own firewall rules, users/groups, and SSH settings.
- **Trusted Access**: Use the service module to define `nix.settings.trusted-users` for things like remote builders.
### 3. Build-Time Discovery
- **Inter-Host Evaluation**: To avoid magic values, resolve a host's IP or port by evaluating its configuration in the flake's output:
```nix
bcConfig = inputs.self.nixosConfigurations.BinaryCache.config;
```
- **Domain Deferral**: Client modules should derive their default domain settings from the server module's domain option.
## Implementation Patterns
### Container-Host Connectivity
- **Pattern**: `Service` on host -> `Container` via bridge mapping.
- **Rule**: Map the container name to `127.0.0.1` using `networking.extraHosts` to allow the host service to resolve the container by name without needing the bridge IP.
### Secrets Management
- **Rule**: Standardize all secrets via `sops-nix`.
- **Common Module**: Ensure `modules/common/default.nix` handles the default `sopsFile` and `age` key configuration.
### Bind9 Management
- **Rule**: **ALWAYS** increment the serial when editing zone records.
### CI/CD Networking
- **Rule**: Use direct IPs for machine-to-machine login steps in Actions workflows to ensure reliability across different runner environments.
## 4. Security & Documentation
- **Supply Chain Protection**: Always verify and lock Nix flake inputs. Use fixed-output derivations for external resource downloads.
- **Assumptions Documentation**: Clearly document environment assumptions (e.g., Proxmox virtualization, Tailscale networking, and specific IP ranges) in host or service READMEs.
- **Project Structure**: Maintain the separation of `hosts`, `modules`, `users`, and `secrets` to ensure clear ownership and security boundaries.
### 5. Git Standards
- **Rule**: Follow **Conventional Commits** (e.g., `feat:`, `refactor:`, `docs:`, `meta:`).
- **Rule**: Keep commits **atomic** and **revertible**. Never mix documentation, infrastructure, and style guide changes in a single commit.


@@ -0,0 +1,47 @@
---
name: bos55-nix-architecture
description: Implementation patterns for NixOS configurations, networking, and service modules.
globs: ["*.nix", "hosts/**/*", "modules/**/*", "secrets/**/*"]
---
# NixOS Architecture Skill
When generating or modifying NixOS configuration files for the Bos55 project, strictly adhere to the following architectural patterns:
## 1. Minimal Hardcoding & Dynamic Discovery
- **Local IP Ownership**: Define IPv4/IPv6 addresses **only** within their respective host configuration files (e.g., `hosts/<HostName>/default.nix`). Do not use global IP mapping modules.
- **Inter-Host Discovery**: Resolve a host's IP or port by evaluating its configuration at build time. Never hardcode another host's IP.
**Pattern Example**:
```nix
let
  bcConfig = inputs.self.nixosConfigurations.BinaryCache.config;
  bcIp = (pkgs.lib.head bcConfig.networking.interfaces.ens18.ipv4.addresses).address;
in "http://${bcIp}:8080"
```
- **Unified Variables**: Use local variables (e.g., `let dbName = "attic"; in ...`) for shared values between host services and containers to ensure consistency.
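
A sketch of the unified-variable pattern; the option paths below are illustrative, not the project's real module options:

```nix
{ config, ... }:
let
  dbName = "attic"; # single source of truth for host service and container
in {
  homelab.services.attic.databaseName = dbName;
  containers.attic-db.config = { ... }: {
    services.postgresql.ensureDatabases = [ dbName ];
  };
}
```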
## 2. Modular Service Encapsulation
- **Self-Contained Modules**: Service modules (`modules/services/<service>/default.nix`) must manage their own configurations. Prefer `lib.mkOption` over hardcoded strings for domains, ports, and credentials.
- **Firewall Responsibility**: Open ports (e.g., TCP 8080, SSH 22) directly within the service module based on its own options. Do not open service ports manually in host files.
- **Remote Builders**: Define `nix.settings.trusted-users`, `builder` user, and SSH rules directly within the service module if it supports remote building (e.g., Attic).
## 3. Networking & Connectivity
- **Container-to-Host**: Host services must connect to companion containers using the container name, not the bridge IP or `localhost`.
- **Host Resolution**: Map the container name to `127.0.0.1` using `networking.extraHosts` in the host service module to route traffic seamlessly.
- **Domain Deferral**: Client modules must derive their default domain from the server module's domain option.
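
Domain deferral can be sketched like this, reusing the inter-host discovery pattern from section 1 (the `homelab.clients.*` option names are assumptions for illustration):

```nix
{ lib, inputs, ... }:
{
  options.homelab.clients.attic.domain = lib.mkOption {
    type = lib.types.str;
    # Default follows the server's domain option instead of a local magic value.
    default = inputs.self.nixosConfigurations.BinaryCache.config.homelab.services.attic.domain;
    description = "Domain of the Attic server, deferred to the server module.";
  };
}
```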
## 4. Secrets Management
- **Sops-Nix Exclusivity**: Manage all secrets via `sops-nix`.
- **Centralized Config**: Rely on `modules/common/default.nix` for fleet-wide settings like `defaultSopsFile` and `age.keyFile`.
- **References**: Always reference credentials dynamically using `config.sops.secrets."path/to/secret".path`.
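
A minimal sketch of the reference pattern — the secret name and the consuming option are illustrative:

```nix
{ config, ... }:
{
  sops.secrets."attic/environment" = { };
  # Pass the decrypted path at runtime; never inline the credential.
  services.atticd.credentialsFile = config.sops.secrets."attic/environment".path;
}
```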
## 5. Security & Documentation
- **Supply Chain Protection**: Always verify and lock Nix flake inputs. Use fixed-output derivations for external resource downloads.
- **Assumptions Documentation**: Clearly document environment assumptions (e.g., Proxmox virtualization, Tailscale networking, and specific IP ranges) in host or service READMEs.
- **Project Structure**: Maintain the strict separation of `hosts/`, `modules/`, `users/`, and `secrets/` to ensure clear ownership and security boundaries.


@@ -1,50 +1,43 @@
-name: Build
-on:
-  push:
-    branches:
-      - main
-      - 'test-*'
-  pull_request:
-jobs:
-  # Job to find all hosts that should be built
-  get-hosts:
-    runs-on: ubuntu-latest
-    container: catthehacker/ubuntu:act-24.04
-    outputs:
-      hosts: ${{ steps.set-hosts.outputs.hosts }}
-    steps:
-      - uses: actions/checkout@v4
-      - name: Install Nix
-        uses: cachix/install-nix-action@v27
-      - id: set-hosts
-        run: |
-          # Extract host names from nixosConfigurations
-          HOSTS=$(nix eval .#nixosConfigurations --apply "builtins.attrNames" --json)
-          echo "hosts=$HOSTS" >> $GITHUB_OUTPUT
-  build:
-    needs: get-hosts
-    runs-on: ubuntu-latest
-    container: catthehacker/ubuntu:act-24.04
-    strategy:
-      fail-fast: false
-      matrix:
-        host: ${{ fromJson(needs.get-hosts.outputs.hosts) }}
-    steps:
-      - uses: actions/checkout@v4
-      - name: Install Nix
-        uses: cachix/install-nix-action@v27
-        with:
-          nix_path: nixpkgs=channel:nixos-unstable
-      - name: Build NixOS configuration
-        run: |
-          nix build .#nixosConfigurations.${{ matrix.host }}.config.system.build.toplevel
-      - name: "Push to Attic"
-        if: github.event_name == 'push' && github.ref == 'refs/heads/main'
-        run: |
-          nix profile install nixpkgs#attic-client
-          attic login homelab http://192.168.0.25:8080 "${{ secrets.ATTIC_TOKEN }}"
-          attic push homelab result
+name: "Build"
+on:
+  pull_request:
+  push:
+jobs:
+  determine-hosts:
+    name: "Determining hosts to build"
+    runs-on: ubuntu-latest
+    container: catthehacker/ubuntu:act-24.04
+    outputs:
+      hosts: ${{ steps.hosts.outputs.hostnames }}
+    steps:
+      - uses: actions/checkout@v5
+      - uses: https://github.com/cachix/install-nix-action@v31
+        with:
+          nix_path: nixpkgs=channel:nixos-unstable
+      - name: "Determine hosts"
+        id: hosts
+        run: |
+          hostnames="$(nix eval .#nixosConfigurations --apply builtins.attrNames --json)"
+          printf "hostnames=%s\n" "${hostnames}" >> "${GITHUB_OUTPUT}"
+  build:
+    runs-on: ubuntu-latest
+    container: catthehacker/ubuntu:act-24.04
+    needs: determine-hosts
+    strategy:
+      matrix:
+        hostname: [
+          Development,
+          Testing
+        ]
+    steps:
+      - uses: actions/checkout@v5
+      - uses: https://github.com/cachix/install-nix-action@v31
+        with:
+          nix_path: nixpkgs=channel:nixos-unstable
+      - name: "Build host"
+        run: |
+          nix build ".#nixosConfigurations.${{ matrix.hostname }}.config.system.build.toplevel" --verbose


@@ -1,24 +0,0 @@
name: Check
on:
  push:
    branches:
      - '**'
  pull_request:
jobs:
  check:
    runs-on: ubuntu-latest
    container: catthehacker/ubuntu:act-24.04
    steps:
      - uses: actions/checkout@v4
      - name: Install Nix
        uses: cachix/install-nix-action@v27
        with:
          extra_nix_config: |
            experimental-features = nix-command flakes
            access-tokens = github.com=${{ secrets.GITHUB_TOKEN }}
      - name: Flake check
        run: nix flake check


@@ -1,81 +0,0 @@
name: Deploy
on:
  push:
    branches:
      - main
      - 'test-*'
  workflow_dispatch:
    inputs:
      mode:
        description: 'Activation mode (switch, boot, test)'
        default: 'switch'
        required: true
jobs:
  deploy:
    runs-on: ubuntu-latest
    container: catthehacker/ubuntu:act-24.04
    steps:
      - uses: actions/checkout@v4
      - name: Install Nix
        uses: cachix/install-nix-action@v27
        with:
          extra_nix_config: |
            experimental-features = nix-command flakes
      - name: Setup SSH
        run: |
          mkdir -p ~/.ssh
          echo "${{ secrets.DEPLOY_SSH_KEY }}" > ~/.ssh/id_ed25519
          chmod 600 ~/.ssh/id_ed25519
          ssh-keyscan -H 192.168.0.0/24 >> ~/.ssh/known_hosts || true
          # Disable strict host key checking for the local network if needed,
          # or rely on known_hosts. For homelab, we can be slightly more relaxed,
          # but let's try to be secure.
          echo "StrictHostKeyChecking no" >> ~/.ssh/config
      - name: Verify Commit Signature
        if: github.event.sender.login != 'renovate[bot]'
        run: |
          # TODO Hugo: Export your public GPG/SSH signing keys to a runner secret named 'TRUSTED_SIGNERS'.
          # For GPG: gpg --export --armor <id> | base64 -w0
          if [ -z "${{ secrets.TRUSTED_SIGNERS }}" ]; then
            echo "::error::TRUSTED_SIGNERS secret is missing. Deployment aborted for safety."
            exit 1
          fi
          # Implementation note: This step expects a keyring in the TRUSTED_SIGNERS secret.
          # We use git to verify the signature of the current commit.
          echo "${{ secrets.TRUSTED_SIGNERS }}" | base64 -d > /tmp/trusted_keys.gpg
          gpg --import /tmp/trusted_keys.gpg
          if ! git verify-commit HEAD; then
            echo "::error::Commit signature verification failed. Only signed commits from trusted maintainers can be deployed."
            exit 1
          fi
          echo "Commit signature verified successfully."
      - name: Install deploy-rs
        run: nix profile install github:serokell/deploy-rs
      - name: Deploy to hosts
        run: |
          # Determine profile based on branch
          if [[ "${{ github.ref }}" == "refs/heads/main" ]]; then
            # Main site: persistent deployment
            deploy . --skip-checks --targets $(deploy . --list | grep '.system$' | tr '\n' ' ')
          elif [[ "${{ github.ref }}" == "refs/heads/test-"* ]]; then
            # Test branch: non-persistent deployment (test profile)
            # The branch name should be test-<hostname>
            HOSTNAME="${GITHUB_REF#refs/heads/test-}"
            deploy ".#${HOSTNAME}.test" --skip-checks
          fi
      - name: Manual Deploy
        if: github.event_name == 'workflow_dispatch'
        run: |
          # TODO: Implement manual dispatch logic if needed
          deploy . --skip-checks


@@ -1,64 +0,0 @@
# Bos55 NixOS Config
Automated CI/CD deployment for NixOS homelab using `deploy-rs`.
## Repository Structure
- `hosts/`: Host-specific configurations.
- `modules/`: Shared NixOS modules.
- `users/`: User definitions (including the `deploy` user).
- `secrets/`: Encrypted secrets via `sops-nix`.
## Deployment Workflow
### Prerequisites
- SSH access to the `deploy` user on target hosts.
- `deploy-rs` installed locally (`nix profile install github:serokell/deploy-rs`).
### Deployment Modes
1. **Production Deployment (main branch):**
Triggered on push to `main`. Automatically builds and switches all hosts; the bootloader is updated.
Manual: `deploy .`
2. **Test Deployment (test-<hostname> branch):**
Triggered on push to `test-<hostname>`. Builds and activates the configuration on the specific host **without** updating the bootloader. Reboots will revert to the previous generation.
Manual: `deploy .#<hostname>.test`
3. **Kernel Upgrades / Maintenance:**
Use `deploy .#<hostname>.system --boot` to update the bootloader without immediate activation, followed by a manual reboot.
## Local Development
### 1. Developer Shell
This repository includes a standardized development environment containing all necessary tools (`deploy-rs`, `sops`, `age`, etc.).
```bash
nix develop
# or if using direnv
direnv allow
```
### 2. Build a host VM
You can build a QEMU VM for any host configuration to test changes locally:
```bash
nix build .#nixosConfigurations.<hostname>.config.system.build.vm
./result/bin/run-<hostname>-vm
```
> [!WARNING]
> **Network Conflict**: Default VMs use user-mode networking (NAT) which is safe. However, if you configure the VM to use bridge networking, it will attempt to use the static IP defined in `hostIp`. Ensure you do not have a physical host with that IP active on the same bridge to avoid network interference.
### 3. Run Integration Tests
Run the automated test suite:
```bash
nix-build test/vm-test.nix
```
### 4. Test CI Workflows Locally
Use `act` to test the GitHub Actions workflows:
```bash
act -W .github/workflows/check.yml
```
## Security
See [SECURITY.md](SECURITY.md) for details on the trust model and secret management.


@@ -1,93 +0,0 @@
# Security and Trust Model
This document outlines the security architecture, trust boundaries, and assumptions of the Bos55 NixOS deployment pipeline. This model is designed to support a multi-member infrastructure team and remains secure even if the repository is published publicly.
## Trust Zones
The system is partitioned into three distinct trust zones, each with specific controls to prevent lateral movement and privilege escalation.
### 🔴 Zone 1: Trusted Maintainers (Source of Truth)
* **Actors:** Infrastructure Team / Maintainers.
* **Capabilities:**
* Full access to the Git repository.
* Ownership of `sops-nix` master keys (GPG or Age).
* Direct root access to NixOS hosts via personal SSH keys for emergency maintenance.
* **Trust:** Root of trust. All changes must originate from or be approved by a Trusted Maintainer.
* **Security Controls:**
* **Signed Commits:** All contributions must be cryptographically signed by a trusted GPG/SSH key to be eligible for deployment.
* **MFA:** Hardware-based multi-factor authentication for repository access.
* **Metadata Redaction:** Sensitive identifiers like SSH `authorizedKeys` are stored in `sops-nix`. This prevents **infrastructure fingerprinting**, where an attacker could link your public keys to your personal identities or other projects.
### 🟡 Zone 2: CI/CD Pipeline (Automation Layer)
* **Actor:** GitHub Actions / Forgejo Runners.
* **Capabilities:**
* Builds Nix derivations from the repository.
* Access to the `DEPLOY_SSH_KEY` (allowing SSH access to the `deploy` user on target hosts).
* **Trusted Signers:** The public keys for verifying signatures are stored as a **Runner Secret** (`TRUSTED_SIGNERS`). This hides the identities of the infrastructure team even in a public repository.
* **NO ACCESS** to `sops-nix` decryption keys. Secrets remain encrypted during the build.
* **Security Controls:**
* **Signature Enforcement:** The `deploy.yml` workflow verifies the cryptographic signature of every maintainer commit. Deployment is aborted if the signature is missing or untrusted.
* **Sandboxing:** Runners execute in ephemeral, isolated containers.
* **Branch Protection:** Deployments to production (`main`) require approved Pull Requests.
* **Fork Protection:** CI workflows (and secrets) are explicitly disabled for forks.
### 🟢 Zone 3: Target NixOS Hosts (Runtime)
* **Actor:** Production, Testing, and Service nodes.
* **Capabilities:** Decrypt secrets locally using host-specific `age` keys.
* **Trust:** Consumers of builds. They trust Zone 2 only to push store paths and trigger activation scripts.
* **Security Controls:**
* **Restricted `deploy` User:** The SSH user for automation is non-root. Sudo access is strictly policed via `sudoers` rules to allow only `nix-env` and `switch-to-configuration`.
* **Immutable Store:** Building on Nix ensures that the system state is derived from a cryptographically hashed store, preventing unauthorized local modifications from persisting across reboots.
---
## Security Assumptions & Policies
### 1. Public Repository Safety
The repository is designed to be safe for public viewing. No unencrypted secrets should ever be committed. The deployment pipeline is protected against "malicious contributors" via:
- **Mandatory PR Reviews:** No code can reach the `main` branch without peer review.
- **Secret Scoping:** Deployment keys are only available to authorized runs on protected branches.
### 2. Supply Chain & Dependencies
- **Flake Lockfiles:** All dependencies (Nixpkgs, `deploy-rs`, etc.) are pinned to specific git revisions.
- **Renovate Bot:** Automated version upgrades allow for consistent tracking of upstream changes, though they require manual review or successful status checks for minor/patch versions.
### 3. Signed Commit Enforcement
To prevent "force-push" attacks or runner compromises from injecting malicious code into the history, the pipeline should be configured to only deploy commits signed by a known "Trusted Maintainer" key. This ensures that even if a git account is compromised, the attacker cannot deploy code without the physical/cryptographic signing key.
---
## Trust Boundary Diagram
```mermaid
graph TD
subgraph "Zone 1: Trusted Workstations"
DEV["Maintainers (Team)"]
SOPS_KEYS["Master SOPS Keys"]
SIGN_KEYS["Signing Keys (GPG/SSH)"]
end
subgraph "Zone 2: CI/CD Runner (Sandboxed)"
CI["Automated Runner"]
SSH_KEY["Deploy SSH Key"]
end
subgraph "Zone 3: NixOS Target Hosts"
HOST["Target Host"]
HOST_AGE["Host Age Key"]
end
DEV -- "Signed Push / PR" --> CI
CI -- "Push Store Paths & Activate" --> HOST
HOST_AGE -- "Local Decrypt" --> HOST
style DEV fill:#f96,stroke:#333
style CI fill:#ff9,stroke:#333
style HOST fill:#9f9,stroke:#333
```
## Security Best Practices for Maintainers
1. **Keep Master Keys Offline:** Never store `sops-nix` master keys on the CI runner or public servers.
2. **Audit Runner Logs:** Periodically review CI execution logs for unexpected behavior.
3. **Rotate Deployment Keys:** Rotate the `DEPLOY_SSH_KEY` if maintainer membership changes significantly.


@@ -13,78 +13,52 @@
       url = "github:gytis-ivaskevicius/flake-utils-plus";
       inputs.flake-utils.follows = "flake-utils";
     };
-    deploy-rs = {
-      url = "github:serokell/deploy-rs";
-      inputs.nixpkgs.follows = "nixpkgs";
-    };
   };
   outputs = inputs@{
     self, nixpkgs,
-    flake-utils, sops-nix, utils, deploy-rs,
+    flake-utils, sops-nix, utils,
     ...
   }:
   let
     system = utils.lib.system.x86_64-linux;
-    lib = nixpkgs.lib;
   in
   utils.lib.mkFlake {
     inherit self inputs;
-    hostDefaults.modules = [
-      ./modules
-      ./users
-      sops-nix.nixosModules.sops
-    ];
-    hosts = {
-      # Infrastructure
-      Niko.modules = [ ./hosts/Niko ];
-      Ingress.modules = [ ./hosts/Ingress ];
-      Gitea.modules = [ ./hosts/Gitea ];
-      Vaultwarden.modules = [ ./hosts/Vaultwarden ];
-      BinaryCache.modules = [ ./hosts/BinaryCache ];
-      # Production
-      Binnenpost.modules = [ ./hosts/Binnenpost ];
-      Production.modules = [ ./hosts/Production ];
-      ProductionGPU.modules = [ ./hosts/ProductionGPU ];
-      ProductionArr.modules = [ ./hosts/ProductionArr ];
-      ACE.modules = [ ./hosts/ACE ];
-      # Lab
-      Template.modules = [ ./hosts/Template ];
-      Development.modules = [ ./hosts/Development ];
-      Testing.modules = [ ./hosts/Testing ];
-    };
-    deploy.nodes = let
-      pkg = deploy-rs.lib.${system};
-      isDeployable = nixos: (nixos.config.homelab.users.deploy.enable or false) && (nixos.config.homelab.networking.hostIp != null);
-    in
-      builtins.mapAttrs (_: nixos: {
-        hostname = nixos.config.homelab.networking.hostIp;
-        sshUser = "deploy";
-        user = "root";
-        profiles.system.path = pkg.activate.nixos nixos;
-        profiles.test.path = pkg.activate.custom nixos.config.system.build.toplevel ''
-          $PROFILE/bin/switch-to-configuration test
-        '';
-      }) (lib.filterAttrs (_: isDeployable) self.nixosConfigurations);
-    checks = builtins.mapAttrs (_: lib: lib.deployChecks self.deploy) deploy-rs.lib;
-    outputsBuilder = channels: {
-      formatter = channels.nixpkgs.alejandra;
-      devShells.default = channels.nixpkgs.mkShell {
-        name = "homelab-dev";
-        buildInputs = [
-          deploy-rs.packages.${system}.deploy-rs
-          channels.nixpkgs.sops
-          channels.nixpkgs.age
-        ];
-        shellHook = "echo '🛡 Homelab Development Shell Loaded'";
-      };
-    };
+    hostDefaults = {
+      inherit system;
+      modules = [
+        ./modules
+        ./users
+        sops-nix.nixosModules.sops
+      ];
+    };
+    hosts = {
+      # Physical hosts
+      Niko.modules = [ ./hosts/Niko ];
+      # Virtual machines
+      # Single-service
+      Ingress.modules = [ ./hosts/Ingress ];
+      Gitea.modules = [ ./hosts/Gitea ];
+      Vaultwarden.modules = [ ./hosts/Vaultwarden ];
+      # Production multi-service
+      Binnenpost.modules = [ ./hosts/Binnenpost ];
+      Production.modules = [ ./hosts/Production ];
+      ProductionGPU.modules = [ ./hosts/ProductionGPU ];
+      ProductionArr.modules = [ ./hosts/ProductionArr ];
+      ACE.modules = [ ./hosts/ACE ];
+      # Others
+      Template.modules = [ ./hosts/Template ];
+      Development.modules = [ ./hosts/Development ];
+      Testing.modules = [ ./hosts/Testing ];
+    };
   };
 }


@@ -1,12 +1,10 @@
-{ config, pkgs, ... }:
+{ pkgs, ... }:
 {
   config = {
     homelab = {
-      networking.hostIp = "192.168.0.41";
       services.actions.enable = true;
       virtualisation.guest.enable = true;
-      users.deploy.enable = true;
     };
     networking = {
@@ -26,7 +24,7 @@
       interfaces.ens18 = {
         ipv4.addresses = [
           {
-            address = config.homelab.networking.hostIp;
+            address = "192.168.0.41";
             prefixLength = 24;
           }
         ];


@@ -1,49 +0,0 @@
{ config, pkgs, lib, system, ... }:
let
  hostIp = "192.168.0.25";
in {
  config = {
    homelab = {
      services.attic = {
        enable = true;
        enableRemoteBuilder = true;
        openFirewall = true;
      };
      virtualisation.guest.enable = true;
    };
    networking = {
      hostName = "BinaryCache";
      hostId = "100002500";
      domain = "depeuter.dev";
      useDHCP = false;
      enableIPv6 = true;
      defaultGateway = {
        address = "192.168.0.1";
        interface = "ens18";
      };
      interfaces.ens18 = {
        ipv4.addresses = [
          {
            address = hostIp;
            prefixLength = 24;
          }
        ];
      };
      nameservers = [
        "1.1.1.1" # Cloudflare
        "1.0.0.1" # Cloudflare
      ];
    };
    # Sops configuration for this host is now handled by the common module
    system.stateVersion = "24.05";
  };
}


@@ -1,4 +1,4 @@
-{ config, inputs, pkgs, ... }:
+{ pkgs, ... }:
 {
   config = {
@@ -13,14 +13,12 @@
     };
     homelab = {
-      networking.hostIp = "192.168.0.89";
       apps = {
         speedtest.enable = true;
         technitiumDNS.enable = true;
         traefik.enable = true;
       };
       virtualisation.guest.enable = true;
-      users.deploy.enable = true;
     };
     networking = {
@@ -45,7 +43,7 @@
       interfaces.ens18 = {
         ipv4.addresses = [
           {
-            address = config.homelab.networking.hostIp;
+            address = "192.168.0.89";
             prefixLength = 24;
           }
         ];
@@ -85,14 +83,6 @@
       "traefik.http.routers.hugo.rule" = "Host(`hugo.depeuter.dev`)";
       "traefik.http.services.hugo.loadbalancer.server.url" = "https://192.168.0.11:444";
-      "traefik.http.routers.attic.rule" = "Host(`${inputs.self.nixosConfigurations.BinaryCache.config.homelab.services.attic.domain}`)";
-      "traefik.http.services.attic.loadbalancer.server.url" =
-        let
-          bcConfig = inputs.self.nixosConfigurations.BinaryCache.config;
-          bcIp = (pkgs.lib.head bcConfig.networking.interfaces.ens18.ipv4.addresses).address;
-          bcPort = bcConfig.homelab.services.attic.port;
-        in "http://${bcIp}:${toString bcPort}";
     };
     system.stateVersion = "24.05";


@@ -3,7 +3,6 @@
 {
   config = {
     homelab = {
-      networking.hostIp = "192.168.0.91";
       apps = {
         bind9.enable = true;
         homepage = {
@@ -14,7 +13,6 @@
         plex.enable = true;
       };
       virtualisation.guest.enable = true;
-      users.deploy.enable = true;
     };
     networking = {
@@ -38,7 +36,7 @@
       interfaces.ens18 = {
         ipv4.addresses = [
           {
-            address = config.homelab.networking.hostIp;
+            address = "192.168.0.91";
             prefixLength = 24;
           }
         ];
@@ -61,8 +59,7 @@
       environment = {
         # NOTE Required
         # The email address used when setting up the initial administrator account to login to pgAdmin.
-        # TODO Hugo: Populate 'pgadmin_email' in sops.
-        PGADMIN_DEFAULT_EMAIL = config.sops.placeholder.pgadmin_email or "pgadmin-admin@example.com";
+        PGADMIN_DEFAULT_EMAIL = "kmtl.hugo+pgadmin@gmail.com";
         # NOTE Required
         # The password used when setting up the initial administrator account to login to pgAdmin.
         PGADMIN_DEFAULT_PASSWORD = "ChangeMe";


@@ -3,12 +3,9 @@
 {
   config = {
     homelab = {
-      networking.hostIp = "192.168.0.24";
       apps.gitea.enable = true;
       virtualisation.guest.enable = true;
-      users.deploy.enable = true;
       users.admin = {
         enable = true;
         authorizedKeys = [
@@ -31,7 +28,7 @@
       interfaces.ens18 = {
         ipv4.addresses = [
           {
-            address = config.homelab.networking.hostIp;
+            address = "192.168.0.24";
             prefixLength = 24;
           }
         ];

View file

@@ -2,11 +2,7 @@
 {
   config = {
-    homelab = {
-      networking.hostIp = "192.168.0.10";
-      virtualisation.guest.enable = true;
-      users.deploy.enable = true;
-    };
+    homelab.virtualisation.guest.enable = true;

     networking = {
       hostName = "Ingress";
@@ -23,8 +19,8 @@
       interfaces.ens18 = {
         ipv4.addresses = [
           {
-            address = config.homelab.networking.hostIp;
+            address = "192.168.0.10";
             prefixLength = 24;
           }
         ];
       };
@@ -43,7 +39,6 @@
     };
   };

   security.acme = {
     acceptTerms = true;
     defaults = {
@@ -51,7 +46,7 @@
       dnsPropagationCheck = true;
       dnsProvider = "cloudflare";
       dnsResolver = "1.1.1.1:53";
-      email = config.sops.placeholder.acme_email or "acme-email@example.com";
+      email = "tibo.depeuter@telenet.be";
       credentialFiles = {
         CLOUDFLARE_DNS_API_TOKEN_FILE = "/var/lib/secrets/depeuter-dev-cloudflare-api-token";
       };

View file

@@ -165,7 +165,7 @@ providers:
       # Certificates
       "--certificatesresolvers.letsencrypt.acme.dnschallenge=true"
       "--certificatesresolvers.letsencrypt.acme.dnschallenge.provider=cloudflare"
-      "--certificatesresolvers.letsencrypt.acme.email=${config.sops.placeholder.acme_email or "acme-email@example.com"}"
+      "--certificatesresolvers.letsencrypt.acme.email=tibo.depeuter@telenet.be"
       "--certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json"

       # Additional routes
@@ -176,8 +176,8 @@
       # "8080:8080/tcp" # The Web UI (enabled by --api.insecure=true)
     ];
     environment = {
-      # TODO Hugo: Populate 'cloudflare_dns_token' in sops.
-      "CLOUDFLARE_DNS_API_TOKEN" = config.sops.placeholder.cloudflare_dns_token or "CLOUDFLARE_TOKEN_PLACEHOLDER";
+      # TODO Hide this!
+      "CLOUDFLARE_DNS_API_TOKEN" = "6Vz64Op_a6Ls1ljGeBxFoOVfQ-yB-svRbf6OyPv2";
     };
     environmentFiles = [
     ];

View file

@@ -7,7 +7,6 @@
   ];

   homelab = {
-    networking.hostIp = "192.168.0.11";
     apps = {
       technitiumDNS.enable = true;
       traefik.enable = true;

View file

@@ -3,13 +3,11 @@
 {
   config = {
     homelab = {
-      networking.hostIp = "192.168.0.31";
       apps = {
         calibre.enable = true;
         traefik.enable = true;
       };
       virtualisation.guest.enable = true;
-      users.deploy.enable = true;
     };

     networking = {
@@ -33,7 +31,7 @@
       interfaces.ens18 = {
         ipv4.addresses = [
           {
-            address = config.homelab.networking.hostIp;
+            address = "192.168.0.31";
             prefixLength = 24;
           }
         ];

View file

@@ -3,13 +3,11 @@
 {
   config = {
     homelab = {
-      networking.hostIp = "192.168.0.33";
       apps = {
         arr.enable = true;
         traefik.enable = true;
       };
       virtualisation.guest.enable = true;
-      users.deploy.enable = true;
     };

     networking = {
@@ -33,7 +31,7 @@
       interfaces.ens18 = {
         ipv4.addresses = [
           {
-            address = config.homelab.networking.hostIp;
+            address = "192.168.0.33";
             prefixLength = 24;
           }
         ];

View file

@@ -3,10 +3,8 @@
 {
   config = {
     homelab = {
-      networking.hostIp = "192.168.0.94";
       apps.jellyfin.enable = true;
       virtualisation.guest.enable = true;
-      users.deploy.enable = true;
     };

     networking = {
@@ -30,7 +28,7 @@
       interfaces.ens18 = {
         ipv4.addresses = [
           {
-            address = config.homelab.networking.hostIp;
+            address = "192.168.0.94";
             prefixLength = 24;
           }
         ];

View file

@@ -3,13 +3,11 @@
 {
   config = {
     homelab = {
-      networking.hostIp = "192.168.0.92";
       apps = {
         freshrss.enable = true;
         traefik.enable = true;
       };
       virtualisation.guest.enable = true;
-      users.deploy.enable = true;
     };

     networking = {
@@ -34,7 +32,7 @@
       interfaces.ens18 = {
         ipv4.addresses = [
           {
-            address = config.homelab.networking.hostIp;
+            address = "192.168.0.92";
             prefixLength = 24;
           }
         ];

View file

@@ -3,7 +3,6 @@
 {
   config = {
     homelab = {
-      networking.hostIp = "192.168.0.22";
       apps.vaultwarden = {
         enable = true;
         domain = "https://vault.depeuter.dev";
@@ -11,15 +10,11 @@
       };
       virtualisation.guest.enable = true;
-      users = {
-        deploy.enable = true;
-        admin = {
-          enable = true;
-          authorizedKeys = [
-            "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJnihoyozOCnm6T9OzL2xoMeMZckBYR2w43us68ABA93"
-          ];
-        };
+      users.admin = {
+        enable = true;
+        authorizedKeys = [
+          "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJnihoyozOCnm6T9OzL2xoMeMZckBYR2w43us68ABA93"
+        ];
       };
     };
@@ -37,7 +32,7 @@
       interfaces.ens18 = {
         ipv4.addresses = [
           {
-            address = config.homelab.networking.hostIp;
+            address = "192.168.0.22";
             prefixLength = 24;
           }
         ];

View file

@@ -1,6 +1,6 @@
 $TTL 604800
 @ IN SOA ns1 admin (
-    16 ; Serial
+    15 ; Serial
     604800 ; Refresh
     86400 ; Retry
     2419200 ; Expire
@@ -40,9 +40,6 @@ sonarr IN A 192.168.0.33
 ; Development VM
 plex IN A 192.168.0.91

-; Binary Cache (via Binnenpost proxy)
-nix-cache IN A 192.168.0.89
-
 ; Catchalls
 *.production IN A 192.168.0.31
 *.development IN A 192.168.0.91
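Note that this hunk moves the SOA serial from 16 to 15, while the dns-management rule named in the commit message requires serials to only move forward. A compliant edit (assuming 16 is the highest serial ever published for this zone) would bump the serial past it, roughly like this sketch:

```bind
$TTL 604800
@ IN SOA ns1 admin (
    17 ; Serial: strictly greater than any previously published value (here, 16)
    604800 ; Refresh
    86400 ; Retry
    2419200 ; Expire
```

Secondaries compare serials with serial-number arithmetic, so a serial that goes backwards silently stops zone transfers until it catches up again.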

View file

@@ -496,8 +496,7 @@ in {
       #FORGEJO__mailer__CLIENT_KEY_FILE = "custom/mailer/key.pem";
       # Mail from address, RFC 5322. This can be just an email address, or the
       # `"Name" <email@example.com>` format.
-      # TODO Hugo: Populate 'gitea_mailer_from' in sops.
-      FORGEJO__mailer__FROM = config.sops.placeholder.gitea_mailer_from or "git@example.com";
+      FORGEJO__mailer__FROM = ''"${title}" <git@depeuter.dev>'';
       # Sometimes it is helpful to use a different address on the envelope. Set this to use
       # ENVELOPE_FROM as the from on the envelope. Set to `<>` to send an empty address.
       #FORGEJO__mailer__ENVELOPE_FROM = "";

View file

@@ -72,7 +72,7 @@ in {
       # Certificates
       "--certificatesresolvers.letsencrypt.acme.dnschallenge=true"
       "--certificatesresolvers.letsencrypt.acme.dnschallenge.provider=cloudflare"
-      "--certificatesresolvers.letsencrypt.acme.email=${config.sops.placeholder.acme_email or "acme-email@example.com"}"
+      "--certificatesresolvers.letsencrypt.acme.email=tibo.depeuter@telenet.be"
       "--certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json"
     ];
     volumes = [

View file

@@ -344,7 +344,6 @@ in {
       # ORG_CREATION_USERS=none
       ## A comma-separated list means only those users can create orgs:
       # ORG_CREATION_USERS=admin1@example.com,admin2@example.com
-      # TODO Hugo: Redact org creation users if needed.

       ## Invitations org admins to invite users, even when signups are disabled
       # INVITATIONS_ALLOWED=true
@@ -591,7 +590,7 @@ in {
       ## To make sure the email links are pointing to the correct host, set the DOMAIN variable.
       ## Note: if SMTP_USERNAME is specified, SMTP_PASSWORD is mandatory
       SMTP_HOST = "smtp.gmail.com";
-      SMTP_FROM = config.sops.placeholder.vaultwarden_smtp_from or "vaultwarden@example.com";
+      SMTP_FROM = "vault@depeuter.dev";
       SMTP_FROM_NAME = cfg.name;
       # SMTP_USERNAME=username
       # SMTP_PASSWORD=password

View file

@@ -1,15 +1,8 @@
 {
-  imports = [
-    ./networking.nix
-    ./secrets.nix
-    ./substituters.nix
-  ];
-
   config = {
     homelab = {
       services.openssh.enable = true;
       users.admin.enable = true;
-      common.substituters.enable = true;
     };

     nix.settings.experimental-features = [
@@ -19,10 +12,5 @@
     # Set your time zone.
     time.timeZone = "Europe/Brussels";

-    sops = {
-      defaultSopsFile = ../../secrets/secrets.yaml;
-      age.keyFile = "/var/lib/sops-nix/key.txt";
-    };
   };
 }

View file

@@ -1,19 +0,0 @@
-{ config, lib, ... }:
-{
-  options.homelab.networking = {
-    hostIp = lib.mkOption {
-      type = lib.types.nullOr lib.types.str;
-      default = null;
-      description = ''
-        The primary IP address of the host.
-        Used for automated deployment and internal service discovery.
-      '';
-    };
-  };
-
-  config = lib.mkIf (config.homelab.networking.hostIp != null) {
-    # If a hostIp is provided, we can potentially use it to configure
-    # networking interfaces or firewall rules automatically here in the future.
-  };
-}

View file

@@ -1,18 +0,0 @@
-{ config, lib, ... }:
-{
-  sops.secrets = {
-    # -- User Public Keys (Anti-Fingerprinting) --
-    "user_keys_admin" = { neededForUsers = true; };
-    "user_keys_deploy" = { neededForUsers = true; };
-    "user_keys_backup" = { neededForUsers = true; };
-
-    # -- Infrastructure Metadata --
-    # Hugo TODO: Populate these in your .sops.yaml / secrets file
-    "acme_email" = {};
-    "cloudflare_dns_token" = {};
-    "pgadmin_email" = {};
-    "gitea_mailer_from" = {};
-    "vaultwarden_smtp_from" = {};
-  };
-}

View file

@@ -1,28 +0,0 @@
-{ config, lib, pkgs, inputs, ... }:
-let
-  cfg = config.homelab.common.substituters;
-in {
-  options.homelab.common.substituters = {
-    enable = lib.mkEnableOption "Binary cache substituters";
-
-    domain = lib.mkOption {
-      type = lib.types.str;
-      default = inputs.self.nixosConfigurations.BinaryCache.config.homelab.services.attic.domain;
-      description = "The domain name of the binary cache.";
-    };
-
-    publicKey = lib.mkOption {
-      type = lib.types.nullOr lib.types.str;
-      default = null;
-      description = "The public key of the Attic cache (e.g., 'homelab:...')";
-    };
-  };
-
-  config = lib.mkIf cfg.enable {
-    nix.settings = {
-      substituters = [
-        "https://${cfg.domain}"
-      ];
-      trusted-public-keys = lib.optional (cfg.publicKey != null) cfg.publicKey;
-    };
-  };
-}
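For context, this removed module followed the dynamic-discovery pattern from the style guide: its `domain` default was resolved at build time from the BinaryCache host's own config. A host would have consumed it roughly like the sketch below; the `publicKey` value is a placeholder, not a key from this repository:

```nix
# Hypothetical host-side usage of the (now-removed) substituters module.
{
  homelab.common.substituters = {
    enable = true;
    # domain is left at its default, which resolves
    # inputs.self.nixosConfigurations.BinaryCache at evaluation time.
    publicKey = "homelab:<base64-public-key>"; # placeholder, supply the real Attic key
  };
}
```

With `publicKey` unset, the module added the substituter URL but left `trusted-public-keys` empty, so unsigned paths would still be rejected.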

View file

@@ -1,119 +0,0 @@
-{ config, lib, pkgs, ... }:
-let
-  cfg = config.homelab.services.attic;
-in {
-  options.homelab.services.attic = {
-    enable = lib.mkEnableOption "Attic binary cache server";
-
-    domain = lib.mkOption {
-      type = lib.types.str;
-      default = "nix-cache.depeuter.dev";
-      description = "The domain name for the Attic server.";
-    };
-
-    port = lib.mkOption {
-      type = lib.types.port;
-      default = 8080;
-      description = "The port Attic server listens on.";
-    };
-
-    databaseName = lib.mkOption {
-      type = lib.types.str;
-      default = "attic";
-      description = "The name of the PostgreSQL database.";
-    };
-
-    dbContainerName = lib.mkOption {
-      type = lib.types.str;
-      default = "attic-db";
-      description = "The name of the PostgreSQL container.";
-    };
-
-    storagePath = lib.mkOption {
-      type = lib.types.str;
-      default = "/var/lib/atticd/storage";
-      description = "The path where Attic store's its blobs.";
-    };
-
-    openFirewall = lib.mkOption {
-      type = lib.types.bool;
-      default = false;
-      description = "Whether to open the firewall port for Attic.";
-    };
-
-    enableRemoteBuilder = lib.mkOption {
-      type = lib.types.bool;
-      default = false;
-      description = "Whether to enable remote build capabilities on this host.";
-    };
-  };
-
-  config = lib.mkIf cfg.enable {
-    sops.secrets = {
-      "attic/db-password" = { };
-      "attic/server-token-secret" = { };
-    };
-
-    services.atticd = {
-      enable = true;
-      environmentFile = config.sops.secrets."attic/server-token-secret".path;
-      settings = {
-        listen = "[::]:${toString cfg.port}";
-        allowed-hosts = [ cfg.domain ];
-        api-endpoint = "https://${cfg.domain}/";
-        database.url = "postgresql://${cfg.databaseName}@${cfg.dbContainerName}:5432/${cfg.databaseName}";
-        storage = {
-          type = "local";
-          path = cfg.storagePath;
-        };
-        chunking = {
-          min-size = 16384; # 16 KiB
-          avg-size = 65536; # 64 KiB
-          max-size = 262144; # 256 KiB
-        };
-      };
-    };
-
-    homelab.virtualisation.containers.enable = true;
-
-    virtualisation.oci-containers.containers."${cfg.dbContainerName}" = {
-      image = "postgres:15-alpine";
-      autoStart = true;
-      # We still map it to host for Attic (running on host) to connect to it via bridge IP or name
-      # if we set up networking/DNS correctly.
-      ports = [
-        "5432:5432/tcp"
-      ];
-      environment = {
-        POSTGRES_USER = cfg.databaseName;
-        POSTGRES_PASSWORD_FILE = config.sops.secrets."attic/db-password".path;
-        POSTGRES_DB = cfg.databaseName;
-      };
-      volumes = [
-        "attic-db:/var/lib/postgresql/data"
-      ];
-    };
-
-    # Map the container name to localhost if Attic is on the host
-    networking.extraHosts = ''
-      127.0.0.1 ${cfg.dbContainerName}
-    '';
-
-    networking.firewall.allowedTCPPorts = lib.mkIf cfg.openFirewall [ cfg.port ];
-
-    # Remote build host configuration
-    nix.settings.trusted-users = lib.mkIf cfg.enableRemoteBuilder [ "root" "@wheel" "builder" ];
-
-    users.users.builder = lib.mkIf cfg.enableRemoteBuilder {
-      isNormalUser = true;
-      group = "builder";
-      openssh.authorizedKeys.keys = [
-        # Placeholders - user should provide actual keys
-        "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFrp6aM62Bf7bj1YM5AlAWuNrANU3N5e8+LtbbpmZPKS"
-      ];
-    };
-    users.groups.builder = lib.mkIf cfg.enableRemoteBuilder {};
-
-    # Only open SSH if remote builder is enabled
-    services.openssh.ports = lib.mkIf cfg.enableRemoteBuilder [ 22 ];
-    networking.firewall.allowedTCPPorts = lib.mkIf cfg.enableRemoteBuilder [ 22 ];
  };
}
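This removed module is a worked example of the encapsulation rule in the style guide: `openFirewall` and `enableRemoteBuilder` live in the service module, which opens TCP 8080 and SSH 22 itself. A hypothetical host consuming it would only have toggled options, never touched the firewall directly:

```nix
# Hypothetical host config for the removed attic module; option names are
# taken from the module above, the combination shown is illustrative.
{
  homelab.services.attic = {
    enable = true;
    openFirewall = true;        # module opens its own port (default 8080)
    enableRemoteBuilder = true; # module opens SSH 22 and sets up the builder user
  };
  # No networking.firewall.allowedTCPPorts here: per the style guide,
  # firewall responsibility stays inside modules/services/attic/default.nix.
}
```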

View file

@@ -1,7 +1,6 @@
 {
   imports = [
     ./actions
-    ./attic
     ./openssh
   ];
 }

View file

@@ -26,9 +26,7 @@ in {
       config.users.groups.wheel.name # Enable 'sudo' for the user.
     ];
     initialPassword = "ChangeMe";
-    openssh.authorizedKeys.keyFiles = [
-      config.sops.secrets.user_keys_admin.path
-    ];
+    openssh.authorizedKeys.keys = cfg.authorizedKeys;
     packages = with pkgs; [
       curl
       git

View file

@@ -12,8 +12,9 @@ in {
     extraGroups = [
       "docker" # Allow access to the docker socket.
     ];
-    openssh.authorizedKeys.keyFiles = [
-      config.sops.secrets.user_keys_backup.path
+    openssh.authorizedKeys.keys = [
+      # Hugo
+      "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICms6vjhE9kOlqV5GBPGInwUHAfCSVHLI2Gtzee0VXPh"
     ];
   };
 };

View file

@@ -3,19 +3,7 @@
 let
   cfg = config.homelab.users.deploy;
 in {
-  options.homelab.users.deploy = {
-    enable = lib.mkEnableOption "user Deploy";
-
-    authorizedKeys = lib.mkOption {
-      type = lib.types.listOf lib.types.str;
-      default = [];
-      description = ''
-        Additional SSH public keys authorized for the deploy user.
-        The CI runner key should be provided as a base key; personal
-        workstation keys can be appended here per host or globally.
-      '';
-    };
-  };
+  options.homelab.users.deploy.enable = lib.mkEnableOption "user Deploy";

   config = lib.mkIf cfg.enable {
     users = {
@@ -27,15 +15,12 @@ in {
       isSystemUser = true;
       home = "/var/empty";
       shell = pkgs.bashInteractive;
-      openssh.authorizedKeys.keyFiles = [
-        config.sops.secrets.user_keys_deploy.path
+      openssh.authorizedKeys.keys = [
+        "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPrG+ldRBdCeHEXrsy/qHXIJYg8xQXVuiUR0DxhFjYNg"
       ];
     };
   };

-  # Allow the deploy user to push closures to the nix store
-  nix.settings.trusted-users = [ "deploy" ];
-
   security.sudo.extraRules = [
     {
       groups = [