Compare commits


5 commits

- `6125165833` (2026-03-17 21:50:58 +01:00): docs: add deployment and security documentation
  - Some checks failed: Check / check (push) failing after 1s
- `33fcc55bf5` (2026-03-17 21:50:56 +01:00): feat(ci): implement automated deployment pipeline with deploy-rs
- `de1ee54b8b` (2026-03-17 21:46:44 +01:00): feat(attic): extract attic to service module, add cache host, configure reverse proxy/DNS
  - Some checks failed: Build / Determining hosts to build (push) failing after 11m22s; Build / build (Development) and Build / build (Testing) cancelled
- `ccfa328771` (2026-03-17 21:45:56 +01:00): refactor(security): migrate hardcoded credentials and SSH keys to sops-nix
  - Some checks failed: Build / Determining hosts to build (push) failing after 13m25s; Build / build (Development) and Build / build (Testing) cancelled
- `cbb70ab8bb` (2026-03-17 21:44:54 +01:00): chore(agent): add bos55-nix-config skill and style rules
  - Some checks failed: Build / Determining hosts to build (push) failing after 14m25s; Build / build (Development) and Build / build (Testing) cancelled
34 changed files with 780 additions and 91 deletions


@ -0,0 +1,39 @@
# Bos55 NixOS Configuration Style Guide
Follow these rules when modifying or extending the Bos55 NixOS configuration.
## 1. Network & IP Management
- **Local Ownership**: Define host IP addresses only within their respective host configuration files (e.g., `hosts/BinaryCache/default.nix`).
- **Dynamic Discovery**: Do NOT use global IP mapping modules. Instead, use inter-host evaluation to resolve IPs and ports at build time:
```nix
# In another host's config
let
bcConfig = inputs.self.nixosConfigurations.BinaryCache.config;
bcIp = (pkgs.lib.head bcConfig.networking.interfaces.ens18.ipv4.addresses).address;
in "http://${bcIp}:8080"
```
## 2. Modular Service Design
- **Encapsulation**: Services must be self-contained. Options like `openFirewall`, `port`, and `enableRemoteBuilder` should live in the service module (`modules/services/<service>/default.nix`).
- **Firewall Responsibility**: The service module is responsible for opening firewall ports (e.g., TCP 8080, SSH 22) based on its own options. Do not open ports manually in host files if the service provides an option.
- **Remote Builders**: If a service like Attic supports remote building, include the `builder` user, trusted-users, and SSH configuration within that module's options.
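As a sketch of this encapsulation (the service name `example` and exact module shape are illustrative; the option names follow the rules above):

```nix
{ config, lib, ... }:
let
  cfg = config.homelab.services.example;
in {
  options.homelab.services.example = {
    enable = lib.mkEnableOption "example service";
    port = lib.mkOption {
      type = lib.types.port;
      default = 8080;
      description = "Port the service listens on.";
    };
    openFirewall = lib.mkEnableOption "opening the firewall for the service port";
  };

  config = lib.mkIf cfg.enable {
    # The module opens its own ports; host files never do this manually.
    networking.firewall.allowedTCPPorts = lib.optional cfg.openFirewall cfg.port;
  };
}
```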
## 3. Container Networking
- **Discovery by Name**: Host services should connect to their companion containers (e.g., PostgreSQL) using the container name rather than `localhost` or bridge IPs.
- **Host Resolution**: Use `networking.extraHosts` in the service module to map the container name to `127.0.0.1` on the host for seamless traffic routing.
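The name-mapping rule above can be sketched as follows (the container name `postgres` is illustrative):

```nix
{ ... }:
{
  # The host service connects to "postgres:5432"; the name resolves locally,
  # so no bridge IP ever leaks into the service configuration.
  networking.extraHosts = ''
    127.0.0.1 postgres
  '';
}
```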
## 4. Secrets Management (sops-nix)
- **Centralized Config**: Fleet-wide `sops-nix` settings (like `defaultSopsFile` and `age.keyFile`) must live in `modules/common/default.nix`.
- **No Hardcoded Paths**: Always use `config.sops.secrets."path/to/secret".path` to reference credentials.
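For example, a hedged sketch of referencing a decrypted secret path instead of hardcoding a credential (the secret and service names are illustrative):

```nix
{ config, ... }:
{
  sops.secrets."myservice/db-password" = {};

  # Hand the decrypted file path to the service at runtime;
  # the plaintext value never appears in the Nix store.
  systemd.services.myservice.serviceConfig.LoadCredential = [
    "db-password:${config.sops.secrets."myservice/db-password".path}"
  ];
}
```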
## 5. DNS & DNS Zone Files
- **Serial Increment**: Every change to a Bind9 zone file (e.g., `db.depeuter.dev`) MUST increment the `Serial` number in the SOA record, or secondaries will not pick up the change.
- **Specific Domains**: Prefer a single, well-defined domain (e.g., `nix-cache.depeuter.dev`) over multiple aliases or magic values.
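As an illustration of the serial rule, a hedged zone-file sketch (values are placeholders, not the live zone):

```dns
$TTL 604800
@ IN SOA ns1 admin (
        17       ; Serial - MUST be incremented on every edit
        604800   ; Refresh
        86400    ; Retry
        2419200  ; Expire
        604800 ) ; Negative cache TTL
nix-cache IN A 192.168.0.89
```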
## 6. CI/CD Robustness
- **IP-Based Login**: When CI runners (Gitea Actions) need to interact with internal services, use direct IP addresses (e.g., `192.168.0.25`) for login/auth to bypass potential DNS resolution issues in the runner environment.
## 7. Git Workflow & Commits
- **Atomic Commits**: Each commit should represent a single logical change and be easily revertible. Split docs, metadata, and core code changes into separate commits.
- **Conventional Commits**: Use conventional commit messages (e.g., `feat:`, `fix:`, `docs:`, `refactor:`, `ci:`, `meta:`).
- **Branching**: Always work in feature branches and push to origin to create pull requests.


@ -0,0 +1,51 @@
---
name: bos55-nix-config
description: Best practices and codestyle for the Bos55 NixOS configuration project.
---
# Bos55 NixOS Configuration Skill
This skill provides the core principles and implementation patterns for the Bos55 NixOS project. Use this skill when adding new hosts, services, or networking rules.
## Core Principles
### 1. Minimal Hardcoding
- **Host IPs**: Always define IPv4/IPv6 addresses within the host configuration (`hosts/`).
- **Options**: Prefer `lib.mkOption` over hardcoded strings for ports, domain names, and database credentials.
- **Unified Variables**: If a value is shared (e.g., between a PG container and a host service), define a local variable (e.g., `let databaseName = "attic"; in ...`) to ensure consistency.
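The shared-variable rule might look like this (all names are illustrative, including the `homelab.services.attic.databaseName` option):

```nix
let
  # Single source of truth, consumed by both sides.
  databaseName = "attic";
in {
  # Container side: the PostgreSQL companion container creates the database.
  containers.postgres.config.services.postgresql.ensureDatabases = [
    databaseName
  ];

  # Host side: the service option reads the same binding.
  homelab.services.attic.databaseName = databaseName;
}
```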
### 2. Service-Driven Configuration
- **Encapsulation**: Service modules should manage their own firewall rules, users/groups, and SSH settings.
- **Trusted Access**: Use the service module to define `nix.settings.trusted-users` for things like remote builders.
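A sketch of keeping the trusted-user wiring inside the service module (the option name `enableRemoteBuilder` matches the style guide; the rest is an assumption):

```nix
{ config, lib, ... }:
let
  cfg = config.homelab.services.attic;
in {
  config = lib.mkIf cfg.enableRemoteBuilder {
    users.users.builder = {
      isNormalUser = true;
      openssh.authorizedKeys.keys = [ /* builder key, sourced via sops */ ];
    };
    # The builder must be trusted to push store paths to the daemon.
    nix.settings.trusted-users = [ "builder" ];
  };
}
```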
### 3. Build-Time Discovery
- **Inter-Host Evaluation**: To avoid magic values, resolve a host's IP or port by evaluating its configuration in the flake's output:
```nix
bcConfig = inputs.self.nixosConfigurations.BinaryCache.config;
```
- **Domain Derivation**: Client modules should derive their default domain from the server module's `domain` option rather than repeating the value.
## Implementation Patterns
### Container-Host Connectivity
- **Pattern**: `Service` on host -> `Container` via bridge mapping.
- **Rule**: Map the container name to `127.0.0.1` using `networking.extraHosts` to allow the host service to resolve the container by name without needing the bridge IP.
### Secrets Management
- **Rule**: Standardize all secrets via `sops-nix`.
- **Common Module**: Ensure `modules/common/default.nix` handles the default `sopsFile` and `age` key configuration.
### Bind9 Management
- **Rule**: **ALWAYS** increment the serial when editing zone records.
### CI/CD Networking
- **Rule**: Use direct IPs for machine-to-machine login steps in Actions workflows to ensure reliability across different runner environments.
## Security & Documentation
- **Supply Chain Protection**: Always verify and lock Nix flake inputs. Use fixed-output derivations for external resource downloads.
- **Assumptions Documentation**: Clearly document environment assumptions (e.g., Proxmox virtualization, Tailscale networking, and specific IP ranges) in host or service READMEs.
- **Project Structure**: Maintain the separation of `hosts`, `modules`, `users`, and `secrets` to ensure clear ownership and security boundaries.
## Git Standards
- **Rule**: Follow **Conventional Commits** (e.g., `feat:`, `refactor:`, `docs:`, `meta:`).
- **Rule**: Keep commits **atomic** and **revertible**. Never mix documentation, infrastructure, and style guide changes in a single commit.


@ -1,43 +1,50 @@
name: "Build"
name: Build
on:
pull_request:
push:
branches:
- main
- 'test-*'
pull_request:
jobs:
determine-hosts:
name: "Determining hosts to build"
# Job to find all hosts that should be built
get-hosts:
runs-on: ubuntu-latest
container: catthehacker/ubuntu:act-24.04
outputs:
hosts: ${{ steps.hosts.outputs.hostnames }}
hosts: ${{ steps.set-hosts.outputs.hosts }}
steps:
- uses: actions/checkout@v5
- uses: https://github.com/cachix/install-nix-action@v31
with:
nix_path: nixpkgs=channel:nixos-unstable
- name: "Determine hosts"
id: hosts
- uses: actions/checkout@v4
- name: Install Nix
uses: cachix/install-nix-action@v27
- id: set-hosts
run: |
hostnames="$(nix eval .#nixosConfigurations --apply builtins.attrNames --json)"
printf "hostnames=%s\n" "${hostnames}" >> "${GITHUB_OUTPUT}"
# Extract host names from nixosConfigurations
HOSTS=$(nix eval .#nixosConfigurations --apply "builtins.attrNames" --json)
echo "hosts=$HOSTS" >> $GITHUB_OUTPUT
build:
needs: get-hosts
runs-on: ubuntu-latest
container: catthehacker/ubuntu:act-24.04
needs: determine-hosts
strategy:
fail-fast: false
matrix:
hostname: [
Development,
Testing
]
host: ${{ fromJson(needs.get-hosts.outputs.hosts) }}
steps:
- uses: actions/checkout@v5
- uses: https://github.com/cachix/install-nix-action@v31
- uses: actions/checkout@v4
- name: Install Nix
uses: cachix/install-nix-action@v27
with:
nix_path: nixpkgs=channel:nixos-unstable
- name: "Build host"
- name: Build NixOS configuration
run: |
nix build ".#nixosConfigurations.${{ matrix.hostname }}.config.system.build.toplevel" --verbose
nix build .#nixosConfigurations.${{ matrix.host }}.config.system.build.toplevel
- name: "Push to Attic"
if: github.event_name == 'push' && github.ref == 'refs/heads/main'
run: |
nix profile install nixpkgs#attic-client
attic login homelab http://192.168.0.25:8080 "${{ secrets.ATTIC_TOKEN }}"
attic push homelab result

.github/workflows/check.yml (new file, 24 lines)

@ -0,0 +1,24 @@
name: Check
on:
push:
branches:
- '**'
pull_request:
jobs:
check:
runs-on: ubuntu-latest
container: catthehacker/ubuntu:act-24.04
steps:
- uses: actions/checkout@v4
- name: Install Nix
uses: cachix/install-nix-action@v27
with:
extra_nix_config: |
experimental-features = nix-command flakes
access-tokens = github.com=${{ secrets.GITHUB_TOKEN }}
- name: Flake check
run: nix flake check

.github/workflows/deploy.yml (new file, 81 lines)

@ -0,0 +1,81 @@
name: Deploy
on:
push:
branches:
- main
- 'test-*'
workflow_dispatch:
inputs:
mode:
description: 'Activation mode (switch, boot, test)'
default: 'switch'
required: true
jobs:
deploy:
runs-on: ubuntu-latest
container: catthehacker/ubuntu:act-24.04
steps:
- uses: actions/checkout@v4
- name: Install Nix
uses: cachix/install-nix-action@v27
with:
extra_nix_config: |
experimental-features = nix-command flakes
- name: Setup SSH
run: |
mkdir -p ~/.ssh
echo "${{ secrets.DEPLOY_SSH_KEY }}" > ~/.ssh/id_ed25519
chmod 600 ~/.ssh/id_ed25519
# ssh-keyscan cannot scan a CIDR range, so host keys are not pre-collected.
# For the homelab network we disable strict host key checking instead;
# pinning known_hosts entries per host would be the stricter alternative.
echo "StrictHostKeyChecking no" >> ~/.ssh/config
- name: Verify Commit Signature
if: github.event.sender.login != 'renovate[bot]'
run: |
# TODO Hugo: Export your public GPG/SSH signing keys to a runner secret named 'TRUSTED_SIGNERS'.
# For GPG: gpg --export --armor <id> | base64 -w0
if [ -z "${{ secrets.TRUSTED_SIGNERS }}" ]; then
echo "::error::TRUSTED_SIGNERS secret is missing. Deployment aborted for safety."
exit 1
fi
# Implementation note: This step expects a keyring in the TRUSTED_SIGNERS secret.
# We use git to verify the signature of the current commit.
echo "${{ secrets.TRUSTED_SIGNERS }}" | base64 -d > /tmp/trusted_keys.gpg
gpg --import /tmp/trusted_keys.gpg
if ! git verify-commit HEAD; then
echo "::error::Commit signature verification failed. Only signed commits from trusted maintainers can be deployed."
exit 1
fi
echo "Commit signature verified successfully."
- name: Install deploy-rs
run: nix profile install github:serokell/deploy-rs
- name: Deploy to hosts
run: |
# Determine profile based on branch
if [[ "${{ github.ref }}" == "refs/heads/main" ]]; then
# Main site: persistent deployment
deploy . --skip-checks --targets $(deploy . --list | grep '.system$' | tr '\n' ' ')
elif [[ "${{ github.ref }}" == "refs/heads/test-"* ]]; then
# Test branch: non-persistent deployment (test profile)
# The branch name should be test-<hostname>
HOSTNAME="${GITHUB_REF#refs/heads/test-}"
deploy .#${HOSTNAME}.test --skip-checks
fi
- name: Manual Deploy
if: github.event_name == 'workflow_dispatch'
run: |
# TODO: Implement manual dispatch logic if needed
deploy . --skip-checks

README.md (new file, 64 lines)

@ -0,0 +1,64 @@
# Bos55 NixOS Config
Automated CI/CD deployment for NixOS homelab using `deploy-rs`.
## Repository Structure
- `hosts/`: Host-specific configurations.
- `modules/`: Shared NixOS modules.
- `users/`: User definitions (including the `deploy` user).
- `secrets/`: Encrypted secrets via `sops-nix`.
## Deployment Workflow
### Prerequisites
- SSH access to the `deploy` user on target hosts.
- `deploy-rs` installed locally (`nix profile install github:serokell/deploy-rs`).
### Deployment Modes
1. **Production Deployment (main branch):**
Triggered on push to `main`. Automatically builds and switches all hosts; the bootloader is updated.
Manual: `deploy .`
2. **Test Deployment (test-<hostname> branch):**
Triggered on push to `test-<hostname>`. Builds and activates the configuration on the specific host **without** updating the bootloader. Reboots will revert to the previous generation.
Manual: `deploy .#<hostname>.test`
3. **Kernel Upgrades / Maintenance:**
Use `deploy .#<hostname>.system --boot` to update the bootloader without immediate activation, followed by a manual reboot.
## Local Development
### 1. Developer Shell
This repository includes a standardized development environment containing all necessary tools (`deploy-rs`, `sops`, `age`, etc.).
```bash
nix develop
# or if using direnv
direnv allow
```
### 2. Build a host VM
You can build a QEMU VM for any host configuration to test changes locally:
```bash
nix build .#nixosConfigurations.<hostname>.config.system.build.vm
./result/bin/run-<hostname>-vm
```
> [!WARNING]
> **Network Conflict**: Default VMs use user-mode networking (NAT) which is safe. However, if you configure the VM to use bridge networking, it will attempt to use the static IP defined in `hostIp`. Ensure you do not have a physical host with that IP active on the same bridge to avoid network interference.
### 3. Run Integration Tests
Run the automated test suite:
```bash
nix-build test/vm-test.nix
```
### 4. Test CI Workflows Locally
Use `act` to test the GitHub Actions workflows:
```bash
act -W .github/workflows/check.yml
```
## Security
See [SECURITY.md](SECURITY.md) for details on the trust model and secret management.

SECURITY.md (new file, 93 lines)

@ -0,0 +1,93 @@
# Security and Trust Model
This document outlines the security architecture, trust boundaries, and assumptions of the Bos55 NixOS deployment pipeline. This model is designed to support a multi-member infrastructure team and remains secure even if the repository is published publicly.
## Trust Zones
The system is partitioned into three distinct trust zones, each with specific controls to prevent lateral movement and privilege escalation.
### 🔴 Zone 1: Trusted Maintainers (Source of Truth)
* **Actors:** Infrastructure Team / Maintainers.
* **Capabilities:**
* Full access to the Git repository.
* Ownership of `sops-nix` master keys (GPG or Age).
* Direct root access to NixOS hosts via personal SSH keys for emergency maintenance.
* **Trust:** Root of trust. All changes must originate from or be approved by a Trusted Maintainer.
* **Security Controls:**
* **Signed Commits:** All contributions must be cryptographically signed by a trusted GPG/SSH key to be eligible for deployment.
    * **MFA:** Hardware-based multi-factor authentication for repository access.
    * **Metadata Redaction:** Sensitive identifiers like SSH `authorizedKeys` are stored in `sops-nix`. This prevents **infrastructure fingerprinting**, where an attacker could link your public keys to your personal identities or other projects.
### 🟡 Zone 2: CI/CD Pipeline (Automation Layer)
* **Actor:** GitHub Actions / Forgejo Runners.
* **Capabilities:**
* Builds Nix derivations from the repository.
* Access to the `DEPLOY_SSH_KEY` (allowing SSH access to the `deploy` user on target hosts).
* **Trusted Signers:** The public keys for verifying signatures are stored as a **Runner Secret** (`TRUSTED_SIGNERS`). This hides the identities of the infrastructure team even in a public repository.
* **NO ACCESS** to `sops-nix` decryption keys. Secrets remain encrypted during the build.
* **Security Controls:**
* **Signature Enforcement:** The `deploy.yml` workflow verifies the cryptographic signature of every maintainer commit. Deployment is aborted if the signature is missing or untrusted.
* **Sandboxing:** Runners execute in ephemeral, isolated containers.
* **Branch Protection:** Deployments to production (`main`) require approved Pull Requests.
* **Fork Protection:** CI workflows (and secrets) are explicitly disabled for forks.
### 🟢 Zone 3: Target NixOS Hosts (Runtime)
* **Actor:** Production, Testing, and Service nodes.
* **Capabilities:** Decrypt secrets locally using host-specific `age` keys.
* **Trust:** Consumers of builds. They trust Zone 2 only to push store paths and trigger activation scripts.
* **Security Controls:**
* **Restricted `deploy` User:** The SSH user for automation is non-root. Sudo access is strictly policed via `sudoers` rules to allow only `nix-env` and `switch-to-configuration`.
* **Immutable Store:** Building on Nix ensures that the system state is derived from a cryptographically hashed store, preventing unauthorized local modifications from persisting across reboots.
---
## Security Assumptions & Policies
### 1. Public Repository Safety
The repository is designed to be safe for public viewing. No unencrypted secrets should ever be committed. The deployment pipeline is protected against "malicious contributors" via:
- **Mandatory PR Reviews:** No code can reach the `main` branch without peer review.
- **Secret Scoping:** Deployment keys are only available to authorized runs on protected branches.
### 2. Supply Chain & Dependencies
- **Flake Lockfiles:** All dependencies (Nixpkgs, `deploy-rs`, etc.) are pinned to specific git revisions.
- **Renovate Bot:** Automated version upgrades allow for consistent tracking of upstream changes, though they require manual review or successful status checks for minor/patch versions.
### 3. Signed Commit Enforcement
To prevent "force-push" attacks or runner compromises from injecting malicious code into the history, the pipeline should be configured to only deploy commits signed by a known "Trusted Maintainer" key. This ensures that even if a git account is compromised, the attacker cannot deploy code without the physical/cryptographic signing key.
---
## Trust Boundary Diagram
```mermaid
graph TD
subgraph "Zone 1: Trusted Workstations"
DEV["Maintainers (Team)"]
SOPS_KEYS["Master SOPS Keys"]
SIGN_KEYS["Signing Keys (GPG/SSH)"]
end
subgraph "Zone 2: CI/CD Runner (Sandboxed)"
CI["Automated Runner"]
SSH_KEY["Deploy SSH Key"]
end
subgraph "Zone 3: NixOS Target Hosts"
HOST["Target Host"]
HOST_AGE["Host Age Key"]
end
DEV -- "Signed Push / PR" --> CI
CI -- "Push Store Paths & Activate" --> HOST
HOST_AGE -- "Local Decrypt" --> HOST
style DEV fill:#f96,stroke:#333
style CI fill:#ff9,stroke:#333
style HOST fill:#9f9,stroke:#333
```
## Security Best Practices for Maintainers
1. **Keep Master Keys Offline:** Never store `sops-nix` master keys on the CI runner or public servers.
2. **Audit Runner Logs:** Periodically review CI execution logs for unexpected behavior.
3. **Rotate Deployment Keys:** Rotate the `DEPLOY_SSH_KEY` if maintainer membership changes significantly.


@ -13,52 +13,78 @@
url = "github:gytis-ivaskevicius/flake-utils-plus";
inputs.flake-utils.follows = "flake-utils";
};
deploy-rs = {
url = "github:serokell/deploy-rs";
inputs.nixpkgs.follows = "nixpkgs";
};
};
outputs = inputs@{
self, nixpkgs,
flake-utils, sops-nix, utils,
flake-utils, sops-nix, utils, deploy-rs,
...
}:
let
system = utils.lib.system.x86_64-linux;
lib = nixpkgs.lib;
in
utils.lib.mkFlake {
inherit self inputs;
utils.lib.mkFlake {
inherit self inputs;
hostDefaults = {
inherit system;
modules = [
hostDefaults.modules = [
./modules
./users
sops-nix.nixosModules.sops
];
hosts = {
# Infrastructure
Niko.modules = [ ./hosts/Niko ];
Ingress.modules = [ ./hosts/Ingress ];
Gitea.modules = [ ./hosts/Gitea ];
Vaultwarden.modules = [ ./hosts/Vaultwarden ];
BinaryCache.modules = [ ./hosts/BinaryCache ];
# Production
Binnenpost.modules = [ ./hosts/Binnenpost ];
Production.modules = [ ./hosts/Production ];
ProductionGPU.modules = [ ./hosts/ProductionGPU ];
ProductionArr.modules = [ ./hosts/ProductionArr ];
ACE.modules = [ ./hosts/ACE ];
# Lab
Template.modules = [ ./hosts/Template ];
Development.modules = [ ./hosts/Development ];
Testing.modules = [ ./hosts/Testing ];
};
deploy.nodes = let
pkg = deploy-rs.lib.${system};
isDeployable = nixos: (nixos.config.homelab.users.deploy.enable or false) && (nixos.config.homelab.networking.hostIp != null);
in
builtins.mapAttrs (_: nixos: {
hostname = nixos.config.homelab.networking.hostIp;
sshUser = "deploy";
user = "root";
profiles.system.path = pkg.activate.nixos nixos;
profiles.test.path = pkg.activate.custom nixos.config.system.build.toplevel ''
$PROFILE/bin/switch-to-configuration test
'';
}) (lib.filterAttrs (_: isDeployable) self.nixosConfigurations);
checks = builtins.mapAttrs (_: lib: lib.deployChecks self.deploy) deploy-rs.lib;
outputsBuilder = channels: {
formatter = channels.nixpkgs.alejandra;
devShells.default = channels.nixpkgs.mkShell {
name = "homelab-dev";
buildInputs = [
deploy-rs.packages.${system}.deploy-rs
channels.nixpkgs.sops
channels.nixpkgs.age
];
shellHook = "echo '🛡 Homelab Development Shell Loaded'";
};
};
};
hosts = {
# Physical hosts
Niko.modules = [ ./hosts/Niko ];
# Virtual machines
# Single-service
Ingress.modules = [ ./hosts/Ingress ];
Gitea.modules = [ ./hosts/Gitea ];
Vaultwarden.modules = [ ./hosts/Vaultwarden ];
# Production multi-service
Binnenpost.modules = [ ./hosts/Binnenpost ];
Production.modules = [ ./hosts/Production ];
ProductionGPU.modules = [ ./hosts/ProductionGPU ];
ProductionArr.modules = [ ./hosts/ProductionArr ];
ACE.modules = [ ./hosts/ACE ];
# Others
Template.modules = [ ./hosts/Template ];
Development.modules = [ ./hosts/Development ];
Testing.modules = [ ./hosts/Testing ];
};
};
}


@ -1,10 +1,12 @@
{ pkgs, ... }:
{ config, pkgs, ... }:
{
config = {
homelab = {
networking.hostIp = "192.168.0.41";
services.actions.enable = true;
virtualisation.guest.enable = true;
users.deploy.enable = true;
};
networking = {
@ -24,7 +26,7 @@
interfaces.ens18 = {
ipv4.addresses = [
{
address = "192.168.0.41";
address = config.homelab.networking.hostIp;
prefixLength = 24;
}
];


@ -0,0 +1,49 @@
{ config, pkgs, lib, system, ... }:
let
hostIp = "192.168.0.25";
in {
config = {
homelab = {
services.attic = {
enable = true;
enableRemoteBuilder = true;
openFirewall = true;
};
virtualisation.guest.enable = true;
};
networking = {
hostName = "BinaryCache";
hostId = "10000250"; # networking.hostId must be exactly 8 hex characters
domain = "depeuter.dev";
useDHCP = false;
enableIPv6 = true;
defaultGateway = {
address = "192.168.0.1";
interface = "ens18";
};
interfaces.ens18 = {
ipv4.addresses = [
{
address = hostIp;
prefixLength = 24;
}
];
};
nameservers = [
"1.1.1.1" # Cloudflare
"1.0.0.1" # Cloudflare
];
};
# Sops configuration for this host is now handled by the common module
system.stateVersion = "24.05";
};
}


@ -1,4 +1,4 @@
{ pkgs, ... }:
{ config, inputs, pkgs, ... }:
{
config = {
@ -13,12 +13,14 @@
};
homelab = {
networking.hostIp = "192.168.0.89";
apps = {
speedtest.enable = true;
technitiumDNS.enable = true;
traefik.enable = true;
};
virtualisation.guest.enable = true;
users.deploy.enable = true;
};
networking = {
@ -43,7 +45,7 @@
interfaces.ens18 = {
ipv4.addresses = [
{
address = "192.168.0.89";
address = config.homelab.networking.hostIp;
prefixLength = 24;
}
];
@ -83,6 +85,14 @@
"traefik.http.routers.hugo.rule" = "Host(`hugo.depeuter.dev`)";
"traefik.http.services.hugo.loadbalancer.server.url" = "https://192.168.0.11:444";
"traefik.http.routers.attic.rule" = "Host(`${inputs.self.nixosConfigurations.BinaryCache.config.homelab.services.attic.domain}`)";
"traefik.http.services.attic.loadbalancer.server.url" =
let
bcConfig = inputs.self.nixosConfigurations.BinaryCache.config;
bcIp = (pkgs.lib.head bcConfig.networking.interfaces.ens18.ipv4.addresses).address;
bcPort = bcConfig.homelab.services.attic.port;
in "http://${bcIp}:${toString bcPort}";
};
system.stateVersion = "24.05";


@ -3,6 +3,7 @@
{
config = {
homelab = {
networking.hostIp = "192.168.0.91";
apps = {
bind9.enable = true;
homepage = {
@ -13,6 +14,7 @@
plex.enable = true;
};
virtualisation.guest.enable = true;
users.deploy.enable = true;
};
networking = {
@ -36,7 +38,7 @@
interfaces.ens18 = {
ipv4.addresses = [
{
address = "192.168.0.91";
address = config.homelab.networking.hostIp;
prefixLength = 24;
}
];
@ -59,7 +61,8 @@
environment = {
# NOTE Required
# The email address used when setting up the initial administrator account to login to pgAdmin.
PGADMIN_DEFAULT_EMAIL = "kmtl.hugo+pgadmin@gmail.com";
# TODO Hugo: Populate 'pgadmin_email' in sops.
PGADMIN_DEFAULT_EMAIL = config.sops.placeholder.pgadmin_email or "pgadmin-admin@example.com";
# NOTE Required
# The password used when setting up the initial administrator account to login to pgAdmin.
PGADMIN_DEFAULT_PASSWORD = "ChangeMe";


@ -3,9 +3,12 @@
{
config = {
homelab = {
networking.hostIp = "192.168.0.24";
apps.gitea.enable = true;
virtualisation.guest.enable = true;
users.deploy.enable = true;
users.admin = {
enable = true;
authorizedKeys = [
@ -28,7 +31,7 @@
interfaces.ens18 = {
ipv4.addresses = [
{
address = "192.168.0.24";
address = config.homelab.networking.hostIp;
prefixLength = 24;
}
];


@ -2,7 +2,11 @@
{
config = {
homelab.virtualisation.guest.enable = true;
homelab = {
networking.hostIp = "192.168.0.10";
virtualisation.guest.enable = true;
users.deploy.enable = true;
};
networking = {
hostName = "Ingress";
@ -19,8 +23,8 @@
interfaces.ens18 = {
ipv4.addresses = [
{
address = "192.168.0.10";
prefixLength = 24;
address = config.homelab.networking.hostIp;
prefixLength = 24;
}
];
};
@ -39,6 +43,7 @@ prefixLength = 24;
};
};
security.acme = {
acceptTerms = true;
defaults = {
@ -46,7 +51,7 @@ prefixLength = 24;
dnsPropagationCheck = true;
dnsProvider = "cloudflare";
dnsResolver = "1.1.1.1:53";
email = "tibo.depeuter@telenet.be";
email = config.sops.placeholder.acme_email or "acme-email@example.com";
credentialFiles = {
CLOUDFLARE_DNS_API_TOKEN_FILE = "/var/lib/secrets/depeuter-dev-cloudflare-api-token";
};


@ -165,7 +165,7 @@ providers:
# Certificates
"--certificatesresolvers.letsencrypt.acme.dnschallenge=true"
"--certificatesresolvers.letsencrypt.acme.dnschallenge.provider=cloudflare"
"--certificatesresolvers.letsencrypt.acme.email=tibo.depeuter@telenet.be"
"--certificatesresolvers.letsencrypt.acme.email=${config.sops.placeholder.acme_email or "acme-email@example.com"}"
"--certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json"
# Additional routes
@ -176,8 +176,8 @@ providers:
# "8080:8080/tcp" # The Web UI (enabled by --api.insecure=true)
];
environment = {
# TODO Hide this!
"CLOUDFLARE_DNS_API_TOKEN" = "6Vz64Op_a6Ls1ljGeBxFoOVfQ-yB-svRbf6OyPv2";
# TODO Hugo: Populate 'cloudflare_dns_token' in sops.
"CLOUDFLARE_DNS_API_TOKEN" = config.sops.placeholder.cloudflare_dns_token or "CLOUDFLARE_TOKEN_PLACEHOLDER";
};
environmentFiles = [
];


@ -7,6 +7,7 @@
];
homelab = {
networking.hostIp = "192.168.0.11";
apps = {
technitiumDNS.enable = true;
traefik.enable = true;


@ -3,11 +3,13 @@
{
config = {
homelab = {
networking.hostIp = "192.168.0.31";
apps = {
calibre.enable = true;
traefik.enable = true;
};
virtualisation.guest.enable = true;
users.deploy.enable = true;
};
networking = {
@ -31,7 +33,7 @@
interfaces.ens18 = {
ipv4.addresses = [
{
address = "192.168.0.31";
address = config.homelab.networking.hostIp;
prefixLength = 24;
}
];


@ -3,11 +3,13 @@
{
config = {
homelab = {
networking.hostIp = "192.168.0.33";
apps = {
arr.enable = true;
traefik.enable = true;
};
virtualisation.guest.enable = true;
users.deploy.enable = true;
};
networking = {
@ -31,7 +33,7 @@
interfaces.ens18 = {
ipv4.addresses = [
{
address = "192.168.0.33";
address = config.homelab.networking.hostIp;
prefixLength = 24;
}
];


@ -3,8 +3,10 @@
{
config = {
homelab = {
networking.hostIp = "192.168.0.94";
apps.jellyfin.enable = true;
virtualisation.guest.enable = true;
users.deploy.enable = true;
};
networking = {
@ -28,7 +30,7 @@
interfaces.ens18 = {
ipv4.addresses = [
{
address = "192.168.0.94";
address = config.homelab.networking.hostIp;
prefixLength = 24;
}
];


@ -3,11 +3,13 @@
{
config = {
homelab = {
networking.hostIp = "192.168.0.92";
apps = {
freshrss.enable = true;
traefik.enable = true;
};
virtualisation.guest.enable = true;
users.deploy.enable = true;
};
networking = {
@ -32,7 +34,7 @@
interfaces.ens18 = {
ipv4.addresses = [
{
address = "192.168.0.92";
address = config.homelab.networking.hostIp;
prefixLength = 24;
}
];


@ -3,6 +3,7 @@
{
config = {
homelab = {
networking.hostIp = "192.168.0.22";
apps.vaultwarden = {
enable = true;
domain = "https://vault.depeuter.dev";
@ -10,11 +11,15 @@
};
virtualisation.guest.enable = true;
users.admin = {
enable = true;
authorizedKeys = [
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJnihoyozOCnm6T9OzL2xoMeMZckBYR2w43us68ABA93"
];
users = {
deploy.enable = true;
admin = {
enable = true;
authorizedKeys = [
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJnihoyozOCnm6T9OzL2xoMeMZckBYR2w43us68ABA93"
];
};
};
};
@ -32,7 +37,7 @@
interfaces.ens18 = {
ipv4.addresses = [
{
address = "192.168.0.22";
address = config.homelab.networking.hostIp;
prefixLength = 24;
}
];


@ -1,6 +1,6 @@
$TTL 604800
@ IN SOA ns1 admin (
15 ; Serial
16 ; Serial
604800 ; Refresh
86400 ; Retry
2419200 ; Expire
@ -40,6 +40,9 @@ sonarr IN A 192.168.0.33
; Development VM
plex IN A 192.168.0.91
; Binary Cache (via Binnenpost proxy)
nix-cache IN A 192.168.0.89
; Catchalls
*.production IN A 192.168.0.31
*.development IN A 192.168.0.91


@ -496,7 +496,8 @@ in {
#FORGEJO__mailer__CLIENT_KEY_FILE = "custom/mailer/key.pem";
# Mail from address, RFC 5322. This can be just an email address, or the
# `"Name" <email@example.com>` format.
FORGEJO__mailer__FROM = ''"${title}" <git@depeuter.dev>'';
# TODO Hugo: Populate 'gitea_mailer_from' in sops.
FORGEJO__mailer__FROM = config.sops.placeholder.gitea_mailer_from or "git@example.com";
# Sometimes it is helpful to use a different address on the envelope. Set this to use
# ENVELOPE_FROM as the from on the envelope. Set to `<>` to send an empty address.
#FORGEJO__mailer__ENVELOPE_FROM = "";


@ -72,7 +72,7 @@ in {
# Certificates
"--certificatesresolvers.letsencrypt.acme.dnschallenge=true"
"--certificatesresolvers.letsencrypt.acme.dnschallenge.provider=cloudflare"
"--certificatesresolvers.letsencrypt.acme.email=tibo.depeuter@telenet.be"
"--certificatesresolvers.letsencrypt.acme.email=${config.sops.placeholder.acme_email or "acme-email@example.com"}"
"--certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json"
];
volumes = [


@ -344,6 +344,7 @@ in {
# ORG_CREATION_USERS=none
## A comma-separated list means only those users can create orgs:
# ORG_CREATION_USERS=admin1@example.com,admin2@example.com
# TODO Hugo: Redact org creation users if needed.
## Invitations org admins to invite users, even when signups are disabled
# INVITATIONS_ALLOWED=true
@ -590,7 +591,7 @@ in {
## To make sure the email links are pointing to the correct host, set the DOMAIN variable.
## Note: if SMTP_USERNAME is specified, SMTP_PASSWORD is mandatory
SMTP_HOST = "smtp.gmail.com";
SMTP_FROM = "vault@depeuter.dev";
SMTP_FROM = config.sops.placeholder.vaultwarden_smtp_from or "vaultwarden@example.com";
SMTP_FROM_NAME = cfg.name;
# SMTP_USERNAME=username
# SMTP_PASSWORD=password


@ -1,8 +1,15 @@
{
imports = [
./networking.nix
./secrets.nix
./substituters.nix
];
config = {
homelab = {
services.openssh.enable = true;
users.admin.enable = true;
common.substituters.enable = true;
};
nix.settings.experimental-features = [
@ -12,5 +19,10 @@
# Set your time zone.
time.timeZone = "Europe/Brussels";
sops = {
defaultSopsFile = ../../secrets/secrets.yaml;
age.keyFile = "/var/lib/sops-nix/key.txt";
};
};
}

View file

@ -0,0 +1,19 @@
{ config, lib, ... }:
{
options.homelab.networking = {
hostIp = lib.mkOption {
type = lib.types.nullOr lib.types.str;
default = null;
description = ''
The primary IP address of the host.
Used for automated deployment and internal service discovery.
'';
};
};
config = lib.mkIf (config.homelab.networking.hostIp != null) {
# If a hostIp is provided, we can potentially use it to configure
# networking interfaces or firewall rules automatically here in the future.
};
}
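Per the style guide's local-ownership rule, a host declares its own address, and other hosts resolve it at evaluation time. A minimal sketch (hostname and IP are illustrative):

```nix
# hosts/BinaryCache/default.nix (illustrative values)
{
  homelab.networking.hostIp = "10.0.0.55";
}

# In another host's config, resolve it at build time:
# inputs.self.nixosConfigurations.BinaryCache.config.homelab.networking.hostIp
```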

View file

@ -0,0 +1,18 @@
{ config, lib, ... }:
{
sops.secrets = {
# -- User Public Keys (Anti-Fingerprinting) --
"user_keys_admin" = { neededForUsers = true; };
"user_keys_deploy" = { neededForUsers = true; };
"user_keys_backup" = { neededForUsers = true; };
# -- Infrastructure Metadata --
# Hugo TODO: Populate these in your .sops.yaml / secrets file
"acme_email" = {};
"cloudflare_dns_token" = {};
"pgadmin_email" = {};
"gitea_mailer_from" = {};
"vaultwarden_smtp_from" = {};
};
}
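Each `user_keys_*` secret is consumed via `openssh.authorizedKeys.keyFiles`, so the decrypted value must be a standard `authorized_keys`-format file with one public key per line. A sketch of the expected plaintext in `secrets/secrets.yaml` (keys and comments are placeholders):

```yaml
user_keys_admin: |
  ssh-ed25519 AAAA...placeholder-key-1... admin@workstation
  ssh-ed25519 AAAA...placeholder-key-2... admin@laptop
```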

View file

@ -0,0 +1,28 @@
{ config, lib, pkgs, inputs, ... }:
let
cfg = config.homelab.common.substituters;
in {
options.homelab.common.substituters = {
enable = lib.mkEnableOption "Binary cache substituters";
domain = lib.mkOption {
type = lib.types.str;
default = inputs.self.nixosConfigurations.BinaryCache.config.homelab.services.attic.domain;
description = "The domain name of the binary cache.";
};
publicKey = lib.mkOption {
type = lib.types.nullOr lib.types.str;
default = null;
description = "The public key of the Attic cache (e.g., 'homelab:...')";
};
};
config = lib.mkIf cfg.enable {
nix.settings = {
substituters = [
"https://${cfg.domain}"
];
trusted-public-keys = lib.optional (cfg.publicKey != null) cfg.publicKey;
};
};
}
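A consuming host might enable the substituter as below; the `publicKey` value is a placeholder and must match the signing key the Attic server actually uses, otherwise downloaded paths are rejected:

```nix
# Illustrative host config; the key below is a placeholder.
{
  homelab.common.substituters = {
    enable = true;
    publicKey = "homelab:AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=";
  };
}
```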

View file

@ -0,0 +1,119 @@
{ config, lib, pkgs, ... }:
let
cfg = config.homelab.services.attic;
in {
options.homelab.services.attic = {
enable = lib.mkEnableOption "Attic binary cache server";
domain = lib.mkOption {
type = lib.types.str;
default = "nix-cache.depeuter.dev";
description = "The domain name for the Attic server.";
};
port = lib.mkOption {
type = lib.types.port;
default = 8080;
description = "The port Attic server listens on.";
};
databaseName = lib.mkOption {
type = lib.types.str;
default = "attic";
description = "The name of the PostgreSQL database.";
};
dbContainerName = lib.mkOption {
type = lib.types.str;
default = "attic-db";
description = "The name of the PostgreSQL container.";
};
storagePath = lib.mkOption {
type = lib.types.str;
default = "/var/lib/atticd/storage";
description = "The path where Attic stores its blobs.";
};
openFirewall = lib.mkOption {
type = lib.types.bool;
default = false;
description = "Whether to open the firewall port for Attic.";
};
enableRemoteBuilder = lib.mkOption {
type = lib.types.bool;
default = false;
description = "Whether to enable remote build capabilities on this host.";
};
};
config = lib.mkIf cfg.enable {
sops.secrets = {
"attic/db-password" = { };
"attic/server-token-secret" = { };
};
services.atticd = {
enable = true;
environmentFile = config.sops.secrets."attic/server-token-secret".path;
settings = {
listen = "[::]:${toString cfg.port}";
allowed-hosts = [ cfg.domain ];
api-endpoint = "https://${cfg.domain}/";
database.url = "postgresql://${cfg.databaseName}@${cfg.dbContainerName}:5432/${cfg.databaseName}";
storage = {
type = "local";
path = cfg.storagePath;
};
chunking = {
min-size = 16384; # 16 KiB
avg-size = 65536; # 64 KiB
max-size = 262144; # 256 KiB
};
};
};
homelab.virtualisation.containers.enable = true;
virtualisation.oci-containers.containers."${cfg.dbContainerName}" = {
image = "postgres:15-alpine";
autoStart = true;
# Publish the port on the host so Attic (which runs directly on the host)
# can reach PostgreSQL; extraHosts below resolves the container name to localhost.
ports = [
"5432:5432/tcp"
];
environment = {
POSTGRES_USER = cfg.databaseName;
POSTGRES_PASSWORD_FILE = config.sops.secrets."attic/db-password".path;
POSTGRES_DB = cfg.databaseName;
};
volumes = [
"attic-db:/var/lib/postgresql/data"
# Bind-mount the sops-decrypted password file so POSTGRES_PASSWORD_FILE
# resolves inside the container, not only on the host.
"${config.sops.secrets."attic/db-password".path}:${config.sops.secrets."attic/db-password".path}:ro"
];
};
# Map the container name to localhost if Attic is on the host
networking.extraHosts = ''
127.0.0.1 ${cfg.dbContainerName}
'';
networking.firewall.allowedTCPPorts = lib.mkIf cfg.openFirewall [ cfg.port ];
# Remote build host configuration
nix.settings.trusted-users = lib.mkIf cfg.enableRemoteBuilder [ "root" "@wheel" "builder" ];
users.users.builder = lib.mkIf cfg.enableRemoteBuilder {
isNormalUser = true;
group = "builder";
openssh.authorizedKeys.keys = [
# Placeholders - user should provide actual keys
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFrp6aM62Bf7bj1YM5AlAWuNrANU3N5e8+LtbbpmZPKS"
];
};
users.groups.builder = lib.mkIf cfg.enableRemoteBuilder {};
# Only open SSH if the remote builder is enabled. services.openssh.openFirewall
# defaults to true and opens the configured ports, so a second
# allowedTCPPorts definition (which would clash with the one above) is not needed.
services.openssh.ports = lib.mkIf cfg.enableRemoteBuilder [ 22 ];
};
}
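On the cache host itself, the module could then be enabled as follows (a sketch; the option values shown are illustrative, not required):

```nix
# hosts/BinaryCache/default.nix (illustrative)
{
  homelab.services.attic = {
    enable = true;
    openFirewall = true;        # expose the listener beyond localhost
    enableRemoteBuilder = true; # also act as a remote build host
  };
}
```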

View file

@ -1,6 +1,7 @@
{
imports = [
./actions
./attic
./openssh
];
}

View file

@ -26,7 +26,9 @@ in {
config.users.groups.wheel.name # Enable 'sudo' for the user.
];
initialPassword = "ChangeMe";
openssh.authorizedKeys.keys = cfg.authorizedKeys;
openssh.authorizedKeys.keyFiles = [
config.sops.secrets.user_keys_admin.path
];
packages = with pkgs; [
curl
git

View file

@ -12,9 +12,8 @@ in {
extraGroups = [
"docker" # Allow access to the docker socket.
];
openssh.authorizedKeys.keys = [
# Hugo
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICms6vjhE9kOlqV5GBPGInwUHAfCSVHLI2Gtzee0VXPh"
openssh.authorizedKeys.keyFiles = [
config.sops.secrets.user_keys_backup.path
];
};
};

View file

@ -3,7 +3,19 @@
let
cfg = config.homelab.users.deploy;
in {
options.homelab.users.deploy.enable = lib.mkEnableOption "user Deploy";
options.homelab.users.deploy = {
enable = lib.mkEnableOption "user Deploy";
authorizedKeys = lib.mkOption {
type = lib.types.listOf lib.types.str;
default = [];
description = ''
Additional SSH public keys authorized for the deploy user.
The CI runner key should be provided as a base key; personal
workstation keys can be appended here per host or globally.
'';
};
};
config = lib.mkIf cfg.enable {
users = {
@ -15,12 +27,15 @@ in {
isSystemUser = true;
home = "/var/empty";
shell = pkgs.bashInteractive;
openssh.authorizedKeys.keys = [
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPrG+ldRBdCeHEXrsy/qHXIJYg8xQXVuiUR0DxhFjYNg"
openssh.authorizedKeys.keys = cfg.authorizedKeys;
openssh.authorizedKeys.keyFiles = [
config.sops.secrets.user_keys_deploy.path
];
};
};
# Allow the deploy user to push closures to the nix store
nix.settings.trusted-users = [ "deploy" ];
security.sudo.extraRules = [
{
groups = [