KB: magitek-ops
9213 results — page 2 of 185
| Title | Domain | Type | Severity | Source | Freshness | Updated |
|---|---|---|---|---|---|---|
| Remaining (next session) | claude/agents | pattern | info | HANDOFF-cloudflare-pihole-split-dns.md | 100 | 2026-03-04 |
|
Body:
### A. Verify NS propagation to Cloudflare
```bash
AUTH_EMAIL="heine.salbu@gmail.com"
AUTH_KEY="$(grep CLOUDFLARE_API_KEY /var/www/magitek-ops/.env | cut -d= -f2)"
curl -s "https://api.cloudflare.com/client/v4/zones?per_page=20" \
-H "X-Auth-Email: $AUTH_EMAIL" -H "X-Auth-Key: $AUTH_KEY" | python3 -c "
import json,sys
for z in json.load(sys.stdin)['result']:
    print(f\"{z['name']:25} {z['status']}\")"
```
All domains should show "active" (not "pending" or "invalid nameservers").
##...
|
||||||
| What was done | ops/network | pattern | info | HANDOFF-cloudflare-pihole-split-dns.md | 100 | 2026-03-04 |
|
Body:
### 1. Cloudflare DNS — 9 new CNAME records for *.magitek.no
All point to `px10.magitek.no` (62.97.227.206 = office WAN):
- px1, pmox10, pmox15 (Proxmox servers)
- pfsense-kolsk, npm-kolsk (duplicated services with location suffix)
- zyxel1920, freshtomato, mainwp, todo
**Naming convention:** `-kolsk` = office (kontoret), `-skeis` = home (hjemme)
### 2. NPM office — 9 new proxy hosts (ID 21-29)
All with ACL=5 (LAN_Kontor_hjemme_Scandic):
- Proxmox: px1 (21), pmox10 (22), pmox15 (23) — websocket=true, f...
|
||||||
| IMPORTANT: Preserve Source | ops/network | pattern | info | HANDOFF-calibre-opds-plugin.md | 100 | 2026-03-12 |
|
Body:
Copy patched source to permanent location before /tmp cleanup:
```bash
cp -r /tmp/opds-reader-fix /var/www/magitek-ops/tools/opds-reader-patch/
```
|
||||||
| Expert Files | claude/expert | pattern | info | HANDOFF-calibre-opds-plugin.md | 100 | 2026-03-12 |
|
Body:
- Server: `coordination/experts/operations/magitek-server-ops/hjemme/services/CURRENT-calibre-server.md`
- Plugin: `coordination/experts/operations/magitek-server-ops/hjemme/services/CURRENT-calibre-opds-plugin.md`
- SSH to Orion: `ssh heine@172.20.0.22` (Windows, has SSH)
|
||||||
| Deploy Command | claude/agents | pattern | info | HANDOFF-calibre-opds-plugin.md | 100 | 2026-03-12 |
|
Body:
```bash
cd /tmp/opds-reader-fix/un-pogaz-OPDS-reader-714fb9d && \
rm -f /tmp/OPDS-Reader-patched.zip && \
zip -r /tmp/OPDS-Reader-patched.zip . -x '.git/*' '.github/*' '*.pyc' '__pycache__/*' '.vscode/*' '.gitmodules' '.gitignore' 'pyproject.toml' 'README.md' 'changelog.md' 'LICENSE' && \
scp /tmp/OPDS-Reader-patched.zip heine@172.20.0.22:"C:/Users/heine/AppData/Roaming/calibre/plugins/OPDS Reader.zip"
```
|
||||||
| 6 Patches Applied (summary) | claude/agents | pattern | info | HANDOFF-calibre-opds-plugin.md | 100 | 2026-03-12 |
|
Body:
| # | Bug | Fix | Line |
|---|-----|-----|------|
| 1 | `feed.links` AttributeError | `getattr(feed, 'links', [])` | findNextUrl |
| 2 | Acquisition feed detection | `_isAcquisitionFeed()` + `_rootFeedIsAcquisition` | downloadOpdsRootCatalog |
| 3 | Relative URL + lost auth in pagination | `_makeAbsoluteUrl()` preserves credentials | all pagination |
| 4 | Auto-drill through nav levels | `downloadOpdsCatalog` drills into first nav entry | downloadOpdsCatalog |
| 5 | No library list in OPDS | ...
|
||||||
| Source Code | ops/proxmox | pattern | info | HANDOFF-calibre-opds-plugin.md | 100 | 2026-03-12 |
|
Body:
**Permanent**: `/var/www/magitek-ops/tools/opds-reader-patch/action.py`
**Volatile**: `/tmp/opds-reader-fix/un-pogaz-OPDS-reader-714fb9d/` (may be lost on reboot)
|
||||||
| Resolved Issues (v6→v9) | claude/agents | incident | medium | HANDOFF-calibre-opds-plugin.md | 100 | 2026-03-12 |
|
Body:
v6 had libraries in dropdown but Load OPDS did nothing. Root cause found via file-based debug logging:
- **v7**: Added SSL fallback (`ssl._create_unverified_context()`) + debug logging to `~/opds-debug.log`
- **v8**: ROOT CAUSE FIX — `_fetchAndParse()` had no credentials because catalog URLs from `_fetchCalibreLibraries()` don't contain `user:pass@`. Fixed by falling back to `_baseOpdsUrl` for credentials
- **v9**: Book download URLs were relative (`/get/epub/2196/business`). Calibre needs a...
|
||||||
| What Works | ops/network | pattern | info | HANDOFF-calibre-opds-plugin.md | 100 | 2026-03-12 |
|
Body:
- **calibre.hsal.no** (calibre-server, port 8080): HTTPS, Basic Auth, OPDS feed — all verified
- **books.hsal.no** (Calibre-Web, port 8083): HTTPS, login page works, SSL cert valid
- **OPDS feed**: Returns valid Atom/OPDS XML, 35 libraries, pagination with `next` links
- **Plugin dropdown**: All 35 libraries show correctly via REST API `/ajax/library-info`
- **Plugin installed**: `C:\Users\heine\AppData\Roaming\calibre\plugins\OPDS Reader.zip`
|
||||||
| Context | ops/proxmox | pattern | info | HANDOFF-calibre-opds-plugin.md | 100 | 2026-03-12 |
|
Body:
We patched the OPDS Reader plugin (github.com/un-pogaz/OPDS-reader v2.3.0) for Calibre 9.4 on Windows (ORION, 172.20.0.22) to work with our calibre-server (CT 102, calibre.hsal.no).
|
||||||
| Summary: MCP servers after this session | ops/network | pattern | info | HANDOFF-20260313-security-mcp-improvements.md | 100 | 2026-03-13 |
|
Body:
| # | MCP Server | Type | Status |
|---|-----------|------|--------|
| 1 | serena | Code navigation | Existing |
| 2 | context7 | Documentation | Existing |
| 3 | playwright | Browser testing | Existing |
| 4-9 | proxmox-* (6 instances) | VM/CT management | Existing |
| 10 | dam-sqlite | DAM database | Existing |
| 11 | cloudflare | DNS, zones, audit | **NEW** |
| 12 | docker | Containers (npm-kontoret) | **NEW** |
| 13 | github | Repos, issues, PRs | **NEW** |
| 14 | pfsense-konto...
|
||||||
| Remaining: pfSense MCP | ops/network | incident | medium | HANDOFF-20260313-security-mcp-improvements.md | 100 | 2026-03-13 |
|
Body:
### Status
User found https://github.com/gensecaihq/pfsense-mcp-server — a thorough audit was performed.
### Audit result: SAFE with reservations
- **GenSecAI** — reputable non-profit, 20 repos, known for Wazuh-MCP (139 stars)
- **Code:** Python 3.11+, FastMCP, all standard dependencies (httpx, pydantic, cryptography)
- **No telemetry** — only direct pfSense REST API v2 calls
- **SSH injection fixed** (June 2025) with a whitelist
- **25+ tools** — including read AND write operations (firewall rules, N...
|
||||||
| What was done | claude/agents | pattern | info | HANDOFF-20260313-security-mcp-improvements.md | 100 | 2026-03-13 |
|
Body:
Analysis of the "Security #2" chat uncovered 7 efficiency problems. All were fixed.
### Completed changes (all in ~/.claude/ or .mcp.json — not git-tracked)
**Agent files updated (5):**
1. `~/.claude/agents/magitek-server-infra-ops.md` — Proxmox MCP table, Cloudflare/Docker/GitHub MCP, hook awareness, polling ban, maintenance mode reference
2. `~/.claude/agents/magitek-proxmox-maintenance.md` — Proxmox MCP table, polling ban
3. `~/.claude/agents/pentest-operator.md` — Proxmo...
|
||||||
| **Quality Mode:** SPEED (architect → implement) | magitek-ops | lesson | medium | MASTERPLAN.md | 100 | 2026-03-20T02:00:45Z |
|
Body:
|
||||||
| Rename the KB package from `magitek/kb-system` to `magitek/kb-system` across the entire Magitek i… | magitek-ops | lesson | medium | MASTERPLAN.md | 100 | 2026-03-20T02:00:45Z |
|
Body:
astructure. This includes: PHP namespace change (`Magitek\KbSystem\` → `Magitek\KbSystem\`), GitHub repo rename, local directory rename, Packeton re-registration, all 5 consumer workspace updates, and documentation/agent config updates.
|
||||||
| | Tier | Model | Provider | Input Context | Cost | |------|-------|----------|--------------|----… | magitek-ops | lesson | medium | MASTERPLAN.md | 100 | 2026-03-20T02:00:45Z |
|
Body:
| Tier | Model | Provider | Input Context | Cost |
|------|-------|----------|---------------|------|
| budget | gemini-2.5-flash | Gemini CLI | ~1M | Free |
| standard | gemini-3-pro-preview | Gemini CLI | ~1M | Free |
| codex | gpt-5.3-codex | Codex CLI | 272K | Free (ChatGPT sub) |
| premium | claude-sonnet-4.6 | Copilot CLI | 128K | 1x premium |
| ultra | claude-opus-4.6 | Copilot CLI | 128K | 3x premium |
|
||||||
| One external agent runs all three analyses sequentially against the full, uncompressed transcript… | magitek-ops | lesson | medium | MASTERPLAN.md | 100 | 2026-03-20T02:00:45Z |
|
Body:
roducing three report files in the workspace's audit directories. Smart model selection picks the cheapest sufficient model based on transcript size and user-configured tier preference, with automatic fallback chain: ultra/premium → codex → standard (Gemini 1M).
|
||||||
| Build a set of shell scripts that extract Claude Code session transcripts from JSONL files, strip… | magitek-ops | lesson | medium | MASTERPLAN.md | 100 | 2026-03-20T02:00:45Z |
|
Body:
ise, and delegate all three post-session analyses (meta-audit, MCP-audit, knowledge-harvest) to an external CLI agent (Gemini CLI, Codex CLI, or Copilot CLI). This replaces the current manual in-session approach that costs $15-30 per run on Opus.
|
||||||
| **Trigger:** Postiz VM 153 had full disk and crash-looped for 13 days undetected. | magitek-ops | lesson | medium | MASTERPLAN.md | 100 | 2026-03-20T02:00:45Z |
|
Body:
This system would have alerted at 80% disk usage.
|
||||||
| Deploy a high-availability Prometheus + Grafana + Alertmanager monitoring stack across both Magit… | magitek-ops | lesson | medium | MASTERPLAN.md | 100 | 2026-03-20T02:00:45Z |
|
Body:
locations (kontoret + hjemme), with optional cloud VPS as a third observer. Each location runs an independent monitoring instance that scrapes local targets and federates with the other site. If one location loses power/connectivity, the other continues monitoring and alerting independently.
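As a sketch of the disk alert such a stack would carry, matching the 80% trigger above (rule name, duration, and labels are illustrative; the metric names are standard node_exporter):

```yaml
groups:
  - name: disk
    rules:
      - alert: DiskUsageHigh
        # Fires when root filesystem usage stays above 80% for 15 minutes,
        # which would have caught the Postiz VM 153 full-disk incident.
        expr: |
          (1 - node_filesystem_avail_bytes{mountpoint="/"}
             / node_filesystem_size_bytes{mountpoint="/"}) * 100 > 80
        for: 15m
        labels:
          severity: warning
        annotations:
          summary: "Disk usage above 80% on {{ $labels.instance }}"
```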
|
||||||
| PIHOLE-05: IP address .2 indicates DNS role | operations/magitek-server-ops/hjemme/compute | knowledge | medium | CURRENT-tkl-pihole.md | 100 | 2026-03-20 02:00:45 |
|
Body:
### PIHOLE-05: IP address .2 indicates DNS role
- 192.168.86.2 is typically the DNS address set in pfSense DHCP.
- If this container goes down, the home network loses DNS resolution.
- No secondary DNS is documented.
## Common Operations
### Check status
```bash
ssh TU-px5 "pct exec 132 -- bash -lc '/usr/local/bin/pihole status'"
```
### Check statistics
```bash
ssh TU-px5 "pct exec 132 -- bash -lc '/usr/local/bin/pihole -c -e'"
```
### Update gravity (blocklists)
```bash
ssh TU-px5 "pct exec 132 -- bash -lc '/usr/local/bin/pihole -g'"
```
### Tail the query log
```bash
ssh TU-px5 "pct exec 132 -- bash -lc '/usr/local/bin/pihole -t'"
```
### Restart DNS
```bash
ssh TU-px5 "pct exec 132 -- bash -lc '/usr/local/bin/pihole restartdns'"
```
### Add local DNS entry
```bash
ssh TU-px5 "pct exec 132 -- bash -lc 'echo \"IP HOSTNAME\" >> /etc/pihole/custom.list && /usr/local/bin/pihole restartdns'"
```
## Changelog
### v1.0 - 2026-03-03
- Initial mapping from the live system
- Pi-hole v5.7 with Unbound recursive DNS
- 77,526 blocked domains, 1 adlist (StevenBlack)
- 59 local DNS entries documented
- Gotchas PIHOLE-01 through PIHOLE-05 identified
|
||||||
| Unbound configuration | operations/magitek-server-ops/hjemme/compute | knowledge | medium | CURRENT-tkl-pihole.md | 100 | 2026-03-20 02:00:45 |
|
Body:
## Unbound Configuration
| Parameter | Value |
|-----------|-------|
| Interface | 127.0.0.1 (loopback only) |
| Port | 5335 |
| IPv4 | Yes |
| IPv6 | No |
| Threads | 1 |
| Prefetch | Yes |
| EDNS buffer | 1232 |
| harden-glue | Yes |
| harden-dnssec-stripped | Yes |
| Private ranges | All RFC1918 + link-local |
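The table maps onto an `unbound.conf` server block roughly like this (a sketch reconstructed from the table, not the live file):

```conf
server:
    interface: 127.0.0.1
    port: 5335
    do-ip4: yes
    do-ip6: no
    num-threads: 1
    prefetch: yes
    edns-buffer-size: 1232
    harden-glue: yes
    harden-dnssec-stripped: yes
    # "All RFC1918 + link-local"
    private-address: 10.0.0.0/8
    private-address: 172.16.0.0/12
    private-address: 192.168.0.0/16
    private-address: 169.254.0.0/16
```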
## Cron Jobs
| Time | Job |
|-----|------|
| Sundays 04:28 | Gravity update (adlists) |
| Daily 00:00 | Flush pihole log (logrotate) |
| Every 10 min | Check local version |
| Daily 12:10 | Check remote version |
| @reboot | Logrotate + version check |
## Known Gotchas
### PIHOLE-01: Severely outdated version
- Pi-hole v5.7 → v6.4 is available. Major version upgrade.
- **Debian 10 Buster is EOL** (End of Life).
- An upgrade should be planned (backup → new CT with Debian 12 → Pi-hole v6).
### PIHOLE-02: Only one adlist
- Only StevenBlack/hosts is enabled (77k domains).
- Many setups use additional lists (OISD, hagezi, etc.) for better coverage.
- Consider adding more lists for better blocking.
### PIHOLE-03: Over-allocated resources
- 6 GB RAM allocated, only 82 MB used (~1%).
- 8 cores allocated, almost no CPU usage.
- Can safely be reduced to 1-2 cores and 512 MB-1 GB RAM.
### PIHOLE-04: Many stale custom.list entries
- 59 local DNS entries; many refer to VMs/CTs that have probably been stopped or deleted.
- Should be cleaned up to only active services.
### PIHOLE-06: custom.list does NOT work for CNAME override — use dnsmasq address=
- **Symptom:** `echo "192.168.x.x hostname" >> /etc/pihole/custom.list` works for A records, but NOT for overriding CNAME resolution (e.g. forcing a domain to a specific IP internally)
- **Cause:** Pi-hole `custom.list` only supports simple A-record overrides. CNAME overrides and "address" directives require dnsmasq configuration
- **Correct method — dnsmasq address= directive:**
```bash
# Create a file in /etc/dnsmasq.d/ (e.g. 99-monitoring.conf)
echo "address=/monitoring-k.magitek.no/172.20.0.76" > /tmp/99-monitoring.conf
# Copy into the CT via Proxmox exec
pct push 132 /tmp/99-monitoring.conf /etc/dnsmasq.d/99-monitoring.conf
# Restart Pi-hole DNS
pct exec 132 -- bash -lc '/usr/local/bin/pihole restartdns'
```
- **REQUIREMENT — pihole.toml:** `etc_dnsmasq_d` must be `true` in `/etc/pihole/pihole.toml` for `/etc/dnsmasq.d/` files to be loaded. Check with:
```bash
pct exec 132 -- bash -lc 'grep etc_dnsmasq_d /etc/pihole/pihole.toml'
```
Set it to true if needed: `sed -i 's/etc_dnsmasq_d = false/etc_dnsmasq_d = true/' /etc/pihole/pihole.toml`
- **NOTE:** This Pi-hole (CT 132) is the home instance. The office has its own Pi-hole (CT 108) — see `kontoret/services/EXPERT-infra-pihole-kontoret-v1.5-20260319.md`
- **Applies to:** Pi-hole v5.x and v6.x (both use dnsmasq under the hood)
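The directive syntax can be sanity-checked locally before `pct push` (a sketch; the domain/IP pair is the example from above):

```bash
# Build the override file locally and verify it contains exactly one
# well-formed address=/<domain>/<ip> line before pushing it into the CT.
conf=$(mktemp)
printf 'address=/monitoring-k.magitek.no/172.20.0.76\n' > "$conf"
grep -Ec '^address=/[A-Za-z0-9.-]+/[0-9.]+$' "$conf"
```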
|
||||||
| Pi-hole Version | operations/magitek-server-ops/hjemme/compute | knowledge | medium | CURRENT-tkl-pihole.md | 100 | 2026-03-20 02:00:45 |
|
Body:
## Pi-hole Version
| Component | Installed | Latest |
|-----------|-----------|--------|
| Pi-hole Core | v5.7 | v6.4 |
| AdminLTE (Web) | v5.9 | v6.5 |
| FTL | v5.12.1 | — |
**NOTE:** Significantly outdated — v5 → v6 is a major upgrade.
## Blocking
| Parameter | Value |
|-----------|-------|
| **Blocked domains** | 77,526 |
| **Adlists** | 1 (StevenBlack/hosts) |
| **Blocking active** | Yes |
| **DNSSEC** | No (in Pi-hole, but Unbound has harden-dnssec-stripped) |
| **Query logging** | Yes (PRIVACYLEVEL=0 = full logging) |
| **Cache size** | 10,000 |
### Adlist
| URL | Enabled | Last updated |
|-----|---------|---------------|
| `https://raw.githubusercontent.com/StevenBlack/hosts/master/hosts` | Yes | 2026-02-28 |
### Gravity update
Gravity is updated automatically on **Sundays at 04:28** via cron.
## Local DNS (custom.list)
Pi-hole acts as the local DNS server for the home network, with 59 entries:
| IP | Hostname | Note |
|----|----------|---------|
| 192.168.86.1 | pfsense.loc | pfSense gateway |
| 192.168.86.2 | pihole.loc | This container |
| 192.168.86.5 | zyxel.loc | Zyxel switch |
| 192.168.86.12 | docker3.loc | Docker host |
| 192.168.86.14 | docker4.loc, nx.loc | Docker + NX |
| 192.168.86.21 | pbs1.loc | Proxmox Backup Server |
| 192.168.86.81 | idrac1.loc | iDRAC #1 |
| 192.168.86.82 | idrac2.loc | iDRAC #2 |
| 192.168.86.83 | px9.loc | Proxmox node 9 |
| 192.168.86.85 | px7.loc | Proxmox node 7 |
| 192.168.86.105 | docker1.loc | Docker host |
| 192.168.86.116 | px5.loc | Proxmox px5 |
| 192.168.86.120 | truenas.loc, truenas, ftp.magitek.no | TrueNAS |
| 192.168.86.166 | wp1.loc | WordPress |
| 192.168.86.167 | ubuntu_dt1.loc | Ubuntu desktop |
| 192.168.86.184 | suitecrm.loc | SuiteCRM |
| 192.168.86.187 | win10vm1 | Windows 10 VM |
| 192.168.86.192 | suitecrm8.loc | SuiteCRM 8 |
| 192.168.86.195 | dockervm1.loc, docker5.loc | Docker VM |
| 192.168.86.196 | docker2.loc | Docker host |
| 192.168.86.197 | redmine.loc | Redmine |
| 192.168.86.198 | odoo.loc | Odoo ERP |
| 192.168.86.199 | collabtive.loc | Collabtive |
| 192.168.86.200 | snipeit.loc | Snipe-IT |
| 192.168.86.203 | leantime.loc | Leantime |
| 192.168.86.207 | laptop.loc | Laptop |
| 192.168.86.208 | docker-registry.loc | Docker registry |
| 192.168.86.217 | mayan.loc | Mayan EDMS |
| 192.168.86.218 | invoiceninja.loc | Invoice Ninja |
| 192.168.86.219 | mysql1.loc | MySQL server |
| 192.168.86.220 | port7.loc | Portainer |
| 192.168.86.221 | lamp1.loc, s1.zuz.loc | LAMP + Zuz |
| 192.168.86.226 | yunohost.loc | YunoHost |
| 192.168.86.228 | plex.loc | Plex |
| 192.168.86.229 | observium.loc, tkl-observium | Observium |
| 192.168.86.232 | gitlab.loc, gitlab.magitek.no | GitLab |
| 192.168.86.233 | pxmxbckpsrv.loc, nextcloud.loc | PBS / Nextcloud |
| 192.168.86.234 | ubuntu-docker6.loc | Docker host |
| 192.168.86.235 | laravel.loc | Laravel dev |
| 192.168.86.236 | ansible.loc | Ansible |
| 192.168.86.240 | symfony.loc | Symfony dev |
| 192.168.86.241 | nodejs.loc | Node.js |
| 192.168.86.242 | redis.loc | Redis |
| 192.168.86.243 | moodle.loc | Moodle LMS |
| 192.168.86.244 | virtualmin.loc | Virtualmin |
| 192.168.86.245 | canvas.loc | Canvas LMS |
| 192.168.86.246 | lighttpd.magitek.no | Lighttpd |
| 192.168.86.247 | drupal9.magitek.no | Drupal 9 |
| 192.168.86.248 | magento.loc | Magento |
| 192.168.86.249 | processmaker.loc | ProcessMaker |
| 192.168.86.250 | truenasscale.loc | TrueNAS Scale |
|
||||||
| Identity | operations/magitek-server-ops/hjemme/compute | knowledge | medium | CURRENT-tkl-pihole.md | 100 | 2026-03-20 02:00:45 |
|
Body:
# Infrastructure Sub-Expert: tkl-pihole (Pi-hole DNS)
**Version:** 1.0
**Date:** 2026-03-03
**Parent:** coordination/experts/operations/magitek-server-ops/CURRENT.md
**Load:** coordination/experts/operations/magitek-server-ops/CURRENT-tkl-pihole.md
---
## Identity
| Property | Value |
|----------|-------|
| **Hostname** | tkl-pihole |
| **Type** | LXC container (CT 132) on px5 |
| **LAN IP** | 192.168.86.2 |
| **Gateway** | 192.168.86.1 (pfSense home) |
| **Location** | Home (Skeisstøa 37c) |
| **OS** | Debian 10 (Buster) — Turnkey Linux base |
| **Role** | DNS server with ad/tracker blocking + local DNS |
| **Status** | Active, 85+ days uptime |
| **Onboot** | Yes (startup order=2) |
| **Web UI** | http://192.168.86.2/admin |
## Access
| Method | Details |
|--------|----------|
| SSH (via px5) | `ssh TU-px5 "pct exec 132 -- bash"` |
| Web UI | http://192.168.86.2/admin (password: Ansjos123) |
| CLI | `ssh TU-px5 "pct exec 132 -- bash -lc '/usr/local/bin/pihole <cmd>'"` |
## Resources
| Resource | Allocated | Used |
|---------|---------|-------|
| **CPU** | 8 cores | ~0% (very lightly loaded) |
| **RAM** | 6 GB | 82 MB (~1%) |
| **Disk** | 32 GB | 1.4 GB (~5%) |
| **Swap** | 4 GB | 12 MB |
| **Arch** | amd64 | Unprivileged, nesting=1 |
## Services
| Service | Port | Status | Role |
|----------|------|--------|-------|
| **pihole-FTL** | 53 (UDP/TCP) | Running | DNS server + blocking |
| **Unbound** | 5335 (loopback) | Running | Recursive DNS resolver |
| **Lighttpd** | 80 | Running | Web UI for Pi-hole admin |
### DNS architecture
```
Clients (192.168.86.x)
│
▼ port 53
Pi-hole FTL (192.168.86.2)
│ blocks ads/trackers
│ answers from custom.list (local DNS)
│
▼ port 5335 (loopback)
Unbound (recursive resolver)
│ queries root servers directly
▼
Root DNS servers
```
**No external DNS forwarder** — Unbound performs fully recursive lookups directly against the root servers. This gives better privacy than forwarding to Google/Cloudflare.
|
||||||
| Jobs 1-6 lost schedules during CT 135 migration (RESOLVED 2026-03-16) | operations/magitek-server-ops/hjemme/services | gotcha | medium | CURRENT-duplicati.md | 100 | 2026-03-20 02:00:44 |
|
Body:
- Migrated from laravelserver-v11 on 2026-03-05
- Schedule data was not preserved; all 6 jobs showed "No schedule" and "LastRun: Never"
- Job data and Google Drive history are intact (94+ filesets for older jobs)
- **Fix applied 2026-03-16:** Schedules restored via REST API PUT `/api/v1/backup/{id}` with Schedule object
- Use `python3 /tmp/create_schedules.py` on CT 135 if schedules are lost again (script on local /tmp)
---
|
||||||
| Duplicate remote files block backup AND repair | operations/magitek-server-ops/hjemme/services | gotcha | medium | CURRENT-duplicati.md | 100 | 2026-03-20 02:00:44 |
|
Body:
- Job had duplicate file on Google Drive
- Backup and repair both fail with "duplicates" error
- **Fix used:** Changed TargetURL to new folder, leaving orphans in old folder
- **Prevention:** If backup fails mid-upload, check for duplicate remote files BEFORE running repair
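A minimal local sketch of such a pre-repair check (filenames illustrative; in practice the listing would come from the remote backend):

```bash
# Any filename printed here appears more than once in the remote listing
# and must be resolved before running backup or repair.
printf '%s\n' \
  duplicati-b1.dblock.zip.aes \
  duplicati-i1.dindex.zip.aes \
  duplicati-b1.dblock.zip.aes \
  | sort | uniq -d
```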
|
||||||
| Bash escaping over SSH kills heredocs with special chars in authid | operations/magitek-server-ops/hjemme/services | gotcha | medium | CURRENT-duplicati.md | 100 | 2026-03-20 02:00:44 |
|
Body:
- authid contains `%3A` and `!` which bash expands/mangles over SSH
- **Fix:** Write script to file locally, scp to CT, run there
|
||||||
| Unprivileged LXC cannot NFS mount directly | operations/magitek-server-ops/hjemme/services | gotcha | medium | CURRENT-duplicati.md | 100 | 2026-03-20 02:00:44 |
|
Body:
- `mount.nfs: Operation not permitted` in unprivileged CT
- **Fix:** Mount NFS on Proxmox host, then bind-mount into CT via `pct set`
|
||||||
| CLI repair fails if Duplicati service is running -- OAuth lock | operations/magitek-server-ops/hjemme/services | gotcha | medium | CURRENT-duplicati.md | 100 | 2026-03-20 02:00:44 |
|
Body:
- `duplicati-cli repair` gets "Invalid authid password" when `duplicati.service` is running
- **Fix:** `systemctl stop duplicati` before CLI repair, `systemctl start duplicati` after
|
||||||
| Duplicati 2.2.x web UI repair hangs on Recreate_Running | operations/magitek-server-ops/hjemme/services | gotcha | medium | CURRENT-duplicati.md | 100 | 2026-03-20 02:00:44 |
|
Body:
- Web UI repair starts but never completes
- **Fix:** Always use CLI repair with service stopped (see G-014)
|
||||||
| Duplicati job ID gaps after delete+recreate | operations/magitek-server-ops/hjemme/services | gotcha | medium | CURRENT-duplicati.md | 100 | 2026-03-20 02:00:44 |
|
Body:
- Duplicati never reuses deleted job IDs -- IDs are auto-incremented
- Cosmetic only, does not affect functionality
|
||||||
| REST API authid URL-encoding -- curl vs python (CRITICAL) | operations/magitek-server-ops/hjemme/services | gotcha | critical | CURRENT-duplicati.md | 100 | 2026-03-20 02:00:44 |
|
Body:
- When creating jobs via REST API, the `authid` in TargetURL contains special chars (`%3A`, `!`)
- **curl** with `-d` can double-encode or mangle the authid
- **Fix:** Always use Python `urllib.request` for job creation
- Copy the raw TargetURL from an existing working job and only change the folder path
- **Test after creation:** Always trigger a manual backup run and verify completion
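The failure mode can be sketched without touching the API (authid value hypothetical): any tool that percent-encodes the already-encoded authid corrupts it.

```bash
authid='abc%3Adef!ghi'            # already URL-encoded, as stored by Duplicati
# Wrong: encoding again turns %3A into %253A (what naive form-encoding does)
reencoded=$(printf '%s' "$authid" | sed 's/%/%25/g')
echo "$reencoded"
# Right: the authid must pass through byte-for-byte unchanged
echo "$authid"
```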
|
||||||
| PBS datastore offsite backup -- first run takes many hours | operations/magitek-server-ops/hjemme/services | gotcha | medium | CURRENT-duplicati.md | 100 | 2026-03-20 02:00:44 |
|
Body:
- Job 17 (mirror2x4tb, ~227 GB): ~8 hours at 8 MB/s
- Job 20 (extbackup, ~268 GB): ~9-10 hours
- Job 19 (ext3, ~25 GB): ~32 min
- Incremental backups after that: only changed chunks, substantially faster
|
||||||
| PBS chunk directories require chmod for NFS access (CT 212 Jobs 17-19) | operations/magitek-server-ops/hjemme/services | gotcha | medium | CURRENT-duplicati.md | 100 | 2026-03-20 02:00:44 |
|
Body:
- PBS creates `.chunks/` subdirectories as `drwxr-x---` (750) owned by `backup` (UID 34)
- NFS clients cannot read chunk files through these directories
- **Fix:** Cron `/etc/cron.d/fix-pbs-chunk-perms` on PBS runs daily at 06:00
- **Script:** `/usr/local/bin/fix-pbs-chunk-perms.sh`
- After PBS updates: verify that the permissions cron still runs
|
||||||
| CLI repair for outdated job databases | operations/magitek-server-ops/hjemme/services | gotcha | critical | CURRENT-duplicati.md | 100 | 2026-03-20 02:00:44 |
|
Body:
- When local job DBs are outdated, web UI repair does NOT work
- Must use CLI: `sudo duplicati-cli repair 'TARGET_URL' --dbpath='...' --passphrase='...' --encryption-module=... --compression-module=... --dblock-size=...`
- CRITICAL: Use single quotes around TARGET_URL (bash `!` expansion breaks double quotes)
- If repair fails with "database in-progress", delete the sqlite file and retry
- After CLI repair, restart: `sudo systemctl restart duplicati`
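The quoting rule can be demonstrated locally (URL hypothetical): in a script, single quotes pass `!` through verbatim, which is exactly what the CLI needs.

```bash
# Single quotes: no history or parameter expansion -- the target URL
# reaches duplicati-cli exactly as written.
url='googledrive://backup/target?authid=abc%3Adef!ghi'
echo "$url"
```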
|
||||||
| Duplicati REST API job creation requires exact format | operations/magitek-server-ops/hjemme/services | gotcha | low | CURRENT-duplicati.md | 100 | 2026-03-20 02:00:44 |
|
Body:
- Auth: POST `/api/v1/auth/login` with `{"Password":"..."}` returns `AccessToken` for Bearer header
- Create job: POST `/api/v1/backups` with Bearer token
- No-encryption requires TWO settings: `encryption-module=""` AND `--no-encryption=true`
- Schedule format needs `Tags`, `AllowedDays` with both short and long forms
- Authid token from existing jobs can be reused for same Google Drive account
|
||||||
| VM backup does not contain /root/.config/Duplicati/ (laravelserver-v11 only) | operations/magitek-server-ops/hjemme/services | gotcha | medium | CURRENT-duplicati.md | 100 | 2026-03-20 02:00:44 |
|
Body:
- Historical -- not relevant for CT 135
- CT 135 recovery: recreate CT from scratch + CLI repair from Google Drive
|
||||||
| Phantom data folder at /etc/default/var/lib/Duplicati/ (laravelserver-v11 only) | operations/magitek-server-ops/hjemme/services | gotcha | medium | CURRENT-duplicati.md | 100 | 2026-03-20 02:00:44 |
|
Body:
- Historical -- not relevant for CT 135
|
||||||
| 401 Unauthorized on web UI after restart | operations/magitek-server-ops/hjemme/services | gotcha | medium | CURRENT-duplicati.md | 100 | 2026-03-20 02:00:44 |
|
Body:
- Password stored in `Duplicati-server.sqlite` may not survive config changes
- Fix: `sudo /usr/lib/duplicati/duplicati-server-util change-password "<password>"` then restart
|
||||||
| Symlinking job DB as server DB breaks Duplicati | operations/magitek-server-ops/hjemme/services | gotcha | medium | CURRENT-duplicati.md | 100 | 2026-03-20 02:00:44 |
|
Body:
- Job DBs (random-name .sqlite files) have different schema than `Duplicati-server.sqlite`
- Symlinking a job DB as `Duplicati-server.sqlite` causes startup failure
- Fix: Never symlink -- server DB and job DBs are separate files with incompatible schemas
|
||||||
| sed operations corrupt DAEMON_OPTS | operations/magitek-server-ops/hjemme/services | gotcha | medium | CURRENT-duplicati.md | 100 | 2026-03-20 02:00:44 |
|
Body:
- `/etc/default/duplicati`
- Using `sed` to modify DAEMON_OPTS can duplicate entries or corrupt the line
- Fix: Always use full file overwrite (write entire file), never sed-based modification
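A sketch of the overwrite approach (written to a temp file here so it is safe to run anywhere; the real target is `/etc/default/duplicati`, and flags other than `--server-datafolder` are illustrative):

```bash
# Write the complete file in one shot instead of sed-editing in place.
target=$(mktemp)
cat > "$target" <<'EOF'
DAEMON_OPTS="--webservice-interface=any --server-datafolder=/var/lib/Duplicati"
EOF
# Verify the critical datafolder flag is present exactly once.
grep -c 'server-datafolder' "$target"
```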
|
||||||
| Missing --server-datafolder causes silent data loss (CRITICAL) | operations/magitek-server-ops/hjemme/services | gotcha | critical | CURRENT-duplicati.md | 100 | 2026-03-20 02:00:44 |
|
Body:
- `/etc/default/duplicati` DAEMON_OPTS
- Without `--server-datafolder=/var/lib/Duplicati`, Duplicati defaults to `/root/.config/Duplicati/` and creates empty DB -- all jobs invisible
- Fix: Ensure DAEMON_OPTS contains `--server-datafolder=/var/lib/Duplicati`, restart service
|
||||||
| POSTIZ-05: OpenAI API key | operations/magitek-server-ops/hjemme/services | knowledge | medium | CURRENT-postiz.md | 100 | 2026-03-20 02:00:44 |
|
Body:
### POSTIZ-05: OpenAI API key
- **Symptom:** Postiz needs an OpenAI API key for its AI chat features
- **Cause:** Set in docker-compose.yml as an environment variable
- **Fix:** Check/update in `/home/heine/postiz/docker-compose.yml` under the postiz service
## Changelog
### v1.0 - 2026-03-18
- Initial expert file created after the disk-full incident
- VM 153: Ubuntu 24.04, 4 cores, 8 GB RAM, 100 GB ZFS disk (expanded from 60 GB)
- Docker Compose stack with Postiz + Temporal documented
- 5 gotchas documented (crash loop, sudo, root SSH, monitoring, API key)
- Database credentials documented
- Log rotation configured (daemon.json)
|
||||||
| Operations | operations/magitek-server-ops/hjemme/services | knowledge | medium | CURRENT-postiz.md | 100 | 2026-03-20 02:00:44 |
|
Body:
## Operations
### Start/stop the stack
```bash
ssh postiz "cd /home/heine/postiz && docker compose up -d"
ssh postiz "cd /home/heine/postiz && docker compose down"
```
### Check container status
```bash
ssh postiz "docker ps --format 'table {{.Names}}\t{{.Status}}\t{{.Ports}}'"
```
### Check disk usage (incl. Docker)
```bash
ssh postiz "df -h / && echo '---' && docker system df"
```
### Check container writable layers
```bash
ssh postiz "docker ps -a --size --format 'table {{.Names}}\t{{.Size}}'"
```
## Database Credentials
| Database | User | Password | DB name |
|----------|--------|---------|---------|
| Postiz PostgreSQL | `postiz-user` | `postiz-password` | `postiz-db-local` |
| Temporal PostgreSQL | `temporal` | `temporal` | (managed by auto-setup) |
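These credentials correspond to a compose service definition roughly like the following (a sketch using the standard postgres image variables; verify against the live `/home/heine/postiz/docker-compose.yml`):

```yaml
services:
  postiz-postgres:
    image: postgres:17
    environment:
      POSTGRES_USER: postiz-user
      POSTGRES_PASSWORD: postiz-password
      POSTGRES_DB: postiz-db-local
```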
## Snapshots
| Name | Date | Description |
|------|------|-------------|
| `pre-temporal-20260318` | 2026-03-18 | Before the Temporal stack was added |
## Known Gotchas
### POSTIZ-01: pnpm/Prisma crash loop fills the disk
- **Symptom:** Disk 100% full, Docker containers in a restart loop
- **Cause:** The Postiz image uses `pnpm dlx` for Prisma. On startup failure the writable layer grows rapidly (42 GB observed after a 13-day crash loop)
- **Fix:** `docker compose down && docker system prune -f && docker compose up -d`. Consider a disk expand if needed. See CURRENT-docker-operations.md for the full procedure.
### POSTIZ-02: sudo requires a password over SSH
- **Symptom:** `sudo <cmd>` hangs over SSH
- **Cause:** `heine` has sudo but NOT NOPASSWD
- **Fix:** `ssh postiz "echo 'Ansjos123' | sudo -S <cmd>"`. For files: scp to /tmp → sudo cp.
### POSTIZ-03: Root SSH not available
- **Symptom:** `ssh root@postiz` gives "Permission denied (publickey)"
- **Cause:** The root account only allows publickey auth, and no key is set up
- **Fix:** Use `heine` + sudo for everything. For file deploys: scp + `echo 'Ansjos123' | sudo -S cp`
### POSTIZ-04: No monitoring — 13 days of undetected downtime
- **Symptom:** Postiz was crash-looping for 13 days without anyone noticing
- **Cause:** No Uptime Kuma or other monitoring is set up
- **Fix:** TODO — set up Uptime Kuma for postiz.magitek.no and other critical services
|
||||||
| Identity | operations/magitek-server-ops/hjemme/services | knowledge | low | CURRENT-postiz.md | 100 | 2026-03-20 02:00:44 |
|
Body:
# Infrastructure Sub-Expert: Postiz (Social Media Scheduler — VM 153)
**Version:** 1.0
**Date:** 2026-03-18
**Parent:** coordination/experts/operations/magitek-server-ops/CURRENT.md
**Load:** coordination/experts/operations/magitek-server-ops/hjemme/services/CURRENT-postiz.md
**Sources:** Session 2026-03-18 (ad-hoc disk full troubleshooting + Temporal migration)
---
## Identity
| Property | Value |
|----------|-------|
| **Hostname** | postiz |
| **VMID** | 153 |
| **Type** | QEMU VM on px5 |
| **LAN IP** | 192.168.86.177 |
| **Location** | Home (px5) |
| **OS** | Ubuntu 24.04.3 LTS (kernel 6.8.0-101-generic) |
| **Role** | Postiz social media scheduling + Temporal workflow engine |
| **Status** | Active |
| **URL** | https://postiz.magitek.no (via home NPM) |
## Resources
| Resource | Allocated | Actual usage (2026-03-18) |
|---------|---------|--------------------------|
| **RAM** | 8000 MB | ~2-3 GB (6 Docker containers) |
| **Cores** | 4 | Low |
| **Disk** | 100 GB (ZFS zfsa) | ~16 GB used (17%) |
**Disk history:** Originally 60 GB, expanded live to 100 GB on 2026-03-18 after the full-disk incident.
## Access
| Method | Details |
|--------|----------|
| SSH (direct alias) | `ssh postiz` |
| SSH (tunnel) | `ssh TU-postiz` |
| SSH user | `heine` / `Ansjos123` |
| sudo | `echo 'Ansjos123' \| sudo -S <cmd>` (password required, no NOPASSWD) |
| Root SSH | **Not available** (publickey only, no root key) |
| Docker | `heine` is in the docker group — no sudo needed for docker commands |
## Docker Compose Stack
**Compose file:** `/home/heine/postiz/docker-compose.yml`
| Container | Image | Port | Role |
|-----------|-------|------|-------|
| postiz | ghcr.io/gitroomhq/postiz-app:latest | 5000→5000 | Postiz web app (Node.js) |
| postiz-postgres | postgres:17 | 5432 (internal) | Postiz database |
| postiz-redis | redis:7.2 | 6379 (internal) | Postiz cache/queue |
| temporal | temporalio/auto-setup:latest | 7233→7233 | Temporal workflow server |
| temporal-postgresql | postgres:13 | 5432 (internal) | Temporal database |
| temporal-elasticsearch | elasticsearch:7.16.2 | 9200 (internal) | Temporal search |
**Docker volumes:** postgres-volume, postiz-config, postiz-uploads, postiz-redis-data, temporal-elasticsearch-data, temporal-postgresql-data
**Log rotation:** Configured via `/etc/docker/daemon.json` — 50 MB × 3 files per container.
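The exact contents of `/etc/docker/daemon.json` were not captured in this session; a minimal sketch that matches the stated 50 MB × 3 rotation would be:

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "50m",
    "max-file": "3"
  }
}
```

Note that json-file log options are fixed at container creation, so existing containers only pick this up after being recreated (e.g. `docker compose up -d --force-recreate`).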
|
||||||
| View proxy host config for a domain | operations/magitek-server-ops/hjemme/services | knowledge | medium | CURRENT-npm-hjemme.md | 100 | 2026-03-20 02:00:44 |
|
Body:
### View proxy host config for a domain
```bash
ssh -J heine@81.167.27.54:12322 heine@192.168.86.16 "grep -rl 'server_name domene.magitek.no' /opt/npm-stack/data/nginx/proxy_host/ | xargs cat"
```
### Restart NPM stack
```bash
ssh -J heine@81.167.27.54:12322 heine@192.168.86.16 "cd /opt/npm-stack && sudo docker compose restart"
```
### View SSL certificate status
```bash
ssh -J heine@81.167.27.54:12322 heine@192.168.86.16 "ls /opt/npm-stack/letsencrypt/live/"
```
### View the NPM log
```bash
ssh -J heine@81.167.27.54:12322 heine@192.168.86.16 "sudo docker logs npm-stack-app-1 --tail 50"
```
## Changelog
### v1.1 - 2026-03-19
- Gotcha NPM-H-06: Monitoring proxy hosts for split-horizon DNS (monitoring.magitek.no, monitoring-k.magitek.no)
- Certs copied from NPM Kontoret — require manual renewal
- Sources: MP-0011, Session b6bb8896 (2026-03-18)
### v1.0 - 2026-02-28
- Initial version — full mapping of NPM Hjemme
- VM 131 (192.168.86.16), Ubuntu 24.04, Docker Compose
- 153 proxy hosts mapped (130 with SSL)
- Categorized into: Nextcloud, Infrastructure, Storage, Webapps, Customer sites, Dev/Test, Legacy
- Gotchas NPM-H-01 through NPM-H-05 documented
|
||||||
| SSL/Let's Encrypt | operations/magitek-server-ops/hjemme/services | knowledge | medium | CURRENT-npm-hjemme.md | 100 | 2026-03-20 02:00:44 |
|
Body:
## SSL/Let's Encrypt
- **130 of 153 hosts** have SSL (Let's Encrypt)
- Certificates are stored under `/opt/npm-stack/letsencrypt/live/npm-{ID}/`
- NPM handles auto-renewal
- **23 hosts** are HTTP-only (no SSL) — typically internal/decommissioned services
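Auto-renewal can be spot-checked offline from the cert files themselves. A sketch (assumes GNU `date`, which Ubuntu has; `cert_days_left` is a hypothetical helper) that prints the days left before a PEM certificate expires:

```shell
# Print the number of whole days until a PEM certificate expires.
cert_days_left() {
  local end_str end now
  end_str=$(openssl x509 -enddate -noout -in "$1" | cut -d= -f2) || return 1
  end=$(date -d "$end_str" +%s)   # GNU date; BusyBox/macOS use a different -d syntax
  now=$(date +%s)
  echo $(( (end - now) / 86400 ))
}

# Example against one NPM-managed cert (directory from this doc;
# fullchain.pem is the usual Let's Encrypt layout, so an assumption here):
# cert_days_left /opt/npm-stack/letsencrypt/live/npm-56/fullchain.pem
```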
## Known Gotchas
### NPM-H-01: The old NPM VM (152) is stopped
- VM 152 (nginx-pm-docker) is the **old** NPM instance
- VM 131 is the **active** one — do not mix them up!
### NPM-H-02: IP differs from the expert file
- CURRENT.md listed IP 192.168.86.14/15 for NPM Hjemme
- **The actual IP is 192.168.86.16** (confirmed via qm guest agent)
### NPM-H-03: sudo requires a password
- `heine` does not have NOPASSWD sudo
- Docker commands require `sudo` (heine is not in the docker group)
- For management, use the NPM WebGUI (port 81) instead of the CLI
### NPM-H-04: 153 proxy hosts — many inactive
- A large share of the proxy hosts likely point to stopped VMs/CTs
- Cleanup recommended — delete hosts whose backend is permanently down
- Risk assessment: inactive hosts generate Let's Encrypt renewal attempts that fail
### NPM-H-05: DB credentials in compose = npm/npm
- MariaDB root password: npm
- NPM DB: user=npm, pass=npm
- Not reachable externally (Docker-internal only), but weak security
### NPM-H-06: Monitoring proxy hosts for split-horizon DNS (2026-03-18)
- `monitoring.magitek.no` and `monitoring-k.magitek.no` have proxy hosts on NPM Hjemme
- Needed because the home LAN resolves the `*.magitek.no` CNAMEs to NPM Hjemme (192.168.86.16) via Pi-hole
- Without these proxy hosts, home clients get `ERR_SSL_UNRECOGNIZED_NAME` when accessing monitoring
- Certs: copied from NPM Kontoret (LE certs 56, 57) — NOT auto-renewed, see the MONITORING-03 gotcha
- Backends: monitoring.magitek.no → 192.168.86.162:3000, monitoring-k.magitek.no → 172.20.0.76:3000 (via WireGuard)
- Sources: MP-0011
## Common Operations
### SSH to NPM
```bash
ssh -J heine@81.167.27.54:12322 heine@192.168.86.16
```
### Check NPM status
```bash
ssh -J heine@81.167.27.54:12322 heine@192.168.86.16 "docker ps 2>/dev/null || sudo docker ps"
```
|
||||||
| Complete domain list (alphabetical) | operations/magitek-server-ops/hjemme/services | knowledge | low | CURRENT-npm-hjemme.md | 100 | 2026-03-20 02:00:44 |
|
Body:
### Complete domain list (alphabetical)
<details>
<summary>All 153 domains (click to expand)</summary>
```
adminer.wp1.magitek.no ansible.magitek.no appflowy.magitek.no
apps.magitek.no aspnet.magitek.no asrock-cockpit.magitek.no
astroluma.magitek.no audacity.magitek.no avideo.magitek.no
backup-magitek.magitek.no bckp.magitek.no bi.hsal.no
bitwarden.magitek.no booksonic.magitek.no bookstack.magitek.no
calibre.magitek.no calibreweb.magitek.no canvas.magitek.no
collabtive.magitek.no dev1.magitek.no django.magitek.no
docker-registry.magitek.no docs.magitek.no dokuwiki.magitek.no
dreg.magitek.no drupal9.magitek.no duplicati.magitek.no
etherpad.magitek.no flyttet.magitek.no focalboard.magitek.no
gitlab.magitek.no grocy.hsal.no hd.hsal.no
hd.magitek.no healthyhair-backup4/5.magitek.no
healthyhair.magitek.no homelab.magitek.no homepage.magitek.no
hub.magitek.no humhub.magitek.no ibmts3200.magitek.no
idrac1/2/3.magitek.no invoiceninja.magitek.no is.magitek.no
jellyfin.magitek.no jenkins.magitek.no jupyter1.magitek.no
kinga.magitek.no kissandtranslateold.magitek.no
kursteam.magitek.no kursus.magitek.no lamp1.magitek.no
laravel.magitek.no leantime.magitek.no learn.hsal.no
libreoffice.magitek.no lighttpd.magitek.no linkwarden.magitek.no
litespeed.magitek.no m1.magitek.no mattermost.magitek.no
mautic.magitek.no mayan.magitek.no minio.magitek.no
moodle.magitek.no mulig.magitek.no mysql1.magitek.no
mysqlworkbench.magitek.no n8n.magitek.no nas1.magitek.no
nc-aio.hsal.no nc-aio.magitek.no nc-aio.nativja.no
nc.hsal.no nc.magitek.no nc.nativja.no
nc-office.magitek.no nc-office.nativja.no new-magitek.magitek.no
nodejs.magitek.no notat-kinga.hsal.no nx.magitek.no
observium.hsal.no odoo.magitek.no old-magitek.magitek.no
openproject.magitek.no pbs1.magitek.no pbsm.magitek.no
pfsense.magitek.no pihole.magitek.no plane.magitek.no
plex.magitek.no port1-9.magitek.no postiz.magitek.no
processmaker.magitek.no px2/3/5/6/8/10/11/15.magitek.no
qrcode.magitek.no redis.magitek.no redmine.magitek.no
remmina.magitek.no rs.magitek.no sc.magitek.no
sc7.magitek.no sc7.trubadurheine.no sc82.magitek.no
silverbox-cockpit.magitek.no snibox.magitek.no snipeit.magitek.no
snippetbox.magitek.no stage03/04.magitek.no symfony.magitek.no
taguette.magitek.no trillium.magitek.no trubadurheine.no
truenas.magitek.no truenas2.magitek.no truenasscale.magitek.no
vmin1.magitek.no vscode.magitek.no vtiger.magitek.no
vt.magitek.no wiki.magitek.no wiki1.magitek.no
wikijs.magitek.no wp1.magitek.no www.borilden.no
www.dummy07.no www.dummy08.magitek.no www.healthyhair.no
www.kissandtranslate.no www.magitek.no www.test10.magitek.no
www.trubadurheine.no yunohost.magitek.no zyxel1920.magitek.no
```
</details>
|
||||||
| Domains — Categorized | operations/magitek-server-ops/hjemme/services | knowledge | low | CURRENT-npm-hjemme.md | 100 | 2026-03-20 02:00:44 |
|
Body:
### Domains — Categorized
**Nextcloud (8 hosts):**
- nc.magitek.no, nc-aio.magitek.no, nc-office.magitek.no
- nc.hsal.no, nc-aio.hsal.no
- nc.nativja.no, nc-aio.nativja.no, nc-office.nativja.no
**Proxmox/Infrastructure (11 hosts):**
- px5.magitek.no, px2.magitek.no, px3.magitek.no, px6.magitek.no, px8.magitek.no, px10.magitek.no, px11.magitek.no, px15.magitek.no
- pfsense.magitek.no, pbs1.magitek.no, pbsm.magitek.no
**Storage/NAS (4 hosts):**
- truenas.magitek.no, truenas2.magitek.no, truenasscale.magitek.no, nas1.magitek.no
**Webapps — Active (estimated):**
- www.magitek.no, wp1.magitek.no — Main sites
- bookstack.magitek.no, wiki.magitek.no, wiki1.magitek.no, wikijs.magitek.no, dokuwiki.magitek.no — Wiki/docs
- postiz.magitek.no — Social media scheduling
- mautic.magitek.no — Marketing automation
- n8n.magitek.no — Workflow automation
- pihole.magitek.no — DNS blocking
- plex.magitek.no — Media server
- duplicati.magitek.no — Backup
- bitwarden.magitek.no — Password manager
- homepage.magitek.no — Dashboard
- homelab.magitek.no — Homelab dashboard
**Customer/external sites:**
- www.trubadurheine.no, sc7.trubadurheine.no — Trubadur Heine
- www.borilden.no — Borilden
- www.healthyhair.no, healthyhair.magitek.no, healthyhair-backup4/5.magitek.no — HealthyHair
- www.kissandtranslate.no, kissandtranslateold.magitek.no — Kiss & Translate
- kursteam.magitek.no, kursus.magitek.no — Kursteam
- bi.hsal.no, learn.hsal.no, grocy.hsal.no, hd.hsal.no, observium.hsal.no — HSAL domains
- notat-kinga.hsal.no, kinga.magitek.no — Kinga
**Dev/Test/port-based (9 port hosts + misc):**
- port1–port9.magitek.no — Portainer/Docker ports
- stage03.magitek.no, stage04.magitek.no
- dev1.magitek.no, laravel.magitek.no
**Likely inactive/legacy:**
- focalboard.magitek.no, mattermost.magitek.no — Replaced by other tools?
- gitlab.magitek.no — Likely not in use
- odoo.magitek.no, vtiger.magitek.no, vt.magitek.no — CRM trials
- redmine.magitek.no, openproject.magitek.no, leantime.magitek.no — PM-tool trials
- jupyter1.magitek.no, canvas.magitek.no, moodle.magitek.no — Learning platforms
- yunohost.magitek.no, appflowy.magitek.no — App trials
- dreg.magitek.no, docker-registry.magitek.no — Docker registry
- audacity.magitek.no, snibox.magitek.no — Miscellaneous
|
||||||
| Identity | operations/magitek-server-ops/hjemme/services | knowledge | medium | CURRENT-npm-hjemme.md | 100 | 2026-03-20 02:00:44 |
|
Body:
# Infrastructure Sub-Expert: NPM Hjemme (Nginx Proxy Manager)
**Version:** 1.0
**Date:** 2026-02-28
**Parent:** coordination/experts/operations/magitek-server-ops/CURRENT.md
**Load:** coordination/experts/operations/magitek-server-ops/CURRENT-npm-hjemme.md
---
## Identity
| Property | Value |
|----------|-------|
| **Hostname** | nginx-proxy-manager-ub-vm |
| **VM ID** | 131 |
| **Type** | KVM VM on px5 |
| **Proxmox Host** | px5 (pmox5) |
| **LAN IP** | 192.168.86.16 |
| **Location** | Home (Skeisstøa 37c) |
| **OS** | Ubuntu 24.04.3 LTS (Noble Numbat) |
| **Role** | Reverse proxy + SSL termination for all home services |
| **Status** | Active (83+ days uptime as of 2026-02-28) |
## Hardware
| Resource | Value |
|---------|-------|
| **vCPUs** | 1 core |
| **RAM** | 4 GB (1.9 GB used / 1.8 GB available) |
| **Disk** | 60 GB (9.2 GB used / 45 GB free, 18%) |
| **Network** | virtio, bridge=vmbr0, firewall=1 |
## Access
| Method | Details |
|--------|----------|
| SSH | `ssh -J heine@81.167.27.54:12322 heine@192.168.86.16` |
| WebGUI | http://192.168.86.16:81 (NPM admin dashboard) |
| Via Proxmox | `ssh -J ... root@192.168.86.116 "qm guest cmd 131 ..."` |
**NOTE:** SSH as `root` is NOT allowed (Permission denied). Use the `heine` user.
**NOTE:** `sudo` requires a password (no NOPASSWD). Docker commands without sudo do not work — use `sudo docker ...` or add heine to the docker group.
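The group fix mentioned above can be sketched as follows (`in_group` is a hypothetical verification helper; the usermod line reuses the sudo pattern from this doc):

```shell
# Check whether a user is already a member of a group.
in_group() {
  id -nG "$1" | tr ' ' '\n' | grep -qx "$2"
}

# On the VM (as heine), add the user to the docker group if missing,
# then log out/in (or run `newgrp docker`) for it to take effect:
#   in_group heine docker || echo 'Ansjos123' | sudo -S usermod -aG docker heine
```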
## Docker Stack
**Compose file:** `/opt/npm-stack/docker-compose.yml`
| Container | Image | Status | Ports |
|-----------|-------|--------|--------|
| npm-stack-app-1 | jc21/nginx-proxy-manager:latest | Running | 80, 81, 443 |
| npm-stack-db-1 | mariadb:12.0 | Running | 3306 (internal) |
**Volumes:**
- `./data` → `/data` (nginx config, proxy hosts)
- `./letsencrypt` → `/etc/letsencrypt` (SSL certificates)
- `./mysql` → `/var/lib/mysql` (database)
**DB and WebGUI credentials:** See `CURRENT-credentials.md` → NPM Hjemme
## Proxy Hosts
**Total: 153 proxy hosts** (130 with SSL, 23 without SSL/HTTP-only)
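The 130/153 split can be re-derived from the per-host nginx configs (the `proxy_host` directory is the one used elsewhere in this doc; the counting helper is a sketch):

```shell
# Summarize how many proxy-host configs carry an ssl_certificate directive.
count_ssl_hosts() {
  local dir=$1 total with
  total=$(find "$dir" -maxdepth 1 -name '*.conf' | wc -l)
  with=$(grep -l 'ssl_certificate' "$dir"/*.conf 2>/dev/null | wc -l)
  echo "$with of $total hosts have SSL"
}

# On the NPM VM:
# count_ssl_hosts /opt/npm-stack/data/nginx/proxy_host
```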
|
||||||