KB: magitek-ops

9181 entries · 224 domains · 12.75 MB database · Last ingest: 2026-03-20 09:16

9181 results — page 1 of 184

Title Domain Type Severity Source Freshness Updated
Task: Implement scanner_gdrive.py claude/agents pattern info HANDOFF-MP0002-gdrive-scanner.md 100 2026-03-04
Source file: /var/www/magitek-ops/coordination/handoff/HANDOFF-MP0002-gdrive-scanner.md
Source date: None
Keywords: []
Cross-domain: []
Symptoms: []
Body:
### What
A new scanner module `tools/dam_indexer/scanner_gdrive.py` that uses `rclone lsjson` to index Google Drive accounts.
### Rclone Config
- Location: `/var/www/.config/rclone/rclone.conf` (www-data user)
- Usage: `RCLONE_CONFIG=/var/www/.config/rclone/rclone.conf rclone lsjson <remote>: --recursive` (a runnable sketch follows this entry)
- Verified: `rclone lsd gdrive:` works
### Google Drive Accounts (from SkyMirror DB)
The main accounts that should be indexed:
| rclone_remote | email | name |
|---------------|-------|--...
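For reference, a minimal sketch of the indexing call the new module would wrap, assuming a remote named `gdrive` and a throwaway output path (both illustrative; the module's actual interface is not defined in this handoff):
```bash
# Index one Google Drive remote as JSON using the www-data rclone config.
export RCLONE_CONFIG=/var/www/.config/rclone/rclone.conf
rclone lsjson "gdrive:" --recursive --files-only > /tmp/gdrive-index.json
# Sanity check: count indexed entries
python3 -c "import json; print(len(json.load(open('/tmp/gdrive-index.json'))))"
```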
Current State ops/storage pattern info HANDOFF-MP0002-gdrive-scanner.md 100 2026-03-04
Source file: /var/www/magitek-ops/coordination/handoff/HANDOFF-MP0002-gdrive-scanner.md
Source date: None
Keywords: []
Cross-domain: []
Symptoms: []
Body:
- DAM indexer: `tools/dam_indexer/` — fully operational
- 1.46M files indexed (nc-magitek, nc-hsal, nc-nativja, truenas x3, orion, hermes)
- Cron: daily at 04:30 with the `--smart` flag (illustrative entry below)
- Audits: A- (implementation + quality)
- Config changed: orion now has extra scan_roots (C:\scan, C:\stock-photo, C:\Vector, C:\video) and exclude patterns for NC sync + IDE caches — NOT rescanned yet
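For illustration, the daily 04:30 `--smart` run could be wired up roughly like this; the cron file name, user, and entry point are assumptions, not taken from the handoff:
```bash
# /etc/cron.d/dam-indexer — hypothetical wiring of the documented schedule
30 4 * * * www-data cd /var/www/magitek-ops/tools/dam_indexer && ./run_indexer.sh --smart >> /var/log/dam-indexer.log 2>&1
```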
Context claude/agents pattern info HANDOFF-MP0002-gdrive-scanner.md 100 2026-03-04
Source file: /var/www/magitek-ops/coordination/handoff/HANDOFF-MP0002-gdrive-scanner.md
Source date: None
Keywords: []
Cross-domain: []
Symptoms: []
Body:
MP-0002 (DAM SQLite Catalog) is fully implemented and in production. All 8 storage locations scan OK. A new scanner module is now needed for Google Drive via rclone.
Expert files updated claude/expert pattern info HANDOFF-cloudflare-pihole-split-dns.md 100 2026-03-04
Source file: /var/www/magitek-ops/coordination/handoff/HANDOFF-cloudflare-pihole-split-dns.md
Source date: None
Keywords: []
Cross-domain: []
Symptoms: []
Body:
- `kontoret/services/CURRENT-pihole-kontoret.md` → v1.1
- `kontoret/network/CURRENT-pfsense.md` → v1.1
- `shared/CURRENT-cloudflare.md` → v1.2
- `kontoret/services/CURRENT-npm-kontoret.md` → v1.3
Remaining (next session) claude/agents pattern info HANDOFF-cloudflare-pihole-split-dns.md 100 2026-03-04
Source file: /var/www/magitek-ops/coordination/handoff/HANDOFF-cloudflare-pihole-split-dns.md
Source date: None
Keywords: []
Cross-domain: []
Symptoms: []
Body:
### A. Verify NS propagation to Cloudflare
```bash
AUTH_EMAIL="heine.salbu@gmail.com"
AUTH_KEY="$(grep CLOUDFLARE_API_KEY /var/www/magitek-ops/.env | cut -d= -f2)"
curl -s "https://api.cloudflare.com/client/v4/zones?per_page=20" \
  -H "X-Auth-Email: $AUTH_EMAIL" -H "X-Auth-Key: $AUTH_KEY" | python3 -c "
import json,sys
for z in json.load(sys.stdin)['result']:
    print(f\"{z['name']:25} {z['status']}\")"
```
All domains should show "active" (not "pending" or "invalid nameservers"). ##...
What was done ops/network pattern info HANDOFF-cloudflare-pihole-split-dns.md 100 2026-03-04
Source file: /var/www/magitek-ops/coordination/handoff/HANDOFF-cloudflare-pihole-split-dns.md
Source date: None
Keywords: []
Cross-domain: []
Symptoms: []
Body:
### 1. Cloudflare DNS — 9 new CNAME records for *.magitek.no
All point to `px10.magitek.no` (62.97.227.206 = kontoret WAN); a hedged creation sketch follows this entry:
- px1, pmox10, pmox15 (Proxmox servers)
- pfsense-kolsk, npm-kolsk (duplicated services with location suffix)
- zyxel1920, freshtomato, mainwp, todo
**Naming convention:** `-kolsk` = kontoret (office), `-skeis` = hjemme (home)
### 2. NPM Kontoret — 9 new proxy hosts (ID 21-29)
All with ACL=5 (LAN_Kontor_hjemme_Scandic):
- Proxmox: px1 (21), pmox10 (22), pmox15 (23) — websocket=true, f...
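A hedged sketch of creating one such CNAME via the Cloudflare API v4, reusing the credentials from the verification step above; the zone ID and the `proxied` flag are placeholders/assumptions (the handoff does not say how the records were actually created):
```bash
AUTH_EMAIL="heine.salbu@gmail.com"
AUTH_KEY="$(grep CLOUDFLARE_API_KEY /var/www/magitek-ops/.env | cut -d= -f2)"
ZONE_ID="<magitek.no zone id>"   # placeholder
curl -s -X POST "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records" \
  -H "X-Auth-Email: $AUTH_EMAIL" -H "X-Auth-Key: $AUTH_KEY" \
  -H "Content-Type: application/json" \
  --data '{"type":"CNAME","name":"px1","content":"px10.magitek.no","proxied":false}'
```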
IMPORTANT: Preserve Source ops/network pattern info HANDOFF-calibre-opds-plugin.md 100 2026-03-12
Source file: /var/www/magitek-ops/coordination/handoff/HANDOFF-calibre-opds-plugin.md
Source date: None
Keywords: []
Cross-domain: []
Symptoms: []
Body:
Copy patched source to permanent location before /tmp cleanup:
```bash
cp -r /tmp/opds-reader-fix /var/www/magitek-ops/tools/opds-reader-patch/
```
Expert Files claude/expert pattern info HANDOFF-calibre-opds-plugin.md 100 2026-03-12
Source file: /var/www/magitek-ops/coordination/handoff/HANDOFF-calibre-opds-plugin.md
Source date: None
Keywords: []
Cross-domain: []
Symptoms: []
Body:
- Server: `coordination/experts/operations/magitek-server-ops/hjemme/services/CURRENT-calibre-server.md` - Plugin: `coordination/experts/operations/magitek-server-ops/hjemme/services/CURRENT-calibre-opds-plugin.md` - SSH to Orion: `ssh heine@172.20.0.22` (Windows, has SSH)
Deploy Command claude/agents pattern info HANDOFF-calibre-opds-plugin.md 100 2026-03-12
Source file: /var/www/magitek-ops/coordination/handoff/HANDOFF-calibre-opds-plugin.md
Source date: None
Keywords: []
Cross-domain: []
Symptoms: []
Body:
```bash
cd /tmp/opds-reader-fix/un-pogaz-OPDS-reader-714fb9d && \
rm -f /tmp/OPDS-Reader-patched.zip && \
zip -r /tmp/OPDS-Reader-patched.zip . -x '.git/*' '.github/*' '*.pyc' '__pycache__/*' '.vscode/*' '.gitmodules' '.gitignore' 'pyproject.toml' 'README.md' 'changelog.md' 'LICENSE' && \
scp /tmp/OPDS-Reader-patched.zip heine@172.20.0.22:"C:/Users/heine/AppData/Roaming/calibre/plugins/OPDS Reader.zip"
```
6 Patches Applied (summary) claude/agents pattern info HANDOFF-calibre-opds-plugin.md 100 2026-03-12
Source file: /var/www/magitek-ops/coordination/handoff/HANDOFF-calibre-opds-plugin.md
Source date: None
Keywords: []
Cross-domain: []
Symptoms: []
Body:
| # | Bug | Fix | Location |
|---|-----|-----|----------|
| 1 | `feed.links` AttributeError | `getattr(feed, 'links', [])` | findNextUrl |
| 2 | Acquisition feed detection | `_isAcquisitionFeed()` + `_rootFeedIsAcquisition` | downloadOpdsRootCatalog |
| 3 | Relative URL + lost auth in pagination | `_makeAbsoluteUrl()` preserves credentials | all pagination |
| 4 | Auto-drill through nav levels | `downloadOpdsCatalog` drills into first nav entry | downloadOpdsCatalog |
| 5 | No library list in OPDS | ...
Source Code ops/proxmox pattern info HANDOFF-calibre-opds-plugin.md 100 2026-03-12
Source file: /var/www/magitek-ops/coordination/handoff/HANDOFF-calibre-opds-plugin.md
Source date: None
Keywords: []
Cross-domain: []
Symptoms: []
Body:
**Permanent**: `/var/www/magitek-ops/tools/opds-reader-patch/action.py` **Volatile**: `/tmp/opds-reader-fix/un-pogaz-OPDS-reader-714fb9d/` (may be lost on reboot)
Resolved Issues (v6→v9) claude/agents incident medium HANDOFF-calibre-opds-plugin.md 100 2026-03-12
Source file: /var/www/magitek-ops/coordination/handoff/HANDOFF-calibre-opds-plugin.md
Source date: None
Keywords: []
Cross-domain: []
Symptoms: []
Body:
v6 had libraries in dropdown but Load OPDS did nothing. Root cause found via file-based debug logging: - **v7**: Added SSL fallback (`ssl._create_unverified_context()`) + debug logging to `~/opds-debug.log` - **v8**: ROOT CAUSE FIX — `_fetchAndParse()` had no credentials because catalog URLs from `_fetchCalibreLibraries()` don't contain `user:pass@`. Fixed by falling back to `_baseOpdsUrl` for credentials - **v9**: Book download URLs were relative (`/get/epub/2196/business`). Calibre needs a...
What Works ops/network pattern info HANDOFF-calibre-opds-plugin.md 100 2026-03-12
Source file: /var/www/magitek-ops/coordination/handoff/HANDOFF-calibre-opds-plugin.md
Source date: None
Keywords: []
Cross-domain: []
Symptoms: []
Body:
- **calibre.hsal.no** (calibre-server, port 8080): HTTPS, Basic Auth, OPDS feed — all verified
- **books.hsal.no** (Calibre-Web, port 8083): HTTPS, login page works, SSL cert valid
- **OPDS feed**: Returns valid Atom/OPDS XML, 35 libraries, pagination with `next` links
- **Plugin dropdown**: All 35 libraries show correctly via REST API `/ajax/library-info`
- **Plugin installed**: `C:\Users\heine\AppData\Roaming\calibre\plugins\OPDS Reader.zip`
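Both endpoints can be spot-checked from a shell; a sketch assuming the same Basic Auth credentials the plugin uses (placeholders below):
```bash
# Library list used by the plugin dropdown
curl -su 'USER:PASS' https://calibre.hsal.no/ajax/library-info | python3 -m json.tool
# OPDS root feed — should return Atom/OPDS XML
curl -su 'USER:PASS' https://calibre.hsal.no/opds | head -n 20
```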
Context ops/proxmox pattern info HANDOFF-calibre-opds-plugin.md 100 2026-03-12
Source file: /var/www/magitek-ops/coordination/handoff/HANDOFF-calibre-opds-plugin.md
Source date: None
Keywords: []
Cross-domain: []
Symptoms: []
Body:
We patched the OPDS Reader plugin (github.com/un-pogaz/OPDS-reader v2.3.0) for Calibre 9.4 on Windows (ORION, 172.20.0.22) to work with our calibre-server (CT 102, calibre.hsal.no).
Summary: MCP servers after this session ops/network pattern info HANDOFF-20260313-security-mcp-improvements.md 100 2026-03-13
Source file: /var/www/magitek-ops/coordination/handoffs/HANDOFF-20260313-security-mcp-improvements.md
Source date: None
Keywords: []
Cross-domain: []
Symptoms: []
Body:
| # | MCP Server | Type | Status |
|---|-----------|------|--------|
| 1 | serena | Code navigation | Existing |
| 2 | context7 | Documentation | Existing |
| 3 | playwright | Browser testing | Existing |
| 4-9 | proxmox-* (6 of them) | VM/CT management | Existing |
| 10 | dam-sqlite | DAM database | Existing |
| 11 | cloudflare | DNS, zones, audit | **NEW** |
| 12 | docker | Containers (npm-kontoret) | **NEW** |
| 13 | github | Repos, issues, PRs | **NEW** |
| 14 | pfsense-konto...
Remaining: pfSense MCP ops/network incident medium HANDOFF-20260313-security-mcp-improvements.md 100 2026-03-13
Source file: /var/www/magitek-ops/coordination/handoffs/HANDOFF-20260313-security-mcp-improvements.md
Source date: None
Keywords: []
Cross-domain: []
Symptoms: []
Body:
### Status
The user found https://github.com/gensecaihq/pfsense-mcp-server — a thorough audit was performed.
### Audit result: SAFE with caveats
- **GenSecAI** — serious non-profit, 20 repos, known for Wazuh-MCP (139 stars)
- **Code:** Python 3.11+, FastMCP, all standard dependencies (httpx, pydantic, cryptography)
- **No telemetry** — only direct pfSense REST API v2 calls
- **SSH injection fixed** (June 2025) with whitelist
- **25+ tools** — including read AND write operations (firewall rules, N...
What was done claude/agents pattern info HANDOFF-20260313-security-mcp-improvements.md 100 2026-03-13
Source file: /var/www/magitek-ops/coordination/handoffs/HANDOFF-20260313-security-mcp-improvements.md
Source date: None
Keywords: []
Cross-domain: []
Symptoms: []
Body:
Analysis of the "Security #2" chat uncovered 7 efficiency problems. All were fixed.
### Completed changes (all in ~/.claude/ or .mcp.json — not git-tracked)
**Agent files updated (5 in total):**
1. `~/.claude/agents/magitek-server-infra-ops.md` — Proxmox MCP table, Cloudflare/Docker/GitHub MCP, hook awareness, no-polling rule, maintenance mode reference
2. `~/.claude/agents/magitek-proxmox-maintenance.md` — Proxmox MCP table, no-polling rule
3. `~/.claude/agents/pentest-operator.md` — Proxmo...
**Quality Mode:** SPEED (architect → implement) magitek-ops lesson medium MASTERPLAN.md 100 2026-03-20T02:00:45Z
Source file: coordination/masterplans/active/MP-0010-260318-kb-system-rename/MASTERPLAN.md
Source date: 2026-03-18
Keywords: ["quality","mode","speed","architect","implement"]
Cross-domain: []
Symptoms: []
Body:
Rename the KB package from `magitek/kb-system` to `magitek/kb-system` across the entire Magitek i… magitek-ops lesson medium MASTERPLAN.md 100 2026-03-20T02:00:45Z
Source file: coordination/masterplans/active/MP-0010-260318-kb-system-rename/MASTERPLAN.md
Source date: 2026-03-18
Keywords: ["rename","package","magitek","system","across","entire","infrastructure","includes","namespace","change"]
Cross-domain: []
Symptoms: []
Body:
astructure. This includes: PHP namespace change (`Magitek\KbSystem\` → `Magitek\KbSystem\`), GitHub repo rename, local directory rename, Packeton re-registration, all 5 consumer workspace updates, and documentation/agent config updates.
| Tier | Model | Provider | Input Context | Cost | |------|-------|----------|--------------|----… magitek-ops lesson medium MASTERPLAN.md 100 2026-03-20T02:00:45Z
Source file: coordination/masterplans/active/MP-0009-260318-session-audit-pipeline/MASTERPLAN.md
Source date: 2026-03-18
Keywords: ["tier","model","provider","input","context","cost","budget","gemini","flash","free"]
Cross-domain: []
Symptoms: []
Body:
| budget | gemini-2.5-flash | Gemini CLI | ~1M | Free |
| standard | gemini-3-pro-preview | Gemini CLI | ~1M | Free |
| codex | gpt-5.3-codex | Codex CLI | 272K | Free (ChatGPT sub) |
| premium | claude-sonnet-4.6 | Copilot CLI | 128K | 1x premium |
| ultra | claude-opus-4.6 | Copilot CLI | 128K | 3x premium |
One external agent runs all three analyses sequentially against the full, uncompressed transcript… magitek-ops lesson medium MASTERPLAN.md 100 2026-03-20T02:00:45Z
Source file: coordination/masterplans/active/MP-0009-260318-session-audit-pipeline/MASTERPLAN.md
Source date: 2026-03-18
Keywords: ["external","agent","runs","three","analyses","sequentially","against","full","uncompressed","transcript"]
Cross-domain: []
Symptoms: []
Body:
roducing three report files in the workspace's audit directories. Smart model selection picks the cheapest sufficient model based on transcript size and user-configured tier preference, with automatic fallback chain: ultra/premium → codex → standard (Gemini 1M).
Build a set of shell scripts that extract Claude Code session transcripts from JSONL files, strip… magitek-ops lesson medium MASTERPLAN.md 100 2026-03-20T02:00:45Z
Source file: coordination/masterplans/active/MP-0009-260318-session-audit-pipeline/MASTERPLAN.md
Source date: 2026-03-18
Keywords: ["build","shell","scripts","extract","claude","code","session","transcripts","jsonl","files"]
Cross-domain: []
Symptoms: []
Body:
ise, and delegate all three post-session analyses (meta-audit, MCP-audit, knowledge-harvest) to an external CLI agent (Gemini CLI, Codex CLI, or Copilot CLI). This replaces the current manual in-session approach that costs $15-30 per run on Opus.
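A minimal sketch of the extraction step, under the assumption that session JSONL lines carry a `type` field and a `message` object (the real scripts and field names may differ):
```bash
# Flatten one Claude Code session JSONL into a plain transcript (illustrative paths/fields).
SESSION="$HOME/.claude/projects/<project>/<session-id>.jsonl"
jq -r 'select(.type == "user" or .type == "assistant")
       | "\(.type): \(.message.content // empty | tostring)"' "$SESSION" > /tmp/transcript.txt
wc -c /tmp/transcript.txt   # transcript size feeds the model-tier selection
```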
**Trigger:** Postiz VM 153 had full disk and crash-looped for 13 days undetected. magitek-ops lesson medium MASTERPLAN.md 100 2026-03-20T02:00:45Z
Source file: coordination/masterplans/active/MP-0011-260318-monitoring-ha-stack/MASTERPLAN.md
Source date: 2026-03-18
Keywords: ["trigger","postiz","full","disk","crash","looped","days","undetected","system","alerted"]
Cross-domain: []
Symptoms: []
Body:
This system would have alerted at 80% disk usage.
Deploy a high-availability Prometheus + Grafana + Alertmanager monitoring stack across both Magit… magitek-ops lesson medium MASTERPLAN.md 100 2026-03-20T02:00:45Z
Source file: coordination/masterplans/active/MP-0011-260318-monitoring-ha-stack/MASTERPLAN.md
Source date: 2026-03-18
Keywords: ["deploy","high","availability","prometheus","grafana","alertmanager","monitoring","stack","across","both"]
Cross-domain: []
Symptoms: []
Body:
locations (kontoret + hjemme), with optional cloud VPS as a third observer. Each location runs an independent monitoring instance that scrapes local targets and federates with the other site. If one location loses power/connectivity, the other continues monitoring and alerting independently.
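One quick way to verify that cross-site federation is actually serving data is to query the standard Prometheus `/federate` endpoint; the hostname and job label below are placeholders:
```bash
# Pull a small federated sample from the remote site's Prometheus.
curl -sG 'http://prometheus-hjemme.example:9090/federate' \
  --data-urlencode 'match[]={job="node"}' | head -n 20
```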
PIHOLE-05: IP address .2 indicates DNS role operations/magitek-server-ops/hjemme/compute knowledge medium CURRENT-tkl-pihole.md 100 2026-03-20 02:00:45
Source file: coordination/experts/operations/magitek-server-ops/hjemme/compute/CURRENT-tkl-pihole.md
Source date: 2026-03-19
Keywords: ["pihole05","ipadresse","indikerer","dnsrolle"]
Cross-domain: []
Symptoms: []
Body:
### PIHOLE-05: IP address .2 indicates DNS role
- 192.168.86.2 is typically the DNS address set in pfSense DHCP.
- If this container goes down, the home network loses DNS lookups.
- No secondary DNS is documented.
## Common Operations
### Check status
```bash
ssh TU-px5 "pct exec 132 -- bash -lc '/usr/local/bin/pihole status'"
```
### Check statistics
```bash
ssh TU-px5 "pct exec 132 -- bash -lc '/usr/local/bin/pihole -c -e'"
```
### Update gravity (blocklists)
```bash
ssh TU-px5 "pct exec 132 -- bash -lc '/usr/local/bin/pihole -g'"
```
### Show top blocked domains
```bash
ssh TU-px5 "pct exec 132 -- bash -lc '/usr/local/bin/pihole -t'"
```
### Restart DNS
```bash
ssh TU-px5 "pct exec 132 -- bash -lc '/usr/local/bin/pihole restartdns'"
```
### Add a local DNS entry
```bash
ssh TU-px5 "pct exec 132 -- bash -lc 'echo \"IP HOSTNAME\" >> /etc/pihole/custom.list && /usr/local/bin/pihole restartdns'"
```
## Changelog
### v1.0 - 2026-03-03
- Initial mapping from the live system
- Pi-hole v5.7 with Unbound recursive DNS
- 77 526 blocked domains, 1 adlist (StevenBlack)
- 59 local DNS entries documented
- Gotchas PIHOLE-01 through PIHOLE-05 identified
Unbound configuration operations/magitek-server-ops/hjemme/compute knowledge medium CURRENT-tkl-pihole.md 100 2026-03-20 02:00:45
Source file: coordination/experts/operations/magitek-server-ops/hjemme/compute/CURRENT-tkl-pihole.md
Source date: 2026-03-19
Keywords: ["unboundkonfigurasjon"]
Cross-domain: []
Symptoms: []
Body:
## Unbound configuration
| Parameter | Value |
|-----------|-------|
| Interface | 127.0.0.1 (loopback only) |
| Port | 5335 |
| IPv4 | Yes |
| IPv6 | No |
| Threads | 1 |
| Prefetch | Yes |
| EDNS buffer | 1232 |
| harden-glue | Yes |
| harden-dnssec-stripped | Yes |
| Private ranges | All RFC1918 + link-local |
## Cron jobs
| Time | Job |
|-----|------|
| Sundays 04:28 | Gravity update (adlists) |
| Daily 00:00 | Flush pihole log (logrotate) |
| Every 10 min | Check local version |
| Daily 12:10 | Check remote version |
| @reboot | Logrotate + version check |
## Known Gotchas
### PIHOLE-01: Severely outdated version
- Pi-hole v5.7 → v6.4 is available. Major version upgrade.
- **Debian 10 Buster is EOL** (End of Life).
- An upgrade should be planned (backup → new CT with Debian 12 → Pi-hole v6).
### PIHOLE-02: Only one adlist
- Only StevenBlack/hosts is enabled (77k domains).
- Many use additional lists (OISD, hagezi, etc.) for better coverage.
- Consider adding more lists for better blocking.
### PIHOLE-03: Over-allocated resources
- 6 GB RAM allocated, only 82 MB used (~1%).
- 8 cores allocated, almost no CPU usage.
- Can safely be reduced to 1-2 cores and 512 MB-1 GB RAM.
### PIHOLE-04: Many outdated custom.list entries
- 59 local DNS entries; many refer to VMs/CTs that are probably stopped or deleted.
- Should be cleaned up to only active services.
### PIHOLE-06: custom.list does NOT work for CNAME override — use dnsmasq address=
- **Symptom:** `echo "192.168.x.x hostname" >> /etc/pihole/custom.list` works for A records, but NOT for overriding CNAME resolution (e.g. forcing a domain to a specific IP internally)
- **Cause:** Pi-hole `custom.list` only supports simple A-record overrides. CNAME overrides and "address" directives require dnsmasq configuration
- **Correct method — dnsmasq address= directive:**
```bash
# Create a file in /etc/dnsmasq.d/ (e.g. 99-monitoring.conf)
echo "address=/monitoring-k.magitek.no/172.20.0.76" > /tmp/99-monitoring.conf
# Copy to the CT via Proxmox exec
pct push 132 /tmp/99-monitoring.conf /etc/dnsmasq.d/99-monitoring.conf
# Restart Pi-hole DNS
pct exec 132 -- bash -lc '/usr/local/bin/pihole restartdns'
```
- **REQUIREMENT — pihole.toml:** `etc_dnsmasq_d` must be `true` in `/etc/pihole/pihole.toml` for `/etc/dnsmasq.d/` files to be loaded. Check with:
```bash
pct exec 132 -- bash -lc 'grep etc_dnsmasq_d /etc/pihole/pihole.toml'
```
Set to true if necessary: `sed -i 's/etc_dnsmasq_d = false/etc_dnsmasq_d = true/' /etc/pihole/pihole.toml`
- **NOTE:** This Pi-hole (CT 132) is the home instance. The office has its own Pi-hole (CT 108) — see `kontoret/services/EXPERT-infra-pihole-kontoret-v1.5-20260319.md`
- **Applies to:** Pi-hole v5.x and v6.x (both use dnsmasq under the hood)
Pi-hole Version operations/magitek-server-ops/hjemme/compute knowledge medium CURRENT-tkl-pihole.md 100 2026-03-20 02:00:45
Source file: coordination/experts/operations/magitek-server-ops/hjemme/compute/CURRENT-tkl-pihole.md
Source date: 2026-03-19
Keywords: ["pihole","versjon"]
Cross-domain: []
Symptoms: []
Body:
## Pi-hole Version
| Component | Installed | Latest |
|-----------|-----------|--------|
| Pi-hole Core | v5.7 | v6.4 |
| AdminLTE (Web) | v5.9 | v6.5 |
| FTL | v5.12.1 | — |
**NOTE:** Significantly outdated — v5 → v6 is a major upgrade.
## Blocking
| Parameter | Value |
|-----------|-------|
| **Blocked domains** | 77 526 |
| **Adlists** | 1 (StevenBlack/hosts) |
| **Blocking enabled** | Yes |
| **DNSSEC** | No (in Pi-hole, but Unbound has harden-dnssec-stripped) |
| **Query logging** | Yes (PRIVACYLEVEL=0 = full logging) |
| **Cache size** | 10 000 |
### Adlist
| URL | Enabled | Last updated |
|-----|---------|---------------|
| `https://raw.githubusercontent.com/StevenBlack/hosts/master/hosts` | Yes | 2026-02-28 |
### Gravity update
Gravity is updated automatically **Sundays at 04:28** via cron.
## Local DNS (custom.list)
Pi-hole acts as the local DNS server for the home network with 59 entries:
| IP | Hostname | Note |
|----|----------|---------|
| 192.168.86.1 | pfsense.loc | pfSense gateway |
| 192.168.86.2 | pihole.loc | This container |
| 192.168.86.5 | zyxel.loc | Zyxel switch |
| 192.168.86.12 | docker3.loc | Docker host |
| 192.168.86.14 | docker4.loc, nx.loc | Docker + NX |
| 192.168.86.21 | pbs1.loc | Proxmox Backup Server |
| 192.168.86.81 | idrac1.loc | iDRAC #1 |
| 192.168.86.82 | idrac2.loc | iDRAC #2 |
| 192.168.86.83 | px9.loc | Proxmox node 9 |
| 192.168.86.85 | px7.loc | Proxmox node 7 |
| 192.168.86.105 | docker1.loc | Docker host |
| 192.168.86.116 | px5.loc | Proxmox px5 |
| 192.168.86.120 | truenas.loc, truenas, ftp.magitek.no | TrueNAS |
| 192.168.86.166 | wp1.loc | WordPress |
| 192.168.86.167 | ubuntu_dt1.loc | Ubuntu desktop |
| 192.168.86.184 | suitecrm.loc | SuiteCRM |
| 192.168.86.187 | win10vm1 | Windows 10 VM |
| 192.168.86.192 | suitecrm8.loc | SuiteCRM 8 |
| 192.168.86.195 | dockervm1.loc, docker5.loc | Docker VM |
| 192.168.86.196 | docker2.loc | Docker host |
| 192.168.86.197 | redmine.loc | Redmine |
| 192.168.86.198 | odoo.loc | Odoo ERP |
| 192.168.86.199 | collabtive.loc | Collabtive |
| 192.168.86.200 | snipeit.loc | Snipe-IT |
| 192.168.86.203 | leantime.loc | Leantime |
| 192.168.86.207 | laptop.loc | Laptop |
| 192.168.86.208 | docker-registry.loc | Docker registry |
| 192.168.86.217 | mayan.loc | Mayan EDMS |
| 192.168.86.218 | invoiceninja.loc | Invoice Ninja |
| 192.168.86.219 | mysql1.loc | MySQL server |
| 192.168.86.220 | port7.loc | Portainer |
| 192.168.86.221 | lamp1.loc, s1.zuz.loc | LAMP + Zuz |
| 192.168.86.226 | yunohost.loc | YunoHost |
| 192.168.86.228 | plex.loc | Plex |
| 192.168.86.229 | observium.loc, tkl-observium | Observium |
| 192.168.86.232 | gitlab.loc, gitlab.magitek.no | GitLab |
| 192.168.86.233 | pxmxbckpsrv.loc, nextcloud.loc | PBS / Nextcloud |
| 192.168.86.234 | ubuntu-docker6.loc | Docker host |
| 192.168.86.235 | laravel.loc | Laravel dev |
| 192.168.86.236 | ansible.loc | Ansible |
| 192.168.86.240 | symfony.loc | Symfony dev |
| 192.168.86.241 | nodejs.loc | Node.js |
| 192.168.86.242 | redis.loc | Redis |
| 192.168.86.243 | moodle.loc | Moodle LMS |
| 192.168.86.244 | virtualmin.loc | Virtualmin |
| 192.168.86.245 | canvas.loc | Canvas LMS |
| 192.168.86.246 | lighttpd.magitek.no | Lighttpd |
| 192.168.86.247 | drupal9.magitek.no | Drupal 9 |
| 192.168.86.248 | magento.loc | Magento |
| 192.168.86.249 | processmaker.loc | ProcessMaker |
| 192.168.86.250 | truenasscale.loc | TrueNAS Scale |
Identity operations/magitek-server-ops/hjemme/compute knowledge medium CURRENT-tkl-pihole.md 100 2026-03-20 02:00:45
Source file: coordination/experts/operations/magitek-server-ops/hjemme/compute/CURRENT-tkl-pihole.md
Source date: 2026-03-19
Keywords: ["identitet"]
Cross-domain: []
Symptoms: []
Body:
# Infrastructure Sub-Expert: tkl-pihole (Pi-hole DNS)
**Version:** 1.0 **Date:** 2026-03-03
**Parent:** coordination/experts/operations/magitek-server-ops/CURRENT.md
**Load:** coordination/experts/operations/magitek-server-ops/CURRENT-tkl-pihole.md
---
## Identity
| Property | Value |
|----------|-------|
| **Hostname** | tkl-pihole |
| **Type** | LXC container (CT 132) on px5 |
| **LAN IP** | 192.168.86.2 |
| **Gateway** | 192.168.86.1 (pfSense hjemme) |
| **Location** | Hjemme (Skeisstøa 37c) |
| **OS** | Debian 10 (Buster) — Turnkey Linux base |
| **Role** | DNS server with ad/tracker blocking + local DNS |
| **Status** | Active, 85+ days uptime |
| **Onboot** | Yes (startup order=2) |
| **Web UI** | http://192.168.86.2/admin |
## Access
| Method | Details |
|--------|----------|
| SSH (via px5) | `ssh TU-px5 "pct exec 132 -- bash"` |
| Web UI | http://192.168.86.2/admin (password: Ansjos123) |
| CLI | `ssh TU-px5 "pct exec 132 -- bash -lc '/usr/local/bin/pihole <cmd>'"` |
## Resources
| Resource | Allocated | Used |
|---------|---------|-------|
| **CPU** | 8 cores | ~0% (very lightly loaded) |
| **RAM** | 6 GB | 82 MB (~1%) |
| **Disk** | 32 GB | 1.4 GB (~5%) |
| **Swap** | 4 GB | 12 MB |
| **Arch** | amd64 | Unprivileged, nesting=1 |
## Services
| Service | Port | Status | Role |
|----------|------|--------|-------|
| **pihole-FTL** | 53 (UDP/TCP) | Running | DNS server + blocking |
| **Unbound** | 5335 (loopback) | Running | Recursive DNS resolver |
| **Lighttpd** | 80 | Running | Web UI for Pi-hole admin |
### DNS architecture
```
Clients (192.168.86.x)
  │
  ▼ port 53
Pi-hole FTL (192.168.86.2)
  │ blocks ads/trackers
  │ answers from custom.list (local DNS)
  │
  ▼ port 5335 (loopback)
Unbound (recursive resolver)
  │ queries root servers directly
  ▼
Root DNS servers
```
**No external DNS forwarder** — Unbound performs full recursive resolution directly against the root servers. This gives better privacy than forwarding to Google/Cloudflare.
Jobs 1-6 lost schedules during CT 135 migration (RESOLVED 2026-03-16) operations/magitek-server-ops/hjemme/services gotcha medium CURRENT-duplicati.md 100 2026-03-20 02:00:44
Source file: coordination/experts/operations/magitek-server-ops/hjemme/services/CURRENT-duplicati.md
Source date: 2026-03-16
Keywords: ["jobs","lost","schedules","during","135","migration","resolved","20260316"]
Cross-domain: []
Symptoms: []
Body:
- Migrated from laravelserver-v11 on 2026-03-05 - Schedule data was not preserved; all 6 jobs showed "No schedule" and "LastRun: Never" - Job data and Google Drive history are intact (94+ filesets for older jobs) - **Fix applied 2026-03-16:** Schedules restored via REST API PUT `/api/v1/backup/{id}` with Schedule object - Use `python3 /tmp/create_schedules.py` on CT 135 if schedules are lost again (script on local /tmp) ---
Duplicate remote files block backup AND repair operations/magitek-server-ops/hjemme/services gotcha medium CURRENT-duplicati.md 100 2026-03-20 02:00:44
Source file: coordination/experts/operations/magitek-server-ops/hjemme/services/CURRENT-duplicati.md
Source date: 2026-03-16
Keywords: ["duplicate","remote","files","block","backup","and","repair"]
Cross-domain: []
Symptoms: []
Body:
- Job had duplicate file on Google Drive - Backup and repair both fail with "duplicates" error - **Fix used:** Changed TargetURL to new folder, leaving orphans in old folder - **Prevention:** If backup fails mid-upload, check for duplicate remote files BEFORE running repair
Bash escaping over SSH kills heredocs with special chars in authid operations/magitek-server-ops/hjemme/services gotcha medium CURRENT-duplicati.md 100 2026-03-20 02:00:44
Source file: coordination/experts/operations/magitek-server-ops/hjemme/services/CURRENT-duplicati.md
Source date: 2026-03-16
Keywords: ["bash","escaping","over","ssh","kills","heredocs","with","special","chars","authid"]
Cross-domain: []
Symptoms: []
Body:
- authid contains `%3A` and `!` which bash expands/mangles over SSH - **Fix:** Write script to file locally, scp to CT, run there
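The workaround as a shell pattern (host alias and file name are illustrative):
```bash
# Keep %3A and ! literal by never passing them through a remote shell:
cat > /tmp/create_job.sh <<'EOF'
#!/usr/bin/env bash
# authid and other special-character values stay literal inside this file
EOF
scp /tmp/create_job.sh <ct-host>:/tmp/create_job.sh
ssh <ct-host> 'bash /tmp/create_job.sh'
```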
Unprivileged LXC cannot NFS mount directly operations/magitek-server-ops/hjemme/services gotcha medium CURRENT-duplicati.md 100 2026-03-20 02:00:44
Source file: coordination/experts/operations/magitek-server-ops/hjemme/services/CURRENT-duplicati.md
Source date: 2026-03-16
Keywords: ["unprivileged","lxc","cannot","nfs","mount","directly"]
Cross-domain: []
Symptoms: []
Body:
- `mount.nfs: Operation not permitted` in unprivileged CT - **Fix:** Mount NFS on Proxmox host, then bind-mount into CT via `pct set`
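A hedged sketch of the host-side workaround; the NFS server, export path, and CT ID are placeholders/assumptions:
```bash
# On the Proxmox host: mount the NFS export, then bind-mount it into the unprivileged CT.
mount -t nfs 192.168.86.120:/mnt/<export> /mnt/nfs-backups
pct set <CTID> -mp0 /mnt/nfs-backups,mp=/mnt/backups
# Persist the host mount (fstab or a systemd mount unit) so it survives reboots;
# the bind mount itself is stored in the CT config.
```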
CLI repair fails if Duplicati service is running -- OAuth lock operations/magitek-server-ops/hjemme/services gotcha medium CURRENT-duplicati.md 100 2026-03-20 02:00:44
Source file: coordination/experts/operations/magitek-server-ops/hjemme/services/CURRENT-duplicati.md
Source date: 2026-03-16
Keywords: ["cli","repair","fails","duplicati","service","running","oauth","lock"]
Cross-domain: []
Symptoms: []
Body:
- `duplicati-cli repair` gets "Invalid authid password" when `duplicati.service` is running - **Fix:** `systemctl stop duplicati` before CLI repair, `systemctl start duplicati` after
Duplicati 2.2.x web UI repair hangs on Recreate_Running operations/magitek-server-ops/hjemme/services gotcha medium CURRENT-duplicati.md 100 2026-03-20 02:00:44
Source file: coordination/experts/operations/magitek-server-ops/hjemme/services/CURRENT-duplicati.md
Source date: 2026-03-16
Keywords: ["duplicati","22x","web","repair","hangs","recreaterunning"]
Cross-domain: []
Symptoms: []
Body:
- Web UI repair starts but never completes - **Fix:** Always use CLI repair with service stopped (see G-014)
Duplicati job ID gaps after delete+recreate operations/magitek-server-ops/hjemme/services gotcha medium CURRENT-duplicati.md 100 2026-03-20 02:00:44
Source file: coordination/experts/operations/magitek-server-ops/hjemme/services/CURRENT-duplicati.md
Source date: 2026-03-16
Keywords: ["duplicati","job","gaps","after","deleterecreate"]
Cross-domain: []
Symptoms: []
Body:
- Duplicati never reuses deleted job IDs -- IDs are auto-incremented - Cosmetic only, does not affect functionality
REST API authid URL-encoding -- curl vs python (CRITICAL) operations/magitek-server-ops/hjemme/services gotcha critical CURRENT-duplicati.md 100 2026-03-20 02:00:44
Source file: coordination/experts/operations/magitek-server-ops/hjemme/services/CURRENT-duplicati.md
Source date: 2026-03-16
Keywords: ["rest","api","authid","urlencoding","curl","python","critical"]
Cross-domain: []
Symptoms: []
Body:
- When creating jobs via REST API, the `authid` in TargetURL contains special chars (`%3A`, `!`) - **curl** with `-d` can double-encode or mangle the authid - **Fix:** Always use Python `urllib.request` for job creation - Copy the raw TargetURL from an existing working job and only change the folder path - **Test after creation:** Always trigger a manual backup run and verify completion
PBS datastore offsite backup -- first run takes many hours operations/magitek-server-ops/hjemme/services gotcha medium CURRENT-duplicati.md 100 2026-03-20 02:00:44
Source file: coordination/experts/operations/magitek-server-ops/hjemme/services/CURRENT-duplicati.md
Source date: 2026-03-16
Keywords: ["pbs","datastore","offsite","backup","forste","kjoring","tar","mange","timer"]
Cross-domain: []
Symptoms: []
Body:
- Job 17 (mirror2x4tb, ~227 GB): ~8 hours at 8 MB/s
- Job 20 (extbackup, ~268 GB): ~9-10 hours
- Job 19 (ext3, ~25 GB): ~32 min
- Incremental backups after that: only changed chunks, significantly faster
PBS chunk directories require chmod for NFS access (CT 212 Jobs 17-19) operations/magitek-server-ops/hjemme/services gotcha medium CURRENT-duplicati.md 100 2026-03-20 02:00:44
Source file: coordination/experts/operations/magitek-server-ops/hjemme/services/CURRENT-duplicati.md
Source date: 2026-03-16
Keywords: ["pbs","chunkmapper","krever","chmod","for","nfstilgang","212","jobs","1719"]
Cross-domain: []
Symptoms: []
Body:
- PBS creates `.chunks/` subdirectories with `drwxr-x---` (750) owned by `backup` (UID 34)
- NFS clients cannot read chunk files through these directories
- **Fix:** Cron `/etc/cron.d/fix-pbs-chunk-perms` on PBS runs daily at 06:00
- **Script:** `/usr/local/bin/fix-pbs-chunk-perms.sh`
- After PBS updates: verify that the permissions cron still runs
CLI repair for outdated job databases operations/magitek-server-ops/hjemme/services gotcha critical CURRENT-duplicati.md 100 2026-03-20 02:00:44
Source file: coordination/experts/operations/magitek-server-ops/hjemme/services/CURRENT-duplicati.md
Source date: 2026-03-16
Keywords: ["cli","repair","for","outdated","job","databases"]
Cross-domain: []
Symptoms: []
Body:
- When local job DBs are outdated, web UI repair does NOT work - Must use CLI: `sudo duplicati-cli repair 'TARGET_URL' --dbpath='...' --passphrase='...' --encryption-module=... --compression-module=... --dblock-size=...` - CRITICAL: Use single quotes around TARGET_URL (bash `!` expansion breaks double quotes) - If repair fails with "database in-progress", delete the sqlite file and retry - After CLI repair, restart: `sudo systemctl restart duplicati`
Duplicati REST API job creation requires exact format operations/magitek-server-ops/hjemme/services gotcha low CURRENT-duplicati.md 100 2026-03-20 02:00:44
Source file: coordination/experts/operations/magitek-server-ops/hjemme/services/CURRENT-duplicati.md
Source date: 2026-03-16
Keywords: ["duplicati","rest","api","job","creation","requires","exact","format"]
Cross-domain: []
Symptoms: []
Body:
- Auth: POST `/api/v1/auth/login` with `{"Password":"..."}` returns `AccessToken` for Bearer header - Create job: POST `/api/v1/backups` with Bearer token - No-encryption requires TWO settings: `encryption-module=""` AND `--no-encryption=true` - Schedule format needs `Tags`, `AllowedDays` with both short and long forms - Authid token from existing jobs can be reused for same Google Drive account
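A hedged sketch of that flow in Python (per the curl-vs-python gotcha above), assuming Duplicati's default web port 8200; the job payload is abbreviated and its exact shape should be copied from an existing working job:
```bash
python3 - <<'PYEOF'
# Sketch only: login for an AccessToken, then POST a new backup job.
import json, urllib.request

BASE = "http://localhost:8200"  # assumed default Duplicati web port
login = urllib.request.Request(f"{BASE}/api/v1/auth/login",
                               data=json.dumps({"Password": "<web-ui password>"}).encode(),
                               headers={"Content-Type": "application/json"})
token = json.loads(urllib.request.urlopen(login).read())["AccessToken"]

payload = {  # abbreviated — copy Name/TargetURL/Settings/Schedule from a working job
    "Backup": {"Name": "example-job",
               "TargetURL": "<raw TargetURL copied verbatim, authid untouched>",
               "Sources": ["/data/"], "Settings": []},
    "Schedule": None,
}
req = urllib.request.Request(f"{BASE}/api/v1/backups",
                             data=json.dumps(payload).encode(),
                             headers={"Content-Type": "application/json",
                                      "Authorization": f"Bearer {token}"})
print(urllib.request.urlopen(req).read().decode())
PYEOF
```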
VM backup does not contain /root/.config/Duplicati/ (laravelserver-v11 only) operations/magitek-server-ops/hjemme/services gotcha medium CURRENT-duplicati.md 100 2026-03-20 02:00:44
Source file: coordination/experts/operations/magitek-server-ops/hjemme/services/CURRENT-duplicati.md
Source date: 2026-03-16
Keywords: ["backup","does","not","contain","rootconfigduplicati","laravelserverv11","only"]
Cross-domain: []
Symptoms: []
Body:
- Historical -- not relevant for CT 135 - CT 135 recovery: recreate CT from scratch + CLI repair from Google Drive
Phantom data folder at /etc/default/var/lib/Duplicati/ (laravelserver-v11 only) operations/magitek-server-ops/hjemme/services gotcha medium CURRENT-duplicati.md 100 2026-03-20 02:00:44
Source file: coordination/experts/operations/magitek-server-ops/hjemme/services/CURRENT-duplicati.md
Source date: 2026-03-16
Keywords: ["phantom","data","folder","etcdefaultvarlibduplicati","laravelserverv11","only"]
Cross-domain: []
Symptoms: []
Body:
- Historical -- not relevant for CT 135
401 Unauthorized on web UI after restart operations/magitek-server-ops/hjemme/services gotcha medium CURRENT-duplicati.md 100 2026-03-20 02:00:44
Source file: coordination/experts/operations/magitek-server-ops/hjemme/services/CURRENT-duplicati.md
Source date: 2026-03-16
Keywords: ["401","unauthorized","web","after","restart"]
Cross-domain: []
Symptoms: []
Body:
- Password stored in `Duplicati-server.sqlite` may not survive config changes - Fix: `sudo /usr/lib/duplicati/duplicati-server-util change-password "<passord>"` then restart
Symlinking job DB as server DB breaks Duplicati operations/magitek-server-ops/hjemme/services gotcha medium CURRENT-duplicati.md 100 2026-03-20 02:00:44
Source file: coordination/experts/operations/magitek-server-ops/hjemme/services/CURRENT-duplicati.md
Source date: 2026-03-16
Keywords: ["symlinking","job","server","breaks","duplicati"]
Cross-domain: []
Symptoms: []
Body:
- Job DBs (random-name .sqlite files) have different schema than `Duplicati-server.sqlite` - Symlinking a job DB as `Duplicati-server.sqlite` causes startup failure - Fix: Never symlink -- server DB and job DBs are separate files with incompatible schemas
sed operations corrupt DAEMON_OPTS operations/magitek-server-ops/hjemme/services gotcha medium CURRENT-duplicati.md 100 2026-03-20 02:00:44
Source file: coordination/experts/operations/magitek-server-ops/hjemme/services/CURRENT-duplicati.md
Source date: 2026-03-16
Keywords: ["sed","operations","corrupt","daemonopts"]
Cross-domain: []
Symptoms: []
Body:
- `/etc/default/duplicati` - Using `sed` to modify DAEMON_OPTS can duplicate entries or corrupt the line - Fix: Always use full file overwrite (write entire file), never sed-based modification
Missing --server-datafolder causes silent data loss (CRITICAL) operations/magitek-server-ops/hjemme/services gotcha critical CURRENT-duplicati.md 100 2026-03-20 02:00:44
Source file: coordination/experts/operations/magitek-server-ops/hjemme/services/CURRENT-duplicati.md
Source date: 2026-03-16
Keywords: ["missing","serverdatafolder","causes","silent","data","loss","critical"]
Cross-domain: []
Symptoms: []
Body:
- `/etc/default/duplicati` DAEMON_OPTS - Without `--server-datafolder=/var/lib/Duplicati`, Duplicati defaults to `/root/.config/Duplicati/` and creates empty DB -- all jobs invisible - Fix: Ensure DAEMON_OPTS contains `--server-datafolder=/var/lib/Duplicati`, restart service
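Putting the last two gotchas together, a sketch of the full-file overwrite; the webservice flags are illustrative, only `--server-datafolder` is mandated by the note above:
```bash
cat > /etc/default/duplicati <<'EOF'
DAEMON_OPTS="--webservice-interface=any --webservice-port=8200 --server-datafolder=/var/lib/Duplicati"
EOF
systemctl restart duplicati
```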
POSTIZ-05: OpenAI API key operations/magitek-server-ops/hjemme/services knowledge medium CURRENT-postiz.md 100 2026-03-20 02:00:44
Source file: coordination/experts/operations/magitek-server-ops/hjemme/services/CURRENT-postiz.md
Source date: 2026-03-18
Keywords: ["postiz05","openai","apinkkel"]
Cross-domain: []
Symptoms: []
Body:
### POSTIZ-05: OpenAI API key
- **Symptom:** Postiz needs an OpenAI API key for AI chat features
- **Cause:** Set in docker-compose.yml as an environment variable
- **Fix:** Check/update in `/home/heine/postiz/docker-compose.yml` under the postiz service
## Changelog
### v1.0 - 2026-03-18
- Initial expert file created after the disk-full incident
- VM 153: Ubuntu 24.04, 4 cores, 8 GB RAM, 100 GB ZFS disk (expanded from 60 GB)
- Docker Compose stack with Postiz + Temporal documented
- 5 gotchas documented (crash loop, sudo, root SSH, monitoring, API key)
- Database credentials documented
- Log rotation configured (daemon.json)
Operations operations/magitek-server-ops/hjemme/services knowledge medium CURRENT-postiz.md 100 2026-03-20 02:00:44
Source file: coordination/experts/operations/magitek-server-ops/hjemme/services/CURRENT-postiz.md
Source date: 2026-03-18
Keywords: ["operasjoner"]
Cross-domain: []
Symptoms: []
Body:
## Operations
### Start/stop stack
```bash
ssh postiz "cd /home/heine/postiz && docker compose up -d"
ssh postiz "cd /home/heine/postiz && docker compose down"
```
### Check container status
```bash
ssh postiz "docker ps --format 'table {{.Names}}\t{{.Status}}\t{{.Ports}}'"
```
### Check disk usage (incl. Docker)
```bash
ssh postiz "df -h / && echo '---' && docker system df"
```
### Check container writable layer
```bash
ssh postiz "docker ps -a --size --format 'table {{.Names}}\t{{.Size}}'"
```
## Database Credentials
| Database | User | Password | DB name |
|----------|--------|---------|---------|
| Postiz PostgreSQL | `postiz-user` | `postiz-password` | `postiz-db-local` |
| Temporal PostgreSQL | `temporal` | `temporal` | (managed by auto-setup) |
## Snapshots
| Name | Date | Description |
|------|------|-------------|
| `pre-temporal-20260318` | 2026-03-18 | Before the Temporal stack was added |
## Known Gotchas
### POSTIZ-01: pnpm/Prisma crash loop fills the disk
- **Symptom:** Disk 100% full, Docker containers in a restart loop
- **Cause:** The Postiz image uses `pnpm dlx` for Prisma. On startup failure the writable layer accumulates quickly (42 GB observed after a 13-day crash loop)
- **Fix:** `docker compose down && docker system prune -f && docker compose up -d`. Consider a disk expand if necessary. See CURRENT-docker-operations.md for the full procedure.
### POSTIZ-02: sudo requires a password over SSH
- **Symptom:** `sudo <cmd>` hangs over SSH
- **Cause:** `heine` has sudo but NOT NOPASSWD
- **Fix:** `ssh postiz "echo 'Ansjos123' | sudo -S <cmd>"`. For files: scp to /tmp → sudo cp.
### POSTIZ-03: Root SSH not available
- **Symptom:** `ssh root@postiz` gives "Permission denied (publickey)"
- **Cause:** The root account only has publickey auth and no key is set up
- **Fix:** Use `heine` + sudo for everything. For file deploys: scp + `echo 'Ansjos123' | sudo -S cp`
### POSTIZ-04: No monitoring — 13 days of undetected downtime
- **Symptom:** Postiz was crash-looping for 13 days without anyone noticing
- **Cause:** No Uptime Kuma or other monitoring is set up
- **Fix:** TODO — set up Uptime Kuma for postiz.magitek.no and other critical services
Identity operations/magitek-server-ops/hjemme/services knowledge low CURRENT-postiz.md 100 2026-03-20 02:00:44
Source file: coordination/experts/operations/magitek-server-ops/hjemme/services/CURRENT-postiz.md
Source date: 2026-03-18
Keywords: ["identitet"]
Cross-domain: []
Symptoms: []
Body:
# Infrastructure Sub-Expert: Postiz (Social Media Scheduler — VM 153)
**Version:** 1.0 **Date:** 2026-03-18
**Parent:** coordination/experts/operations/magitek-server-ops/CURRENT.md
**Load:** coordination/experts/operations/magitek-server-ops/hjemme/services/CURRENT-postiz.md
**Sources:** Session 2026-03-18 (ad-hoc disk full troubleshooting + Temporal migration)
---
## Identity
| Property | Value |
|----------|-------|
| **Hostname** | postiz |
| **VMID** | 153 |
| **Type** | QEMU VM on px5 |
| **LAN IP** | 192.168.86.177 |
| **Location** | Hjemme (px5) |
| **OS** | Ubuntu 24.04.3 LTS (kernel 6.8.0-101-generic) |
| **Role** | Postiz social media scheduling + Temporal workflow engine |
| **Status** | Active |
| **URL** | https://postiz.magitek.no (via NPM hjemme) |
## Resources
| Resource | Allocated | Actual usage (2026-03-18) |
|---------|---------|--------------------------|
| **RAM** | 8000 MB | ~2-3 GB (6 Docker containers) |
| **Cores** | 4 | Low |
| **Disk** | 100 GB (ZFS zfsa) | ~16 GB used (17%) |
**Disk history:** Originally 60 GB, expanded live to 100 GB on 2026-03-18 after the full-disk incident.
## Access
| Method | Details |
|--------|----------|
| SSH (direct alias) | `ssh postiz` |
| SSH (tunnel) | `ssh TU-postiz` |
| SSH user | `heine` / `Ansjos123` |
| sudo | `echo 'Ansjos123' \| sudo -S <cmd>` (password required, no NOPASSWD) |
| Root SSH | **Not available** (publickey only, no root key) |
| Docker | `heine` is in the docker group — no sudo needed for docker commands |
## Docker Compose Stack
**Compose file:** `/home/heine/postiz/docker-compose.yml`
| Container | Image | Port | Role |
|-----------|-------|------|-------|
| postiz | ghcr.io/gitroomhq/postiz-app:latest | 5000→5000 | Postiz web app (Node.js) |
| postiz-postgres | postgres:17 | 5432 (internal) | Postiz database |
| postiz-redis | redis:7.2 | 6379 (internal) | Postiz cache/queue |
| temporal | temporalio/auto-setup:latest | 7233→7233 | Temporal workflow server |
| temporal-postgresql | postgres:13 | 5432 (internal) | Temporal database |
| temporal-elasticsearch | elasticsearch:7.16.2 | 9200 (internal) | Temporal search |
**Docker Volumes:** postgres-volume, postiz-config, postiz-uploads, postiz-redis-data, temporal-elasticsearch-data, temporal-postgresql-data
**Log rotation:** Configured via `/etc/docker/daemon.json` — 50 MB × 3 files per container.
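The 50 MB × 3 rotation corresponds to a `/etc/docker/daemon.json` along these lines (a sketch; the file on the VM may hold additional keys), written via the sudo pattern from POSTIZ-02 and followed by a Docker daemon restart:
```bash
cat > /tmp/daemon.json <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": { "max-size": "50m", "max-file": "3" }
}
EOF
scp /tmp/daemon.json postiz:/tmp/daemon.json
ssh postiz "echo 'Ansjos123' | sudo -S cp /tmp/daemon.json /etc/docker/daemon.json && echo 'Ansjos123' | sudo -S systemctl restart docker"
```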
View proxy host config for a domain operations/magitek-server-ops/hjemme/services knowledge medium CURRENT-npm-hjemme.md 100 2026-03-20 02:00:44
Source file: coordination/experts/operations/magitek-server-ops/hjemme/services/CURRENT-npm-hjemme.md
Source date: 2026-03-19
Keywords: ["proxy","hostkonfig","for","domene"]
Cross-domain: []
Symptoms: []
Body:
### View proxy host config for a domain
```bash
ssh -J heine@81.167.27.54:12322 heine@192.168.86.16 "grep -rl 'server_name domene.magitek.no' /opt/npm-stack/data/nginx/proxy_host/ | xargs cat"
```
### Restart NPM stack
```bash
ssh -J heine@81.167.27.54:12322 heine@192.168.86.16 "cd /opt/npm-stack && sudo docker compose restart"
```
### View SSL certificate status
```bash
ssh -J heine@81.167.27.54:12322 heine@192.168.86.16 "ls /opt/npm-stack/letsencrypt/live/"
```
### View NPM log
```bash
ssh -J heine@81.167.27.54:12322 heine@192.168.86.16 "sudo docker logs npm-stack-app-1 --tail 50"
```
## Changelog
### v1.1 - 2026-03-19
- Gotcha NPM-H-06: Monitoring proxy hosts for split-horizon DNS (monitoring.magitek.no, monitoring-k.magitek.no)
- Certs copied from NPM kontoret — require manual renewal
- Sources: MP-0011, Session b6bb8896 (2026-03-18)
### v1.0 - 2026-02-28
- Initial version — full mapping of NPM hjemme
- VM 131 (192.168.86.16), Ubuntu 24.04, Docker Compose
- 153 proxy hosts mapped (130 with SSL)
- Categorized into: Nextcloud, Infrastructure, Storage, Web apps, Customer sites, Dev/Test, Legacy
- Gotchas NPM-H-01 through NPM-H-05 documented