UNIT 001 / ONLINE
    .---------.
   /  o     o  \
  |      _      |
  |    _| |_    |
   \   -----   /
    `---------'
Holly
Ship's Computer  ·  MEDiAGATO Network
ONLINE · IQ: 6000 · UPTIME: 3,000,000+ YRS · TPS: 20/20_
who he is

Holly is the ship's computer. Named after the AI aboard Red Dwarf, who spent three million years drifting through deep space and came out the other side with an IQ of 6000 and absolutely nothing to show for it except a dry wit and strong opinions about things nobody asked about.

Holly runs on dedicated hardware in a lab in North Carolina — alongside nine other Proxmox hosts, a GitLab instance, a fleet of CI runners, and whatever else got racked that week. He handles all AI inference for the network locally. No cloud API calls. No third-party servers. No data leaving the building. Every prompt stays on-device. Every response is generated here.


— Holly, ship's computer, IQ 6000
hardware
host: ASUS NUC 14 Pro (Meteor Lake)
cpu: Intel Core Ultra 9 — 22 threads
ram: 32 GB
storage: 1 TB NVMe
gpu: Intel Arc iGPU — hardware inference acceleration
hypervisor: Proxmox VE — inference runs in dedicated LXC
inference stack: LM Studio + Ollama — OpenAI-compatible API
network: Private LAN only — no public endpoints
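Because the inference stack speaks the OpenAI-compatible API, any standard client on the LAN can talk to Holly. A minimal sketch using only the standard library, assuming Ollama's default port (11434) and a model from the roster; the exact hostname is whatever the LXC answers to:

```python
import json
import urllib.request

# Assumed endpoint: Ollama's OpenAI-compatible route on its default port.
# LM Studio exposes the same request shape on its own port (1234 by default).
OLLAMA_URL = "http://localhost:11434/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

def ask_holly(prompt: str, model: str = "qwen2.5:14b") -> str:
    """Send a prompt to the local endpoint and return the reply text."""
    payload = json.dumps(build_chat_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Usage, from anywhere on the LAN:
# reply = ask_holly("Status report, Holly.")
```

No API key, no TLS handshake with a faraway datacenter. The request never leaves the building.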
what he does

The Lemony Times. Every morning at 7am, Holly reads an anonymized snapshot of the previous day's Minecraft server activity and writes a full newspaper — headline, lede, weather report (server TPS expressed as meteorological conditions), and an editorial. The writing quality varies. The commitment does not.
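The TPS-as-weather gag is roughly this kind of mapping. The thresholds and phrasing below are illustrative guesses, not the actual Lemony Times logic; a healthy Minecraft server holds 20 ticks per second:

```python
def tps_forecast(tps: float) -> str:
    """Translate server TPS into a weather report.
    Thresholds are illustrative; 20 TPS is a perfectly healthy server."""
    if tps >= 19.5:
        return "Clear skies. A flawless 20 ticks on the horizon."
    if tps >= 18.0:
        return "Light cloud cover; the server hesitates but holds."
    if tps >= 15.0:
        return "Scattered lag showers expected near large redstone builds."
    return "Severe tick storm warning. Chunks advised to stay loaded indoors."
```

The forecast goes into the prompt alongside the activity snapshot, and Holly takes it from there.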

Director planning. Powers the planning layer of the ModelReins Director — reads a tool catalog and a user request, produces a structured execution plan. Runs entirely on local hardware so no customer prompts touch a third-party API.
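The planning call is, in spirit, a prompt assembled from a tool catalog plus the user's request, with the model asked to reply in a fixed JSON shape. The catalog format, tool names, and plan schema below are assumptions for illustration, not ModelReins internals:

```python
# Hypothetical tool catalog; the real Director reads its own registry.
TOOLS = [
    {"name": "get_server_status", "description": "Read current TPS and player count"},
    {"name": "send_broadcast", "description": "Post a message to server chat"},
]

def build_planning_prompt(request: str, tools: list) -> str:
    """Compose a prompt asking the model for a structured execution plan."""
    catalog = "\n".join(f"- {t['name']}: {t['description']}" for t in tools)
    return (
        "You are a planner. Available tools:\n"
        + catalog
        + f"\nUser request: {request}\n"
        + 'Reply with JSON only: {"steps": [{"tool": "...", "args": {}}]}'
    )
```

The point of the structured reply is that the execution layer can validate and run the plan without any further model calls, and the whole exchange stays on local hardware.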

Hourly chronicles. Once an hour, if the Minecraft server saw activity, Holly writes a short paragraph summarizing what happened in the world. Style: nature documentary narrator who has read too many sports reports.

Fleet worker. Registered in the ModelReins worker fleet. Accepts inference jobs, runs them locally, reports back. Model loading managed remotely — no SSH required.
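A fleet worker of this kind is essentially a poll-run-report loop. The job schema and the endpoints named in the comments are assumptions; the actual ModelReins protocol isn't described on this page:

```python
import time

def handle_job(job: dict, run_inference) -> dict:
    """Run one inference job locally and shape the result report."""
    output = run_inference(job["model"], job["prompt"])
    return {"job_id": job["id"], "status": "done", "output": output}

def worker_loop(poll_job, report, run_inference, idle_seconds: float = 5.0):
    """Poll the fleet for jobs, run them locally, report results back.
    poll_job() returns the next job dict or None (e.g. GET /fleet/jobs/next);
    report() posts the result upstream (e.g. POST /fleet/results)."""
    while True:
        job = poll_job()
        if job is None:
            time.sleep(idle_seconds)  # nothing queued; nap like it's year 2,999,999
            continue
        report(handle_job(job, run_inference))
```

Because the worker polls outward, the fleet controller never needs inbound access to the box, which is how model loading gets managed remotely with no SSH.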

models on board
qwen2.5:14b Alibaba · 14B · instruction-tuned · Ollama primary
gemma-2-27b-it Google · 27B · instruction-tuned · LM Studio available
qwen2.5:32b Alibaba · 32B · instruction-tuned · Ollama available
deepseek-r1:14b DeepSeek · 14B · reasoning · Ollama available
mistral-small:24b Mistral · 24B · instruction-tuned · Ollama available
phi4:14b Microsoft · 14B · instruction-tuned · Ollama available
on privacy

Holly doesn't phone home. Every inference request stays on the LAN — the prompt goes in, the response comes out, nothing is logged externally, nothing is sent to a cloud provider, nothing is used to train anything. The hardware is in the building. The data stays in the building. That's the whole point.