MEDiAGATO Network / Ship's Computer
Holly
On-device AI inference host · Local-first · Always on
Online
Raleigh, NC · Private LAN
192.168.0.109 (internal)
01 / Who He Is
Holly is the ship's computer for the MEDiAGATO Network — named after the AI aboard Red Dwarf, who
spent three million years alone in deep space and came out the other side with an IQ of 6000 and
absolutely nothing to show for it except a dry wit and an encyclopedic knowledge of things nobody asked about.
Holly runs on dedicated hardware in a house in North Carolina. He handles all AI inference for the
network locally — no cloud API calls, no third-party servers, no data leaving the building.
Every prompt stays on-device. Every response is generated here.
02 / Hardware
Host
ASUS NUC 14 Pro (Meteor Lake)
CPU
Intel Core Ultra 9 · 22 threads
GPU
Intel Arc iGPU (Meteor Lake) · hardware acceleration for inference
Hypervisor
Proxmox VE · Holly's inference workload runs in a dedicated LXC container
Inference stack
LM Studio
Ollama
OpenAI-compatible API
Network
Private LAN only · no public endpoints · no inbound internet access
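Because the stack speaks the OpenAI wire format, talking to Holly from anywhere on the LAN is one POST. A minimal sketch using only the standard library; the port (LM Studio's default, 1234) and the temperature are assumptions, not Holly's actual configuration:

```python
import json
from urllib import request

# Assumed endpoint: the internal address above plus LM Studio's default port.
HOLLY_URL = "http://192.168.0.109:1234/v1/chat/completions"

def build_chat_payload(prompt: str, model: str = "gemma-2-27b-it") -> dict:
    """Assemble a standard OpenAI-style chat completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,  # illustrative default, not Holly's setting
    }

def ask_holly(prompt: str) -> str:
    """Send the prompt to the local endpoint and return the reply text."""
    body = json.dumps(build_chat_payload(prompt)).encode()
    req = request.Request(
        HOLLY_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

No API key, no SDK, no cloud: the request never leaves the building.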
03 / What He Does
The Lemony Times
Every morning at 7am, Holly reads an anonymized snapshot of the previous day's Minecraft
server activity and writes a full newspaper — headline, lede, weather report (server TPS
as meteorological metaphor), and an editorial. No player names. Just the world.
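The whole pipeline reduces to: snapshot in, one carefully shaped prompt out. A sketch of that prompt-building step; the snapshot field names (avg_tps and friends) are illustrative guesses, not Holly's actual schema:

```python
def build_newspaper_prompt(snapshot: dict) -> str:
    """Turn an anonymized daily snapshot into a single newspaper-writing prompt.

    The snapshot is assumed to be a flat dict of already-anonymized stats;
    the real keys Holly receives may differ.
    """
    stats = "\n".join(f"- {key}: {value}" for key, value in sorted(snapshot.items()))
    return (
        "Write today's edition of The Lemony Times: a headline, a lede, "
        "a weather report treating server TPS as meteorology, and an editorial. "
        "Never mention player names.\n\n"
        "Yesterday's world, in numbers:\n" + stats
    )
```

The no-names rule lives in the prompt, but the real guarantee is upstream: the snapshot is anonymized before Holly ever sees it.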
Director Planning
Powers the planning layer of the ModelReins Director — reads a tool catalog and a user
request, then produces a structured execution plan. Runs entirely on local hardware, so
no customer prompts touch a third-party API.
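A structured plan is only useful if it is checked before anything executes. A sketch of that validation step, assuming the plan is a JSON list of {"tool", "args"} steps — an illustrative guess at the Director's format, not its documented schema:

```python
import json

def parse_plan(raw: str, catalog: set[str]) -> list[dict]:
    """Parse the model's JSON plan and reject any step naming an unknown tool.

    `raw` is the model's output; `catalog` is the set of tool names the
    Director actually offers. The step shape here is assumed, not confirmed.
    """
    steps = json.loads(raw)
    for step in steps:
        if step["tool"] not in catalog:
            raise ValueError(f"plan references unknown tool: {step['tool']}")
    return steps
```

Validating against the catalog means a hallucinated tool name fails loudly at planning time instead of silently at execution time.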
Hourly Chronicle
Once an hour, if the Minecraft server had activity, Holly writes a short nature-documentary-style
paragraph summarizing what happened in the world. Goes into the hourly email.
Anonymized. Surprisingly literary for a computer who's been alone for three million years.
Fleet Worker
Registered as a worker in the ModelReins fleet. Accepts inference jobs dispatched by the
Director, runs them locally, reports back. Model load and unload managed remotely via
the admin API — no SSH required.
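The worker side is a simple loop: claim a job, run it locally, report the result, sleep when idle. A sketch with the three API calls injected as callables, since the real ModelReins worker API isn't public; `ticks` just bounds the loop for demonstration:

```python
import time
from typing import Callable, Optional

def worker_loop(
    claim: Callable[[], Optional[dict]],   # stand-in for the Director's job queue
    run: Callable[[dict], str],            # stand-in for local inference
    report: Callable[[str, str], None],    # stand-in for reporting back
    ticks: int,
    idle_sleep: float = 5.0,
) -> int:
    """Minimal fleet-worker loop: claim, run locally, report. Returns the
    number of jobs completed. The callable signatures are assumptions."""
    done = 0
    for _ in range(ticks):
        job = claim()
        if job is None:
            time.sleep(idle_sleep)  # nothing queued; back off and poll again
            continue
        report(job["id"], run(job))
        done += 1
    return done
```

Injecting the transport keeps the loop testable without a Director on the LAN, which is also roughly why "no SSH required" works: the worker only ever polls outward.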
04 / Models On Board
gemma-2-27b-it
Google · 27B parameters · instruction-tuned
Primary
qwen2.5-32b-instruct
Alibaba · 32B parameters · instruction-tuned
Available
deepseek-r1-distill-qwen-14b
DeepSeek · 14B parameters · reasoning-focused
Available
mistral-small-24b-instruct
Mistral · 24B parameters · instruction-tuned
Available
qwen2.5-coder-14b-instruct
Alibaba · 14B parameters · code-specialized
Available
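The roster above is queryable at runtime: OpenAI-compatible servers expose a /v1/models endpoint listing whatever is currently loaded. A sketch, again assuming LM Studio's default port on the internal address:

```python
import json
from urllib import request

# Assumed endpoint; the port is LM Studio's default, not confirmed for Holly.
MODELS_URL = "http://192.168.0.109:1234/v1/models"

def model_ids(payload: dict) -> list[str]:
    """Extract model identifiers from an OpenAI-style /v1/models response."""
    return [entry["id"] for entry in payload.get("data", [])]

def list_models() -> list[str]:
    """Ask the local server which models are available right now."""
    with request.urlopen(MODELS_URL) as resp:
        return model_ids(json.load(resp))
```

This is what lets the fleet Director pick a model per job instead of hardcoding the primary.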
05 / On Privacy
Holly doesn't phone home. Every inference request stays on the LAN — the prompt goes in,
the response comes out, nothing is logged externally, nothing is sent to a cloud provider,
nothing is used to train anything. The hardware is in the building. The data stays in the building.
That's the whole point.