Platform / Update
Herb Hub 365 — Infrastructure and Architecture
This post is a technical deep-dive into how Herb Hub 365 is wired together — the services, queues, data flows, and external integrations that run beneath the daily greenhouse updates. The diagrams below are generated directly from the live architecture definition and reflect the current state of the platform.
Full System Architecture
The complete platform spans IoT edge devices, eight Go microservices, a RabbitMQ message broker, shared file storage, and a small number of external APIs. Data flows left to right: physical sensors and the timelapse camera feed into the service layer, which coordinates content generation, video production, and publishing via asynchronous queues.
Video Content Pipeline
Every narrated video on this site follows a deterministic pipeline that starts with a sensor reading and ends with a YouTube embed injected into a blog post. The diagram below traces the full end-to-end flow, including the two paths by which video generation can be triggered: automatically by the daemon or manually via the manager web UI.
Manual YouTube Publish
In addition to the fully automated pipeline, videos can be published manually from the herbhub-manager web UI. Rather than adding a separate publish endpoint to video-publisher, the manager queues the message directly via the RabbitMQ management HTTP API. The video-publisher consumer picks it up from the same video.produced queue and handles the upload identically to the automated path.
```mermaid
sequenceDiagram
    participant User
    participant M as herbhub-manager<br/>:8080
    participant R as RabbitMQ Mgmt API<br/>rabbitmq:15672
    participant Q as video.produced<br/>queue
    participant VP as video-publisher
    participant YT as YouTube API
    participant GH as GitHub
    User->>M: POST /api/publish {slug}
    M->>M: Resolve post → find .mp4 in output dir
    M->>R: POST /api/exchanges/%2F/amq.default/publish<br/>{slug, date, output_file, status:"completed"}
    R->>Q: Route message (delivery_mode:2 persistent)
    R-->>M: {routed: true}
    M-->>User: 202 Accepted {status:"queued"}
    Note over VP,Q: video-publisher consumer picks up message
    VP->>Q: AMQP consume
    Q-->>VP: {slug, date, output_file}
    VP->>VP: Load post metadata (title, tags, excerpt)
    VP->>YT: Upload MP4 (HTTPS OAuth2)
    YT-->>VP: videoId
    VP->>VP: Write .json marker with youtube_url
    VP->>GH: Inject iframe embed, git push
    VP->>VP: Delete local .mp4
    Note over User,M: Posts page badge updates to "Published" on next refresh
```
Timelapse Pipeline
Timelapse videos follow a slightly different path. The timelapse-builder service stitches raw camera frames into an MP4 independently of the blog pipeline. When a timelapse is ready to be narrated and published, herbhub-manager triggers video-narrator directly with the timelapse file and a narration script, then the same video production and publishing path handles the rest.
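A minimal Go sketch of that manager-to-narrator handoff. The `/narrate` path and the JSON field names are assumptions for illustration; the post only states that herbhub-manager calls video-narrator on :8090 with a timelapse file and a narration script.

```go
// Sketch: how herbhub-manager might hand a finished timelapse to
// video-narrator over HTTP. Endpoint path and field names are guesses.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// narrationRequest is an assumed body shape for triggering video-narrator.
type narrationRequest struct {
	InputFile string `json:"input_file"`
	Script    string `json:"script"`
}

// buildNarrateRequest prepares the HTTP call to video-narrator on :8090.
func buildNarrateRequest(file, script string) (*http.Request, error) {
	body, err := json.Marshal(narrationRequest{InputFile: file, Script: script})
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest("POST",
		"http://video-narrator:8090/narrate", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	return req, nil
}

func main() {
	req, err := buildNarrateRequest(
		"/output/timelapse-2025-06.mp4",
		"Three weeks of basil in forty seconds.")
	if err != nil {
		panic(err)
	}
	// In the real service this would be sent with http.DefaultClient.Do(req).
	fmt.Println(req.Method, req.URL) // POST http://video-narrator:8090/narrate
}
```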
Watering Automation
The watering subsystem is the most self-contained part of the platform. A Go service on hh-02 polls Prometheus every five minutes to read the soil-moisture metrics exported by node_exporter. When any zone drops below its threshold, the service publishes a watering event to RabbitMQ and drives the GPIO relay directly to open the corresponding valve.
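The decision step can be sketched in Go as follows. The metric name `soil_moisture_percent` and the `zone` label are assumptions for illustration; the real service reads whatever gauges node_exporter exposes on hh-02:9100. The response shape, however, is the standard Prometheus `/api/v1/query` format.

```go
// Sketch of the watering service's decision step: parse a Prometheus
// query response for soil-moisture gauges and flag any zone below
// its threshold. Metric and label names are assumptions.
package main

import (
	"encoding/json"
	"fmt"
)

// promResult mirrors the relevant slice of a Prometheus
// /api/v1/query instant-vector response.
type promResult struct {
	Data struct {
		Result []struct {
			Metric map[string]string `json:"metric"`
			Value  [2]any            `json:"value"` // [unix_ts, "value-as-string"]
		} `json:"result"`
	} `json:"data"`
}

// dryZones returns the zones whose moisture reading is below threshold.
func dryZones(body []byte, threshold float64) ([]string, error) {
	var r promResult
	if err := json.Unmarshal(body, &r); err != nil {
		return nil, err
	}
	var zones []string
	for _, s := range r.Data.Result {
		var v float64
		str, ok := s.Value[1].(string)
		if !ok {
			continue
		}
		if _, err := fmt.Sscanf(str, "%f", &v); err != nil {
			continue
		}
		if v < threshold {
			zones = append(zones, s.Metric["zone"])
		}
	}
	return zones, nil
}

func main() {
	// In the real service this body would come from an HTTP GET against
	// Prometheus (query=soil_moisture_percent) every five minutes;
	// here it is a canned response.
	body := []byte(`{"data":{"result":[
		{"metric":{"zone":"basil"},"value":[1717200000,"22.5"]},
		{"metric":{"zone":"mint"},"value":[1717200000,"41.0"]}]}}`)
	zones, err := dryZones(body, 30)
	if err != nil {
		panic(err)
	}
	fmt.Println(zones) // prints [basil] — the zone to water via watering.queue
}
```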
Service Reference
| Service | Port | Mode | Consumes | Produces | External APIs |
|---|---|---|---|---|---|
| llm-service | :8080 | HTTP server | HTTP from blog-poster, herbhub-manager | Generated text responses | Ollama |
| blog-poster | — | cron 00:05 + 23:00 | RabbitMQ sensor.snapshots | Jekyll posts, git push | llm-service, GitHub, Prometheus |
| tts-narrator | — | cron 00:10 | Jekyll _posts/ | assets/audio/blog/*.mp3, git push | Kokoro TTS |
| video-narrator | :8090 | HTTP server + daemon | Jekyll posts, HTTP from herbhub-manager | Video Output .mp4, AMQP → video.produced | MuseTalk, Kokoro TTS |
| herbhub-manager | :8080 | HTTP server + Web UI | Jekyll posts, Video Output | HTTP to services, AMQP via RabbitMQ mgmt API → video.produced | video-narrator, timelapse-builder, llm-service |
| video-publisher | — | AMQP consumer | RabbitMQ video.produced | YouTube upload, Jekyll embed git push, .json marker, DLQ on failure | YouTube API, GitHub |
| timelapse-builder | :8082 | HTTP server | Image mount /input, HTTP from herbhub-manager | Timelapse .mp4 in /output | ffmpeg (local) |
| watering | :8787 (health) | 5-min poll | Prometheus metrics (hh-02:9100) | AMQP → watering.queue, GPIO valve | Prometheus, node_exporter |
| RabbitMQ | :5672 / :15672 | Infrastructure | Queues: sensor.snapshots · video.produced · video.produced.dlq · watering.queue | — | — |
| Traefik | :80 / :443 | Reverse proxy | manager.herbhub365.com → herbhub-manager · rabbit.herbhub365.com → RabbitMQ :15672 · scheduler.herbhub365.com → Cronicle :3012 | — | — |
| Cronicle | :3012 | Job scheduler | Manages scheduled tasks with web UI | — | — |
RabbitMQ Queue Reference
| Queue | Producer(s) | Consumer(s) | Message shape |
|---|---|---|---|
| sensor.snapshots | IoT devices / sensors | blog-poster | Sensor snapshot JSON |
| video.produced | video-narrator (daemon), herbhub-manager (via mgmt API) | video-publisher | { slug, date, output_file, status, timestamp } |
| video.produced.dlq | video-publisher (on failure) | Manual inspection | { error, timestamp, original } |
| watering.queue | watering service | — | Watering event JSON |
The architecture is intentionally minimal at each boundary — services communicate via HTTP or AMQP rather than shared databases, each service owns its own data path, and the message broker provides the only coupling between the content pipeline and the publishing layer. This keeps any single service replaceable without cascading changes across the platform.