This post is a technical deep-dive into how Herb Hub 365 is wired together — the services, queues, data flows, and external integrations that run beneath the daily greenhouse updates. The diagrams below are generated directly from the live architecture definition and reflect the current state of the platform.

Legend: scheduled / daemon service · management / interactive · message bus / queue · publishing / upload · external API / cloud · infrastructure

Full System Architecture

The complete platform spans IoT edge devices, eight Go microservices, a RabbitMQ message broker, shared file storage, and six external APIs (Ollama, Kokoro TTS, MuseTalk, YouTube, GitHub, and Prometheus). Data flows left to right: physical sensors and the timelapse camera feed into the service layer, which coordinates content generation, video production, and publishing via asynchronous queues.

```mermaid
flowchart LR
    subgraph IOT["🌱 IoT & External Sources"]
        direction TB
        SENSORS["Sensors\nhh-02:9100\n(node_exporter)"]
        CAMERA["Timelapse\nCamera"]
        BROWSER["User / Browser"]
    end
    subgraph SCHED["⏰ Scheduled Services"]
        direction TB
        BLOGPOSTER["blog-poster\ncron 00:05 UTC\n+ prom-post 23:00"]
        TTS["tts-narrator\ncron 00:10 UTC"]
        VIDNAR["video-narrator\ndaemon / server\n:8090"]
        TLAPSE["timelapse-builder\n:8082"]
        WATERING["watering\n5 min poll"]
    end
    subgraph MGMT["🎛️ Management"]
        MANAGER["herbhub-manager\n:8080\nWeb UI + REST API"]
    end
    subgraph MQ["📨 RabbitMQ — :5672 AMQP · :15672 Mgmt API"]
        direction TB
        Q1["⊟ sensor.snapshots"]
        Q2["⊟ video.produced"]
        QDLQ["⊟ video.produced.dlq"]
        Q3["⊟ watering.queue"]
    end
    subgraph DATA["🗄️ Data Stores"]
        direction TB
        JEKYLL[("Jekyll Repo\n_posts/\nassets/")]
        VIDOUT[("Video Output\n.mp4 + .json")]
        AUDIOOUT[("Audio Output\n.mp3")]
    end
    subgraph PUBLISH["📤 Publishing"]
        VIDPUB["video-publisher\nYouTube uploader"]
    end
    subgraph APIS["☁️ External APIs"]
        direction TB
        LLM["Ollama / LLM\nollama.la.home-cloud.uk"]
        KOKORO["Kokoro TTS\nkokoro-api.lab.home-cloud.uk"]
        MUSETALK["MuseTalk\n[ai-host]:8011"]
        YOUTUBE["YouTube API\ngoogleapis.com"]
        GITHUB["GitHub\nJekyll Repo"]
        PROM["Prometheus\nprometheus.home-cloud.uk"]
    end
    subgraph INFRA["🔧 Infrastructure"]
        direction TB
        TRAEFIK["Traefik\nReverse Proxy"]
        CRONICLE["Cronicle\nScheduler :3012"]
    end
    SENSORS -->|"AMQP publish"| Q1
    CAMERA -->|"images mount"| TLAPSE
    BROWSER -->|"HTTPS"| TRAEFIK
    TRAEFIK -->|"HTTP"| MANAGER
    Q1 -->|"AMQP consume"| BLOGPOSTER
    BLOGPOSTER -->|"HTTP"| LLM
    BLOGPOSTER -->|"writes posts"| JEKYLL
    BLOGPOSTER -->|"git push"| GITHUB
    JEKYLL -->|"reads posts"| TTS
    TTS -->|"HTTPS POST"| KOKORO
    TTS -->|"writes MP3"| AUDIOOUT
    TTS -->|"git push"| GITHUB
    JEKYLL -->|"reads posts"| VIDNAR
    VIDNAR -->|"HTTP"| MUSETALK
    VIDNAR -->|"writes MP4"| VIDOUT
    VIDNAR -->|"AMQP publish"| Q2
    MANAGER -->|"HTTPS"| VIDNAR
    MANAGER -->|"HTTP"| TLAPSE
    MANAGER -->|"HTTP"| LLM
    MANAGER -->|"HTTP :15672\nmgmt API"| Q2
    Q2 -->|"AMQP consume"| VIDPUB
    VIDPUB -->|"HTTPS OAuth2"| YOUTUBE
    VIDPUB -->|"updates embed\ngit push"| GITHUB
    VIDPUB -->|"on failure"| QDLQ
    PROM -->|"HTTP scrape"| WATERING
    WATERING -->|"AMQP publish"| Q3
    style VIDPUB fill:#fee2e2,stroke:#dc2626,color:#7f1d1d
    style MANAGER fill:#dcfce7,stroke:#16a34a,color:#14532d
    style Q2 fill:#fef9c3,stroke:#ca8a04
    style BLOGPOSTER fill:#dbeafe,stroke:#3b82f6
    style TTS fill:#dbeafe,stroke:#3b82f6
    style VIDNAR fill:#dbeafe,stroke:#3b82f6
```

Video Content Pipeline

Every narrated video on this site follows a deterministic pipeline that starts with a sensor reading and ends with a YouTube embed injected into a blog post. The diagram below traces the full end-to-end flow, including the two paths by which video generation can be triggered: automatically by the daemon or manually via the manager web UI.

```mermaid
flowchart TD
    A["📡 IoT Sensors\nsoil / environment data"] -->|"AMQP → sensor.snapshots"| B
    B["blog-poster\ncron 00:05 UTC"] -->|"HTTP POST"| C["Ollama / LLM\nContent generation"]
    C -->|"Returns generated content"| B
    B -->|"Writes markdown"| D[("Jekyll _posts/\nYYYY-MM-DD-slug.md")]
    B -->|"git push"| GH["GitHub\nherbhub365.com"]
    D -->|"reads"| E["tts-narrator\ncron 00:10 UTC"]
    E -->|"HTTPS POST text"| F["Kokoro TTS API\nMP3 generation"]
    F -->|"audio stream"| E
    E -->|"writes .mp3"| AU[("assets/audio/blog/\nYYYY-MM-DD-slug.mp3")]
    AU -->|"audio_url in front matter\ngit push"| GH
    D -->|"reads"| G
    subgraph VIDGEN["Video Generation — two paths"]
        G["video-narrator\n:8090 daemon/server"]
        GM["herbhub-manager\n:8080 POST /api/generate"]
        GM -->|"HTTPS with post text"| G
    end
    G -->|"HTTP TTS + MuseTalk"| MT["MuseTalk API\n[ai-host]:8011\nAvatar video generation"]
    MT -->|"MP4 stream"| G
    G -->|"writes"| VO[("Video Output\nYYYY-MM-DD-slug.mp4")]
    G -->|"AMQP publish\nvideo.produced"| MQ
    VO -->|"file path in message"| MQ["RabbitMQ\nvideo.produced queue"]
    MQ -->|"AMQP consume"| VP["video-publisher"]
    VP -->|"HTTPS OAuth2\nupload MP4"| YT["YouTube API\ngoogleapis.com"]
    YT -->|"videoId"| VP
    VP -->|"injects iframe embed\ngit push"| GH
    VP -->|"deletes MP4,\nwrites .json marker"| VO
    style GM fill:#dcfce7,stroke:#16a34a
    style VP fill:#fee2e2,stroke:#dc2626
    style MQ fill:#fef9c3,stroke:#ca8a04
```

Manual YouTube Publish

In addition to the fully automated pipeline, videos can be published manually from the herbhub-manager web UI. Rather than adding a separate publish endpoint to video-publisher, the manager queues the message directly via the RabbitMQ management HTTP API. The video-publisher consumer picks it up from the same video.produced queue and handles the upload identically to the automated path.

```mermaid
sequenceDiagram
    actor User
    participant M as herbhub-manager<br/>:8080
    participant R as RabbitMQ Mgmt API<br/>rabbitmq:15672
    participant Q as video.produced<br/>queue
    participant VP as video-publisher
    participant YT as YouTube API
    participant GH as GitHub
    User->>M: POST /api/publish {slug}
    M->>M: Resolve post → find .mp4 in output dir
    M->>R: POST /api/exchanges/%2F/amq.default/publish<br/>{slug, date, output_file, status:"completed"}
    R->>Q: Route message (delivery_mode:2 persistent)
    R-->>M: {routed: true}
    M-->>User: 202 Accepted {status:"queued"}
    Note over VP,Q: video-publisher consumer picks up message
    VP->>Q: AMQP consume
    Q-->>VP: {slug, date, output_file}
    VP->>VP: Load post metadata (title, tags, excerpt)
    VP->>YT: Upload MP4 (HTTPS OAuth2)
    YT-->>VP: videoId
    VP->>VP: Write .json marker with youtube_url
    VP->>GH: Inject iframe embed, git push
    VP->>VP: Delete local .mp4
    Note over User,M: Posts page badge updates to "Published" on next refresh
```

Timelapse Pipeline

Timelapse videos follow a slightly different path. The timelapse-builder service stitches raw camera frames into an MP4 independently of the blog pipeline. When a timelapse is ready to be narrated and published, herbhub-manager triggers video-narrator directly with the timelapse file and a narration script, then the same video production and publishing path handles the rest.

```mermaid
flowchart LR
    CAM["📷 Timelapse Camera\n/home/andy/Pictures/timelapse"] -->|"images mount\n/input"| TB
    TB["timelapse-builder\n:8082"] -->|"POST /api/build"| TB
    TB -->|"ffmpeg stitch"| TVO[("Timelapse .mp4\n/output/")]
    MANAGER["herbhub-manager\n:8080"] -->|"POST /api/timelapse/build"| TB
    MANAGER -->|"POST /api/timelapse/publish\n{timelapse_file, tts_text, title}"| VN
    TB -->|"GET /api/timelapse/videos/{file}"| MANAGER
    VN["video-narrator\n:8090\nNarrate timelapse"] -->|"TTS + MuseTalk"| VN
    VN -->|"writes narrated MP4"| VO2[("Video Output\n.mp4")]
    VN -->|"AMQP publish\nvideo.produced"| MQ2["RabbitMQ\nvideo.produced"]
    MQ2 -->|"AMQP consume"| VP2["video-publisher"]
    VP2 -->|"HTTPS OAuth2"| YT2["YouTube API"]
    VP2 -->|"git push embed"| GH2["GitHub"]
    style MANAGER fill:#dcfce7,stroke:#16a34a
    style VP2 fill:#fee2e2,stroke:#dc2626
    style MQ2 fill:#fef9c3,stroke:#ca8a04
```
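The "ffmpeg stitch" step above is a single ffmpeg invocation over the mounted frames. A hedged Go sketch of the kind of argument list timelapse-builder might assemble; the glob pattern, frame rate, and codec settings are assumptions, not the service's actual configuration:

```go
package main

import (
	"fmt"
	"os/exec"
	"strconv"
)

// buildStitchArgs assembles ffmpeg arguments to stitch a directory of JPEG
// frames into an MP4. All encoding choices here are illustrative defaults.
func buildStitchArgs(inputDir, outFile string, fps int) []string {
	return []string{
		"-y", // overwrite an existing output file
		"-framerate", strconv.Itoa(fps),
		"-pattern_type", "glob", // match frames by shell-style glob
		"-i", inputDir + "/*.jpg",
		"-c:v", "libx264", // H.264, widely accepted by YouTube
		"-pix_fmt", "yuv420p", // broadest player compatibility
		outFile,
	}
}

func main() {
	args := buildStitchArgs("/input", "/output/timelapse.mp4", 24)
	// Print the command line rather than running it, since ffmpeg and the
	// frame mount only exist inside the builder's container.
	fmt.Println(exec.Command("ffmpeg", args...).String())
}
```

In the real service this would run via `exec.Command(...).Run()` inside the `/api/build` handler, writing into the `/output` volume the manager later reads.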

Watering Automation

The watering subsystem is the most self-contained part of the platform. A Go service on hh-02 polls the node_exporter metrics endpoint (hh-02:9100, the same endpoint Prometheus scrapes) every five minutes to read soil moisture metrics. When any zone drops below the threshold (40% by default) it publishes a watering event to RabbitMQ and drives the GPIO relay directly to open the corresponding valve.

```mermaid
flowchart LR
    NE["node_exporter\nhh-02:9100"] -->|"HTTP metrics"| W
    PROM["Prometheus\nprometheus.home-cloud.uk"] -->|"also scrapes"| NE
    W["watering service\n5 min poll"] -->|"compare to threshold\n(default 40%)"| W
    W -->|"below threshold\nAMQP publish"| Q3["⊟ watering.queue"]
    W -->|"GPIO control"| GPIO["GPIO\nWatering Valve"]
    style W fill:#dbeafe,stroke:#3b82f6
```
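The threshold check amounts to scanning the Prometheus text-exposition output for moisture samples and collecting the zones below the cutoff. A sketch of that parse; the metric name `soil_moisture_percent` and the `zone` label are hypothetical stand-ins for whatever the exporter actually emits:

```go
package main

import (
	"bufio"
	"fmt"
	"strconv"
	"strings"
)

// dryZones scans Prometheus text-format metrics (as served by node_exporter)
// and returns the zones whose soil moisture sits below the threshold.
func dryZones(metrics string, threshold float64) []string {
	var dry []string
	sc := bufio.NewScanner(strings.NewReader(metrics))
	for sc.Scan() {
		line := sc.Text()
		if !strings.HasPrefix(line, `soil_moisture_percent{zone="`) {
			continue
		}
		rest := strings.TrimPrefix(line, `soil_moisture_percent{zone="`)
		end := strings.Index(rest, `"`)
		if end < 0 {
			continue
		}
		zone := rest[:end]
		// The sample value is the last whitespace-separated field.
		fields := strings.Fields(rest[end:])
		v, err := strconv.ParseFloat(fields[len(fields)-1], 64)
		if err != nil {
			continue
		}
		if v < threshold {
			dry = append(dry, zone)
		}
	}
	return dry
}

func main() {
	sample := `soil_moisture_percent{zone="basil"} 33.5
soil_moisture_percent{zone="mint"} 61.2`
	fmt.Println(dryZones(sample, 40)) // prints [basil]
}
```

Each dry zone would then become one watering event on `watering.queue` plus a direct GPIO pulse to its valve.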

Service Reference

| Service | Port | Mode | Consumes | Produces | External APIs |
|---|---|---|---|---|---|
| llm-service | :8080 | HTTP server | HTTP from blog-poster, herbhub-manager | Generated text responses | Ollama |
| blog-poster | | cron 00:05 + 23:00 | RabbitMQ sensor.snapshots | Jekyll posts, git push | llm-service, GitHub, Prometheus |
| tts-narrator | | cron 00:10 | Jekyll _posts/ | assets/audio/blog/*.mp3, git push | Kokoro TTS |
| video-narrator | :8090 | HTTP server + daemon | Jekyll posts, HTTP from herbhub-manager | Video Output .mp4, AMQP → video.produced | MuseTalk, Kokoro TTS |
| herbhub-manager | :8080 | HTTP server + Web UI | Jekyll posts, Video Output | HTTP to services, AMQP via RabbitMQ mgmt API → video.produced | video-narrator, timelapse-builder, llm-service |
| video-publisher | | AMQP consumer | RabbitMQ video.produced | YouTube upload, Jekyll embed git push, .json marker, DLQ on failure | YouTube API, GitHub |
| timelapse-builder | :8082 | HTTP server | Image mount /input, HTTP from herbhub-manager | Timelapse .mp4 in /output | ffmpeg (local) |
| watering | :8787 (health) | 5 min poll | Prometheus metrics (hh-02:9100) | AMQP → watering.queue, GPIO valve | Prometheus, node_exporter |
| RabbitMQ | :5672 / :15672 | Infrastructure | Queues: sensor.snapshots · video.produced · video.produced.dlq · watering.queue | | |
| Traefik | :80 / :443 | Reverse proxy | manager.herbhub365.com → herbhub-manager · rabbit.herbhub365.com → RabbitMQ :15672 · scheduler.herbhub365.com → Cronicle :3012 | | |
| Cronicle | :3012 | Job scheduler | Manages scheduled tasks with web UI | | |

RabbitMQ Queue Reference

| Queue | Producer(s) | Consumer(s) | Message shape |
|---|---|---|---|
| sensor.snapshots | IoT devices / sensors | blog-poster | Sensor snapshot JSON |
| video.produced | video-narrator (daemon), herbhub-manager (via mgmt API) | video-publisher | { slug, date, output_file, status, timestamp } |
| video.produced.dlq | video-publisher (on failure) | Manual inspection | { error, timestamp, original } |
| watering.queue | watering service | | Watering event JSON |

The architecture is intentionally minimal at each boundary — services communicate via HTTP or AMQP rather than shared databases, each service owns its own data path, and the message broker provides the only coupling between the content pipeline and the publishing layer. This keeps any single service replaceable without cascading changes across the platform.