I Gave a School Its Own Cloud. Here's What That Actually Looked Like.


There's a version of this story that sounds very clean. Developer builds school a website, sets up email, deploys cloud storage, everyone goes home happy. That version is a lie.

The real version involves me at 11pm staring at a Traefik config wondering why a mail server thinks its own SSL certificate belongs to someone else. But we'll get there.


How It Started

Johnethel School has been in Ogbomoso since 2016, and for a few years now they have been genuinely ahead of the curve on digital learning — AI in the classroom, flatscreen TVs everywhere, digital resources instead of board-copying. Most schools around them are still on chalk and WhatsApp. Johnethel was already doing something different, partly because the school has always been willing to invest in good tools, and partly — I will be honest — because having me around helped.

My first project for them was a result management system back in 2020 — an internal web app that let teachers input grades and auto-generate end-of-term report sheets instead of producing them entirely by hand. It has been running quietly ever since. Then came the public website: Dioxus (Rust) frontend, Strapi headless CMS, statically generated, deploys in under 30 seconds when content changes.

But both of those projects were either entirely inward or entirely outward. The question that came next was about the institutional layer in between: what does the school's internal collaboration infrastructure look like? Staff were using personal email addresses for school business. Documents lived in WhatsApp chats or on individual phones. There was no shared storage, no school-owned platform.

For a school already leading on digital learning in the classroom, that gap felt increasingly odd. So we closed it.


The Stack (and Why)

I wanted everything self-hosted. Not out of stubbornness — mostly because for a school of this size and budget, paying for Google Workspace or Microsoft 365 per user adds up fast, and you're also handing your institutional data to a third party. Call it a conviction, call it frugality, the outcome is the same.

The core of it is Nextcloud AIO (All-In-One) — which is exactly what it sounds like. One deployment that handles cloud storage, calendar, contacts, document editing (via Collabora), video calls (via Talk), and a bunch of other things I probably won't turn on until the VPS gets more RAM. It's running on Docker, managed through Dokploy, with Traefik sitting in front of everything handling SSL and routing.

For mail, I went with Stalwart — a relatively new mail server written in Rust that handles SMTP, IMAP, JMAP, ManageSieve, all of it in a single binary. It scored 10/10 on mail-tester.com, which is the kind of thing that makes you feel briefly invincible.

For user management, LLDAP — a lightweight LDAP implementation that's basically a friendly web UI over a standard LDAP directory. The goal: one place to create a user, and they automatically get access to Nextcloud and their school email. One account, one password, everything works.


The Part Where Things Got Interesting

I should explain something about how Nextcloud AIO works. The "mastercontainer" is like a manager — it spawns and manages all the other containers (Apache, Redis, PostgreSQL, Collabora, Stalwart, LLDAP, etc.) on its own Docker network. This is great for simplicity. It's slightly less great when you're running it behind a reverse proxy on a different network.

Traefik lives on dokploy-network. The AIO child containers live on nextcloud-aio network. These two networks don't talk to each other by default, so Traefik can see the mastercontainer but can't reach Stalwart or LLDAP to proxy their web UIs.

The solution is to manually connect the containers to dokploy-network. The problem is that every time AIO recreates those containers (which it does on updates and restarts), they lose the manual network attachment.

So I wrote a small systemd service that listens to Docker events and auto-reconnects specific containers to dokploy-network whenever they start. It's maybe 12 lines of bash. It runs quietly in the background and I never have to think about it again. That's the best kind of infrastructure solution.
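My watcher is a dozen lines of bash, but the idea translates to any language. Purely as an illustration, here's the same loop sketched in Python — the container names in WATCHED are stand-ins (the real AIO containers have their own naming scheme), and the docker CLI is assumed to be on PATH:

```python
import subprocess

# Containers whose web UIs Traefik needs to reach. Names are illustrative.
WATCHED = {"stalwart", "lldap"}
NETWORK = "dokploy-network"

def needs_reconnect(container_name: str, watched=WATCHED) -> bool:
    """Decide whether a just-started container should be re-attached."""
    return container_name.strip() in watched

def listen():
    # `docker events` streams one line per event; the Go template below
    # prints only the container name for each container 'start' event.
    proc = subprocess.Popen(
        ["docker", "events",
         "--filter", "type=container",
         "--filter", "event=start",
         "--format", "{{.Actor.Attributes.name}}"],
        stdout=subprocess.PIPE, text=True,
    )
    for line in proc.stdout:
        if needs_reconnect(line):
            # If the container is already attached, docker just exits
            # non-zero; we don't care, so no check=True here.
            subprocess.run(
                ["docker", "network", "connect", NETWORK, line.strip()],
                check=False,
            )
```

Wrap `listen()` in a systemd unit with Restart=always and you never think about it again.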


The SSL Certificate Situation

This one took a while.

Stalwart (the mail server) expects to find its TLS certificates in a specific path inside a Docker volume that Nextcloud AIO creates for Caddy — a different web server that AIO uses internally. Except we're not using Caddy. We're using Traefik. So Caddy's volume exists, but nothing is putting certificates in it.

Stalwart starts up, looks for its cert, doesn't find it, and then politely refuses to do anything mail-related over TLS.

The fix: Traefik stores its Let's Encrypt certificates in acme.json. I wrote a small Python script to extract the certificate for mail.johnethel.school from that file, decode it from base64, and place it exactly where Stalwart expects to find it — inside the Caddy volume, at the path matching the Let's Encrypt directory structure that Stalwart's internal config hardcodes.

Then I turned off Stalwart's auto-cert-management so it stops trying to "help." And I added a monthly cron job to re-run the extraction script before the cert expires.
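A minimal sketch of what that extraction script looks like, assuming Traefik v2's acme.json layout and a certificate resolver named "letsencrypt" (both vary by setup), with the output directory standing in for the real path inside the Caddy volume:

```python
import base64
import json
from pathlib import Path

def extract_cert(acme_path: str, domain: str, out_dir: str,
                 resolver: str = "letsencrypt") -> None:
    """Pull one domain's cert and key out of Traefik's acme.json.

    Assumes the Traefik v2 layout: top-level key per resolver, each
    holding a "Certificates" list with base64-encoded PEM data.
    """
    data = json.loads(Path(acme_path).read_text())
    for entry in data[resolver]["Certificates"]:
        if entry["domain"]["main"] == domain:
            out = Path(out_dir)
            out.mkdir(parents=True, exist_ok=True)
            # Traefik stores the PEM blobs base64-encoded inside the JSON.
            (out / f"{domain}.crt").write_bytes(
                base64.b64decode(entry["certificate"]))
            (out / f"{domain}.key").write_bytes(
                base64.b64decode(entry["key"]))
            return
    raise KeyError(f"no certificate for {domain} in {acme_path}")
```

The monthly cron entry is just a line like `0 4 1 * * python3 /opt/scripts/extract-cert.py` (paths hypothetical) — comfortably inside Let's Encrypt's 90-day window, since Traefik renews well before expiry.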

Feels like a hack. Works perfectly.


LLDAP + Stalwart Integration

The final piece was getting Stalwart to authenticate users against LLDAP — so when a teacher logs into their mail client and types their school password, it's the same password they use for Nextcloud.

LDAP authentication can work one of two ways. The first: bind as admin, look up the user, fetch their userPassword attribute, and compare hashes. The second: bind as admin, look up the user's full DN, then try to bind as that user with the password they gave you; if the bind succeeds, the password is correct.

LLDAP doesn't expose the userPassword attribute (good — you shouldn't be able to read password hashes over LDAP). So it has to be the second method: bind-after-lookup.
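Stripped of any particular LDAP library, bind-after-lookup reduces to this control flow (the two LDAP operations are injected as callables so the logic is visible; a real implementation performs them over actual LDAP connections):

```python
def bind_after_lookup(username: str, password: str,
                      admin_search, user_bind) -> bool:
    """Authenticate a user via the bind-after-lookup flow.

    admin_search(username) -> the user's full DN, or None if unknown
        (performed over a connection already bound as the admin account).
    user_bind(dn, password) -> True iff the server accepts a bind as
        that DN with that password.
    """
    # RFC 4513 gotcha: an empty password means an *anonymous* bind,
    # which "succeeds" for any DN. Reject it before it reaches the server.
    if not password:
        return False
    dn = admin_search(username)
    if dn is None:
        return False  # unknown user
    return user_bind(dn, password)
```

The empty-password guard is the part people forget: without it, a blank password logs in as anyone, because the LDAP server cheerfully treats it as an anonymous bind.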

Stalwart's web UI has a field for this. I configured it correctly. Except — and this is my favourite part — the UI had a bug where the "Bind Secret" field (the admin password) was saving the Bind DN string instead. I spent a non-trivial amount of time verifying that every other setting was correct before I checked the raw config and saw "bind.secret": "uid=admin,ou=people,...".

Fixed it by posting directly to the Stalwart settings API. The web UI is decorative in this particular case.

There was one more thing: Stalwart has an ENSURE_DIRECTORY_CONFIG environment variable that resets the authentication directory to its internal default on every restart. So even after fixing the bind secret, the fix would be undone every time the container restarted. Disabled that flag, set the directory to LLDAP via API, and now it actually sticks.


Where We Are Now

The system is live. A few teachers and secondary school students will have access to it first. One account in LLDAP gives you Nextcloud access, a school email address, and working IMAP/SMTP. The mail server has proper DKIM, SPF, and DMARC records. Emails land in inboxes, not spam folders. Shared calendars work. Document collaboration works.
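For reference, those three DNS records are all just TXT records, with roughly this shape (values illustrative, not the school's actual records; the DKIM selector and public key are placeholders):

```
; SPF: only the domain's own mail hosts may send as @johnethel.school
johnethel.school.                      TXT  "v=spf1 mx -all"

; DKIM: public signing key, published under a selector
stalwart._domainkey.johnethel.school.  TXT  "v=DKIM1; k=rsa; p=<public-key>"

; DMARC: what receivers should do when SPF/DKIM checks fail
_dmarc.johnethel.school.               TXT  "v=DMARC1; p=quarantine; rua=mailto:postmaster@johnethel.school"
```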

The plan is to expand it gradually — more of the school, more use cases — as everyone grows comfortable with it. The infrastructure scales without any changes; it's just a matter of creating accounts.

What I found most interesting about this project isn't the technical parts. It's that the hardest problems weren't hard because of complexity — they were hard because of subtle mismatches between how different systems assume they'll be deployed and how you're actually deploying them. The SSL cert problem. The network isolation problem. The UI-that-lies-about-what-it-saved problem. None of these require deep expertise to solve. They just require patience and the willingness to read logs carefully.

That, and a healthy distrust of things that look like they're working.


The school is at johnethel.school if you want to see what the other half of this work looks like.