Building My Own Cloud (And Learning Why People Pay for Managed Services)

Well, this was more work than I expected.

Oh, the fool I am.

What started as a late-night side project turned into an all-night debugging bonanza.

With the acquisition of a 3rd-gen iPad Pro with only 256 GB of storage, I realized I needed cloud storage, not just for college, but for easy file sharing between all my devices.

If I had a Mac as my daily driver (I’m sorry for this sin), this would’ve been trivial with iCloud. Instead, I needed a third-party solution.

I could’ve used Google Drive or OneDrive, but I already had a mostly empty 2 TB HDD sitting in my PC. Why would I pay a monthly premium for storage I already physically own, only to lose access the moment Wi-Fi disappears? Add privacy concerns on top of that, and the decision was made.

So… I built my own cloud.

The goal was intentionally boring:
one drive, accessible from any device, anywhere I have internet.

Why I chose Nextcloud

I chose Nextcloud because:

  • It’s self-hosted and, more importantly, free

  • It has a solid web UI

  • Clients exist for Windows, macOS, iOS, Android, and Linux

It’s basically the “I want my own iCloud Drive” answer.

Docker Desktop: why it was optional yet required

Nextcloud is fundamentally a Linux application stack (web server + PHP + database + storage). You can force it to run “directly on Windows,” but that path becomes a fragile mess of services, permissions, and destructive updates. (Windows loves to break everything with every update.)

To avoid that class of problems, I ran Nextcloud in a Linux container.

On Windows, that means using Docker Desktop.

Docker Desktop uses a WSL 2 backend (a lightweight Linux environment) to run Linux containers. Docker Desktop becomes the bridge between Windows and that Linux runtime:

  • Storage bridge: bind mounts let containers read/write to Windows disks

  • Networking bridge: container ports can be published to the Windows host (localhost)

  • Isolation: the Linux stack stays predictable even if Windows changes underneath it

Without Docker Desktop + WSL 2, this setup would not have been stable long term.

Step 1: Installing Docker Desktop (Windows)

High level steps:

  1. Install Docker Desktop for Windows

  2. Enable / allow WSL integration during setup

  3. Update to WSL 2

  4. Reboot when asked

  5. Confirm Docker can run Linux containers (quick checks below)

At this point, I had a real Linux environment ready to host Nextcloud without trying to “make Windows pretend to be Linux.”
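For step 5, a couple of quick sanity checks (standard Docker/WSL commands, nothing specific to this setup):

wsl -l -v

docker info --format "{{.OSType}}"

docker run --rm hello-world

The first should list your distros running under VERSION 2, the second should print linux, and the third pulls and runs a tiny test container end to end.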

Step 2: Deploying Nextcloud locally

The critical design choice: persistent storage on a real drive

Containers are disposable by design. If you store your data “inside” a container, you’re one bad reinstall away from pain.

So I mapped Nextcloud’s storage to a folder on my HDD. It looks something like:

D:\nextcloud\


That way:

  • The files physically live on the HDD

  • The container just uses them

  • Data survives container rebuilds / updates

Example docker-compose (sanitized)

I used a typical docker-compose.yml style deployment (Nextcloud + database). Here’s a clean example:

services:
  db:
    image: mariadb:11
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
    restart: always
    volumes:
      - db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=<REDACTED>
      - MYSQL_PASSWORD=<REDACTED>
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud

  app:
    image: nextcloud:stable
    restart: always
    ports:
      - "8090:80"
    # depends_on replaces the legacy links: option; Compose networking
    # already lets app reach the database by its service name (db)
    depends_on:
      - db
    volumes:
      - nextcloud:/var/www/html
      # Persistent data on a Windows drive:
      - "D:\\nextcloud\\data:/var/www/html/data"
    environment:
      - MYSQL_PASSWORD=<REDACTED>
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      - MYSQL_HOST=db

volumes:
  db:
  nextcloud:

Notes:

  • The published port 8090:80 is why Nextcloud becomes reachable at http://localhost:8090

  • Passwords are censored because they should never be in a blog post

  • The Windows path mapping is doubled (D:\\...) because YAML + Windows escaping is annoying
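With the file saved as docker-compose.yml, starting everything is one command from that folder (plus one to confirm it’s up):

docker compose up -d

docker compose ps

The second command should list both containers as running, with the 8090->80 mapping on the app service.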

Local access

Once running, Nextcloud was reachable at:

http://localhost:8090


Local validation (important)

Before touching remote access, I made sure that I could:

  • Open http://localhost:8090

  • Create an account

  • Upload files

  • See the files appear on the HDD

  • Restart the PC and Docker Desktop without “losing” any data

This step mattered because debugging Docker + Cloudflare at the same time is a recipe for disaster and my own personal hell.
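And if any of those checks fail, the container logs are the first place to look:

docker compose logs -f app

docker compose logs -f db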

Step 3: Remote access (the “easy goal” that took forever)

And so the suffering begins


It was meant to be simple
It was meant to be quick
It was meant to be painless

All I had to do was expose Nextcloud securely without:

  •  Port forwarding

  •  Public IP dependence

  •  DynDNS clients

  •  Opening inbound firewall ports

The correct tool for that is a Cloudflare Tunnel via cloudflared.

Why tunnels are the best bet for success here:

  • Outbound-only connection (nothing exposed coming in)

  • Works behind CGNAT and dynamic IPs

  • Home IP never has to be public

Traffic flow becomes:

Client → Cloudflare Edge → Tunnel → cloudflared (PC) → localhost → Nextcloud


Step 3.1: Quick Tunnel

Before making anything permanent, I tested a quick tunnel:

cloudflared tunnel --url http://localhost:8090


This instantly generates a temporary URL (trycloudflare.com). While the terminal stays open, traffic forwards to the local Nextcloud instance.

It worked immediately, which proved:

  • Nextcloud was responding properly

  • Docker networking was correct

  • Cloudflare could reach the origin through cloudflared

Step 3.2: Named tunnel (permanent) + domain

Quick tunnels are disposable. For a permanent hostname, I created a Cloudflare account and purchased a domain (example.com throughout this post).

Authenticate cloudflared:

cloudflared tunnel login


Create the named tunnel:

cloudflared tunnel create nextcloud


This generates a UUID and a credentials JSON under:

C:\Users\<user>\.cloudflared\
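A quick sanity check that the tunnel actually exists:

cloudflared tunnel list

It should print the tunnel’s name, UUID, and creation time.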


At this moment, you have a tunnel… that routes to absolutely nowhere.

Step 3.3: config.yml

This file is the runtime definition for a named tunnel. It tells cloudflared:

  • which tunnel to run

  • where the credentials file is

  • what hostname(s) to accept

  • what local service(s) to forward to

  • what to do with everything else

Here’s the sanitized configuration:

tunnel: nextcloud
credentials-file: C:\Users\<user>\.cloudflared\e4a77678-****.json

ingress:
  - hostname: cloud.example.com
    service: http://localhost:8090
  - service: http_status:404

Key points:

  • hostname: must match what you want publicly (cloud.example.com)

  • service: must match your local Nextcloud origin (http://localhost:8090)

  • The final http_status:404 rule is the catch-all so you don’t accidentally expose anything else
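cloudflared can also lint this file before you run anything, which would have saved me some grief. Assuming the config sits at the default path shown above:

cloudflared tunnel ingress validate

cloudflared tunnel ingress rule https://cloud.example.com

The first confirms the ingress rules parse; the second tells you which rule a given URL would actually match.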

Step 3.4: DNS route (no IPs required)

This command creates the DNS record automatically:

cloudflared tunnel route dns nextcloud cloud.example.com


Behind the scenes, Cloudflare sets a CNAME pointing at something like:

e4a77678-****.cfargotunnel.com


No public IP required. No port forwarding. No router config.
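One way to convince yourself nothing leaks: resolve the hostname and check that it answers with Cloudflare edge addresses, never your home IP:

nslookup cloud.example.com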

Step 3.5: Manual run (first real success)

Run the tunnel using the config:

cloudflared tunnel --config "C:\Users\<user>\.cloudflared\config.yml" run


It worked.

Then I closed the terminal.

And the tunnel was dead.

Step 3.Failure: Running cloudflared as a Windows Service

I tried installing cloudflared as a Windows service so it would survive reboots.

It should have been simple. It wasn’t.
It should have been quick. It wasn’t.

It got stuck in weird states (including “won’t stop / won’t restart cleanly”), and troubleshooting service permissions and persistence at 3 AM is a special kind of misery. Every single time I tried a new fix I had to reboot the PC, which was slow and painful. This one step was three hours I will never get back.

Eventually I abandoned this entire approach.

Step 3.Fix: Task Scheduler (saved my life)

Instead of fighting the service model, I used Task Scheduler to run cloudflared at startup.

Startup script (BAT) 


@echo off
cd /d "C:\Users\<user>\cloudflared"

cloudflared.exe tunnel --config "C:\Users\<user>\.cloudflared\config.yml" run ^
  >> "C:\Users\<user>\cloudflared\cloudflared.log" 2>&1

Why this worked so easily:

  • Runs non-interactively

  • Logs everything

  • Survives reboot

  • Avoids service weirdness

Task registration (schtasks) 


schtasks /Create ^
 /TN "Cloudflared Nextcloud Tunnel" ^
 /TR "C:\Users\<user>\cloudflared\start-cloudflared.bat" ^
 /SC ONSTART ^
 /RL HIGHEST ^
 /F
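To confirm the task registered, and to test it without yet another reboot:

schtasks /Query /TN "Cloudflared Nextcloud Tunnel" /V /FO LIST

schtasks /Run /TN "Cloudflared Nextcloud Tunnel"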

Debug / verification commands

Check cloudflared is running:

tasklist | findstr cloudflared


Check the log:

type "C:\Users\<user>\cloudflared\cloudflared.log"


Confirm Nextcloud is reachable locally:

curl.exe -I http://localhost:8090


Debug rule of thumb:

  • If localhost is broken → Docker/Nextcloud problem

  • If localhost works but the domain is broken → Cloudflare/DNS/tunnel problem

That separation is important to my sanity and physical well-being.
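For the tunnel side specifically, cloudflared can report its own health, including the active connections to Cloudflare’s edge:

cloudflared tunnel info nextcloud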

The “I want it to behave like a normal folder” mistake

After everything worked, I tried to treat Nextcloud’s data directory like a normal Windows folder and directly add & remove files on disk.

That was a mistake.

Nextcloud tracks files through its database + metadata. If you edit the raw data directory directly, you can cause mismatches:

  • Files exist on disk but don’t appear in the UI

  • Files appear but won’t open correctly

  • Sync weirdness

  • Database indexing mismatch
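If you do end up touching the data directory directly, Nextcloud’s occ tool can re-index the files against the database. A sketch, assuming the compose service name app from earlier:

docker compose exec --user www-data app php occ files:scan --all

This walks the data directory and updates Nextcloud’s file cache so the UI matches the disk again.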

Final thoughts

In the end, I got exactly what I wanted:

  • A private cloud

  • Backed by local storage

  • Accessible from anywhere

  • No open inbound ports

  • No public IP exposure

  • TLS handled by Cloudflare

All I wanted was a drive I could access anywhere.

I got that, plus a crash course in Docker, tunnels, Windows persistence, and command-prompt misery.

Worth it?

…probably.

BONUS MISTAKES (Learn from my pain)

If you’re trying to recreate this setup, here are the mistakes that either cost me time, broke things, or broke me, and what to do instead.


1) Forgetting that containers are disposable

Mistake: Storing important data inside a container filesystem.
What happens: Recreating the container can wipe the data.

Do this instead:

  • Always bind-mount persistent data to a real disk location (like a dedicated HDD folder)

  • Keep the database volume persistent too

2) Mixing up ports (container port vs published port)

Mistake: Pointing the tunnel at the wrong port.
Example confusion: Nextcloud listens on 80 inside the container, but I published it as 8090 on Windows.

Do this instead:

  • Access locally via the published port: http://localhost:8090

  • Point Cloudflare Tunnel at the same published endpoint: service: http://localhost:8090

3) Putting config.yml in the wrong place (or editing the wrong one)

Mistake: Creating multiple config files and then running cloudflared with a different one than the one you edited. This was an easy but very stupid mistake. I just couldn't tell which one I was editing after accidentally making FOUR OF THEM. 

Don’t ask how. I don’t want to know how it happened.

Do this instead:

  • Use a single known path:

    • C:\Users\<user>\.cloudflared\config.yml

  • And always run with an explicit --config "full\path\config.yml" so there’s no ambiguity.

4) Wrong credentials-file path

Mistake: credentials-file: pointing to a JSON that doesn’t exist, or pointing to the wrong tunnel JSON.
What happens: Tunnel won’t authenticate, fails silently, or connects but doesn’t route. This one almost made me give up on computers forever.

Do this instead:

  • Confirm the JSON exists in:

    • C:\Users\<user>\.cloudflared\

  • Match the tunnel name + JSON file to the tunnel you actually created.

5) Missing the catch-all ingress rule

Mistake: Leaving out:

- service: http_status:404


What happens: You can accidentally expose unintended services or get unpredictable routing behavior.

Do this instead:

  • Always include the 404 catch-all at the end of the ingress: list.

6) Expecting the Cloudflared Windows Service to “just work”

Mistake: Installing cloudflared as a service and assuming it will be stable.
What happens: Permissions issues, STOP_PENDING hangs, hard-to-debug failures, and infinite reboots to get the damn service to STOP $&@%$*# PENDING, along with general rage. This problem alone cost me three hours of fix, reboot, fix, reboot, fix, reboot, fix, reboot, fix, reboot, fix, reboot, fix, reboot, fix, reboot, fix, reboot…

You have been warned.

Please do this instead:

  • Use Task Scheduler at boot with highest privileges

  • Log output to a file so you can actually see what’s happening

7) Letting Versions & Trash silently eat your entire drive

Mistake: Leaving the default settings on, then uploading/changing large files frequently.
What happens: Nextcloud “helpfully” stores multiple versions + deleted files in Trash, and your disk fills up fast.

Nextcloud versioning and trash are extremely convenient… right up until you upload or change big files frequently. Then storage disappears fast. A single test file devoured the entire drive in just a couple of hours.

Do this instead:

  • Review Versions and Trash settings early or just disable them

  • Periodically check storage usage from within Nextcloud
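If you’d rather cap retention than disable the features outright, occ can set expiry policies too. A sketch, again assuming the app service name; double-check the value format against the Nextcloud admin docs:

docker compose exec --user www-data app php occ config:system:set versions_retention_obligation --value="auto, 30"

docker compose exec --user www-data app php occ config:system:set trashbin_retention_obligation --value="auto, 30"

With “auto, 30”, versions and trashed files are expired after at most 30 days, and earlier if space runs low.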

8) Debugging the tunnel before confirming localhost works

Mistake: Trying to fix Cloudflare/DNS when Nextcloud isn’t even reliably reachable locally.

Do this instead:

  • Always test local first:

curl.exe -I http://localhost:8090

  • Then tunnel/DNS.

Rule of thumb:

  • Local broken = Docker/Nextcloud issue

  • Local works but domain broken = Cloudflare/Tunnel/DNS issue