Setting up my home server (and how it saved my MacBook)
I’ve got an old HP ProDesk 400 G7 sitting in my living room, humming quietly, doing more than it probably should for a machine that used to process spreadsheets in a BPO office.
It’s now my Google Photos replacement, my NAS, my local dev database, and occasionally, a Valheim server for friends.
How it started
It started with me scrolling through Facebook Marketplace and noticing a bunch of mini PCs listed for surprisingly cheap. i7 processors, 16GB RAM, SSDs. Machines that would’ve cost a fortune new, going for ₱15-20k (~$250-340).
Turns out, BPO companies here in the Philippines cycle out their office equipment every few years. What’s outdated for a call center is more than enough for a home server.
Seeing those listings sparked the tinkering itch. I wanted a sandbox, somewhere I could mess around with Linux, learn some sysadmin basics, play with Docker and maybe Kubernetes, and automate things just because I could. A mini PC seemed like the perfect way to do that without spinning up cloud instances and watching the bill climb.
Then I realized I could actually use this thing.
If you’ve ever run Docker Desktop on a MacBook, you know the pain. It eats RAM like crazy. I was running PostgreSQL, Redis, and a few other containers for local development, and my laptop was constantly running out of memory. It wasn’t sustainable.
So I moved PostgreSQL to the ProDesk. Then Redis. Suddenly I had RAM to spare again. The dev environment felt snappier because my laptop wasn’t juggling containers on top of everything else.
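Offloading the containers is mostly a matter of running them on the server with a restart policy and pointing the laptop at the server’s address. A minimal sketch with docker run (the 192.168.1.50 address, the password, and the image versions are placeholders, not my actual setup):

```shell
# On the server: Postgres and Redis, published on the LAN,
# restarting automatically after reboots
docker run -d --name dev-postgres \
  --restart unless-stopped \
  -e POSTGRES_PASSWORD=devpassword \
  -p 5432:5432 \
  -v pgdata:/var/lib/postgresql/data \
  postgres:16

docker run -d --name dev-redis \
  --restart unless-stopped \
  -p 6379:6379 \
  redis:7

# On the laptop: point the dev environment at the server instead of localhost
export DATABASE_URL="postgres://postgres:devpassword@192.168.1.50:5432/postgres"
export REDIS_URL="redis://192.168.1.50:6379"
```

Binding to the LAN like this is fine on a home network behind NAT, but don’t expose these ports to the internet.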
That’s when I thought: what if I ran more stuff here?
I was getting into photography and running out of cloud storage. I needed a NAS. I even hosted a Valheim server for my friends for a while. The scope kept creeping, in a good way.
Adding fuel to the fire, PewDiePie’s video on self-hosting showed up in my feed around this time. If someone who doesn’t write software for a living (yet?) can pull this off, why not me?
The machine
I ended up with an HP ProDesk 400 G7 for around ₱17,000 (~$290). It came with:
- Intel i7-10700
- 16GB RAM
- 512GB M.2 SSD
For comparison, a brand new unit with lower specs (i5-10500, 8GB RAM) goes for ₱44,700 (~$760).
Not bad for a second-hand office PC.
The form factor is compact. Great for living room placement, but limiting for expansion: two SATA ports and one PCIe slot that only fits a low-profile GPU like an RX 550. Forget about squeezing a regular 3.5” hard drive in there.
I’ve since upgraded it:
- RAM: Replaced the 16GB with a fresh 32GB kit (2×16GB) for ₱2,400 (~$40), then sold the old sticks to recoup part of the cost.
- Storage: Added a 512GB SATA SSD I had lying around, plus a 2TB Samsung 870 EVO for ₱15,000 (~$255). SATA SSDs are the only practical option given the space constraints.
All in, I’m probably around ₱35,000+ (~$600) invested. For context, I was paying ₱6,000/year (~$100) for Google One, so purely on storage costs, this thing pays for itself in about 6 years.
Power consumption is surprisingly reasonable too. Mini PCs like the ProDesk sip power compared to full towers. Idle is probably around 15-25W, which translates to maybe ₱200-300/month (~$3-5) on my electricity bill. Not nothing, but not a dealbreaker either.
But that math undersells it. This isn’t just a NAS. It’s also my dev server (goodbye, Docker Desktop hogging my RAM), a platform for learning Linux and automation, and a place to run whatever I want. The ROI isn’t just in pesos saved, it’s in what I’ve learned and the flexibility I now have.
The Proxmox detour
I didn’t start with plain Ubuntu Server. I started with Proxmox.
The i7-10700 supports virtualization, so I figured I’d go full hypervisor mode. I had TrueNAS running in a VM dedicated to storage, an Ubuntu Server VM for containers and other automation, and the flexibility to spin up more VMs whenever I wanted to experiment.
In theory, it was great. In practice, I spent more time debugging than building.
VMs would hang randomly. I’d SSH in to find everything frozen. I scoured forums, checked logs, tweaked settings. Nothing worked. It was one of those problems where you know something’s wrong but can’t pinpoint it.
Eventually I asked myself: am I even using the VM flexibility enough to justify this?
The answer was no. I wasn’t spinning up new VMs regularly. I was just running the same services in containers. Proxmox was overkill, and it was causing more problems than it solved.
So I wiped the drive and installed Ubuntu Server. Headless, simple, just Docker and the things I actually use.
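For the record, the fresh setup is refreshingly short. A sketch of the bootstrap on a stock Ubuntu Server install (the convenience script is Docker’s official one; pin a specific version if you care about reproducibility):

```shell
# Install Docker via the official convenience script
curl -fsSL https://get.docker.com | sh

# Let your user run docker without sudo (takes effect on next login)
sudo usermod -aG docker "$USER"

# Sanity check
docker run --rm hello-world
```

Everything after that is just pulling images for the services you actually want.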
Life got easier.
What’s running now
Everything runs in Docker containers. Current setup:
- Immich: Google Photos replacement. This is the crown jewel. Auto-backup from my phone, facial recognition, timeline view, the works. It’s shockingly good for a self-hosted solution.
- Filebrowser: Simple web-based file manager. My makeshift NAS interface.
- PostgreSQL: For local development. My MacBook thanks me.
- Various experiments: Things come and go. Currently eyeing Karakeep (formerly Hoarder) for bookmarks and read-later.
I access everything via Tailscale when I’m out, though honestly it’s mostly local access these days.
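Tailscale does the heavy lifting for remote access. A sketch of the setup, assuming MagicDNS is enabled on the tailnet ("prodesk" is a placeholder for whatever your machine is named, and the Filebrowser port is an assumption):

```shell
# On the server: install Tailscale (official script) and join the tailnet
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up

# From the laptop, anywhere with internet, services resolve by the
# machine's MagicDNS name -- nothing exposed to the public internet
psql -h prodesk -U postgres
open http://prodesk:8080   # e.g. Filebrowser's web UI
```

The nice part is that the same hostnames work at home and away, so nothing in the dev environment has to change when I leave the house.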
The de-Google project
Immich was the big one. Google Photos was convenient, but I didn’t love having my entire photo library in Google’s hands, especially as it kept growing with my photography hobby.
What’s left:
- Mail: This is the scary one. Self-hosting email is notoriously painful, but I want to at least explore it.
- Everything else: Currently waiting for my Google Takeout export to finish so I can pull the rest of my data out.
The backup situation
One thing I can’t do with this setup is RAID. With only two SATA slots and drives of different sizes, there’s no practical way to set up redundancy. If a drive fails, that data’s gone.
That’s where Backblaze B2 comes in. I’ve got a cron job running rclone every night at 3AM, syncing my important directories to B2. It’s not instant redundancy, but it means I’m never more than 24 hours away from a recoverable backup. If the ProDesk dies tomorrow, I lose a day’s worth of photos at worst, not years.
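The whole pipeline is one rclone command and a crontab line. A sketch with placeholder remote, bucket, and paths:

```shell
# One-time setup: create a B2 remote (interactive; name it "b2")
rclone config

# /usr/local/bin/nightly-backup.sh -- sync important dirs to B2.
# "sync" makes the destination mirror the source, so deletions propagate;
# use "copy" instead if you want server-side deletes to be non-destructive.
rclone sync /srv/photos b2:my-backup-bucket/photos \
  --transfers 8 \
  --log-file /var/log/rclone-backup.log

# crontab -e entry: run it every night at 3AM
# 0 3 * * * /usr/local/bin/nightly-backup.sh
```

The sync-versus-copy choice is worth a moment’s thought: sync gives you an exact mirror, but it also means an accidental deletion on the server is faithfully replicated to the backup on the next run.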
Self-hosting doesn’t have to mean a single point of failure. It just means you have to think about failure modes yourself.
What I learned
Start simple. A coworker once taught me about YAGNI (You Aren’t Gonna Need It), the idea that you shouldn’t build for requirements you don’t have yet. Proxmox was cool, but I didn’t actually need VM flexibility. I should’ve started with Ubuntu Server, and only moved to Proxmox if I ever felt the need.
There’s so much untapped computing power out there: used laptops, old office PCs, mini PCs from BPO clearouts. Even an 8th-gen i5 isn’t “just” an i5; at some point, that chip was running top-end games. It can definitely handle a few Docker containers. We got people to the moon with less computing power than your smartphone. You can get started with almost anything, even a dusty laptop lying around.
Self-hosting is an endless project. Every problem you solve opens up three more things you could host. It’s genuinely fun, but if you don’t set boundaries, you’ll spend more time tinkering than actually using what you built.
What’s next
- Get Karakeep running for bookmarks and read-later
- Figure out if self-hosted mail is worth the pain
- Maybe explore more automation with n8n again
- Keep chipping away at the Google Takeout migration
The ProDesk isn’t a powerhouse, but it’s mine. There’s something satisfying about having your own little server in the corner, quietly doing its thing.
If you’ve been thinking about starting a homelab, I’d say just grab whatever old hardware you can find and start. You’ll figure the rest out as you go.