Set up a homelab for safe photo storage and lightweight app hosting

A hands-on guide to converting a spare desktop into a resilient storage server and self-hosted app platform using TrueNAS, mirrored drives, and secure remote access

Convert a gaming PC into a homelab

It turns out an ordinary gaming PC and TrueNAS make a surprisingly capable pairing.

For photographers and tinkerers seeking control over media and apps, a homelab can resolve storage problems and feed curiosity about self-hosting.

The setup began by repurposing a retired gaming machine: large-capacity hard disks were added, and TrueNAS was installed to manage file services.

The objective was simple: stop manually copying bulky Fujifilm RAW archives and provide a secure, manageable platform for a few lightweight services.

This article walks through the hardware and design choices that matter most to anyone considering a first homelab.

Storage and backup strategy

This section explains the storage and backup approach suited to a first homelab.

The system uses RAID 1 to provide immediate disk redundancy. RAID 1 mirrors data across two drives, reducing the risk of data loss from a single drive failure.
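In ZFS terms, which TrueNAS uses under the hood, RAID 1 is a two-disk mirror vdev. A minimal sketch, with placeholder pool and device names (TrueNAS normally builds the pool through its web UI):

```shell
# Create a pool named "tank" from a two-disk mirror (ZFS's RAID 1).
# /dev/ada0 and /dev/ada1 are placeholder device names.
zpool create tank mirror /dev/ada0 /dev/ada1

# Confirm both halves of the mirror report ONLINE.
zpool status tank
```

A mirror halves usable capacity but survives any single-drive failure; note that it protects against drive failure only, not accidental deletion, which is what snapshots are for.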

Frequently accessed application data is kept on a fast SSD or NVMe. Using solid‑state storage for active workloads improves responsiveness and lowers application latency.

Snapshotting is enabled to protect against accidental deletion and to allow quick point‑in‑time recovery. Snapshots consume space but are efficient for short‑term restores and versioning.
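As a sketch of how that looks in practice, assuming a ZFS dataset named tank/photos (the name is a placeholder):

```shell
# Build a timestamped snapshot name for a dataset (placeholder name).
snapname() { printf '%s@auto-%s' "$1" "$(date +%Y%m%d-%H%M)"; }
snapname tank/photos    # e.g. tank/photos@auto-20250101-0200

# On the NAS, the snapshot itself would be taken and, after an
# accidental deletion, rolled back with:
#   zfs snapshot -r "$(snapname tank/photos)"
#   zfs rollback tank/photos@auto-20250101-0200
```

Snapshots are copy-on-write, so each one only consumes space as the live data diverges from the point it captured.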

Remote administration avoids exposing services to the public internet. A modern mesh VPN provides authenticated remote access while keeping management ports closed to external scanning.

Off‑site backups are pushed to affordable object storage to satisfy the 3‑2‑1 backup principle: three copies, on two different media, with one copy off site. Object storage services scale capacity and simplify automated retention policies.

These choices balance cost, performance, and data safety for a compact homelab. Later sections cover the hardware in more detail and the network and access controls that complement this storage strategy.

Hardware and basic configuration

I repurposed an older desktop as the homelab host to keep costs down while maintaining usable performance. The system uses a mid-range six-core AMD Ryzen processor and 16 GB of system memory. A dedicated GPU with 8 GB of VRAM supports lightweight AI experiments and hardware-accelerated media tasks.

Storage is tiered to balance capacity, performance and longevity. Two 8 TB mechanical drives provide bulk capacity and mirror redundancy for user data. A smaller SSD and a separate NVMe drive host the operating system and high-I/O services such as virtual machines, container images and databases. This layout isolates frequent writes to the faster media and preserves the mechanical drives for large, sequential storage.

Thermal management and power provisioning were addressed early in the build. The case has additional intake and exhaust fans to keep sustained loads under control. A modest UPS protects the system from short outages and provides clean shutdown capability for the storage array.

For expandability, the motherboard’s additional SATA and M.2 sockets remain available for future drives. Memory can be upgraded in matched pairs, and the power supply has spare connectors for additional storage or a second GPU if workloads grow.

Storage layout and redundancy

The storage is arranged to separate fast system operations from bulk data. System and application workloads run on the NVMe for low latency. Bulk file storage sits on the mirrored pair to preserve availability if one drive fails.

This separation improves performance for databases and search indexes while keeping user data resilient. I maintain scheduled point-in-time captures to enable file-state recovery without restoring entire volumes.

Operational practices reinforce the layout. I monitor drive health with SMART checks and set alerts for early signs of degradation. I also verify the integrity of captures regularly and practise drive replacement procedures to minimise downtime.
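A minimal version of that alerting, assuming smartmontools is installed (the attribute choice and threshold here are illustrative):

```shell
# On the NAS, raw attributes come from smartmontools:
#   smartctl -H -A /dev/ada0
# Illustrative alert rule: any reallocated sectors at all is an early
# warning that the drive should be swapped out.
smart_alert() {
  # $1 = raw value of SMART attribute 5 (Reallocated_Sector_Ct)
  if [ "$1" -gt 0 ]; then
    echo "WARN: schedule a drive replacement"
  else
    echo "OK"
  fi
}
smart_alert 0
smart_alert 12
```

Feeding the real smartctl output into a rule like this, on a schedule, is what turns raw telemetry into the early-replacement alerts described above.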

TrueNAS and data protection features

TrueNAS Community Edition runs from the NVMe to provide the system and data-protection services for the storage environment. The platform exposes network file shares and a catalog of applications while integrating core data-protection functions such as automated snapshots, replication and plugins. I configured hourly, daily and weekly snapshot retention policies so accidental deletions or unwanted filesystem changes can be rolled back without scanning raw backups.
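TrueNAS configures these tiers as Periodic Snapshot Tasks in the web UI; expressed as a plain cron sketch (dataset name and schedule are placeholders), the policy looks like:

```shell
# /etc/crontab sketch of the hourly/daily/weekly tiers; TrueNAS's
# Periodic Snapshot Tasks implement the same idea with retention built in.
0 * * * *  root  zfs snapshot tank/photos@hourly-$(date +\%Y\%m\%d-\%H)   # kept 24 hours
0 2 * * *  root  zfs snapshot tank/photos@daily-$(date +\%Y\%m\%d)        # kept 7 days
0 3 * * 0  root  zfs snapshot tank/photos@weekly-$(date +\%Y\%m\%d)       # kept 4 weeks
```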

Monitoring and drive health

A monitoring dashboard ingests drive telemetry and displays S.M.A.R.T. attributes alongside trend charts for temperature, power-on hours and read error rates. The system archives historical metrics to show gradual degradation and to distinguish transient spikes from real faults. Alerts notify administrators when measurements cross predefined thresholds, enabling planned drive replacement before catastrophic failure.

The combined snapshot and monitoring strategy reduces recovery time and operational risk for long-term storage.

Self-hosted apps and backup strategy

Continuing the storage discussion, the homelab also hosts several user-facing services that extend data protection beyond the NAS.

The system runs a web frontend for restic that sends daily backups to an external bucket on Backblaze B2, ensuring an off-site copy exists in case local storage is compromised. Backups are encrypted client-side before upload to preserve confidentiality.
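Under the web frontend, the restic job itself reduces to a few commands. A sketch, with placeholder bucket names, keys, and paths (restic encrypts everything client-side with the repository password before upload):

```shell
# Credentials and names below are placeholders.
export B2_ACCOUNT_ID="your-b2-key-id"
export B2_ACCOUNT_KEY="your-b2-application-key"
export RESTIC_REPOSITORY="b2:homelab-backups:photos"
export RESTIC_PASSWORD_FILE="/root/.restic-pass"   # key for client-side encryption

restic init                      # once: create the encrypted repository
restic backup /mnt/tank/photos   # the daily job; only changed data is uploaded
restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prune
```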

For mobile media, I use Immich to back up and browse photos. It provides native mobile apps and keeps media under local control while offering automated sync from devices.

A lightweight recipe manager supports meal planning without adding significant resource load. Separately, an AI model backend hosts small language models on the GPU to enable local experimentation and development without sending data to third-party cloud services.

Why this mix matters: off-site backups protect against hardware failure or site loss, local apps preserve privacy and reduce latency, and modest GPU hosting lets developers test models securely.

Operational safeguards include regular backup verification and restore drills, rotation and secure storage of access keys, client-side encryption of backup data, and monitoring of backup job success rates. Services exposed to the network are limited to necessary ports and protected by authentication and firewall rules.
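For the verification and restore-drill part, a periodic job along these lines works (paths and file names are placeholders):

```shell
# Verify repository structure and re-read a random sample of the data.
restic check --read-data-subset=5%

# Restore drill: pull one directory into a scratch location and
# compare a file against the live copy.
restic restore latest --target /tmp/restore-drill --include /mnt/tank/photos/2024
cmp /tmp/restore-drill/mnt/tank/photos/2024/sample.RAF /mnt/tank/photos/2024/sample.RAF
```

A backup that has never been restored is only a hypothesis; the drill is what makes it evidence.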

AI experiments and embeddings

I run compact language models locally because the GPU's 8 GB of VRAM is enough for small models. An on-premise inference engine hosts the models and serves vector-embedding experiments. Models are intentionally small to fit the memory limit, and running them on-site avoids routing data through external services, which keeps sensitive information local and reduces latency for iterative work.

The work supports offline evaluation of embedding quality and retrieval tasks. It also enables quick prototyping of prompt and model variations without incurring cloud costs. Hardware constraints shape model selection and force trade-offs between accuracy and resource use.
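The article does not name the inference engine; assuming something like Ollama, the workflow for local embedding experiments is short (the model names are examples that fit in 8 GB of VRAM):

```shell
# Pull a small chat model and a compact embedding model (example names).
ollama pull llama3.2:3b
ollama pull nomic-embed-text

# Ask the local API for an embedding; nothing leaves the machine.
curl -s http://localhost:11434/api/embeddings \
  -d '{"model": "nomic-embed-text", "prompt": "Fujifilm RAW archive notes"}'
```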

Remote access and future improvements

Network and access controls complement the storage strategy described earlier. Remote connectivity is provided through Tailscale, a mesh networking solution that creates authenticated, encrypted tunnels between devices. This method removes the need to expose individual services to the public internet.
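Joining the NAS to the tailnet is a one-time step; a sketch:

```shell
# Authenticate the host into the tailnet (prints a browser login URL).
tailscale up

# List peers and their stable 100.x.y.z tailnet addresses.
tailscale status
```

Management interfaces are then reached over the node's tailnet address, with no ports forwarded on the home router.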

Using a private mesh reduces the system’s attack surface and centralizes trust decisions because each client must authenticate to reach the homelab. The design also simplifies firewall and routing rules compared with per-service port forwarding.

For off-site resilience, backups are replicated to cloud object storage to meet long-term durability goals. That redundancy separates operational access from disaster recovery, preserving recoverability if the local network becomes unavailable.

What’s next

Current access relies on IP addresses and ports, which complicates browser-based login storage and produces unwieldy URLs. The next step is to introduce friendly hostnames or short domain names in front of services. This change improves discoverability and makes password managers work more reliably with browser logins. Implementing automated certificate management and streamlined backup routines will further reduce operational friction. More granular snapshot policies and scheduled integrity checks will increase day-to-day reliability for media archives and hosted experiments.
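One low-friction route, assuming MagicDNS and HTTPS certificates are enabled in the Tailscale admin console: each node gets a stable name such as nas.tailnet-name.ts.net, and Tailscale can mint a browser-trusted certificate for it and proxy a local web UI over HTTPS (the hostname and port below are placeholders, and serve syntax varies a little between Tailscale versions):

```shell
# Issue a TLS certificate for this node's MagicDNS name.
tailscale cert nas.tailnet-name.ts.net

# Proxy a local web UI over HTTPS at that name.
tailscale serve --bg http://localhost:8080
```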

Final notes on the setup

Repurposing a spare desktop into a compact homelab provides a low-cost platform for large photo libraries and personal services. Combining TrueNAS, disk monitoring, mirrored storage, secure remote access with Tailscale, and off-site backups to Backblaze B2 produces a resilient, private environment. That configuration separates operational access from disaster recovery and preserves recoverability if the local network becomes unavailable.

For ongoing maintenance, adopt automated tasks for certificate renewal, backup verification, and snapshot pruning. Monitor drive health and service logs to detect degradation early. These steps keep the system usable for creative workflows and technical learning without requiring constant manual intervention.

Written by AiAdhubMedia
