Building a FreeBSD Plex Media Server, Part 2
Why ZFS?
ZFS alone is reason enough to give the BSDs a try for your server. Linux supports ZFS, but due to licensing conflicts it is not included in the kernel: ZFS is open source under the Common Development and Distribution License (CDDL) 1.0, whereas the Linux kernel is licensed under the GNU General Public License (GPL) 2.0, and the two are considered incompatible.
This means that on Linux you have to rely on Dynamic Kernel Module Support (DKMS). This works fine, but it means ZFS-on-root on Linux can break after a kernel update if the ZFS DKMS package has an incompatibility, or if manual intervention is required and you happen to miss it, leaving your system unbootable and in need of a rescue environment.
Key Benefits of FreeBSD
FreeBSD, by contrast, ships ZFS as part of the base system, which is a much more straightforward approach, and FreeBSD’s ZFS support is mature. The kernel and base system are developed and maintained as a unified codebase, unlike Linux, which assembles components from many independent sources.
This design means that all components, such as the kernel, core utilities, and system libraries are all versioned and tested together, reducing compatibility issues that can arise from independently updated packages. Upgrades are more predictable and reliable, with fewer dependencies breaking or services failing after a system update.
The benefits of ZFS are hard to ignore.
Key Benefits of ZFS on FreeBSD
- Data Integrity: End-to-end checksumming and self-healing protect against data corruption.
- Flexible Storage Pools: Dynamic allocation with pooled storage rather than fixed partitions.
- Snapshots and Clones: Fast, space-efficient snapshots and writable clones for testing and backup.
- Administration: Robust CLI for managing pools, datasets, and replication with zfs send/receive.
- Performance Optimization: Features like ARC caching, the ZIL (ZFS intent log), and intelligent I/O scheduling improve speed.
- Integrated RAID: Native support for mirror and RAID-Z configurations without external tools.
- FreeBSD Integration: Full support for booting from ZFS, jails, and system utilities.
- Compression and Deduplication: Transparent data compression and optional deduplication.
- Copy-on-Write: Ensures consistent data at all times, simplifying snapshots and rollback.
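As a quick sketch of the snapshot and replication features above (the pool, dataset, snapshot, and host names here are placeholders, not part of the setup described later):

```shell
# Create a near-instant, space-efficient snapshot (copy-on-write
# means only blocks changed afterwards consume extra space).
zfs snapshot tank/media@before-cleanup

# Discard all changes made since the snapshot.
zfs rollback tank/media@before-cleanup

# Replicate the snapshot to another machine over SSH.
zfs send tank/media@before-cleanup | ssh backup-host zfs receive backup/media
```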
After installing FreeBSD on my server, the first task was to organize storage using ZFS, taking advantage of its powerful features like snapshots, self-healing, and flexible datasets.
To begin, I needed to identify the available disks on the system. FreeBSD provides the geom command for this purpose:
geom disk list
Geom name: ada0
Providers:
1. Name: ada0
Mediasize: 1000204886016 (931G)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r1w1e1
descr: ST1000DM010-2EP102
ident: Z4A123CD
rotationrate: 7200
fwsectors: 63
fwheads: 16
Geom name: ada1
Providers:
1. Name: ada1
Mediasize: 1000204886016 (931G)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r1w1e1
descr: ST1000DM010-2EP102
ident: Z4A456EF
rotationrate: 7200
fwsectors: 63
fwheads: 16
...etc
With the disk layout confirmed, I proceeded to create separate ZFS pools optimized for different workloads.
Setting Up a ZFS Pool for Plex
For Plex media storage, I created a ZFS pool named zplex using two mirrored vdevs. This provides redundancy in case of disk failure, essential for protecting a large media library.
zpool create zplex mirror ada0 ada1 mirror ada2 ada3
This creates a new ZFS storage pool named zplex from four disks arranged as two mirrored vdevs.
Breaking it down
- zpool create zplex – Creates a new pool named zplex.
- mirror ada0 ada1 – The first vdev is a mirror made from the disks ada0 and ada1; data is duplicated across both drives for redundancy.
- mirror ada2 ada3 – The second vdev is another mirror, using the disks ada2 and ada3.
What you end up with
You’re effectively building a pool made of four disks, grouped like this:
- Mirror 1: ada0 + ada1
- Mirror 2: ada2 + ada3
ZFS then stripes data across the two mirrors, giving you:
- Redundancy: You can lose one disk per mirror (up to two total) as long as they’re not in the same mirror.
- Performance: Read and write performance benefits from striping across both mirror vdevs.
This arrangement is commonly called a “RAID10-style” ZFS layout: mirrors for redundancy, striped for performance.
Setting Up ZFS Datasets for Plex
Next, I created ZFS datasets for Plex and assigned them custom mount points to logically separate media content:
zfs create -o mountpoint=/plex zplex/plex
zfs create -o mountpoint=/plex/movies zplex/plex/movies
zfs create -o mountpoint=/plex/tv zplex/plex/tv
These commands create a set of datasets inside the zplex pool, each with its own mountpoint so they appear as separate directories on the filesystem.
Command breakdown
1. Parent Dataset
zfs create -o mountpoint=/plex zplex/plex
This creates a ZFS dataset named zplex/plex and sets its mountpoint to /plex.
Once created, the directory /plex becomes backed by this dataset.
2. First Child Dataset
zfs create -o mountpoint=/plex/movies zplex/plex/movies
This creates a child dataset named zplex/plex/movies, mounted at /plex/movies.
Using a separate dataset gives you independent control of snapshots, quotas, compression, and permissions for your movies library.
3. Second Child Dataset
zfs create -o mountpoint=/plex/tv zplex/plex/tv
This creates another child dataset, zplex/plex/tv, mounted at /plex/tv.
Why make separate datasets?
By splitting movies and tv into their own datasets:
- Each can have its own snapshot schedule
- You can apply different quotas or compression settings
- Permissions can be delegated more cleanly
- Plex libraries stay neatly organized at the filesystem level
- ZFS manages each dataset independently
This structure not only helps with organization but also allows for dataset-specific settings like compression or quotas in the future.
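To illustrate the kind of per-dataset tuning mentioned above, a short sketch (the property values are examples, not recommendations):

```shell
# Enable lz4 compression on the movies dataset only.
zfs set compression=lz4 zplex/plex/movies

# Cap the TV library at 500 GB.
zfs set quota=500G zplex/plex/tv

# Snapshot just the movies dataset before a bulk reorganization.
zfs snapshot zplex/plex/movies@pre-cleanup

# Inspect the properties that were set.
zfs get compression,quota zplex/plex/movies zplex/plex/tv
```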
Preparing Encrypted Storage for Jails
To host applications in FreeBSD jails, such as Git and Nextcloud, I created a separate ZFS pool named zstor using a mirrored configuration for redundancy:
zpool create zstor mirror ada4 ada5
For sensitive data, encryption is a must, and ZFS’s native encryption makes this straightforward. I created encrypted, compressed datasets for Git and Nextcloud:
zfs create -o encryption=on -o keyformat=passphrase -o keylocation=prompt \
-o compress=lz4 -o mountpoint=/git zstor/git
zfs create -o encryption=on -o keyformat=passphrase -o keylocation=prompt \
-o compress=lz4 -o mountpoint=/nextcloud zstor/nextcloud
These commands create encrypted ZFS datasets for storing application data, each with its own mountpoint and compression enabled.
Command breakdown
Both commands follow the same pattern:
- zfs create – creates a new dataset.
- -o encryption=on – enables native ZFS encryption for the dataset.
- -o keyformat=passphrase – the encryption key is a human-entered passphrase rather than a raw key file.
- -o keylocation=prompt – ZFS asks for the passphrase whenever the dataset is unlocked.
- -o compress=lz4 – turns on lz4 compression, giving good performance with automatic space savings.
- -o mountpoint=… – sets the directory where the dataset will be mounted.
1. Git dataset
zfs create -o encryption=on -o keyformat=passphrase -o keylocation=prompt \
-o compress=lz4 -o mountpoint=/git zstor/git
This creates an encrypted dataset named zstor/git, mounted at /git.
It’s ideal for storing private Git repositories or anything that requires an encrypted filesystem.
2. Nextcloud dataset
zfs create -o encryption=on -o keyformat=passphrase -o keylocation=prompt \
-o compress=lz4 -o mountpoint=/nextcloud zstor/nextcloud
This creates a second encrypted dataset, zstor/nextcloud, mounted at /nextcloud, suitable for storing synced files, user data, and application content for a Nextcloud instance.
Why encrypted datasets?
Using encrypted datasets for Git and Nextcloud gives you:
- At-rest encryption handled natively by ZFS
- Clean separation between datasets
- Independent passphrases or unlock procedures
- Automatic compression to save space
- Flexibility to snapshot, clone, and replicate each dataset separately
With this layout, each jail has its own secure dataset, giving me fine-grained control, flexibility, and peace of mind, all integrated into FreeBSD’s native toolset.
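One operational note, sketched below: because these datasets use keylocation=prompt, they are not unlocked automatically at boot. After a reboot, the keys must be loaded and the datasets mounted before the jails can use them:

```shell
# Load the encryption keys; ZFS prompts for each passphrase
# because keylocation=prompt was set at creation time.
zfs load-key zstor/git
zfs load-key zstor/nextcloud

# Mount the now-unlocked datasets.
zfs mount zstor/git
zfs mount zstor/nextcloud

# Verify that the keys are available.
zfs get keystatus zstor/git zstor/nextcloud
```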
Drive replacement
What good is a RAID setup if you can’t easily replace and recover from failed disks?
Not long after setting up my ZFS pool, I encountered a disk failure in the zplex pool. Fortunately, ZFS makes identifying and replacing failed drives straightforward. The first indication of a problem came from checking the pool’s health:
zpool status zplex
This command reports the status of each vdev and displays any errors or degraded conditions. In my case, one of the disks showed a status like:
NAME STATE READ WRITE CKSUM
zplex DEGRADED 0 0 0
mirror-0 ONLINE 0 0 0
ada0 ONLINE 0 0 0
ada1 ONLINE 0 0 0
mirror-1 DEGRADED 0 0 0
ada2 ONLINE 0 0 0
13095480495471608692 FAULTED 0 0 0
The long numeric value (13095480495471608692) identifies the failed disk. To find its physical device name prior to replacement, I used the following command:
geom disk list
Once the replacement disk was installed and recognized by the system (in this case as /dev/ada3), I instructed ZFS to begin the replacement:
zpool replace zplex 13095480495471608692 /dev/ada3
ZFS immediately began resilvering, rebuilding the data on the new disk from the surviving mirror member. You can monitor the progress:
zpool status zplex
Example:
pool: zplex
state: ONLINE
scan: resilver in progress since Tue Nov 19 10:42:03 2025
150G scanned at 1.25G/s, 82G issued at 680M/s, 500G total
82G resilvered, 16.4% done, 0h58m to go
config:
NAME STATE READ WRITE CKSUM
zplex ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
ada0 ONLINE 0 0 0
ada1 ONLINE 0 0 0
mirror-1 ONLINE 0 0 0
ada2 ONLINE 0 0 0
replacing-2 ONLINE 0 0 0
13095480495471608692 ONLINE 0 0 0 (old)
ada3 ONLINE 0 0 0 (resilvering)
errors: No known data errors
With minimal intervention and no downtime for media playback, the pool was back to a healthy state. This experience reinforced one of the strengths of ZFS: robust handling of hardware failures with clear diagnostics and built-in resiliency.
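Beyond reacting to failures, it is worth scrubbing pools regularly so latent checksum errors are found and repaired while redundancy is still intact. A sketch using FreeBSD's periodic(8) facility (the knobs shown are the stock ones from /etc/defaults/periodic.conf):

```shell
# Start a scrub manually; it reads all data in the pool and
# repairs anything that fails checksum verification using the
# mirror copy.
zpool scrub zplex

# Check scrub progress and results.
zpool status zplex

# In /etc/periodic.conf, let the daily periodic run kick off a
# scrub whenever the last one is older than the threshold:
# daily_scrub_zfs_enable="YES"
# daily_scrub_zfs_default_threshold="35"   # days between scrubs
```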
Conclusion
Building a solid storage foundation is the most critical step in creating a reliable homelab server, and this article demonstrates why FreeBSD with ZFS is an exceptional choice for the task. We’ve walked through the practical steps of creating distinct ZFS pools for different needs: a resilient, RAID10-style pool for our large Plex media library, and a separate, encrypted pool to securely house sensitive application data.
By leveraging ZFS datasets, we’ve not only organized our file systems logically for movies and TV shows but also enabled fine-grained control for future management. The real-world scenario of a disk failure underscored the power of ZFS, showcasing how easily a faulty drive can be identified, replaced, and resilvered without any data loss or significant downtime.
With a robust, self-healing, and secure storage backend now in place, our FreeBSD server is fully prepared for the next stage: deploying Plex and other services within isolated jails. The initial setup proves that with ZFS you get more than just storage: you get a professional-grade data management system that provides performance, flexibility, and peace of mind.