Modern Samba on ZFS: A Minimal, Clean Configuration for a 2026 Homelab
Modern Samba and ZFS configuration for a homelab NAS with macOS, Linux, and Windows 11 clients using SMB3 and NFSv4 ACLs.
For years, my NAS ran on a Samba configuration that had grown organically. It started sometime around the Samba 4.4 era, survived multiple distro upgrades, and accumulated knobs that I honestly couldn’t justify anymore. You know how it is — you copy a config from some random blog post that worked in 2015, then update it just enough to keep it functioning, and before you know it, you’ve got a Frankenstein setup that works but nobody really understands.
It worked. That’s always the dangerous part.
But “working” and “clean, modern, and secure” are two different things. When I recently revisited the box (ZFS-backed, HDD-based, turnkey NAS distro), I found myself staring at this configuration file thinking, “I have absolutely no idea why half of these settings are here.” That was the moment I decided to do what every sysadmin eventually has to do: remove the cargo-cult configuration and rebuild it from first principles.
The funny thing is, I kept expecting the new setup to be more complex. After all, the old config had so many settings. But it turns out most of them were either legacy workarounds or things that shouldn’t have been there in the first place. The modern stack is actually simpler and works better.
This post walks through the result: a minimalistic Samba config for ZFS, tuned for:
- macOS machines (including Time Machine backups)
- Linux clients and dev machines
- A few Windows 11 machines
- Media-heavy workloads with large files
- A privacy-focused homelab where I own all the hardware
No legacy NTLM hacks that nobody should be using anyway. No insecure wide links that cause more security headaches than they solve. No 2016-era fruit workarounds that were only needed because of old incompatibilities. Just a clean, modern setup that does what it’s supposed to do without all the cruft.
🛠️ Why Revisit Samba at All?
The honest answer is that I got tired of not understanding my own infrastructure. But there’s also a practical side to this.
If your Samba config is older than your SSD cache, chances are it’s carrying forward settings that made sense in a completely different era:
- It still enables NTLMv1 or other legacy authentication paths that we really shouldn’t be using in 2026
- It contains fruit options that were workarounds for bugs that have long since been fixed
- It explicitly disables extended attributes because someone, somewhere, once had a problem with them
- It mixes `create mode`, `force create mode`, `directory mode`, and three other variants all fighting each other
- It ignores how ZFS actually wants to handle ACLs, often fighting against the filesystem layer instead of working with it
That was basically my situation. I had comments in the config from like 2014 next to settings I’d copied from Stack Overflow in 2019. It was a mess.
The system was serving several important purposes in my homelab:
- Media shares for Kodi and tinyMediaManager workflows (4K files, streaming)
- Linux development machines that needed reliable NFS-like access
- macOS laptops from various family members
- A couple of Windows 11 machines for specific applications
- A dedicated Time Machine share for automated Mac backups
ZFS ran under the hood. Each major share had its own dataset. Everything was on HDD storage, so performance optimization mattered but wasn’t about raw speed — it was about efficiency and not thrashing the disks.
It was a perfect foundation. The Samba layer on top? It was functional but messy, and I kept second-guessing myself whenever I had to make changes. That’s a red flag that your infrastructure has become too complicated.
🧱 Understanding the ZFS + Samba Relationship
Here’s the thing many tutorials skip — and it cost me a full debugging session to understand:
Samba is only half the story.
If ZFS ACLs and extended attributes are misconfigured at the filesystem level, Samba will behave strangely no matter how clean your smb.conf looks. I learned this the hard way when I couldn’t figure out why Time Machine backups kept getting permission errors, and the entire issue was that my ZFS dataset was using POSIX ACLs instead of NFSv4 ACLs.
For modern clients to work reliably — I’m talking about Windows 11 expecting proper ACL inheritance and macOS expecting extended attributes to be stored correctly — you need alignment between all the layers:
- NFSv4 ACLs at the ZFS level (instead of the legacy POSIX ACLs)
- Extended attributes stored in system attributes (`xattr=sa` in ZFS), which is way more efficient
- Samba configured to trust and pass through those ACLs instead of trying to manage them itself
If you don’t align those layers properly, you end up with weird permission inheritance bugs that show up at 2 AM when someone tries to move a directory with nested files. Trust me on this — I’ve spent weekends debugging permission issues that turned out to be a single misconfigured ZFS property on the parent dataset.
The key insight is this: if you’re supporting modern clients (anything Windows 7 and newer, any modern macOS), you should be using NFSv4 ACLs. They’re more powerful, more predictable, and they work better across different client types. The moment you accept that, everything else falls into place.
🔧 The Minimal Modern smb.conf
This is what my global section looks like now, and then I’ll walk through why each setting is there.
```ini
[global]
server string = NAS

# Protocol stack
server min protocol = SMB2_10
server max protocol = SMB3_11

# Apple compatibility
vfs objects = catia fruit streams_xattr
fruit:metadata = stream
fruit:resource = stream
fruit:locking = none

# ZFS + ACL handling
ea support = yes
map acl inherit = yes
store dos attributes = yes

# Security hygiene
unix extensions = no
wide links = no

# Safe performance tuning for HDD-backed ZFS
aio read size = 1
aio write size = 1
use sendfile = yes
```
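Before reloading anything, it's worth letting Samba itself check the file. `testparm` ships with Samba; this is just its standard invocation, nothing specific to this setup:

```shell
# Parse smb.conf, warn about unknown or deprecated parameters,
# then dump the effective (non-default) configuration.
testparm -s /etc/samba/smb.conf
```

Unknown or ignored parameters show up here immediately, which is handy when pruning an old config.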
Let me break down why each section matters:
Protocol Stack: By setting the minimum to SMB2_10, I’m dropping support for anything older (like Windows Vista era). In 2026, there’s just no reason to support those ancient clients. If you have one ancient Windows machine, you’re the exception, and you know it. SMB3_11 is the current standard and gives you better performance, better security, and better compatibility with modern OSes.
Apple Compatibility: The catia module handles Mac-specific filename characters. The fruit module (yes, that’s really what it’s called) handles how macOS extensions and metadata get stored. By setting metadata = stream and resource = stream, I’m telling Samba to store these things as SMB streams instead of trying to create separate files or use other hacks. This is cleaner and faster. The locking = none for fruit is important because Time Machine doesn’t lock files like traditional SMB clients do.
ZFS + ACL Handling: The ea support = yes is critical — without it, macOS will complain constantly about not being able to store extended attributes. map acl inherit = yes tells Samba to properly inherit ACLs from parent directories (something Windows 11 absolutely depends on). store dos attributes = yes preserves things like the “archived” flag that Windows cares about.
Security Hygiene: unix extensions = no disables SMB Unix Extensions, an old protocol extension that modern clients don’t need and that can create security issues. wide links = no prevents symlink traversal attacks; keep wide links disabled unless you have a specific, well-understood reason to allow them.
Performance Tuning for HDD: The aio read size = 1 and aio write size = 1 settings tell Samba to use asynchronous I/O for all reads and writes. This is especially important on HDD arrays because it prevents single requests from blocking everything. use sendfile = yes lets the kernel do zero-copy transfers when sending files, which is a significant performance win on large files.
Notice what’s NOT here:
- No `allow insecure wide links`
- No `fruit:zero_file_id` workaround
- No `fruit:nfs_aces` hacks
- No legacy NTLM tweaks
- No random compatibility switches you don’t understand
Modern clients don’t need any of that. In fact, they work better without it. This is liberating — the config is shorter and easier to understand.
🎬 Media Share Template
For my media datasets, I use one clean template that handles all the permission management correctly. Here’s what it looks like:
```ini
[Media]
path = /srv/media
browseable = yes
read only = no
valid users = myuser
write list = myuser
create mask = 0664
directory mask = 0775
force user = data
force group = users
inherit permissions = yes
```
The key choices here are worth explaining:
valid users and write list: I’m explicitly listing who can connect and who can write. This is a security boundary. I’m not trying to be fancy with group membership here — it’s just simple and auditable.
create mask and directory mask: These are important because they establish the baseline permissions. 0664 for files means readable and writable by owner and group, readable by others. 0775 for directories means the group can traverse. This works well with ZFS ACLs and means new files have predictable permissions.
force user and force group: This is where many people get confused. By setting force user = data, every file written through Samba gets owned by the data account, regardless of who logged in. This might sound weird, but it actually solves a bunch of problems — file ownership becomes predictable, and you don’t get permission tangles from different users creating files.
inherit permissions = yes: This tells Samba to honor the directory’s permissions when creating new files. Combined with ZFS’s ACL inheritance settings we’ll see later, this cascades permissions correctly through the hierarchy.
For additional shares using the same template, I just reference it:
```ini
[Media2]
copy = Media
path = /srv/media2
```
This is much cleaner than repeating all the settings. It means if I want to change the template, it applies everywhere.
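To confirm a share actually behaves, I like to poke at it from a Linux client with `smbclient` before involving any GUI. The hostname and user below are the examples from this post:

```shell
# List the shares the server exports (expect Media, Media2, etc.)
smbclient -L //nas -U myuser

# Connect to Media and confirm write access works end to end
smbclient //nas/Media -U myuser -c 'mkdir perms_test; rmdir perms_test'
```

If the mkdir/rmdir round trip succeeds, the valid users / write list boundary and the force user mapping are all doing their jobs.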
🍎 Time Machine Share
Time Machine is special. It has different requirements than regular file sharing, and it honestly frustrated me for years until I researched exactly what it needs.
Here’s a dedicated share just for Time Machine:
```ini
[TM]
path = /srv/time_machine
valid users = myuser
read only = no
vfs objects = fruit streams_xattr
ea support = yes
fruit:time machine = yes
fruit:time machine max size = 2T
create mask = 0664
directory mask = 0775
force user = timemachine
```
Let me explain why these settings matter:
fruit:time machine = yes: This tells Samba to handle Time Machine’s special protocol requirements. Without this, Time Machine might work intermittently or get confused about whether a backup is complete. With it, Samba knows to handle the specific way Time Machine negotiates the backup protocol.
fruit:time machine max size = 2T: Time Machine will try to use all available space unless you tell it not to. By setting a hard limit, I avoid the disaster scenario where a single Mac’s backup fills the entire NAS. You need to think about this carefully — how much space should each Mac get for backups? I use 2TB per machine, which gives me room for multiple years of incremental backups.
vfs objects = fruit streams_xattr and ea support = yes: These are critical for Time Machine because it relies heavily on extended attributes to track backup metadata. Without these, Time Machine will either fail or behave erratically.
Dedicated force user: I use a separate system account for Time Machine backups rather than forcing it to the regular data user. This gives me better audit trails and makes it easier to reason about permissions. If something goes wrong with backups, I can check what the timemachine user did.
The reason Time Machine needs such careful configuration is that it has very specific requirements: it needs to create special sparsebundle files, it needs to track metadata carefully, and it uses SMB protocol features in ways that regular file sharing doesn’t. Modern Samba handles this better than it used to, but it still requires explicit configuration to work reliably.
Time Machine over SMB3 is stable today — but only if ACLs and xattrs are correct underneath.
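On the Mac side, the destination can be set from the terminal instead of clicking through System Settings. `tmutil` is Apple's built-in tool; the hostname, user, and share name below are this post's examples:

```shell
# Run on the macOS client, not on the NAS.
# Register the SMB share as a Time Machine destination
# (-p prompts for the password instead of embedding it in the URL).
sudo tmutil setdestination -p "smb://myuser@nas/TM"

# Confirm Time Machine picked it up
tmutil destinationinfo
```

If destinationinfo lists the share, the fruit:time machine negotiation on the server side worked.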
🗄️ ZFS Dataset Configuration (The Real Optimization)
Here’s where I learned an important lesson: Samba is just one part of the equation. The real magic happens at the ZFS level.
Each share has its own dataset in my setup. That’s not accidental — it gives me precise control over several critical aspects:
- Record size: How much data ZFS groups together on disk
- Compression: Which algorithm to use (or none)
- Quotas: Hard limits on space usage
- Snapshots: Point-in-time backups for recovery
This separation is what makes the whole system reliable and maintainable. Instead of one big pool serving everything, I have granular control.
🔹 Apply to All SMB Datasets
These settings form the foundation and should be applied to every dataset you’re serving over SMB:
```shell
zfs set acltype=nfsv4 pool/dataset
zfs set xattr=sa pool/dataset
zfs set aclmode=passthrough pool/dataset
zfs set aclinherit=passthrough pool/dataset
```
Let me explain what each one does:
- `acltype=nfsv4`: switches from POSIX ACLs to NFSv4 ACLs. NFSv4 is what Windows 11 expects, and it’s also more expressive and powerful. If you don’t set this, you’re stuck in the POSIX world, which is like playing with one hand tied behind your back.
- `xattr=sa`: stores extended attributes as system attributes in the file’s metadata instead of as hidden directory entries on disk. It’s faster and cleaner, and it’s critical for macOS, which relies on xattrs for a lot of filesystem metadata.
- `aclmode=passthrough`: tells ZFS not to try to be clever about ACLs. When Samba sets an ACL, ZFS honors it exactly as given. Without this, you get weird translation layers that cause problems.
- `aclinherit=passthrough`: same idea for inheritance. When you create a new file in a directory with specific ACLs, those ACLs automatically apply to the file. Again, “passthrough” means ZFS does exactly what you ask.
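To avoid typing those four `zfs set` lines for every dataset, a small helper can print them for all SMB datasets so you can review before running anything. This is just a sketch; the dataset names are examples:

```shell
# Print the baseline zfs commands for each SMB-served dataset,
# so they can be reviewed before being piped to sh.
smb_dataset_props() {
  for ds in "$@"; do
    for prop in acltype=nfsv4 xattr=sa aclmode=passthrough aclinherit=passthrough; do
      printf 'zfs set %s %s\n' "$prop" "$ds"
    done
  done
}

smb_dataset_props pool/media pool/media2 pool/time_machine
# After reviewing the output, apply it with:
#   smb_dataset_props pool/media pool/media2 pool/time_machine | sh
```

Printing first instead of executing directly is deliberate: on a NAS, a mistyped dataset name in a `zfs set` loop is the kind of thing you want to catch before it runs.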
If you’re still using POSIX ACLs with Windows clients, you’re fighting the system.
🎥 Media Datasets (Large Files, HDD)
For datasets storing media files — think 4K videos, media server libraries, that kind of thing — I use a different set of optimizations:
```shell
zfs set atime=off pool/media
zfs set compression=lz4 pool/media
zfs set recordsize=1M pool/media
```
Why atime=off? Every time a file is read, updating the access time requires a disk write. On spinning disks, that adds up. By disabling access time tracking, I eliminate those extra writes. I don’t need to know the last time a file was read for media serving anyway.
Why compression=lz4? LZ4 compression is fast and gives decent compression ratios. For media files that are already somewhat compressed (MP4, MKV, JPEG), you won’t get amazing compression, but LZ4 is so fast that there’s no downside. It just saves space and doesn’t slow things down.
Why recordsize=1M? This is the big one. Media files are large and tend to be read sequentially. By increasing the recordsize from the default 128K to 1M, I’m telling ZFS to group more data together. This means:
- Less metadata overhead per file
- Better throughput when reading large sequential files
- Fewer random disk seeks on HDD, which is the enemy
1M is the right size for media workloads. It’s large enough to matter but not so large that it wastes space on small files.
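If you’re creating a media dataset from scratch, all of this (plus the baseline ACL properties from earlier) can be set at creation time so nothing depends on inheritance from the parent. Pool and dataset names here are examples:

```shell
# Create the media dataset with every property set up front
zfs create \
  -o acltype=nfsv4 -o xattr=sa \
  -o aclmode=passthrough -o aclinherit=passthrough \
  -o atime=off -o compression=lz4 -o recordsize=1M \
  pool/media

# Verify what actually took effect
zfs get acltype,xattr,recordsize,compression,atime pool/media
```

Setting properties at creation also means files written later never carry stale settings: recordsize, for instance, only applies to blocks written after it is set.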
💾 Time Machine Dataset
Time Machine behaves differently from regular media serving. It doesn’t read large sequential files. Instead, it creates a special sparsebundle container and writes lots of smaller blocks inside it. This means the ZFS settings need to be tuned completely differently.
```shell
zfs set atime=off pool/time_machine
zfs set compression=lz4 pool/time_machine
zfs set recordsize=128K pool/time_machine
zfs set quota=2T pool/time_machine
```
Why smaller recordsize? Remember how I said 1M recordsize was perfect for large sequential media files? The opposite is true for Time Machine. The sparsebundle contains many small blocks that are updated frequently. A 128K recordsize is the sweet spot — it’s smaller than media files need, but large enough to still capture groups of related blocks. This gives better performance for the random access patterns that Time Machine uses.
Why quota=2T? This is probably the most important setting here, and I learned it the hard way. Without a quota, Time Machine will happily back up an entire family of Macs to a single dataset and eventually fill your entire pool. I’ve seen this happen. By setting a hard limit of 2TB per machine (adjust based on your needs), I ensure that even if someone forgets to configure Time Machine’s backup limits, the system won’t fill the pool and crash. It’s a safety net.
The compression and atime settings are the same as media datasets — they apply to pretty much all datasets and help reduce wasted I/O and storage.
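Since the quota is a hard stop, it’s worth checking occasionally how close the backups are getting to it. A quick look, using this post’s example dataset name:

```shell
# Space used vs. the 2T cap on the Time Machine dataset
zfs list -o name,used,avail,quota pool/time_machine

# Snapshots count against the quota too, so watch them separately
zfs get usedbysnapshots pool/time_machine
```

When a Time Machine backup suddenly starts failing on a long-stable setup, a quota that quietly filled up is one of the first things to rule out.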
🧪 Testing the Setup
After all this configuration, I did a thorough test. Here’s what I verified:
- macOS mounts without warnings: No mysterious dialog boxes or “this share doesn’t support…” messages. Just clean mounts.
- Time Machine creates backups cleanly: The Mac recognizes the share as a valid Time Machine destination, creates the sparsebundle, and starts backing up without any funky errors.
- Windows 11 shows correct inherited ACLs: I created a directory with specific permissions, then created files inside and confirmed they inherited the ACL correctly. This is the test that convinces you the whole system is working properly.
- Linux clients respect permissions: Files created on Windows show up with the right permissions on Linux, and vice versa.
- Large file transfers saturate HDD bandwidth: I copied a 4K video file from a Linux client and watched it use the full bandwidth of the HDD pool. Good sign.
- No Samba warnings in logs: check `/var/log/samba/log.*` and make sure you’re not seeing errors about ACLs, extended attributes, or clients failing to negotiate.
One crucial debugging tip: if you see permission weirdness during testing, check ZFS first — not Samba. 99% of the time, permission problems are actually ZFS misconfiguration. Check that your datasets have the right acltype, xattr, and aclinherit settings before you start tweaking Samba.
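My first diagnostic is always the same two commands, run on the server. The dataset and path are this post's examples (note that `getfacl` output differs between FreeBSD-style NFSv4 ACLs and Linux POSIX ACLs):

```shell
# Confirm the dataset-level properties before touching smb.conf
zfs get acltype,xattr,aclmode,aclinherit pool/media

# Then inspect the actual ACL on the directory the client is hitting
getfacl /srv/media
```

If acltype, xattr, or aclinherit come back wrong here, no amount of smb.conf tweaking will fix the symptoms the clients are seeing.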
🚀 What I Removed (And Don’t Miss)
Removing things is almost as important as adding them. Here’s what came out of the old config:
- `allow insecure wide links = yes`: a security workaround from the old days. Modern clients don’t need it, and it opens up symlink traversal attacks. Good riddance.
- `ea support = no`: I had this set to `no` to “work around” some old issue I don’t even remember. Turns out, modern Samba, macOS, and Windows all expect extended attributes to work. Disabling them causes more problems than it solves.
- `create mode` and `force create mode` duplication: the old config had these settings duplicated in confusing ways. By simplifying to just `create mask` and `directory mask`, the file creation permissions are clear and predictable.
- `fruit:zero_file_id`: a workaround for an old macOS bug that was fixed ages ago. Keeping it around just adds confusion.
- Legacy NTLM compatibility switches: these were for ancient Windows machines that nobody uses anymore. Using `server min protocol = SMB2_10` is cleaner and more secure than carrying forward a dozen NTLM-related config options.
- Random `fruit:` settings copied from forum posts: I had options in there that I couldn’t even explain. When in doubt, remove it and see if things still work. Spoiler: they do.
🧠 Final Thoughts
This cleanup really wasn’t about squeezing out 5% more throughput or bragging about performance numbers. It was about alignment — a concept I think matters more than people realize.
Here’s the insight I had while working on this:
Modern Samba is designed for modern ACLs. ZFS with NFSv4 ACLs is what modern clients expect. macOS depends on proper extended attributes. Windows 11 assumes SMB3 with strong ACL support.
When you align all those layers — when each component trusts the layer below it to do its job correctly — the whole system just works. The configuration becomes almost invisible. You stop thinking about whether you have the right settings and start thinking about what you’re storing.
That’s when infrastructure stops being a drag on your time and becomes something that just quietly works.
In homelab setups especially (and I think this applies to any self-managed infrastructure), the temptation is always to tweak things endlessly. Add a performance knob here, work around an edge case there. But I’ve learned that the opposite approach is usually better: start minimal, understand every setting you have, and only add something when you have a concrete reason.
ZFS is already smart about what to do. Samba is mature and well-tested. The real skill isn’t in adding more options — it’s in trusting the defaults and removing the ones that shouldn’t be there.
This setup has been stable for months now. Backups run without intervention. File transfers are fast. Permissions make sense. And most importantly: I finally understand every line in every config file again. I can explain why each setting is there and what would happen if I changed it.
That clarity is worth a lot more than chasing that last percentage point of performance. It makes the system maintainable, and maintainability is what matters in the long run.
