Synology isn't about the NAS hardware and OS. Once set up, it doesn't really matter as long as your config is reliable and fast, so there are many competitive options to move to.
The killer feature for me is the app ecosystem. I have a very old 8-bay Synology NAS and have it set up, in just a few clicks, to back up my Dropbox, my MS365 accounts, and my Google business accounts, do redundant backups to an external drive, back up important folders to the cloud, and it was also doing automated torrent downloads of TV series.
These apps, and more (like family photos, video server, etc), make the NAS a true hub for everything data-related, not just for storing local files.
I can understand Synology going this way, it puts more money in their pocket, and as a customer in a professional environment, I'm ok paying a premium for their approved drives if it gives me an additional level of warranty and (perceived) safety.
But enforcing this across models used by home or SOHO users is dumb and will affect the goodwill of so many like me, who both used to buy Synology for home and were also recommending/purchasing the brand at work.
This is a tech product, don't destroy your tech fanbase.
I would rather Synology kept a list of drives to avoid based on user experience, and offered their Synology-specific drives with a generous warranty for pro environments. Hell, I would be ok with sharing stats about drive performance so they could build a useful database for all.
The way they reduce the performance of their system to penalise non-Synology rebranded drives is basically a slap in the face of their customers. Make it a setting and let the user choose to use the NAS they bought to its full capabilities.
On the other hand, they have also slowly destroyed their app ecosystem. The photo solution is much worse than it used to be, both in terms of features and the now-removed support for media codecs. Video Station has pretty much been dead for years.
At this point, I'm not that convinced that there's anything Synology offers that isn't handled much better by an app running on Docker. This wasn't true 10 years ago.
The Photos app is great because their photo backup app on Android is great, and it's the only thing that works as well as Google Photos to ensure all your photos and videos are saved, untouched, no duplicates, no missed media.
That's it. For the actual viewing / sorting / albums you need something like Immich or PhotoPrism; the Photos app itself actually sucks.
Video Station has been removed in the latest minor update, not even a major update; they just took it out, no warning, no replacement. But then again, it was not that good; Jellyfin is the way to go for me.
Their crown jewels are Active Backup, Hyper Backup, and Synology Office. That's where they own their space.
This is sad... I've been using Synology for a very long time (over 15 years?) and have been pretty happy with my experience. The one time I needed their tech support also left me with a good impression...
This however is a deal breaker for me as I'd hate to be locked in to their drives for all the reasons in TFA but also as a matter of principle.
I hope Synology will reconsider!
Yeah, same. I have had three Synology boxes over the last 20 or so years, and they have been super reliable, easy to use, and easy to update. The last one is important to me because I would, over time, add more disks and, when the drive bays were all full, replace smaller disks with larger ones.
The first one I bought is still in service at my parents' place, silently and reliably backing up their cloud files and laptops.
I was fully expecting to buy more in the future, but this is a dealbreaker. If a disk goes bad, I want to go to the local store, pick one up, and have the problem fixed half an hour later. I do not want to figure out where I can get approved disks, what sizes are available, how long it will take to ship them, etc.
I've recently installed Unraid on an old PC, and the experience has been surprisingly good. It's not as nice as a Synology, but it's not difficult, either. It's just a bit more work. I've also heard that HexOS plans to support heterogeneous disks, and I plan to check it out once that is available.
So that's the direction I'll be going in instead.
> I have had three Synology boxes over the last 20 or so years
Sounds like this is the problem with Synology... How are they going to make money when their products are so good!
Honestly, seems like they got roughly one hardware sale to him every 6 years or so.
Which is along the same trend line I'm seeing for my purchases.
That's pretty solid for hardware sales.
My guess is that they've over invested in things like their "drive" office software suite, and don't know how to monetize it or recoup costs.
I like Synology, but locking me to their drives is a hard "no thanks" from me.
Next NAS won't be from them if that's their play...
Every six years is enough for Apple and other companies who have other sources of revenue and have staked out this high-quality niche. But Android phones, as an example, have more of an average 3-year lifespan, if I'm not mistaken, which is closer to what Synology would probably want to achieve but cannot.
I don't think mobile is the right comparison. Those ecosystems are explicitly operating on the assumption that they will profit through the software ecosystem (app store revenue).
Synology seems to have gone entirely the other direction here. Most of their software is given away for free, but the hardware is being monetized.
Additionally - the hardware has different operating constraints. I think the big deal for Synology is that they probably assumed that storage need growth would equate to sales growth.
EX - Synology may have assumed that if I need to store 1TB in 2010, and 5TB in 2015, that would equate to me buying additional NAS hardware.
But often, HDD size increases mean that I can keep the same number of bays and just bump drive size.
Which... is great for me as a user, but bad for Synology (this almost single handedly explains this move, as an aside - I just think it's a bad play).
---
I'd rather they just charged for the software products they're blowing all their money on, or directly tie upgrades to the software products to upgrading hardware.
The comparison to phones is shaky here. Phones bring substantial performance and feature improvements over 6 years, HW and SW. Synology on the other hand still uses a 5-6 year old CPU and 1Gbps connectivity in their home "plus" line. The OS development is mostly security updates with substantial feature releases few and far between. I expect this from a NAS but it's not at all comparable to a phone.
Forcing their drives is a tax on top of an already existing tax. Synology already charges a premium for lower end specs than the competition. If that's not enough to compensate for the longer upgrade cycles, and they want a hand in every cookie jar it's just going to be a hard pass for me.
I upgraded my Synology box every few years and this is exactly the time I was looking to go to the next model. And I'd pull the trigger and buy a current model before they implement the policy but the problem is now I don't trust that they won't retroactively issue an update that cripples existing models somehow. QNAP or the many alternative HW manufacturers that support an arbitrary OS are starting to be that much more attractive.
What are "competitive options"? It's a genuine question. Before Synology, I had some DIY server in a Fractal Design case, and noise and, to be honest, bulk were a problem. Also, maintenance of the server wasn't fun.
I switched to Synology about six years ago (918+). The box is small, quiet, and easy to put in the rack together with the network gear. I started with 4TB drives, gradually switched to 8TB over time (drive by drive). I don't use much of their apps (mostly download station, backup, and their version of Docker to run Syncthing, plus Tailscale). But the box acts like an appliance - I basically don't need to maintain it at all; it just works.
I don't like all this stuff with vendor lock-in, so when the time comes for replacing the box, what are alternatives on par with the experience and quality I currently have with Synology?
The problem is that a lot of competitors don't necessarily have great software. For example, QNAP on the hardware side is supposed to be good, you get more bang for the buck in terms of performance, but they had several major CVEs that really call into question their security practices. I have a friend who is running Unraid on QNAP and is happy though.
The new Chinese NASes due to hit the market look extremely promising.
- Minisforum N5 Pro NAS
- AOOSTAR WTR MAX
Good compute power as they know users will be running Docker and other services on them, using the NAS as a mini server.
OS agnostic, allowing users to install TrueNAS, Unraid, or their favourite Linux distro of choice.
The Minisforum and AOOSTAR look to be adding all the features power users and enthusiasts are asking for.
If you just want a NAS as a NAS and nothing else, the new Ubiquiti NAS looks great value as well.
Unraid is brilliant if you're interested in BYO hardware. It can be set up with mix-and-match drives, and supports Docker and virtual machines. Realistically, it's a bit more work than Synology to get up and running, but once it is, the only thing you really need to do is update the software from time to time.
I don't mind the idea of BYO hardware, especially if it's an old server with hot-swap drives and hot-swap power built in.
Increasingly, with the limited time I have for the things that interest me, I just want storage and a bit of compute to be like a home appliance: reasonably set-and-forget, with my messing around kept to a USFF computer.
I've heard things about Unraid not being that performant due to the design of the disk array solution.
You can add a cache SSD to keep hot data and reduce access times, and why do you need that much throughput to begin with?
You can run ZFS without the Unraid disk array in Unraid these days.
Doesn't that get rid of one of the biggest benefits of Unraid, where you can mix and match drives, just like in a Synology hybrid RAID?
I think this is just the tradeoff you need to make. I’m not aware of a solution where you can mix-and-match drives but also get the write performance of a traditional RAID array.
That is true, but you can make one fast pool using ZFS and one slower one using Unraid's disk array, if you want to, or just use the ZFS part as a cache for performance.
I have an old Helios4 board. Too bad they don't make them anymore - it's tiny, has ECC, and was purpose built to be a NAS.
Marvell CN913x in QNAP TS435XeU NAS is the SoC successor to Armada A388 on Helios4. Still available, building on Linux support for Armada.
Kind of surprising, I went the other way. I started out with ReadyNAS 15 years ago and after that product faded due to lack of support I no longer wanted to be tied down to a manufacturer. I built a custom solution using a U-Nas chassis. Found FreeNAS back in the day and have stuck with it ever since. Maintenance is fairly minimal.
If you heavily rely on apps/services: I've just gone to self-managed Docker environments for things like that. A very simple script runs updates.
I have WD MyCloud NAS. It has Transmission to pirate movies and Twonky DLNA server to send them to my TVs. Not much, but honest work.
Some Intel N100/N105 board from Aliexpress with Fedora or Debian on top should be fine & much more flexible if you decided you want more than just a file server.
Or throw on TrueNAS or UnRAID if you want a GUI
Apart from the form factor, my custom-built machine with Unraid pretty much works like what you describe. Coming up on two years of use without major issues.
I over-purchased a NAS and ended up with QNAP, even though Synology provided a better power (lower electricity use) to performance ratio.
In hindsight, buying a QNAP that cost more than the Synology equivalent felt like a good idea, but I didn't really get into it quickly enough.
I also got burned by Western Digital's scandal of selling WD Red drives that really weren't NAS-grade, which got them caught in a class action lawsuit. Can't see myself buying them again.
Anecdotally, I quickly gave up on their value-add apps, they didn't seem well thought out and had many missing features. My impression was that they were mostly there to tick all the boxes for their marketing material. It's been a few years since I looked at them so I can't give specific examples unfortunately.
Yes, it is the overall ease of configuration, operation - but also for me the app ecosystem.
Well, my Synology NAS is from... 2013 (have upgraded the drives 3 times), so... it is/was time to replace it, and I can tell you that it won't be with another Synology device...
I won't go back to QNAP, which is what I had before Synology, because during an OS update it wiped all my data (yes, there was a warning, but the whole purpose of having a RAID NAS is safe, reliable data storage).
I may check out a custom hardware build, combined with Xpenology.
Important story to note - it's not a backup if you don't have more than one copy of it (beyond multiple copies on one NAS).
Fwiw I don't use a single one of their apps. I bought it for their hybrid raid feature.
Same here.
At one time, Drobo was the only manufacturer that did that, but I have had very bad luck with Drobos.
I’ve been running a couple of Synology DS cages for over five years, with no issues.
I’m running two Synology NAS devices, and I wouldn’t consider their app ecosystem to be their strong point. I started by trying to take advantage of the built-in Synology apps when I first got my NAS, but quickly realized how limited they are. Their bi-directional synchronization solution is so slow and archaic compared to Syncthing! And the same is true for most of their software offerings. At this point, I’m happy with having Docker support, and don’t particularly care about the rest of their apps.
I still appreciate how easy and maintenance-free their implementation of the core NAS functionality was. I do have a Linux desktop for experiments and playing around with, but I prefer to have all of my actually important data on a separate rock-solid device. Previously, Synology fulfilled this role and was worth paying for, but if this policy goes live, I wouldn’t consider them for my next NAS.
I would count supported third-party apps like Syncthing as part of the app ecosystem. You can add the SynoCommunity repository to your Synology and install Syncthing directly, which is pretty nice.
It's a bit more convenient than how other solutions, like Unraid, handle this, where you manually configure a Docker container.
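For comparison, the manual Docker route on something like Unraid boils down to a single container. A minimal sketch in Python (image name, ports, and config path follow Syncthing's official Docker image defaults and Unraid's usual appdata convention; adjust to your setup):

    # Run Syncthing as a container; roughly what Unraid's template UI
    # generates for you behind the scenes.
    import subprocess

    subprocess.run([
        "docker", "run", "-d",
        "--name", "syncthing",
        "--restart", "unless-stopped",
        "-p", "8384:8384",    # web UI
        "-p", "22000:22000",  # sync protocol
        "-v", "/mnt/user/appdata/syncthing:/var/syncthing",  # config + data
        "syncthing/syncthing",
    ], check=True)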
That’s true, but it’s only relevant for the initial setup. I wouldn’t think twice about giving up something so minor compared with the sheer anticompetitive nature of Synology locking down the devices.
Agree. Have a few Synologies and the apps are crapware.
Yes, in the end, no matter how polished your apps are, a NAS is a tech product sold to tech people. Tech people want to choose their hard drive.
Synology did a good job of being relatively turnkey.
QNAP has more configurability for better and worse.
Curious to hear what other manufacturers can compare to them out of the box.
Self-configuring something is a different thing.
I simply do not care any more to rebuild raids and manually swap drives under duress when something is going down. I just replace existing drives with new ones well before they die after they've hit enough years. Backblaze's report is incredibly valuable.
How much of a market is there really for those apps? They are competing against most consumers accepting the ease (and significantly lower cost) of cloud-based storage.
We (in the tech space) can scream privacy and risks of the cloud all day long but most consumers seem to just not care.
I have 2 Synology NASes, and the only app I actually use is Synology Drive, thanks to the sync app, but there are open-source alternatives that would work better and not require a client on the NAS side.
I can't imagine any enterprise would be using these features.
Been in the market for a new NAS myself, and I am going to be looking into TrueNAS or keeping an eye on what Ubiquiti is doing in this space (but it's a no-go until they add the ability to communicate with a UPS).
Without the apps they have even less market though.
While true, that assumes that the engineering effort is worth whatever extra market they are getting from it.
I just can't imagine there are that many people who would bother with a "private cloud" that don't already have a use case for a NAS at home for general data storage.
Maybe this will backfire like the Keurig DRM coffee pods?
https://www.theverge.com/2015/2/5/7986327/keurigs-attempt-to...
As far as lists of drives to avoid, Synology could certainly do that, but we also already have Backblaze’s reports on their own failure rates. Synology also uses multiple vendors to produce “Synology” branded drives, so as the article states this may also lead to confusion about which Synology branded drives are “good” vs. “bad” in the future, even with seemingly identical specs.
The idea is not so much about which drives fail or whatever. It’s more that certain consumer drives have firmware that doesn’t work well with NAS workloads. A desktop drive’s long error-recovery timeouts could be treated as a failed drive rather than a transient error, for example.
I’d argue that anyone who is buying a NAS for personal use probably does enough research to figure out that NAS-focused/appropriate drives are a thing, though. And if they contact Synology support, it should be very easy for them to identify bad drive types. On top of that, they can (and have) warned about problematic drives.
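A quick way to check this yourself: NAS-oriented drives support SCT Error Recovery Control (TLER/ERC), which caps how long the firmware spends retrying a bad sector. A sketch using smartctl (the device path is an example):

    # Query SCT Error Recovery Control; desktop drives typically report it
    # as unsupported or disabled, which is what leads to the long timeouts
    # a RAID layer may interpret as a dead drive.
    import subprocess

    result = subprocess.run(
        ["smartctl", "-l", "scterc", "/dev/sda"],
        capture_output=True, text=True,
    )
    print(result.stdout)
    # On supporting drives you can cap recovery at 7 seconds with:
    #   smartctl -l scterc,70,70 /dev/sda   (values in tenths of a second)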
>kept a list of drives to avoid based on user experience
Well, that sounds like a great way to get sued.
On what grounds exactly? You tested something, it turned out to perform below average, so you say you don't recommend buying it. Where's the crime?
Seems to have worked well enough for Backblaze for years and years now. Another major vendor publicly announcing that make X model Y has shitty reliability is as much pressure on the storage duopoly as we're likely to get.
There's no reason to be scared to share your experience with hardware.
Just do it in reverse: a list of drives that they have tested and can confirm work well; at the end of the list they just mention that they cannot recommend any other.
You would need to account for every drive firmware revision.
This is in fact standard practice for many software vendors.
How would it be a great way to get sued?
Backblaze publishes a great report.
https://www.backblaze.com/blog/backblaze-drive-stats-for-202...
QNAP seems to have a similar app ecosystem, or is there a quality difference? I have only used QNAP NAS devices, so I don't know.
QNAP's ecosystem is decent. There is a third party store by a former QNAP employee that has a lot more selection in it.
Getting a lower-powered Intel Celeron QNAP NAS basically lets you run anything you want software- or app-wise, including Docker that just works, instead of hunting for ARM64 binaries for anything that is not available off the shelf.
TrueNAS can do all that stuff for you.
No it can't. Let's be honest: Synology's OS covers more than just storage, and no, spinning up a lot of third-party Docker containers that you need to maintain, secure, and manage isn't as easy.
What can't TrueNAS do that was listed in the parent comment?
I'd rather have the flexibility offered by TrueNAS, in addition to the robust community. Yes, Synology hardware is convenient in some use cases, but you can generally build yourself a more powerful and versatile home server with TrueNAS Scale. There is a learning curve, so it is not for everyone.
And for the learning curve folks there’s HexOS
Or OmniOS with Napp-It
Other manufacturers like Qnap also have this app ecosystem.
It doesn’t address the mandatory nature of the drives, when at most Dell and HP have put their own part numbers on drives.
The issue is QNAP has terrible quality/stability at the OS level compared to Synology (also with Apps).
The number of times I’ve broken things on QNAP systems doing what should be normal functionality, only to find out it’s because of some dumb implementation detail is over a dozen. Synology, maybe 1-2.
Roughly the same number of systems/time in use too.
> QNAP has terrible quality/stability at the OS level
Some QNAP devices can be coaxed into running Debian.
Mind that these are ancient models that are dog slow for anything more than serving files. Not that they are fast in serving files...
I did the procedure on my (now 15yo) TS-410, mostly because the vendored Samba is not compatible with Windows 11 (I had turned off all secondary services years ago). It took a few days to back up around 8TB of data to external drives. And AROUND 2 WEEKS to restore them (USB2 CPU overhead + RAID5 writes == SLOOOOOW).
Even to get the time down to 2 weeks, I really had to experiment with different modes of copying. My final setup was HDD <-USB3-> RPi4 <-GbE-> TS-410. This relieved the TS-410 CPU from the overhead of running the USB stack. I also had to use the rsync daemon on the TS-410 to avoid the overhead of running rsync over SSH.
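For anyone wanting to replicate this, the rsync-daemon trick looks roughly like the sketch below (module name, paths, and hostname are examples). The point is that the client speaks the rsync protocol directly over TCP port 873, so the NAS CPU never pays for ssh encryption:

    # NAS side: a minimal rsyncd.conf, served with `rsync --daemon`:
    #
    #   [backup]
    #       path = /share/MD0_DATA/backup
    #       read only = no
    #
    # Client side: push the restore over the plain rsync protocol.
    import subprocess

    subprocess.run(
        ["rsync", "-a", "--progress", "/mnt/restore/", "rsync://ts-410/backup/"],
        check=True,
    )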
So, it's definitely not for the faint of heart, but if you go through the trouble, you can keep the box alive as off-site backup for a few more years.
Having said that, I have to commend QNAP for providing security updates for all this time. The latest firmware update for the TS-410 is dated 2024-07-01 [1]. This is really going above and beyond supporting your product when it comes to consumer-level devices.
[1] https://www.qnap.com/en/download?model=ts-410&category=firmw...
Wouldn't it be cheaper to just build any NAS and chuck Debian on it if you didn't care about the OS and vendor software to begin with?
e.g. QNAP has the rare hardware combo of a half-depth 1U low-power Arm NAS w/ mainline Linux support, 32GB ECC RAM, dual NVMe, 4x hot-swap SATA, 2x 10G SFP, 2x 2.5G copper, and hardware support for ZFS encryption, https://news.ycombinator.com/item?id=40868855.
In theory, one could fit an Arm RK3588 SBC with NVMe-to-PCIe-to-HBA or NVMe-to-SATA into a half-depth JBOD case. That would give up the 2x 10G SFP, 2x NVMe, and ECC RAM.
Maybe it's just me, but rare hardware isn't something I'd look for in a reliable storage system unless I had a really special need that general hardware just couldn't be made to serve.
Per sibling comment, "unique" is a better descriptor than "rare". The NAS is made in Taiwan and has been readily available from Amazon or QNAP store.
The Marvell CN913x SoC has been shipping for 5 years, following the predecessor Armada SoC family released 10 years ago and used in multiple consumer NAS products, https://linuxgizmos.com/marvell-lifts-curtain-on-popular-nas.... Mainline Linux support for this SoC has benefited from years of contributions, while Marvell made incremental hardware improvements without losing previous Linux support.
This is spot on. I'd like to add that unique hardware often means not forcing people to buy a few times to get it right, especially first-time buyers.
"Rare" in this case is referring to a unique offering, not to the availability of that particular part.
As I understand, migrating to other hardware wouldn't be an issue if availability becomes an issue.
"Rare" here means more a unique combination of common hardware products, where other manufacturers don't put all of the features into one piece of hardware like QNAP or others might, to keep people buying more devices to get what they want, or buying a device that is way too overkill for their needs.
Storage should be an appliance, or you're the appliance repair man always on call.
I ended up doing that with a larger QNAP I had. It did have some odd bugs that I needed to track down, but otherwise it was a good (albeit overly expensive) NAS. I used ZFS.
Sure, but don't you lose the app ecosystem then?
Hacker flexibility or consumer take-it-or-leave-it, pick one.
Debian offers flexibility and control, at the cost of time and effort. PhotoSync mobile apps will reliably sync mobile devices with NAS over standard protocols, including SSH/SFTP. A few mobile apps do work with self-hosted WebDAV and CalDAV. XPenology attempts to support Synology apps on standard Linux, without excluding standard Debian packages.
Debian's software repo is about 500 times bigger than Synology's.
FWIW, I haven't had any real issues with QNAP in 10 years or so, but I'm pretty much only using basic features.
Once I added a 4th drive to a RAID 5 set, and I was impressed that it performed the operation online. Neat.
Oh, there was one issue: a while ago my Time Machine backups were unreliable, but I haven't had that issue in three years or so.
It's news to me, maybe I haven't touched mine much out of leaving it absolutely stock.
Were you installing things manually or just using the app store?
Synology became so bad, they measure disk space in percent, and thresholds cannot be configured lower than 5%. This may have been okay when volume sizes were in gigabytes, but now, with multi-TB drives, 5% is a lot of space. The result is a NAS in a permanent alarm state because less than 5% of space is free. And this makes it less likely for the user to notice when an actual alarm happens, because they are desensitised to warnings. I submitted this to them at least four times, and they reply that this is fine, it's already decided to be like that, so they will not change it.

Another stupid thing is that notifications about low disk space are sent to you via email and push until about 30 GB is free. Then free space goes below 30 GB and reaches zero, yet notifications are not sent anymore. My multiple reports about this issue were always answered along the lines of "it's already done like that, so we will not change it".
Most modern companies, especially software companies, choose not to fix relatively small but critical problems, yet they actively employ sometimes hundreds of customer-support yes-people whose job seems to be defusing customer complaints. Nothing is ever fixed anymore.
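For what it's worth, the fix being asked for is tiny. A sketch of threshold logic with both a relative and an absolute floor (all numbers here are made-up examples, not Synology's actual behaviour):

    def low_space_alert(free_bytes: int, total_bytes: int,
                        pct_floor: float = 0.05,
                        abs_floor: int = 100 * 10**9) -> bool:
        """Alarm only when free space is low by BOTH measures."""
        return free_bytes < min(pct_floor * total_bytes, abs_floor)

    # 5 TB volume with 200 GB free: only 4%, but plenty of room -> no alarm.
    print(low_space_alert(200 * 10**9, 5 * 10**12))   # False
    # 500 GB volume with 10 GB free: 2% and genuinely nearly full -> alarm.
    print(low_space_alert(10 * 10**9, 500 * 10**9))   # True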
I think preventing alarm fatigue is a very good reason to fix issues.
But 5% free is very low. You may want to use every single byte you feel you paid for, but allocation algorithms really break down when free space gets so low. Remember that there's not just a solid chunk of that 5% sitting around at the end of the space. That's added up over all the holes across the volume. At 20-25% free, you should already be looking at whether to get more disks and/or deciding what stuff you don't actually need to store on this volume. So a hard alarm at 5% is not unreasonable, though there should also be a way to set a soft alarm before then.
5% of my 500 GB is 25 GB, which is already a lot of space but understandable. Not many things would fit in there nowadays.
But 5% of a 5 TB volume is 250 GB, that's the size of my whole system disk! Probably not so understandable by the lay person.
This is partly why SSDs just lie nowadays and tell you they only have 75-90% of the capacity that is actually built into them. You can't directly access that excess capacity but the drive controller can when it needs to (primarily to extend the life of the drive).
Some filesystems do stake out a reservation but I don't think any claim one as large as 5% (not counting the effect of fixed-size reservations on very small volumes). Maybe they ought to, as a way of managing expectations better.
For people who used computers when the disks were a lot smaller, or who primarily deal in files much much smaller than the volumes they're stored on, the absolute size of a percentage reservation can seem quite large. And, in certain cases, for certain workloads, the absolute size may actually be more important than the relative size.
But most file systems are designed for general use and, across a variety of different workloads, spare capacity and the impact of (not) keeping it open is more about relative than absolute sizes. Besides fragmentation, there's also bookkeeping issues, like adding one more file to a directory cascading into a complete rearrangement of the internal data structures.
> spare capacity and the impact of (not) keeping it open is more about relative than absolute sizes
I don't think this is correct. At least btrfs works with slabs in the 1 GB range IIRC.
One of my current filesystems is upwards of 20 TB. Reserving 5% of that would mean reserving 1 TB. I'll likely double it in the near future, at which point it would mean reserving 2 TB. At least for my use case, those numbers are completely absurd.
We're not talking about optical discs or backup tapes which usually get written in full in a single session. Hard drive storage in general use is constantly changing.
As such, fragmentation is always there; absolute disk sizes don't change the propensity for typical workloads to produce fragmentation. A modern file system is not merely a bucket of files, it is a database that manages directories, metadata, files, and free space. If you mix small and large directories, small and large files, creation and deletion of files, appending to or truncating from existing files, etc., you will get fragmentation. When you get close to full, everything gets slower. Files written early in the volume's life and which haven't been altered may remain fast to access, but creating new files will be slower, and reading those files afterward will be slower too. Large directories follow the same rules as larger files, they can easily get fragmented (or, if they must be kept compact, then there will be time spent on defragmentation). If your free space is spread across the volume in small chunks, and at 95% full it almost certainly will be, then the fact that the sum of it is 1 TB confers no benefit by dint of absolute size.
Even if you had SSDs accessed with NVMe, fragmentation would still be an issue, since the file system must still store lists or trees of all the fragments, and accessing those data structures still takes more time as they grow. But most NAS setups are still using conventional spinning-platter hard drives, where the effects of fragmentation are massively amplified. A 7200 RPM drive takes 8.33 ms to complete one rotation. No improvements in technology have any effect on this number (though there used to be faster-spinning drives on the market). The denser storage of modern drives improves throughput when reading sequential data, but not random seek times. Fragmentation increases the frequency of random seeks relative to sequential access. Capacity issues tend to manifest as performance cliffs, whereby operations which used to take e.g. 5 ms suddenly take 500 or 5000. Everything can seem fine one day and then not the next, or fine on some operations but terrible on others.
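To put rough numbers on that cliff, here is a back-of-envelope sketch using the 8.33 ms rotation figure above (the seek time and sequential throughput are assumed, illustrative values):

    rotation_ms = 60_000 / 7200           # one full rotation at 7200 RPM
    avg_rot_latency_ms = rotation_ms / 2  # ~4.17 ms on average
    seek_ms = 8.0                         # assumed average seek
    seq_mb_s = 200.0                      # assumed sequential throughput

    def read_ms(size_mb: float, fragments: int) -> float:
        """Estimated time to read a file split into `fragments` pieces."""
        per_fragment = seek_ms + avg_rot_latency_ms
        return fragments * per_fragment + (size_mb / seq_mb_s) * 1000

    for frags in (1, 10, 100, 1000):
        print(f"100 MB file in {frags:4d} fragments: {read_ms(100, frags):8.1f} ms")
    # 1 fragment: ~512 ms. 1000 fragments: ~12.7 s. Same data, ~25x slower.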
Of course, you should be free to (ab)use the things you own as much as you wish. But make no mistake, 5% free is deep into abuse territory.
Also, as a bit of an aside, a 20 TB volume split into 1 GB slabs means there are 20,000 slabs. That's about the same as the number of 512-byte sectors in a 10 MB hard drive, which was the size of the first commercially available consumer hard drives for the IBM PC in the early 1980s. That's just a coincidence of course, but I find it funny that the numbers are so close.
Now, I assume the slabs are allocated from the start of the volume forward, which means external slab fragmentation is nonexistent (unless slabs can also be freed). But unless you plan to create no more than 20,000 files, each exactly 1 GB in size, in the root directory only, and never change anything on the volume ever again, then internal slab fragmentation will occur all the same.
Yes thank you I am aware of what fragmentation is.
There are two sorts of fragmentation that can occur with btrfs. Free space and file data. File data is significantly more difficult to deal with but it "only" degrades read performance. It's honestly a pretty big weakness of btrfs. You can't realistically defragment file data if you have a lot of deduplication going on because (at least last I checked) the tooling breaks the deduplication.
> If your free space is spread across the volume in small chunks, and at 95% full it almost certainly will be
Only if you failed to perform basic maintenance. Free space fragmentation is a non-issue as long as you run the relevant tooling when necessary. Chunks get compacted when you rebalance.
Where it gets dicey is that the btrfs tooling is pretty bad at handling the situation where you have a small absolute number of chunks available. Even if you theoretically have enough chunks to play musical chairs and perform a rebalance the tooling will happily back itself into a corner through a series of utterly idiotic decisions. I've been bitten by this before but in my experience it doesn't happen until you're somewhere under 100 GB of remaining space regardless of the total filesystem size.
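Concretely, the compaction is `btrfs balance` with usage filters, so only mostly-empty chunks get rewritten. A sketch of the usual maintenance pattern (the mount point and thresholds are examples):

    import subprocess

    # Rewrite data chunks that are at most N percent full, cheapest first;
    # each pass returns their free space to the unallocated pool.
    for usage in (10, 25, 50):
        subprocess.run(
            ["btrfs", "balance", "start", f"-dusage={usage}", "/mnt/pool"],
            check=True,
        )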
If compaction (= defragmentation) runs continuously or near-continuously, it results in write amplification of 2x or more. For a home/small-office NAS (the topic at hand) that's also lightly used with a read-heavy workload, it should be fine to rely on compaction to keep things running smoothly, since you won't need it to run that often and you have cycles and IOPS to spare.
If, under those conditions, 100 GB has proven to be enough for a lot of users, then it might make sense to add more flexible alarms. However, this workload is not universal, and setting such a low limit (0.5% of 20 TB) in general will not reflect the diverse demands that different people put on their storage.
Also Synology use btrfs, a copy-on-write filesystem - that means there are operations that you might not expect that require allocation of new blocks - like any write, even if overwriting an existing file's data.
And "unexpected" failure paths like that are often poorly tested in apps.
No matter how many TB of online HD storage I have, hard disks are just a temporary buffer for my tape drives.
At home I have a 48x LTO-5 changer with 4 drives (which I picked up for a song a while back! I actually don't need it, but heck, it has a ROBOT ARM), and at work I'm currently provisioning a 96-drive LTO-9 dual-rack. With 640 tapes available :-)
I'm a STRONG believer in tapes!
Even LTO 5 gives you a very cheap 1.5TB of clean, pretty much bulletproof storage. You can pick up a drive (with a SAS HBA card) for less than $200, there are zero driver issues (SCSI, baby); the Linux tape changer code has been stable since 1997 (with a port to VMS!).
Tape FTW :-)
I don't have one but I'd definitely take a tape changer if it weren't too expensive. It would be amazing to have 72TB of storage just waiting to be filled, without needing to go out into my garage to load a tape up.
LTO tapes have really changed my life, or at least my mental health. Easy and robust backup has been elusive. DVD-R was just not doing it for me. Hard drives are too expensive and lacked robustness. My wife is a pro photographer so the never-ending data dumps had filled up all our hard drives, and spending hundreds of dollars more on another 2-disk mirror RAID, and then another, and another was just stupid. Most of the data will only need to be accessed rarely, but we still want to keep it. I lost sleep over the mountains of data we were hoarding on hard drives. I've had too many hard drives just die, including RAIDs being corrupted. LTO tape changed all of that. It's relatively cheap, and pretty easy and fast compared to all the other solutions. It's no wonder it's still being used in data centers. I love all the data center hand-me-downs that flood eBay.
And I do love hearing the tapes whir, it makes me smile.
This is an area that i'm quickly growing into, what are you curently using and what should I stay away from ?
I got a used internal LTO5 tape drive on eBay for about $150, and then an HBA card to connect it to for about $25 or $30. I bought some LTO5 tapes, and typically I pay about $3.50/TB on eBay for new/used tapes. Many sellers charge far more for tapes, but occasionally I find a good deal. Most tapes are not used very much and have lots of life left in them (they have a chip inside the tape that tracks usage).
Then I scored another 3 used LTO5 tape drives on eBay for about $100; they all worked. I mainly use 1 tape drive. I have it running on an Intel i5 system with an 8-drive RAID10 array (cheap used drives, with a $50 9260-8i hardware RAID card), which acts as my "offsite" backup out in my detached garage - it's off most of the time (cold storage?) unless I'm running a backup. I can lose up to 2 drives without losing any data, and it's been running really well for years. I have 3 of these RAID setups in 3 different systems; they work great with the cheapest used drives from Amazon. I'm not looking for high performance, I just need redundancy. I've had to replace maybe 3 drives across all 3 systems due to failure over the last 7 years.
On Windows the tape drive with LTFS was not working well, I think due to Windows Defender trying to test the files as it was writing them, causing a lot of "shoeshining" of the tape, but I think Windows Defender can be disabled. But I bought tape backup software from https://www.iperiusbackup.com - it just works and makes backups simple to set up and run. I always verify the backup. If something is really important I'll back up to at least 2 tapes. Some really important stuff I will generate parity files for (with WinPar) and put those on tape too. Non-encrypted, the drive runs at the full 140MB/s, but with encryption it runs at about 60MB/s, because I guess the tape drive is doing the encryption.
I love it, it has changed my data-hoarding life. At $3.50/TB and 140MB/s and 1.5TB per tape, it can't be beat by DVD-R or hard drives for backup. Used LTO5 is really in a sweet spot right now on eBay, but LTO6 is looking good too recently (2.5TB/tape). LTO6 drives can read LTO5 tapes, so there's a pretty easy upgrade path. I also love that there is a physical write-protect switch on the tapes, which hard drives don't have. If you plug in a hard drive to an infected system, that hard drive could easily be compromised if you don't know your system is infected.
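Worked through, the numbers above really are hard to beat (simple arithmetic on the prices quoted; the 20 TB archive is a hypothetical for illustration):

    tape_capacity_tb = 1.5    # LTO-5, uncompressed
    cost_per_tb = 3.50        # used tapes, as quoted
    hardware_cost = 150 + 30  # used drive + HBA, as quoted
    write_mb_s = 140          # native LTO-5 throughput

    archive_tb = 20
    tapes = -(-archive_tb // tape_capacity_tb)   # ceiling division
    media_cost = archive_tb * cost_per_tb
    hours = archive_tb * 1e6 / write_mb_s / 3600

    print(f"{archive_tb} TB: {tapes:.0f} tapes, ${media_cost:.0f} media "
          f"+ ${hardware_cost} hardware, ~{hours:.0f} h of writing")
    # -> 14 tapes, $70 of media, ~40 hours: far below the cost of
    #    mirrored hard drives of the same capacity.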
100%. Those disks are likely working much harder, moving the head all over the place to find those empty spaces when they write.
....do you think the drive doesn't know where the empty space actually is?
Drives blindly store and retrieve blocks wherever you tell them, with no awareness of how or if they relate to one another. It's a filesystem's job to keep track of what's where. Filesystems get fragmented over time, and especially as they get full. The more full they get, the more seeking and shuffling they have to do to find a place to write stuff. This will be the case even after the last spinning drive rusts out, as even flash eventually has to contend with fragmentation. Heck, even RAM has to deal with fragmentation. See the discussion from the last few weeks about the ongoing work to figure out a contiguous memory allocator in Linux. It's one of the great unsolved problems in general computing; you and your descendants would be set for life if you could solve it.
Not quite, AFAIK? Drive controllers may internally remap blocks to physical disk blocks (e.g. when a bad sector is detected; see the SMART attribute Reallocated Sector Count).
Logical Block Addressing (LBA) by its very nature provides no hard guarantees about where the blocks are located. However, the convention that both sides (file systems and drive controllers) recognize is that runs of consecutive LBAs generally refer to physically contiguous regions of the underlying storage (and this is true for both conventional spinning-platter HDDs as well as most flash-based SSDs). The protocols that bridge the two sides (like ATA, SCSI, and NVMe) use LBA runs as the basic unit of accessing storage.
So while block remapping can occur, and the physical storage has limits on its contiguity (you'll eventually reach the end of a track on a platter or an erasable page in a flash chip), the optimal way to use the storage is to put related things together in a run of consecutive LBAs as much as possible.
Sure, but bad block tracking and error correction are pretty different from the implied file/volume awareness I was responding to.
Yes, to be clear, the drive controller generally (*) has no concept of volumes or files, and presents itself to the rest of the computer as a flat, linear collection of fixed-size logical blocks. Any additional structure comes from software running outside the drive, which the drive isn't aware of. The conventional bias that adjacent logical blocks are probably also adjacent physical blocks merely allows the abstraction to be maintained while also giving the file system some ability to encourage locality of related data.
* = There are some exceptions to this, e.g. some older flash controllers were made that could "speak" FAT16/32 and actually know if blocks were free or not. This particular use was supplanted by TRIM support.
I think you'll find that the word "find" doesn't mean "has to search", like one can find their nose in the middle of their face, if one desires.
Change the word to "seek" and it may make more sense.
It makes more sense but it's not true for the modern CoW filesystems that I'm familiar with. Those allocate free space in slabs that they write to sequentially.
Also, CoW isn't some kind of magic. There are two meanings I can think of here:
A) When you modify a file, everything including the parts you didn't change is copied to a new location. I don't think this is how btrfs works.
B) Allocated storage is never overwritten, but modifying parts of a file won't copy the unchanged parts. A file's content is composed of a sequence (list or tree) of extents (contiguous, variable-length runs of 1 or more blocks) and if you change part of the file, you first create a new disconnected extent somewhere and write to that. Then, when you're done writing, the file's existing extent limits are resized so that the portion you changed is carved out, and finally the sequence of extents is set to {old part before your change}, {your change}, {old part after your change}. This leaves behind an orphaned extent, containing the old content of the part you changed, which is now free. From what evidence I can quickly gather, this is how btrfs works.
Compared to an ordinary file system, where changes that don't increase the size of a file are written directly to the original blocks, it should be fairly obvious that strategy (B) results in more fragmentation, since both appending to and simply modifying a file causes a new allocation, and the latter leaves a new hole behind.
While strategy (A) with contiguous allocation could eliminate internal (file) fragmentation, it would also be much more sensitive to external (free space) fragmentation, requiring lots of spare capacity and/or frequent defrag.
Either way, the use of CoW means you need more spare capacity, not less. It's designed to allow more work to be done in parallel, as fits modern hardware and software better, under the assumption that there's also ample amounts of extra space to work with. Denying it that extra space is going to make it suffer worse than a non-CoW file system would.
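You can watch strategy (B) happen on a real CoW filesystem: write a file sequentially, overwrite a chunk in the middle, and count extents with filefrag. A sketch (the path is an example and must sit on btrfs or similar):

    import os, subprocess

    path = "/mnt/btrfs/demo.bin"

    def extents(p: str) -> str:
        # filefrag prints "<path>: N extents found"
        return subprocess.run(["filefrag", p],
                              capture_output=True, text=True).stdout.strip()

    with open(path, "wb") as f:           # 64 MiB written sequentially
        f.write(b"\0" * 64 * 1024 * 1024)
        f.flush(); os.fsync(f.fileno())
    print(extents(path))

    with open(path, "r+b") as f:          # overwrite 1 MiB in the middle
        f.seek(32 * 1024 * 1024)
        f.write(b"\xff" * 1024 * 1024)
        f.flush(); os.fsync(f.fileno())
    print(extents(path))                  # extent count grows: the change
                                          # was written elsewhere (CoW)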
Which is exactly why you periodically do maintenance to compact the free space. Thus it isn't an issue in practice unless you have a very specific workload in which case you should probably be using a specialized solution. (Although I've read that apparently you can even get a workload like postgres working reasonably well on zfs which surprises me.)
If things get to the point where there's over 1 TB of fragmented free space on a filesystem that is entirely the fault of the operator.
What argument are you driving at here? The smaller the free space, the harder it is to run compaction. The larger the free space, the easier it is. There are some confounding forces in certain workloads, but the general principle stands.
"Your free space shouldn't be very fragmented when you have such large amounts free!" is exactly why you should keep large amounts free.
If you delete files, or append to existing files, then the promises of the initial allocation strategy go out the window.
Not defending them in any way, but I know with my Infrant (then Netgear, unfortunately, who killed the products last year) ReadyNASes, which also used mdadm to configure BTRFS with RAID5 in a similar way to Synology and QNAP, the recommendation was that you don't want your BTRFS filesystem to run low on space, because then it runs out of metadata space, and if it does, it becomes read-only and can become unstable.
Basically, the recommendation was to always have 5% free space, so this isn't just Synology saying this.
The recommendation of 5% free space comes from arrays in sizes of 100s of GB, not tens of TB.
This is the same kind of issue that Linux root filesystems had - a % based limitation made sense when disks were small, but now they don't make a lot of sense anymore when they restrict usage of hundreds of GB (which are not actually needed by the filesystem to operate).
Actually, reading the BTRFS docs, they recommend keeping 5-10% free space:
https://archive.kernel.org/oldwiki/btrfs.wiki.kernel.org/ind...
Yup. After having dealt with that, I err on the side of caution and don't even let it merge small files into inodes for space savings. I still love btrfs for the CoW, snapshots, and compression, but you really gotta give that metadata a wide berth.
ZFS has a similar limitation.
It tries to mitigate this by reserving some space for metadata, to be used in emergencies, but apparently it's possible to exhaust it and get your filesystem into a read-only state.
There was some talk about increasing the reservations to prevent this but can't recall if changes were made.
There is one argument for Synology doing this: there have been cases where hard drive companies misled their customers. I personally fell victim to this when Western Digital started selling SMR drives as WD Red, without labelling them as SMR drives.
So lots of customers thought they were buying a drive that's perfect for NAS, only to discover that the drives were completely unsuitable and took days to restore, or failed altogether. Synology had to release updates to their software to deal with the fake NAS drives, and their support was probably not happy to deal with all the angry customers who thought the problem was with Synology, and not Western Digital for selling fake NAS drives.
If you buy a drive from Synology, you know it will work, and won't secretly be a cheaper drive that's sold as NAS compatible even though it is absolutely unsuitable for NAS.
That's a great argument for selling drives, not for locking your devices down to practically require them
> That's a great argument for selling drives, not for locking your devices down to practically require them
The counterargument is, people won’t listen and then blame Synology when their data is affected. At which point it may be too late for anything but attempting data recovery.
Sufficiently nontechnical users may blame the visible product (the NAS) even if the issue is some nuance to the parts choice made by a tech friend to keep it within their budget.
Synology is seen as the premium choice in the consumer NAS argument, so vertically integrating and charging a premium to guarantee “it just works” is not unprecedented.
There are definitely other NAS options as well, if someone is willing to take on more responsibility for understanding the tech.
I don’t think anyone would care if Synology gave priority to their own drives. A checkbox during setup that says “Yes, I know I’m using these drives that have not been validated blah blah blah” would be plenty. That’s not what Synology did however and that’s the main reason everyone is pissed.
Funny you mention that…
I have a DS1515+ which has an SSD cache function that uses a whitelisted set of known good drives that function well.
If you plug in a non whitelisted ssd and try to use it as a cache, it pops up a stern warning about potential data loss due to unsupported drives with a checkbox to acknowledge that you’re okay with the risk.
So…there’s really no excuse why they couldn’t have done this for regular drives.
That assumes that the person setting up the NAS is the same person using it, which is not going to be the case for non-tech-savvy users.
Everyone will understand it costing more, fewer people will understand why the NAS ate their data without the warning it was supposed to provide, because cheap drives that didn’t support certain metrics were used.
If Synology wants to have there be only one way that the device behaves, they have to put constraints on the hardware.
But those people would call support when the array couldn’t rebuild. And many of them would blame Synology, and demand warranty replacement of the “defective” device, and generally cost money and stress.
As long as Synology is up front in the requirement and has a return policy for users who buy one and are surprised, I think they’re well within their rights to decide they’re tired of dealing with support costs from misbehaving drives.
As long as they don’t retroactively enforce this policy on older devices I don’t understand the emotionality here. Haven’t you ever found yourself stuck supporting software / systems / etc that customers were trying to cheap out on, making their problems yours?
It’s not that I don't understand. It’s that as an end user I don't give a shit.
Toyota might have great reasons for opening a chain of premium quality gas stations, but the second they required me to use them, I'd never buy another Toyota for as long as I lived.
I want to bring my own drives, just as I have since I bought my first DS-412+ 13 years ago.
People are going to ignore that and leave bad reviews online, which will have compounding effects. SMR drives work in RAID until the CMR buffer regions are depleted, and then the RAID starts falling apart. This will undoubtedly create the wrong impression that Synology products, not the drives, are not trustworthy.
What if the device had a minimum benchmark feature that would test any new drive? And fail the worst ones?
SMR drives work like SSDs: writes are buffered to a CMR zone, consolidated into SMR track data, copied into onboard cache RAM, and written to the SMR zone. SMR tracks have sizes of 128MB or so, and can be written or erased in a track-at-once manner, with the head half-overwriting data like moving a broad whiteboard marker slowly outward on a pottery wheel, rather than giving each ring of data enough separation. This works because the heads have higher resolution in the radial direction in reads than in writes; the marker tip is broader than what the disk's eyes can see.
This copy operation is done either while the disk is idling, or is forced, by stopping responses to read and write operations, when the CMR buffer zone is depleted and data has to be moved off. RAID software cannot handle the latter scenario, and considers the disk faulty.
You could probably corner a disk into this depleted state to expose a drive as SMR-based, but I don't know if that works reliably or if it's the right solution. This is roughly all I know on the technical side of this problem anyway.
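For the curious, that probe would look something like this sketch: scatter synchronous small writes until the CMR staging zone would plausibly be exhausted, and watch for a latency cliff. Treat it as a destructive experiment on a scratch drive, not a reliable detector (the device path, span, and thresholds are all guesses):

    import os, random, time

    DEV = "/dev/sdX"          # scratch drive only -- this overwrites data!
    BLOCK = 4096
    SPAN = 500 * 10**9        # scatter writes across the first 500 GB
    buf = b"\0" * BLOCK

    fd = os.open(DEV, os.O_WRONLY | os.O_SYNC)  # force write-through
    try:
        for i in range(200_000):
            offset = random.randrange(0, SPAN, BLOCK)
            t0 = time.monotonic()
            os.pwrite(fd, buf, offset)
            dt = time.monotonic() - t0
            if dt > 1.0:      # a multi-second stall mid-run is the tell
                print(f"write {i}: stalled {dt:.2f}s -- possible SMR cleanup")
    finally:
        os.close(fd)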
I see your argument here and this could also be solved by some type of somewhat difficult flag that Synology could implement.
Meaning that by default it could require a Synology drive that is at minimum going to work decently.
Want to mess around more and are more technical ? Make it a CLI command or something the average joe is going to be wary about. With a big warning message.
Personally I only like to buy very reliable enterprise class drives (probably much better than whatever Synology will officially sell) and this is my main concern.
> The counterargument is, people won’t listen and then blame Synology when their data is affected. At which point it may be too late for anything but attempting data recovery.
Is it though? Most (consumer) NAS systems are probably sold without the drives, which are bought separately. When there is an issue with a drive and it breaks, I’m pretty sure most people technical enough to consider the need for a NAS would attribute that failure to the manufacturer of the drives, not to the manufacturer of the computer they put their drives into.
I know a photographer who needs tech support for really anything, and who has bought drives and upgraded his NAS himself. I don’t think that’s unusual, but of course n=1.
Nobody blamed Synology when that WD SMR issue happened. Come on, let’s get real here. Locking the devices down so they only work with drives bearing Synology branding is about Synology’s profits.
> The counterargument is, people won’t listen and then blame Synology when their data is affected
I see this kind of argument, “X had to do Y otherwise customers would complain”, a lot every time a company does something shady and some contrarian wants to defend them, but it really isn't as smart as you think: the company doesn't care if people complain, otherwise they wouldn't be making this kind of move either, because it raises a lot more complaints. Companies only care if it affects their bottom line, that is, if they can be held liable in court, or if the problem is big enough to drive customers away. There's no way this issue would do any of those (at least not as much as what they are doing right now, by a very large margin).
It's just yet another case of an executive making a shady move for short-term profits; there's no grand reasoning behind it.
Your absolute conviction is misplaced. Support is expensive to provide, especially on hardware that’s expensive to ship around.
This may be a bad move, and you’re certainly right that Synology expects to make more profit with this policy than without it, but it’s a more complex system than you understand. Irate customers calling support and review-bombing for their own mistakes are a real cost.
I don’t blame Synology for wanting to sell fewer units at higher prices to more professional customers. Hobbyists are an attractive market but, well, waves hands at the comments in this thread.
The thing is, that more professional market would never make the mistake of putting SMR drives in a RAID array anyway and they are also (I hope) good enough at doing their own research to filter out reviews from uneducated retail consumers. So, again, we’re left with trying to find a justification for this move other than Synology’s profits.
And when this issue happened with WD drives, I don’t remember a backlash against Synology at all. WD, on the other hand, deserved and received plenty of blame.
For a BYOD product it would be fine to add a blacklist of DM-SMR drives imho. Or have a big red banner and a taint flag.
But that's not what Synology did.
Also, if the image on the Synology page is accurate, they are relabeled Toshiba drives. Which doesn't really seem a good choice for SMB/SOHO NAS devices, because the Toshiba "Machine Gun" MGxx drives are the loudest drives on the market.
I had some 12TB WD Ultrastars that I happily replaced with some Toshiba MG09ACA18TE. To my ears at least, the Toshibas sound significantly more bearable than the Ultrastars did (lower pitch, so less disturbing). Due to living in a small apartment, my NAS is in my living room, so noise matters.
That said, I've since added an SSD and moved almost everything to it (Docker, the database, and all apps), and it's much nicer in terms of noise.
TBF, the picture in TFA only shows rack-mounted Synology devices, where noise is not really a concern.
Synology SMB/SOHO NAS devices should not be affected by the drive lockdown (for now).
Per the article, it’s all “Plus” models from 2025 on, which definitely includes desktop 2-8 bay units.
I have been a happy enough Synology user since 2014, even though I had to hardware repair my main DS1815+ twice in that time (Intel CPU bug and ATX power-on transistor replacement).
Other than two hardware failures in 10 years (not good), the experience was great, including two easy storage expansions and the replacement of one failed drive. These were all BYOD, and the current drives are shucked WD reds with a white sticker (still CMR).
I happily recommended them prior to this change and now will have to recommend against.
So basically, buy one of the older models now?
Will new firmware updates to everything before this require the Synology-branded drives?
Seemingly not, at least not for the moment, based on the experiences of people migrating arrays [successfully] from older units and the massive backlash that would result if they trashed currently working arrays.
If you’re going to buy a DSM unit, I’d definitely buy a 2024 or earlier model. But even as an overall happy user under their old go-to-market approach, I can’t recommend them now.
But that's the proposed change: their Plus lineup, which is generally targeted at the SMB/SOHO/enthusiast market, will work only with their drives.
Then the photo with the rack-mounted servers is obviously a distraction. Thanks for the clarification.
It can't be programmatically identified because manufacturers actively hide it. There are ATA commands for that, but DM-SMR drives lie to them.
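For what it's worth, you can at least check what a drive admits to on Linux; the catch, as noted, is that DM-SMR drives manage the shingling internally and present themselves as ordinary drives (a rough sketch, with /dev/sdX standing in for your device):
cat /sys/block/sdX/queue/zoned    # "host-managed"/"host-aware" for honest drives; DM-SMR and CMR both report "none"
smartctl -x /dev/sdX | grep -i zoned    # may print zone/ZAC capability info, but many DM-SMR drives report nothing
So in practice you're left cross-referencing model numbers against community-maintained CMR/SMR lists.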
I'm not sure why Toshiba M-prefixed 7K2 drives would be bad for NAS use cases. They're descendants of what was used in high-performance SPARC servers. Hot, dense, obnoxious, but that's just Fujitsu. They're plenty reliable, performant, and perfect for your all-important online/near-line data! You just have to look away from the bills (/s).
0: https://www.techpowerup.com/265841/some-western-digital-wd-r...
1: https://www.techpowerup.com/265889/seagate-guilty-of-undiscl...
Which WD drives specifically were misleading customers?
It's fine to have 'Synology supported drives' which guarantee compatibility, but requiring them is absolute bollocks.
WD Red drives. They released a new version of their WD Red drive, and the only difference they stated in the specs was that it had more cache. So I thought: great, this is their updated model with more cache, it's going to be faster.
After some time, people started to post about problems with the new WD Red drives. People had trouble restoring failed drives, and I had a problem where I think the drives never stopped rewriting sectors (you could hear the hard drives clicking 24/7, even when everything was idle).
Then someone figured out that WD had secretly started selling SMR drives instead of CMR drives. The "updated" models with more cache were much cheaper disks, and they only added more cache to try and cover up the fact that the new disks suffered from catastrophic slowdowns during certain workloads (like rebuilding a NAS volume).
This was a huge controversy. But apparently selling SMR drives is profitable, so WD claims the problem is just that NAS software needs to be made compatible with SMR drives, and all is well. They are still selling SMR drives in their WD Red line.
Edit: Here's a link to one of the many forum threads where people discovered the switch: https://community.synology.com/enu/forum/1/post/127228
> the problem is just that NAS software needs to be made compatible with SMR drives
That's half of it ... maybe? Last time I looked, drives that offer host-managed SMR still weren't available to regular consumers. In theory, that plus a compatible filesystem would work flawlessly. In practice, you can't even buy the relevant hardware.
> This was a huge controversy. But apparently selling SMR drives is profitable, so WD claims the problem is just that NAS software needs to be made compatible with SMR drives, and all is well. They are still selling SMR drives in their WD Red line.
Well, SMR lets you store more stuff on the same platter (more or less); fewer platters reduces costs, etc.
WD's claims about it being a software problem would be more reasonable if they provided guidance about what the software needs to do to perform well with these drives, which would probably involve making information about the drive available to the OS/filesystem rather than hiding it.
For future reference which WD drives are actually suitable for NAS? I remember someone saying you need to look for a specific more expensive type.
WD Red Plus -> CMR technology, suitable for NAS
WD Red -> SMR technology, slightly cheaper, not suitable for NAS
…but still marketed for NAS users, alas.
I should have been more specific: not suitable for RAID NAS
RAID and NAS used to go together when drive capacities were lower. E.g. I had a 9TB NAS with RAID5 at times when 8TB drives were >$500 a pop. These days, NAS does not necessarily imply having a RAID setup. I see a new "build your SFF/RPi NAS" article every week, and it rarely involves RAID.
This is because a NAS setup with a single high-capacity drive and an online backup subscription (e.g. Backblaze) is more cost-effective and perfectly adequate for a lot of users, who have no interest in playing the sysadmin. In such a setup, you just need a drive that can withstand continuous operation, and SMR should work fine.
That's an interesting point I hadn't considered. To me, NAS implies RAID. You might be right that this is no longer true.
Frankly, I haven’t bought a single WD since. I no longer trust them as a brand, and trust is critical for this class of things.
buying HDDs these days currently feels a little like navigating the dark web.
I heard it's about the EU CYA directive, which requires IoT companies to deploy security patches in firmware within X months of those patches being released.
This would explain why they'd only want to support HDD models the Synology OS can flash firmware updates to.
(It's also convenient to get more margins.)
Nah. That wouldn't require them to apply patches to someone else's hardware that an end user installed after the fact. By analogy, Samsung isn't obligated to patch the firmware of a pair of AirPods connected to one of its phones.
I have a mix of SMR and CMR drives in my NAS and it works perfectly (on mdadm). I understand ZFS hates it but what is the problem on Synology?
Afaik it's not much of a problem until you need to rebuild the RAID.
My rough understanding is that an SMR drive has a small CMR section where data goes first when you write. Then, when the drive is idle, it moves the data to the SMR region, because SMR writes are slow. If you fill up the small CMR section, the drive starts writing directly to SMR and you see a huge performance loss. Without adaptation for SMR drives, a lot of systems recognized this slowdown as a failure and would halt a restore. Even with that corrected for, you are looking at 10x the rebuild time of a CMR drive, which increases the odds of another drive failing during the rebuild.
A friend of mine who moved from Communist Russia to the US once explained to me the "tyranny of choice." He explained that, in the US, sometimes we get overwhelmed with many, many options.
To be quite blunt: After choosing my NAS, the act of choosing hard drives is actually harder and somewhat overwhelming. To be quite honest, knowing that I can choose from a narrower set of drives that are guaranteed to work correctly is going to really tip my scale in favor of Synology the next time I'm in the market for a NAS.
Locking non-approved drives does absolutely nothing for your case though.
They can sell "guaranteed to work" drives for people like you who don't want to go through the whole picking process, while leaving other people the choice to put in any drive they want.
Western Digital misrepresented their Red drives as being suitable for NAS use, which ended with a class action lawsuit against them.
Luckily all of the settings can be searched for and verified.
Seagate and Hitachi have seemed to treat me well over the years and I was giving WD a chance.
Next drives to buy will be from this list: https://www.backblaze.com/blog/backblaze-drive-stats-for-202...
This is an argument in favor of all anti-competitive walled garden moves. But we've seen time and again the degrading service and price gouging that ultimately comes. People have a right to use the things they own as they please. Companies don't have a right to protect their image that supersedes this basic human right of independence.
I currently run 2 Synology NASes in my setup. I am very satisfied with their performance, but I will nevertheless be phasing them out, because their offerings are not evolving in line with customer satisfaction but with profit maximization through segmentation and vertical lock-in.
Do you have a plan on what you’re going to move to?
I’ve used (and still use) UnRaid before but switched to Synology for my data a while back due to both the plug-and-play nature of their systems (it’s been rock solid for me) and easily accessible hard drive trays.
I’ve built 3 UnRaid servers and while I like the software, hardware was always an issue for me. I’d love a 12-bay-style Synology hardware device that I could install whatever I wanted on. I’m just not interested in having to halfway deconstruct a tower to get at 1 hard drive. Hotswap bays are all I want to deal with now.
Not OP, but TrueNAS is a good alternative: both the software and their all-in-one NAS builds.
I have an Unraid install on a USB stick somewhere in my rack, but over time it started feeling limited, and when they began changing their license structure I decided it was time to switch, though I run it on a Dell R720xd instead of one of their builds (my only complaint is the fan noise; I think the R730 and up are better in this regard).
Proxmox was also on my shortlist for hypervisors, if you don't want TrueNAS.
I also have a TrueNAS, but because of its limitations (read-only root file system), I came to the conclusion that, if I ever need to reinstall it, I would switch to Proxmox and install TrueNAS as one virtual client, next to the other clients for my home lab.
I have found workarounds for the read-only root file system. But they aren't great. I have installed Gentoo with a prefix inside the home directory, which provides me with a working compiler and I can install and update packages. This sort of works.
For running services, I installed jailmaker, which starts an LXC Debian container with docker-compose. But I am not so happy about that, because I would rather have an atomic system there. I couldn't figure out how to install Fedora CoreOS inside an LXC container, or whether that is even possible. Maybe NixOS would be another option.
But, as I said, for those services I would rather just run them in Proxmox and only use the TrueNAS for the NAS/ZFS management. That provides more flexibility and better system utilization.
I use TrueNAS Scale as the root OS and have it run a Linux VM, which is easily done via their 'Virtualization' feature. No need for Proxmox. Afaik it works a lot better to give ZFS direct access to the underlying HDDs. TrueNAS also has an 'Apps' feature, which is basically glorified Helm chart installs on k3s that TrueNAS does for you. But I prefer more control, so I have k8s on the Linux VM. What's also great is that the k8s on the Linux VM can use the TrueNAS storage via democratic-csi.
I was using TrueCharts before k8s was deprecated.
The deprecation caused me to move to something more neutral and stay away from all the 'native' TrueNAS apps; I migrated to ordinary docker-compose, because that seemed to be the most approachable.
I was also looking into running a Talos k8s cluster, but that didn't seem as approachable to me, and a bit overkill for a single-node setup.
I run Proxmox on the bare metal and pass the HBA through to the TrueNAS VM (so it gets direct access to the attached drives).
> I also have a TrueNAS, but because of its limitations (read-only root file system)
It isn't really the case. TrueNAS wants you to look at it as an appliance so they make it work that way out of the box.
On the previous release, they had only commented out the apt repos but you could write to the root filesystem.
On the latest release, they went a little further and did lock the root filesystem by default, but with a single command (`sudo /usr/local/libexec/disable-rootfs-protection`), root becomes writable and the commented-out apt repos are restored. It just works.
But AFAIK, updates will overwrite everything, so installing anything is just temporary.
I have had both of these releases running side by side for multiple years now. It will not auto-update between releases anyway, similarly to how nobody would do a dist-upgrade for you automatically. Neither has ever overwritten my changes to enable rootfs rw plus the apt repo fix, or my other changes to the filesystem, any more than a normal Debian would. Enabling apt actually gets you a more up-to-date system than you'd get otherwise.
I've been mostly happy with a Terramaster DAS attached to a mini PC running Unraid. The bays are hot-swappable and overall it's been solid.
I say "mostly" happy because I almost returned it. The USB connection between the mini PC and the Terramaster would be fine for a few days and then, during intense operations like parity checks, would disconnect and look like a parity error/disk failure, except the disks were fine. Eventually I realised the DAS draws power from the USB port as well as the adapter plug, and the mini PC wasn't supplying enough. Since attaching a powered USB hub it's been perfect.
Explanation of symptoms and solution, in case anyone is considering one or has the same problem: https://forum.terra-master.com/en/viewtopic.php?t=5830
I had the same issue, same solution worked for me too.
It works well, but the USB connection could be faster, and it bogs down when doing writes with soft-RAID. I've been thinking about going for a DAS solution connected directly via SAS instead. Still musing about what enclosure to use, though.
I haven't used them personally so I can't vouch for them, but UGREEN's NAS line is the same form factor as a Synology unit, but it lets you run any OS. I'd probably put straight Debian on mine and handle it all manually as I do now. I wouldn't be surprised if you could put Unraid on it.
Pretty sure Asustor also allows installing any other Linux you want.
I use a QNAP TL-D800S for 8 bays connected to my home server. You could use as many as you have available PCIe ports.
No plans yet. My current NAS setup should be fine for another 2 years.
If it were now, I'd probably look deeper into Asus, QNAP, or a DIY TrueNAS.
There are a lot of NAS cases you can buy. I have a Jonsbo that I've been pretty happy with for a year or so.
I'm in a similar position. I'm on my second NAS in the last 12 years. I've been very satisfied with their performance, but this kind of behavior is just completely unacceptable. I guess I'll need to look into QNAP or some other brand. Also, I think my four-disk setup is in a RAID 5, but it might be Synology's proprietary version, so I'll need to figure out how to migrate off of that. I don't think I'll be able to just insert the drives in a different NAS and have it work.
Even Synology's "proprietary" RAID is just Linux mdadm, and they have instructions on their website on how to mount it under Linux. One of the reasons I preferred Synology in the first place was their openness about stuff like that!
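For anyone facing that migration, recovery on a stock Linux box looks roughly like this (a sketch based on the approach in Synology's own documentation; the volume group and volume names vary by model and setup, vg1000/lv is just a common example):
sudo apt install mdadm lvm2
sudo mdadm --assemble --scan           # reassemble the md arrays from the Synology disks
sudo vgchange -ay                      # activate the LVM layer SHR builds on top
lsblk -o NAME,SIZE,TYPE,FSTYPE         # find the data volume
sudo mount -o ro /dev/vg1000/lv /mnt   # mount read-only to be safe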
Awesome to know! I'll read up on mdadm, appreciate the pointer!
It's migrating to btrfs RAID 1 now, and their docs just say to wipe the drives in case of issues, lol.
That was my first thought too. I am currently a very happy Synology customer and am selling them to B2B customers for storage.
I have yet to come across something like Hyper Vault for backup and Drive for storage that works (mostly) seamlessly. I would be happy to self-host, but the No Work Needed (tm) products of Synology are just great.
Sad to see them taking this road.
I'm going to buck the nerds and say I wish Drobo was back. I love my 5N, but had to retire it as it began to develop Type B Sudden Drobo Death Syndrome* and switch out to QNAP.
It was simple, it just worked, and I didn't have to think about it.
* TB SDDS - a multi-type phenomenon of Drobo units suddenly failing. There were three 'types' of SDDS I and a colleague discovered - "Type A" power management IC failures, "Type B" unexplainable lockups and catatonia, and "Type C" failed batteries. Type B units' SOCs have power and clock go in and nothing going out.
My 2nd generation Drobo that I got back in 2008 is still chugging along. Haven't had to replace a hard drive in 10-12 years either. I love it even though it's super slow by today's standards. Been meaning to retire it for years, but it's been so rock solid I rarely have to think about it.
I still have two Drobo 5N2 NAS boxes going strong. One is the backup for the other. I really wish someone would take up the Drobo-like simplicity and run with it.
I'm not sure what customers Synology is targeting. Small office/home office (SOHO) was their original market, but these customers won't be willing to pay high prices per drive. Medium-sized businesses? They mostly move their infrastructure to the cloud, which probably leads to low sales volumes. Plus, they're very price-sensitive too. Large enterprises and corporations? That is the domain of established providers like NetApp. Synology might dream about the high prices those major storage vendors can charge, but this market is difficult to enter without years and years of proven reliability in hardware and service.
I don't think this will work the way Synology imagines it.
Something I haven't seen emphasized enough with this move is that by obfuscating a drive's vendor and manufacture date, you won't know if your drives are from the same batch. This is important because if there are any manufacturing defects in a given batch of drives, the failures are likely to happen around the same time, greatly increasing the chance of losing data.
Basically, Synology drives are not only more expensive, they're also statistically speaking less reliable when building a RAID with them, negating the very purpose of the product. What a dumb move.
I still love the example some here may recall, where HN itself was hit by this: two disks within a RAID array AND THE BACKUP SERVER all failed within hours of one another:
<https://news.ycombinator.com/item?id=32048148>
Resulting in, FWIW, my top-rated-ever HN comment, I think:
I personally think that in 2025, you should treat the NAS as a purely storage product and buy your hardware from that perspective. TrueNAS or UniFi’s new NAS product fulfill that goal. From there, supplement your NAS with a Mac Mini or other mini-PC for storage-adjacent tasks.
Synology’s whole business model (arguably QNAP’s too) depends on you wanting more drive bays than 2 and wanting to host apps and similar services. The premium they ask is substantial. You can spec out a beefy Dell PowerEdge with a ton of drive bays for cheap and install TrueNAS, and you’ll likely be much happier.
But the fundamental suggestion I make is to consider a NAS a storage-only product. If you push it to be an app and VM server too, you're dependent on these relatively closed ecosystems and subject to the whims of the ecosystem owner. Synology choosing to lock out drives is just one example. Their poor encryption support (arbitrary limitations on filenames or strange full-disk encryption choices) is another. If you dive into any system like Synology long enough, you'll find warts that ultimately you wouldn't face if you just used more specialized software than what the NAS world provides.
> The premium they ask is substantial. You can spec out a beefy Dell PowerEdge with a ton of drive bays for cheap and install TrueNAS, and you’ll likely be much happier.
Yeah, but then you have a PowerEdge with all the noise and heat that goes along with it. I have an old Synology 918 sitting on my desk that is so quiet I didn't notice when the AC adapter failed. I noticed only because my (docker-app-based) cloud backups failed and alerted me.
Unless Synology walks back this nonsense, I'll likely not buy another one, but I do think there is a place for this type of box in the world.
> Unless Synology walks back this nonsense, I'll likely not buy another one, but I do think there is a place for this type of box in the world.
I would recommend a mini-ITX NAS enclosure or a prebuilt system from a vendor that makes TrueNAS boxes. iXsystems does sell prebuilt systems, but they're still pricey.
This is what I need to research. I don't understand why we need NAS hardware at all; aren't the controllers and drivers in an average box running Linux enough for the same software RAID that Synology does?
You don't; similarly, you don't need a MacBook to run OS X (technically at least). You buy the full-fledged NAS or MacBook because it's a bundled, supportable product that reduces your cognitive load in exchange for money. Synology for an SMB or home lab is pretty good stuff, and you don't spend (as much of) your time editing smb.conf or configuring the core backup services or whatever. Some clickops and you're done, and you can do a lot. Under the hood it's still Linux (or at least mine is); you can SSH in and do damage. The hardware isn't "special": it's not necessarily substantially better than another system that could handle a similar number of drives in any measurable way.
I have a Synology because I got tired of running RAID on my personal Linux machines (I had a Drobo before that for the same reasons), but as things like drive locking occur and arguably better OSS platforms become available, I'm not sure I'd make the same decision today.
I perceive the main benefit to the NAS hardware to be ease-of-management in terms of RAID via a software stack, and of course, physical hardware slots for holding lots of disks. You can easily build a NAS with pure Linux/FreeBSD and a case with disks inside.
Synology sells hardware and you get the software without yearly license.
Investors want bigger returns. They know they won't get away with selling a monthly license at this point; a large percentage of customers would simply stop buying.
What other options do you have for recurring revenue? Cloud storage, but I don't think that's a great success.
And then... yes, hard disks. They are consumable devices with a limited lifespan. Label them as your own and charge a hefty fee.
The disks in a (larger) NAS setup cost more than the NAS itself. They want a piece of that pie by limiting your options.
No more syno for me in the future
I'm confused.
It sounds like only certain features will be unavailable for non-Synology drives:
> Additionally, certain features such as volume-wide deduplication, lifespan analysis, and automatic firmware updates for third-party devices will be disabled.
It sounds like you can still use non-Synology drives just fine, but not do certain advanced things with them?
So why is this being called "locking"? I use Synology at home just as very basic RAID. Am I correct that this wouldn't affect me at all?
And are there any reasons why this is justifiable (e.g. hard drive manufacturers lying about health information) or is it just a cash grab?
Not automatically applying firmware updates to 3rd-party drives is reasonable.
Disabling filesystem features when using them is insane.
What's next: no encryption if you're using a Seagate?
It sounds like deduplication is already a pretty advanced feature requiring Synology SSDs:
https://kb.synology.com/en-me/DSM/help/DSM/StorageManager/vo...
It might be more about performance, that they'll require their own drives with custom firmware that works better?
That's what I'm trying to understand here. Is Synology really removing important basic necessary features, or is this more about high-end consistency and performance?
The former. They say it's the latter, but almost no one else in the space makes those goofy claims. If I buy an SSD that's too slow, that's on me. And if I deliberately buy a slower one because the cost:performance ratio is better for my needs, then that's my choice. There's no technical reason why their rebranded devices should be capable of doing things the competition's can't, other than low-level things like installing firmware updates.
Synology provided more info to the owner of YouTube channel NASCompares, who then posted it on Reddit: https://www.reddit.com/r/synology/comments/1k53gk0/official_...
It starts with "Synology's storage systems have been transitioning to a more appliance-like business model." As a long-time user, all of this collectively moves Synology from "highly recommended" to "avoid."
I've no experience with Synology and have no opinion regarding their motivations, execution, or handling of customers.
However...
Long long ago I worked for a major NAS vendor. We had customers with huge NAS farms [1] and extremely valuable data. We were, I imagine, very exposed from a reputation or even legal standpoint. Drive testing and certification was A Very Big Deal. Our test suites frequently found fatal firmware bugs, and we had to very closely track the fw versions in customer installations. From a purely technical viewpoint there's no way we wanted customers to bring their own drives.
[1] Some monster servers had triple-digit GBs of storage, or even a TB! (#getoffmylawn)
For an entertaining/terrifying perspective on firmware, obligatory Bryan Cantrill talk "Zebras All the Way Down" https://www.youtube.com/watch?v=fE2KDzZaxvE
Synology is consumer and SMB focused, though. For high-end storage that level of integration makes sense, but for Synology it's just not something most of their customers care about or want.
That being said, there aren't many major HDD manufacturers anymore, nor do they have many models. Synology is using vanilla Linux features like md and LVM. You don't think those manufacturers have tested their drives against vanilla Linux?
The reason I chose Synology over others was their SHR "filesystem", where you can keep adding heterogeneously sized disks after constructing the FS, and it will make the most use possible of the extra capacity in the new disks. When I researched it, ZFS did not yet have its resizing feature merged; now it does, though I think it is still not able to use this extra space.
I'm wondering if anybody has any better recommendations given the requirement of being able to add storage capacity without having to completely recreate the FS.
BTRFS doesn't care how big the disks are: you can just tell it to keep x number of copies of each data/metadata/system block, and it will do the work of keeping your copies on different devices across the filesystem. Much like SHR, performance isn't linear with different-sized devices, but it's super simple to set up and it's in-tree, whereas ZFS has a lot more complexity and is not baked into the kernel.
Snapshots are available, but a little more work to deal with since you have to learn about subvolumes. It's not that hard.
Edit: TIL, SHR is just mdadm + btrfs.
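To make that concrete, a minimal sketch of the btrfs approach with mixed-size disks (device names and mount point are placeholders; raid1 here means "two copies", and raid1c3/raid1c4 give three or four):
mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc /dev/sdd    # -d = data profile, -m = metadata profile
mount /dev/sdb /mnt/pool
btrfs device add /dev/sde /mnt/pool    # grow the pool later with a disk of any size
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/pool    # spread the copies across the new device set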
Any Linux with LVM. You don't need fancy proprietary OS for that.
To expand on this with an example. Adding a new device we'll call sdz to an existing Logical Volume Manager (LVM) Volume Group (VG) called "NAS" such that all the space on sdz is instantly available for adding to any Logical Volume (LV):
pvcreate /dev/sdz
vgextend NAS /dev/sdz
Now we want to add additional space to an existing LV "backup":
lvextend --size +128G --resizefs NAS/backup
*note: --resizefs only works for filesystems supported by 'fsadm'; its man page says: "fsadm utility checks or resizes the filesystem on a device (can be also dm-crypt encrypted device). It tries to use the same API for ext2, ext3, ext4, ReiserFS and XFS filesystem."
If using BTRFS inside the LV, and the LV "backup" is mounted at /srv/backup, tell it to use the additional space using:
btrfs filesystem resize max /srv/backup
How are redundancy and drive failure handled? The only capacity mix-and-match scheme I have familiarity with is btrfs.
Synology SHR is btrfs (or ext4) on top of LVM and MD. MD is used for redundancy. LVM is used to aggregate multiple MD arrays into a volume group and to allow creating one or more volumes from that volume group.
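You can see that whole stack for yourself over SSH on the NAS (tool availability varies by DSM version) or on any Linux box the disks are moved to; a quick sketch:
cat /proc/mdstat                       # the md arrays at the bottom
sudo pvs; sudo vgs; sudo lvs           # the LVM layer aggregating them
sudo lsblk -o NAME,SIZE,TYPE,FSTYPE    # the full stack, with btrfs/ext4 on top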
Comment I responded to was using LVM on its own and I was wondering about durability. The docs seem to suggest LVM supports various software raid configurations but I'm not clear how that interacts with mixing and matching physical volumes of different sizes.
Same here, went with synology for SHR.
However I did notice that the performance was substantially worse when using heterogeneous drives, which makes SHR somewhat less valuable to me.
SHR is just Linux MDADM and LVM.
In addition to the alternatives already mentioned, I've been very happy with SnapRAID + MergerFS for a few years now. I don't have to worry about a magical black box as with btrfs or ZFS, I can expand the array with disks of any size, if one disk fails I only lose the data on that disk while the array remains usable, and it's dead simple to set up and maintain.
The only drawback, if I can call it that, is that syncs are done on-demand, so the data is technically unprotected between syncs. But for my use case this is acceptable, and I actually like the flexibility of being in control of when this is done. Automating that with a script would be trivial, in any case.
I was disappointed when I fully understood the limitations of SHR after purchasing my Synology box, and subsequently failed to install MergerFS on it. It's the only thing I miss about my old self managed server.
- mergerfs: https://github.com/trapexit/mergerfs
- snapraid
I haven't used either, but these were 2 options that came up when I was researching a few years ago.
I'm in exactly same situation myself.
Windows storage spaces
It's all fun and games until your volume shows up as RAW and you're dead in the water.
I don’t think that is comparable with a good NAS.
NAS? No, but SAN or DAS, yes. It provides similar features, plus any bog standard x86 application you wish to run.
Storing encrypted blobs in S3 is my new strategy for bulk media storage. You'll never beat the QoS and resilience of the cloud storage product with something at home. I have completely lost patience with maintaining local hardware like this. If no one has a clue what is inside your blobs, they might as well not exist from their perspective. This feels like smuggling cargo on a federation starship, which is way cooler to me than filling up a bunch of local disks.
I don't need 100% of my bytes to be instantly available to me on my network. The most important stuff is already available. I can wait a day for arbitrary media to thaw out for use. Local caching and pre-loading of read-only blobs is an extremely obvious path for smoothing over remote storage.
Other advantages should be obvious. There are no limits to the scale of storage and unless you are a top 1% hoarder, the cost will almost certainly be more than amortized by the capex you would have otherwise spent on all that hardware.
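The pipeline for this can be as boring as tar, gpg, and the AWS CLI; a minimal sketch, with the bucket name and paths made up:
tar -cf - /srv/media/some-show | gpg --symmetric --cipher-algo AES256 -o some-show.tar.gpg    # the provider only ever sees an opaque blob
aws s3 cp some-show.tar.gpg s3://my-blob-bucket/media/some-show.tar.gpg --storage-class DEEP_ARCHIVE    # cheapest class, slowest thaw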
S3 or glacier? Glacier is cost competitive with local disk but not very practical for the sorts of things people usually need lots of local disk for (media & disk images). Interested in how you use this!
20TB, which you can keep in a cute little 2-bay NAS, will cost you $4k USD/year on the S3 infrequent access tier in APAC (where I am). So the "payback time" of local hardware is just 6 months vs S3 IA. And that's before you pay for any data transfers.
Did you factor in the resilience and redundancy S3 gives you and that you cannot opt out of? I have my NAS, and it is cheaper than S3 if I ignore these, but having to run 2 offsite backups would make it much less compelling.
They probably factored RAID1 into that price, which you can skip if you're setting up three copies. (At least I hope they did, their hardware prices must be dire if $2000 only gets you a tiny NAS and two 10TB drives.) If I do napkin math based on US prices, a mini PC and a 20TB external drive are a bit under $500 total, and a 2 bay NAS and a 20TB internal drive are a bit over $500 total, so that's about $1500 for the triple-NAS option and $3000/year for the S3 infrequent access option. Still extremely compelling.
Agree, they are not the same thing. Yes, S3 provides much better durability. I just can't afford it.
For my use-case I'm OK with un-hedged risk and dollars staying in my pocket.
I backup my nas to rsync.net, it’s very cost effective using borg backup.
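For anyone curious, that combo is only a couple of commands (a sketch; the host and repo names are placeholders, and rsync.net's docs ask for --remote-path=borg1 to select their borg binary):
borg init --encryption=repokey --remote-path=borg1 user@usw-s001.rsync.net:nas-backup
borg create --stats --compression zstd --remote-path=borg1 user@usw-s001.rsync.net:nas-backup::'{hostname}-{now}' /volume1/data
Deduplication means the nightly runs only upload what changed, which is where the cost-effectiveness comes from.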
$0.01/GB/mo; that does not seem better than Glacier, does it?
Except it's not Glacier speeds, there are no bandwidth costs, support is on a completely different level than AWS (you can actually reach an actual knowledgeable human), and you can use anything that speaks SSH. They also have an expert price at $0.008/GB/mo here: https://www.rsync.net/products/borg.html
Yes and no. I have been using NAS for a long time, and I use older drives as offline/offsite backups. So the cost is mostly amortized already. Those machines are off except once a week (local)/ once a month (offsite), to do an incremental backup. So this is a good use of some older drives.
> S3 or glacier
This is the same product.
> 20TB
I think we might be pushing the 1% case here.
Just because we can shove 20TB of data into a cute little nas does not mean we should.
For me, knowledge that the data will definitely be there is way more important than having "free" access to a large pool of bytes.
20 TB isn't that out of reach when you're running your media server and taking high resolution photos or video (modern cameras push a LOT of bits).
I'm the last person I know who buys DVDs, and they're two-thirds of the reason I need more space. The last third is photography. 45.7 megapixels x 20 FPS adds up quick.
S3's cost is extreme when you're talking in the tens of terabytes range. I don't have the upstream to seed the backup, and if I'm going outside of my internal network it's too slow to use as primary storage. Just the NAS on gigabit ethernet is barely adequate to the task.
> knowledge that the data will definitely be there is way more important than having "free" access to a large pool of bytes
Until Amazon inexplicably deletes your AWS account because your Amazon.com account had an expired credit card and was trying and failing to renew a subscription.
Ask me how I know
20TB isn't all that much anymore, especially if you do anything like filming, streaming, photography, etc. Even a handful of HQ TV shows can reach several TB rather quickly.
Additionally, 20TB is only going to run you $300-400 for a consumer drive.
It's wild that consumer drives are so much more expensive than enterprise Exos drives with better performance, reliability, and warranty.
Yes. 20TB isn't a NAS. It's the HDD acting as bulk storage in your desktop.
> Just because we can shove 20TB of data into a cute little nas does not mean we should.
Okay, I'm curious now. When you were talking about "a bunch of local disks", what size disk did you have in mind?
Right now the best price per TB is found on disks in the 14-24TB range.
I currently store 10 TB on my NAS, and growing. The data is live, I access some of it every day, sometimes remotely. I have 3 rotating "independent" backups in addition to the NAS (by independent I mean they're made with rsync and don't depend on any specific NAS OS feature), stored in an old safe that would probably not be very effective against thieves but should protect the drives in case of fire.
There are no recurring costs to this setup except electricity. I don't think S3 can beat that.
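For reference, the whole "independent backup" trick is just plain rsync against a mounted drive, so any Linux box can restore it (a sketch; the paths are made up):
rsync -aHAX --delete /volume1/data/ /mnt/offline-backup/data/    # -a archive mode, -H hardlinks, -A ACLs, -X xattrs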
Hardly 1%; I'm sure anyone who works in the film industry, or media in general, has terabytes of video footage. Maybe even professional photographers who have many clients.
>This is the same product.
Confusingly "Glacier" is both its own product, which stores data in "vaults", and a family of storage tiers on Amazon S3, which stores data in "buckets". I think Glacier the product is deprecated though, since accessing the Glacier dashboard immediately recommends using Glacier the S3 storage tiers instead.
20TB is a single drive
> If no one has a clue what is inside your blobs, they might as well not exist from their perspective.
This is not the perspective of actors working on longer timescales. For a number of agencies, preserving some encrypted data is beneficial, because it may become possible to recover in N years, whether through classical improvements, bugs found in key generators, or advances in quantum computing.
Very few people here will be that interesting, but... worth keeping in mind.
The point of encryption in this context is to defeat content fingerprinting techniques, not the focused resources of a nation state.
The only thing making S3 a no-go for me is their outbound traffic cost of $90/TB; a 100TB restore would come out to $9K just to transfer the data back once.
> You'll never beat the QoS and resilience of the cloud storage product with something at home.
You can’t be serious.
How much does S3 cost these days? I've been burned by their hidden surge pricing before and hesitate to rely on them for personal storage when self hosting is fairly cheap.
Disk hosting cost isn't much
Bandwidth to get all of that back down to your system is much pricier, depending on how much you use that data.
Storage cost is nice, data transfer costs are often prohibitively high.
For large datasets, the cost of that is like rebuying your NAS every 9-12 months.
3-2-1 says you want both. Can be convenient to centralize backups on a local NAS and then publish to the cloud from there.
As a Synology user, I don't just feel hugely let down by such a short-sighted company; I feel compelled to boycott them. And it's not about the money but the major lock-in and nonsensical decisions. My DS1821+ is full: no more purchases from Synology, no further expansions, and no good publicity.
That's understandable. It's a genuine betrayal. Synology, QNAP, et al. exist as a response to the traditional storage vendor lock-in, gouging and rent seeking.
Synology seems to have been on the wrong track for a while now:
- They completely missed the NVMe transition. The vendor is still very much focused on the HDD world. Yes, NVMe/SSDs in the consumer space don't yet offer the same capacity as HDDs, but the technology is evolving rapidly. It feels like being a music executive in 2007 believing CDs were the future.
- They dropped detailed S.M.A.R.T. access from the main UX, which is also the standard for NVMe health reporting. I personally run scrutiny, but Synology's built-in health reporting in recent DSM versions doesn't feel up to this fundamental task.
- DSM updates around 7.2.2 negatively impacted H.265/HEVC codec support, affecting a large user base relying on their NAS for media [1].
- Synology is fundamentally a hardware company. While DSM is polished, their hardware is the centerpiece, and it has been lagging behind for a while now (NAS refreshes, CPU choices, Ethernet speeds, etc.).
- Even package updates seem slower and further apart (e.g., Docker was stuck on 20.10.23 until relatively recently in the DSM 7.2.1 cycle).
I had a Solaris ZFS filer that I ran for a long time (for historical reasons; I jumped on OpenSolaris when it came out and never had a chance to move off Oracle's lineage). I moved to Synology about three years ago because I was sick and tired of managing my own file server. Yet I feel like, at this point, the cons of Synology are starting to outweigh the manageability advantages that drew me in.
[1] https://www.reddit.com/r/synology/comments/1feqy62/synology_...
Any experiences with Ugreen NAS? They're a new player in the space, but with very compelling hardware offerings, way ahead of Synology. I've been meaning to replace my old Drobo setup for years, and Ugreen seems to finally be hitting the sweet spot of specs and pricing that I've been looking for.
I've been looking into a NAS myself.
I think self-built is the best bang for buck you're going to get and not have any annoying limitations.
There are plenty of motherboards with integrated CPUs (the N100, same as the cheaper Ugreen ones) for roughly 100 euros. Buy a decent PSU and get an affordable case. For my configuration with a separate AMD CPU, I'm looking at right around 400 euros, but I get total control.
And as far as software is concerned, I find setting up a modern OS like TrueNAS about the same difficulty as an integrated one from Ugreen.
Just keep in mind that Intel is keeping the total PCIe bandwidth out of those CPUs very constrained on purpose.
The best solution in my opinion is to buy 5-year-old, used server motherboards and CPUs (like AMD Epyc 3 right now). They are fairly cheap, they are durable products designed to work 24/7, and they come with a huge number of PCIe lanes and extensibility. Same with enterprise SSDs for home usage, which usually involves very few writes; a used enterprise SSD with a ton of endurance and very little written is probably the best bang for your buck. I wouldn't do that with hard drives, though.
Power consumption is at a completely different level, though. The N100 gives you pretty good performance with very low power draw.
Ark says 9 lanes of PCIe 3.0?
For a NAS, I don't think I'd need more than 1-2 lanes for any single device. That sounds fine.
Yeah, I assume most home users these days are never pushing a NAS beyond 1Gbe, and 99% of people who have faster networks are still probably just doing 2.5Gbe (still just talking about home use). This wouldn’t make that PCIe bandwidth sweat.
Absolutely. This particular setup is meant more for your bog-standard home NAS.
I backed the Ugreen NAS on Kickstarter and have been using it since. The hardware is great; it is built like a tank. But the software is not there after almost a year: no iSCSI support as of yet, and snapshots work in some weird way. I can only access snapshots over the web GUI, and I am not able to get a simple list of available snapshots.
Shoutout to openmediavault. Just yesterday I installed it on my DXP8800 and now it works like a charm. But to install another OS you have to deactivate the watchdog timer in the BIOS, otherwise it resets the NAS every three minutes. Press CTRL + F12 to get into the BIOS and look for something like "watchdog" and disable it.
That's the first time I've heard about them, but they look very interesting and pretty. Synology has become way too expensive for me, as I only need a 4-bay NAS, and Ugreen is cheaper than the Synology. My only concern would be the software itself, and whether they can avoid all the security holes that plagued some brands like QNAP.
Last but not least, they seem to have Docker support, which was restricted to the more powerful Synology models; it's a nice bonus for self-hosting nowadays.
I had to replace my old micro server recently, and it was hard not to buy a Ugreen: the hardware looks nice, decent N100 CPU, and the software seems OK, but I wanted to run Linux myself.
I ended up buying a Terramaster DAS instead and connected it via USB to my NUC.
Also considered a NAS enclosure with an N110 mini-ITX board, which would allow you to upgrade it in the future.
Isn't this a simple solution? https://github.com/007revad/Synology_HDD_db
Some of us are using that with great success to eliminate the locking situation.
I personally think if you spend the money on a Synology setup and you depend on this script, you’re playing with fire. If you intend to keep DSM updated, you run the risk of Synology playing a cat-and-mouse game that doesn’t end well. Sure you can do this now, but forever? Who knows how long it will last.
Also https://github.com/007revad/Synology_M2_volume for NVMe M.2 drives (some synology models only allow NVMe drives to be used as read/write caches without this script)
Yes but the worry is that they might get rid of this eventually
I have some sympathy for this, given the disasters of the WD 'Green' series and the recent revelations about used disks being sold as new. Synology doesn't want to be lumped in with other companies' problems.
They really have to sell it by minimising the price differential and reducing the lead time.
Slapping Synology stickers on Seagate drives doesn't make them magically immune from being mislabeled out of refurbishment.
This is the same old tired argument Apple made about iPhone screens: complain about inferior aftermarket parts while doing everything in their power to not make the original parts available anywhere but AASPs. Except here we have the literal same parts, with only a difference in the firmware vendor string.
Of course. But my hope is Synology does a little bit of QA before slapping that sticker on.
Honestly, you should just buy used enterprise drives. That they have hours on them is actually an upside, since most drives die either very early or very late in their expected lifespan. Our NAS is all Exos drives, no problems.
On the other hand, an NVMe drive from Crucial that lied about syncing data caused a write hole in ZFS, and the associated pool broke to the point where we could only mount it with lots of flags in read-only mode.
And SMR sold as NAS drives mostly.
I've been very happy with my Synology NAS that has served me so well, but forcing this sort of vendor lock-in is simply unacceptable. I suppose this means I'll have to look for some other solution.
The problem is, I've formatted my drives with SHR (Synology Hybrid RAID, essentially another exclusive lock-in), and this would mean a rather painful transition, since it now involves getting a whole new set of drives to format and move data to, rather than a simple lift-and-drop.
Ugh.
Like the adjacent comment mentioned, it's mountable in Linux, so I wouldn't call it lock-in in the normal sense: https://zarino.co.uk/post/synology-shr-raid-1-ubuntu/
Not sure why people are saying SHR is proprietary in some of the comments I read; it's effectively a wrapper for mdadm, though I suppose the GUI itself could be called proprietary.
It’s readable / writable in any Linux system if mounted properly - the SHR is just an (extremely well done and convenient) UI for setting up standard raid partitions in a way that uses the entire disk.
They should've designed a model with open-source firmware, written useful technical articles about storage, done PR campaigns, and sold premium versions of said hardware. I imagine that would create more goodwill and be a better business move.
They could've even sold their own branded drives on the side and explained why they are objectively better, but still let customers choose.
I have an 8-bay NAS from Synology, and I'm now considering moving away when I have to replace it.
Is there something with 6-8 drive slots on which I could install whatever OS I want? Ideally with a small form factor; I don't want to have a giant desktop again for my NAS purposes.
Terramaster F6-424. Most of the non-Synology NASes let you install whatever OS you want (but, don't provide any support other than "here's how you install it"). Unraid, TrueNAS Scale, and Open Media Vault are popular OS choices.
I used to be a Synology fan, owning many of their devices for home and office. But I've moved on to TrueNAS now and haven't looked back. TrueNAS with ZFS is just amazing.
It would be interesting to know the negatives of Synology and why you moved away from them. We gain nothing from your comment as is; it could be that your boss was pissed his ex got a job in security there.
My biggest reason is flexibility in machine configuration. I found a lot of flexibility in bringing my own machine to the table: I can put any mix of HDD, SSD, NVMe, or RAM in my machines. With Synology or any other premade box, I am stuck with a specific config.
For example, my current TrueNAS at home has 128GB of RAM, 32TB of NVMe (live data) and 72TB of HDD (archive data), with significant CPU and GPU (well, compared to a Synology box; I am running a Ryzen 5700G) and 10G networking.
It didn’t start out here but because TrueNAS works with your own hardware I can evolve myself incrementally as hardware costs and needs change.
It is a beast of a machine and it is no work to maintain - TrueNAS makes it an appliance.
I’ve written up previously on my home setup here: https://benhouston3d.com/blog/home-network-lessons
I can hardly see the point of devices like this. If you are tech-savvy enough to host your own NAS locally, you might as well build your own NAS and install whatever user-friendly OS you wish (Unraid or OpenMediaVault, as examples). No vendor lock-in whatsoever that way. If you aren't tech-savvy enough, then you should probably use cloud storage anyway.
I am very capable of building my own, but the plug-and-play of it is really nice. I basically popped in some drives and had a share up and running in a couple of hours. The same sort of thing with DIY was: make a part-picker list and build it (1-2 days), then pop the drives in, then figure out how to configure it correctly (another day or so, because I do it very rarely). At that point, yeah, for what I use it for they are equivalent, except now Synology keeps track of the CVEs for me, with the occasional patch. I could put a stripped Linux distro or TrueNAS or something like that on a DIY box and get the same. What's possible and what's easy are also a spectrum to weigh against each other. When you are young you have all the time you need. When you are older you just want to put it together, serve some files, and do something else, because I have done this 6 times already.
But there is actually one reason I am going DIY next time: 'uname -a'. They ship with very old kernels, and I suspect the other utilities are in the same shape. They have not updated their base system in a long time. I suspect they have basically left out all of the amazing changes the kernel has had over the past decade, randomly cherry-picking things instead. Which is fine, but it has to be creating a 'fun' support environment.
Getting a machine and setting it up as storage accessible over your home network, versus having to install a bunch of duct-taped software and hoping it reliably works all the time without fail, are two very different things.
I'm a full-time dev, and even having my Home Assistant break every time I think of upgrading it is annoyance enough. My home lights and whatnot are down for two hours while I'm mostly installing HA from scratch and recovering from the backups that I've started to take since the last collapse.
A NAS is a way more critical device, and I don't want to lose my data or need to spend 2 weeks recovering it under an anxiety attack because I hastily did one upgrade.
Time is money. I would rather buy a NAS these days, even though in the past I ran my own FreeBSD ZFS NAS. It's a much cheaper use of my time to pay a 2-3x premium on hardware if it means I spend 4 hours on the build and 1 hour on admin per year, versus 12 hours on the build and 6 hours on admin per year.
Thanks for the OS recommendations! I'm a soon to be ex-Synology user looking for a new home (their killing of Video Station also irked me).
Honestly they got into the NAS business when cloud offerings were different, and many Internet connections were a lot slower.
Synology’s market is the intersection of:
People who have lots of pirated media, who want generous storage with torrent capabilities.
People who want to store CCTV feeds.
People who find the cloud too expensive, or feel it isn’t private enough.
People with a “two is one, one is none” backup philosophy, for whom the cloud alone is not enough.
Tiny businesses that just need a windows file share.
I wish one or more HDD manufacturers would get together and sell a NAS that runs TrueNAS on it. Or even an existing NAS manufacturer (Ugreen, etc.).
All these NAS manufacturers are spending time developing their own OS, when TrueNAS is well established.
TrueNAS isn’t nearly friendly enough for the average user. HexOS may fit that bill, although it seems rather immature. It runs on top of TrueNAS.
My doctor was able to switch from Synology to TrueNAS after I advised him to replace his failing Synology NAS with a TrueNAS box and I gave him a link to the TrueNAS documentation. He is fairly average in my opinion.
I think the fact that they're a doctor puts them well above the average person's intelligence (or for your sake, at least, I hope so).
Like the other poster said, your anecdote says more about the doctor than the average user. TrueNAS documentation isn't the worst by any means, but it has far too many extremely low level controls accessible. It would be overwhelming to most users unless you stay very within a small window of basic functionality. If you're just using it for network storage, maybe it's fine... anything beyond that though is going to trip up many folks.
As an owner of two Synology boxes (and maintainer of a couple more), I'm not happy about this news. Synology needs to rethink its user base, upgrade their hardware offerings, and avoid being led by profit-hunting board members.
I hope someone high-ranking at Synology reads all the comments on this post (and many others, like those at https://mariushosting.com ) and makes the right decisions. Please don't let Synology become like the other greedy companies out there.
I was just about to replace my 2 Drobo units with two 4-bay Synology boxes; my very early Synology single-drive NAS is still working, so I thought I'd go for familiarity.
Then I read about this BS and figured it's bound to end with Synology-branded drives costing much more than other brands, as people won't have a choice. QNAP took my order instead.
Bought a Synology in 2020; I've been using it for backups and Plex since then. Only recently started doing a little more with it (Immich, Kavita, etc.).
Sometimes I brush up against its limitations and it's annoying to me; other times I like the convenience it provides (Cloud Sync, Hyper Backup). Even before this announcement, I figured that when this thing bites the dust, I would likely build something myself and run Unraid or TrueNAS.
IMO what they really needed to do was improve the QuickConnect service to function similarly to Cloudflare Zero Trust Access/Tunnels, or integrate better with them. That's really the missing link in turning your NAS into a true self-hosted service that can compete with big-tech cloud services: you don't expose your home IP and don't need to fiddle around with a reverse proxy yourself.
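For reference, the Cloudflare Tunnel flow that QuickConnect would need to match is roughly this (a sketch; the tunnel name and hostname are placeholders, and 5000 is DSM's default HTTP port):
cloudflared tunnel login
cloudflared tunnel create nas
cloudflared tunnel route dns nas nas.example.com
cloudflared tunnel run --url http://localhost:5000 nas    # outbound-only connection, so no port forwarding or exposed home IP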
> When a drive fails, one of the key factors in data security is how fast an array can be rebuilt into a healthy status. Of course, Amazon is just one vendor, but they have the distribution to do same-day and early morning overnight parts to a large portion of the US. Even overnighting a drive that arrives by noon from another vendor would be slower to arrive than two of the four other options at Amazon.
In a way this is a valid point, but it also feels a bit silly. Do people really make use of devices like this and then try to overnight a drive when something fails? You're building an array, you're designing for failure, but then you don't plan on it? You should have spare drives on hand. Replenishing those spares is rarely an emergency situation.
A lot of these are home power users.
They build the array to survive a drive failure, but as home power users without unlimited funds they don't have a hot spare or a store room they can run to. It's completely reasonable to order a spare on failure, unless it's mission-critical data needing 24/7 uptime.
They completely planned for it. They've planned that if there is a failure, they can get a new drive within 24 hours, which for home power users is generally enough, especially since you'll likely get a warning before complete failure.
I agree. I don't buy spares, but when I have a drive failure, the first thing I do is an incremental backup, so that I know my data is safe regardless while I am waiting for a drive.
Also worth noting that I don't think I've experienced hard fails; it's often the unrecoverable error count shooting up in more than one event, which tells me it's time to replace. So I don't wait for the array to be degraded.
But I guess that's the important point: monitor your drives. Synology will do that for you, but you should monitor all your other drives too. I have a script that uploads the SMART data from all my drives across all my machines to a central location, to keep an eye on SSD wear levels, SSD bytes written (sometimes you have surprises), free disk space, and SMART errors.
Do you have a link to your script? Mostly I'd love to have a good dashboard for that data.
Not the full script but can share some pointers.
Using smartctl to extract smart data as it works so well.
Generally "smartctl -j --all -l devstat -l ssd /dev/sdXXX". You might need to add "-d sat" to capture certain devices on linux (like drive on an expansion unit on synology). By the way, synology ships with an ancient version of smartctl, you can use a xcopy newer version on synology. "-j" export to json format.
Then you need to do a bit of magic to normalise the data. Like some wear level are expressed in health (start = 100) or percent used (start = 0). There are different versions of smart data, the "-l devstat" outputs a much more useful set of stats but older SSDs won't support that.
Host writes are probably the messiest part, because sometimes they are expressed in blocks, or units of 32MB, or something else. My logic is:
// NVMe: data_units_written is in units of 1000 logical blocks (per the NVMe spec)
if (nvme_smart_health_information_log != null)
{
    return nvme_smart_health_information_log.data_units_written * logical_block_size * 1000;
}
// SAS/SCSI: the error counter log reports gigabytes processed
if (scsi_error_counter_log?.write != null)
{
    // should be 1000*1000*1000
    return (long)(double.Parse(scsi_error_counter_log.write.gigabytes_processed) * 1024 * 1024 * 1024);
}
// ATA: prefer the device statistics log ("-l devstat") when available
var devstat = GetAtaDeviceStat("General Statistics", "Logical Sectors Written");
if (devstat != null)
{
    return devstat.value * logical_block_size;
}
// Fall back to vendor-specific SMART attributes, which use wildly different units
if (ata_smart_attributes?.table != null)
{
    foreach (var att in ata_smart_attributes.table)
    {
        var name = att.name;
        if (name == "Host_Writes_32MiB")
        {
            return att.raw.value * 32 * 1024 * 1024;
        }
        if (name == "Host_Writes_GiB" || name == "Total_Writes_GB" || name == "Total_Writes_GiB")
        {
            return att.raw.value * 1024 * 1024 * 1024;
        }
        if (name == "Host_Writes_MiB")
        {
            return att.raw.value * 1024 * 1024;
        }
        if (name == "Total Host Writes")
        {
            return att.raw.value;
        }
        if (name == "Total LBAs Written" || name == "Total_LBAs_Written" || name == "Cumulative Host Sectors Written")
        {
            return att.raw.value * logical_block_size;
        }
    }
}
Even then there are edge cases (e.g. drives whose logical block size is 4096); I think you need to test it against your own drive estate. My advice: just store the raw JSON output from smartctl centrally, and re-parse it as you improve your logic for all these edge cases based on your own drives.
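For anyone wanting to roll their own, the collection side can be tiny. A sketch of the shape (not my actual script; the endpoint URL and the device list are placeholders, and smartctl must be on PATH):

using System.Diagnostics;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class SmartCollector
{
    static async Task Main()
    {
        using var http = new HttpClient();
        // Enumerate your real devices here (e.g. from /dev/disk/by-id).
        foreach (var dev in new[] { "/dev/sda", "/dev/sdb" })
        {
            var psi = new ProcessStartInfo("smartctl", $"-j --all -l devstat -l ssd {dev}")
            {
                RedirectStandardOutput = true,
            };
            using var p = Process.Start(psi)!;
            string json = await p.StandardOutput.ReadToEndAsync();
            p.WaitForExit();
            // Ship the raw JSON unmodified; normalise server-side so history
            // can be re-parsed as the edge-case handling improves.
            await http.PostAsync("https://smart.example.internal/report",
                new StringContent(json, Encoding.UTF8, "application/json"));
        }
    }
}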
Failures should be rare, which means a spare HD might be sitting in a drawer without spinning for years, which HDs don’t like to do.
When you need to replace a drive, it's better to purchase a new one: it will have been manufactured recently rather than sitting around for a long time.
> a spare HD might be sitting in a drawer without spinning for years, which HDs don’t like to do.
How so? Does this imply drives "age out" while sitting at distribution warehouses too?
My Synology NAS is for my own use. I do not keep spare drives on hand; I would go to the nearby shop 20 minutes away to get a new drive. They wouldn't have Synology-branded drives, but they have the Toshiba MG series, Western Digital, and Seagate.
Within my NAS I have two different pools. One is for important data: two hard disks in SHR-1, replicated to an offsite NAS. The other pool is for less important data (movies, etc.): SHR-1 across five hard disks, 75TB total capacity, none of them from the same batch or production date. Not having that data immediately is not a problem. Losing it would suck, but I'd rebuild, so I'm fine not having a spare drive on hand.
>>You should have spare drives on hand.
I've never heard of anyone doing that for a home nas. I have one and I don't keep spare drives purely because it's hard to justify the expense.
The only drives I've obtained ahead of failure were bought after the existing drives had passed the 8-year mark, and those are earmarked for a rebuild. I would hardly call them spares.
I did once end up with a spare at the 3-year mark, but the bathtub curve of failure has held true, and now that so-called spare is 6 years old, unused, too small a drive, and never planned to be used in any way.
The conventional wisdom is that you shouldn't store drives that sit unspun for long periods, so what does it mean to have spares, unless you're spinning them up once a month and expecting them to last any longer once actually used?
I do. Also, I have an unopened 990 EVO Plus ready to drop into whatever machine needs it.
I'm not made of money. I just don't want to make excuses over some $90 bit of junk. So I have spare wifi, headset, ATX PSU, input devices, and a low-cost "lab" PSU to replace any dead wall wart. That last one was a lifesaver: the SMPS for my ISP's "business class" router died one day, so I cut and stripped the wires, set the volts and amps, and powered it that way for a few days while they shipped a replacement.
Heh, I suppose you've heard of one now. Fair enough, I could be in the minority here.
Yeah. If you don't have a couple of spare 100TB SSD NASes you can turn on in the event of failure, you are doing it wrong.
I had a hot spare in the form of a backup drive: a 12 TB external WD that I'd already burned in and had as a backup target for the NAS. When one of the drives in the NAS failed, I broke the HDD out of the enclosure and used it to replace the broken drive. It hadn't been in use for many months, and I'd rather sacrifice some backups than the array. I also technically had offsite backups I could restore in an emergency.
always run the previous drive gen's capacity.
i budget $300 each for 2 or 3 drives; that has always been the sweet spot. get the largest enterprise model at exactly that price.
that was 2tb 10 yrs ago, 10tb 5 yrs ago.
so 5 yrs ago i rebuilt storage on those 10tb drives but only using 2tb volumes (could have been 5, but i was still keeping the last-gen size as the data hadn't grown). now my old drives are spares/monthly off-machine copies. i used one when getting a warranty replacement for a failed new 10tb one, btw.
now i can get 20tb drives for that price; i will probably still only increase the volumes to 10tb at most and keep two spares.
I have 3 Synology NAS units with about 80TB of total disk space across the 3. When I replace them I will not be replacing them with Synology, they have lost me as a customer. I switched to Synology after having done the homebuilt NAS for years and just wanted something that works. OSS has come a long way since then so time to go back to what I know and can trust.
Hopefully they don't start pushing this change to older products. I don't want to have to replace my NAS, but if I ever do, it certainly won't be with another Synology product, even if they walk this decision back.
It's really the Apple of NAS. I'm not concerned since they're too expensive for me anyway, but I would be pissed to be obliged to buy their rebranded hard drives.
Good. It will stop colleagues from buying that crap. Managing an aging fleet of those is a PITA. Why bother with a crippled Linux if a full-blown, real OS is available on better hardware anyway?
Full blown OS is a valid option, but installing and maintaining a full blown OS with the equivalent of their ecosystem apps is more complex and time consuming. Some will consider that a PITA and that audience likely isn't bothered by a crippled Linux.
I bought a Synology last year and then had to return it because it didn't support two of the enterprise drive model revisions I have (but worked with the others, even one that's the same make).
For the same hardware cost I got a random mATX box that can hold 2.5x more hard drives, a much, much beefier CPU, 10x the RAM, and an NVMe. And yeah, it took an hour to set up TrueNAS, but w/e.
Same exact hard drives working perfectly fine in fedora. If it weren't for hard drive locking I'd have stuck with the Synology box out of laziness.
Running Ubuntu + Docker + Portainer on a NUC10i5FN (32GB RAM, 1TB HP EX900 M.2 SSD) — I’ve got containers for:
AdGuard Home (DNS filtering)
Scrypted (bridging CCTV to Apple Home)
Jellyfin (media streaming)
Immich (photos)
WireGuard (secure VPN)
The 2TB SSD is handling everything pretty well, but I’ve got a 2.5" SSD slot left unused. Thinking about adding a second SSD for either storage or backups, or maybe caching for media.
Any cool apps or tools people would recommend for setups like this? Also, curious about how others are using that extra SSD space in a home lab/NAS setup.
Their hardware has been dogshit for years TBH; this year's upgrades were to like ~2020 tech, and some of these models won't be upgraded again until 2030!
The only part of Synology I really like is that some of their media apps form a very tidy package. I've previously written a compatible server using NodeJS that their apps can talk to, so I think I'll have to pursue that idea further, given the vastly superior consumer hardware options that exist for NAS builds.
Do you have some examples of the superior consumer hardware options? I currently use two 12-bay Synology NAS units, and one of my favorite parts is the hardware: the easy access to the hot-swap hard drives.
If I could get that form factor, but with a custom NAS software solution, I’d be very interested.
I run a Sliger 3701 [0] with an ASRock Rack Ryzen motherboard (choose B650D4U or X570D4U to suit) using TrueNAS Scale. This replaced my old FreeNAS Atom C2000 setup that ran for over a decade with zero issues.
Add in a Mikrotik CCR2004-1G-2XS-PCIE [1] for high speed networking. Choose your own HBA.
0: https://www.sliger.com/products/rackmount/storage/cx3701/
I don't think you'd get 12x too easily, but one recent post here covered an AMD Hawk Point build with 6x 3.5" drives + 5x M.2 drives + OCuLink + up to 128GB of RAM.
Thank you for the link. Currently my needs are more 3.5" high-capacity drives and less SSD storage (aside from a TB or two of cache). I just don't need SSDs for my media collection; it seems like a waste when I can pick up 24-26+ TB drives for so much cheaper.
> Their hardware has been dogshit for years TBH
I think this is too harsh. I bought an 1821 a few years back. It takes a generic 10GbE card and does what it said on the box. It's quick and reliable. What am I missing?
I had asked them for years for the ability to exclude folders by name (or regex) in Synology Drive Sync to be able to keep out node_modules, .git and other garbage from synchronizing to my NAS. Support responded that I can just unselect the folder manually. They have never implemented it.
Is there a good alternative these days for someone that only really wants network attached storage, e.g. some drives in a box, easy RAID setup and local network access via SMB?
I use my server for everything you'd use NAS Apps for. I have an aging Seagate NAS and had been eyeing Synology but this gives me pause.
Depends on your skill level. TrueNAS is pretty great at core NAS functionality and its apps situation is also pretty good now that it's stabilizing after working through some growing pains, but it's also not as easy to use as Synology. I've used it for years (since it was still called FreeNAS) and personally wouldn't consider anything else. They offer their own hardware that is not badly priced, or you can load it up for free on your own generic DIY hardware (which will typically save you quite a few bucks over buying any all-in-one solution). Unraid is another good option for DIY that is easier to learn, though not free. Rolling your own from scratch on top of Debian or something also isn't particularly difficult if you're experienced with Linux, and running apps through Docker with something like Portainer.
For more direct competitors to Synology in terms of maximum ease of use, there are some decent options out there like QNAP, ASUSTOR, etc. YouTube is a good place to look for reviews that compare them in depth.
Can you “btrfs send” snapshots in a raid array in DSM to a Linux server?
If it was a ZFS NAS, I could ZFS send to another system.
I want to get the historical data out to an open portable system.
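(For reference, on a stock Linux box the pattern would be "btrfs subvolume snapshot -r /volume1/share /volume1/share@snap" followed by "btrfs send /volume1/share@snap | ssh backuphost btrfs receive /tank/", since send only works on read-only snapshots. Paths and hostname here are made up; the real question is whether DSM's patched btrfs tooling exposes this.)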
you really use btrfs send? found it so limiting i assumed only hosting providers would be using it, once btrfs finally gets integrated encryption of at-rest data.
always just rsync/scp things; it is less problematic and was faster in my weekend tests.
How do you transfer snapshots with rsync?
I never understand these kinds of short-sighted decisions. It takes a long time to build a brand; this could hurt customers' willingness to recommend or even buy the brand.
People aren't stupid. They know that, yet they do it anyway.
I believe Amazon became so popular because it treated its customers fairly well until recently, now that it is in an extremely dominant position.
But Synology is far from that position.
> I never understand these kind of short sighted decisions.
That's probably because you are not a CEO or a member of a board of directors.
My understanding is that they will just block drives that are not on the compatibility list, and that list includes drives from all major vendors. Right now, when you add an incompatible drive, you just get a warning. Am I missing something?
A friend of mine ditched his Synology for a very low cost multidrive chassis from Aliexpress, and this... https://unraid.net/
Seems like a good setup.
I built my own NAS loosely following this tutorial [1] (mostly for the bill of materials), currently with just 3 4TB disks. Installed NixOS w/ ZFS + K3s on it (I want to use it as a learning playground as well), and deployed a lot of software on it using TrueCharts helm charts, or some other random/self-written charts, and deployed with ArgoCD. Yes, I don't have a nice UI or a nice cloud integration, I have to do almost everything by hand but it's a side-project in itself and fun. For example, now I'm playing with a Postgres operator to manage the databases resources.
My next NAS is going to be the Ubiquiti UNAS Pro. 7 drive bays for $499. Can't beat it.
Can you put a custom OS on it, wiping their software? I have a network rack in my basement (so, shallow depth, can’t fit a “full” depth server) and I’m looking for a dumb box of hard drives with otherwise simple/supported-by-linux hardware that I can use to build my own NAS. I don’t really want to use unifi’s software for it (plus it’s nice to just run plex/jellyfin/etc directly on the box)
I currently plug a USB3 4-bay disk enclosure into my homelab server for this, but the cabling is messy and it doesn’t support 20TB drives. I could upgrade to a newer enclosure, but I’d rather have a “real” rack mount system with drive bays.
No, you can't run a custom OS as far as I know, not without some serious hackery. Admittedly the UniFi NAS software currently is nowhere near as feature-rich and enterprise-ready as TrueNAS, for example, but it's all very new. Ubiquiti lately has a track record of iterating very quickly, adding more and more enterprise features to their products. I expect to see more hardware offerings in the UNAS line soon as well. Highly recommend Ubiquiti! I have a complete setup at my house with their switches, routers, APs, and Protect cameras. It's awesome!
I assume the UniFi NAS isn't meant for running Plex, for example. Most people seem to want a general purpose mixed media server, which can handle a bunch of data but where a good chunk of that data is movies/music/books that they want to stream. A good percentage of people probably also want to run docker containers. The $500 for (was it 7?) bays of storage is great, but probably mainly functions as a true NAS, so maybe not what a lot of people want. (I think the term "NAS" is a bit overused and can be deceptive for newer users entering the market)
I fully expect Ubiquiti to add some sort of container / VM support eventually to their UNAS products.
That would be rather delicious.
Their weird situation with network speeds faster than 1Gb is irritating, but slowly improving.
Assuming shallow depth is 450mm, you could fit a Sliger shallow-depth case. My workstation runs in one.
what i did was buy a 19" shelf and put a mini-ITX board and a few 5.25" cages next to it.
Looks like their $300/4GB/1U/4-bay and $500/8GB/2U/7-bay half-depth devices with an AWS Annapurna SoC can run either the NAS or NVR Linux. Bluetooth in a rackable device is unusual. The OS might be replaceable with mainline Debian.
https://github.com/NeccoNeko/UNVR-diy-os/blob/main/IMAGES.md
If it can run Gentoo it might be a big energy savings vs my old off lease machine with ZFS …
There's also a QNAP 1U for $600, which adds M.2 NVME and optional 32GB ECC RAM https://news.ycombinator.com/item?id=40868855
We need more Thunderbolt/USB4-to-JBOD 40Gbps storage enclosure options, for use with Ryzen mini PC or Lenovo Tiny.
At first I was going to balk, but then I remembered I paid ~$1.5K for a 12-bay Synology (and again for the 12-bay expansion unit).
This is much larger I think (and it bugs me that it’s not an even number of drives, and the offset of the drives is unpleasant) but it’s rackable so that’s a plus.
The only thing I’d want to know is sound (and I’m sure I can find a YouTube video).
I've been looking for an excuse to go all-in on a Ubiquiti setup… Thanks for mentioning this, I wasn't aware Ubiquiti had a NAS product.
I believe the UNAS Pro is exceptionally quiet. Agreed that 7 drives is strange, but they wanted their touchscreen, so. I expect to see additional hardware units released in the UNAS lineup, perhaps non-pro and enterprise versions.
Highly recommended Ubiquiti. I have it all throughout my house.
What is the File System? BTRFS or ZFS?
I only wish they did a 2-bay or 4-bay version. Or better yet, something like a Time Capsule: 2-bay + Ubnt Express.
mdadm for the RAID, btrfs for the filesystem. I expect them to iterate and eventually add ZFS, as well as new hardware units.
I personally thank Synology for removing themselves from my NAS shopping list and ceding the top of the list to Unraid.
I think they are on their way out of the consumer market. Their Linux kernel is old, and DSM doesn't update it.
I am very disappointed to see that. I've been using Synology NASs for the last, oh, 14 years or so, and I have nothing but good things to say about them. Most importantly, their software works and requires nearly zero maintenance in the long term. I care a lot about this: I have too many things in my life that require my attention and impose on me. I am OK with paying more for things that Just Work, and Synology has been in that category.
The locking-down is disappointing and unnecessary. Sure, give me the option of using "certified" drives, but don't take away the option of using any drive I have.
What's better for running Plex?
Assuming I want 4 drives and something that can transcode multiple files in real time.
You're really best off not trying to lump your Plex server and your NAS together on the same box. You can use an ODROID H4 or any cheap Alder Lake-N laptop for your Plex server, and any NAS solution to store the media.
>You’re really best off not trying to lump together your Plex server and your NAS on the same box.
Why not? If you are not going to do transcoding, Plex should sit fine on a NAS.
Basically an archaic idea that a NAS has to be only good at one thing which is just file sharing.
It's hard to actually buy a computer that bad these days, although Synology still makes it easy lol.
TBH, another archaic idea is that you need to transcode at all.
All you need is one of those sticks that plugs into the HDMI port of your TV and can run VLC. I use a Chromecast. VLC can play MKVs directly off Samba shares. Done.
TBH, the only reason anyone talks about transcoding is that Synology's absolutely dog-shit hardware options make it a problem. Most Intel CPUs, most AMD CPUs, many ARM CPUs, and all the discrete GPUs are absolutely fine with munging video and audio codecs; they're just not well represented in Synology's lineup.
Can you list things you don’t like about the Synology?
I can transcode just fine on an 1821 (though I use a separate machine for it, for other reasons). The unit seems fine; what am I missing out on?
Fundamentally, I guess it feels like they have ground to a halt or even quietly retired the DSM line.
Let's look at your NAS as an example (it's actually worse for mine, cause I always loved the Slims lmao).
The DS1821+ has no GPU transcoding because it uses the AMD Ryzen V1500B; what you've observed is that this processor is just powerful enough to brute-force some transcoding, but it has no hardware transcoding:
https://www.synology.com/en-global/products/DS1821+#specs
The DS1821's successor was recently revealed to be the DS1825. So if your NAS died in a few months, that would be your obvious replacement. They decided to continue using the Ryzen V1500B, which is now seven years old. TBD if they bother releasing it this year; in a few months they could call it the DS1826 instead.
https://nascompares.com/2025/03/13/synology-ds525-ds1525-ds4...
Meanwhile, DSM still lacks support for NVMe volumes. In fact there are still no NVMe-only models at all, and you can't even install DSM to NVMe. They cut support for USB drives, they cut support for HEVC, and they almost never update Docker and some of the other tools.
The hardware's going nowhere and the software's going backwards.
I would like to use SSDs for running containers, so this is a good point. The elderly CPU is more than adequate for my needs, but, as you say, an upgrade, ever, would be nice.
I guess I’ve avoided these issues by accepting that solid state storage in a NAS is still too expensive.
My ultimate would be the Synology with an Apple silicon heart. One can dream.
While it's not officially supported you can use nvmes to run containers. I do this on my own nas and it works great. See https://www.reddit.com/r/synology/comments/1gobb14/guide_how...
That said, the person you're replying to is right. Synology has mostly stopped supporting their apps, they've removed features in cost-cutting moves (media codecs), the hardware is now hopelessly outdated, and both the kernel and Docker are completely out of date.
It feels like any technical leadership completely disappeared and now only bean counters who don't understand the product or their target market are making decisions.
You got different replies from others, but my 2¢ since you were replying to me: if you're not doing transcoding, then I agree, Plex on a NAS is fine; it's basically just a wrapper around some shared files in that case. (The post I was replying to did specify that they were doing transcoding.)
Even if you do transcoding, it's still fine so long as it has supported hardware for HW-accelerated encoding. Which doesn't require all that much - even fairly basic Synology models can do H.265.
Some do, but the set of models with HW-accelerated transcoding has been shrinking - only the "prosumer" models have Intel Quick Sync (with no guarantee that refreshed products will keep it), the lower-end models have ARM processors without HW-accelerated transcoding, and the higher-end models have Ryzen processors without GPUs.
About a year ago I bit the bullet and Frankensteined together a TrueNAS Scale box. I hated the idea of the DIY route and stuck with frustrating SOHO NAS devices for way too long. An all-in-one NAS product that has an app store or supports containers is nice in theory, but in my experience it always ended up with a NUC running Proxmox sitting next to the NAS.
Managing it is fine. It expects you to understand ZFS more than a turnkey RAID 5 + btrfs job does, but it has an OK UI, and that seems born out of ZFS people wanting that customization rather than the system forcing you to fend for yourself. I read a 15-minute explainer, built a pool, and didn't have to think about it at all, other than replacing a failed drive last month. And all that took was a quick google to make sure I was doing it right, which is exactly how I replaced drives in a standalone NAS.
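(For the curious: under the hood that drive swap is essentially "zpool replace <pool> <old-disk> <new-disk>" plus waiting for the resilver to finish; the TrueNAS UI is a thin wrapper over exactly that.)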
Plex runs very happily on my Synology box. So I'd want something that was a reasonably drop-in replacement, and not something that required more work.
For "just Plex" - more specifically anything except Synology's own suite of apps - any Mac, Windows or Linux computer or headless docker server will provide the same experience with a ton of better options, like installing to a NVMe without shenanigans for a start!
That all sounds like something that will require maintenance. Being able to slide a dead drive out, slide a new one in, and have any extra space be added automatically is fantastic. Not having to worry about updates, etc. These are things I'm willing to pay for.
If Synology is not interested in your use case, it doesn't really matter that you'd pay them to support it. But they also suck at most of what you're claiming anyway, IMHO; even their Docker version is six years out of date.
How do you install an nvme in a Mac?
How do you expand the storage to 100+ TB?
The web GUI is nice to use; what do Windows/Linux or the Mac offer? I'd prefer a Mac option, but I don't believe there is a good one.
The upside of the Mac is good Thunderbolt support. A ThunderBay 8 (or comparable) gives you 8 SATA bays. The baseline Mac Mini supports up to 18 such Thunderbolt bays via chained Thunderbolt. (You can also get NVMe Thunderbolt drives.)
I saw on Level1Techs a 4-bay chassis with an AMD APU plenty capable of transcoding multiple streams. Can't remember who made it, and you'd have to DIY the OS install, but that's pretty ideal to me.
What I do, and will continue to do, is use a USB-C disk box (Level1 also recently reviewed some they quite liked, despite the usual fears around USB) and whatever PC I have lying around. Five years strong running ZFS over USB 3 with a 4770K, regularly serving four Plex streams at once without complaints and no failures (I mean, the usual disks wearing out, but nothing caused by USB).
So if a 4770K can transcode 2x 1080p and direct-play 2x more, any old anything with hardware transcoding these days should be just fine.
I ended up just running Infuse on an AppleTV.
Then there is no need for transcoding at all.
I use a NUC with a TerraMaster DAS connected over USB.
So if I don’t use Synology’s own drives, my NAS gets slower on purpose? That sounds kinda unfair?
Apparently no one remembers that a while back Synology already "locked" a whole bunch of people out of their drives by ripping out btrfs support on their budget NAS models. And they pushed that update without any warning ahead of time. Your disks just suddenly up and stopped mounting.
Their answer to your drives no longer mounting was "connect them to a PC, pull the files and reformat". Where you are supposed to stage 20TB of data in the meantime is left up to the customer to deal with.
Fuck Synology.
As someone who switched from FreeNAS to Synology about a year ago, this is extremely disappointing. Synology has been a great experience up to this point. I have two pools: one on supported WD Reds, and the other on a hodgepodge of drives I had collecting dust.
Both pools have been very reliable so far. But if they continue down this path, I'll seek a different solution.
How might Synology’s decision to lock its 2025 Plus NAS models to only support its own branded hard drives impact long-term reliability, upgradeability, and customer trust—especially when compared to more open alternatives like QNAP or TrueNAS?
Look at all the enshittification!
It's all profits, at the end of the day. Good for the shareholders!
Just chiming in: I have 3 Synology products and recommended them for a decade. They've betrayed me, I'll never buy another one, and I'll help friends move to whatever I move to. Thinking ROCK64 appliances with TrueNAS; I will start donating to that project.
I don't know what sect of leadership (MBAs?) sees continual enshittification as the strategy, but I'll fight this economic warfare forever.
We've deployed Synology across multiple client environments for almost a decade, and it's been an incredibly reliable platform. Barring the usual hard drive failures (which are inevitable over time), we've had zero issues with SSO integration, expanding arrays with expansion units, seamless hardware upgrades, or the application layer. It just works.
That’s why I’m hoping Synology rethinks its position. Swapping out trusted, validated drives for unknowns introduces risk we’d rather avoid, especially since most of our client setups now run on SSDs. If compatibility starts breaking, that’s a serious operational concern.
Time to move to Truenas.
The only moat Synology have is their software. How far is Truenas from catching up?
In my opinion, TrueNAS has already caught up if you understand hardware and networking basics. The more advanced you are, the more likely TrueNAS is right for you, until you're eventually ready for a proper ZFS+BSD setup. The other nice thing about TrueNAS is that, in my experience, it does what you tell it to do.
Synology has excellent backup apps for O365 / Google Suite. I'll have to see what the latest TrueNAS has to offer.
Honestly, given how lots and lots of things just build on Debian, it seems easier to me to use an off-the-shelf older Supermicro motherboard with a plain IT-mode SAS HBA and something like an Enthoo Pro case, and just call it a day running stock Debian. Certainly far fewer surprises and less bullshit.
I've done the same for a few years, and use that setup also for friends with failing NAS boxes. Most of the time, two big HDDs or SSDs in RAID 1 is sufficient.
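(Concretely, the NAS part of such a box is little more than something like "mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb", an fstab entry, and a short share stanza in smb.conf - all stock Debian packages.)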
Realistically, this will be par for the course among NAS vendors in 10 years, and there will be vigorous defenses of it on Hackernews. "If you want an open NAS, build one yourself. I choose a Yoyodyne NAS because it integrates with the apps I use. And the Yoyodyne disk restriction is no big deal since I know I'll be using something supported and compatible."
Any asshole thing a company does, provided they remain solvent enough to stick to their guns for enough time, becomes an accepted industry practice. BonziBuddy generated outrage back in the day because it was spyware. Now Microsoft builds BonziBuddy tech right into Windows, and people -- professional devs, even -- are like "Yeah, I don't understand why anyone would want the hassle of desktop Linux when there's WSL2."
I will never use their Plus line, but I care about the company, so this is really sad. They provided security updates over a very long time. Recently they cut automatic updates (though not updates altogether) for older models, to make it inconvenient to keep running them. Now this. With that trajectory, I'm not sure I will buy my next NAS from them.
Spot on.
I'd add that mandating their own drives, when making drives isn't their expertise, is a bad move.
Maybe other manufacturers are the way.
They're just white labeling Toshibas
If the drives must identify as a QNAP-secured drive, they could be white-labeled Toshibas, but I'm presuming a user can then only buy QNAP's white-labeled Toshibas.
As someone who was in the market for a Synology, this has moved Synology onto my "do not trust and avoid" list.
It is very hard to get off that list, and I will warn everybody who asks me for tech advice (so literally everybody in my vicinity) about vendors on it. Good luck, Synology.
The thing is, they don't care about you anymore. They have moved firmly into the SME market, which is why they are making decisions that don't really affect company purchasing policies but really upset personal users.
Well it is good then that I did not trust that company.
I have a DS920+ for home lab purposes that I'm very happy with, but this sort of bullshit is going to make me drop that brand from any future recommendations or purchases.
I'm likely to go down the BYO NAS path going forward. It's just a stupid, customer-punishing policy. A real slap in the face.
I bought an N100-based device from AliExpress that supports two drives for my backup server. It's a cracker and runs Debian wonderfully: very smooth and responsive, runs quietly, and transfers data fairly quickly.