Well, I agree with most of what you're saying, but a couple of phrases stuck out. _"Not only are disk and memory cheaper..."_ I work in DevOps and, in my experience, phrases like that are Developer speak for "we could do it efficiently but can't be a***d, so just chuck more memory/disk/CPU at the problem." Indeed, memory and disk are cheap, if you're maintaining your own servers. As soon as you have to use managed hosting or cloud providers, it starts getting decidedly expensive. This is especially true if you have to support applications that use lots of memory, disk and CPU at the same time, as most cloud platforms are geared up to provide lots of one or two of those resources at once, but not all three.

_"...but there aren't as many users running copies of the same program on the same machine either."_ So you're assuming multi-user systems only? The vast majority of Linux servers I have seen in 16 years of working with Linux almost exclusively run services and aren't used for multi-user access. In that case, shared libraries are essential: running 60 identical statically linked processes on a machine would result in huge memory requirements. Combine that with lazy developers bloating memory use and you'll suddenly find you need double or triple the hardware you needed before. I can see your point for workstations and true multi-user systems, but for the vast majority of Linux servers you're completely wrong: shared libraries are not obsolete, not by a long, long way.
You say "I challenge anyone to show empirical evidence that the fixes are more common than the breakage". But you wrote the article, so it is actually your challenge to back it up with some evidence. Otherwise you /might/ be spreading nonsense. Now, I absolutely agree that it's possible to botch a library update. I agree it creates an untested configuration. But it all depends. Library code is insulated by an API layer, and it should absolutely be possible to fix bugs in the library without changing the API or the behaviour. When an important security problem is discovered in a crucial piece of code, I think it is substantially better to fix it for all applications at once, instead of relying on each of them to update. And just think how much more bandwidth shipping rebuilt copies of every application would suck up in the process.
I also see dynamic linking as a security feature. If a CVE is found in your lib, you don't need to check every application for the issue (even if just by consulting a reliable log showing which library version was used for the build) and then rebuild every single one. Just update the library and you are done. Security auditing of your systems is also part of maintenance.
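To make that concrete, here's a minimal sketch of the kind of audit a shared library enables. It shells out to `ldd` to list the libraries a binary would load at run time, so after one library update you can see which binaries pick up the fix automatically. This assumes a Linux system with `ldd` installed; `/bin/ls` is just an example path.

```python
# Sketch (assumes Linux with ldd): list the shared libraries a binary
# resolves at load time. After updating, say, a patched libc or libssl,
# every binary that lists it here gets the fix without a rebuild.
import subprocess

def shared_libs(path):
    out = subprocess.run(["ldd", path], capture_output=True, text=True)
    libs = []
    for line in out.stdout.splitlines():
        line = line.strip()
        if "=>" in line:
            # e.g. "libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x...)"
            libs.append(line.split()[0])
        elif line.startswith("/"):
            # the dynamic loader itself, e.g. "/lib64/ld-linux-x86-64.so.2"
            libs.append(line.split()[0])
    return libs

if __name__ == "__main__":
    for lib in shared_libs("/bin/ls"):
        print(lib)
```

Running this over `/usr/bin/*` is a crude but effective way to enumerate which installed programs depend on a library you've just patched.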
I'm on the side of static linking. Shared libraries don't help when running multiple copies of an application, because the text segment will be shared anyway. Shared libraries will save space when running many different apps that share the library, but at least in a VM environment, the VM page deduplication code will detect and share read-only pages that are identical anyway. I think there was a recent paper about deduplication for non-VM environments as well. Of course the linker would have to page-align libraries.

In the old days with static libraries, you only pulled in routines that you actually touched, not all of them, so a statically linked application might be quite a bit smaller than the sum of the libraries. This reduces TLB misses as well. The OS might be able to use huge pages for text segments, saving further on TLB misses. The security argument from yaccz has merit, but changing library versions to fix security bugs doesn't automatically retest and requalify the application.
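The text-segment sharing mentioned above can be observed directly on Linux: the kernel maps each file's read-only executable pages once and shares them between every process mapping the same file, whether that's a shared library or multiple copies of one binary. A minimal sketch (assuming Linux with `/proc` mounted) that lists the file-backed executable mappings of the current process:

```python
# Sketch (assumes Linux): list this process's file-backed executable
# mappings from /proc/self/maps. These read-only text segments are the
# pages the kernel shares between all processes mapping the same file.
def executable_mappings():
    paths = []
    with open("/proc/self/maps") as f:
        for line in f:
            parts = line.split()
            # fields: address perms offset dev inode [pathname]
            if len(parts) >= 6 and "x" in parts[1] and parts[5].startswith("/"):
                paths.append(parts[5])
    return paths

if __name__ == "__main__":
    for path in executable_mappings():
        print(path)
```

Run it twice in two separate processes and the same library paths show up in both; the kernel backs those ranges with the same physical pages.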
I think this is an interesting argument. I have a few 10-year-old Linux binaries I can still run, entirely thanks to the fact that I thought to statically link them. Weirdly, macOS does not even allow static linking; specifically, you can't statically link against libc. Their [explanation](https://developer.apple.com/library/mac/qa/qa1118/_index.html) is that they don't want to guarantee binary compatibility with the kernel, only with the shared library interfaces. I don't much like that decision, but I suspect they thought it through.