LWN: Comments on "LWN is back"
https://lwn.net/Articles/1031536/
This is a special feed containing comments posted to the individual LWN article titled "LWN is back".
Mon, 13 Oct 2025 22:13:39 +0000

Corporate separation (corbet, Mon, 11 Aug 2025 14:03:20 +0000) https://lwn.net/Articles/1033249/

That separation also insulates one company from a failure of the other; if the company using the building goes bankrupt, the building itself is unaffected (beyond needing to find a new tenant).

Sounds like double ungood day for Linode (farnz, Mon, 11 Aug 2025 10:02:24 +0000) https://lwn.net/Articles/1033227/

I've noted a common pattern across the world (including the EU) where a publicly traded holding company owns two sub-companies: one owns capital assets like land and buildings, which it leases to the other sub-company that operates them.

The purpose is not taxation-related in the cases I've seen; rather, it's that the sub-companies have separately audited accounts, so as an investor in the holding company you can be confident that the results aren't being "goosed" by accounting tricks that move losses between the capital and operational sides. Instead, you've got one audited set of accounts for the capital side and a separate set for the operational side, which ensures you cannot be misled by accounting games.

Sounds like double ungood day for Linode (bartoc, Thu, 07 Aug 2025 22:36:24 +0000) https://lwn.net/Articles/1032912/

I'm not sure how it works in the EU, but in the USA these sorts of leases are really, really common for all types of real estate.

I think it's because C-corporation taxation is quite punishing compared to other ownership structures, particularly when it comes to realizing capital gains.

Sounds like double ungood day for Linode (anselm, Wed, 30 Jul 2025 09:19:34 +0000) https://lwn.net/Articles/1031883/

> once the cooling goes, you *have* to shut down servers or electronic components start to melt themselves.

Way back at university we once had a week or so of unscheduled downtime because legionella bacteria had been discovered in the main campus server room AC, and everything had to be shut down so the cooling system could be disinfected. This was at the height of summer, of course, with outside temperatures in the 30°C+ range.

It especially sucked because, besides the NFS servers that held everybody's files (even if they had their own workstation on their desk), the shutdown included the machine everyone read their e-mail on. This was in the early 1990s, when e-mail was still much more of a thing than it is today but POP3/IMAP hadn't caught on, so you would telnet to the e-mail server and use the mail command there, or MH if you were a serious e-mail nerd. Some sysadmins spent sleepless nights jury-rigging a replacement system in a cool and airy location elsewhere on the premises just so people could get at their e-mail.
Backup providers (meyert, Tue, 29 Jul 2025 21:45:43 +0000) https://lwn.net/Articles/1031865/

Of course you want to host your website on a multi-regional Kubernetes cluster for high availability!

Backup providers (mbunkus, Tue, 29 Jul 2025 19:27:56 +0000) https://lwn.net/Articles/1031860/

Again, I totally agree. What I was talking about, though, is that there were customers who _did pay_ for the optional backup add-on, and those backups were lost too, because the backup servers were located in the same DC that burned down.

You can argue that having only one backup location is not enough, and I also agree, but still — the setup chosen by OVH was downright incompetent.

Backup providers (Cyberax, Tue, 29 Jul 2025 17:54:12 +0000) https://lwn.net/Articles/1031849/

Typically cloud providers don't provide backups for servers. In fact, they explicitly tell you that you can lose the data if the DC is destroyed.

It's still your responsibility to mirror the data into another region (datacenter) and/or to use storage that provides durability guarantees (like S3 in AWS).
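To make Cyberax's point concrete: mirroring into another region can be as simple as periodically copying every object from a bucket in one region into a bucket somewhere else, run from a machine that would survive the loss of the primary datacenter. The sketch below is a minimal example using boto3; the bucket names and regions are placeholders, and a production setup would more likely enable S3's built-in cross-region replication rather than a hand-rolled loop.

```python
# Minimal do-it-yourself mirror of one S3 bucket into a bucket in another
# region. Bucket names and regions are placeholders, not real resources.
import boto3

SRC_BUCKET = "example-prod-data"      # hypothetical bucket in us-east-1
DST_BUCKET = "example-prod-data-dr"   # hypothetical mirror bucket in eu-central-1

src = boto3.client("s3", region_name="us-east-1")
dst = boto3.client("s3", region_name="eu-central-1")

paginator = src.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=SRC_BUCKET):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        # Managed copy: S3 performs the transfer server-side, pulling the
        # object from the source bucket into the destination bucket.
        dst.copy({"Bucket": SRC_BUCKET, "Key": key}, DST_BUCKET, key,
                 SourceClient=src)
        print(f"mirrored {key}")
```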
HostHatch (yodermk, Tue, 29 Jul 2025 17:53:03 +0000) https://lwn.net/Articles/1031850/

Curious if anyone has experience with https://hosthatch.com/. I'm leaning towards putting some personal stuff there; it looks like a really good value.

Sounds like double ungood day for Linode (yodermk, Tue, 29 Jul 2025 17:44:47 +0000) https://lwn.net/Articles/1031848/

> I've worked at a company like this that had a similar outage in the late 2000s due to a truck hitting the building (yeah)

Haha, I was there working support that night. Fun times!

Backup providers (mbunkus, Tue, 29 Jul 2025 14:21:18 +0000) https://lwn.net/Articles/1031818/

I totally agree with you wrt. backup responsibilities. The thing is that in the OVH case it wasn't only bare-bones VMs that were affected: even customers who did pay for backups were hit, because OVH had located the backup targets in the same data centers as the machines being backed up, and therefore the backups burned down too. I even think they lied to customers about where backups were stored, but don't quote me on that.

That part's definitely on OVH.

Sounds like double ungood day for Linode (archaic, Tue, 29 Jul 2025 14:08:21 +0000) https://lwn.net/Articles/1031817/

During hurricane Sandy, I (like many others) found myself carrying 5-gallon buckets of diesel up 31 flights of stairs because the pumps were offline. Since this was Manhattan, the inability to buy that fuel locally, plus the ever-increasing delays in leaving the island to get it and then trying to get back into Manhattan with it, resulted in a human pipeline of nearly 100 people, just to keep the 10 or so sad sacks from having nothing to carry up those stairs. Ahhh, better days.... :)

LLMs looking over your recovery plan (farnz, Tue, 29 Jul 2025 11:29:59 +0000) https://lwn.net/Articles/1031746/

It can also do things like decide that certain job titles imply particular people (not reliably, but often enough to be useful), and then highlight that your recovery plans depend on Chris being present at the site, because Chris is the Site Manager, the Health and Safety Lead, and the Chief Electrician, and your plans depend on the Site Manager, the Health and Safety Lead, the Chief Electrician, or the Deputy Chief Electrician being on site.

While the LLM made a mistake here (there are two people you rely on, not one), it still highlighted that a document saying 1 of 4 roles is needed to recover has, in fact, become 1 of 2 people.

Sounds like double ungood day for Linode (mathstuf, Tue, 29 Jul 2025 11:02:04 +0000) https://lwn.net/Articles/1031744/

I'm not saying to just blindly hand it to an LLM and call it a day on its first response. But you can use it to ask whether it notices anything like a dependency cycle, implicit knowledge, or steps lacking clarification.

It's obviously not going to know about problems like "Bob has the key, but it is lost on his 100-key keyring", but it can (hopefully) notice things like "it says to get the key, but not where the key lives" that might be implicit knowledge in the author's eyes. Instructions like these really should get a thorough once-over by someone *not* intimately familiar with the process, to help rid them of such implicit assumptions.

Sounds like double ungood day for Linode (tialaramex, Tue, 29 Jul 2025 10:35:44 +0000) https://lwn.net/Articles/1031740/

I think the train issue I had a few Christmases ago was worse.

I was at a complex London terminus. They told us that all trains from one set of platforms weren't running due to a tree falling in the storm - which makes sense: tree on line, can't move trains, no service, checks out.

But then every few minutes they would announce "one" exception which was running, the next train for those platforms, and they continued doing this for over an hour. "No trains." "Except the 1145." "No trains." "Except the 1152." "No trains." "Except the 1156." And so on. Rather than saying "OK, the trains are running normally but...", or actually not running trains because there's a tree in the way, they had decided to gaslight their customers by insisting there wouldn't be any trains, then nonetheless running every single train on schedule anyway.

I assume there's some mismatch between the strategy for running a day's trains (tree blocks line, OK, no trains) and the tactics of running individual services (we can just route around the closed section, for every single train), and it wasn't anybody's job to ensure the outcome made coherent sense.

The result, however, was that the normal departure boards didn't work - there aren't any trains, they've just said so - so you couldn't tell when your train would run or where from. But if you just gave up and left the station, your train ran as scheduled anyway and you missed it; you needed to know where it ought to be and then get aboard when, inevitably, it did run after all. Very silly.
Sounds like double ungood day for Linode (Wol, Tue, 29 Jul 2025 07:56:04 +0000) https://lwn.net/Articles/1031734/

I think I got this from Risks...

The generator in this incident had a five-minute header tank, which was refilled by a pump from the main tank. Guess what the pump was plugged into... the mains...

Cheers,
Wol

Backup providers (taladar, Tue, 29 Jul 2025 07:40:40 +0000) https://lwn.net/Articles/1031731/

APIs are very useful to have, but you don't need them for every kind of detailed service. APIs for creating, deleting, and rebooting servers and similar tasks are quite useful for automating installations, though (e.g. the Hetzner API), and they should be relatively trivial to replace if you need to move somewhere else.
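The kind of automation taladar describes is a thin layer over HTTP. Below is a minimal sketch against the Hetzner Cloud API; the token, server type, image, and location values are placeholders to check against the current API documentation, and a real script would wait for the create action to finish before rebooting or deleting anything.

```python
# Sketch of create/reboot/delete automation via the Hetzner Cloud API.
# The token and the server_type/image/location values are placeholders.
import requests

API = "https://api.hetzner.cloud/v1"
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}  # placeholder token

def create_server(name: str) -> int:
    """Create a small server and return its numeric ID."""
    resp = requests.post(f"{API}/servers", headers=HEADERS, json={
        "name": name,
        "server_type": "cx22",   # assumed plan name; pick one your account offers
        "image": "debian-12",    # assumed image name
        "location": "fsn1",      # assumed location (Falkenstein)
    })
    resp.raise_for_status()
    return resp.json()["server"]["id"]

def reboot_server(server_id: int) -> None:
    requests.post(f"{API}/servers/{server_id}/actions/reboot",
                  headers=HEADERS).raise_for_status()

def delete_server(server_id: int) -> None:
    requests.delete(f"{API}/servers/{server_id}",
                    headers=HEADERS).raise_for_status()

if __name__ == "__main__":
    sid = create_server("scratch-box")
    # ... wait for the server to finish provisioning before acting on it ...
    reboot_server(sid)
    delete_server(sid)
```

The portability point holds: any provider with equivalent create/reboot/delete endpoints can be slotted in behind the same three functions.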
Backup providers (taladar, Tue, 29 Jul 2025 07:38:13 +0000) https://lwn.net/Articles/1031730/

Lots of good experiences with Hetzner here too. About the only complaint is that sometimes our customers would like a bit more detail than Hetzner provides after short (rare) outages. But personally I think the amount of detail is pretty reasonable most of the time.

I especially like their APIs, which allow automating a lot of things.

Sounds like double ungood day for Linode (taladar, Tue, 29 Jul 2025 07:33:15 +0000) https://lwn.net/Articles/1031729/

Why would you ever entrust something like that to an LLM? Here you want reliable answers, and algorithms exist that give reliable answers.

Sounds like double ungood day for Linode (MortenSickel, Tue, 29 Jul 2025 06:42:29 +0000) https://lwn.net/Articles/1031727/

We run a small on-site DC at work. There we have a diesel generator that can power the entire building; it is tested (and refuelled) every three months by running the entire building off the generator, and it is serviced once a year. So when we had a power outage about a year ago, the battery-powered UPSs kicked in as they should to keep the servers running while the generator started up; the generator started, ran for about five minutes, overheated, and shut down. (The power outage lasted for a couple of hours.)

Just a couple of weeks before, we had done a system test and had the yearly service. It turned out there was some contamination in the cooling liquid that clogged the radiator, and this was not detected during the service... The company doing the service got a few questions to answer, but as mentioned elsewhere here, the good thing is that we learned how to bring the DC up from nothing.

Sounds like double ungood day for Linode (mathstuf, Tue, 29 Jul 2025 00:08:41 +0000) https://lwn.net/Articles/1031722/

That's something LLMs might actually be useful at helping to spot, assuming you have things written down in an LLM-consumable medium (i.e., not only stored in meatspace).

Sounds like double ungood day for Linode (bracher, Mon, 28 Jul 2025 23:05:05 +0000) https://lwn.net/Articles/1031718/

I'm reminded of an incident from early in my career. We were co-located in a DC that sat on two theoretically independent power grids, plus diesel generators as backup in the unlikely event that both grids went down. It seemed as if these folks had thought of everything; they had really detailed procedures for just about everything.

Naturally both grids eventually _did_ go down, and the generators kicked in, all according to plan. And a few minutes later all of the generators died. It turned out that they had procedures for _many_ things, including testing the failover to the generators on a monthly basis. The one thing they did _not_ have a procedure for was refueling the generators. So when the generators were eventually needed, they had only a few minutes of fuel left.

Backup providers (mrkiko, Mon, 28 Jul 2025 21:22:20 +0000) https://lwn.net/Articles/1031711/

I am having a good experience with them as well. I'm hoping they can implement a blind-user-friendly way to access the system console for their virtual private server instances. When networking does not come up for some reason, finding out what failed during boot can be tricky. Last time I had to start a QEMU VM from a live ISO environment.

Sounds like double ungood day for Linode (smurf, Mon, 28 Jul 2025 21:02:59 +0000) https://lwn.net/Articles/1031709/

This is why reasonable badge systems have an offline mode which knows the cards of a few senior DC operators, along with the card that's in the locker the fire brigade has access to.

The interesting part about cyclic dependencies is that you have customers, who need to be notified when their servers go belly-up for some reason (including when the router they're behind blows a fuse). Thus you need dependency tracking anyway, which presumably should alert you when there are any cycles in that graph, which should prevent this from happening. Famous last words …
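Smurf's cycle check is cheap to automate once the dependencies are written down as data: a depth-first search over the graph reports loops such as "the badge system needs the DC network, and fixing the DC network needs someone to badge in" before an outage finds them for you. A minimal sketch, with an invented example graph:

```python
# Find one cycle in a dependency graph, if any. The example components
# below are invented for illustration.
from typing import Dict, List, Optional

def find_cycle(deps: Dict[str, List[str]]) -> Optional[List[str]]:
    """Return one dependency cycle as a list of nodes, or None if acyclic."""
    WHITE, GREY, BLACK = 0, 1, 2          # unvisited / on current path / done
    color = {node: WHITE for node in deps}
    path: List[str] = []

    def visit(node: str) -> Optional[List[str]]:
        color[node] = GREY
        path.append(node)
        for dep in deps.get(node, []):
            if color.get(dep, WHITE) == GREY:      # back edge: cycle found
                return path[path.index(dep):] + [dep]
            if color.get(dep, WHITE) == WHITE:
                cycle = visit(dep)
                if cycle:
                    return cycle
        path.pop()
        color[node] = BLACK
        return None

    for node in list(deps):
        if color[node] == WHITE:
            cycle = visit(node)
            if cycle:
                return cycle
    return None

if __name__ == "__main__":
    graph = {
        "customer_notifications": ["dc_network"],
        "dc_network": ["routers", "badge_system"],  # techs must badge in to fix routers
        "badge_system": ["dc_network"],             # badge system syncs over the DC network
        "routers": [],
    }
    print(find_cycle(graph))  # e.g. ['dc_network', 'badge_system', 'dc_network']
```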
Backup providers (chmod, Mon, 28 Jul 2025 20:35:07 +0000) https://lwn.net/Articles/1031708/

Arch Linux uses Hetzner for a huge part of its infrastructure, too: see the archlinux/infrastructure repository (https://gitlab.archlinux.org/archlinux/infrastructure/-/blob/master/tf-stage1/archlinux.tf?ref_type=heads#L83).

In the end, it's important to have two independent providers, in case of a disaster (like a DC on fire).

Backup providers (dskoll, Mon, 28 Jul 2025 19:33:48 +0000) https://lwn.net/Articles/1031700/

I agree with this. And it's another reason why I prefer to get just generic Linux machines or VMs rather than paying for fancy cloud stuff. There's much less vendor lock-in if all you need is root on your own Linux machine; it's cheaper; and you can manage your own backups.

Don't fall for the siren song of fancy cloud APIs that tie you to a specific vendor. They're convenient, but they hook you in.

Backup providers (Heretic_Blacksheep, Mon, 28 Jul 2025 18:36:55 +0000) https://lwn.net/Articles/1031695/

I'd argue that unless you're paying for something like SaaS, PaaS, or some other cloud operation, the onus of backups should be on the renter. There's a difference between simply paying for a VM or server hardware and paying for more expensive cloud-oriented services, and one of those differences is who's responsible for data backup. If I'm paying for basic virtual space or basic server hardware, then the main virtual image is on my own hardware along with any backups; the image being hosted is the copy. If I'm hosting mixed data in the cloud, then I want to make sure the hosting provider is providing backup-copy services. I'm unfamiliar with OVH's services, but if they weren't providing backups and didn't claim to provide them, then the fault is not OVH's. Anyway, the point is: buyer beware, and read the fine print.

Backup providers (seneca6, Mon, 28 Jul 2025 16:42:47 +0000) https://lwn.net/Articles/1031680/

The key is to have a full backup not just in another region, but at your own home or at a completely different company. Catastrophic failures like the OVH fire are one thing, and if you didn't pay for geographic redundancy, the provider won't proactively make backups of your data. Another worry is the sudden termination of your account because "the algorithm said so".

My two points for hosting are: look at classic dedicated servers, not "cloud", if you don't need any cloud features. If you don't need modern hardware, go for the old stuff in OVH's "eco" servers or Hetzner's "auctions". (Hyperscalers happily sell you old hardware for very modern prices.) And look for providers where all traffic is included. Sure, at some point speed is capped, but it's a much better feeling to know you can't get a nasty billing surprise when someone popular links to you, or when some botfarm or some scrapers show up. Especially for community projects.

Re: LWN is back (sionescu, Mon, 28 Jul 2025 16:41:33 +0000) https://lwn.net/Articles/1031681/

Something like a dedicated status.lwn.net would be good (and not too expensive either).

Backup providers (dskoll, Mon, 28 Jul 2025 16:29:47 +0000) https://lwn.net/Articles/1031679/

I remember the OVH fire; that was in France, I believe.

And that's one reason I have VMs in two different data centres in cities about 550 km apart. 🙂

Sounds like double ungood day for Linode (farnz, Mon, 28 Jul 2025 16:05:32 +0000) https://lwn.net/Articles/1031672/

And I suspect that if you could review the chain of decisions that led to "door badge system depends on the DC being up and running", every single one would make sense in isolation; it's only when you test "what happens when we take the DC down remotely? Can we get in?" that you discover these decisions have chained together to imply "when the DC is down, the door badge system is also down".

For example, the door badge system might be configured to automatically sync access permissions from internal directories (so that you don't have to manually update DC access permissions as people join and leave), and to fail closed if the internal directories, including all mirrors in other DCs, are unavailable (on the basis that you don't want an attacker to be able to cut the OOB Internet link and get in). Then you allow the badge system to use the DC's Internet connection (because it's more reliable than the OOB link, so you get fewer problems where it can't sync), and then you terminate the OOB Internet link for the badge system (on the basis that the DC has redundant fibres and routers, so it won't go down, whereas the OOB link is a consumer link without redundancy). And then you update the config management system so that, for all but legacy systems (like the DC routers), if you don't confirm within five minutes that a change is good, it auto-reverts, so people develop a habit of testing rather than carefully confirming that they've not made a mistake.

All these decisions sound reasonable in isolation, but they chain together into a world where a mistake in the DC's router configuration results in the door badge system locking you out.
Fediverse notifications (jzb, Mon, 28 Jul 2025 15:51:32 +0000) https://lwn.net/Articles/1031678/

Yes; we have realized that replies are not visible from the profile page unless one clicks on "Posts and Replies". We'll be working on more redundancy and better notifications in the event this happens again.

Fediverse notifications (mote, Mon, 28 Jul 2025 15:51:17 +0000) https://lwn.net/Articles/1031676/

> posted as replies rather than a new post, Mastodon seems to like to hide those

Hopefully constructive feedback from a very lightweight Masto user (lurker/reader): the problem is incoming information overload for the amount of time I want to devote to the platform; I quickly found that I needed to disable all replies and boosts in my main feed. Reply content is never seen unless I intentionally click to open a post and read the replies. My personal usage mirrors what you've experienced as a content publisher. $0.02, hth.

Backup providers (jorgegv, Mon, 28 Jul 2025 15:48:37 +0000) https://lwn.net/Articles/1031677/

+1 for Hetzner.

Pretty much the same good experience here, for about 15 years now and counting. Dedicated servers are top-notch and quite cheap. And great support.

Fediverse notifications (zdzichu, Mon, 28 Jul 2025 15:43:35 +0000) https://lwn.net/Articles/1031673/

This post is not visible on https://fedi.lwn.net/@lwn . The posts visible at this moment are: Security updates for Monday; LWN is back; 6.16 is out; Rethinking confidential VMs. The last post, about rethinking, is from "3d ago".

Backup providers (imcdnzl, Mon, 28 Jul 2025 15:32:02 +0000) https://lwn.net/Articles/1031671/

OVH had a massive fire a couple of years back (just search for "OVH fire"); they didn't have backups, and a lot of customers lost all their data. Servers you can replace easily enough, but not data.

They're probably safer now, I guess, as they will have learnt from that.

I think the key is to have a backup of all data and server config in another region, whether you use a small provider or one of the big hyperscalers.
Backup providers (burki99, Mon, 28 Jul 2025 14:46:01 +0000) https://lwn.net/Articles/1031668/

I've been with Hetzner for over 20 years: first dedicated servers, now cloud instances. The experience has been flawless, the few times I needed support it was quick and knowledgeable, and the pricing has always been very reasonable.

Backup providers (koverstreet, Mon, 28 Jul 2025 14:23:28 +0000) https://lwn.net/Articles/1031603/

I shifted all of my stuff to Hetzner a while back. It's a data point of one, and I'd love to hear from others, but so far I've been impressed.

Sounds like double ungood day for Linode (paulj, Mon, 28 Jul 2025 14:19:23 +0000) https://lwn.net/Articles/1031594/

It's like the time Facebook had a DC go down. Someone had to physically go from the nearest Production Engineering office to the DC to start the initial steps of the bring-back. When they arrived they found they couldn't get in: the door badge-entry system was out, because it depended on the DC. They, plus the DC engineers, had to resort to some good old physical mallet-based engineering to hack their way in.

Backup providers (lyda, Mon, 28 Jul 2025 14:16:51 +0000) https://lwn.net/Articles/1031593/

Living in the EU, I agree. I'm looking at OVH and Scaleway. Both are French.

Backup providers (dskoll, Mon, 28 Jul 2025 14:09:51 +0000) https://lwn.net/Articles/1031591/

I don't know if you're looking to diversify providers, but these are two that I use and have had good luck with:

1. OVHCloud (https://www.ovhcloud.com/en-ca/). I have a virtual machine in one of their Montreal data centres.
2. Luna Node (https://www.lunanode.com/). I have a VM in one of their Toronto data centres. (Unlike OVH, Luna Node doesn't own the data centre; they lease space from the owner.)

I'm in Canada, so having my data in Canada was important to me. If, for similar reasons, you want to stay in the US, I'm pretty sure OVH has US data centres.