Linux in the news
See also: last week's Letters page.
Letters to the editor should be sent to email@example.com. Preference will be given to letters which are short, to the point, and well written. If you want your email address "anti-spammed" in some way please be sure to let us know. We do not have a policy against anonymous letters, but we will be reluctant to include them.
October 25, 2001
From: Aldrin Martoq <firstname.lastname@example.org>
To: <email@example.com>
Subject: Thanks for "On the Desktop"
Date: Fri, 19 Oct 2001 15:08:51 -0300 (CDT)
Cc: <firstname.lastname@example.org>

Michael,

This letter is just to thank you for all the good stuff you put into the "On the Desktop" section of lwn.net. I followed your column every day; you did a *very good job*, from the beginning to the end. "On the Desktop" is the kind of section that was missing on lwn... I'm very sorry that the column is not there now.

Well, I hope the best for you and lwn...

Greetings from Santiago de Chile,

--
Aldrin

To give is to give, and not to mark the cards; simply to give.
To give is to give, and not to explain it to anyone; there is nothing to explain.
  -- Fito Paez, "Dar es dar" ("To give is to give")
From: "Jay R. Ashworth" <email@example.com>
To: firstname.lastname@example.org
Subject: Project Liberty
Date: Tue, 23 Oct 2001 14:17:16 -0400
Cc: email@example.com, firstname.lastname@example.org, email@example.com, firstname.lastname@example.org, email@example.com, firstname.lastname@example.org

In last week's Linux Weekly News, there was some preliminary coverage of Project Liberty, an "open" alternative to Microsoft's Hailstorm, which is -- very roughly -- an attempt to embed Passport into everything on the planet. The short version: a repository of information about your person, life, and preferences which can be accessed by people and companies you authorise, to provide authentication that you are you, and information about, for example, your default purchasing preferences (credit card numbers, which card to use, whether you prefer first class or coach, etc.).

Now, this is, fundamentally, not an especially bad idea. But how it is implemented is -- given the sort of information which it might end up holding -- pretty crucial to your personal privacy: do you want anyone except your doctor and your pharmacist knowing that you have a prescription for protease inhibitors? (Drugs used to control AIDS and related conditions.) You probably don't even want your *health insurer* to know that, even though perhaps you want them to know *other* things about you. And therein lies the major problem: Hailstorm will be run by Microsoft. And we all know how pristine Microsoft's track record is for placing the interests of individuals above those of the large corporations off of whom Microsoft makes lots of money. Right?

So here comes Project Liberty, an "open" alternative to this. They don't have much design done yet, I don't think, so we don't know what *specific* goals Project Liberty will be aiming towards.
But that's good, because it means that this is the exact time for private individuals to be placing their bets on what they think is important: personal privacy and control are good choices there, IMHO. I know that in our New World it's almost unpatriotic to be concerned about personal privacy, but you know what? That's a wrongheaded, short-sighted, and dangerous outlook to have. Our country became something to be proud of, to protect, and to defend precisely *because* it attempted to secure such liberties to the people against government control, and corporations should be given no extra leash -- they work for *us*, in the final analysis, just like the government.

But the most fundamental tenet of Project Liberty's operation must be, for it to succeed, that it will always favor the desires and interests of those one billion people whose identities it likes to tout its representation of, *over* the interests of the corporations with all the money. From a design standpoint, it must make it possible to break down your information to a sufficiently fine granularity to allow you to authorize someone's access to only the data which you want them to have... and indeed, to make it as difficult as possible for different providers to cross-correlate the information they hold privately about you with one another. (Why do I get my cable modem service from one company, my wireless Internet from someone else, and my cellphone service from yet another company? Because I *can*, and because if one bill is late, I don't get cut off from all three. Do I want to give that flexibility up? Certainly not.)

Ensuring that the provision of the convenience of "single sign-on" won't deprive me of rights and conveniences I now have won't necessarily be easy for the Project Liberty folks. But if they don't do it, and stick to it, then I will not -- and you should not -- give them any more quarter than Microsoft. Regardless of whom they have on their side.
Cheers,
-- jr 'I regret that I have but one asterisk for my country'

Jay R. Ashworth <email@example.com>
Member of the Technical Staff, The Suncoast Freenet
Baylink / RFC 2100 / The Things I Think
Tampa Bay, Florida    http://baylink.pitas.com    +1 727 804 5015
"Usenet: it's enough to make you loose your mind." -- me
From: Alex Owen <firstname.lastname@example.org>
To: <email@example.com>
Subject: Open source BIOS/Firmware
Date: Thu, 18 Oct 2001 10:54:51 +0100 (BST)

Sir,

I would like to comment on your article of October 18, 2001 entitled "Open Source BIOS Projects".

We must remember what BIOS stands for: "Basic Input/Output System", or something like that! The BIOS in CP/M and DOS communicated with the hardware in such a way that "drivers" were unheard of; the BIOS provided the hardware drivers. What many of us now use the BIOS for is booting. This is in fact the job of firmware, not of a BIOS. In these days of operating systems that include their own optimised drivers, the BIOS is obsolete. All that is required to boot a system is firmware.

The three projects you describe have different goals, but I believe you misinterpreted those goals. Here is my interpretation:

FreeBIOS: A free implementation of BIOS code, to allow a warm glow that no proprietary code is needed. This is essentially implementing an obsolete paradigm under a "free" licence. :-(

LinuxBIOS: This is not really a BIOS project but a Linux-in-ROM project. Why not put the OS in ROM? Then booting is quick and easy... BUT this ties the machine to one OS. :-(

OpenBIOS: Again a misnomer, as this is really a firmware project, not a BIOS project. In my opinion this is the way forward. This project aims to produce a free implementation of the OpenFirmware standard. It is not a BIOS, as it is not intended to be used by the OS after booting is complete. It is OS independent and indeed CPU independent! Yes, the same card with the same on-board boot code (FCode) can be used by different CPU types! OpenFirmware provides a rich command-line interface allowing booting over the serial port (yes, downloading the kernel over the serial interface!), network booting, and booting from ROM or disk. This is a flexible and platform-independent STANDARD which, in my humble opinion, can only be the way forward.
Sadly I have not seen an implementation on i?86 machines, probably because Windows does not demand it... but then Windows does not really need a BIOS; some other boot firmware would do! I hope this has opened the eyes of some LWN readers who have been unlucky enough to only experience i?86 hardware!

Yours faithfully,
Alex Owen
firstname.lastname@example.org
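For readers who have never seen it, the rich command-line interface the letter describes looks roughly like the following. This is an illustrative sketch of an OpenFirmware-style session at the `ok` prompt; the exact commands and configuration-variable names vary between implementations (backslash introduces a Forth comment):

```
ok printenv boot-device        \ inspect a configuration variable
ok setenv boot-device net      \ make network booting the default
ok boot net                    \ boot over the network right now
ok boot disk                   \ ...or boot from a local disk instead
```

Because this interface lives in firmware, the same commands work regardless of which operating system will eventually be loaded.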
From: "Oleg P. Philon" <email@example.com>
To: firstname.lastname@example.org
Subject: long awaited 2.5 test kernel, sort of
Date: Sat, 20 Oct 2001 13:01:40 +0300

More and more talk is arising about opening the next experimental kernel tree. It seems to me, strictly from a user's perspective, that Linus, intentionally or not, has already created a new situation in his venerable project. This situation somewhat resembles the testing distribution in Debian development. For those not familiar with Debian: the testing set of packages, woody, sits between the outdated stable release, potato, and the freshest, fastest-moving unstable distribution, sid. Debian users thus have a choice of three distributions, with different degrees of stability and currency. This partly solves the problem of long periods between releases.

So, back to the kernel. It seems to me that we have already had a testing kernel for a long, long time. Recall all the big changes dropped into the stable kernel since its initial release. Alan Cox called 2.4.10pre "2.5 in disguise". Besides that, really unstable and experimental patches live in a separate testing directory at ftp.kernel.org, for all who are willing to try them.

This situation, from my user's point of view, more naturally accommodates the principles of open development. So-called stable releases are issued more often, have a wider user base, and eventually more eyeballs to spot the potential problems. All that more careful users have to do is stay a couple of point releases behind and apply only the really needed, selected patches.

Auf Wiederlesen,
ophil aka Dr. Anticommunii

--
Oleg P. Philon
Linux Lab, Gomel, Belarus
http://gomelug.agava.ru/articles  mailto:email@example.com
http://anticommunist.narod.ru     mailto:firstname.lastname@example.org
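The three-way Debian choice the letter describes is, in practice, just a matter of which suite a user names in /etc/apt/sources.list. A minimal sketch (the mirror hostname is illustrative; the suite names are the ones from the letter):

```
# /etc/apt/sources.list -- choose one suite; mirror hostname is illustrative
deb http://http.us.debian.org/debian potato main   # stable: outdated but solid
deb http://http.us.debian.org/debian woody main    # testing: the middle ground
deb http://http.us.debian.org/debian sid main      # unstable: freshest, fastest-moving
```

The letter's point is that the kernel has informally grown the same three tiers: conservative stable point releases, an effectively-testing stable branch absorbing big changes, and an experimental patches directory.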
From: Leon Brooks <email@example.com>
To: firstname.lastname@example.org
Subject: Anarchy
Date: Thu, 18 Oct 2001 07:38:11 +0800
Cc: email@example.com

> if there hadn't been security vulnerabilities in Windows®, Linux, and
> Solaris®, none of them could have been written.

Linux is a registered trademark of Linus Torvalds. You come across as uneducated when you don't acknowledge that in your article.

> Code Red. Lion. Sadmind. Ramen. Nimda.

You seem to have forgotten these: SirCam, Michelangelo, Happy99, Stoned, LoveLetter, AntiCMOS, Qaz, EmpireMonkey, FunLove, Valentine, Sorry, Hybris, Magistr, Melissa, and 208 other current viruses listed at http://www.wildlist.org/WildList/

My point? These are *all* specific to Microsoft software, and in particular to Windows and Visual BASIC derivatives. The problem is Microsoft software, not bug reporting. If Microsoft's vulnerability were simply proportional to the number of accounted desktop users, one would expect one Solaris virus, about six Linux viruses, and maybe twelve Mac viruses. You can only scrape together a combined total of three non-Windows viruses for your examples, and on top of that there is good evidence that the real Linux desktop presence is around threefold the accounting figures. Methinks the man protesteth overmuch.

> We can and should discuss security vulnerabilities, but we should be
> smart, prudent, and responsible in the way we do it.

Absolutely! Notify the vendor first, give them an amount of time proportional to the severity (maybe a week; this _is_ the Internet age) and then tell everyone, so that individuals can take appropriate action. If there is already an exploit for the vulnerability in the wild, scratch the vendor's time advantage. Remember that even though CodeRed was leveraging a Microsoft-only flaw, as usual *everyone* had to deal with the side-effects.
UNIX/Linux-based automated software, built on full disclosure, helped both to absorb the attack and to speed the spread of awareness to affected administrators.

Consider a home-builder who erects easy-to-burgle homes. Full disclosure of his flawed methods would indeed help seriously dumb burglars, but any half-competent burglar would either already know the weakness, or would be better able to figure it out from a vague description than any householder would. Meanwhile, previously naive homeowners are aware that there is a problem, and have enough information to design a defense. Moreover, each defense may well be different, which means that a burglar can't expect to meet, deal with, and systematise an attack against a factory-ordained workaround. Finally, other home-builders, including owner-builders, can study the weakness and avoid it or repair it in their own designs.

> the evidence is far more conclusive than that. Not only do the worms
> exploit the same vulnerabilities, they do so using the same techniques
> as were published - in some cases even going so far as to use the same
> file names and identical exploit code.

Aren't you glad that the black hats chose a standardised attack instead of devising their own -- probably harder to detect and/or deal with -- methods? As for the code design, sometimes form follows function.

> Providing a recipe for exploiting a vulnerability doesn't aid
> administrators in protecting their networks.

It certainly aids me. I can try the exploit against my own systems to determine the extent of their vulnerability.

> we do need to make it easier for users to keep their systems secure, and
> Microsoft acknowledged this very point in a recent major security
> announcement

You might want to think about the very same feature appearing in Mandrake Linux over a year ago, and a much more detailed version of it appearing in their 8.1 release, which pre-dated the Microsoft announcement and had been in preparation since before CodeRed struck.
Mandrake, like many Linux distributors, publishes its own vulnerabilities early. A scan of those vulnerabilities is informative: very few of them offer carte-blanche access to a standard installation; the vast majority are only invokable in very special circumstances and give very limited access. Many, maybe even most, Microsoft vulnerabilities result in total submission of your system to alien invaders.

> Security vulnerabilities are here to stay.

Scott, I'm glad you took the time to clarify Microsoft's attitude to security, but please don't expect that sentiment to be echoed by every developer on the Internet. As is the case in my own home town, people are switching more and more to fast-responding, design-safe Open Source systems, as they read between the lines of presentations like your "It's Time to End Information Anarchy" and notice that the focus is on blame-sharing, and the worry is about loss of vendor control. Regardless of our pontifications, in practical terms it seems to be drawing on time to end information imperialism.

Cheers;
Leon
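The proportionality argument in the letter above (one Solaris virus, about six Linux viruses, maybe twelve Mac viruses) can be sketched numerically. The desktop-share figures below are illustrative assumptions chosen to reproduce the letter's ratios, not numbers from the letter; only the count of roughly 222 Windows-specific viruses (14 named plus 208 on the WildList) comes from it:

```python
# If virus counts were proportional to desktop share, how many viruses
# would each non-Windows platform have? Share figures are illustrative
# assumptions; the ~222 Windows virus count is from the letter.
windows_viruses = 222  # 14 named + 208 listed at wildlist.org
shares = {"Windows": 0.92, "Mac": 0.05, "Linux": 0.025, "Solaris": 0.004}

expected = {
    os: windows_viruses * share / shares["Windows"]
    for os, share in shares.items()
    if os != "Windows"
}
for os, n in expected.items():
    print(f"{os}: ~{n:.0f} expected viruses")
```

Under these assumed shares the sketch yields roughly twelve Mac, six Linux, and one Solaris virus, against the actual combined total of three non-Windows examples in Culp's article.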
From: Zygo Blaxell <firstname.lastname@example.org>
To: email@example.com
Subject: Microsoft's latest FUD
Date: Sat, 20 Oct 2001 02:45:36 -0400

>First, let's state the obvious. All of these worms made use of security
>flaws in the systems they attacked, and if there hadn't been security
>vulnerabilities in Windows®, Linux, and Solaris®, none of them could have
>been written. This is a true statement, but it doesn't bring us any closer
>to a solution. While the industry can and should deliver more secure
>products, it's unrealistic to expect that we will ever achieve perfection.
>All non-trivial software contains bugs, and modern software systems are
>anything but trivial. Indeed, they are among the most complex things
>humanity has ever developed. Security vulnerabilities are here to stay.

This is what I have come to expect from the people who release web server software which is broken in the most fundamental ways. The security flaws that the recent IIS worms use arise from utterly trivial programming errors that could have been avoided by anyone with sound knowledge of the overall architecture of the IIS system and of how its components interact with each other... assuming that such knowledge even exists, or is humanly possible to possess. The technical expertise and time required to discover and exploit the recent IIS flaws vastly exceed what would have been required to prevent the flaws in the first place.

Almost all of the flaws appear either in the implementation of a Microsoft-specific feature, or in the interaction of a common feature found in many operating systems and tools with a Microsoft-specific feature. No other vendor builds so many potentially dangerous features into their products, enables them by default, and then whines in public when nasty people abuse them.
Microsoft's refusal to give up on their operating system, email, and web server projects and replace them with mature, industry-standard, peer-reviewed software tools leads to a lot of repetition of the same boring, incorrect implementations of unsafe application architectures, as developers who work on Microsoft code must deal with subtle implementation details that are unique to Microsoft systems. In extreme cases, Microsoft products must work around quirks in their dependent components that do not exist--and cannot even _conceptually_ exist without significant implementation effort--in other, more mature systems.

There is a serious lack of sound architectural design and implementation review at Microsoft. There are so many different interacting layers of subsystems in IIS (even before we consider the many different interacting layers of subsystems in the OS that IIS runs on) that it's virtually impossible to make IIS secure. That does not mean that it is impossible to make secure web servers. Microsoft has not made any serious attempt to build a secure web server product yet, but they seem to have concluded already that the task is impossible. If Microsoft were truly serious about security, we would see ads for Microsoft security patches on TV, and they would motivate ordinary people to actually download and install them.

>If we can't eliminate all security vulnerabilities, then it becomes all the
>more critical that we handle them carefully and responsibly when they're
>found.

Indeed. One of the major obstacles to widespread deployment of security fixes is the set of current practices employed by consumers and producers of computer software. Certainly it is unreasonable to expect a vendor to produce completely secure software given the current structure of the industry; however, if the vendor is not to be held accountable for software quality, then that accountability must be transferred to the user, especially when third parties (i.e.
the victims of virus attacks) become involved. Unfortunately, the vast majority of software consumers are not aware of their obligations under this model, and we hold almost none of them responsible--not even the organizations that leave thousands of exploitable machines accessible from the Internet.

Part of the problem is the business model. Microsoft's current obligation to their customers begins when the customer pays the license fees and ends when Microsoft ships the installation media--and even that seems to be too onerous for Microsoft, as they tend to outsource the actual collection and delivery to hardware vendors. This is an inappropriate model for software that can--by the vendor's own admission--never be considered complete. Ongoing post-installation maintenance by the vendor is essential--and in the closed-source business model, the vendor is in fact the _only_ entity capable of cost-effectively performing such maintenance.

Another problem arises from the fact that many software consumers themselves do not implement any mechanism at all to maintain their software. Given the extremely fragile nature of software, especially when products from several vendors are integrated together by the end user with strictly minimal technical support, it is not surprising that many organizations adopt a policy of never upgrading their software until the existing software is provably unusable, in order to avoid the risk of accidentally preventing the software from working at all. Published security exploits are very useful for administrators who must work under such conditions, because the exploit can be used to prove unusability--without such proof, corrective action is often avoided entirely, even if the vulnerability is well understood. Nothing can be done about this class of consumers.
They will always run the latest and greatest malware--any product on any operating system--until sufficient legal or business pressures are exerted upon them, or they are physically disconnected from the Internet.

Linux distributions that are distributed using a subscription-like service are much more effective at avoiding vulnerabilities in the field. Whenever a vulnerability is found, administrators can automatically apply patches from the vendor--which means that the patches tend to actually be applied much more often. Widespread adoption of this distribution model could significantly mitigate the spread of malware, although if Microsoft were to implement apt-get, I would have to assume that it could act as a _vector_ for malware until proven otherwise...

>But regardless of
>whether the remediation takes the form of a patch or a workaround, an
>administrator doesn't need to know how a vulnerability works in order to
>understand how to protect against it,

This is IMHO the most significant sentence of Scott's entire article, and the notion is simply absurd; there is an abundance of counterexamples. It can only be true if the administrator is unable to implement the fix herself--which is certainly true for users of Microsoft products, but not true for many other groups of people. It also assumes that no administrators run software that is not supported by a vendor connected to this "security community", but which may be vulnerable to the same exploit. Historically, when one vendor makes a mistake, similar problems are found in competing products from a few dozen other vendors. The exploitation details are essential information if you need to figure out whether your product-which-is-similar-to-X is, or is not, vulnerable to the same exploit that works on product X. Understanding the workaround is usually not sufficient, and the patch is usually entirely useless unless it is (expensively) reverse-engineered.
I recently talked to a number of people outside the computer industry, almost all of whom were surprised, even shocked, to learn: that dozens of security vulnerabilities in widely-deployed, commercial-quality software are reported every week; that many of the exploits are simple enough to explain in a single sentence, even to a technically unsophisticated user; and that the only corrective measure that is effective against these attacks is a software upgrade supplied by the vendor (or completely disabling the offending software, which is often worse than the effects of the exploit itself). The prevailing opinion among the general public is that vulnerabilities are rare, exploits are complex, and corrective actions are a matter of enabling or disabling a feature in a dialog box. This ignorance is what must change in order to improve the current sorry state of the software industry. Incidentally, the fact that extremely similar flaws are found in multiple products released by different vendors was not surprising to my "control group." I don't know what to make of that.

>Likewise, if information anarchy is intended to spur users into defending
>their systems, the worms themselves conclusively show that it fails to do
>this.

On the contrary, deployments of system defenses are now occurring at the highest rates in history, and awareness of security issues is better than ever. At the same time, actual damage in economic and social terms has been minimal--contrast what actually happened in the last two years with what could have happened if any one of the major recent Windoze viruses had carried a highly destructive payload. Vendors are now beyond merely feeling pressure to keep up to date with security patches--they are starting to audit their own code, albeit not very enthusiastically. At least one previously indifferent large vendor has recently declared that they intend to alter the installation procedure of their software to be less vulnerable by default.
This is a milestone. I'd say that early exploit disclosure, combined with active exploitation of well-known vulnerabilities, is having _exactly_ its intended effect.

>Many people have faulted the patching process itself for the low uptake
>rate. Fair enough; we do need to make it easier for users to keep their
>systems secure, and Microsoft acknowledged this very point in a recent major
>security announcement.

For once I don't disagree.

>Finally, information anarchy threatens to undo much of the progress made in
>recent years with regard to encouraging vendors to openly address security
>vulnerabilities. At the end of the day, a vendor's paramount responsibility
>is to its customers, not to a self-described security community. If openly
>addressing vulnerabilities inevitably leads to those vulnerabilities being
>exploited, vendors will have no choice but to find other ways to protect
>their customers.

Security vulnerabilities will be openly addressed: if not in the security community, then in the marketplace and in the legal system. If a vendor addresses the vulnerability themselves, they have a chance to put a positive "spin" on the situation ("Look how attentive we are to security problems!" "We have a fix for that; we can accept no liability if the customer doesn't use it."). If a worm exposes the problem first, the vendor has to catch up while the customer suffers real economic losses ("Look how much money your software cost us!" "I'm going to sue you for criminal negligence and consequential damages!"). It seems like a pretty straightforward choice to me. I see nothing in recent events that might change this situation--vendors will still be motivated to fix vulnerabilities and publish patches, because if they don't, nobody else will--and then nobody will buy their products, because every customer will know they'll be vulnerable to every script kiddie on the Internet.
Vendors would probably like to avoid paying for security guards, or even taxes, but they don't often bemoan in public the absolute necessity of doing so.

>By analogy, this isn't a call for
>people to give up freedom of speech; only that they stop yelling "fire" in
>a crowded movie house.

Another wonderful analogy! Security professionals have been yelling "fire" in crowded movie houses for years. Most of the actual patrons fail to pay any attention, despite the fact that the seats are made of explosively flammable materials, the management allows patrons to smoke cigarettes in the theatre, and occasionally the movie is interrupted by ushers dousing patrons with fire hoses when they are noticeably ablaze. Patrons who do catch fire are not offered a refund, nor a credit for the parts of the movie that they miss, nor even so much as an apology. If a _real_ movie house were run this way, its management would be in jail by now.

>This issue is larger than just the security community. All computer users
>have a stake in this issue, and all of us can help ensure that
>vulnerabilities are handled responsibly. Companies can adopt corporate
>policies regarding how their IT departments will handle any security
>vulnerabilities they find. Customers who are considering hiring security
>consultants can ask them what their policies are regarding information
>anarchy, and make an informed buying decision based on the answer. And
>security professionals only need to exercise some self-restraint.

Companies should adopt policies regarding how their IT departments will implement some basic security measures in the first place, including a thorough review of the risks associated with all software that has access to the communications infrastructure prior to deployment. Many organizations do not do even the most basic risk assessment--they simply plug in and install.
Consumers should compare vulnerability assessments between vendors. The actual number of vulnerabilities is not as important as the vendor's service track record: when was each vulnerability discovered, and when (if ever) was it fixed? Many software consumers do not compare products at all.

Customers should make sure that their vendors do not continue to distribute or install old versions of software with known vulnerabilities, nor release new versions of software with old vulnerabilities. Apparently some vendors--and even some IT departments--don't remember to put "fixes for all known vulnerabilities from previous releases" on the feature wish-list for their new releases.

Customers should ask for a roadmap of the security issues associated with the products they buy--even if it is as simple as "don't even _think_ about installing this software on an Internet-connected machine", it is important to have accurate information in order to fit the product into a site security policy.

>For its part, Microsoft will be working with other industry leaders over the
>course of the coming months, to build an industry-wide consensus on this
>issue. We'll provide additional information as this effort moves forward,
>and will ask for our customers' support in encouraging its adoption. It's
>time for the security community to get on the right side of this issue.

I sincerely hope this effort fails. The security community is already clearly (and hopefully permanently) on the right side of this issue. It does not need or want Microsoft to interfere with it. We'd much rather that Microsoft simply catch up to it. Microsoft has made great strides in this direction recently, but obviously there are still some significant attitude problems among the managers there. Building a Microsoft-specific closed community will not help anyone--not even Microsoft.
It would effectively keep vulnerability information within a group whose members all have a direct economic incentive to keep it unpublished indefinitely. That will slow down the rate of vulnerability assessment and correction (because there will be less information available to the public about these vulnerabilities) without decreasing the rate of exploitation. It will not slow down the rate at which vulnerable systems are deployed in the field, nor will it significantly slow down the rate at which exploits are released into the field. This is a disastrous combination.

Maintaining the existing vendor-neutral open security community will help everyone, even Microsoft. Indeed, if anything, the recent Microsoft attacks would seem to be an opportunity for Microsoft--one that they would be stupid to ignore. Millions of customers, all suddenly realizing they need a software upgrade, all turning to one vendor to deliver it...

>Scott Culp is the Manager of the Microsoft Security Response Center

--
Zygo Blaxell (Laptop) <firstname.lastname@example.org>
GPG = D13D 6651 F446 9787 600B AD1E CCF3 6F93 2823 44AD