
Mainstream means more malicious code for Linux (SearchSecurity.com)

SearchSecurity.com warns that as Linux becomes more mainstream it will become more of a target for malicious hackers. "On Windows, most of the viruses are e-mail borne. On the Linux side, today and in the future, viruses are network-aware, and [they] take advantage of vulnerabilities in networks or systems to infect machines. The Slapper worm, for example, attacked vulnerabilities in OpenSSL and Apache."


More malicious code for Linux?

Posted Mar 15, 2004 20:17 UTC (Mon) by MathFox (guest, #6104) [Link]

My FUD meter gave a reading on the article: "Linux is every bit as susceptible to malicious code as Windows." and, two paragraphs later, "Linux is more secure than Windows by default". It helps if you realise that the interviewee works for an anti-virus software vendor.

I agree with the conclusion that the incentive for writing Linux viruses will become larger when Linux is deployed on a larger scale. But will that lead to Microsoft-scale epidemics? I doubt it.

Mainstream means more malicious code for Linux (SearchSecurity.com)

Posted Mar 15, 2004 20:23 UTC (Mon) by tjc (guest, #137) [Link] (1 responses)

It's not FUD, it's fact: Linux is every bit as susceptible to malicious code as Windows. Experts say the only difference between the two as attack vectors is the greater prevalence of Windows in enterprise data centers and on desktops.

Uh, wait a minute. If this is the premise, then why does the discussion that follows focus primarily on server-side exploits against Apache, which outnumbers IIS on the Internet by at least 2 to 1 (and has for some time now)? I think the author is confusing two separate issues here: server-side vulnerabilities and client-side vulnerabilities.

Mainstream means more malicious code for Linux (SearchSecurity.com)

Posted Mar 15, 2004 22:10 UTC (Mon) by bex (guest, #16960) [Link]

They also seem to confuse Linux with third-party software.

Not quite. Just secure the desktops the right way

Posted Mar 15, 2004 22:49 UTC (Mon) by NZheretic (guest, #409) [Link] (4 responses)

On Windows, most of the viruses are e-mail borne. On the Linux side, today and in the future, viruses are network-aware, and [they] take advantage of vulnerabilities in networks or systems to infect machines. The Slapper worm, for example, attacked vulnerabilities in OpenSSL and Apache.

I have deployed Linux on the desktop (RH8+Ximian through RH9+StarOffice) in an enterprise, and the desktops do not suffer from such problems for long.
1) The only network service the desktop systems expose is OpenSSH, and iptables limits access to just three addresses (see the sketch after this list). We use a custom script over ssh to keep the systems' RPMs up to date from a private mirror.
2) iptables is configured to allow the desktops' client services to connect only to the specified server.
3) The /usr partitions are mounted read-only, and the /tmp, /home and /var directories are mounted non-executable.
4) None of the users have, or need, root access. They have access to printer settings etc. via Webmin's Usermin, which runs on a dedicated server.
5) Mounting the user's home directory, required shares, etc. (we use Samba for domain, file and print services) is performed by a script when the user logs in.
6) We update all the desktops within minutes of an updated RPM package becoming available. The window of opportunity for any disclosed vulnerability is very small.
7) We schedule Tripwire to check the integrity of the desktops a couple of times a day.
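
A minimal sketch of what points 1 and 3 above might look like; the addresses, devices and mount points here are hypothetical, not taken from the deployment described:

# /etc/fstab excerpt: read-only /usr; non-executable /tmp, /home and /var
/dev/hda3   /usr    ext3   ro                    1 2
/dev/hda5   /tmp    ext3   noexec,nosuid,nodev   1 2
/dev/hda6   /home   ext3   noexec,nosuid,nodev   1 2
/dev/hda7   /var    ext3   noexec,nosuid,nodev   1 2

# iptables: accept ssh only from three management addresses, drop the rest
iptables -A INPUT -p tcp --dport 22 -s 10.0.0.11 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -s 10.0.0.12 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -s 10.0.0.13 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP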

Not quite. Just secure the desktops the right way

Posted Mar 15, 2004 23:57 UTC (Mon) by paulj (subscriber, #341) [Link] (3 responses)

3) The /usr partitions are mounted read-only

Good idea. Makes upgrades harder though.

and the /tmp, /home, /var directories are mounted non-executable.

Hmm... not worth much; it might stop an automated worm, but otherwise noexec is worthless. If you can read data, you can execute it (/lib/ld.so /tmp/bin).
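
For concreteness, a sketch of the trick being alluded to, as it worked on kernels and C libraries of that era (the loader path and the /tmp/bin name are illustrative; newer systems may refuse this):

cp /bin/ls /tmp/bin
/tmp/bin                     # refused: /tmp is mounted noexec
/lib/ld-linux.so.2 /tmp/bin  # but the dynamic loader runs it anyway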

5) Mounting the user's home directory, required shares, etc. (we use Samba for domain, file and print services) is performed by a script when the user logs in.

Ever heard of autofs? ;)

Not quite. Just secure the desktops the right way

Posted Mar 16, 2004 0:28 UTC (Tue) by Ross (guest, #4065) [Link]

Not true if you are chrooted and there is no /lib :)


But seriously, not every Linux admin is as careful as this. Linux worms
are possible, and with a wider user base (which will be less paranoid on
average) they could become more common.

But thankfully most distributions are making things more secure by default,
and I expect to see more stack-smashing detection/protection/randomization,
W^X, pre-configured firewalls, and use of security modules to remove even
more permissions from started daemons.

I'd also like to see fewer suid and sgid binaries, but that doesn't
seem to be happening. Is there a real need for users to run dump or
undump? And why is ssh suid? It works perfectly well without it. It's
annoying that rpm silently reverts file permission changes.

Updates on read-only /usr and /boot

Posted Mar 16, 2004 0:55 UTC (Tue) by AnswerGuy (guest, #1256) [Link]

It's easy to write an RPM wrapper that does:

mount -o remount,rw /usr; mount -o remount,rw /boot   # unlock /usr and /boot
/sbin/rpm.real "$@"                                   # run the real rpm binary
aide --update                                         # refresh the HIDS database
mount -o remount,ro /usr; mount -o remount,ro /boot   # re-lock both read-only

... where "aide" can be supplemented with tripwire, samhain, or other HIDS (host intrusion detection system) updates, and where you can insert any chattr -i and/or lidsadm commands, or other commands that are needed to unlock and re-"seal" the system.

Under Debian it's even easier, since you can create a 999-local file in /etc/apt/apt.conf.d containing DPkg::Pre-Invoke and DPkg::Post-Invoke command suites, to run automatically before and after every apt-get install, upgrade or dist-upgrade.
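
For illustration, such a file might look roughly like this; the remount and aide commands simply mirror the rpm wrapper above, and the whole thing is a sketch rather than a tested configuration:

DPkg::Pre-Invoke  { "mount -o remount,rw /usr; mount -o remount,rw /boot"; };
DPkg::Post-Invoke { "aide --update; mount -o remount,ro /usr; mount -o remount,ro /boot"; };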

Granted, any *other* updates may be hampered a little; but using the distribution's own package management utilities with a wrapper should alleviate most of the issue, and the rest is simply training. Provide a similar script, "syslock.sh", with a switch to "unlock" the system for updates and a default action that re-locks it; then add a cron job that re-locks the system every night and an rc script that locks it on boot-up (all calling the same syslock.sh script, so you've consolidated all actions into a SPOT --- single point of truth; yes, the rpm/dpkg wrapper script should also call syslock.sh for the same reason).
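
A bare-bones sketch of what such a syslock.sh might contain (purely illustrative; a real script would also handle the chattr/lidsadm steps and HIDS updates mentioned above):

#!/bin/sh
# syslock.sh -- re-lock the system by default, unlock only when asked
case "$1" in
unlock)
    mount -o remount,rw /usr
    mount -o remount,rw /boot
    ;;
*)
    mount -o remount,ro /usr
    mount -o remount,ro /boot
    ;;
esac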

Similarly your own 'installkernel' script (called by the kernel build Makefile) should call the appropriate system locking/unlocking script.

JimD

Not quite. Just secure the desktops the right way

Posted Mar 16, 2004 0:59 UTC (Tue) by NZheretic (guest, #409) [Link]

3) The /usr partitions are mounted read-only
Good idea. Makes upgrades harder though.

Not really; the upgrade script just remounts the /usr partition read-write during upgrades.

and the /tmp, /home, /var directories are mounted non-executable.

Hmm... not worth much; it might stop an automated worm, but otherwise noexec is worthless. If you can read data, you can execute it (/lib/ld.so /tmp/bin).

It's actually more effective at stopping the users from "accidentally" executing downloaded scripts/binaries. To expect more than that would require a solution like SELinux's LSM-based controls.

Ever heard of autofs? ;)

The whole point is to mount only the network filesystems required by each user, on a per-user/group basis.

Mainstream means more malicious code for Linux (SearchSecurity.com)

Posted Mar 15, 2004 23:10 UTC (Mon) by ccchips (subscriber, #3222) [Link] (17 responses)

Very few "reporters" seem to understand a fundamental, and simple, fact about Windows enterprises vs. anything else:

Windows was built on top of a fundamentally insecure operating system, and it needs to keep much backward compatibility with executables from that system, even though its designers have since moved on to (bolted on? retrofitted?) more secure underlying code. Because of this, the basic coding techniques for viruses apply to *all* Windows platforms. The only thing the writers had to do was learn where the additional safeguards were, where the differences were between 16-bit and 32-bit code, etc.

In the NetWare days, viruses got around an enterprise because the OS acted like a "carrier", but was not itself infected. If Windows executables are being loaded *from* a Linux box through SMB, then the Linux box becomes a carrier. If the Linux box is capable of serving up OLE and all that other stuff, then it also becomes a carrier by that route.

But if you use a fundamentally similar OS on the server, then, voila, the server becomes prone to infections as well, as soon as the virus writers learn how to scratch it in the right place.

I have held (and still hold) these views about Windows: that it has fundamental design flaws that make it inherently insecure. Whether Linux becomes a target for malicious code or not, I don't see that fundamental security issue on Linux. I do see that certain people are going to SCREAM as loudly as possible, however, every time any little tiny Linux break-in happens, thinking that such loudness will somehow make *their* OS more appealing, when instead we all should be spending a little more time putting the thieves in jail where they belong.

What's the point? I tried to get some of these ideas across to my bosses *before* we went to an NT platform. If I couldn't get them across to my own management, how will a reporter get them right?

Mainstream means more malicious code for Linux (SearchSecurity.com)

Posted Mar 16, 2004 0:05 UTC (Tue) by paulj (subscriber, #341) [Link] (16 responses)

Windows was built on top of a fundamentally insecure operating system

And you think Unix is a fundamentally secure system? It isn't. It has pitiful security compared even to its predecessor, Multics. To get a Unix security-certified to a meaningful level, EAL-2, etc., you essentially have to completely replace the traditional security model (Trusted Solaris, Trusted Tru64, SELinux, etc., though SELinux hasn't been rated, afaic).

I'm not a fan of Windows, but Windows NT and up actually do have a decent and powerful framework for security. The problem is that Windows doesn't make use of it.

But, Unix secure? ROFL!

--paulj

Windows is insecure by design

Posted Mar 16, 2004 3:40 UTC (Tue) by Prototerm (guest, #20227) [Link] (13 responses)

You indicate that Linux isn't more secure than Windows, and then you compare Linux with everything but Windows.

Windows is basically a single-user operating system with a layer of multi-user functionality added on top. Why else do you think XP considers "fast user switching" such a special feature? This is in contrast to Linux, which was multi-user from the very beginning. This makes a huge security difference.

In addition, the Windows design philosophy is to have all programs and system services interoperate at the lowest possible level. That means any bugs, trojans, worms, or viruses affecting one part of the system can quickly move on to the whole thing. Linux just doesn't work like this. The fact that Windows does will make it a perennial target for script kiddies and trojan-writing spammers for the foreseeable future.

Windows cannot be made secure without changing it to such a degree that most of the programs on it will break. Think that's an exaggeration? Microsoft has already stated that the security improvements in XP Service Pack 2 will break some applications.

The problem's not in the code, it's in the design.

Windows is insecure by design - or is it?

Posted Mar 16, 2004 6:45 UTC (Tue) by eru (subscriber, #2753) [Link] (1 responses)

Windows is basically a single-user operating system with a layer of multi-user functionality added on top.

What you write here applies to the older "DOS kernel" based Windowses, but I think not to the ones based on the NT kernel (NT, Windows 2000, XP). In these, the situation is actually reversed: the kernel is multitasking and multi-user, and could probably be used as the basis of a secure OS, but it is buried under unsafe GUI and backward-compatibility layers...

By the way, reading LWN and other Linux boards, I frequently get the impression that people think of Windows as it was at about the Windows 98 level (or even at Windows 3.1 level, depending on when they stopped using it), and then criticise its ridiculous instability etc. compared to Linux. However, Windows has not been standing still, no doubt having been greatly spurred on by the competition from Linux. It is a lot more solid than it used to be. Linux can keep ahead, but it is not automatically a given.

Windows is insecure by design - or is it?

Posted Mar 16, 2004 7:11 UTC (Tue) by mdekkers (guest, #85) [Link]

"By the way, reading LWN and other Linux boards, I frequently get the impression that people think of Windows as it was at about the Windows 98 level (or even at Windows 3.1 level, depending on when they stopped using it), and then criticise its ridiculous instability etc. compared to Linux. However, Windows has not been standing still, no doubt having been greatly spurred on by the competition from Linux. It is a lot more solid than it used to be. Linux can keep ahead, but it is not automatically a given."

This may seem to be the case when you compare Win98 with Win2K or some such, but comparing Windows to Linux is a no-brainer. I use Linux as my regular desktop, but also use Mac OS X and Windows every now and then (OS X a lot more often than Windows, though). I have yet to see frequent crashes in anything other than a bleeding-edge app, and have yet to see such an app take the whole machine down with it. My time working on Linux is spent working, not fiddling about trying to make the machine work, or figuring out why it comes tumbling down all the time. Windows and Mac OS X, on the other hand, are always going down hard. Granted, it is less than with Win98, but it is still a lot more often than with Linux.

I think it's due to the fact that it is proprietary software, as opposed to Open Source. Open Source developers can "release when it's ready"; Closed Source ships to an arbitrary deadline, usually "ready or not"...

Windows is insecure by design

Posted Mar 16, 2004 14:18 UTC (Tue) by paulj (subscriber, #341) [Link] (10 responses)

I didn't compare Linux to anything; I just pointed out that the poster was incorrect and that the fundamentals of Unix security are pretty weak.

The post I responded to tried to make out that Windows is fundamentally insecure, which it is not; fundamentally, WinNT actually has quite a rich and sophisticated security model. The implied argument (this being the Linux Weekly News) is therefore that Linux is not fundamentally insecure, which is incorrect. The Unix security model, which Linux uses, is, at a fundamental level, very spartan and weak. It works for boxes with reasonably trustworthy users, but no more. It is _very_ hard to really secure a Unix box, and very hard to limit users other than by chroots or jails (which isn't really giving a user access to the 'host' Unix environment, and still isn't _that_ secure either). Windows has the security fundamentals to be very secure, but makes very little use of them.

Now, in common usage and practice, Windows may have a worse security record than Linux, or not, but that is a matter of practice, not of the fundamental models on which those security practices are built. If anything, the fact that Unixes _can_ have a better record than Windows, _despite_ the huge deficit in the fundamental security model Unix/Linux has compared to WinNT (WinNT's is much better), just shows how poor security practices are for Windows and how good the Unix/Linux practices can be. (And some of the Unix vendors don't have such good records either.) Newer stuff like Linux and Solaris privileges (aka capabilities) does help strengthen the security model on Unix, though (where available).

When you read this comment, please bear in mind I'm a Unix/Linux advocate. I wouldn't touch Windows with a bargepole (unless it's for family or a good friend), but saying Windows is fundamentally insecure is FUD, and FUD is bad no matter who propagates it.

Incorrect information about Linux

Posted Mar 16, 2004 20:28 UTC (Tue) by hipparchus (guest, #20252) [Link] (9 responses)

I guess you are talking about Linux distros and their current implementations, and not the Linux kernel in particular. I can't see how a standard Unix-like filesystem would be such a bad model, although in object terms you get better granularity of security in, say, Lotus Notes.
Having said that, I can't really see why the fundamental security model used in, say, a modern Mandrake Linux distro would be so bad.
After all, any software installed (through the system) is signed.

As I said before, I can't see what your problem is with the security model of most Linux distros. I guess you must be referring to the security implementation:

It is true that any password-hashing-based system is pretty insecure. But I think anything other than a one-time pad is a weak product, since it has built-in obsolescence: as CPU speeds increase, asymmetric keys need to get longer and longer. A good OTP implementation gives ultimate security independent of CPU speed increases.

My model would be that users are physically given OTP bundles on CompactFlash or smartcard or something. The OTP bundles come from a central security place (sort of like Verisign in the asymmetric model).
Further OTPs between a person and, say, a vendor are exchanged via the central security place. As both vendor and user can trust the central security place, they can then trust each other.

Unlike the asymmetric model, if your OTP bundle is suspected or known to be compromised, you can flush it and get another bundle from the central security place (physically, not over the net): the big difference is that trust can easily be re-established. If someone brute-forces Verisign's organisational root certifier (for example with a giant Internet hacker supercomputer built via backdoors inserted by a worm like SoBig, etc.), then they can make any certificate name under the organisational root certifier that verifies when you check it with Verisign, meaning you don't even have to brute-force all organisation certifiers (unless clients have access to and always check organisation public keys, but then who do you trust to give you those?).

Until OTPs are used instead of the vile, putrid, pathetic excuse for security that is asymmetric keys, there will be no security.

OTPs are hardly difficult to implement; you can do it with pen and paper (perhaps with dice, or just tear up a sheet of numbers and move them around randomly), so national security won't be additionally threatened by their use by civilians. In fact, I'd guess national security would benefit from not having weak security.

Incorrect information about Linux

Posted Mar 16, 2004 21:37 UTC (Tue) by flewellyn (subscriber, #5047) [Link] (8 responses)

Practical problem: One Time Pads are extraordinarily inconvenient. You would have to create a new pad for EACH transaction with new data; password encryption could, I suppose, use one pad per user, but even so, that's a large burden for a system with many hundreds of users. Using OTP for encrypting e-commerce becomes prohibitive.

This is why OTP is not more widely used; it's unbreakably secure, but it's also very impractical for mass use.

Do the math and some thinking....

Posted Mar 16, 2004 22:31 UTC (Tue) by hipparchus (guest, #20252) [Link] (7 responses)

Imagine using a USB key like a checkbook (in the UK, a chequebook):

Imagine you got a USB key (say a 64 MB key) holding 6000 OTPs (using an imagined pad size of 100 Kbytes).

You put the old one in an envelope and send it back, or maybe, if they are standard parts, you can return them to any local shop for a refund (like you used to do with old glass Coke bottles).

64 MB would store the equivalent of a vast number of checks, far more than the number of transactions I'd do in my life: and I'm a comparatively heavy web purchaser/seller.

You could use the "checks" to log in to your email system. A few a day, every day for a year is still only 1000 virtual "checks".

What else could you use it for:
verifying the news site you're looking at is real: twice a day = only 600 in a year.

Even if you looked at 100 websites in a day, that's still only 36,000 authentications in a year.

File downloads: 100 per year, maybe?

And here's the good bit: USB keys of 1 gig are available, and getting cheaper.

On the server side, 64 MB per customer at a bank with twenty million customers translates into approximately 1000 terabytes of storage. Given that you can buy a terabyte of storage for about $1000 now, you'd be looking at $1 million. Add on RAID, 24x7 support and hot-swap everything, and you're still looking at only $10 million to store the OTPs for 20 million customers.

Tell me again it isn't possible.

Well, after some thinking...

Posted Mar 17, 2004 13:21 UTC (Wed) by flewellyn (subscriber, #5047) [Link] (1 responses)

It's plenty possible. The question is, is it practical? I still don't think so.

The problem is not storing the keys. The problem is that, in order to work, the (randomly generated) keys have to be as long as the message. Which means that the scheme you propose would work decently for messages known to be a specific length, all the time, without fail. Cheques wouldn't qualify, since some of the fields (such as the payee, amounts, and any info in the "Memo" field) would not be of identical length. The same goes for passwords, absent a mandated standard which required all passwords to be a specified (presumably maximal) length. And as for actual email messages? Forget it.

There's one other problem: centralization. The scheme you propose requires a central authority which creates and distributes the keys: this is a VERY big problem. For one, it creates a single point of failure for the entire system; compromise the agency which takes care of key creation and distribution, and you have complete failure of security. Another problem is, who administers this agency? The government? A business? A non-profit, perhaps? I don't see any such agency which would be completely trustworthy. And, with such a system, the central authority must be completely trustworthy. Otherwise, you have no guarantee that, for example, the One-Time Pads are truly "one-time"; a repeated pad is vulnerable to straightforward cryptanalysis of the simplest sort, frequency analysis, which can be pretty easily brute-forced on modern equipment. To say nothing of the problem of governmental intervention: if an intelligence agency, for whatever reasons, can simply order the authority to turn over keys for specified customers, the encryption provides NO defense against snooping by the authorities. Even to folks like me, who believe government is not inherently evil, that thought gives pause.

No, the One-Time Pad is a very good system for sending occasional messages between two trustworthy points in absolute secrecy (say, between the White House and the Kremlin, a la the Red Phone, which does use OTP, or between an embassy and the capital of a country), but it's not suited to massive amounts of traffic. The centralization and symmetry of the keys create too many practical problems.

take a hint from SSL

Posted Mar 17, 2004 22:51 UTC (Wed) by hipparchus (guest, #20252) [Link]

1. If customer numbers + secrets are used instead of customer names (remember me talking about a dictionary), then the check message can very easily be a fixed size. Often banks have limited-length customer names in any case (but why weaken the encryption by putting in redundancy?).

2. Data from a website could be sent unencrypted, and a checksum using a variable algorithm could be sent encrypted, to verify the contents have not been adjusted "in flight".

3. If you have to have a secret variable-length message: from memory, SSL works thus: after authentication, a symmetric key is exchanged and the data exchanged is encoded using that symmetric key. There is no reason why you can't just replace the SSL authentication with the OTP system.

4. Central point of failure: you are misunderstanding me. The OTP system described establishes trust between point A and point B. There is no reason why you couldn't establish trust between B and C, D, E, F, G, H, etc.
Putting it another way: you could send mail to your ISP and be trusted. Your ISP (say acme.co.uk) could trust a more central system (like security.co.uk), and so on. The topology could be hierarchical or a huge spider's web like the web is, with authentication to a destination determined through a kind of DNS system. Hierarchical is more efficient for extremely low node failure rates; a spider's web is more efficient in a system where nodes fail more often (hierarchical is a subset of spider's web in any case).

Do the math and some thinking....

Posted Mar 18, 2004 12:03 UTC (Thu) by ekj (guest, #1524) [Link] (4 responses)

Sure it's *possible*. The relevant question though, is if it is *useful*.

Yes, an OTP (correctly implemented, and with correct key management) is unconditionally secure. But for all practical purposes, AES or any other modern non-broken cipher is also secure.

Do you *really* think it's worth spending huge resources to guard against weaknesses in AES in a world where no one has ever lost a single cent due to brute-force cracking of AES, but thousands of people have gotten into trouble of some sort because of bad key management?

The thing is, the key-management problem is bigger with OTPs than with all other crypto systems, especially compared to public-key ones. And the gain from "takes a gazillion years to crack" to "cannot be cracked" will never be enough to compensate for this drawback.

useful things about asymmetric keys make them weak and AES is weak anyway

Posted Mar 20, 2004 0:29 UTC (Sat) by hipparchus (guest, #20252) [Link] (3 responses)

As you know, asymmetric keys are a whole lot harder to make than symmetric keys. The "useful" thing about them is that you can distribute a public key and keep a private key hidden.
The whole principle is that the author keeps the private key, encrypts or "signs" data, and people with the public key can verify that the data sent by the author is his work.
The problem is that, by the nature of the above system, the keys cannot easily be thrown away. You might have documents all over the web signed by the author, and numerous clients with the public key to whom you'd have to distribute a new public key (they'd have to keep your old public key to verify old documents they might have stored, too).

If you start from the premise that you (by analogy) change your locks on a regular basis, your system is a lot more secure.
I've provided a way in which authentication can happen irrespective of the symmetry of the keys.

GZILLION YEARS TO CRACK AES:
You should know the NSA makes ASICs with hard-wired asymmetric decode hardware that can decode something like 5000 encrypted messages per second. Imagine a large circuit board with 200 chips like this on it, then multiply by 20 boards in a 6ft-tall 19-inch rack, then multiply by, say, 10 such racks.
So now you're talking about decoding 200 million encrypted messages per second, hardly the gazillions of years for one key.

Re: AES is weak anyway

Posted Mar 26, 2004 18:35 UTC (Fri) by robbe (guest, #16131) [Link] (2 responses)

Brute-forcing a 128-bit key with 2*10^8 trials per second still takes more than 26 sextillion (2.6*10^22) years (quite close to what people would term a "gzillion").
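
For anyone who wants to reproduce that figure, here is a quick back-of-the-envelope check with bc (assuming the same 2*10^8 trials per second; on average half of the 2^128 keyspace must be searched):

# expected years = 2^127 keys / (2*10^8 keys/s * ~31.5 million s/year)
echo "2^127 / (2 * 10^8 * 3600 * 24 * 365)" | bc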

Calculating the years needed if computing power doubles every 18 months is left as an exercise to the reader.

Of course this says nothing about a rubber hose, cryptanalysis, or brute-forcing the INPUT bits to the key (i.e. the possible passphrases), all of which will bring success in less than a century.

asymmetric keys have few solutions

Posted Mar 27, 2004 1:01 UTC (Sat) by hipparchus (guest, #20252) [Link]

Asymmetric keys are far less strong than symmetric keys. Note that in the case of AES, the NSA (and perhaps many other people) already know the algorithm.
Therefore you have to brute-force only a relatively small number of possible solutions.

What you get in trade-off for the lack of strength is... asymmetry (public and private keys).

I propose using symmetry instead, and totally discardable keys.

read this about DES for example, and go figure about AES

Posted Mar 27, 2004 1:41 UTC (Sat) by hipparchus (guest, #20252) [Link]

Please note that AES is in itself a symmetric block cipher with a key size of, say, 128 or 256 bits. HOWEVER, the recommended implementation for exchanging the cipher key is public/private key encryption (see my above mail about the small number of solutions):

out of interest: DES cracker (they said it was uncrackable).

http://www.eff.org/Privacy/Crypto_misc/DESCracker/HTML/19980716_eff_des_faq.html#howsitwork

discussion on implementation of AES:

http://66.102.11.104/search?q=cache:OKqHzqpn-RcJ:www.parallaxresearch.com/dataclips/pub/infosec/cryptology/guidelines/STOA-Report3-5.pdf+nsa+chips+to+decode+aes&hl=en&ie=UTF-8


Mainstream means more malicious code for Linux (SearchSecurity.com)

Posted Mar 16, 2004 12:33 UTC (Tue) by bdixon (guest, #1055) [Link] (1 responses)

> To get a Unix security certified to a meaningful level, EAL-2, etc, you essentially have to completely replace the traditional security model

I don't think that statement is true. SuSE Enterprise Linux 8 and Red Hat Enterprise Linux 3 have each been certified to EAL2 without the extent of modification that you are suggesting:

http://niap.nist.gov/cc-scheme/vpl_type.html#operatingsystem

Perhaps you meant the other end of the scale, EAL6 or 7? These levels do require far higher assurance which may require unique capabilities and proofs.

I do not disagree with your central premise that Unix is not a fundamentally secure system, however.

Mainstream means more malicious code for Linux (SearchSecurity.com)

Posted Mar 16, 2004 13:47 UTC (Tue) by paulj (subscriber, #341) [Link]

Perhaps you meant the other end of the scale, EAL6 or 7?

Err, yes. I got it the wrong way around :) And not even as high as EAL-6 either; have any standard-security-model (i.e. uid/gid/set{g,u}id) Unixes gotten past EAL-4? Are the Unixes which got EAL-4 using standard Unix security models?

anti-virus companies, please stay away from Linux

Posted Mar 15, 2004 23:15 UTC (Mon) by JoeBuck (subscriber, #2330) [Link] (3 responses)

Certainly Linux is vulnerable, and will become a more attractive target of the black hats as its popularity increases.

That said, we should reject the solutions offered by the anti-virus companies. Real security does not fit their business model, since its purpose is to extract maximum cash from the public, and actually preventing all malware would not do that. They would rather do what they do now, which is to offer their customers a list of "known criminals" to check against, which has the virtue (from their point of view) of requiring a subscription so that their customers can keep current, but does not interfere in the least with the ability of virus writers to write new malware, thereby generating new business for the anti-virus companies.

I see multiple demonstrations of the lack of ethics of some anti-virus companies every day, every time I see a bounce of one of the current crop of email viruses, followed by a warning that I am infected, saying that I have some anti-virus company's mail filter to thank for this "service" and strongly suggesting that I need to buy such a product to be safe. Of course, the designers of these mail filters know full well that the return address is forged, but they happily spam me anyway.

What we need to do instead is take a systems approach, focusing on eliminating whole classes of attacks. The Gnome and KDE teams need to be sure, when cloning features of Windows, to avoid cloning those features that are demonstrated to be vulnerabilities. No hiding of file extensions in an attempt to be "friendly". No self-extracting archive formats that basically tell the user to run an untrusted program. "Taint" analysis to be really paranoid about untrusted data. Audit libraries to the point where we can mathematically prove, say, that the conversion from a JPEG or PNG to a bitmap/pixmap for use in an application contains no buffer overflows, and work to continually increase the amount of trusted code.

And if a feature could conceivably be unsafe, work hard to make it safe or leave it out. And when the bugs come anyway, fix them quickly.

anti-virus companies, please stay away from Linux

Posted Mar 16, 2004 7:09 UTC (Tue) by eru (subscriber, #2753) [Link] (1 responses)

What we need to do instead is take a systems approach, focusing on eliminating whole classes of attacks. [...]

Also it is most important that distribution makers (who after all produce what most end-users perceive as "Linux") take the "secure by default" approach. A sloppy system configuration (like inappropriate permissions for some key files) would nullify whatever auditing and analysis has been performed by gurus on the components.

I wonder whether the "stackguard" compiler techniques would cause too much overhead to use on normal distros by default. They are no silver bullet, but they would cause one common exploit type to be detected before it causes serious harm.

anti-virus companies, please stay away from Linux

Posted Mar 18, 2004 15:19 UTC (Thu) by nix (subscriber, #2304) [Link]

StackGuard techniques?

I'd say that on distros that aren't targeted at slow systems, the ~5% overhead imposed by SSP/ProPolice/StackGuard and the like is entirely worth it, such that the biggest problem with them is the draining of /dev/random that they cause (32 bytes read from there for canary seeding whenever a process starts)...

... but even given that, and even given that I use it on my firewalls, I *still* think it's an inelegant kludge, and there Must Be a Better Way.

(Well, there is. Stop using C...)
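
For readers who do stay with C, enabling the protection described above is a per-build switch on an SSP/ProPolice-patched gcc (a sketch; the exact flags depend on the patch and compiler version in use):

# protect functions with character buffers
gcc -fstack-protector -O2 -o demo demo.c
# or instrument every function, at a higher cost
gcc -fstack-protector-all -O2 -o demo demo.c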

anti-virus companies, please stay away from Linux

Posted Mar 16, 2004 15:16 UTC (Tue) by HunterA3 (guest, #20241) [Link]

There was a developer who had an idea for a new breed of anti-virus program that would learn how all your programs behaved under normal conditions; if it detected a program not working as it should, it would terminate it and isolate it for an admin to check into, thus doing away with actual virus definitions and creating a self-sustaining anti-virus program. Naturally, all the current anti-virus vendors shot it down with extreme prejudice, because it would have jeopardized their business model of making money off insecurity. So your concerns have merit.

Mainstream means more malicious code for Linux (SearchSecurity.com)

Posted Mar 16, 2004 0:21 UTC (Tue) by petegn (guest, #847) [Link]

Never mind the FUD meter... you can almost bank on the fact that most, if not all, worms, viruses, etc. come from, or are connected to, a very small number of outlets. Do I need to name them? Well, I suppose so: that bunch in Redmond for one, the anti-virus companies for two, and some of the more persistent so-called security companies for three. I know I will get slagged off for these views by a few donkeys out there, but what the hell, speech is free, or so I am led to believe. Just read and remember, because I ain't been wrong yet on a few counts, like the M$ involvement in SCO for one... but still...

Cheers folks, good reading.
Pete.

More of the same

Posted Mar 16, 2004 0:38 UTC (Tue) by AnswerGuy (guest, #1256) [Link]

We've been hearing this for years. There is a grain of truth (that more widespread adoption will make Linux a more attractive target, and that a higher density of Linux systems on the Internet will make any successful worm spread faster --- if it's present in a ubiquitous package like ssh, ntpd, or Apache, or in the kernel code itself).

However, that grain is suspended in a large greasy globule of FUD.

We (the FOSS, or free and open source software, communities) don't have to tolerate software or hardware monocultures. We can run Linux *and* any of the various *BSD flavors; we can run Linux (or NetBSD or OpenBSD) on many other hardware architectures; we can run alternatives to almost any piece of software (many web servers, at least 4 major MTAs, a handful of DNS servers, even alternatives to sshd and to the usual time synchronization daemon). And there exist a number of different kernel security hardening patches (LIDS, GRSecurity, LOMAC, RSBAC, SELinux, systrace, etc.).

So, no single bug need threaten more than a minority of us.

Also the modularity and process protection model in Linux is substantially more effective than the practice evident in Microsoft's Windows OS, IIS, Exchange and other products. Theoretically they offer more elaborate protections in the OS, but in practice their own code is given permission to penetrate so many of these isolation semantics that the theory is practically a lie.

If someone breaks my Apache processes with a remote, arbitrary-code exploit, they've gained access to "Mr. Webserver". The default installation and configuration of Linux somewhat limits what "Mr. Webserver" can do to the rest of the system, and a competent, professional Linux system administrator will routinely tighten that up much more.

Competent systems administration makes a huge difference in either case. However, I would argue that a competent UNIX or Linux systems administrator can have a much greater impact on the security, stability and performance of their systems than a comparably trained and experienced NT/W2K/XP/ME administrator.

For instance, I can easily lock down sshd so that connections are only accepted from specific hosts, passwords are never accepted and public/private keypairs are required, specific accounts can only be used to execute specific commands, and there are multiple levels of this protection going on (through iptables/ipchains/ipfwadm *and* the TCP wrappers and internal host ACLs in sshd itself, as well as with chroot jails, systrace wrappers, etc.). I routinely lock down my servers to permit only a couple of specific "management stations" to access their ssh services, and I set up liaison systems, similarly restricted, for transferring data to and from "business partners" (or vendors and customers).
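
A condensed sketch of that kind of sshd lockdown; the addresses, user names and forced command are hypothetical, while the directives themselves are ordinary OpenSSH and tcp_wrappers configuration:

# /etc/ssh/sshd_config: keys only, no root logins, named users only
PermitRootLogin no
PasswordAuthentication no
AllowUsers backupuser adminuser

# /etc/hosts.allow: sshd reachable only from two management stations
sshd: 10.0.0.11 10.0.0.12

# ~backupuser/.ssh/authorized_keys: this key may run one command only
command="/usr/local/bin/run-backup",no-port-forwarding,no-pty ssh-rsa AAAA... backup@mgmt1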

That's a simple example, but it means that a bug in sshd can't spread directly from an arbitrary attacker or compromised system into my systems. The liaison systems become vulnerable only after one of my partners, vendors or strategic customers is compromised, and even then the vulnerability is still limited to a chroot jail and a non-root account (probably), while my servers are secure until my internal management stations are compromised. In other words, it's easy to put in these "firestops." (In the construction trades a firewall is a wall that extends from the foundation to the roof and from one exterior wall to another; a firestop is a block of material, usually wood, between studs within a wall to limit and slow the spread of a fire up through the inside of a wall --- it reduces the drafting through wall segments which would otherwise act as chimneys, drawing oxygen more quickly and causing the fire to burn hotter.)

By simply limiting the peers for protocols like ssh, SNMP, and NTP, we mitigate some of the risks that these privileged pockets of software monoculture pose. That's why I refer to them as firestops.

As another example, we generally configure a small set of border NTP servers which can only receive NTP peering traffic from a limited set of external sources; sometimes we isolate those with a "protocol lock" by using rdate across the firewall rather than NTP. Then the interior systems only use NTP to the internal NTP servers. In any case, only a limited number of external hosts could exploit an NTP bug to compromise a border system. If a protocol lock is employed, then a different bug (one that can propagate through this alternative time synchronization protocol) must be used to get through it. Even without that, the system administrators have time to be alerted to the problem and to temporarily block that service, update the border systems, etc. (Also, it's notable that there are patches to the stock xntpd that use the Linux "capabilities" (from the POSIX.1e draft proposal) to allow that daemon to run as "nobody" with only the permissions necessary to adjust the system time --- it's also possible to write a trivial wrapper around xntpd using the lcap/lcap2 package, with sucap or execcap.)
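
A sketch of the border-NTP filtering described above, with placeholder addresses standing in for the site's real peers and interior network:

# border NTP server: accept NTP only from two upstream peers and the inside
iptables -A INPUT -p udp --dport 123 -s 192.0.2.10 -j ACCEPT
iptables -A INPUT -p udp --dport 123 -s 192.0.2.20 -j ACCEPT
iptables -A INPUT -p udp --dport 123 -s 10.0.0.0/8 -j ACCEPT
iptables -A INPUT -p udp --dport 123 -j DROP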

Using systrace to jail programs like xntpd, BIND (named), etc. is even better, somewhat easier, and is portable to OpenBSD/NetBSD and Mac OS X
as well. I would like to see it deployed more widely and adopted as a tool across all versions of UNIX.

Granted, there are far too few sites, distributions and software packages making use of all these techniques and options. However, the features exist for Linux, the knowledge is readily available, and the tools are free for the taking.

Jim

Mainstream means more malicious code for Linux (SearchSecurity.com)

Posted Mar 16, 2004 9:20 UTC (Tue) by beejaybee (guest, #1581) [Link]

Ho, I suppose Windows never had the Blaster worm, and viruses introduced to Windows systems through insecure e-mail are incapable of infecting other Windows systems which share folders located on a Linux server running Samba.

The fact of the matter is, everything built by Man (or Woman) is imperfect; there will always be security weaknesses, and some of them will inevitably be exploited in the real world. Nevertheless, Windows has always had a "security" policy based on obscurity and a pack-everything-in-and-obfuscate (via deep application links into the kernel) design philosophy. The open source model (make source code available for peer review, make every application do one thing only but do it well, maintain a clear distinction between OS kernel and application) is so obviously better that I really don't see that there is much of a case to answer.

However, despite this wonderful piece of FUD, we can't be complacent - software designers need to "think secure", and the peer review really needs to be done, not just paid lip service to.

Mainstream means more malicious code for Linux (SearchSecurity.com)

Posted Mar 16, 2004 9:45 UTC (Tue) by jwharmanny (guest, #971) [Link]

I think the 'more widespread == more viruses' theory isn't always true. For example, there are far more Apache webservers online than IIS webservers, but the latter is much more often compromised.

Of course, securing a system connected to the internet requires more effort than simply accepting the default firewall settings. Maybe Windows' point&click interface for configuring (internet-)services made it too easy for sysadmins to deploy an insecure system.

Email viruses are not a threat to Linux-based desktops, for several reasons:
- There is no single email application for Linux, like Outlook and Outlook Express for Windows. Most people seem to use Ximian Evolution, but I don't know how many. So all the viruses that specifically exploit Outlook vulnerabilities (like hiding filename extensions, or buffer overflows in the preview pane) won't affect alternative email clients. Some email viruses use Outlook's address book to forward themselves, but this would be very hard to do on Linux because it lacks a common address book service (though the Gnome project seems to be working on one).
- Even if a virus did run, only the user's home directory (and maybe a few others, like /tmp, which can be emptied anyway on every reboot) could be affected. It is easy to set up a new, clean account, transfer all documents (non-binary, human-readable document formats are very hard to turn into virus carriers) to the new account, copy the program settings and email, and delete the infected account. This could easily be automated using a friendly wizard.

Most virus-scanner companies will find a way to sell their stuff anyway. Most people know that running a decent antivirus program on Windows is important, and wouldn't trust a system without it.


Copyright © 2004, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds