
NetBSD 6.1

The NetBSD Project has announced NetBSD 6.1, the first feature update of the NetBSD 6 release branch. "It represents a selected subset of fixes deemed important for security or stability reasons, as well as new features and enhancements." See the changelog for details.

NetBSD 6.1

Posted May 20, 2013 6:09 UTC (Mon) by pranith (subscriber, #53092) [Link] (40 responses)

I am sure it does not make any sense to compare the *BSDs vs Linux in terms of features or performance, since the BSDs lag (severely?) in the manpower department. Still, I would really appreciate it if anyone could point me to a performance comparison between the various BSDs and Linux (and please, Phoronix does not qualify).

NetBSD 6.1

Posted May 20, 2013 7:09 UTC (Mon) by Cyberax (✭ supporter ✭, #52523) [Link] (17 responses)

Weeeeell, Phoronix did quite a few comparisons. Linux quite predictably wins.

And also, why do people hate Phoronix? It's usually pretty accurate.

NetBSD 6.1

Posted May 21, 2013 20:45 UTC (Tue) by drag (guest, #31333) [Link] (16 responses)

> And also, why do people hate Phoronix? It's usually pretty accurate.

Phoronix's benchmarking on anything other than graphics sucks.

NetBSD 6.1

Posted May 21, 2013 20:49 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link] (15 responses)

Why?

I've replicated several of their benchmarks in the past (particularly the PostgreSQL benchmarks) and got similar-looking results.

NetBSD 6.1

Posted May 22, 2013 16:37 UTC (Wed) by nix (subscriber, #2304) [Link] (14 responses)

These are people whose filesystem benchmarks involve big compile runs: they don't even comprehend the difference between CPU- and I/O-bound processes, let alone more subtle things like benchmarks that turn out to be measuring nothing but the shape of the cache hierarchy rather than whatever they were aimed at. Their benchmarks might as well be written by throwing random code at a wall and measuring random metrics. Sometimes it's useful, but a lot of the time it's useless, or actively misleading.

NetBSD 6.1

Posted May 22, 2013 20:46 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link] (13 responses)

Compile benchmarks are performed with the same starting conditions. They are repeated several times and the variance between runs is usually small.

Now, the _reasons_ for the speed difference are another question. But the testing methodology is sound.

NetBSD 6.1

Posted May 23, 2013 9:27 UTC (Thu) by nix (subscriber, #2304) [Link] (12 responses)

The testing methodology is ridiculous. If you're trying to test something filesystem-related, don't do something almost totally CPU-bound: you won't be testing the filesystem at all! Heck, they don't even pass -frandom-seed last I saw, so the CPU usage is going to vary between runs in any case, even if things like cache utilization were constant (which there is no *way* they will be: they're going to hugely dominate filesystem I/O in any reasonable analysis).
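
To make that concrete, the sort of back-of-the-envelope check I have in mind looks roughly like this -- a minimal sketch, not Phoronix's harness, assuming only that gcc is on the PATH; the trivial source file and the flags are purely illustrative:

    # Sketch: time repeated compiles of the same tiny file, with and without a
    # pinned -frandom-seed, to see how much run-to-run variation the compiler
    # itself shows before any filesystem effect can possibly matter.
    import os
    import statistics
    import subprocess
    import tempfile
    import time

    SOURCE = "int main(void) { return 0; }\n"

    def time_compiles(extra_flags, runs=10):
        times = []
        with tempfile.TemporaryDirectory() as d:
            src = os.path.join(d, "t.c")
            exe = os.path.join(d, "t")
            with open(src, "w") as f:
                f.write(SOURCE)
            for _ in range(runs):
                t0 = time.perf_counter()
                subprocess.run(["gcc", "-O2", *extra_flags, "-o", exe, src],
                               check=True)
                times.append(time.perf_counter() - t0)
        return statistics.mean(times), statistics.stdev(times)

    for flags in ([], ["-frandom-seed=0"]):
        mean, dev = time_compiles(flags)
        label = " ".join(flags) or "(default)"
        print("%s: mean %.1f ms, stddev %.1f ms"
              % (label, mean * 1000, dev * 1000))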

NetBSD 6.1

Posted May 23, 2013 12:39 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link] (11 responses)

What is ridiculous about measuring compilation speed? It's a very relevant real-world task, and if the speed variation between kernels is significant (and it often is), then that's a good reason to try to find out why.

You might also note that they ALSO test pure CPU-bound tasks.

NetBSD 6.1

Posted May 24, 2013 22:43 UTC (Fri) by lsl (guest, #86508) [Link] (9 responses)

> You might also note that they ALSO test pure CPU-bound tasks.

Yep, they do. They use them to show that GNU Hurd is just as fast as Linux. Compute-bound number-crunching software seems like just the right stuff for measuring performance differences between kernels.

NetBSD 6.1

Posted May 24, 2013 22:48 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link] (8 responses)

Sure. But sometimes there ARE significant differences and it's interesting to check why.

Besides, you're inconsistent with nix - his gripe is that Phoronix also uses complex tests that stress various parts of the system :)

NetBSD 6.1

Posted May 27, 2013 21:33 UTC (Mon) by nix (subscriber, #2304) [Link] (7 responses)

No, my gripe was the same as lsl's: that they were attempting to test one component (e.g. the filesystem) using a test that didn't stress the filesystem at all, but imposed massive stress on numerous other components. Any result from such a test will be drowned in noise at best and actively misleading at worst. More benchmarks are not always better, if the benchmarks are bad enough.

NetBSD 6.1

Posted May 27, 2013 23:48 UTC (Mon) by Cyberax (✭ supporter ✭, #52523) [Link] (6 responses)

They ALSO have pure tests of filesystems. What's the problem?

Besides, sometimes these tests show that there IS a significant variation between filesystems (even though there shouldn't be). Which is very nice in itself; it's kinda like 'assert' statements in C/C++ - they are mostly useless, but they help immensely when they do trigger.

NetBSD 6.1

Posted May 28, 2013 20:31 UTC (Tue) by nix (subscriber, #2304) [Link] (5 responses)

The problem is that running a compiler (or any job that is CPU-bound to that extent) is *not a valid benchmark* of a filesystem, and including it in a benchmark ostensibly of a filesystem does nothing but add noise to the benchmark results. No halfway-competent benchmarker would do anything of the kind, but then the Phoronix people are remarkably incompetent benchmarkers. (As has been widely noted.)

NetBSD 6.1

Posted May 29, 2013 7:14 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link] (4 responses)

Except that filesystem activity is a significant part of compilation. There really are differences in compilation speed caused by filesystems.

NetBSD 6.1

Posted May 30, 2013 17:59 UTC (Thu) by nix (subscriber, #2304) [Link] (3 responses)

Really? I dare you to find any such differences which are not hugely dominated by cache variations and other related things (CPU-bound, anyway).

Not even reading header files is *that* expensive. Particularly not if you're doing repeated runs, which you have to for any statistical validity at all: you'd have to clear the caches out between each run, and if you do that you throw out the compiler image as well, which is far larger than all the header files put together. So, at best, all you'll be benchmarking is how fast the filesystem is at reading in the compiler. In a *real* compilation, this will only happen once across a multi-file run (and, equally, most headers will be read once, then cached across a multi-file run: what costs is *parsing* them repeatedly), so even the filesystem effects are measuring something useless.

I maintain that this is a classic example of an at-best-useless benchmark.
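
For concreteness, the cold-cache, repeated-run discipline I'm describing looks roughly like the sketch below -- it assumes Linux and root (for the page-cache drop), and the workload at the bottom is only a placeholder:

    # Sketch: run a command several times with the page cache dropped before
    # each run, so that successive runs measure the filesystem rather than RAM,
    # and report mean and standard deviation across runs.
    import statistics
    import subprocess
    import time

    def drop_caches():
        subprocess.run(["sync"], check=True)
        with open("/proc/sys/vm/drop_caches", "w") as f:
            f.write("3\n")   # drop page cache, dentries and inodes (needs root)

    def bench(cmd, runs=5):
        times = []
        for _ in range(runs):
            drop_caches()
            t0 = time.perf_counter()
            subprocess.run(cmd, check=True, stdout=subprocess.DEVNULL)
            times.append(time.perf_counter() - t0)
        return statistics.mean(times), statistics.stdev(times)

    if __name__ == "__main__":
        # Placeholder I/O-heavy workload; a compile here would mostly measure
        # the CPU and the cache hierarchy, which is exactly the objection above.
        mean, dev = bench(["tar", "-cf", "/dev/null", "/usr/include"])
        print("mean %.2fs, stddev %.2fs over cold-cache runs" % (mean, dev))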

NetBSD 6.1

Posted May 30, 2013 19:16 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link] (2 responses)

Try to compile Android - you'll see a clear difference between filesystems.

And no, the cache doesn't always help. Particularly if your linker evicts most of it...

NetBSD 6.1

Posted May 30, 2013 22:33 UTC (Thu) by nix (subscriber, #2304) [Link] (1 responses)

So... I'm saying "you cannot benchmark a filesystem with a test that runs a compiler", and you're saying "try to compile Android -- you'll see a clear difference between filesystems".

You might observe that your answer has very little to do with the point I was trying to make, which is about the nature of benchmarking and repeatable tests. I do not contest that using a fast filesystem may make compiling slightly faster: I just suspect that it is nearly impossible to verify this in a sense meaningful to any other users using a benchmark (the simple 'ooh look it speeds up if I use filesystem X' does not count: how do you know the difference is not purely down to the fs implementation in one filesystem blowing out the L3 cache on *your particular test system* on simple operations while the other does not? Under high cache load such as triggered by compilers this sort of thing is routine), and that even if it were possible, there are many more sensible workloads to benchmark filesystems with than highly CPU- and memory-hierarchy-bound workloads like compilers.

NetBSD 6.1

Posted May 30, 2013 22:52 UTC (Thu) by dlang (guest, #313) [Link]

You are correct that a kernel compile is not a good filesystem benchmark.

It is, however, a pretty good overall performance benchmark. The user doesn't care whether the difference is due to the filesystem, the kernel, or some other change; the end result is that they are faster or slower after the change.

Microbenchmarks that only stress one component have their use, but they can also be horribly misleading.

A filesystem change that is faster, but only with a LOT more RAM available, may show up well in a benchmark that only stresses the filesystem, and show up really badly in a kernel compile test because it's using RAM that would be better used in other ways.

Phoronix doesn't only do the kernel compile test; they have a bunch of different tests that they perform, and they run all of them when they do any comparison. Some tests are expected to be more relevant to a particular set of changes than others, but you are sometimes surprised by a change affecting things you didn't expect it to (in either good or bad ways), so it's valuable to do a full set of tests. And with an automated test framework, it's also easier to do the full set of tests (a win-win).

Now, I am not defending their interpretation of the results of the tests. I've seen and given up arguing over some of their systemic mistakes (fsync-based loads producing impossibly high results, for example, without them acknowledging that this means the fsync can't possibly be taking place).

But I do think their choice of tests is fairly reasonable.
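
To make the fsync point concrete, the kind of crude sanity check I have in mind looks like this -- a sketch only; the file name, write size, and the rough ops/second threshold in the comment are illustrative, not anyone's published methodology:

    # Sketch: time a run of small write+fsync pairs.  If each supposedly
    # durable write appears to complete in a few microseconds on a spinning
    # disk, the data is almost certainly not reaching stable storage and the
    # benchmark's fsync numbers are meaningless.
    import os
    import time

    PATH = "fsync-probe.dat"
    N = 200

    fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    t0 = time.perf_counter()
    for _ in range(N):
        os.write(fd, b"x" * 4096)
        os.fsync(fd)            # should force the block out to the device
    elapsed = time.perf_counter() - t0
    os.close(fd)
    os.unlink(PATH)

    print("%d write+fsync pairs: %.3f ms each (%.0f ops/s)"
          % (N, elapsed / N * 1000, N / elapsed))
    # On a 7200rpm disk with no battery-backed cache, much more than a few
    # hundred such commits per second is a strong hint that the fsync is being
    # absorbed somewhere short of the platter.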

NetBSD 6.1

Posted May 27, 2013 21:35 UTC (Mon) by nix (subscriber, #2304) [Link]

Sure! It's a meaningful test. It's just not a meaningful test of filesystems, or, indeed, generally of kernels, unless you're intentionally running it out of memory during the test to look at working set size and swap performance. The only kernel change I can think of that sped up compilation *ever* by enough to detect without a huge number of runs (and -frandom-seed) is transparent hugepages. Compilation runs are not useful benchmarks for most kernel functionality. What they're useful for is benchmarking *compilers*. Which Phoronix was not doing.

NetBSD 6.1

Posted May 20, 2013 10:47 UTC (Mon) by HelloWorld (guest, #56129) [Link]

I think it's quite meaningless to compare the performance of operating systems unless you specify the kind of workload you're interested in.

NetBSD 6.1

Posted May 20, 2013 14:24 UTC (Mon) by rsidd (subscriber, #2582) [Link] (17 responses)

It used to be that FreeBSD was about performance, NetBSD portability, OpenBSD security. These days Linux is well ahead in the first two departments and perhaps comparable in the third. DragonFly BSD is the most interesting in terms of new ideas (a unique approach to MP and scalability, the HAMMER filesystem, etc). The BSDs are worth exploring if you want a different, somewhat minimalist approach and standard hardware requirements, or if you want more readily grokkable source code. But not if you're after performance or need a system that 'just works', especially on the desktop...

NetBSD 6.1

Posted May 21, 2013 18:05 UTC (Tue) by roman (guest, #24157) [Link] (4 responses)

I'm not sure I entirely agree on the portability comparison. I've recently run NetBSD on SPARC (32-bit, not UltraSPARC 64-bit) and Alpha, and I don't think there's much ongoing support for those architectures in the Linux world. NetBSD even runs on VAX, which isn't of much practical use these days but has some popularity with retrocomputing hobbyists.

On the other hand, Linux seems to be ahead in modern architectures, especially a wide range of ARM platforms.

I think it's more accurate to say that NetBSD and Linux have some different goals regarding portability.

NetBSD 6.1

Posted May 21, 2013 18:35 UTC (Tue) by ballombe (subscriber, #9523) [Link] (1 responses)

Debian still supports 32-bit SPARC officially. There is unofficial support for Alpha too.

There are a bunch of new 32-bit ISAs that are only supported by Linux, AFAIK.

NetBSD 6.1

Posted May 21, 2013 22:16 UTC (Tue) by andreasb (guest, #80258) [Link]

No, the last Debian release to support 32 bit SPARC was etch, which was released in 2007. There is the sparc port (32 bit userland on 64 bit kernel) and the unofficial (i.e. not yet part of the main archive) port sparc64 (64 bit userland and kernel). This may have been the source of confusion.

NetBSD 6.1

Posted May 21, 2013 22:40 UTC (Tue) by andreasb (guest, #80258) [Link]

It may be a bit hard to compare NetBSD and Linux in that regard, as NetBSD is a complete distribution. On the Linux side, the major distribution with the most ports is probably Debian.

If you go purely by kernel / toolchain support, NetBSD doesn't run anywhere where Linux doesn't also run with the exception of VAX. On the other hand Linux runs on plenty of architectures NetBSD doesn't run on.

NetBSD 6.1

Posted May 22, 2013 3:40 UTC (Wed) by creemj (subscriber, #56061) [Link]

The unofficial Debian Alpha port hosted at Debian-Ports was in quite good shape just before the release of Wheezy. We had just over 95% of the Debian archive built on Alpha, which is a substantially better statistic than GNU/Hurd. Desktops such as KDE and LXDE work well, but GNOME is broken (which I could not care less about). It has been a bit of a hobby of mine keeping this going for the last couple of years and a great opportunity to learn a lot more about Linux and about building Debian; however, other projects on modern hardware beckon and I may soon move on. Now that Debian unstable is unfrozen, we are in a precarious position, with numerous segfaults in the test suite of the new version of glibc which I now have less motivation to investigate... Unless someone else is interested in pitching in to maintain Debian/Alpha, its days are very numbered.

NetBSD 6.1

Posted May 22, 2013 0:40 UTC (Wed) by wahern (subscriber, #37304) [Link] (11 responses)

Linux is not comparable to OpenBSD in terms of security (or NetBSD, for that matter). Not by a long shot. Linux local root exploits are almost a semi-monthly occurrence at this point.

The difference between OpenBSD and Linux is conservatism. Linux has become a dumping ground for features. And although Linux clearly has some of the most talented engineers in the world banging away on it, the sheer volume of code dumped into the kernel on a regular basis cannot possibly be properly vetted.

I use Linux for application-specific, proprietary services because of its performance and compatibility, and for APT. But for general-purpose servers with more exposure--e-mail, XMPP, DNS, HTTP, shell, etc--I would never dream of using Linux. I personally choose OpenBSD. Their conservative approach means system maintenance has remained almost identical since I began using it 13 years ago.

Big organizations can get by using only Linux--just dedicate a server to every single service you provide, keep rigorous, day-by-day backups, and be prepared to rebuild or simply discard those servers which are root'd.

But unless you can hire dedicated personnel... forget about it. Choose something else, preferably NetBSD or OpenBSD if only because their skill and aspirations are more commensurate with their ability to analyze code for bugs--particularly security bugs, which are often overlooked when the feature itself seems to work.

NetBSD 6.1

Posted May 22, 2013 20:42 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link] (10 responses)

> Linux is not comparable to OpenBSD in terms of security (or NetBSD, for that matter). Not by a long shot.
Yup. Nobody cares about OpenBSD.

It's the ultimate Elusive Joe ("Why is he so elusive? Is he a great shooter, or does he have a fast horse? - Nah, nobody gives a damn about him.") in terms of security.

NetBSD 6.1

Posted May 22, 2013 21:15 UTC (Wed) by wahern (subscriber, #37304) [Link] (9 responses)

And Windows was once universally considered more insecure than Linux simply because it was more popular and a bigger target, and not because the Linux developers were more conscientious about code correctness and closing privilege escalation loopholes.

Curious how the story has changed over the last decade.

NetBSD 6.1

Posted May 22, 2013 21:54 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link] (8 responses)

Actually, no. Windows has for a very long time been about as secure as Linux.

The perceived insecurity was caused by users running untrusted software, and the classic 'desktop' Linux is not any better in this regard.

This XKCD explains it perfectly: https://xkcd.com/1200/

NetBSD 6.1

Posted May 22, 2013 23:31 UTC (Wed) by wahern (subscriber, #37304) [Link] (7 responses)

Your timeline is much too short.

Until Microsoft management made security a priority, Windows was a cesspool of exploitable bugs. And much of the IPC was broken by design. Linux was objectively better designed and implemented.

Windows is much better now because, as a Microsoft employee admitted, "We [Microsoft] started caring about security because pre-SP3 Windows XP was an existential threat to the business."

Unfortunately, the tables have turned. And non sequitur excuses just don't cut it, even when illustrated. Local exploits matter because breaking into web applications is absolutely routine. And that's why I never run generic web applications on Linux (unless the box is throw-away and doesn't contain sensitive data), because I can pretty much guarantee you that any particular Linux instance has a known local root exploit.

NetBSD 6.1

Posted May 22, 2013 23:41 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link] (6 responses)

Nope. Windows has always been about as exploitable as Linux (i.e. totally and almost continuously), if we don't consider the fundamentally insecure Win9x family.

>Windows is much better now because, as a Microsoft employee admitted, "We [Microsoft] started caring about security because pre-SP3 Windows XP was an existential threat to the business."
This is not about security from kernel exploits; it's more about protecting users from themselves, which is much more important in reality.

Microsoft also fixed lots of insecure defaults in applications (like allowing ActiveX), but that has nothing to do with the OS itself.

NetBSD 6.1

Posted May 26, 2013 10:10 UTC (Sun) by AndreE (guest, #60148) [Link] (5 responses)

In Windows XP, the default user had full, non-protected administrator access. Local privilege escalation exploits weren't needed. There was no UAC, sudo, or similar mechanism to request additional privileges when needed, and most programs (and development guidelines from MS) assumed this administrator access, so running a restricted account was not feasible without delving into the enterprise-focused security configurations.

So everyone ran as Administrator, making application exploits all the more potent.

Back then, the root/user separation was a pretty decent security advantage Linux had over Windows in terms of saving users from themselves (and their own applications). Of course, now, this isn't nearly enough.

NetBSD 6.1

Posted May 26, 2013 10:43 UTC (Sun) by Cyberax (✭ supporter ✭, #52523) [Link] (4 responses)

This XKCD explains it perfectly: https://xkcd.com/1200/

User/root separation on personal computers doesn't make much sense. At most, it might prevent malware from installing kernel-level rootkits; otherwise it doesn't really constrain malware at all.

NetBSD 6.1

Posted May 26, 2013 11:09 UTC (Sun) by rsidd (subscriber, #2582) [Link]

This XKCD explains it perfectly:

Well, for users who don't use a password-protected screen lock. Most Linux distros enable that by default and, therefore, most Linux users would use it, I assume...

NetBSD 6.1

Posted May 26, 2013 11:53 UTC (Sun) by AndreE (guest, #60148) [Link] (2 responses)

Yeah, I've seen that many times. Pretty simplistic treatment of the issues.

If computer security were limited to protecting my stolen laptop data, that graphic might be correct. But I am also concerned with making sure that other processes running on my machine are constrained in what they do, that other users on my machine are constrained, that my machine isn't partaking in illegal activity without my knowledge, and that my activity isn't being logged in real time by some unknown party. The ability to install drivers (or perform any other privileged system action) on your system is perhaps the first step needed for someone without the ability to simply snatch your laptop out of your hands to gain access to your bank account.

I mean, do you think the threat economy is based around stolen laptops? Or things like undetected data retrieval and machine hijacking, of which rootkits and secretly installed programs with system-wide privileges are still of huge concern?

Privilege separation is a pretty legitimate approach to OS security, and an ideal (but probably unattainable or unusable) system would constrain processes to only the resources they require to function. This is why we are seeing sandboxing and similar strategies starting to surface in mainstream OSes. Root/user separation is a form of privilege separation that clearly isn't comprehensive enough, but it nonetheless beats having every process run at the highest privilege level.
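
As a small illustration of that principle (a sketch only -- it assumes a POSIX system and a process started as root; the port number and the 'nobody' account are just examples, not any particular OS's sandboxing mechanism):

    # Sketch of privilege separation: acquire the one privileged resource the
    # process needs, then permanently drop root, so that a later compromise of
    # the process runs as an ordinary user rather than as Administrator/root.
    import os
    import pwd
    import socket

    def drop_privileges(username="nobody"):
        pw = pwd.getpwnam(username)
        os.setgroups([])        # shed supplementary groups first
        os.setgid(pw.pw_gid)
        os.setuid(pw.pw_uid)    # irreversible once a non-root uid is set

    # Privileged step: binding a port below 1024 requires root.
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("0.0.0.0", 80))
    listener.listen(16)

    # From here on the process can no longer write to system directories,
    # install drivers, or read other users' files.
    drop_privileges()
    print("serving as uid", os.getuid())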

It's funny that you suggest it "at most" eradicates rootkits -- rootkits and other forms of deeply hidden/undetected malware were the absolute bane of Windows XP. In Windows XP they hardly had to be sophisticated, since your browser or email client (which launched the cute e-card) could write to the system32 directory, edit your hosts file, or even modify the registry.

Root/User might not be enough, but it surely beats out root-only.

NetBSD 6.1

Posted May 26, 2013 20:31 UTC (Sun) by Cyberax (✭ supporter ✭, #52523) [Link]

>If computer security was limited to protecting my stolen laptop data, that graphic might be correct.
It's also true for most malware infections.

>But I am also concerned with making sure that other processes running on my machine are constrained in what they do
No, you are not. Malware can easily inject itself into all of your processes using ptrace. And even without injection, it can steal your data, send spam to your contacts, use your computer as a backdoor into your corporate network, and so on.

We are only now starting to get real application isolation; it was not available back in pre-SP2 times.

>that other users on my machine are constrained,
Again, most computers have exactly one user: the owner. Maybe a couple more in the case of a shared family machine.

So Linux (or OpenBSD) has not been intrinsically more secure than Windows; it just has never been a target for similar attacks. Had it been a target, we'd have seen tons of "Linux is insecure!" messages.

NetBSD 6.1

Posted May 31, 2013 20:27 UTC (Fri) by khim (subscriber, #9252) [Link]

I mean, do you think the threat economy is based around stolen laptops? Or things like undetected data retrieval and machine hijacking, of which rootkits and secretly installed programs with system-wide privileges are still of huge concern?

Are we talking about spherical cows in space, or about the real threat economy? A typical scenario on Windows involves a hijacked IM account which sends malware links to all your friends. They click on the link, see the dancing pigs, and then, a day or two later, they see a message explaining that all their documents are encrypted and they must send a few bucks to receive the decryption key.

Now, what part of this process requires rootkits or system-wide privileges? Many such Windows ransomware programs don't even bother with such trivialities and are written in Visual Basic! Will Linux really protect against such a threat? The XKCD strip is right, unfortunately: all systems have completely bombed on security in the modern world. Android is the only Linux distribution I know of that even tries to solve this problem (although it's not all that successful). iOS also tries to solve it (although it comes at it from a totally different direction). Desktop Linux? Fuggetaboutit.

NetBSD 6.1

Posted May 21, 2013 11:19 UTC (Tue) by aristedes (guest, #35729) [Link] (1 responses)

As a long-time user of various Linuxes and FreeBSD (from about version 3), I can say that for my own work, FreeBSD is superior in many ways. My own work is entirely server-side, so I have no experience with GNOME/KDE or desktops. My understanding is that Linux's superior support for consumer hardware and video cards makes it a better choice there.

On the server, performance differences between FreeBSD and Linux are insignificant, and on modern hardware it is mostly not worth worrying about 2-3% of CPU load either way. Phoronix has pages which will tell you that running games under Linux emulation on FreeBSD is faster than running them natively on Linux, and other pages which tell you the opposite. I'd ignore all that as pretty much irrelevant.

What is important for me are the other things:

* user community
* stability of upgrades from one release to another (and very conservative changes)
* stability of ports system and ease of managing/maintaining packages
* ZFS

I can't imagine using a server without ZFS, and for me that alone makes all the difference. I think the Linux ZFS implementation is getting there, and btrfs will one day also be production-ready...

Finally, you suggest that the BSDs lag in manpower. I don't think that is true in key parts of the kernel like networking or scheduling, but it would certainly be true if you look at consumer hardware peripheral support. Companies like Juniper rely heavily on BSD for major networking appliances, and interesting new work is being done by Cambridge University and Google on the Capsicum security framework [1].

All I can suggest is try one of the BSDs and see what you think.

NetBSD is probably really nice too (after all that is what this article is about), but I know much less about it.

[1] http://www.cl.cam.ac.uk/research/security/capsicum/

NetBSD 6.1

Posted May 23, 2013 6:34 UTC (Thu) by Teho (guest, #86286) [Link]

>Phoronix has pages which will tell you that running games under Linux emulation on FreeBSD is faster than running them natively on Linux.

In this case the Phoronix benchmark was utter garbage. They had previously released a benchmark of Unity, KDE, GNOME Shell, etc. that showed KDE being considerably faster than Unity [1]. The benchmark in question, meanwhile, was between FreeBSD with a KDE desktop and Ubuntu with a Unity desktop [2].

[1] http://www.phoronix.com/scan.php?page=article&item=li...
[2] http://www.phoronix.com/scan.php?page=article&item=li...

NetBSD 6.1

Posted May 21, 2013 15:33 UTC (Tue) by pranith (subscriber, #53092) [Link]

This seems current, but I am not sure how valid the test is... http://www.unix-experience.fr/2013/2451/#sthash.yRRR4B8d....

