
Daniel Bernstein: ten years of qmail security

Daniel J. Bernstein has posted a paper looking back at the security of qmail [PDF], ten years after 1.0 came out. "In retrospect, some of qmail's "security" mechanisms were half-baked ideas that didn't actually accomplish anything and that could have been omitted with no loss of security. Other mechanisms have been responsible for qmail's successful security track record. My main goal in this paper is to explain how this difference could have been recognized in advance--how software-engineering techniques can be measured for their long-term security impact."


Daniel Bernstein: ten years of qmail security

Posted Nov 3, 2007 15:35 UTC (Sat) by TRS-80 (guest, #1804) [Link] (12 responses)

I find security much more important than speed.
I wonder if he finds the mythical security of qmail 1.03 more important than the actual insecurity caused by the license and failure to update the code to build on current systems and features like STARTTLS, leaving qmail users to play patch roulette.

Daniel Bernstein: ten years of qmail security

Posted Nov 3, 2007 15:50 UTC (Sat) by CyberDog (guest, #29668) [Link]

This.

I don't claim to be a mail expert by any means, but having recently entered the vast and
potentially scary world of open source mail servers, I was given to doing my fair share of
research on what was out there these days.  I'd run into qmail in a previous job, so it wasn't
entirely foreign to me.  Its age and blatant lack of modern features made it a non-option as
far as I was concerned.  Even the most common extensions like STARTTLS are nowhere to be found,
and as far as I'm concerned, that's required security these days.  Of course it's much easier
to write secure -code- when it's completely minimal, but like someone posted recently in
another article (OpenBSD maybe?), there's a decent and widening gap between
secure-by-any-means and practical-for-daily-use.

Daniel Bernstein: ten years of qmail security

Posted Nov 3, 2007 17:38 UTC (Sat) by ArbitraryConstant (guest, #42725) [Link]

Yes. By far the most important criterion for me in choosing software is how much effort it
takes for me to deal with it. This is particularly true of mail, which is a hard problem under
the best of circumstances.

I find Qmail prohibitively time-consuming to deal with. Even if it's better than other
choices, it's not sufficiently better to justify itself in the vast majority of cases.

qmail doesn't *need* any patches

Posted Nov 3, 2007 20:45 UTC (Sat) by charlieb (guest, #23340) [Link] (9 responses)

> and failure to update the code to build on current systems ...

qmail builds on all modern systems. It requires a simple build-time configuration item to
build on Linux:

echo gcc -include /usr/include/errno.h > conf-cc

> and features like STARTTLS

Use qpsmtpd instead of qmail-smtpd.

qmail doesn't *need* any patches

Posted Nov 3, 2007 21:04 UTC (Sat) by CyberDog (guest, #29668) [Link] (6 responses)

> Use qpsmtpd instead of qmail-smtpd.

Well... then you're not really using qmail's code, at least in part.  So any benefits inherent
in the codebase would seem a bit moot, no?

qmail doesn't *need* any patches

Posted Nov 4, 2007 0:31 UTC (Sun) by njs (subscriber, #40338) [Link] (2 responses)

Esp. since qpsmtpd, though it's written in Perl, appears to be built on Apache -- so you have
another big chunk of C code talking to the network.  (Apache's C is far better than
traditional sendmail's C, but it still in no way comes close to meeting DJB's requirements.)

qmail doesn't *need* any patches

Posted Nov 4, 2007 1:23 UTC (Sun) by xanni (subscriber, #361) [Link] (1 responses)

qpsmtpd is not "built on Apache".  One supported mode of operation is to run it under Apache
on the basis that many sites are already running Apache anyway and it is well-understood and
supported, but qpsmtpd has always had and continues to support several other modes of
operation including running under djb's daemontools or even under xinetd.

qmail doesn't *need* any patches

Posted Nov 4, 2007 5:52 UTC (Sun) by njs (subscriber, #40338) [Link]

FWIW, I didn't mean 'built on Apache' as in 'runs as part of an Apache HTTPD'; Apache is in
part a very nice framework for writing generic server apps these days.  (Maybe this is
technically part of APR, I haven't followed where exactly they're drawing that boundary.)

On a further look, though, I see that you're right, when qpsmtpd is not running under httpd,
it uses a different home-brew network framework rather than APR.  I was misled by looking at
the first anti-malware plugin linked on their homepage:
  http://svn.perl.org/qpsmtpd/trunk/plugins/check_earlytalker
which contains a bunch of code using APR -- but it turns out that's because there are two
copies of all that code, one that works when being run under Apache and one that works with
the home-brew.  I don't know how typical this is of qpsmtpd's codebase, but it doesn't strike
me as The DJB Way either.

(If I were them, I'd consider just using Apache in all cases, even if it is a big hunk of
scary C that makes baby DJB cry, but I don't actually know what I'm talking about so *shrug*.)

qmail doesn't *need* any patches

Posted Nov 4, 2007 3:08 UTC (Sun) by charlieb (guest, #23340) [Link] (2 responses)

> So any benefits inherent in the codebase would seem a bit moot, no?

No. The only benefits which would be moot would be those which reside only in qmail-smtpd.
Logic 101, no?

qmail doesn't *need* any patches

Posted Nov 4, 2007 5:30 UTC (Sun) by CyberDog (guest, #29668) [Link] (1 responses)

The "benefit" alluded to here was djb's secure codebase.  As soon as your arrangement requires
[original codebase] + [random 3rd party codebase(s) tacked on], the security of the final
product becomes only as secure as the weaker of the two (or three or more) codebases.  It
could even be argued that a product which incorporates all the required features into a single
codebase, if written by even moderately competent programmers, could be less risky than
merging multiple products into one.

qmail doesn't *need* any patches

Posted Nov 4, 2007 14:04 UTC (Sun) by alankila (guest, #47141) [Link]

Interestingly, djb's paper talks about maintaining security expectations even in the face of
having to run untrusted, random codebases as part of a secure application.

The basic idea is compartmentalization: for each component (especially those from a third
party) you should clearly define the input and the output, and set up access and resource
restrictions under which the component must operate. Finally, after it does its job, you
shouldn't trust it, but do some validation to check the sanity of the result.

For instance, if the purpose of the component were to extract the recipient address of an
email, then the component may only read the email, must produce one string as its response,
cannot access anything outside that email, and has to run in limited time and memory. Once
something comes out, it must look like an email address, for instance matching the famous
RFC 822 pattern.

To achieve this, one might have to run untrusted components under a virtual machine and/or use
the operating system's primitives to constrain CPU, memory, available system calls, etc. I'm
not sure how well Linux can do these things, but the basic idea is that it should be possible
to run even completely random code safely, provided that these relatively simple constraints
are worked out first.

qmail doesn't *need* any patches

Posted Nov 4, 2007 23:19 UTC (Sun) by job (guest, #670) [Link]

To be fair, these days you use netqmail, which automates the patch-and-build process. There
is STARTTLS support too, although I think that's still separate from netqmail, which doesn't
want to stray unnecessarily far from the "pristine" sources.

qmail doesn't *need* any patches

Posted Nov 5, 2007 19:32 UTC (Mon) by dvdeug (subscriber, #10998) [Link]

That's not a simple build-time configuration item; that's patching the source.

Many good points

Posted Nov 3, 2007 18:50 UTC (Sat) by epa (subscriber, #39769) [Link] (57 responses)

I'll leave others to discuss the pros and cons of the qmail software.  In the general points
he makes about security and bugs, djb is right on the money.

For example in this day and age why do we still tolerate or encourage language semantics where

    x = y + 1

could result in x either having a larger value than y or a smaller value, depending on what y
contains?  Who decided that

    a[55]

should have undefined behaviour if the array a has less than 55 elements allocated?  Surely it
would make more sense to do something safe instead of something random.  Even aborting the
whole program would be better than the current silently-bizarre semantics of arithmetic
overflow or bad memory access.

There were once good reasons why unchecked arithmetic and unchecked memory access were the
default.  But with machines thousands of times faster than they were in the 1970s, as djb
says, the time is long overdue to get the code right first, and then worry about speeding it
up later.  A programming language's job should be to make it harder to write incorrect code.
There can still be unsafe_add() and unsafe_array_access() builtins for those who really need
them.

Many good points

Posted Nov 3, 2007 21:43 UTC (Sat) by pynm0001 (guest, #18379) [Link] (19 responses)

In most modern computer languages, a[55] does have deterministic behavior, even when the
array has fewer than 56 (ha!) elements.

However most UNIX code is in C, which does not (and without restricting 
the language, cannot) guarantee deterministic behavior in this case.  C++ 
is the same as C in this regard, if you continue to use C-style arrays 
rather than any of the gazillions of good container libraries (including 
the built-in STL).

Most languages I would imagine have the integer overflow problem.

Many good points

Posted Nov 3, 2007 22:59 UTC (Sat) by njs (subscriber, #40338) [Link] (12 responses)

> Most languages I would imagine have the integer overflow problem.

FWIW, Lisp dialects rarely do, and Python is many years into its transition to having
arbitrary-size integers by default (to be finished in Py3k).  There are probably others as
well.

Many good points

Posted Nov 4, 2007 0:51 UTC (Sun) by aquasync (guest, #26654) [Link] (11 responses)

Ruby will automatically transition from ints to Bignums as needed -- e.g. `ruby -e 'p 10 ** 100'`
will just work.

Many good points

Posted Nov 4, 2007 3:24 UTC (Sun) by drag (guest, #31333) [Link] (10 responses)

Well ya. 

But isn't that an example of 'dynamically typed'?
I mean Python can do that, no problem, and I suppose pretty much all dynamically typed
languages do that also (i.e. Visual Basic and Perl)

(but also python is strongly typed.. meaning that you can just use a string as a int and visa
versa (unlike VB, for example0)

$ python 
Python 2.4.4 (#2, Aug 16 2007, 02:03:40) 
[GCC 4.1.3 20070812 (prerelease) (Debian 4.1.2-15)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> type(10)
<type 'int'>
>>> type(10 * 100)
<type 'int'>
>>> type(10 ** 100)
<type 'long'>
>>> type(10.0 ** 100)
<type 'float'>

so on and so forth. 

But if you go...

>>> type(10.0 ** 1000)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
OverflowError: (34, 'Numerical result out of range')

Many good points

Posted Nov 4, 2007 3:30 UTC (Sun) by drag (guest, #31333) [Link]

Errr.

>  meaning that you can just use a string as a int and visa

I ment:

meaning that you _can't_ just use a string as a int and visa versa

Sorry.

Many good points

Posted Nov 4, 2007 6:32 UTC (Sun) by njs (subscriber, #40338) [Link] (8 responses)

> But isn't that example of 'dynamicly typed'?

Not really.  The particular implementation in python and ruby (but not, AFAIK, in py3k)
requires dynamic typing because you have the same operation returning two different types
depending on the values involved.  But one could just as well have a dynamically typed
language whose integer operations overflowed instead of passing to a bignum type, and a
statically typed language whose basic integer type didn't overflow because it actually *was* a
bignum type.

> I suppose pretty much all dynamically typed languages do that also (i.e. Visual Basic and Perl)

I know nothing about VB, but perl does something different and weird -- its integers seem to
overflow into floats:

$ python -c 'print (10 ** 100 == (10 ** 100 - 1))'
False
$ perl -e 'if (10 ** 100 == (10 ** 100 - 1)) { print "True\n" } else { print "False\n" }'
True

I guess most code that expects programming integers to act like mathematical integers will be
more surprised by an overflow to -2**31 than by loss of integer precision, but either would
make me nervous.

> (but also python is strongly typed.. meaning that you can't just use a string as a int and
visa versa (unlike VB, for example0)

"Strongly typed" is an annoying and vague term, but AFAICT it is usually used to mean "you
can't poke around at the raw representation of objects in machine memory using only ordinary
variable access operations".

Many good points

Posted Nov 4, 2007 7:28 UTC (Sun) by drag (guest, #31333) [Link] (7 responses)

Hrm. My experience is severely limited:

"Dynamically typed" always meant to me that the type is determined when the variable is
created, based on various rules. That is, variable types do not have to be declared before you
can use them (although they can be, if you prefer).

The opposite is "statically typed", where you have to declare variable types before using them.


And "strongly typed" has always meant, to me, that once the variable is created its type can't
be changed. That is, if you create a variable as an int you can only use it as an int. If you
want to use the int's value as a string you have to create a second string variable. The type
is enforced by the language.

The opposite of that is "weakly typed", where an int can be a string can be a float based on
the context in which it's used. That is, if you make an 'int' you can use it as a string if
you feel like it. The type is not enforced by the language.


So...

C == statically, weakly typed
Perl == dynamically, weakly typed
Java == statically, strongly typed
Python == dynamically, strongly typed


For example this is not legal in Python:

a = "1" + 1

The "" is your way of declaring that value as a string; otherwise numbers by themselves are
always interpreted as int, hex, float, and other numeric types.

But, of course, none of this is hard and fast. Between numeric operators it'll do type
coercion:

a = 0x45 + 1.0 + 12

(so that is a hex plus a float plus an int), and the result 'a' would be a float.

Or this could be a bit of an illusion, as all these things support the 'add' function. I don't
know.


Maybe this is because my knowledge of all this stuff is purely from a Python perspective, and
the language used there tries to help programmers understand the differences between other
common languages.


But otherwise, thanks for the clarification on the 'dynamically typed' question. That makes a
lot of sense.

Many good points

Posted Nov 4, 2007 9:30 UTC (Sun) by elanthis (guest, #6227) [Link] (2 responses)

To be blunt: nobody really cares what you think those terms mean.  They already have
well-defined meanings.  Look them up.

Many good points

Posted Nov 4, 2007 12:40 UTC (Sun) by drag (guest, #31333) [Link] (1 responses)

I did. That was my understanding. 

Many good points

Posted Nov 5, 2007 0:09 UTC (Mon) by k8to (guest, #15413) [Link]

Dynamically typed does not have anything to do with "when the variable is created"; it has to
do with the type of the variable being known at runtime, rather than at compile time.


Some corrections

Posted Nov 5, 2007 9:03 UTC (Mon) by flewellyn (subscriber, #5047) [Link] (3 responses)

"Dynamically typed" always meant to me that the type is determined when the variable is created, based on various rules. That is, variable types do not have to be declared before you can use them. (although they can be if you prefer it)

The opposite is "statically typed", where you have to declare variable types before using them.

No. The "dynamic" versus "static" in the typing terms mean solely this: at what time is the type of this variable known? If it's known at compile-time, the variable is statically typed. If it can't necessarily be known until runtime, that variable is dynamically typed. Whether or not you have to declare types ahead of time is mostly irrelevant. I say "mostly" because some languages that are statically typed have facilities for dynamic typing if you want it, and some dynamically typed languages can do static typing if you ask for it.

A number of languages, like Haskell and Boo, have static typing, but by default use "type inference" to determine the type of a variable. So you can declare (using Boo here):

x = 1

And the variable x is determined to be numeric, and an integer. You can't thereafter assign a string, a float, or a value or object of some other type to x, since it's been determined that the type of x is integer.

You can, in Boo at least, declare a type, in case you need to do something special, so if I had said:

x as float = 1

Then x would be a float, and the 1 would be interpreted as 1.0. Also, Boo has optional "duck typing" that you can use to defer type resolution for a specific variable until runtime. This is a good idea if you are assigning user input to a value, and don't necessarily know what type that input will be. (If you DO know, it's a good idea to declare the type, so that the compiler knows what to do with it.)

On the other hand, some languages that are dynamically typed by default, such as Common Lisp, have optional type declarations; when you declare a variable's type, that variable becomes statically typed, and the compiler is free to leave out the usual type checks, which can improve performance. Some CL implementations will also treat type declarations as assertions if you set the compiler's optimization settings a certain way, so that you can get the benefit of static type checking if you want and need it. (Strictly speaking, a CL implementation is free to ignore type declarations altogether according to the spec, so this behavior is entirely implementation-dependent.)

But the crucial point here is that "static" versus "dynamic" typing has everything to do with WHEN a type is known, and nothing really to do with HOW it's known.

And "strongly typed" has always meant, to me, that once the variable is created its type can't be changed. That is, if you create a variable as an int you can only use it as an int. If you want to use the int's value as a string you have to create a second string variable. The type is enforced by the language.

The opposite of that is "weakly typed", where an int can be a string can be a float based on the context in which it's used. That is, if you make an 'int' you can use it as a string if you feel like it. The type is not enforced by the language.

This is closer to correct, but still off. "Strongly typed" means that the VALUE'S type is strongly enforced: you can't add a string to an integer, or an integer to a character, without explicit casts, which may not work in any case (how do you coerce "Ich bin ein Berliner" to a numeric type?). You can have a strongly-typed dynamic language (Common Lisp), or a weakly typed static language (C).

The business of whether or not a variable can be rebound to a different type is a matter of static versus dynamic typing, not strong versus weak type safety. You can, in Common Lisp, bind a variable to a string value, then rebind it to a number, or a structure or class object, for that matter; just don't try to use string functions on the number. THAT'S strongly typed. (On the other hand, C will not let you assign a string value to an integer variable, but you could treat the int as a char.)

Some corrections

Posted Nov 6, 2007 10:09 UTC (Tue) by ekj (guest, #1524) [Link] (2 responses)

"Strongly typed" means that the VALUE'S type is strongly enforced: you can't add a string to an integer, or an integer to a character, without explicit casts, which may not work in any case

Well, that depends, now doesn't it? If your strongly typed language comes with method overloading, there's nothing stopping you from defining several add functions, say a "string add(int, string)" method. What, exactly, that would do would be up to you. In some contexts it could make sense.

In Python you can do: mystring = 10 * "-" + "Hello World" + 10 * "-"; the very same thing would be perfectly possible in, say, C++ -- any language with operator overloading, basically, regardless of whether the language is statically or dynamically typed.

Some corrections

Posted Nov 7, 2007 0:16 UTC (Wed) by flewellyn (subscriber, #5047) [Link] (1 responses)

That doesn't really change what I said, actually. While in some languages you can use "+" to mean string concatenation as well as addition, if the language is strongly typed, it will choose which operation to do based on the types of the arguments. And you may need to cast things anyway, such as if you want to concatenate a number's string representation with a string. I've had to do such casts in Python.

Some corrections

Posted Nov 8, 2007 9:24 UTC (Thu) by ekj (guest, #1524) [Link]

Sure. You're running completely different functions for int+int and int+string, it just so
happens that the two functions have the same name. They don't need to have anything in common
other than the name.


Many good points

Posted Nov 4, 2007 12:25 UTC (Sun) by epa (subscriber, #39769) [Link] (2 responses)

Hmm, you caught me out with 55 versus 56, but isn't it the case that in C it is legal to point
to one element past the end of an array (as long as you don't try to read or write the value
held there)?  So a[55] in an array of 55 elements is defined insofar as you can compare a
pointer to &(a[55]).

Many good points

Posted Nov 4, 2007 18:04 UTC (Sun) by pynm0001 (guest, #18379) [Link] (1 responses)

Well sure, you can construct a pointer to point pretty much anywhere you 
want as long as you don't dereference it (i.e. reading or writing).  
Making the element immediately following the end of the array special 
would mesh well with C++ iterators, where the end element is an iterator 
that cannot be dereferenced, always past the end of the data.

Pointers in C

Posted Nov 4, 2007 19:07 UTC (Sun) by tialaramex (subscriber, #21167) [Link]

You /can/ point anywhere but that isn't defined in the language and so your compiler might not
do what you expected. It so happens that the pointers are typically just hardware memory
addresses (virtual addresses on modern hardware) but they could be anything, and any false
assumptions you make in portable software could be expensive mistakes.

K&R says that pointers are only defined when they point /to/ something like an array element
or a variable. ANSI C improved on this by asserting that there is also a pointer value beyond
the end of an array which is larger than the pointer values for the elements of the array,
this means that...

while (pointer <= last_element) {
  /* do something */
  pointer++;
}

is well defined in ANSI C and does what you expect whereas it would have been legitimate for a
K&R C compiler to do something most unexpected, like set the pointer variable to zero once you
get beyond the end of the array.

Many good points

Posted Nov 8, 2007 20:32 UTC (Thu) by dvdeug (subscriber, #10998) [Link] (2 responses)

Why would most languages have the integer overflow problem? You can detect an integer overflow
at runtime and do something intelligent, like throw an exception. Even C as standardized
doesn't let you overflow an integer; it's undefined behavior, but wrap-around semantics are
assumed so often that optimizing based on it breaks many programs.

Many good points

Posted Nov 8, 2007 21:53 UTC (Thu) by pynm0001 (guest, #18379) [Link] (1 responses)

"can detect" is not the same as "will detect".  If the language does not 
throw an exception (or otherwise intelligently handle the problem) for an 
overflow then it has an integer overflow problem.

C is even worse simply because it is undefined.  Undefined behavior is 
not a good thing in a program which is supposed to be secure and 
bug-free.  The wrap-around behavior is not retained because of historical 
baggage; it's retained because that is the "optimized" form, i.e. the 
underlying hardware performs the addition and the result is wrapped 
around without checking beforehand whether the answer will fit.

Most processors have an "overflow" flag which can be set, but checking 
it after every addition is pretty much never done.

Many good points

Posted Nov 9, 2007 4:14 UTC (Fri) by dvdeug (subscriber, #10998) [Link]

And there's no reason for any language that doesn't play fast and loose close to the bare
metal not to detect it, which is why I questioned your assumption that most languages would
have an integer overflow problem.

No, it's not the optimized form. GCC added an optimization that, in loops, took advantage of
the fact that overflow is undefined and hence never happens in legal programs, and got a great
deal of flak for it.

Many good points

Posted Nov 3, 2007 23:13 UTC (Sat) by jwb (guest, #15467) [Link] (36 responses)

I don't understand why such things are tolerated either.  I know people who have iPhones,
where the sophistication of the software is equivalent to what we had in the 70's: bare-metal
code with a thin coat of object orientation painted on.  And they are ecstatic that they can
go to a URL to "unlock" their phones, and less than receptive to the idea that replacing your
operating system via HTTP is a flaw, not a feature.

Meanwhile I stick with my Blackberry, with all its software written in Java, neatly
compartmentalized and virtualized. It gives me a warm feeling.

Writing any program today which operates on network streams and is written in C is malpractice
in my opinion.

Many good points

Posted Nov 4, 2007 11:14 UTC (Sun) by i3839 (guest, #31386) [Link] (33 responses)

> Writing any program today which operates on network streams and 
> is written in C is malpractice in my opinion.

Sorry, but then you simply have no idea what you're talking about.

Let's just say that in a project I worked on there was a Java and a 
C part communicating with each other, and the Java part was horribly 
buggy, not the C part. In other words, it doesn't matter much what 
damn language you use; a bad programmer is a bad programmer.

Many good points

Posted Nov 4, 2007 11:44 UTC (Sun) by danieldk (guest, #27876) [Link] (1 responses)

Obviously it does matter. There is a difference between languages that will let you do
anything funny with a pointer, and those that prevent you from doing that.

Many good points

Posted Nov 4, 2007 14:16 UTC (Sun) by i3839 (guest, #31386) [Link]

People are resourceful; they just find other things to mess up.

A lot of funny things were done, but nothing with pointers.

Thanks for proving Bernstein right

Posted Nov 4, 2007 11:55 UTC (Sun) by man_ls (guest, #15091) [Link] (28 responses)

In other words, it doesn't matter much what damn language you use, a bad programmer is a bad programmer.
It matters a lot. Java code may be buggy, but you will never find pointer dereferences, buffer overflows or other stupid security bugs encouraged by the language (and sometimes the base libraries). Whatever bugs are there are just your own.

Thanks for proving Bernstein right

Posted Nov 4, 2007 14:26 UTC (Sun) by i3839 (guest, #31386) [Link] (27 responses)

There were no pointer-related problems, buffer overflows or other stupid security bugs at all
(surely you didn't mean to say that simply dereferencing a pointer is a bug? ;-). I'm really
baffled by people who think that handling pointers and buffers correctly is in any way hard or
difficult.

Every bug has the potential to be a security problem, one way or the other. My experience is
that there aren't fewer bugs in Java code than in C code.

Thanks for proving Bernstein right

Posted Nov 4, 2007 15:43 UTC (Sun) by jordanb (guest, #45668) [Link] (10 responses)

History has demonstrated that the weakly-typed pointer system in C *has* been the source of
many, many security bugs.

I can't speak to Java because I don't know much about it, but I *can* say that many modern
languages, probably beginning with Modula-2, have paid a lot of attention to the ways in which
programmers tend to make mistakes, and structured the language to either eliminate those
mistakes or at least recognize that a bug has been encountered.

Not allowing pointers/access types to address parts of the memory that haven't been explicitly
allocated as their dereference type is an example of the former. Raising an exception if an
integer overflows rather than just rolling over to zero and continuing like nothing happened
is an example of the latter.

I, personally, am becoming a fan of Ada. It combines strong/safe typing and one of the most
capable regimes of constraints and discriminants that I've seen with an incredible amount of
static analysis, such that a surprising number of mistakes are caught at *compile* time. 

It's certainly true that you can write bad code in any language. But some languages don't do
anything special to try to help you out. And some have some language design decisions that
seem to go out of their way to cause you problems. I can say that C is the only language I use
for which I really need a debugger. That says something, I think.

Even though it doesn't have memory pointers, another language that is not designed to help
prevent bugs is PHP. Just the other day I had a bug where a function returned a boolean and I
expected it to return an array. I went off and tried to use the "array" and my program failed
to operate as expected -- but with no warning or error as to what the problem was. As it was,
it took me half an hour to track down that rather trivial thing. In any real language, it
would have *told* me that the problem was that I was using a boolean object improperly, but
PHP shares C's anemic typing regime and thus is ripe for uncaught bugs.

Thanks for proving Bernstein right

Posted Nov 4, 2007 17:54 UTC (Sun) by i3839 (guest, #31386) [Link] (9 responses)

Yes, it certainly has. But more or less all of those can be summarized as "altering the return
address", often in combination with injecting malicious instructions. Compiling programs with
-fstack-protector helps against that, as do randomized address layout and non-executable
stacks/heaps. So the problem looks worse than it is, because e.g. every buffer overflow is
flagged as a security breach, while in practice it might be near impossible to actually
exploit it.

You might also be interested in the -fmudflap option, but I don't know anything about it. Same
for -ftrapv which causes the program to abort when a signed integer overflows. I tried it, and
it works, but not when optimization is enabled. :-/

And those low-level errors distract from the higher-level security problems in applications,
which are more often than not language-independent. It's incredible how many unsafe temp-file
handling bugs are still found, to name one thing. It's true that many libc functions aren't
very security-friendly (e.g. strncat), but by now people should know that and use the safer
alternatives.

The compiler could do much more compile-time checking too, and there are options to enable
checking a few things, but more could be done. For runtime checking, Valgrind is great. The
weak typing of C isn't as big a problem as it could be, thanks to compiler warnings.

The only thing I use a debugger for is to get backtraces. And the reason you need a debugger
for that in C and not in some other languages is that in interpreted languages the backtrace
is generated by the virtual machine.

I really hate PHP; let's not go near there. Any language where mistyping a variable name
causes weird buggy behaviour instead of a (compile-time) error is not worth existing. Weak
typing combined with automatic variable declaration/memory handling is just a nightmare.


Thanks for proving Bernstein right

Posted Nov 4, 2007 18:50 UTC (Sun) by man_ls (guest, #15091) [Link] (8 responses)

That is exactly what we don't want: that your code requires you to use obscure compiler flags (i.e. not enabled by default) or to avoid otherwise perfectly good functions (I assume you mean strcat()). C places the burden of secure programming on developers, where other languages solve many of these issues automatically.

Most security issues that actually have any impact are caused by stupid little things like these. Funny, isn't it?

Thanks for proving Bernstein right

Posted Nov 4, 2007 19:15 UTC (Sun) by i3839 (guest, #31386) [Link] (7 responses)

Well, assuming we're talking about open source here, it's more a distro's choice. But
programmers who know that their code is security critical and don't trust it enough should
indeed enable a few of those useful obscure compiler flags.

Oops, I gave the wrong example, I meant strncpy instead of strncat. The latter is indeed safe.

Thanks for proving Bernstein right

Posted Nov 5, 2007 3:59 UTC (Mon) by jordanb (guest, #45668) [Link] (6 responses)

Um, what's wrong with strncpy?

strncpy()

Posted Nov 5, 2007 5:35 UTC (Mon) by Ross (guest, #4065) [Link] (2 responses)

It fails to terminate the string in some cases, so you end up having to either make sure the
buffer is always bigger than the string (in which case you could use strcpy), or manually
terminate the buffer.

It's such a simple function, but it is still a horrible design.

strncpy()

Posted Nov 5, 2007 15:32 UTC (Mon) by nix (subscriber, #2304) [Link] (1 responses)

It's an excellent design for what it was meant for: filling in ancient 
Unix directory entries, which had exactly that format (14 byte max, 
null-terminated if shorter than that).

The mistake was putting it in the C library where people might be tempted 
to use it for other purposes. (See also that horrible pre-stdio function 
gets(), which I see no uses of other than wrapping in things like libssp, 
but which still can never be removed. At least it's hardly used anymore 
thanks to the warning you get whenever you use it: but strncpy() is used 
too much to warn about, and there's no decent replacement in libc, 
although writing one is a matter of five minutes' work.)

strncpy()

Posted Nov 8, 2007 6:31 UTC (Thu) by ncm (guest, #165) [Link]

snprintf works well enough.

Thanks for proving Bernstein right

Posted Nov 5, 2007 10:47 UTC (Mon) by epa (subscriber, #39769) [Link] (2 responses)

Use strlcpy() instead.

Thanks for proving Bernstein right

Posted Nov 8, 2007 6:35 UTC (Thu) by ncm (guest, #165) [Link] (1 responses)

strlcpy is not in POSIX and never will be.  It doesn't actually do what any sane person would
want, unless you don't really care what ends up in the destination string.  But if you don't
care, why call it at all?

Thanks for proving Bernstein right

Posted Nov 11, 2007 11:30 UTC (Sun) by renox (guest, #23785) [Link]

>strlcpy is not in POSIX and never will be.

So what? There are enough dumb specs in POSIX to show that it's not the ultimate reference in
programming.

> It doesn't actually do what any sane person would
want, unless you don't really care what ends up in the destination string.  But if you don't
care, why call it at all?

That's false: when you don't make a mistake the destination string is correct; when you do
make a mistake, then even if the destination string is incorrect, at least that isn't (normally)
a security issue, which is much better than what those other string copies provide.

Thanks for proving Bernstein right

Posted Nov 4, 2007 16:41 UTC (Sun) by man_ls (guest, #15091) [Link] (10 responses)

> surely you didn't meant to say that simply dereferencing a pointer is a bug? ;-)

Ehm, my C is a little rusty, but no :D I rather meant null pointer dereference, double dereference or whatever other strange things are allowed in C that lead to security problems.

> Every bug has the potential to be a security problem, one way or the other.

That is a belief originated by OpenBSD people which is not shared by many. "Potential" is a weak word, but anyway the potential security impact of many bugs is near zero. Some bugs are purely aesthetic, others just make things work wrong with no side effects. Some languages help keep side effects to a minimum, others don't.

Some security issues are not even bugs; failure to validate an input string may be an excess of confidence, but it cannot be considered a bug unless you assume the string might come from a hostile party. Most program specifications just say what should happen, not what should not happen.

> My experience is that there aren't less bugs in Java code than in C code.

Of course not, but I much prefer a NullPointerException to an undesired intrusion.

Bernstein right? Maybe, but Theo is too, mostly

Posted Nov 4, 2007 19:30 UTC (Sun) by tialaramex (subscriber, #21167) [Link] (7 responses)

No, the OpenBSD people are so close to absolutely correct that it's not worth calling the
difference. Bugs cause something unexpected to happen. If unexpected things happening was
acceptable then you wouldn't bother with security, just say "That was unexpected" when
anything bad happens.

Here's a nice simple example. You have a program which examines NTFS formatted hard disks to
check that everything on the disk is authorised by the company. The program is pretty simple,
but it has a small bug.

The bug is that it assumes all NTFS filenames are Unicode strings. This seems like a
reasonable assumption: most likely every NTFS filename you've ever seen was a Unicode
string, all the files you tested with have such names, and there isn't any way to "type in"
anything except Unicode strings as the name for a new file in Explorer or Word or similar
software, so why would you assume anything else?

But now you've created an incentive to construct files with non-Unicode names. Perhaps the
code sequence 0xFFFF 0xFFFE 0xFEFF 0xFFFF would be a good name for a file. Your buggy software
cannot convert this into a Unicode string, so it ends up in an exception handler that you
never realised could be called under such circumstances. The exception handler normally fires
when a file has been deleted before it can be examined, so it just tidies up and moves on to
the next file. So now the magic file is invisible to your software and you have a security
breach.

There are billions of assumptions like this, regardless of whether you're programming in LISP
or Fortran, and if any of them are wrong in a security sensitive application the security
probably doesn't work. Worse, the only people likely to find out have an incentive not to tell
you. That's why security is actually hard, although you wouldn't think it from all the Mickey
Mouse security consultants and 3rd rate security software.

Oh and yes, it turns out that although the Win32 APIs don't believe in files with non-Unicode
names, the underlying NT kernel, like the Linux kernel, considers them all to just be opaque
identifiers. Don't laugh too loud at the programmer who wrote one byte too many into an array,
you'll have your own foot in your mouth soon enough.

Security bugs

Posted Nov 4, 2007 21:43 UTC (Sun) by man_ls (guest, #15091) [Link] (6 responses)

> Bugs cause something unexpected to happen.

In the vast majority of cases, bugs cause something expected not to happen. You press the button, it doesn't work. These bugs normally don't pose security risks.

Examples are good for illustration, and yet often they are not so good for proof. In your example something expected does not happen (exception logging), and yet it poses a security risk. The key is in the part where you say:

> Your buggy software cannot convert this into a Unicode string, so it ends up in an exception handler that you never realised could be called under such circumstances.

Here is really where something unexpected is happening.

In short, the vast majority of bugs result in minor failures which don't compromise the application. Treating all bugs as having the same priority ("security critical") leads to the kind of version paralysis we can see in OpenBSD; the rest of the world just moves along.

Security bugs

Posted Nov 5, 2007 10:47 UTC (Mon) by tialaramex (subscriber, #21167) [Link] (5 responses)

“You press the button, it doesn't work. These bugs normally don't pose security risks.”

Sure, like Logout in a default Windows install. There's no security implication to an
apparently "logged out" machine still actually being logged in with your user privileges
right? It's just a minor usability bug, not even worth fixing in a security sensitive
environment really...

Fortunately these days Microsoft doesn't believe optimists like you, and so they provide an
override, you can force the session to actually end when the user clicks Logout. It's rare
that sensitive environments enforce this, but at least it's documented.

I'm sorry, but your whole thesis is wrong in principle. Every time you make a false assumption
in a security system the actual security of the system becomes an unknown.

Worse, it turns out to be wrong in practice as well. Every so often a very narrow, apparently
minor problem is found in some security sensitive component which vendors declare not to be a
security risk after some analysis. And almost inevitably this is taken as a challenge by
readers of Bugtraq and other less salubrious lists and the result is a working exploit. Not
always a model example, it may be hard to get working on common platforms, or it may require
some inside knowledge or even be only a probabilistic attack. But suddenly "No security
problem" has transformed into "Oops, critical security fix needed".

IIRC there's even an example of this happening to the Apache HTTP server, which has a lot of
very smart people working on it. The trouble is that the black hats only need to find one
hole, while the white hats need to find every hole in the entire system. It's an unequal
battle, but it's certainly not helped by pretending it's easier than it is.

Yes, there undoubtedly have been examples that really were impossible to exploit in the wild,
but distinguishing them from the other type I described above is so hard as to be not worth
the engineering effort to make the distinction. That's how OpenBSD is able to maintain any
momentum at all - they just fix the bugs rather than trying to figure out whether they can
ignore them safely.

Security bugs

Posted Nov 5, 2007 20:08 UTC (Mon) by man_ls (guest, #15091) [Link] (4 responses)

OK, so you (and de Raadt) can go on treating all bugs as potential security holes. Meanwhile I (and the rest of the world) will go on assigning severity and impact to bugs, programming defensively, using defense in depth and the rest of accepted principles of secure programming. Yes, sometimes we will leave holes open -- but so will OpenBSD, and so will everyone else.

Security bugs

Posted Nov 6, 2007 8:55 UTC (Tue) by tyhik (guest, #14747) [Link] (1 responses)

Your earlier posts in this thread make me ask the following. Is there an open-source project
you are regularly contributing code to? Let alone maintaining. I'd consider avoiding it if
possible. Sorry, no offence, just business :)

Security bugs

Posted Nov 6, 2007 9:53 UTC (Tue) by man_ls (guest, #15091) [Link]

Oh, I'm sorry; in case it wasn't obvious, I regularly contribute to a plethora of Free software projects, especially in C. I enjoy referencing and dereferencing every so often; whenever something doesn't compile I add *'s until it works. (I know, sometimes it's &'s, but it's hard to know beforehand.)

Just joking, I don't actively contribute to any Free software projects. On the other hand, in the last seven years I have developed a number of network-exposed services and maintained several public servers, and they have experienced zero intrusions. Sure, it was not terribly popular stuff, mostly research projects. But still.

> Sorry, no offence, just business :)

Not sure your business sense is too good. This makes me ask the following: is there any business investment you are regularly making? Let alone running. I'd consider taking all my money out. No offence, just business ;)

Security bugs

Posted Nov 6, 2007 21:44 UTC (Tue) by tialaramex (subscriber, #21167) [Link] (1 responses)

The fact that you still think

"treating all bugs as potential security holes"
and
"assigning severity and impact to bugs"

... are opposites is the source of your confusion. All bugs are security holes (or must be
assumed to be until comprehensively proved otherwise, which amounts to the same thing), but
that doesn't somehow magically make them all equally severe bugs or equally urgent to fix.

The important insight from Theo (who I respect but don't much like) is that we should fix
these lower priority bugs anyway, and find ways to avoid introducing new ones - because it's
easier than figuring out their security implications. The OpenBSD bug you referenced actually
illustrates this, even though in this case it was the OpenBSD team themselves who looked
foolish.

You might think it stands to reason that we should fix or prevent bugs, but actually our
resources are limited and there are other things we could do instead. Bill Gates argued fairly
convincingly that since customers / users don't notice bug fixes you should divert as much
engineering resource as possible to adding features instead. Theo's point makes it obvious
that this is a mistake, and Microsoft eventually concluded the same.

Security bugs

Posted Nov 7, 2007 1:21 UTC (Wed) by man_ls (guest, #15091) [Link]

Thanks for trying to make it clear, but it remains a mystery to me. Somehow we are expected to produce zero-bug code, or otherwise we may compromise the whole system. We assign priorities but in the end we are expected to solve all issues, so it doesn't really matter.

Thank God the original creators of Unix did not share this frame of mind; they tried to isolate security-sensitive parts. It was a lesson that Julius Caesar himself had learned in the Gallic wars: it doesn't matter if a few thousand enemies get through our outer defenses, we have enough layers that not much will get through all of them; and then we butcher those few. It was thus that he conquered an estimated 250,000 Gauls with just about 7500 men. Read it from the source if you have the time (book VII, chapter LXIII; or just search for "ditch").

And yes, it is still sensible practice to follow Caesar's advice and organize your application in layers (or compartments, or whatever) so you can defend in depth. Bugs in outer layers do not matter, at least not for security; bugs in just a handful of inner compartments must be watched carefully. Funnily enough that is what Bernstein seems to be asking for in his paper (which I will have to read carefully after all; I do not much like the guy anyway, just as you don't really like de Raadt). But it sure sounds like running applications in tight compartments, minimizing side effects.

Null pointer dereference is a crash, not a security bug

Posted Nov 5, 2007 15:26 UTC (Mon) by mheily (guest, #27123) [Link] (1 responses)

> Ehm, my C is a little rusty, but no :D I rather meant null pointer dereference, double
dereference or whatever other strange things are allowed in C that lead to security problems.

If a program attempts to dereference a NULL pointer, the program will be terminated
immediately with a SIGSEGV signal. This does not allow arbitrary code to be executed. A double
dereference is a perfectly normal and desirable condition in many programs, and the compiler
will catch double-vs-single pointer mismatches at compile time.

> Of course not, but I much rather prefer a NullPointerException than an undesired intrusion. 

Again, there is no way for a NULL pointer dereference to facilitate an intrusion since the
program will segfault instead of executing arbitrary code. 

Null pointer dereference is a crash, not a security bug

Posted Nov 5, 2007 17:51 UTC (Mon) by phiggins (guest, #5605) [Link]

A lot of Java programmers have gotten so rusty on their C that they can't remember how Java
saves them from these kinds of mistakes. It's actually the ArrayIndexOutOfBoundsException that
saves your bacon from memory corruption. Of course, Java programmers are often way too smug
and think that memory corruption problems are the only kinds of security bugs. It's very hard
to write an arbitrary code execution vulnerability in Java, but an unexpected and improperly
handled ArrayIndexOutOfBoundsException or NullPointerException could still violate the
security of your program. It will be more difficult to get shell access that way than with
arbitrary code execution, though!

The bigger concern is with the JVM implementation, which has had some vulnerabilities, but it
hasn't been nearly as bad as I expected it to be. Java really has done well in the
memory-related security area.

Thanks for proving Bernstein right

Posted Nov 4, 2007 17:54 UTC (Sun) by MattPerry (guest, #46341) [Link] (4 responses)

> I'm really baffled by people who think that handling pointers and buffers
> correctly is in any way hard or difficult.

Then you'll be equally baffled to know that it's still a problem for many programmers. Those
mistakes are the source of numerous problems, some of which are security related.

Thanks for proving Bernstein right

Posted Nov 4, 2007 18:04 UTC (Sun) by i3839 (guest, #31386) [Link] (3 responses)

If you read my previous post, I'm not baffled by that. My original point was that moving those
people to something else like Java only produces buggy Java code instead and doesn't solve the
problem of buggy code. You could even say that C has an advantage here because their code
won't work in the first place and would crash all over the place. ;-)

All right, that's perhaps a step too far, but it is indeed easier to make a big mess in C.
With Java you get "works for me, bug must be in your part"-code.

Thanks for proving Bernstein right

Posted Nov 4, 2007 22:43 UTC (Sun) by ms (subscriber, #41272) [Link]

The only way to help such programmers is to use languages which do proof carrying code. Then
you /know/ that if it type checks, some proof holds about the program. Whether or not that
proof means anything to you is for you to decide.

Thanks for proving Bernstein right

Posted Nov 8, 2007 19:49 UTC (Thu) by mrshiny (guest, #4266) [Link] (1 responses)

The thing is, the common memory related bugs in C are handled in the JVM:

1.  Reading/Writing invalid pointers: No way to use a pointer in Java without initializing it
to a valid object, no way to read past the end of an array, no way to read freed memory.
2.  Double-free: No manual memory freeing
3.  Memory leaks: Java memory leaks are more rare since unused objects are garbage collected.
You can still run into a problem where you have a cache of objects that is never cleared or
similar problems.

Sure, a bad programmer will write bad programs in Java where they don't check array sizes,
etc.  But let's say they do: if their program overruns its array Java will halt the execution
(well, throw an exception).  This prevents corrupting memory.  Also you never have dangling
pointers so you don't have to worry about "corrupt" memory which was re-used by something
else.

These bugs are hard to track down in C because a program may work for a while until the memory
bugs appear.  In Java it fails fast and safely.  This means you can concentrate on the real
issues at hand.  Your assertion that C code crashes quickly isn't totally accurate; it only
crashes quickly if you try to access memory that's not allocated... there are lots of other fun
ways to corrupt the memory before you crash.  I'd have to say that, in terms of the "memory
corruption" bugs, Java fails more quickly and 100% more safely than C.

Considering that the world is full of bad programmers, I'd rather they program in Java than C.

Let's take the argument further

Posted Nov 8, 2007 23:17 UTC (Thu) by man_ls (guest, #15091) [Link]

Imagine if some fellow said: "People should write only machine code (i.e. a string of hex values); after all, bad code is bad code, whatever its form, and in C you can still lose track of the execution point. Especially (but not limited to) if your code is full of GOTOs or it is very complex". The answer is obvious: don't use GOTOs and do not write complex code. Now imagine if you tried to have a meaningful discussion with a huge fan of machine code programming, who dismissed "modern" facilities such as pointers, variable names or structs. Or source code files. Or labels...

The real question (whatever die-hard C fans say) is: should we go even further, and create new languages with even more advanced facilities; or would it limit the expressivity of programmers too much? Where is that limit?

Many good points

Posted Nov 5, 2007 19:34 UTC (Mon) by dvdeug (subscriber, #10998) [Link] (1 responses)

If language doesn't matter, why do we bother creating them? Why don't we write in machine
language? High-level languages are here to make certain things trivial and hence bug-proof. If
you have to reinvent the wheel, to handle arbitrary-length strings for example, there are so
many more chances of getting things wrong, good programmer or bad programmer.

Many good points

Posted Nov 5, 2007 22:04 UTC (Mon) by i3839 (guest, #31386) [Link]

Language is a way of communication. If people talk rubbish, it doesn't matter in what language
they do it. Sure, some languages are harder than others, so trivial mistakes are made more or
less often. But if on the whole it's a pile of rubbish, it stays a pile of rubbish. You can try
to put the blame on the language people use when spouting garbage, but I don't find that very
convincing.

Many good points

Posted Nov 5, 2007 11:09 UTC (Mon) by jonth (guest, #4008) [Link] (1 responses)

Rubbish. The Blackberry firmware is not entirely written in Java. The front-end may be, but
the modem is C. 

Many good points

Posted Nov 5, 2007 16:16 UTC (Mon) by jwb (guest, #15467) [Link]

So is the JVM.  What's your point?  The applications are written in Java, and that's the
important bit, because that's where the mid-level programmers meet the hostile inputs.

security based on whose definition ?

Posted Nov 4, 2007 11:52 UTC (Sun) by copsewood (subscriber, #199) [Link] (1 responses)

My limited understanding of unpatched qmail is that the modular architecture results in the
front end mail acceptance server not knowing that the backend mail delivery engine will find
the delivery address invalid, resulting in a bounce to a fake address in a spam. This might
have been considered acceptable MTA behaviour 10 years ago. The backscattering of spam is now
considered in the same light as operating any other unsecured promiscuous spam relay. The fact
that DJB doesn't classify this as a security bug combined with his source distribution policy
means that those installing qmail have to make sure they apply the appropriate patches before
installation, and we know that many inexperienced mail admins won't.

Offering a prize for anyone who finds a security hole based on the opinion of the author
strikes me as the kind of hubris which a more competent programmer would not display; the
assumption that something is perfect will always interfere with security if the definition of
the latter involves taking into consideration a changing operating environment and changing
requirements.

I am sure, in connection with the technical aspects of his approach to coding for correctness,
that we all have a lot to learn from DJB, but in this particular aspect of his behaviour I
think he could have done better.

security based on whose definition ?

Posted Nov 4, 2007 14:20 UTC (Sun) by ArbitraryConstant (guest, #42725) [Link]

Indeed... DJB insists on strict RFC compliance, but that allows stuff like backscatter spam.

It's bad enough that a large site can grow the queue beyond the size that qmail can handle,
resulting in dropped mail.

Daniel Bernstein: ten years of qmail security

Posted Nov 4, 2007 20:35 UTC (Sun) by ms (subscriber, #41272) [Link]

I find this a very interesting paper. I was hoping for some sort of insight as to what his
current programming language du jour is - I can't help but read it as a condemnation of C and
related languages, and as an endorsement of pure, referentially transparent functional
languages, but then again, I'm very biased on that front anyway.

Daniel Bernstein: ten years of qmail security

Posted Nov 4, 2007 23:16 UTC (Sun) by job (guest, #670) [Link] (6 responses)

If you look at the slides for the talk there are two interesting bits of information there.

  1. Bernstein raised the security bounty to $1000.
  2. qmail is now released to the public domain.

That last bit is extra interesting to all of us running his software, as the license is what leads to the strange build process and patch collecting so closely associated with administering qmail.

There is nothing official on the qmail page yet, but perhaps we might see it soon?

"public domain" software

Posted Nov 5, 2007 3:42 UTC (Mon) by dmarti (subscriber, #11625) [Link] (3 responses)

Details on "public domain" as a software license: Why the Public Domain Isn’t a License (PDF) by Lawrence Rosen.

"public domain" software

Posted Nov 5, 2007 12:24 UTC (Mon) by epa (subscriber, #39769) [Link] (1 responses)

Lawrence Rosen first says that you cannot release a work into the public domain, and then
seems to contradict himself by saying you can do just that by writing a statement 'I hereby
give it away to anyone who wants it for any purpose whatsoever.'  Surely if you can say that,
you can equally well say 'this work can be treated as if it were in the public domain'.  And
if it looks like a duck and quacks like a duck... a work which has no copyright restrictions
(either because they have expired with age, or been explicitly waived by the author) is indeed
in the public domain.

He makes a good point that promises are not enforceable (unlike contracts) and can be
withdrawn.  But if you accept that logic, then no free software licence can be relied on,
since they (almost) all claim to be licences and not contracts; there is no consideration you
pay in return for the right to copy the software.  I think there may be some confusion between
a promise of a gift (which can be withdrawn at any time) and a gift itself (which obviously
cannot; I can't give you a bicycle and then a week later steal it back with impunity).

He says, don't accept gifts of software assuming they are in the public domain.  Of course
not.  You need an explicit statement from the software's author saying that it is his express
wish that the software be treated as public domain.  If you have that, it should be
unambiguous enough even for lawyers to understand.

The FSF in <http://www.fsf.org/licensing/licenses/gpl-faq.html> say that it is possible to
disclaim copyright on a work and so place it in the public domain.  Presumably their legal
counsel has checked that page.  So you must decide which lawyer to believe.  For now I'm going
to side with common sense and assume that if djb or anyone else tells you he has released his
work into the public domain, you can take him at his word.

"public domain" software

Posted Nov 5, 2007 20:01 UTC (Mon) by charlieb (guest, #23340) [Link]

> For now I'm going to side with common sense and assume that
> if djb or anyone else tells you he has released his
> work into the public domain, you can take him at his word.

You could also read his views on exactly that subject:

http://cr.yp.to/publicdomain.html

FUD we can manage better without

Posted Nov 6, 2007 10:58 UTC (Tue) by copsewood (subscriber, #199) [Link]

Unless there are any legal precedents of relevance to the contrary, my own view (IANAL) is
that the idea that someone who has placed source code into the public domain can be sued for
damage caused by it is FUD without practical foundation. Anyone using public domain
software source code for a potentially damaging purpose could reasonably be expected to
have what it does examined by an expert in order to confirm its suitability before using it
for such an application. You might as well try to sue someone else for a published idea or
piece of research which you misapplied and which went wrong when you did so; this course of
action would also not get the litigant anywhere in the courts.

I can imagine an exception if source code for a trojan was placed in the public domain,
particularly if the source features making this program a trojan were obscure, and good
evidence existed that the author intentionally and/or maliciously included these hidden and
potentially damaging features. But I don't think applying a free software license to such code
would protect the author of it from similar litigation under these circumstances either, as
any disclaimers in this license would be considered moot.

Personally I don't think spreading the FUD that releasing well-intentioned software into the
public domain can make the author liable will attract any programmer or decision-maker to
apply free software licenses to code who otherwise wouldn't, though it might turn some people
off free software altogether.

Qmail in public domain

Posted Nov 6, 2007 0:53 UTC (Tue) by ncm (guest, #165) [Link]

If qmail is now in the public domain, that's good news: it means we can sue Bernstein for
qmail flooding our mailboxes with bounce messages, or for otherwise annoying us.

Seriously, the reason for not putting software into the public domain is that (as I understand
it) only a license gives you the power to make users of the software assume liability for
problems caused by running the code.  If you don't make the license to copy contingent on them
accepting liability, then people harmed by the software can come after *you*.  Of course they
might anyway, and if the person who copied the code has no money, a judge might allow it --
except of course *you* have no money either, right?  If you *do* have money, you're supposed
to hire a fixer to arrange that they sue somebody else instead.

(I am not a lawyer.  The above might just be superstition.)

Daniel Bernstein: ten years of qmail security

Posted Nov 6, 2007 18:40 UTC (Tue) by rickmoen (subscriber, #6943) [Link]

(The current paper, once again, compares qmail only with sendmail. How quaint.)

The licensing pronouncement is of course welcome news for qmail users, and Russ Nelson & company have announced that netqmail 1.06 will be produced soon, to add (relative to the aging 1.05 initial release) a pair of much-needed patches. My commendations to that group, as always.

Dan is of course quite correct that it's a settled principle of law that property can be abandoned. The problem that practice can create is that of predicting the resulting effect in various legal jurisdictions. Can it be claimed, and title assumed, by a subsequent "finder"? Can the original owner reclaim it? The original owner's heirs? Might it not, in some jurisdictions, become regarded as the property of the state (as is true in some places for abandoned automobiles, ships, and aircraft)?

Dan, in his words, would not be silly enough to go in front of a judge to reassert his title after explicitly abandoning it, but can he guarantee that isn't true of his heirs?

Different places have differing abandoned property and escheat laws: The effect of a public domain declaration may differ widely between countries.

It's an interesting and subtle area of law -- which is generally precisely what one doesn't want to be true of one's software licensing.

(My own modest compendium of people's writings on the subject: "Public Domain" on http://linuxmafia.com/kb/Licensing_and_Law/)

Rick Moen
rick@linuxmafia.com

"Extreme sandboxing"

Posted Nov 5, 2007 16:44 UTC (Mon) by charlieb (guest, #23340) [Link] (1 responses)

In the section "5.2 Isolating single-source transformations" Dan shows how to safely
sandbox(*) a program which does a data transformation (jpegtopnm for example) so that it can
only perform a data transformation. He says: "Existing UNIX tools make this sandbox tolerably
easy for root to create". Which is true. What he doesn't say is that existing UNIX tools don't
allow non-root accounts to create such a safe space.  That greatly limits the usefulness of
those particular techniques - but also could imply a program of future OS development. Why
shouldn't an unprivileged process be able to chdir and chroot to an empty directory?

*) The procedure might be flawed, however. I notice that step one sets RLIMIT_NOFILE to zero.
The OpenGroup says that setting zero will produce undefined behaviour
(http://www.opengroup.org/onlinepubs/009695399/functions/g...).

"Extreme sandboxing"

Posted Nov 5, 2007 17:22 UTC (Mon) by i3839 (guest, #31386) [Link]

Linux has seccomp, which a process can enable via prctl(), but hardly anyone knows about it.
Perhaps it's too secure, as it only allows read/write/exit/sigreturn, and disallows
everything else.


Copyright © 2007, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds