From: Alan Cox <alan-AT-lxorguk.ukuu.org.uk>
Subject: Will the real Linuxgazette please stand up
Date: Tue, 02 Dec 2003 20:48:30 +0000
John Fisk founded Linux Gazette in 1995. He's not visibly part of either
side of the argument, which raises the question: who did he give it to?
Well, I had a dig both in the old copies I have and in my email. In 1997
LGEI (the Italian translation) ran this interview, the contents of which
I've verified are untampered from my copies (and you can too using
Most importantly, it says the following (again, remember this was back in
1997, before the argument blew up):
Francesco: When and why did SSC decide to publish Linux Gazette in the
current version? Originally, LG was edited only as an extra-curricular
activity by John M. Fisk.
Margie: During the summer of 1996, John Fisk decided he no longer had
the time to keep Linux Gazette up in the fashion it deserved. LG had
become very popular, and readers wanted it to come out on a regular
monthly basis. Between school and work, John just didn't have time to do
this, so he put out feelers looking for someone to take it over. We
responded and he accepted us as the right people to continue LG.
Now, I don't like what SSC have done to Linux Gazette, but from the 1997
discussion the question of ownership seems not to be in dispute, unless
John has anything to add.
Mike Orr and friends may be the writers, and their site may be the true
progression of the original magazine, but that doesn't seem to alter the
fact that SSC obtained LG from John in 1997.
Comments (3 posted)
From: dlang <dlang-AT-invendra.net>
Subject: interesting security article
Date: Tue, 2 Dec 2003 21:07:53 -0800 (PST)
With the Debian server compromise fresh in mind I would like to go on a
minor rant about people's use of ssh.
All too frequently people use ssh and consider themselves completely
secure (as an example, look at the comments on the latest story about the
Debian server compromise and how people are reacting to the password
sniffing with 'this isn't possible unless there is a hole in ssh').
Ssh does not ensure security.
Ssh doesn't even tell you who is connecting to your server.
That's right, ssh doesn't tell you who is connecting to your server, it
tells you who the remote machine wants to tell you is connecting to your
server. This is not the same thing.
Ssh can do three things.
1. Prevent people from sniffing/hijacking the communications session
2. Only allow connections from a machine that knows the secret ssh key
3. Only allow connections from specific IP addresses
However, the only thing it does to identify a user is ask for a normal
password (if it's even configured to do that; frequently people say that
certificates are in use so they don't even need the password). Yes, if
the remote host's secret key is configured to require a pass-phrase, you
can assume that someone typed it in, but you have no idea whether that
person is the one you intended to grant access to your server, or anyone
else who has had access to the remote host. Anyone with root access on
the remote host can sniff the pass-phrase and then use the certificate as
that user.
No matter what encryption you use, the prompt and pass-phrase need to be
in plain text by the time they reach the end-user; if you have access to
the raw keystrokes and screen IO, you can capture them. (Before you say
that those should be protected as well, go read Microsoft's proposals to
do exactly that for their trusted-computing work; the implications are
scary, and you are still vulnerable if there are bugs in the system.)
The ssh, ssl, and tls algorithms all have ways to 'verify a user' based
on the certificate that they have, but this is only valid if you can trust
the remote machine.
Ssh is a valuable tool (the importance of preventing the communications
from being intercepted is pretty high), but it is far from being the
solution to all problems.
If you really care about who is accessing your systems you need to use
something that isn't vulnerable to a compromised remote host. You can't
prevent a compromised remote host from letting a legitimate user start a
session and then hijacking it, but you can make sure that once that
session is terminated the remote attacker cannot get back in to your
systems.
In many cases it may actually be safer to use telnet with good user
authentication than to use ssh with poor user authentication.
As surprising as this statement is, all it takes to make it true is for
the probability that you are logging in from a compromised host to be
higher than the probability that there is a person in the middle waiting
to hijack your session (this assumes that the actual text of the session
is not valuable, so that someone who reads a transcript of it 5 minutes
later doesn't gain anything).
How do you do this?
It's simple: challenge-response authentication of some sort.
There are a lot of tools out there to do this, but the basic approach is
to have the server send some challenge and the user compute some response
and send it back. The person who has compromised the remote server can
gather this information, but it's useless to them unless the server issues
the same challenge again.
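The basic approach above can be sketched in a few lines of Python. This is a minimal illustration, not any specific deployed protocol; the shared secret and the use of HMAC-SHA256 are my own assumptions:

```python
import hashlib
import hmac
import os

# Hypothetical shared secret, known to the server and to the user's
# token or client software -- never sent over the wire.
SECRET = b"example-shared-secret"

def make_challenge() -> bytes:
    """Server side: issue a fresh random challenge (nonce)."""
    return os.urandom(16)

def compute_response(secret: bytes, challenge: bytes) -> str:
    """Client side: derive the response from the secret and challenge."""
    return hmac.new(secret, challenge, hashlib.sha256).hexdigest()

def verify(secret: bytes, challenge: bytes, response: str) -> bool:
    """Server side: recompute the expected response and compare."""
    expected = compute_response(secret, challenge)
    return hmac.compare_digest(expected, response)

challenge = make_challenge()
response = compute_response(SECRET, challenge)
assert verify(SECRET, challenge, response)
# An eavesdropper who captured that response gains nothing: it fails
# against the next (fresh) challenge.
assert not verify(SECRET, make_challenge(), response)
```

The point is exactly the one made above: what crosses the compromised host is the response, which is worthless unless the server issues the same challenge twice.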
This challenge may or may not be explicitly shown to the user.
One example would be a one-time password sheet, the user knows to use the
next one on the list and crosses it out, the server doesn't need to say
'use password 63'.
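Such a sheet can be built from a hash chain, in the style of S/Key one-time passwords. A minimal sketch (the seed value and the choice of SHA-256 are illustrative assumptions):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def make_chain(seed: bytes, n: int) -> list:
    """Build the password sheet: seed, h(seed), h(h(seed)), ..."""
    chain = [seed]
    for _ in range(n):
        chain.append(h(chain[-1]))
    return chain

# Setup with a hypothetical seed: the user keeps the printed list,
# the server stores ONLY the last entry.
chain = make_chain(b"example-seed", 5)
server_state = chain[-1]

def verify_otp(state: bytes, password: bytes) -> bool:
    """A password is valid if hashing it yields the stored value."""
    return h(password) == state

# The user crosses entries off the list from the end, one per login:
pw = chain[-2]
assert verify_otp(server_state, pw)
server_state = pw            # server now expects the next entry down
assert not verify_otp(server_state, pw)   # replaying the same one fails
```

The server never holds anything an attacker could log in with, and each password captured on a compromised host is already crossed out.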
Another would be sKey tokens: they have a clock synced to the server and
show a different password every minute, so the 'challenge' half of this
is the time.
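A time-synced token of this sort can be sketched as follows. This is not the actual sKey algorithm; the shared seed, the HMAC-SHA256 construction, and the 6-digit truncation are illustrative assumptions:

```python
import hashlib
import hmac
import struct

def token_code(secret: bytes, t: int, step: int = 60) -> str:
    """Six-digit code the token would display for the minute containing t."""
    counter = t // step                       # the implicit "challenge"
    mac = hmac.new(secret, struct.pack(">Q", counter),
                   hashlib.sha256).digest()
    # Truncate the MAC to a short code a human can type in.
    return f"{int.from_bytes(mac[:4], 'big') % 1_000_000:06d}"

SECRET = b"example-token-secret"  # hypothetical shared seed

# Any two times inside the same minute give the same code, so the server
# can recompute the expected code from its own clock:
assert token_code(SECRET, 61) == token_code(SECRET, 119)
# Once the minute rolls over, the counter (and hence the code) changes.
```

A captured code is only useful within its one-minute window, after which the 'challenge' has moved on.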
As an example with an explicit challenge, there is the snk-004 protocol,
implemented in software and in hardware tokens sold by PassGo in their
Defender hand-held token. When using this, the server sends a random
number to the user, who types it into a token; the token DES-encrypts the
number and displays the result, which the user types in as the password.
Another option that is becoming possible is to use a smart-card to do
this for you, so that you can skip the steps of typing the challenge and
response into equipment. For it to be secure you still have the
challenge-response going on under the covers. In some cases the smart
cards implement certificate authentication, which would seem to put them
at the same risk as the remote servers, but since the smart-card is not
used for anything else the probability of it being compromised is MUCH
lower.
Which option you choose doesn't matter much (they all have advantages and
disadvantages); the important thing is to use one of them and to keep the
entire security picture in mind as you are doing your design.
One thing to note is that biometric identification devices (fingerprint
scanners, etc.) do not always meet these criteria. If you have an eye
scanner that is just a camera and a bunch of software, then it is not
safe: an attacker can capture the output of the camera and feed it back
to the program later, when it thinks it's reading from the camera. You
need to have your biometric reader actually participate in the
authentication, like a smart card. It must also be self-contained. Even
depending on data files on the system's hard drive (to store fingerprints
to compare against, for example) puts you at risk, because an attacker
could shuffle the files around so that their fingerprint becomes the
valid one for every user.
Comments (11 posted)
From: Przemek Klosowski <przemek-AT-tux.org>
Subject: SCO's medieval tendencies
Date: Mon, 1 Dec 2003 00:57:49 -0500
Slashdot recently published more info on SCO communications related to
their Linux lawsuit. I wanted to share some thoughts with you on that.
I have always maintained that there is an analogy between software
technology and scientific knowledge. Just as science is the basis for our
civilization, software underlies the expanding digital sphere of our
lives. The development model of both science and software can vary
between proprietary and public, and society has to make a policy choice
about supporting the right mix.
Even though scientific and technological knowledge started as
proprietary, we as society made a historical choice, dating back to
the age of Enlightenment, to develop knowledge in a collegial, public
fashion. This model, of course, works rather well, and no one
seriously argues that it should be rolled back to some kind of
proprietary science development.
Similarly, I argue that software, whose importance tracks the growing
influence of computing on our lives, must be developed in a public
model; the Free Software is currently the closest approach, which
eventually will be augmented by some sort of peer-reviewed public
commitment, just as is the case for scientific research.
The analogy of software and science is not perfect; but I argue that,
firstly, the negative effects of closed software are almost identical to
the negative effects of closed knowledge: it forces duplicate work,
creates artificial monopolies, and slows down progress. Secondly, because
software _IS_ the infrastructure of the digital age, there is the issue
of public interest, and the development model must reflect it.
In this context, the strategy of SCO in their Linux lawsuit is
especially retrograde. Their position, as laid out in their recently
issued letters, seems to counter the very idea of a public stake in
technical
knowledge. It occurred to me to modify their argument, substituting
'human knowledge' for 'software'. Here's what we'd get:
As you may know, the development process for public scientific
knowledge has differed substantially from the development process
for other enterprise scientific research. Commercial research is
built by carefully selected and screened teams of scientists
working to build proprietary scientific results. The process is
designed to monitor the security and ownership of intellectual
property rights associated with the knowledge.
By contrast, much of human scientific knowledge has been built
from contributions by numerous unrelated and unknown scientists,
each contributing a small scientific discovery. There is no
mechanism inherent in the public science development process to
assure that intellectual property rights, confidentiality or
security are protected. The public science process does not
prevent inclusion of knowledge that has been stolen outright, or
developed by improper use of proprietary methods and concepts.
Put this way, their argument is nonsensical, and would find no support
from anyone even slightly familiar with the scientific process, which
arguably forms the basis of our civilization.
Przemek Klosowski, Ph.D. <email@example.com>
Comments (none posted)
Page editor: Jonathan Corbet