From: Bruce Schneier <schneier@counterpane.com>
To: crypto-gram@chaparraltree.com
Subject: CRYPTO-GRAM, August 15, 2003
Date: Fri, 15 Aug 2003 00:47:37 -0500
CRYPTO-GRAM
August 15, 2003
by Bruce Schneier
Founder and CTO
Counterpane Internet Security, Inc.
schneier@counterpane.com
<http://www.counterpane.com>
A free monthly newsletter providing summaries, analyses, insights, and
commentaries on computer security and cryptography.
Back issues are available at
<http://www.schneier.com/crypto-gram.html>. To subscribe, visit
<http://www.schneier.com/crypto-gram-faq.html> or send a blank message
to crypto-gram-subscribe@chaparraltree.com.
Copyright (c) 2003 by Counterpane Internet Security, Inc.
** *** ***** ******* *********** *************
In this issue:
New Book: Beyond Fear
The Doghouse: Top Secret Crypto
News
Counterpane News
Security Notes from All Over: Photo-ID Verification
Flying on Someone Else's Airplane Ticket
More Airline Insecurities
Hidden Text in Computer Documents
Comments from Readers
** *** ***** ******* *********** *************
New Book: Beyond Fear
I have a new book on security:
Beyond Fear
Thinking Sensibly About Security in an Uncertain World
This isn't a book about computer security; it's a book about security
in general. In it I cover the entire spectrum of security, from the
personal issues we face at home and in the office to the broad public
policies implemented as part of the worldwide war on terrorism. With
examples and anecdotes from history, sports, natural science, movies,
and the evening news, I explain how security really works, how it
fails, and how to make it effective.
If I can name one overarching goal of the book, it's to explain how we
all can make ourselves safer by thinking of security not in absolutes,
but in terms of trade-offs -- the inevitable expenses, inconveniences,
and diminished freedoms we accept (or have forced on us) in the name of
enhanced security. Only after we accept the inevitability of
trade-offs and learn to negotiate accordingly will we have a truly
realistic sense of how to deal with risks and threats.
This is a book for everyone. I believe that security, as a topic, is
something we all can understand. And even more importantly, I believe
that the subject is just too critical, too integral a part of our
everyday lives, to be left exclusively in the hands of experts. By
demystifying security, I hope to encourage all of us to think more
sensibly about the topic, to contribute to what should be an open and
informed public discussion of security, and to participate vocally in
ongoing security negotiations in our civic, professional, and personal
lives.
I am very pleased with this book. I started writing it in June 2002,
and continued writing it through spring 2003. It has been a lot of
work, and I think it's paid off. It's a good book.
Beyond Fear lists for $25, and the publisher is Copernicus Books. It's
on Amazon at a 30% discount, with free shipping if you order something
else as well. (And they have a really good package deal with my
previous book, Practical Cryptography.)
And finally, I have a favor to ask. I'd like to see if I can subvert
the Amazon bestseller system and get to #1. My previous big book,
"Secrets and Lies," made it to #4. (Harry Potter was #1, #2, #3, and
#5.) If everyone who plans on buying this book on Amazon waits until
12:15 PM Pacific time (that's 2:15 PM Central Time, 3:15 PM Eastern
time, 8:15 PM UK Time, and 9:15 PM Western European time) on Friday,
August 15, and everyone does it together, I might make #1.  Don't worry if
you can't do this, but I would appreciate it if you can. Thanks.
Beyond Fear home page:
<http://www.schneier.com/bf.html>
Amazon's page:
<http://www.amazon.com/exec/obidos/tg/detail/-/0387026207/counterpane/>
Publisher's page:
<http://www.copernicusbooks.com/detail.tpl?cart=3268117965811&ISBN=03870
26207> or <http://tinyurl.com/k386>
** *** ***** ******* *********** *************
The Doghouse: Top Secret Crypto
This is your pretty standard doghouse crypto product: "true one time
pad," "the most powerful encryption program in the world," "RSA key
size from 480 to 16,384 bits," (why would anyone use a 480-bit RSA
key?), that sort of thing. But here's the funny part: this company
cites fiction writer Tom Clancy as an authoritative crypto
expert. Here are two quotes from their help file:
"There are many encryption programs that use the 56-bit DES cipher as
their conventional encryption algorithm. This has been broken. In
fact, the U.S. government has banned its use by government agencies
because it does not consider it secure any more. Most of the other
encryption programs use conventional encryption algorithms that have 80
or 128 bits keys, such as PGP(tm), which uses the 128 bit IDEAL
cipher. These may not be secure either. See Rainbow Six by Tom Clancy
for more information."
"Rainbow Six by Tom Clancy--(c) 1998--paperback edition pages 436 and
437. Here Tom Clancy writes that the NSA, with the application of
quantum theory to communications security can decipher codes with 128
bit keys, and it appears from his writing that it hardly takes any time
at all. The conventional key used by PGP(tm) is only 128 bits. Do you
suppose the NSA can break it? Tom Clancy is noted for his accuracy in
writing about technology, so I would not be a bit surprised."
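For perspective, here's a back-of-the-envelope sketch of what a brute-force search of a 128-bit keyspace actually requires. The attacker figures (a trillion keys per second per machine, across a billion machines) are deliberately generous assumptions for illustration, not anyone's real capabilities:

# Back-of-the-envelope: time to brute-force a 128-bit key.
# The attacker figures below are deliberately generous assumptions,
# not anyone's real capabilities.
keyspace = 2 ** 128                  # possible 128-bit keys
keys_per_second_per_machine = 1e12   # assume a trillion keys/sec per machine
machines = 1e9                       # assume a billion such machines
seconds_per_year = 365.25 * 24 * 3600

years = keyspace / (keys_per_second_per_machine * machines * seconds_per_year)
print(f"Worst-case search time: {years:.2e} years")
# Prints about 1.08e+10 years, roughly ten billion years, even with
# these absurdly optimistic assumptions.

Quantum computers may someday change the economics of factoring, but brute-forcing a 128-bit symmetric key is not something anyone does in "hardly any time at all."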
There's a lot more in the help files, if you're looking for some good
laughs.
The company:
<http://www.topsecretcrypto.com/>
The help files:
<http://www.topsecretcrypto.com/files/TscgHelpFiles.zip>
** *** ***** ******* *********** *************
News
Interesting and amusing story about a handwritten signature in Delaware.
<http://www.delawareonline.com/newsjournal/local/2003/07/17manssignature
of.html> or <http://tinyurl.com/homg>
Reconstructing shredded documents. Note that they're not just talking
about those cheap shredders that cut documents into thin strips; those
have been reconstructed manually for decades now. The article is
saying that documents that have gone through cross-cut shredders can --
at least sometimes -- be reconstructed.
<http://www.sanmateocountytimes.com/Stories/0%2C1413%2C87~11271~1523119%
2C00.html> or <http://tinyurl.com/k388>
Interesting story about the tactics of electronic credit card thieves:
<http://www.securityfocus.com/news/6353>
A good article on John Gilmore's legal battle for the right to fly
anonymously:
<http://www.reason.com/0308/fe.bd.suspected.shtml>
And John Gilmore's story of being ejected from a plane for wearing a
button reading "Suspected Terrorist":
<http://www.politechbot.com/p-04973.html>
MS Windows passwords can be cracked in an average of 13.6
seconds. Assuming your password consists of just letters and numbers,
that is. But my guess is that almost everyone falls into that category.
<http://news.com.com/2100-1009_3-5053063.html>
<http://lasecpc13.epfl.ch/ntcrack/>
Identity thefts in the U.S. have increased by 70% over the last year,
but only about 1 in 700 thieves ever get caught.
<http://www.vnunet.com/News/1142517>
The governor of Wisconsin has a secure hotline to national homeland
security officials. And he gets telemarketing calls on it...
<http://www.jsonline.com/news/state/apr03/135674.asp>
Really interesting paper on security patches and their installation
rate. Turns out that lots and lots of system administrators don't
install security patches.
<http://www.rtfm.com/upgrade.html>
RFID implants for humans. I love the "LoJack for people" quote.
<http://www.conspiracyplanet.com/channel.cfm?channelid=74&contentid=900&
page=1> or <http://tinyurl.com/k38a>
Overhyping security threats is damaging:
<http://www.wired.com/news/infostructure/0,1377,59556,00.html>
The Internet epidemic du jour is the Blaster Worm. I'm not writing
about it because it isn't very interesting; it's just more of the same
thing we've been seeing for years. But there's one new idea. One
variant of the worm downloads a file whose name contains an anatomical
term that many spam filters block.  I wonder how many emails about the
worm never reach their recipients because the filename is given.  VERY
clever.
<http://www.counterpane.com/alert-v20030811-001.html>
One of the electronic voting machines has been analyzed by
computer-security experts, and the results aren't very promising:
<http://www.msnbc.com/news/943558.asp?0cv=TA00&cp1=1>
<http://www.avirubin.com/vote.pdf>
<http://www.scoop.co.nz/mason/stories/HL0307/S00198.htm>
<http://apnews.excite.com/article/20030726/D7SGTI780.html>
But no one cares about voting machine security:
<http://newsforge.com/article.pl?sid=03/07/25/1349255&tid=4>
A good article on how to rig an election with these sorts of machines:
<http://www.truthout.org/docs_03/voting.shtml>
Security and fear:
<http://www.csoonline.com/read/070103/fear.html>
Fun with credit card signatures and verification:
<http://www.zug.com/pranks/credit/index.html>
Good article on the Communications Assistance for Law Enforcement Act
(CALEA). Among other things, the author asserts that CALEA terminals
have been hacked regularly.
<http://www.pbs.org/cringely/pulpit/pulpit20030710.html>
Very interesting article on ATM fraud, in all its varieties:
<http://www.iht.com/articles/105087.html>
Once again, the courts have ordered the Department of the Interior to get
its computers off the Internet if it can't protect the privacy of
American Indian data.
<http://www.gcn.com/vol1_no1/security/22935-1.html>
<http://www.dcd.uscourts.gov/96-1285at.pdf>
<http://www.dcd.uscourts.gov/96-1285as.pdf>
I wrote about this exact problem a year and a half ago:
<http://www.counterpane.com/crypto-gram-0112.html#2>
Dealing with security regulations:
<http://www.csoonline.com/read/070103/chaos.html>
Police bees. A fascinating article about how bees cope with security
problems:
<http://www.nature.com/nsu/nsu_pf/020422/020422-16.html>
Will anonymous mail become a thing of history?
<http://computerworld.com/newsletter/0,4902,83804,00.html?nlid=SEC2>
Long, but very well written, article about identity theft:
<http://www.washingtonpost.com/wp-dyn/articles/A25358-2003Aug6.html>
SlashDot discussion of countermeasures:
<http://ask.slashdot.org/askslashdot/03/08/12/2113218.shtml?tid=126&tid=
172> or <http://tinyurl.com/k38a>
** *** ***** ******* *********** *************
Counterpane News
Counterpane has a new CEO. Paul Stich has been promoted from COO to
CEO. Former CEO Tom Rowley remains as Chairman of the Board.
<http://www.counterpane.com/pr-20030813.html>
Schneier is speaking at the International Design Conference in Aspen on
August 21st.
<www.idca.org>
Password Safe is available for the PocketPC:
<https://sourceforge.net/project/showfiles.php?group_id=41019&release_id
=172730> or <http://tinyurl.com/k38g>
And Release 1.92c for Windows is available for download. This is a
maintenance release, fixing a few minor annoyances.
<https://sourceforge.net/project/showfiles.php?group_id=41019&release_id
=177038> or <http://tinyurl.com/k38k>
** *** ***** ******* *********** *************
Security Notes from All Over: Photo-ID Verification
A reader sent in this conversation he overheard at a corporate security
desk one morning:
Employee: I have lost my photo-ID card, can I get a day pass please?
Security Guard: Certainly, what is your serial number?
Employee: 123456
[Security guard pulls up the details on his computer, which includes a
photograph of the employee.]
Security Guard: Do you have a driver's license or another piece of
identification which has your picture on it?
Employee: Why would you need that?
Security Guard: To match against our records.
Employee: A picture of my face?
Security Guard: Yes
Employee: This is my face -- I am wearing it on my head.
Security Guard: I need another piece of ID with a picture on it to
compare against this one.
This is a great story, because it illustrates how completely clueless
security guards can be about how security really works. The point of
the photo ID is to allow the guard to match a face with an
authorization. A photo ID that is only issued to employees
accomplishes that. The database does the same thing: it contains both
the employee's photo and his authorization. But the guard doesn't
understand that; all he knows is that he needs to look at a piece of
plastic with the person's picture.
** *** ***** ******* *********** *************
Flying on Someone Else's Airplane Ticket
The photo-ID requirement on airplanes was established in 1996 by a
still-secret FAA order. It was a reaction to TWA flight 800, which
exploded shortly after takeoff, killing all 230 on board. This was an
accident -- after 18 months the FBI concluded that there was no
evidence of a bomb or missile -- but the ID requirement was established
anyway. The idea is that checking IDs increases security by making
sure that the person flying is the person who bought the ticket. After
9/11, the government decided that checking IDs multiple times increased
security even more, especially since there is now a "watch list" of
suspicious people to check the names against.
It doesn't work. It's actually easy to fly on someone else's
ticket. Here's how: First, have an upstanding citizen buy an
e-ticket. (This also works if you steal someone's identity or credit
card.) Second, on the morning of the flight print the boarding pass at
home. (Most airlines now offer this convenient feature.) Third,
change the name on the e-ticket boarding pass you print out at home to
your own. (You can do this with any half-way decent graphics software
package.) Fourth, go to the airport, go through security, and get on
the airplane.
This is a classic example of a security failure because of an
interaction between two different systems. There's a system that
prints out boarding passes in the name of the person who is in the
computer. There's another system that compares the name on the
boarding pass to the name on the photo ID. But there's no system to
make sure that the name on the photo ID matches the name in the computer.
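To make the failure concrete, here's a minimal sketch of the three comparisons involved; the names, fields, and records are hypothetical illustrations, not any airline's actual systems:

# Minimal sketch of the missing cross-check described above.
# All names, fields, and records are hypothetical illustrations.
reservation = {"passenger": "Upstanding Citizen"}         # name in the airline's computer

boarding_pass = {"passenger": reservation["passenger"]}   # check 1: pass printed from the computer record
boarding_pass["passenger"] = "Actual Traveler"            # ...then altered at home with graphics software

photo_id = {"name": "Actual Traveler"}                    # the traveler's own, legitimate ID

# Check 2: the checkpoint compares the boarding pass against the photo ID.
print(boarding_pass["passenger"] == photo_id["name"])     # True: the traveler gets through

# Missing check: nobody compares the photo ID against the reservation.
print(photo_id["name"] == reservation["passenger"])       # False: this is the check nobody performs

The first two checks pass, so the traveler boards; the comparison that would catch the altered pass is simply never made.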
In terms of security, this is no big deal; identifying passengers
doesn't provide much security in the first place.  All of the 9/11 terrorists presented
photo-IDs, many in their real names. Others had legitimate driver's
licenses in fake names that they bought from unscrupulous people
working in motor vehicle offices.
The photo-ID requirement is presented as a security measure, but
business is the real reason. Airlines didn't resist it, even though
they resisted every other security measure of the past few decades,
because it solved a business problem: the reselling of nonrefundable
tickets. Such tickets used to be advertised regularly in newspaper
classifieds. An ad might read: "Round trip, Boston to Chicago,
11/22-11/30, female, $50." Since the airlines didn't check IDs and
could observe gender, any female could buy the ticket and fly the
route. Now that won't work. Under the guise of helping prevent
terrorism, the airlines solved a business problem of their own and
passed the blame for the solution on to FAA security requirements.
But the system fails. I can fly on your ticket. You can fly on my
ticket. We don't even have to be the same gender.
** *** ***** ******* *********** *************
More Airline Insecurities
Number one. By now everyone has seen those large CTX baggage scanning
machines. At about $2 million each, they're very good at finding
explosives in luggage.  Unfortunately, they're also very good at
finding other things -- the false positive rate is very
high.  It turns out that peanut butter looks a whole lot like plastic
explosives (C-4, Semtex, etc.).  Smuggling a bomb on board an airplane
is as easy as taking a jar of peanut butter, breaking it in your
luggage so that it smears around everything, and then slipping a bomb
into the suitcase.
Number two I wrote about in my book, Beyond Fear: "You can even make a
knife on board the plane. Buy some steel epoxy glue at a local
hardware store. It comes in two tubes: a base with steel dust and a
hardener. Make a knifelike mold by folding a piece of cardboard in
half. Then mix equal parts from each tube and form into a knife shape,
using a metal fork from your first-class dinner service (or a metal
spoon you carry aboard) for the handle. Fifteen minutes later you've
got a reasonably sharp, very pointy, black steel knife."
The point here is to realize that security screening will never be 100%
effective. There will always be ways to sneak guns, knives, and bombs
through security checkpoints. Screening is an effective component of
a security system, but it should never be the sole countermeasure in
the system.
"Confessions of a Baggage Screener":
<http://www.wired.com/wired/archive/11.09/bagscan.html>
** *** ***** ******* *********** *************
Hidden Text in Computer Documents
In the beginning, computer text files were filled with weird formatting
commands. (Anyone remember WordStar's dot commands?) Then we had
WYSIWYG: What You See Is What You Get. Or, more accurately, what you
see on the screen is what you get on the printer.  Before WYSIWYG,
what you saw on the screen was what was actually in the digital
file.  With WYSIWYG, what you saw on the screen was no longer everything
that was in the digital file; formatting commands remained hidden from
view, and the screen looked like the printed page.
WYSIWYG was a huge improvement, because it enabled writers to more
easily format documents and see the results of that formatting. But it
also brought with it a new security vulnerability: the leakage of
information not shown on the screen (or on the printed document). Most
of the time it's completely benign formatting information, but
sometimes it's actual text. And because the user sees what the printed
page looks like, he never even knows that this text is in the
file. But someone who is even a little bit clever can recover the
text, with embarrassing or even damaging results.
Three examples:
Last month, Alastair Campbell, Tony Blair's Director of Communications
and Strategy, was in the hot seat in British Parliament hearings
explaining what roles four of his employees played in the creation of a
plagiarized dossier on Iraq that the UK government published in
February 2003. The names of these four employees were found hidden
inside of a Microsoft Word file of the dossier, which was posted on the
10 Downing Street Web site for the press. The "dodgy dossier," as it
became known in the British press, raised serious questions about the
quality of British intelligence before the second Iraq war.
Last year, during the manhunt for the DC sniper, the sniper left a
letter for the police that included specific names and telephone
numbers.  Perhaps in order to persuade the panicking public that the
police were in fact doing something, they allowed the letter to be
published -- in redacted form -- on the Washington Post's Web
site. Unfortunately, they implemented the redactions by the completely
pointless method of placing black rectangles over the sensitive text in
the PDF. A simple script was able to remove these boxes and recover
the full PDF.
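For the curious, here's a minimal sketch of why that kind of redaction fails: the black rectangles are just additional graphics drawn on top of the page, while the text objects underneath remain in the file, so any PDF text extractor returns them. The pypdf library and the file name below are my own illustrative choices, not the actual script that was used:

# Text "hidden" under black rectangles is still in the PDF's text objects.
# Illustration only; requires the open-source pypdf library.
from pypdf import PdfReader

reader = PdfReader("redacted_letter.pdf")   # hypothetical file name
for number, page in enumerate(reader.pages, start=1):
    print(f"--- page {number} ---")
    print(page.extract_text())   # returns the text regardless of overlaid graphics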
And three years ago in Crypto-Gram, I told the story of a CIA document
that the New York Times redacted and posted as a PDF on its Web
site. The document concerned an old Iranian plot, and contained the
names of the conspirators. The New York Times redacted the document in
the same reversible way that the Washington Post did.
So much for examples. How pervasive is this problem? In a recent
research paper, S.D. Byers went out on the Internet to see what sorts
of hidden information he could find. He concentrated on Microsoft
Word, because Word documents are notorious for containing private
information that people would sometimes rather not share. This
information includes the names of people who wrote or edited the document (as
Blair's government discovered), information about the computers and
networks and printers involved in the document, text that had been
deleted from the document at some prior time, and in some cases text
from completely unrelated documents.
Byers collected 100,000 MS Word documents, at random, from the Web. He
built three scripts to look for hidden text, and found it in all
documents. Most of it was uninteresting -- the name of the author --
but sometimes it was very interesting. His conclusion was that this
problem is pervasive.
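As an aside, extracting this kind of hidden text takes very little cleverness. Here's a minimal sketch of the general technique (my own illustration, not Byers's scripts): the binary .doc format stores text as runs of ASCII or UTF-16, so simply pulling printable strings out of the raw bytes turns up author names, file paths, and deleted text that never appear on screen.

# Minimal sketch: dump printable ASCII and UTF-16 strings from a binary
# Word file.  Illustration only; not Byers's actual scripts.
import re
import sys

def strings(path, min_len=6):
    with open(path, "rb") as f:
        data = f.read()
    # Runs of printable ASCII characters.
    for m in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data):
        yield m.group().decode("ascii")
    # Runs of printable characters stored as UTF-16LE (common in .doc files).
    for m in re.finditer(rb"(?:[\x20-\x7e]\x00){%d,}" % min_len, data):
        yield m.group().decode("utf-16-le")

if __name__ == "__main__":
    for s in strings(sys.argv[1]):
        print(s)   # author names, printer paths, revision fragments, etc.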
MS Word was the subject of Byers's paper, but other data files can leak
private information: Excel, PowerPoint, PDF, PostScript, etc. There's
no excuse for the companies that own those formats not to create a
program that scrubs hidden information from these files. And certainly
there's a business opportunity for some third party to create such a
scrubber program, but they should be outside the U.S., because it might
be a violation of the DMCA to do it. Microsoft's closed proprietary
file formats make it harder to write such a scrubber, and unless
Microsoft makes some additional changes in its software (e.g. usage and
default values), scrubbers will remain an imperfect solution.
Oh, and the press uses techniques like this to unredact stuff all the
time. I believe they don't mention it much because they're afraid
they'll lose access to all that leaked information.
Byers's research paper:
<http://www.user-agent.org/word_docs.pdf>
Tony Blair bitten by inadvertent info left in MS Word files:
<http://www.computerbytesman.com/privacy/blair.htm>
The DC sniper letter:
<http://www.planetpdf.com/mainpage.asp?webpageid=2434>
DC sniper letter in redacted form:
<http://www.user-agent.org/washpost_sniperletter.pdf>
Same letter, unredacted:
<http://www.user-agent.org/washpost_unredacted.pdf>
The CIA and a redacted PDF file:
<http://www.counterpane.com/crypto-gram-0007.html#1>
** *** ***** ******* *********** *************
Comments from Readers
From: Elliotte Rusty Harold <elharo@metalab.unc.edu>
Subject: How to Fight
In your recent Cryptogram, you say: "Second, naming and shaming doesn't
work. Just as it doesn't make sense to negotiate with a clerk, it
doesn't make sense to insult him."
I have to disagree. Sometimes it does make sense to argue with or
occasionally even insult a clerk. Consider the case of airlines. In
particular, consider the counter clerks. For years, they have
routinely and repeatedly ignored and violated airline rules on excess
baggage and baggage weight. Why? Simply because these workers know
from experience that if they enforce the rules, the customer is very
likely to complain, become irate, call for a manager, slow down their
line, possibly yell at them, possibly insult them, and generally make
their day unpleasant. Multiply this by the dozens of customers they
see each day with too much baggage, and they simply stop enforcing the
rules.
How effective this is is all relative. It depends on how irate
customers get, how many customers become irate, who the guards are, and
how real the security measure is. I doubt becoming irate would have
much effect on the guards at the metal detector you pass through before
entering the concourse. There probably wouldn't be enough irate
customers to change the behavior of a pharmacist, though the loss of
business for an owner-operated pharmacist might be significant enough
to make a personal negotiation effective. (Yet another reason to
support family pharmacies instead of chains.) However, at the hotel I
think it could be very effective. If the desk clerk knew that every
time they insisted on photocopying a customer's driver's license, they
were going to be subjected to an unpleasant experience, they would
simply stop asking for it, or they would back down very quickly as soon
as the customer raised an objection.
The clerks may not make the rules, but they do enforce them and they
have the direct and immediate power not to enforce them. Whether they
enforce them is directly tied to their expectations of the personal
consequences of enforcing or not enforcing those rules. As long as
customers are invariably polite and understanding, there are no
negative consequences for the clerk for enforcing the rules.
No, this isn't particularly nice; but neither is photocopying your
personal information for no good reason. Until the rules can be
changed, it is only reasonable to expect that hostile, anti-customer
requirements will elicit hostile customer feedback. Long term, this is
a significant component of changing the rules. As long as
airlines/pharmacists/hotels can argue that no one objects to the rules,
they don't have any incentive to change them. As soon as it becomes
obvious that hostile reactions to the rules are costing them money by
taking more time and making it harder to recruit good employees,
they'll take notice.
From: "Taylor, Stephen" <STEPHEN.TAYLOR@saic.com>
Subject: How to Fight
Bad experiences with people who do not have the power to make decisions are
nothing new, but they are getting worse.  I think that we suffer from the fact
that the world has so many people in it; we are always just another
face in line. To me it is an effect of the "mall"-ing of America and
of the growth of corporate and government bureaucracies. The
procedures which you don't like were probably given little if any
thought during their creation. Once in place, the employees follow
them or chance losing their jobs.
In particular situations, it may be worth the fight. Education is the
key to permanently changing the culture. The public needs to
understand security so that they are not fooled by the politicians and
the media managers. An airline should not be able to reject the
suggestion to put locks on cockpit doors (prior to 9/11), for
instance. The media should quit throwing the word "security" around as
if the word itself conveys the same meaning to everyone. And something
that is dear to me right now, companies that deal in financial
information should not be so easily fooled by someone with a stolen
Social Security number. The situation with the use of the SSN has gone
far beyond the need for action.
From: Carsten Turner <carsten@netway.com>
Subject: How to Fight
In the instance of the pharmacy, you don't write and say "I'll never
shop here again." Instead, you write, "I'm writing to the manufacturers
whose products you sell and tell them that as long as they do business
with you, I'll think less of their products." It's a variation of
telling the newspaper "I'm sick of your yellow journalism, so I'm
writing to your advertisers and telling them what I think."
You are only one consumer, and your spending habits will cost the
pharmacy only so much. If you bang on the door of enough
manufacturers, you might find the one that is sympathetic to your
opinions, and the pharmacy might stand to lose more.
From: Radovan Semancik <semancik@bgs.sk>
Subject: How to Fight
If I were the owner of a hotel, I would really want to know the identity
of my guests. The risk of unpaid bills or other damage could be quite
high. If I had a big corporate building, I would like to know the
identity of people entering it. There could be high-value assets to
protect, and if anyone enters to do the maintenance, I would like to
know who he is and check his permission to enter. I could even
understand the Japanese mobile company, their risk may be high. But I
could not understand why they want a passport number without checking
it. That is IMHO the real flaw in security, not the fact that they
want to identify their customers.
I cannot say I understand the "American" way of life and your public
security ideals.  If I've got it right, you live without any document
that asserts your identity (like a national ID).  If that's true, how do
you get an account at a bank?  How do you prove your identity when
entering a higher-security area?  How do you get identified for
university exams?  Driver's license?  What if I do not possess
one?  Signature?  I cannot produce the same signature twice.  (My bank
occasionally refuses to give me my money because of this.)  My
girlfriend can produce a better version of my signature than I can.  What else?
We in Europe (well, in central and eastern Europe at least) have
national IDs issued (a legacy of the 'communist' age), and I do not think our
security or privacy is worse. I have to prove my identity if I go to
the hotel, but the hotel owner must tell me why he wants my personal
data, and he must destroy it once I leave the hotel and all my bills
are paid.  If I want to make a bank withdrawal, I need to present my photo
ID.  Is that an unnecessary annoyance?  I don't think so.  I see it as a
protection measure for my money (anyone stealing my money must steal or
counterfeit my ID first). If I want to rent a car or a boat, I need to
present my ID.  That is a measure to ensure that I return the rented
machine, or can be sued for stealing it.
I'm not arguing that all identification attempts are right. The
pharmacy example in your article may be one such case, but as I do not see
all the details, I do not dare to judge.  Quick judgments made without
complete information can be really dangerous.
IMHO, the real problem is that pseudonymity (as used for example in
some digital identity systems) is not possible in real life (yet). And
we must present our full identity too often.  But fighting each and
every identification attempt, without first telling the good ones from
the bad ones, may cause much harm.
From: Richard Kay <rich@copsewood.net>
Subject: How to Fight
On the question of giving away personal details through corporate
rule-followers, I have got into the habit of giving scrambled details
where appropriate; e.g., my phone number if I don't want my real one on
the relevant database. Encouraging enough people to do this seems
likely to be easier than getting a lot of people politically active
over what may seem, to non-geeks, a technical and obscure cause.
Even reversing a couple of digits in a phone number or post (Zip) code
is enough to reduce the validity of a database, and if enough people do
this the corporate marketeers who create rules requiring employees to
collect this information will end up with unreliable and unusable
data. Even on official forms where there is a criminal penalty for
giving false information, apparently accidental minor dyslexia is
unlikely to be provable as intent to give false information in a court
of law and can help to throw grit in the wheels of unwelcome bureaucracy.
From: "bill" <bill@strahm.net>
Subject: National Threat Levels
Your comments about national threat levels don't seem quite accurate to
me. You said: "The U.S. military has a similar system; DEFCON 1-5
corresponds to the five threat alerts levels: Green, Blue, Yellow,
Orange, and Red. The difference is that the DEFCON system is tied to
particular procedures; military units have specific actions they need
to perform every time the DEFCON level goes up or down. The
color-alert system, on the other hand, is not tied to any specific
actions. People are left to worry, or are given nonsensical
instructions to buy plastic sheeting and duct tape. Even local police
departments and government organizations largely have no idea what to
do when the threat level changes."
There are specific things that happen at least in the two threat levels
that we have seen (yellow and orange). For one, during orange alerts I
can detect raised security; there are more police in and around the
airport, more checks, etc. I was very surprised that they didn't raise
the level over the 4th of July holiday weekend like they have over all
of the other holidays.  However, what I saw was an "orange" alert level
of security at the airports.  I would love to know whether security was
silently raised at the airports, whether it was a local decision (at the two
airports I fly between), or whether it was a silent raising of security at the
national level.
From: Bron Gondwana <brong@brong.net>
Subject: Hiding Jewelry in Red Wine
This is a very clever idea, but unfortunately here is a fantastic
example of how security through obscurity (possibly better said as
security through rarity or security through diversity) does work.
While the robbers are unaware of this technique, it will work, but once
it becomes common enough -- or well enough publicized -- the technique
no longer works. The robbers will just knock over every glass of red
wine in the house which is close enough to a woman.
If the restaurant offers cheap house red for this purpose, then the
robber would have to be blind (or very poor at doing their homework) to
miss this possibility.
I guess the more intelligent of those women will already be looking for
a new way to protect their possessions -- one which hasn't become
trendy enough to be detected yet. If I was one of them, I certainly
wouldn't be telling anyone what my technique was.
From: "Steven Alexander" <alexander.s@mccd.edu>
Subject: Teaching Viruses
Allan Dyer wrote: "We need more people who understand viruses and how
to combat them, but it is not necessary to create a virus to understand
them."
It is necessary. Granted, the basic idea behind a virus or worm can be
understood without writing one. However, to really understand how
viruses work, writing a virus does become necessary. Writing a program
that adds another program to itself is not the same thing. Infecting
new files from already infected executables is quite a bit more
difficult because you have to design a program that can handle a
general case rather than a specific one.
Viruses have to do things such as detect executable types and extract
their code from the infected program that they are running in. At
times, they have to perform some sort of privilege escalation in order
to spread. Copying another program into your own wouldn't normally
require this, though you could try to add a program that you don't have
permission to read. To be an expert on the subject you need to know
the difference between infecting a .COM, .EXE, Windows PE or ELF
executable; you need to know how the differences in Windows and Unix
memory organization affect viruses. The subtleties will escape you if
you've never sat down and actually written a virus.
From: "Singer, Nicholas" <nick.singer@us.army.mil>
Subject: American Express Security
When I called to activate an American Express credit card I had
received in the mail, the automated system told me that I would have to
associate a PIN with it. The system told me that other users liked the
idea of using their mother's birthday as a four digit PIN. After some
experimentation, I discovered that the system would accept only those
four-digit PINs that corresponded to dates: "0229" was acceptable but
not "0230" and certainly not "3112" (New Year's Eve, European style).
Thus the system policy administrators had reduced the 10,000 possible
four-digit PINs to 366.
When I asked a human being at American Express if I could be allowed to
choose a non-datelike PIN, they complied, but warned me that they wouldn't
be able to give me a hint if I later forgot it.
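For reference, the 366 figure is easy to verify; here's a quick sketch counting the four-digit MMDD combinations that are real calendar dates (including February 29):

# Quick check of the claim above: how many four-digit PINs are valid
# MMDD calendar dates (counting February 29)?
days_in_month = [31, 29, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
print(sum(days_in_month))   # 366, versus 10,000 unrestricted four-digit PINs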
From: Phil Stripling <philip@civex.com>
Subject: "I haven't a clue, really"
On the letter you received from "Somewhere," I'm surprised you have
published it and are treating it as a source of entertainment for your
readers. It appears to me, as you say it does to you, to be from a
mentally ill person, and I am sorry to say I just don't see the
entertainment value of this poor person's suffering. I think you made
a slip in judgment.
From: Andy Brown <logic@warthog.com>
Subject: "I haven't a clue, really"
I am writing with regard to the message in the Letters section of your
Crypto-Gram of 15 July 2003, which contains the header,
"From: Somewhere / Subject: I haven't a clue, really". In a preamble
to this message, you assert, "I reprint it here solely for
entertainment purposes."
With respect, I must tell you that I did not find this letter
entertaining in the slightest degree; on the contrary, I found it
disturbing in its content, and annoying inasmuch as you should have
chosen to publicize it. Perhaps, as you intimate, the writer of this
missive may be delusional. If that were the case, surely s/he would be
deserving of compassion -- but hardly of public display; and in any
event, it is anything but yours (or mine) to arrogate to ourselves the
role of armchair-analyst omniscience. ("... delusional paranoia
..."? That is an extremely powerful term, which even the "experts"
appear to have trouble with.)
From: Andy Brown <logic@warthog.com>
Subject: "I haven't a clue, really"
Paranoia, in small doses, is a virtue of a diligent security
professional. Delusion is among the worst vices. Reading the two
clash in this month's Crypto-Gram was fascinating. More so after
spending a few minutes Googling and finding a few links to reality from
that poor woman's letter.
Being in the position you are, you must receive a barrage of bizarre
letters. But I'm sure I'm not your only reader who would like to see
more of these reprinted in Crypto-Gram, even if the identities of the
innocent are removed. I find the relationship between security and
psychology to be both important in practice, and
thought-provoking. You are in a unique position to share these
interesting (if occasionally disturbing) blends, and I encourage you to
do so.
From: Alexandre <salexru2000@sympatico.ca>
Subject: "I haven't a clue, really"
As I am familiar with some aspects of paranoia, this letter was
interesting to me, because it is known that in many cases persecutory
paranoia has real causes.  An attacker just has to be persistent
and have sufficient resources to make the world lopsided for the chosen
person.  It could be drug-induced, environmentally induced, and/or
motivationally induced.
This poses an interesting and so far unsolved question: which techniques
could be used to differentiate reality from imagination, if reality
exists at all?  How could reality be "authenticated"?
This woman doesn't seem to be entertained by her situation -- she is
clearly "authenticating" wrongly.  It might already be too late for
her.  Anyway, when you receive e-mail from a known or unknown source
(which appears to be what triggered or stimulated the process in her case),
how can you be sure that the information in it is authentic?  Maybe passwords,
keys, and other secrets were stolen or broken and used against you, and
you just don't know about it yet.  Maybe the person who calls you on the
phone is just a computer-synthesized or recorded voice?  The future might be
even worse in this respect -- how about bioclones and AI, which will
cheat any possible biometrics?
You can't fight delusions, and you can't really ignore them, especially
when you can't discern between delusions and reality.  In real life we
use "common sense" criteria.  If something happens, is it unusual or
harmful?  Unusual is a red flag; harmful, too.  We exercise
caution then.  And common sense helps us stop before drowning in
"what ifs."  Unfortunately, on a bigger scale this approach also doesn't
work properly -- we can't predict reliably enough whether our actions will
be harmful in the long run, and for whom -- as in your story about
faking passport data.  Next year you might be filtered out as a "suspect"
person, or even worse, right?
Cryptography reeks of paranoia -- can it help fight it?  Or does it
help build up paranoid tendencies?  The more you know about how
"they" can cheat you, the more suspicious you become, right?
** *** ***** ******* *********** *************
CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses,
insights, and commentaries on computer security and cryptography. Back
issues are available on <http://www.schneier.com/crypto-gram.html>.
To subscribe, visit <http://www.schneier.com/crypto-gram.html> or send
a blank message to crypto-gram-subscribe@chaparraltree.com. To
unsubscribe, see <http://www.schneier.com/crypto-gram-faq.html>.
Please feel free to forward CRYPTO-GRAM to colleagues and friends who
will find it valuable. Permission is granted to reprint CRYPTO-GRAM,
as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Schneier is founder and CTO
of Counterpane Internet Security Inc., the author of "Secrets and Lies"
and "Applied Cryptography," and an inventor of the Blowfish, Twofish,
and Yarrow algorithms. He is a member of the Advisory Board of the
Electronic Privacy Information Center (EPIC). He is a frequent writer
and lecturer on computer security and cryptography.
Counterpane Internet Security, Inc. is the world leader in Managed
Security Monitoring. Counterpane's expert security analysts protect
networks for Fortune 1000 companies world-wide.
<http://www.counterpane.com/>
Copyright (c) 2003 by Counterpane Internet Security, Inc.