At GUADEC 2013 in Brno, Czech
Republic, Stef Walter presented his
recent work to improve the security of GNOME by removing
problematic—and frequently ignored—"security features."
The gist of Walter's approach is that interrupting users to force them
to make a security decision produces the wrong result most of the
time; far better is to try and determine the user's intent for the
task at hand, and design the application to work correctly without
intervention. This is a fairly abstract notion, but Walter presented
three concrete examples of it in action.
The users and the humans
He started off the session by tweaking the standard security
developer's notion of "the user." A "user," he said, is someone who
frequently annoys security people; users click on the wrong things,
fall for phishing attacks, and make plenty of other mistakes. It is
better to think of users in terms of "human beings,"
because "human beings" are active, creative, and use their computers
to do things—although they also get overwhelmed when faced with
too much information at once.
This is where security design enters the picture. Human brains
constantly filter out extraneous information as part of making sense
of the world, so developers should not be surprised when those humans
tune out or dismiss dialog boxes. This means that
"if you force the user to be part of the security
system"—primarily by forcing the user to make security
decisions—"you're gonna have a really bad time." He likened the
problem to a doctor who gives the patient all of the possible
treatment options: the patient will get frustrated and ask "what would
you do?" Software developers need to be prepared to make a strong
recommendation, rather than presenting all of the choices to the user.
Walter then had a few bits of wisdom to share from this approach to
security design. First, he said, the full extent of the humans'
involvement in security should be to identify themselves. You can ask them
for a password to prove who they are, but after that they should not
be interrupted with questions about security policy. Next, it is
important to remember that "professional users" are not different in
this regard. By "professionals" he seemed to mean developers, system
administrators, and others with knowledge of security systems. But
just because they have this knowledge does not mean they should be
interrupted.
That is because the worst possible time to ask the user to make a
risky decision is when they are in the middle of trying to do
something else, he said. "You're going to get results that are worse
than random chance."
Application to applications
For developers, Walter offered two design maxims. First:
Prompts are dubious, he said. If you are refactoring your
code and you see a user prompt, regard it with suspicion, asking if
you really need to prompt the user for a response. The end goal, he
said, should be to get rid of Yes/No prompts.
The second maxim follows from the first: Security prompts are
wrong. Or at least they are wrong 99% of the time or more, he
said. Sure, you ask for a password, but that is an identification
prompt, and passwords are an unfortunate fact of life. But prompts
that ask questions about security, like "Do you want to continue?" or
"Do you want to ignore this bad certificate?" are wrong. Furthermore,
he added, if you then make the user's choice permanent, you add insult
to injury.
He gave several examples of this bad design pattern, including the
all-too-familiar untrusted-certificate prompt from the web browser,
the "this software is signed by an untrusted provider" prompt from a
package manager, and the "a new update is available that fixes your
problem, please run the following command" prompt from Fedora's automatic bug reporting
tool.
The correct approach, he said, is instead to stop interrupting the user, let
the user take some action that expresses their intent, and then make a
decision based on that intent. In other words, figure out what the
user is trying to do, and design the software so that they can express
their intent while working.
A positive example in this regard is Android's Intents system,
which he said is ripe with potential for getting things wrong, but
actually gets them right. For example, the "file open" Intent could
prompt the user with a bad dialog of the form "Application X has
requested read/write access to file /foo/bar/baz. Continue? Disallow?"
Instead, it just opens the file chooser and lets the user
select the desired file. Thus the user is asked to take a clear
action, rather than being asked a security-policy question.
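The GNOME toolkit offers the same escape hatch: the act of picking a
file in the GTK+ file chooser can itself serve as the grant of access,
with no separate permission dialog. As a minimal sketch of that
pattern (my illustration, not code from the talk; the function name is
made up):

    #include <gtk/gtk.h>

    /* The user's choice of a file in the chooser is itself the grant of
     * access, so no "allow access to /path?" prompt is ever shown. */
    static void open_user_chosen_file(GtkWindow *parent)
    {
        GtkWidget *dialog = gtk_file_chooser_dialog_new("Open File", parent,
                                GTK_FILE_CHOOSER_ACTION_OPEN,
                                "_Cancel", GTK_RESPONSE_CANCEL,
                                "_Open", GTK_RESPONSE_ACCEPT,
                                NULL);

        if (gtk_dialog_run(GTK_DIALOG(dialog)) == GTK_RESPONSE_ACCEPT) {
            char *filename = gtk_file_chooser_get_filename(GTK_FILE_CHOOSER(dialog));
            /* ... open and use the file the user picked ... */
            g_free(filename);
        }
        gtk_widget_destroy(dialog);
    }

The design choice is the same as with Intents: the user's action
carries the authorization, so there is nothing left to ask.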
A second, theoretical example would be the potentially private
information in the Exif tags of a photo. If the user starts to upload
a photo, the wrong approach would be to interrupt with a dialog asking
if the user is aware that there is private information in the Exif
tags. The better approach is simply to show the information (e.g.,
geographic location and a detailed timestamp) with the photo and make it
easy to clear out the information with a button click.
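One crude way to implement that one-click clearing, sketched here as my
own assumption rather than anything Walter demonstrated, is simply to
re-encode the image: gdk-pixbuf's JPEG writer does not carry the source
file's Exif block over, so saving a fresh copy drops the location and
timestamp data.

    #include <gdk-pixbuf/gdk-pixbuf.h>

    /* Sketch: write a re-encoded copy of a photo before uploading it.
     * Assumption: gdk-pixbuf's JPEG writer does not copy the source
     * file's Exif block, so the copy carries no location or timestamp. */
    static gboolean save_copy_without_exif(const char *src, const char *dst,
                                           GError **error)
    {
        GdkPixbuf *pixbuf = gdk_pixbuf_new_from_file(src, error);
        gboolean ok;

        if (pixbuf == NULL)
            return FALSE;

        ok = gdk_pixbuf_save(pixbuf, dst, "jpeg", error, "quality", "95", NULL);
        g_object_unref(pixbuf);
        return ok;
    }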
The fix is in
Walter then showed off three new pieces of work he is developing to
address just such security-interruption problems. The first is the
removal of untrusted-certificate prompts. This garnered a round of
applause from the audience, although they were a bit more skeptical of
Walter's solution, which is to simply drop the connection.
Dropping the connection is usually the correct behavior on the
browser's part, he said, since the certificate problem is either an attack or a
server-side misconfiguration. But there is one major class of
exception, he added: enterprise certificate authorities (CAs). In
these situations, an enterprise deploys an "anchor" certificate for
its network which is not known to browsers out of the box. By adding
support for managing enterprise CAs, GNOME can handle these situations
without bringing back the untrusted certificate prompt.
Walter's solution is p11-kit-trust,
which implements a shared "Trust Store" where any crypto library can
store certificates, blacklists, credentials, or other information, and
they will automatically be accessible to all applications. So far,
NSS and GnuTLS support the Trust Store, with a temporary
workaround in place for OpenSSL and Java. Packages are already
available for Debian and Fedora. There are command-line tools for
administrators to add new certificates to the store, but there are not
yet GUI tools or documentation. The same tools, he said, should be
used for installing test certificates, personal or self-signed
certificates, and other use cases encountered by "professional" users.
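For applications, the payoff is that "trust what the system trusts"
becomes a single call into whichever crypto library they already use.
A minimal GnuTLS sketch (my example, not from the talk) looks like
this:

    #include <gnutls/gnutls.h>

    /* Load the system-wide trust anchors (provided through p11-kit-trust
     * where it is deployed) instead of shipping a private CA bundle.
     * Returns the number of certificates loaded or a negative error code. */
    static int use_system_trust(gnutls_certificate_credentials_t cred)
    {
        return gnutls_certificate_set_x509_system_trust(cred);
    }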
The second new project is a change to how applications store
passwords. Right now, gnome-keyring stores all passwords for
all applications, but Walter noted that this is really surprising to
users, particularly when they learn that any application can request
any other application's stored passwords. The user's expectation, he
said, is that passwords are "account data" and would be stored with
other account information for the application. That is true, he
observed, but it has not been done in practice because there is not a
reliable way to encrypt all of this per-application storage.
The solution is libsecret, which
applications can use to encrypt and store passwords with their other
account information. Libsecret uses the Linux kernel keyring to hold
a session key that the applications request to use for encrypting
their saved passwords. Normally this session key is derived at the
start of the session from the user's login password, but other values
can also be returned to applications for policy reasons. Returning a
blank key, Walter said, means "store your data in the clear," while
not returning any value means the application is not permitted to save
data.
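The storage half of this is already a small API for application
developers. Here is a minimal libsecret sketch (the schema name and
attribute are invented for illustration); how and where the secret
gets encrypted is policy handled behind the call, not something the
application asks the user about:

    #include <libsecret/secret.h>

    /* A hypothetical schema describing this application's saved password. */
    static const SecretSchema account_schema = {
        "org.example.MailAccount", SECRET_SCHEMA_NONE,
        {
            { "account", SECRET_SCHEMA_ATTRIBUTE_STRING },
            { NULL, 0 },
        }
    };

    /* Store the password; the library and session policy decide how it is
     * encrypted, so the application never asks a security question. */
    static gboolean save_account_password(const char *account,
                                          const char *password,
                                          GError **error)
    {
        return secret_password_store_sync(&account_schema,
                                          SECRET_COLLECTION_DEFAULT,
                                          "Mail account password", password,
                                          NULL /* cancellable */, error,
                                          "account", account,
                                          NULL);
    }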
The third new feature Walter is working on is the solution to a
GNOME annoyance, in which the user is prompted at login time for the
password, even if they have logged in via another method (such as
fingerprint, PIN, or auto-login). The cause of this re-authentication
is that GNOME needs the user password to decrypt secret data; the same
double-step occurs when a user is prompted once for their password
when unlocking an encrypted hard disk, and again when logging in to
the session.
Walter's solution is a pluggable authentication module (PAM) called
pam_unsuck that, again, relies on the kernel keyring. The
kernel keyring will hold the user's password after login so it can be
reused. If an account does not use any password to log in, a password
will be created for it and saved in hardware-protected storage (where
possible). He noted that the decision to use auto-login,
fingerprints, or PINs already constitutes the user's conscious choice
to use an authentication method less secure than a password. This
scheme allows them to make that decision; it just prevents the
nuisance of being prompted for a password anyway.
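The kernel keyring that pam_unsuck relies on is reachable through the
standard keyutils interface; the following is a hypothetical sketch of
the mechanism (the key description is made up, and the real module may
well differ):

    #include <keyutils.h>
    #include <string.h>

    /* Hypothetical sketch: stash the just-verified login password in the
     * session keyring so later components (such as keyring unlock) can
     * reuse it instead of prompting again.  Build with -lkeyutils. */
    static key_serial_t cache_login_password(const char *password)
    {
        return add_key("user", "gnome:login-password",
                       password, strlen(password),
                       KEY_SPEC_SESSION_KEYRING);
    }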
Walter ended the session by imploring developers to "go forth and
kill ... prompts." There are many more places where changing the
user-interruption paradigm can help GNOME craft a more secure system
overall, he said, by putting fewer security decisions in front of the
user.
[The author wishes to thank the GNOME Foundation for assistance
with travel to GUADEC 2013.]
Brief items
This whole issue of privacy is utterly fascinating to me. Who's ever heard
of this information being misused by the government? In what way?
— Larry Ellison, as quoted in The Register
You, an executive in one of those companies, can fight. You'll probably
lose, but you need to take the stand. And you might
win. It's time we
called the government's actions what it really is:
commandeering. Commandeering is a practice we're used to in wartime, where
commercial ships are taken for military use, or production lines are
converted to military production. But now it's happening in peacetime. Vast
swaths of the Internet are being commandeered to support this surveillance
state.
— Bruce Schneier has advice for internet company executives
This experience has taught me one very important lesson: without
congressional action or a strong judicial precedent, I would _strongly_
recommend against anyone trusting their private data to a company with
physical ties to the United States.
— Ladar Levison shuts down the Lavabit email service
Today, another secure email provider, Lavabit, shut down their system lest
they "be complicit in crimes against the American people." We see the
writing the wall, and we have decided that it is best for us to shut down
Silent Mail now. We have not received subpoenas, warrants, security
letters, or anything else by any government, and this is why we are acting
now.
— Silent Circle shuts down its email service
On his blog, KDE hacker Martin Gräßlin issues a
call to action for free software developers to have their projects default to privacy-preserving operation.
"
With informational self-determination every user has to be always aware of which data is sent to where. By default no application may send data to any service without the users consent. Of course it doesn't make sense to ask the user each time a software wants to connect to the Internet. We need to find a balance between a good usability and still protecting the most important private data.
Therefore I suggest that the FLOSS community designs a new specification which applications can use to tell in machine readable way with which services they interact and which data is submitted to the service. Also such a specification should include ways on how users can easily tell that they don't want to use this service any more."
New vulnerabilities
chrony: two vulnerabilities
Package(s): chrony
CVE #(s): CVE-2012-4502, CVE-2012-4503
Created: August 12, 2013
Updated: September 18, 2013
Description:
From the Red Hat bugzilla:
Chrony upstream has released 1.29 version correcting the following two security flaws:
* CVE-2012-4502: Buffer overflow when processing crafted command packets
When the length of the REQ_SUBNETS_ACCESSED, REQ_CLIENT_ACCESSES
command requests and the RPY_SUBNETS_ACCESSED, RPY_CLIENT_ACCESSES,
RPY_CLIENT_ACCESSES_BY_INDEX, RPY_MANUAL_LIST command replies is
calculated, the number of items stored in the packet is not validated.
A crafted command request/reply can be used to crash the server/client.
Only clients allowed by cmdallow (by default only localhost) can crash
the server.
With chrony versions 1.25 and 1.26 this bug has a smaller security
impact as the server requires the clients to be authenticated in order
to process the subnet and client accesses commands. In 1.27 and 1.28,
however, the invalid calculated length is included also in the
authentication check which may cause another crash.
* CVE-2012-4503: Uninitialized data in command replies
The RPY_SUBNETS_ACCESSED and RPY_CLIENT_ACCESSES command replies can
contain uninitialized data from the stack when client logging is disabled
or a bad subnet is requested. These commands were never used by chronyc
and they require the client to be authenticated since version 1.25.
cxf: denial of service
Package(s): cxf
CVE #(s): CVE-2013-2160
Created: August 12, 2013
Updated: August 14, 2013
Description:
From the Red Hat bugzilla:
Multiple denial of service flaws were found in the way the StAX parser implementation of Apache CXF, an open-source web services framework, performed processing of certain XML files. If a web service application utilized the services of the StAX parser, a remote attacker could provide a specially crafted XML file that, when processed by the application, would lead to excessive system resource (CPU cycles, memory) consumption by that application.
mozilla: multiple vulnerabilities
Package(s): firefox
CVE #(s): CVE-2013-1706, CVE-2013-1707, CVE-2013-1712
Created: August 14, 2013
Updated: August 14, 2013
Description:
From the CVE entries:
Stack-based buffer overflow in maintenanceservice.exe in the Mozilla Maintenance Service in Mozilla Firefox before 23.0, Firefox ESR 17.x before 17.0.8, Thunderbird before 17.0.8, and Thunderbird ESR 17.x before 17.0.8 allows local users to gain privileges via a long pathname on the command line. (CVE-2013-1706)
Stack-based buffer overflow in Mozilla Updater in Mozilla Firefox before 23.0, Firefox ESR 17.x before 17.0.8, Thunderbird before 17.0.8, and Thunderbird ESR 17.x before 17.0.8 allows local users to gain privileges via a long pathname on the command line to the Mozilla Maintenance Service. (CVE-2013-1707)
Multiple untrusted search path vulnerabilities in updater.exe in Mozilla Updater in Mozilla Firefox before 23.0, Firefox ESR 17.x before 17.0.8, Thunderbird before 17.0.8, and Thunderbird ESR 17.x before 17.0.8 on Windows 7, Windows Server 2008 R2, Windows 8, and Windows Server 2012 allow local users to gain privileges via a Trojan horse DLL in (1) the update directory or (2) the current working directory. (CVE-2013-1712)
phpMyAdmin: multiple vulnerabilities
putty: multiple vulnerabilities
Package(s): putty
CVE #(s): CVE-2013-4206, CVE-2013-4207, CVE-2013-4208, CVE-2013-4852
Created: August 12, 2013
Updated: September 30, 2013
Description:
From the Debian advisory:
CVE-2013-4206:
Mark Wooding discovered a heap-corrupting buffer underrun bug in the
modmul function which performs modular multiplication. As the modmul
function is called during validation of any DSA signature received
by PuTTY, including during the initial key exchange phase, a
malicious server could exploit this vulnerability before the client
has received and verified a host key signature. An attack to this
vulnerability can thus be performed by a man-in-the-middle between
the SSH client and server, and the normal host key protections
against man-in-the-middle attacks are bypassed.
CVE-2013-4207:
It was discovered that non-coprime values in DSA signatures can
cause a buffer overflow in the calculation code of modular inverses
when verifying a DSA signature. Such a signature is invalid. This
bug however applies to any DSA signature received by PuTTY,
including during the initial key exchange phase and thus it can be
exploited by a malicious server before the client has received and
verified a host key signature.
CVE-2013-4208:
It was discovered that private keys were left in memory after being
used by PuTTY tools.
CVE-2013-4852:
Gergely Eberhardt from SEARCH-LAB Ltd. discovered that PuTTY is
vulnerable to an integer overflow leading to heap overflow during
the SSH handshake before authentication due to improper bounds
checking of the length parameter received from the SSH server. A
remote attacker could use this vulnerability to mount a local denial
of service attack by crashing the putty client.
python-glanceclient: incorrect SSL certificate CNAME checking
Package(s): python-glanceclient
CVE #(s): CVE-2013-4111
Created: August 14, 2013
Updated: September 4, 2013
Description:
From the openSUSE advisory:
This update of python-glanceclient fixed SSL certificate CNAME checking.
ReviewBoard, python-djblets: multiple vulnerabilities
Package(s): ReviewBoard, python-djblets
CVE #(s): (none)
Created: August 8, 2013
Updated: October 2, 2013
Description:
From the Fedora advisory:
* Function names in diff headers are no longer rendered as HTML.
* If a user’s full name contained HTML, the Submitters list would render it as HTML, without
escaping it. This was an XSS vulnerability.
* The default Apache configuration is now more strict with how it serves up file attachments.
This does not apply to existing installations. See
http://support.beanbaginc.com/support/solutions/articles/... for
details.
* Uploaded files are now renamed to include a hash, preventing users from uploading malicious
filenames, and making filenames unguessable.
* Recaptcha support has been updated to use the new URLs provided by Google.
spice: denial of service
Package(s): spice
CVE #(s): CVE-2013-4130
Created: August 12, 2013
Updated: September 4, 2013
Description:
From the Red Hat bugzilla:
Currently, both red_channel_pipes_add_type() and red_channel_pipes_add_empty_msg() use plain RING_FOREACH(), which is not safe against removals from the ring within the loop body. Yet, when a (network) error does occur, the current item could be removed from the ring down the road and the assertion in RING_FOREACH()'s ring_next() could trip, causing the process containing the spice server to abort.
A user able to initiate a spice connection to the guest could use this flaw to crash the guest.
strongswan: denial of service
Package(s): strongswan
CVE #(s): CVE-2013-5018
Created: August 14, 2013
Updated: August 23, 2013
Description:
From the openSUSE advisory:
This update of strongswan fixed a denial-of-service vulnerability that could be triggered by special XAuth usernames and EAP identities.
swift: denial of service
Package(s): swift
CVE #(s): CVE-2013-4155
Created: August 13, 2013
Updated: September 4, 2013
Description:
From the Debian advisory:
Peter Portante from Red Hat reported a vulnerability in Swift.
By issuing requests with an old X-Timestamp value, an
authenticated attacker can fill an object server with superfluous
object tombstones, which may significantly slow down subsequent
requests to that object server, facilitating a Denial of Service
attack against Swift clusters.
vlc: unspecified vulnerability
Package(s): vlc
CVE #(s): CVE-2013-3565
Created: August 12, 2013
Updated: August 14, 2013
Description:
From the vlc 2.0.8 announcement:
2.0.8 is a small update that fixes some regressions of the 2.0.x branch of VLC.
2.0.8 fixes numerous crashes and dangerous behaviors.
xymon: unauthorized file deletion
Package(s): xymon
CVE #(s): CVE-2013-4173
Created: August 12, 2013
Updated: August 14, 2013
Description:
From the Mageia advisory:
A security vulnerability has been found in version 4.x of the
Xymon Systems & Network Monitor tool
The error permits a remote attacker to delete files on the server
running the Xymon trend-data daemon "xymond_rrd".
File deletion is done with the privileges of the user that Xymon is
running with, so it is limited to files available to the userid
running the Xymon service. This includes all historical data stored
by the Xymon monitoring system.
Page editor: Jake Edge