By Michael Kerrisk
February 6, 2013
At the beginning of his talk on Linux game development at linux.conf.au
2013, Eric Anholt noted that his original motivation for getting into
graphics driver development was that he wanted to be able to play games on
alternative operating systems. At that time, there were a few games
available, but the graphics drivers were in such a poor state that he ended
up working on drivers instead of playing games. Thus, he has now been
working in Intel's graphics driver team for seven years; currently, he
works mainly on Mesa's OpenGL graphics
drivers.
Eric's talk took the form of a short review of some recent history in
Linux game development followed by a description of his experiences in the
Intel open source graphics driver team working with Valve Software to port
the Steam game platform to
Linux.
Recent history of game development on Linux
Eric started with a summary of significant changes that have taken
place in the graphics stack over the last seven years. These changes
include kernel mode
setting and improvements to memory management so that multiple
processes can reliably employ memory on the graphics card. On the OpenGL
side, things have improved considerably. "Back when I started, we
were about ten years behind". The then-current OpenGL version on
Linux was
2.1, no modern games ran on Linux, and there was no OpenGL ES support on Linux.
By now, however, Linux supports OpenGL 3.1 and has achieved Khronos-certified
OpenGL ES 2.0 conformance, and OpenGL ES 3.0 certification seems to be quite close.
Development of open
source games seems to have lagged. Eric suggested a number of reasons for
this. One of these is that creating a game requires building a
multi-talented team composed of artists, developers, and designers. It's
difficult to put a team like that together. And then, even if one does
manage to assemble such a team, it's hard to agree on a direction: when it
comes to game design, the design space is so large that it can be difficult
to agree on what you are creating. Finally, the move into open source game
development means that you spend less time doing the thing you want to do:
playing games.
Nevertheless, there have been a few open source games, such as etracer
(Extreme Tux Racer), Neverball, Xonotic, Foobillard, and Wesnoth. In
addition, there were closed source games such as Quake (later open
sourced), Unreal Tournament 2004, the Loki Software ports, Minecraft, and
whatever users could get to run under Wine.
In May 2010, the Humble Indie
Bundle appeared. The concept was a package of games made available
DRM-free, with the user choosing the price that they would pay.
"They've actually released some surprisingly good games for
Linux." One of those games was Braid, and Eric noted that the developers
who participated in Humble
Bundle learned an important lesson from an earlier attempt to
port that game to Linux.
The developer of Braid, Jonathan Blow, made a blog
post asking for help on how to port Braid to Linux, asking questions
such as "how do I deal with mouse grabs, so that mouse clicks only go to
the game window?" and "how do I output sound?" The community
did try to help: in all, the blog post got 251 responses, many of
them containing directly conflicting advice. In the end, the developer
gave up: he couldn't justify spending the time to determine the correct way
to do the port for what was a small target market.
The lesson that the Humble Bundle developers learned from the Braid
experience was that game developers should not be burdened with the task of
porting games. So instead, they employed Ryan C. Gordon, a developer who had
already ported a number of games to Linux, to port all of their games.
This approach has been surprisingly successful at quickly getting games to
run on Linux.
Working with Valve on the Linux port of Steam
There have been petitions for Valve Software to port Steam to
Linux for as long as Steam has been around, Eric said. The Intel
graphics driver team started working with Valve Software in July 2012.
During the porting project, the Intel team had access to the Steam source
code and worked with Valve on both tuning Steam to work better with the
Mesa graphics library and tuning Mesa to work better with Steam. The
closed beta test for the port was started in November 2012, and the open
beta started in December. The port included Steam, the Source
engine, and the game Left 4 Dead 2.
The cooperative work between the graphics developers and Valve proved
to be quite productive, but in the process, the graphics developers learned
about a number of things that Valve found really disappointing.
The first of these disappointments concerned ARB_debug_output.
This is an OpenGL extension, widely implemented in the Windows development
environment, that provides a general event logging infrastructure between the
graphics layer on one side and middleware and applications on the other. It is an
important debugging tool in the Windows environment, "where you don't really
have the console so much". There is an implementation of
ARB_debug_output in Mesa, but it is rudimentary, supporting only two event types.
Another major disappointment concerned bad debugging and
performance-measurement tools. The Valve developers described the tools
they used for debugging on Windows, and wanted to know what equivalents
existed on Linux. In response, the graphics developers were excited to
show Valve an API-tracing tool that they had developed. API tracing is a
technique that is extremely useful for driver developers because it allows
them to obtain a reproducible trace of the graphics operations requested by
the application. This means driver developers can get the application out of the
way while they replay the trace in order to debug problems in the driver or
improve the driver's performance. However, this sort of low-level tool
provides little assistance for analyzing performance at the application
level.
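As a concrete illustration of this kind of tracing (using apitrace, one widely
used open source API tracer; the talk did not name a specific tool, so treat
the commands and the "mygame" name as a sketch rather than a description of
the Intel team's exact workflow):

    # Record every OpenGL call the game makes into a trace file
    # (apitrace writes a .trace file named after the program).
    apitrace trace ./mygame

    # Replay the captured calls later, without the game running, so the
    # driver can be debugged or profiled in isolation.
    apitrace replay mygame.trace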
By contrast, Windows has some good tools in this area that allow the
application programmer to insert tracepoints to track application
steps such "starting to draw the world", "starting to draw a figure", and
"starting to draw a helmet". This enables the application developer to
obtain a hierarchical view of performance and determine if (say) drawing
helmets takes a long time. Linux has a lot of catching up to do in this area.
The Valve developers also complained that Mesa was simply too slow.
Many of the code paths in Mesa are not well optimized, with enormous
switch statements containing nested conditionals. One possible
solution is to offload some of the graphics work onto a separate thread
running on a different core. Some work has been done in this area, but, so
far, performance is no better than for the existing single-threaded
approach. More work is required.
Notwithstanding the disappointments, there were other aspects of
working on Linux that the Valve developers loved. For example, they
greatly appreciated the short development cycles that were made possible
when using open source drivers.
Although the support for ARB_debug_output was poor on Linux, the Valve
developers were impressed when Eric was able to quickly implement some
instrumentation of driver hotspots, so that the driver would log messages
when the application asked it to carry out operations that were known to
perform poorly. The Valve developers were also surprised that the logging
could be controlled by environment variables, so that the same driver would
be used in both quiet and "logging" modes.
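Mesa's Intel driver exposes this kind of switch through environment variables
such as INTEL_DEBUG; a hedged illustration of the mechanism (the particular
flag shown, and the "mygame" binary, are assumptions for the example, not
necessarily the exact knob from the talk):

    # Run the game with the driver's performance warnings enabled and
    # collect whatever the driver complains about on stderr.
    INTEL_DEBUG=perf ./mygame 2> driver-perf-warnings.log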
A final pleasant surprise for the Valve developers was that drivers
could be debugged using breakpoints. That possibility is unavailable with
closed source Windows drivers. More generally, the Valve developers don't
have much insight into the closed source drivers that they use on Windows
(or, for that matter, closed source drivers on Linux). Thus, they have to
resort to experimentation to form mental models about performance problems
and how to get around them.
Concluding remarks
For gaming enthusiasts, the announcement by Valve—one of the largest
producers of game software—that Steam would be ported to
Linux was something of
a watershed moment. The Linux gaming landscape is poised to grow much
bigger, and even those of us who are not gamers can reflect that a much-improved games ecosystem will at the very least widen interest in Linux as
a platform for desktop computer systems.
Comments (17 posted)
February 6, 2013
This article was contributed by Martin Michlmayr
The Free Software Foundation (FSF) has received criticism in recent months for its copyright assignment policies
and for being slow in dealing with reported GPL
violations. In a talk at FOSDEM on
February 3, John Sullivan, the Executive
Director of the FSF, addressed some of these
issues. In his "State of the GNUnion" talk,
Sullivan highlighted the FSF's recent licensing and compliance activities and
described challenges that are important to the organization for 2013.
Licensing and Compliance Lab
Sullivan started with an overview of the members of the Licensing and
Compliance Lab and its activities. The team is led by Josh Gay, the former
FSF campaigns manager, and Donald Robertson, who has been handling
copyright assignments for some time. While Sullivan helps to define the
overall strategy employed for licensing in order to promote freedom, the team is
supported by Richard Stallman, Bradley Kuhn (a Director of the FSF) and
lawyers from the Software Freedom Law Center, in particular Aaron
Williamson and Eben Moglen. Finally, there's a team of volunteers that
helps out with questions that come in through the licensing@fsf.org
address. Sullivan noted that it is important for the FSF to communicate
with people about license choices and related topics.
The Licensing and Compliance Lab focuses on a number of areas. A big one
is the production of educational materials about GNU licenses. It also
investigates license violations, especially for code entrusted to the
FSF. Finally, it certifies products that use and require only free
software.
Licensing is important, Sullivan said, because all software is proprietary
by default. The GPL grants rights to users and he believes that the GPL is
the right license to boost free software adoption. He mentioned claims
that the use of the GPL is declining, but criticized those studies for not
publishing their methodology or data. His own study, based on Debian
squeeze, showed that 93% of packages contained code under the GPL family.
He noted the difficulty of measuring GPL adoption: does a package count if
it contains any GPL code, or should you count lines of code? And what about
software that is abandoned? Sullivan noted that his interest in GPL
adoption is obviously not because the FSF makes money from licensing but
because of his belief that the GPL provides the "strongest legal protection
to ensure that free software stays free".
Sullivan highlighted a new initiative to create more awareness of GNU
licenses. The lab has started publishing interviews on its blog to share insights
about the license choice of different projects. Recent posts have featured
Calibre and Piwik.
Compliance efforts and copyright assignment
The
FSF collects copyright assignments in order to enforce the GPL, Sullivan
said, but there are
a number of misconceptions about that. He explained that the GNU project does not mandate copyright
assignment and that individual projects have a choice when they join
the GNU project. However, if a project has
chosen to
assign copyright, all contributions to that project have to be assigned to
the FSF.
The FSF hears frequent complaints that the logistics of copyright
assignment slow down software development within the GNU project. It
has made a number of changes to improve this process. Historically, the
process involved asking for information by email, then mailing out a paper
form, which then had to be signed and sent back. These days, the FSF can email out
forms more quickly. It also accepts scanned versions in the US and
recently expanded this option to Germany after getting a legal opinion there.
Sullivan noted that the laws in many places are behind the times when it
comes to
scanned or digital signatures. Having said that, the FSF is planning to
accept GPG-signed assignments for some jurisdictions in the future.
Sullivan lamented that the FSF's copyright assignment policy is often used by
companies to justify copyright assignment. He noted that there are
significant differences between assigning copyright to an entity like the
FSF and assigning it to a company with profit motives. Not only does the FSF
promise that the software will stay free software, but acting against its
mission would also jeopardize the FSF's non-profit charity status.
One reason the FSF owns the copyright for some GNU projects is to perform
GPL enforcement on behalf of the project. He discussed recent complaints
that the FSF is not actively pursuing license violations, notably the issues raised by Werner Koch
from the GnuPG project. Sullivan explained that this was, to a large degree, a
communication problem. The FSF had in fact gone much further than Koch
was aware of, but they failed to communicate that. He promised to keep projects
better informed about the actions taken. Unfortunately, a lot of
this work is not discussed publicly because of its nature. The FSF usually
approaches companies in private and will only talk about it in public
if no agreement can be reached. Also, if it comes to legal action, the FSF
once again cannot
talk about it in public.
The lab closed 400 violation reports in 2012, Sullivan said. Out of
those, some turned out not to be violations at all, but the majority of
violation reports were followed up by
actions from the lab that resulted in compliance. He also noted that the
FSF is planning to
add additional staff resources in order to respond to reported violations
more quickly.
JavaScript and non-free code running in browsers
Sullivan then went on to
cover a number of challenges facing the free software world. Richard
Stallman described the "JavaScript Trap" a
few years ago, which is the problem of non-free code running in web browsers.
Sullivan explained that these days browser scripts can be quite advanced
programs but "for some reason we've been turning a blind eye" to their
typically non-free nature. The FSF is
spending a lot of time on tackling this problem and has created LibreJS, which is an
extension for Mozilla-based browsers. LibreJS identifies whether
JavaScript files are free
software, and it can be configured to reject any script that's not free.
In order for this to work, the FSF developed a specification that web
developers can follow to mark their JavaScript code as free software.
Developers can either put a specific header in their JavaScript files or
publish license information using JavaScript Web Labels. Gervase Markham
pointed out that Mozilla uses web server headers and Sullivan agreed that
LibreJS could be enhanced to support that method too.
Sullivan added that many JavaScript files are free software already, but
that developers have to start marking them as such. They are working with
upstream projects, such as MediaWiki and Etherpad Lite, on doing so.
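As a rough sketch of what the in-file marking can look like (loosely based on
the @licstart/@licend convention in the FSF's JavaScript guidance; the exact
wording of the notice below is illustrative only):

    /* @licstart  The following is the entire license notice for the
       JavaScript code in this file.

       Copyright (C) 2013  Example Author

       This code is free software: you can redistribute it and/or modify it
       under the terms of the GNU General Public License as published by the
       Free Software Foundation, either version 3 of the License, or (at
       your option) any later version.

       @licend  The above is the entire license notice for the JavaScript
       code in this file.  */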
Certification program
The FSF has launched a certification program to identify hardware that
only uses free software. It wants to make it easy for people to care.
Sullivan emphasized that the label has to be attractive and hopes
it will cause manufacturers to respect user freedom more. He showed an
alternative label, similar in style to the warning note on a cigarette package
("This product may contain material licensed to you under the GNU General
Public License"), and explained that this "is not what we want to do". The
actual logo shows the Liberty Bell along with the word "freedom". The first
product to achieve certification is the LulzBot 3D printer.
User freedom
As an alternative to Android, Sullivan recommended Replicant—a
fully free Android distribution—for those willing to sacrifice some
functionality (such as WiFi and Bluetooth on Sullivan's mobile phone) for
freedom. He also encouraged Android users to take advantage of the F-Droid Repository to download free
software apps for their devices. F-Droid also provides the option to make
donations to the authors of the free software apps.
Sullivan also briefly commented on UEFI secure boot. He said that while
the FSF
is obviously "annoyed by it", it is not fully opposed—there is
nothing inherently wrong with secure boot as long as the power remains with
the users. However, it's important to distinguish secure boot from what he
called "restricted boot", which can be found on ARM-based Windows
devices that lock down the device and don't give users any choice. This
is obviously not acceptable, according to the FSF.
Concluding remarks
Sullivan gave an interesting overview of the FSF's recent activities and
upcoming
challenges it intends to tackle. He is aware of concerns that have been
expressed by members of the GNU community in recent months and is keen to
improve the situation. The talk showed that the FSF is working on many
activities and that it hopes to improve and expand its work as funding
allows.
Comments (28 posted)
By Jonathan Corbet
February 5, 2013
A user looking for the Firefox browser on a Debian system may come away
confused; seemingly, that program is not shipped by Debian. In truth, the
desired software is there; it just goes under the name "iceweasel." This
confusing naming is a result of the
often
controversial intersection of
free software and trademarks. Critics claim that trademarks can remove
some of the freedoms that should come with free software, while proponents
assert that trademarks are needed to protect users from scam
artists and worse. A look at the activity around free office suites tends
to support the latter group — but it also shows the limits of what
trademarks can accomplish.
The core idea behind a trademark is that it gives the owner the exclusive
right to apply the trademarked name to a product or service. Thus, for
example, the Mozilla Foundation owns the trademark for the name Firefox as
applied to "computer programs for accessing and displaying files on both
the internet and the intranet"; a quick search on the
US Patent and Trademark Office site shows other owners of the name for
use with skateboards, bicycles, wristwatches, power tools, and vehicular
fire suppression systems. Within the given domain, the Mozilla Foundation
has the exclusive right to control which programs can be called "Firefox".
The Foundation's trademark policy
has been seen by some as being overly restrictive (almost no patches can be
applied to an official release without losing the right to the name); that
is why Debian's browser is called "Iceweasel" instead. But those same
restrictions allow the Mozilla Foundation to stop distribution of a
program called "Firefox" that sends credit card numbers to a third party.
The Document Foundation (TDF) owns a trademark on "LibreOffice" in the US, while
the Apache Software Foundation (ASF) owns "Apache OpenOffice" and
"OpenOffice.org". Both foundations have established trademark usage policies (TDF,
ASF) intended to
ensure that users downloading their software are getting what they expect:
the software released by the developers, without added malware. Without
this protection, it is feared, the net would quickly be filled with
corrupted versions of Apache OpenOffice and LibreOffice that would barrage
users with ads or compromise their systems outright.
How effective is this protection? To a degree, trademarks are clearly
working. Reports of systems compromised by corrupt versions of free office
suites are rare; when somebody attempts to distribute malware versions,
trademarks give the foundations the ability to get malware distributors
shut down relatively quickly. It seems hard to dispute that the
application of trademark law has helped to make the net a somewhat safer
place.
Questionable distributors
One might ask: safer from whom? Consider, for example, a company called
"Tightrope Interactive." Tightrope was sued by
VideoLan.org (the developers of the VLC media player) and Geeknet (the
operators of SourceForge) in 2010; they were accused of "trademark
infringement, cyberpiracy and violating California's consumer protection
law against spyware." Tightrope had been distributing "value-added"
versions of VLC from its site at vlc.us.com; it was one of many unwanted VLC redistributors during that time.
That litigation was settled
in 2011; the terms are mostly private, but they included the transfer of
vlc.us.com over to VideoLan.org, ending the use of that channel by
Tightrope.
On Friday, April 15, 2011, Oracle announced
that OpenOffice.org would be turned into a "community project" of an (at that
point) unspecified nature. On April 18 — the next business day —
Tightrope Interactive filed for ownership of the OpenOffice trademark in
the US. That application was eventually abandoned, but not willingly; as
Apache OpenOffice contributor Rob Weir
recently noted in passing, "It took
some special effort and legal work to get that application
rejected." Companies in this sort of business clearly see the value
in controlling that kind of trademark; had Tightrope Interactive been
successful, it would have been able to legally distribute almost any
software under the name "OpenOffice."
The fact that the project successfully defended
the trademark in this case should impede the distribution of corrupted
versions of Apache OpenOffice in the future.
[Image: Sample OpenOffice ads]
Or so one would hope. Your editor's daughter recently acquired a laptop
computer which, alas, appears to be destined to run a proprietary operating
system. After looking for an office suite for this machine, she quickly
came asking for help: which version should she install? In fact, one need
not search for long before encountering ads like those shown to the right:
there is, it seems, no shortage of sites offering versions of OpenOffice
and paying for ad placement on relevant searches.
One of those — openoffice.us.com — just happens to be run by the same folks
at Tightrope Interactive.
A quick search of the net will turn up complaints (example)
about unwanted toolbars and adware installed by redistributed versions of
OpenOffice, including Tightrope's version. This apparently happens often
enough that the Apache
OpenOffice project felt the need to put up a page on how
to safely download the software, saying:
When we at the Apache OpenOffice project receive reports like this
-- and we receive them a couple of times every week -- the first
thing I ask is, "Where did you download OpenOffice from?" In
today's case, when the user checked his browser's history he found
what I suspected, that it was not downloaded from
www.openoffice.org, but was a modified version, from another
website, that was also installing other applications on his system,
programs that in the industry are known as "adware", "spyware" or
"malware".
This problem is not restricted to Apache OpenOffice; a search for
LibreOffice will turn up a number of similar sites. Given that, one might
well wonder whether trademarks are actually living up to the hopes that
have been placed on them. Isn't this kind of abusive download site just
the sort of thing that trademarks were supposed to protect us from?
One answer to that question can be found on one of the LibreOffice download
sites, where it is noted that clicking on the "Download" button will start
with the "DomaIQ" installer. This bit of software is described in these
terms:
DomaIQ™ is an install manager which will manage the installation of
your selected software. Besides handling the installation of your
selected software, DomaIQ™ can make suggestions for additional free
software that you may be interested in. Supplemental software could
include toolbars, browser add-ons, game apps, and other types of
applications.
Herein lies the rub. The version of Apache OpenOffice or LibreOffice
offered by these sites is, most likely, entirely unmodified; they may well
be shipping the binary version offered by the project itself. But the
handy "installer" program that runs first will happily install a bunch of
unrelated software at the same time; by all accounts, the "suggestions" for
"additional free software" tend to be hard to notice — and hard to opt out
of. So users looking for an office suite end up installing rather more
software than they had intended, and that software can be of a rather
unfriendly nature. Once these users find themselves deluged with ads — or
worse — they tend to blame the original development project, which had
nothing to do with the problem.
The purveyors of this software are in complete compliance with the
licensing and trademark policies for the software they distribute; at
least, those that continue to exist for any period of time are. That
software is unmodified, links to the source are provided, and so on. What
they are doing is aggregating the software with the real payload in a way
that is quite similar to what Linux distributors do. Any attempt to use
trademark policies to restrict this type of aggregation would almost
certainly bite Linux distributors first.
Consider an example: a typical Linux distribution advertises the fact that
it includes an office suite; it also comes with an installer that can
install software that presents advertisements to the user (the music stores
incorporated into media players, for example, or Amazon search results from
Unity), phones home with hardware information (Fedora's Smolt) or exposes
the system to external compromise (Java browser plugins). It is hard to
imagine a trademark policy that could successfully block the abuses
described in this article while allowing Linux distributors to continue to
use the trademarked names. Free software projects are generally unwilling
to adopt trademark policies of such severity.
As a result, there is
little that the relevant projects can do; neither copyright nor trademark
law can offer much help in this situation. That is why these projects are
reduced to putting up pages trying to educate users about where the
software should actually be downloaded from.
The conclusion that one might draw is that trademarks are only partially
useful for the purpose of protecting users. They can be used as a weapon
against the distribution of overtly compromised versions of free software
programs, but they cannot guarantee that any given distribution is safe to
install. There is still no substitute, it seems, for taking the time to
ensure that one's software comes from a reliable source.
Comments (58 posted)
Page editor: Jonathan Corbet
Security
By Nathan Willis
February 6, 2013
At linux.conf.au
2013 in Canberra, Mozilla's François Marier presented a
talk on the Content
Security Policy (CSP), the browser-maker's proposed approach to
thwarting cross-site scripting attacks with a framework of
granular restrictions on what types of content a page can load.
We covered CSP in July 2009, just a
few months after development started. Since then, the idea has been
expanded, and, in November 2012, version 1.0 was declared a Candidate Recommendation by the
World Wide Web Consortium (W3C).
Cross-site scripting attacks, Marier explained, usually occur when
input and variables in a page are not properly escaped. An
unsanitized variable such as a user input field allows an attacker to
inject JavaScript or other malicious code that is loaded by a
visitor's browser. Even the templating systems used by modern content
management systems (CMS)—many of which auto-escape content—are
not foolproof. CSP offers an additional layer of protection, argued
Marier, because it is implemented as an HTTP header to be delivered
by the web server and not by the CMS. Thus, for an attacker to defeat
a CSP-equipped site, he or she would have to compromise the web
server, which is arguably a harder target to compromise than the CMS.
A CSP policy is declarative: a site or web application
specifies the locations from which it is willing to allow scripts and
other page content to load. The header declares one or more
src directives, each of which specifies a list of acceptable
URIs for a specific content type. For example, the most basic policy
default-src 'self';
permits only loading content from the same site—in this case
meaning matching the protocol scheme, host, and port number. The
specification includes nine
src directives:
default-src,
script-src,
object-src,
style-src,
img-src,
media-src,
frame-src,
font-src, and
connect-src. Each
directive can be set to
none, or to a set of space-separated
expressions, optionally featuring the
* wildcard. URI values are
matched according to a standard algorithm that compares the
scheme, host, and port. For example, the directive
img-src 'self' data;
from a site at www.foo.org would match both
www.foo.org and
data.foo.org. A site that uses external hosts for content
delivery or to serve ads would need to specify more complicated rules.
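For instance, a site that pulls images from a CDN and scripts from an
analytics service might send a header along these lines (the host names are
invented for illustration; at the time of the talk some browsers still
expected experimentally prefixed header names such as
X-Content-Security-Policy, but the standard name is shown here):

    Content-Security-Policy: default-src 'self'; img-src 'self' cdn.example.net; script-src 'self' analytics.example.com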
There is also a special reserved expression for allowing inline
content (such as inline scripts or CSS), which is somewhat editorially
named unsafe-inline as a reminder that permitting such inline
content is a risky prospect. The reason this warrants the
unsafe moniker being written
into the specification itself, said Marier, is that a browser has
no way to distinguish inline scripts that are written into the page at
the original server from any scripts which are injected into the page
content by an attacker.
The default-src directive allows
site owners to set a restrictive generic policy, which is then
overridden only by whitelisting
specific additional content types, he said. At his personal site,
fmarier.org, he has the default-src directive set to
none and only turns on additional directives for "minor
stuff."
Policy makers
At the moment, CSP is available and "works really
well" in Firefox and Chromium/Chrome, and is somewhat
functional in Safari 6 or greater. Nevertheless, he continued, one
does not need to jump directly into converting one's sites over to
full CSP, which can be tricky to get right on the first try. He
instead suggested a few steps to implement CSP progressively.
The first step is removing all inline scripts and styles from the
site's pages. Simply moving them to external files should not affect
page functionality at all, and it removes the need to worry about
unsafe-inline (although, it should be noted, external scripts
and stylesheets do mean longer load times). The next step is to remove all
javascript: URIs, which, of course, may entail some
rewriting. Then one can proceed to implementing a CSP policy. Marier
recommended starting with a "relaxed" and permissive policy, then
working one's way progressively toward a stricter policy.
For this, CSP provides a helpful report-uri directive.
Unlike the other directives, report-uri does not set policy;
it tells the browser to report a policy violation to the URI provided
as the value. The example Marier provided is:
report-uri http://example.com/report.cgi
which, he said, would allow one to log false-positive matches. It is
important to note, however, that when
report-uri is in place,
CSP does
not block the rule violations it catches, so it is
vital to remove it once testing is complete.
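One way to get the report-without-block behavior described above is the
Content-Security-Policy-Report-Only header variant defined alongside CSP 1.0;
a testing-phase header might look like this sketch (reusing the report URI
from the example above):

    Content-Security-Policy-Report-Only: default-src 'self'; report-uri http://example.com/report.cgi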
Marier also recommended that interested site administrators add
their CSP rules in the web server, not through their CMS or
application framework, specifically to provide the extra layer of
protection described above. It is also useful as a reminder that CSP
is a complement to standard cross-site scripting hygiene, and not a
replacement for input escaping. There are some resources out there
for site maintainers to get started with policy writing, he said, such
as CSPisAwesome.com, a tool for
generating valid policies.
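Following that advice, the header can be set in the web server configuration
rather than in application code; a minimal sketch, assuming Apache with
mod_headers enabled or nginx, and a placeholder policy:

    # Apache (mod_headers)
    Header set Content-Security-Policy "default-src 'self'"

    # nginx
    add_header Content-Security-Policy "default-src 'self'";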
For users who are keen to get the benefits of CSP but cannot wait
for their sites to get it rolling, he recommended installing a browser
extension that implements CSP on the client-side. There appears to be
just one at the moment: UserCSP
for Firefox. This extension allows users to write policies for the
various sites they visit, which Firefox then applies just as it would
a CSP header originating from the server. Obviously, the user needs
to be aware of the risks of "injecting" (so to speak) CSP into their
browser, since applying a user-crafted policy could break the
site's functionality. On the other hand, by putting the policy
decision in the user's hands, the user can find his or her own balance
between what breaks and what risks are left open—as is the case
with other client-side security extensions like NoScript.
HTTPS, almost everywhere
As a "bonus header," Marier also discussed the HTTP Strict
Transport Security (HSTS) policy framework with the time remaining in
his session. HSTS, like CSP, is an HTTP header mechanism. It is
designed to protect against SSL downgrade attacks, in which an HTTPS
connection is stripped down to HTTP, presumably without attracting the
user's attention. HSTS allows the server to declare that it will
only allow browsers to connect over HTTPS. The header does not fix a
permanent condition; it includes a max-age directive giving a
time in seconds for which the browser should cache the HSTS setting.
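Such a header might look like the following sketch (max-age is given in
seconds, roughly one year here, and the optional includeSubDomains token
extends the policy to subdomains):

    Strict-Transport-Security: max-age=31536000; includeSubDomains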
Firefox has supported HSTS since
Firefox 4, but as a question from the audience revealed, it comes with
one hangup: the browser must successfully connect to the server over
HTTPS the first time in order to get the HSTS header.
Mozilla sought to alleviate the risk of attacks that exploit this by
shipping Firefox 17 pre-loaded with a list of verified banking web
sites that the browser should access over HTTPS the first time.
HSTS is supported in Chromium/Chrome in addition to Firefox, as
well as in Opera. Mozilla cannot do much to implement security policy for other
browsers—particularly the proprietary ones—so when asked
what to tell users of other browsers, Marier's response was "It works
in these browsers. If it doesn't work in your favorite browser
... switch browsers."
That is probably sound advice, which a lot of free software
security mavens would echo. But it is interesting to see that,
with both CSP and HSTS, Mozilla is pushing forward on web security from
the server side as well as within the browser itself.
Comments (4 posted)
Brief items
I continue to be amazed that elected officials can read constant articles
about hacking, and yet readily accept the assurances that there will be no
problems with internet voting. If the SBE [State Board of Elections] is so
good at stopping attacks,
perhaps they should supplement their paltry budget by providing security
for banks, Federal government agencies like DOD [Department of Defense],
and the nation’s leading
newspapers!
--
Jeremy Epstein
The Internet's design isn't fixed by natural laws. Its history is a
fortuitous accident: an initial lack of commercial interests, governmental
benign neglect, military requirements for survivability and resilience, and
the natural inclination of computer engineers to build open systems that
work simply and easily. This mix of forces that created yesterday's
Internet will not be trusted to create tomorrow's. Battles over the future
of the Internet are going on right now: in legislatures around the world,
in international organizations like the International Telecommunications
Union and the World Trade Organization, and in Internet standards
bodies. The Internet is what we make it, and is constantly being recreated
by organizations, companies, and countries with specific interests and
agendas. Either we fight for a seat at the table, or the future of the
Internet becomes something that is done to us.
--
Bruce
Schneier
Comments (28 posted)
Ars Technica
reports on a weakness found in various open source (and possibly proprietary) SSL/TLS implementations (e.g. OpenSSL, NSS). Exploiting it is fairly difficult, but it allows attackers to decrypt the ciphertext. "
The attacks start by capturing the ciphertext as it travels over the Internet. Using a long-discovered weakness in TLS's CBC, or cipher block chaining, mode, attackers replace the last several blocks with chosen blocks and observe the amount of time it takes for the server to respond. TLS messages that contain the correct padding will take less time to process. A mechanism in TLS causes the transaction to fail each time the application encounters a TLS message that contains tampered data, requiring attackers to repeatedly send malformed messages in a new session following each previous failure. By sending large numbers of TLS messages and statistically sampling the server response time for each one, the scientists were able to eventually correctly guess the contents of the ciphertext."
Comments (5 posted)
Matthew Garrett
calls out Google for not allowing users to install their own keys on Chromebook systems. "
Some people don't like Secure Boot because they don't trust Microsoft. If you trust Google more, then a Chromebook is a reasonable choice. But some people don't like Secure Boot because they see it as an attack on user freedom, and those people should be willing to criticise Google's stance. Unlike Microsoft, Chromebooks force the user to choose between security and freedom. Nobody should be forced to make that choice."
Comments (70 posted)
New vulnerabilities
abrt and libreport: two privilege escalation flaws
| Package(s): | abrt and libreport |
| CVE #(s): | CVE-2012-5659, CVE-2012-5660 |
| Created: | February 1, 2013 |
| Updated: | February 10, 2013 |
| Description: |
From the Red Hat advisory:
It was found that the
/usr/libexec/abrt-action-install-debuginfo-to-abrt-cache tool did not
sufficiently sanitize its environment variables. This could lead to Python
modules being loaded and run from non-standard directories (such as /tmp/).
A local attacker could use this flaw to escalate their privileges to that
of the abrt user. (CVE-2012-5659)
A race condition was found in the way ABRT handled the directories used to
store information about crashes. A local attacker with the privileges of
the abrt user could use this flaw to perform a symbolic link attack,
possibly allowing them to escalate their privileges to root.
(CVE-2012-5660) |
| Alerts: |
|
Comments (none posted)
axis: incorrect certificate validation
| Package(s): | axis |
| CVE #(s): | CVE-2012-5784 |
| Created: | February 1, 2013 |
| Updated: | March 26, 2013 |
| Description: |
From the Fedora advisory:
This update fixes a security vulnerability that caused axis not to verify that the server hostname
matches a domain name in the subject's Common Name (CN) or subjectAltName field of the X.509
certificate, which allowed man-in-the-middle attackers to spoof SSL servers via an arbitrary valid
certificate (CVE-2012-5784). |
| Alerts: |
|
Comments (none posted)
chromium: multiple vulnerabilities
| Package(s): | chromium |
| CVE #(s): | CVE-2012-5145, CVE-2012-5146, CVE-2012-5147, CVE-2012-5148, CVE-2012-5149, CVE-2012-5150, CVE-2012-5152, CVE-2012-5153, CVE-2012-5154, CVE-2013-0830, CVE-2013-0831, CVE-2013-0832, CVE-2013-0833, CVE-2013-0834, CVE-2013-0835, CVE-2013-0836, CVE-2013-0837, CVE-2013-0838 |
| Created: | February 4, 2013 |
| Updated: | February 6, 2013 |
| Description: |
From the CVE entries:
Use-after-free vulnerability in Google Chrome before 24.0.1312.52 allows remote attackers to cause a denial of service or possibly have unspecified other impact via vectors related to SVG layout. (CVE-2012-5145)
Google Chrome before 24.0.1312.52 allows remote attackers to bypass the Same Origin Policy via a malformed URL. (CVE-2012-5146)
Use-after-free vulnerability in Google Chrome before 24.0.1312.52 allows remote attackers to cause a denial of service or possibly have unspecified other impact via vectors related to DOM handling. (CVE-2012-5147)
The hyphenation functionality in Google Chrome before 24.0.1312.52 does not properly validate file names, which has unspecified impact and attack vectors. (CVE-2012-5148)
Integer overflow in the audio IPC layer in Google Chrome before 24.0.1312.52 allows remote attackers to cause a denial of service or possibly have unspecified other impact via unknown vectors. (CVE-2012-5149)
Use-after-free vulnerability in Google Chrome before 24.0.1312.52 allows remote attackers to cause a denial of service or possibly have unspecified other impact via vectors involving seek operations on video data. (CVE-2012-5150)
Google Chrome before 24.0.1312.52 allows remote attackers to cause a denial of service (out-of-bounds read) via vectors involving seek operations on video data. (CVE-2012-5152)
Google V8 before 3.14.5.3, as used in Google Chrome before 24.0.1312.52, allows remote attackers to cause a denial of service or possibly have unspecified other impact via crafted JavaScript code that triggers an out-of-bounds access to stack memory. (CVE-2012-5153)
Integer overflow in Google Chrome before 24.0.1312.52 on Windows allows attackers to cause a denial of service or possibly have unspecified other impact via vectors related to allocation of shared memory. (CVE-2012-5154)
The IPC layer in Google Chrome before 24.0.1312.52 on Windows omits a NUL character required for termination of an unspecified data structure, which has unknown impact and attack vectors. (CVE-2013-0830)
Directory traversal vulnerability in Google Chrome before 24.0.1312.52 allows remote attackers to have an unspecified impact by leveraging access to an extension process. (CVE-2013-0831)
Use-after-free vulnerability in Google Chrome before 24.0.1312.52 allows remote attackers to cause a denial of service or possibly have unspecified other impact via vectors related to printing. (CVE-2013-0832)
Google Chrome before 24.0.1312.52 allows remote attackers to cause a denial of service (out-of-bounds read) via vectors related to printing. (CVE-2013-0833)
Google Chrome before 24.0.1312.52 allows remote attackers to cause a denial of service (out-of-bounds read) via vectors involving glyphs. (CVE-2013-0834)
Unspecified vulnerability in the Geolocation implementation in Google Chrome before 24.0.1312.52 allows remote attackers to cause a denial of service (application crash) via unknown vectors. (CVE-2013-0835)
Google V8 before 3.14.5.3, as used in Google Chrome before 24.0.1312.52, does not properly implement garbage collection, which allows remote attackers to cause a denial of service (application crash) or possibly have unspecified other impact via crafted JavaScript code. (CVE-2013-0836)
Google Chrome before 24.0.1312.52 allows remote attackers to cause a denial of service or possibly have unspecified other impact via vectors related to the handling of extension tabs. (CVE-2013-0837)
Google Chrome before 24.0.1312.52 on Linux uses weak permissions for shared memory segments, which has unspecified impact and attack vectors. (CVE-2013-0838)
|
| Alerts: |
|
Comments (none posted)
coreutils: multiple vulnerabilities
| Package(s): | coreutils |
| CVE #(s): | CVE-2013-0221, CVE-2013-0222, CVE-2013-0223 |
| Created: | February 1, 2013 |
| Updated: | March 13, 2013 |
| Description: |
From the Red Hat bugzilla entries [1, 2, 3]:
CVE-2013-0221: It was reported that the sort command suffered from a segfault when processing input streams that contained extremely long strings when used with the -d and -M switches. This flaw is due to the inclusion of the coreutils-i18n.patch.
CVE-2013-0222: It was reported that the uniq command suffered from a segfault when processing input streams that contained extremely long strings. This flaw is due to the inclusion of the coreutils-i18n.patch.
CVE-2013-0223: It was reported that the join command suffered from a segfault when processing input streams that contained extremely long strings when used with the -i switch. This flaw is due to the inclusion of the coreutils-i18n.patch.
|
| Alerts: |
|
Comments (none posted)
couchdb: multiple vulnerabilities
| Package(s): | couchdb |
| CVE #(s): | CVE-2012-5649, CVE-2012-5650 |
| Created: | February 6, 2013 |
| Updated: | February 8, 2013 |
| Description: |
From the Red Hat bugzilla entries [1, 2]:
CVE-2012-5649: A security flaw was found in the way Apache CouchDB, a distributed, fault-tolerant and schema-free document-oriented database accessible via a RESTful HTTP/JSON API, processed certain JSON callback. A remote attacker could provide a specially-crafted JSON callback that, when processed could lead to arbitrary JSON code execution via Adobe Flash.
(Couchdb advisory)
CVE-2012-5650: A DOM based cross-site scripting (XSS) flaw was found in the way browser-based test suite of Apache CouchDB, a distributed, fault-tolerant and schema-free document-oriented database accessible via a RESTful HTTP/JSON API, processed certain query parameters. A remote attacker could provide a specially-crafted web page that, when accessed could lead to arbitrary web script or HTML execution in the context of a CouchDB user session. (Couchdb advisory). |
| Alerts: |
|
Comments (none posted)
ettercap: code execution
| Package(s): | ettercap |
| CVE #(s): | CVE-2013-0722 |
| Created: | February 1, 2013 |
| Updated: | February 6, 2013 |
| Description: |
From the Red Hat bugzilla entry:
A stack-based buffer overflow was reported in Ettercap <= 0.7.5.1. A boundary error within the scan_load_hosts() function (in src/ec_scan.c), when parsing entries from a hosts list, could be exploited to cause a stack-based buffer overflow via an overly long entry. In order to exploit this, a user must be tricked into loading a malicious host file. |
| Alerts: |
|
Comments (none posted)
freeipa: multiple vulnerabilities
| Package(s): | freeipa |
| CVE #(s): | CVE-2012-4546, CVE-2013-0199 |
| Created: | February 4, 2013 |
| Updated: | March 11, 2013 |
| Description: |
From the Red Hat bugzilla [1], [2]:
[1] FreeIPA 3.0 introduced a Cross-Realm Kerberos trusts with Active Directory, a feature that allows IPA administrators to create a Kerberos trust with an AD. This allows IPA users to be able to access resources in AD trusted domains and vice versa.
When the Kerberos trust is created, an outgoing and incoming keys are stored in the IPA LDAP backend (in ipaNTTrustAuthIncoming and ipaNTTrustAuthOutgoing attributes). However, the IPA LDAP ACIs allow anonymous read acess to these attributes which could allow an unprivileged and unauthenticated user to read the keys. With these keys, an attacker could craft an invented Kerberos ticket with an invented PAC, encrypt the PAC with the retrieved key, and impersonate any AD user in the IPA domain or impersonate any IPA user in the AD domain. (CVE-2013-0199)
[2] It was found that the current default configuration of IPA servers did not publish correct CRLs (Certificate Revocation Lists). The default configuration specifies that every replica is to generate its own CRL, however this can result in inconsistencies in the CRL contents provided to clients from different Identity Management replicas. More specifically, if a certificate is revoked on one Identity Management replica, it will not show up on another Identity Management replica. (CVE-2012-4546)
|
| Alerts: |
|
Comments (1 posted)
jakarta-commons-httpclient: incorrect certificate validation
| Package(s): | jakarta-commons-httpclient |
| CVE #(s): | CVE-2012-5783 |
| Created: | February 1, 2013 |
| Updated: | February 27, 2013 |
| Description: |
From the Fedora advisory:
This update fixes a security vulnerability that caused jakarta-commons-httpclient not to verify
that the server hostname matches a domain name in the subject's Common Name (CN) or subjectAltName
field of the X.509 certificate, which allowed man-in-the-middle attackers to spoof SSL servers via
an arbitrary valid certificate (CVE-2012-5783). |
| Alerts: |
|
Comments (none posted)
java: multiple unspecified vulnerabilities
| Package(s): | java |
| CVE #(s): | CVE-2013-0431, CVE-2013-0437, CVE-2013-0444, CVE-2013-0448, CVE-2013-0449, CVE-2013-1479, CVE-2013-1489 |
| Created: | February 5, 2013 |
| Updated: | March 12, 2013 |
| Description: |
From the CVE entries:
Unspecified vulnerability in the Java Runtime Environment (JRE) component in Oracle Java SE 7 through Update 11 allows user-assisted remote attackers to bypass the Java security sandbox via unspecified vectors related to JMX, aka "Issue 52," a different vulnerability than CVE-2013-1490. (CVE-2013-0431)
Unspecified vulnerability in the Java Runtime Environment (JRE) component in Oracle Java SE 7 through Update 11 and JavaFX 2.2.4 and earlier allows remote attackers to affect confidentiality, integrity, and availability via unknown vectors related to 2D. (CVE-2013-0437)
Unspecified vulnerability in the Java Runtime Environment (JRE) component in Oracle Java SE 7 through Update 11 allows remote attackers to affect confidentiality, integrity, and availability via unknown vectors related to Beans. (CVE-2013-0444)
Unspecified vulnerability in the Java Runtime Environment (JRE) component in Oracle Java SE 7 through Update 11 allows remote attackers to affect integrity via unknown vectors related to Libraries. (CVE-2013-0448)
Unspecified vulnerability in the Java Runtime Environment (JRE) component in Oracle Java SE 7 through Update 11 allows remote attackers to affect confidentiality via unknown vectors related to Deployment. (CVE-2013-0449)
Unspecified vulnerability in the Java Runtime Environment (JRE) component in Oracle Java SE 7 through Update 11, 6 through Update 38, and JavaFX 2.2.4 and earlier allows remote attackers to affect confidentiality, integrity, and availability via unknown vectors. (CVE-2013-1479)
Unspecified vulnerability in the Java Runtime Environment (JRE) component in Oracle Java SE 7 Update 10 and Update 11, when running on Windows using Internet Explorer, Firefox, Opera, and Google Chrome, allows remote attackers to bypass the "Very High" security level of the Java Control Panel and execute unsigned Java code without prompting the user via unknown vectors, aka "Issue 53" and the "Java Security Slider" vulnerability. (CVE-2013-1489)
|
| Alerts: |
|
Comments (none posted)
java: multiple unspecified vulnerabilities
| Package(s): | java |
| CVE #(s): | CVE-2012-1541, CVE-2012-3213, CVE-2012-3342, CVE-2013-0351, CVE-2013-0409, CVE-2013-0419, CVE-2013-0423, CVE-2013-0424, CVE-2013-0425, CVE-2013-0426, CVE-2013-0427, CVE-2013-0428, CVE-2013-0429, CVE-2013-0430, CVE-2013-0432, CVE-2013-0433, CVE-2013-0434, CVE-2013-0435, CVE-2013-0438, CVE-2013-0440, CVE-2013-0441, CVE-2013-0442, CVE-2013-0443, CVE-2013-0445, CVE-2013-0446, CVE-2013-0450, CVE-2013-1473, CVE-2013-1475, CVE-2013-1476, CVE-2013-1478, CVE-2013-1480, CVE-2013-1481 |
| Created: | February 5, 2013 |
| Updated: | March 20, 2013 |
| Description: |
From the CVE entries:
Unspecified vulnerability in the Java Runtime Environment (JRE) component in Oracle Java SE 7 through Update 11 and 6 through Update 38 allows remote attackers to affect confidentiality, integrity, and availability via unknown vectors related to Deployment, a different vulnerability than other CVEs listed in the February 2013 CPU. (CVE-2012-1541)
Unspecified vulnerability in the Java Runtime Environment (JRE) component in Oracle Java SE 7 through Update 11 and 6 through Update 38 allows remote attackers to affect confidentiality, integrity, and availability via unknown vectors related to Scripting. (CVE-2012-3213)
Unspecified vulnerability in the Java Runtime Environment (JRE) component in Oracle Java SE 7 through Update 11 and 6 through Update 38 allows remote attackers to affect confidentiality, integrity, and availability via unknown vectors related to Deployment, a different vulnerability than other CVEs listed in the February 2013 CPU. (CVE-2012-3342)
Unspecified vulnerability in the Java Runtime Environment (JRE) component in Oracle Java SE 7 through Update 11 and 6 through Update 38 allows remote attackers to affect confidentiality, integrity, and availability via unknown vectors related to Deployment, a different vulnerability than other CVEs listed in the February 2013 CPU. (CVE-2013-0351)
Unspecified vulnerability in the Java Runtime Environment (JRE) component in Oracle Java SE 7 through Update 11, 6 through Update 38, and 5.0 through Update 38 allows remote attackers to affect confidentiality via vectors related to JMX. (CVE-2013-0409)
Unspecified vulnerability in the Java Runtime Environment (JRE) component in Oracle Java SE 7 through Update 11 and 6 through Update 38 allows remote attackers to affect confidentiality, integrity, and availability via unknown vectors related to Deployment, a different vulnerability than other CVEs listed in the February 2013 CPU. (CVE-2013-0419)
Unspecified vulnerability in the Java Runtime Environment (JRE) component in Oracle Java SE 7 through Update 11 and 6 through Update 38 allows remote attackers to affect confidentiality, integrity, and availability via unknown vectors related to Deployment, a different vulnerability than other CVEs listed in the February 2013 CPU. (CVE-2013-0423)
Unspecified vulnerability in the Java Runtime Environment (JRE) component in Oracle Java SE 7 through Update 11, 6 through Update 38, 5.0 through Update 38, and 1.4.2_40 and earlier allows remote attackers to affect integrity via vectors related to RMI. (CVE-2013-0424)
Unspecified vulnerability in the Java Runtime Environment (JRE) component in Oracle Java SE 7 through Update 11, 6 through Update 38, 5.0 through Update 38, and 1.4.2_40 and earlier allows remote attackers to affect confidentiality, integrity, and availability via unknown vectors related to Libraries, a different vulnerability than CVE-2013-0428 and CVE-2013-0426. (CVE-2013-0425)
Unspecified vulnerability in the Java Runtime Environment (JRE) component in Oracle Java SE 7 through Update 11, 6 through Update 38, 5.0 through Update 38, and 1.4.2_40 and earlier allows remote attackers to affect confidentiality, integrity, and availability via unknown vectors related to Libraries, a different vulnerability than CVE-2013-0425 and CVE-2013-0428. (CVE-2013-0426)
Unspecified vulnerability in the Java Runtime Environment (JRE) component in Oracle Java SE 7 through Update 11, 6 through Update 38, and 5.0 through Update 38 allows remote attackers to affect integrity via unknown vectors related to Libraries. (CVE-2013-0427)
Unspecified vulnerability in the Java Runtime Environment (JRE) component in Oracle Java SE 7 through Update 11, 6 through Update 38, 5.0 through Update 38, and 1.4.2_40 and earlier allows remote attackers to affect confidentiality, integrity, and availability via unknown vectors related to Libraries, a different vulnerability than CVE-2013-0425 and CVE-2013-0426. (CVE-2013-0428)
Unspecified vulnerability in the Java Runtime Environment (JRE) component in Oracle Java SE 7 through Update 11, 6 through Update 38, and 5.0 through Update 38 allows remote attackers to affect confidentiality, integrity, and availability via vectors related to CORBA. (CVE-2013-0429)
Unspecified vulnerability in the Java Runtime Environment (JRE) component in Oracle Java SE 7 through Update 11 and 6 through Update 38, allows local users to affect confidentiality, integrity, and availability via unknown vectors related to the installation process of the client. (CVE-2013-0430)
Unspecified vulnerability in the Java Runtime Environment (JRE) component in Oracle Java SE 7 through Update 11, 6 through Update 38, 5.0 through Update 38, and 1.4.2_40 and earlier allows remote attackers to affect confidentiality and integrity via vectors related to AWT. (CVE-2013-0432)
Unspecified vulnerability in the Java Runtime Environment (JRE) component in Oracle Java SE 7 through Update 11, 6 through Update 38, and 5.0 through Update 38 allows remote attackers to affect integrity via unknown vectors related to Networking. (CVE-2013-0433)
Unspecified vulnerability in the Java Runtime Environment (JRE) component in Oracle Java SE 7 through Update 11, 6 through Update 38, 5.0 through Update 38, and 1.4.2_40 and earlier allows remote attackers to affect confidentiality via vectors related to JAXP. (CVE-2013-0434)
Unspecified vulnerability in the Java Runtime Environment (JRE) component in Oracle Java SE 7 through Update 11 and 6 through Update 38 allows remote attackers to affect confidentiality via vectors related to JAX-WS. (CVE-2013-0435)
Unspecified vulnerability in the Java Runtime Environment (JRE) component in Oracle Java SE 7 through Update 11 and 6 through Update 38 allows remote attackers to affect confidentiality via unknown vectors related to Deployment. (CVE-2013-0438)
Unspecified vulnerability in the Java Runtime Environment (JRE) component in Oracle Java SE 7 through Update 11, 6 through Update 38, 5.0 through Update 38, and 1.4.2_40 and earlier allows remote attackers to affect availability via vectors related to JSSE. (CVE-2013-0440)
Unspecified vulnerability in the Java Runtime Environment (JRE) component in Oracle Java SE 7 through Update 11, 6 through Update 38, 5.0 through Update 38, and 1.4.2_40 and earlier allows remote attackers to affect confidentiality, integrity, and availability via vectors related to CORBA, a different vulnerability than CVE-2013-1476 and CVE-2013-1475. (CVE-2013-0441)
Unspecified vulnerability in the Java Runtime Environment (JRE) component in Oracle Java SE 7 through Update 11, 6 through Update 38, 5.0 through Update 38, and 1.4.2_40 and earlier allows remote attackers to affect confidentiality, integrity, and availability via vectors related to AWT. (CVE-2013-0442)
Unspecified vulnerability in the Java Runtime Environment (JRE) component in Oracle Java SE 7 through Update 11, 6 through Update 38, 5.0 through Update 38, and 1.4.2_40 and earlier allows remote attackers to affect confidentiality and integrity via vectors related to JSSE. (CVE-2013-0443)
Unspecified vulnerability in the Java Runtime Environment (JRE) component in Oracle Java SE 7 through Update 11, 6 through Update 38, and 5.0 through Update 38 allows remote attackers to affect confidentiality, integrity, and availability via vectors related to AWT. (CVE-2013-0445)
Unspecified vulnerability in the Java Runtime Environment (JRE) component in Oracle Java SE 7 through Update 11 and 6 through Update 38 allows remote attackers to affect confidentiality, integrity, and availability via unknown vectors related to Deployment, a different vulnerability than other CVEs listed in the February 2013 CPU. (CVE-2013-0446)
Unspecified vulnerability in the Java Runtime Environment (JRE) component in Oracle Java SE 7 through Update 11, 6 through Update 38, and 5.0 through Update 38 allows remote attackers to affect confidentiality, integrity, and availability via vectors related to JMX. (CVE-2013-0450)
Unspecified vulnerability in the Java Runtime Environment (JRE) component in Oracle Java SE 7 through Update 11 and 6 through Update 38 allows remote attackers to affect integrity via unknown vectors related to Deployment. (CVE-2013-1473)
Unspecified vulnerability in the Java Runtime Environment (JRE) component in Oracle Java SE 7 through Update 11, 6 through Update 38, 5.0 through Update 38, and 1.4.2_40 and earlier allows remote attackers to affect confidentiality, integrity, and availability via vectors related to CORBA. (CVE-2013-1475)
Unspecified vulnerability in the Java Runtime Environment (JRE) component in Oracle Java SE 7 through Update 11, 6 through Update 38, 5.0 through Update 38, and 1.4.2_40 and earlier allows remote attackers to affect confidentiality, integrity, and availability via vectors related to CORBA, a different vulnerability than CVE-2013-0441 and CVE-2013-1475. (CVE-2013-1476)
Unspecified vulnerability in the Java Runtime Environment (JRE) component in Oracle Java SE 7 through Update 11, 6 through Update 38, 5.0 through Update 38, and 1.4.2_40 and earlier allows remote attackers to affect confidentiality, integrity, and availability via unknown vectors related to 2D. (CVE-2013-1478)
Unspecified vulnerability in the Java Runtime Environment (JRE) component in Oracle Java SE 7 through Update 11, 6 through Update 38, 5.0 through Update 38, and 1.4.2_40 and earlier allows remote attackers to affect confidentiality, integrity, and availability via vectors related to AWT. (CVE-2013-1480)
Unspecified vulnerability in the Java Runtime Environment (JRE) component in Oracle Java SE 6 through Update 38, 5.0 through Update 38, and 1.4.2_40 and earlier allows remote attackers to affect confidentiality, integrity, and availability via unknown vectors related to Sound. (CVE-2013-1481)
See the Oracle Java SE Critical
Patch Update Advisory for additional details.
Comments (none posted)
keystone: denial of service
Package(s): keystone
CVE #(s): CVE-2013-0247
Created: February 6, 2013
Updated: February 18, 2013
Description:
From the Ubuntu advisory:
Dan Prince discovered that Keystone did not properly perform input
validation when handling certain error conditions. An unauthenticated user
could exploit this to cause a denial of service in Keystone API servers via
disk space exhaustion.
Comments (none posted)
libupnp: multiple vulnerabilities
Package(s): libupnp
CVE #(s): CVE-2012-5958, CVE-2012-5959, CVE-2012-5960, CVE-2012-5961,
CVE-2012-5962, CVE-2012-5963, CVE-2012-5964, CVE-2012-5965
Created: February 4, 2013
Updated: February 21, 2013
Description:
From the CVE entries:
Stack-based buffer overflow in the unique_service_name function in ssdp/ssdp_server.c in the SSDP parser in the portable SDK for UPnP Devices (aka libupnp, formerly the Intel SDK for UPnP devices) before 1.6.18 allows remote attackers to execute arbitrary code via a UDP packet with a crafted string that is not properly handled after a certain pointer subtraction. (CVE-2012-5958)
Stack-based buffer overflow in the unique_service_name function in ssdp/ssdp_server.c in the SSDP parser in the portable SDK for UPnP Devices (aka libupnp, formerly the Intel SDK for UPnP devices) before 1.6.18 allows remote attackers to execute arbitrary code via a long UDN (aka uuid) field within a string that contains a :: (colon colon) in a UDP packet. (CVE-2012-5959)
Stack-based buffer overflow in the unique_service_name function in ssdp/ssdp_server.c in the SSDP parser in the portable SDK for UPnP Devices (aka libupnp, formerly the Intel SDK for UPnP devices) before 1.6.18 allows remote attackers to execute arbitrary code via a long UDN (aka upnp:rootdevice) field in a UDP packet. (CVE-2012-5960)
Stack-based buffer overflow in the unique_service_name function in ssdp/ssdp_server.c in the SSDP parser in the portable SDK for UPnP Devices (aka libupnp, formerly the Intel SDK for UPnP devices) 1.3.1 allows remote attackers to execute arbitrary code via a long UDN (aka device) field in a UDP packet. (CVE-2012-5961)
Stack-based buffer overflow in the unique_service_name function in ssdp/ssdp_server.c in the SSDP parser in the portable SDK for UPnP Devices (aka libupnp, formerly the Intel SDK for UPnP devices) 1.3.1 allows remote attackers to execute arbitrary code via a long DeviceType (aka urn) field in a UDP packet. (CVE-2012-5962)
Stack-based buffer overflow in the unique_service_name function in ssdp/ssdp_server.c in the SSDP parser in the portable SDK for UPnP Devices (aka libupnp, formerly the Intel SDK for UPnP devices) 1.3.1 allows remote attackers to execute arbitrary code via a long UDN (aka uuid) field within a string that lacks a :: (colon colon) in a UDP packet. (CVE-2012-5963)
Stack-based buffer overflow in the unique_service_name function in ssdp/ssdp_server.c in the SSDP parser in the portable SDK for UPnP Devices (aka libupnp, formerly the Intel SDK for UPnP devices) 1.3.1 allows remote attackers to execute arbitrary code via a long ServiceType (aka urn service) field in a UDP packet. (CVE-2012-5964)
Stack-based buffer overflow in the unique_service_name function in ssdp/ssdp_server.c in the SSDP parser in the portable SDK for UPnP Devices (aka libupnp, formerly the Intel SDK for UPnP devices) 1.3.1 allows remote attackers to execute arbitrary code via a long DeviceType (aka urn device) field in a UDP packet. (CVE-2012-5965)
Comments (none posted)
libwebp: denial of service
Package(s): libwebp
CVE #(s): CVE-2012-5127
Created: February 4, 2013
Updated: February 6, 2013
Description:
From the CVE entry:
Integer overflow in Google Chrome before 23.0.1271.64 allows remote attackers to cause a denial of service (out-of-bounds read) or possibly have unspecified other impact via a crafted WebP image.
Comments (none posted)
ndjbdns: ghost domain attack
Package(s): ndjbdns
CVE #(s):
Created: February 1, 2013
Updated: February 6, 2013
Description:
From the NVD entry:
The resolver in dnscache in Daniel J. Bernstein djbdns 1.05 overwrites cached server names and TTL values in NS records during the processing of a response to an A record query, which allows remote attackers to trigger continued resolvability of revoked domain names via a "ghost domain names" attack.
Comments (none posted)
rhncfg: information disclosure
Package(s): rhncfg
CVE #(s): CVE-2012-2679
Created: February 4, 2013
Updated: February 6, 2013
Description:
From the Red Hat bugzilla:
It was discovered that Red Hat Network Configuration Client set insecure (0644) permissions on the /var/log/rhncfg-actions file used to store (besides terminal) the output of different RHN Client actions (diff, verify etc.). A local attacker could use this flaw to obtain sensitive information, if the rhncfg-client diff action has been used to query differences between the (normally for unprivileged user not readable) config files stored by RHN and those, deployed on the system.
Comments (none posted)
samba: multiple vulnerabilities in SWAT
Package(s): samba
CVE #(s): CVE-2013-0213, CVE-2013-0214
Created: February 4, 2013
Updated: March 25, 2013
Description:
From the
Samba 4.0.2 announcement:
CVE-2013-0213:
All current released versions of Samba are vulnerable to clickjacking in the
Samba Web Administration Tool (SWAT). When the SWAT pages are integrated into
a malicious web page via a frame or iframe and then overlaid by other content,
an attacker could trick an administrator to potentially change Samba settings.
In order to be vulnerable, SWAT must have been installed and enabled
either as a standalone server launched from inetd or xinetd, or as a
CGI plugin to Apache. If SWAT has not been installed or enabled (which
is the default install state for Samba) this advisory can be ignored.
CVE-2013-0214:
All current released versions of Samba are vulnerable to a cross-site
request forgery in the Samba Web Administration Tool (SWAT). By guessing a
user's password and then tricking a user who is authenticated with SWAT into
clicking a manipulated URL on a different web page, it is possible to manipulate
SWAT.
In order to be vulnerable, the attacker needs to know the victim's password.
Additionally SWAT must have been installed and enabled either as a standalone
server launched from inetd or xinetd, or as a CGI plugin to Apache. If SWAT has
not been installed or enabled (which is the default install state for Samba)
this advisory can be ignored.
Comments (3 posted)
squid-cgi: denial of service
Package(s): squid-cgi
CVE #(s): CVE-2013-0189
Created: January 31, 2013
Updated: February 6, 2013
Description:
From the Ubuntu advisory:
It was discovered that the patch for CVE-2012-5643 was incorrect. A
remote attacker could exploit this flaw to perform a denial of service
attack. (CVE-2013-0189)
Comments (none posted)
tinymce-spellchecker: code execution
Package(s): tinymce-spellchecker
CVE #(s): CVE-2012-6112
Created: February 4, 2013
Updated: February 6, 2013
Description:
From the Red Hat bugzilla:
A security flaw was found in the way Google spellchecker of TinyMCE spellchecker plugin sanitized content of $lang and $str arguments from presence of control characters when checking for matches. A remote attacker could provide a specially-crafted string, to be checked by the TinyMCE spellchecker plugin that, when processed, could lead to arbitrary code execution with the privileges of the user running the TinyMCE spellchecker plugin.
Comments (none posted)
v8: multiple vulnerabilities
Package(s): v8
CVE #(s):
Created: February 5, 2013
Updated: February 6, 2013
Description:
The Javascript engine V8 3.16.4.0 fixes lots of bugs and security issues.
See this SUSE bug report for details.
Comments (none posted)
virtualbox: unspecified vulnerability
Package(s): virtualbox
CVE #(s): CVE-2013-0420
Created: February 4, 2013
Updated: February 6, 2013
Description:
From the CVE entry:
Unspecified vulnerability in the VirtualBox component in Oracle Virtualization 4.0, 4.1, and 4.2 allows local users to affect integrity and availability via unknown vectors related to Core.
Comments (none posted)
xen: denial of service
Package(s): xen
CVE #(s): CVE-2013-0151, CVE-2013-0152
Created: February 4, 2013
Updated: February 6, 2013
Description:
From the Red Hat bugzilla:
CVE-2013-0151: nested virtualization on 32-bit exposes host crash
When performing nested virtualisation Xen would incorrectly map guest
pages for extended periods using an interface which is only intended
for transient mappings. In some configurations there are a limited
number of slots available for these transient mappings and exhausting
them leads to a host crash and therefore a Denial of Service attack.
A malicious guest administrator can, by enabling nested virtualisation
from within the guest, trigger the issue.
CVE-2013-0152: nested HVM exposes host to being driven out of memory by guest
Guests are currently permitted to enable nested virtualization on
themselves. Missing error handling cleanup in the handling code makes
it possible for a guest, particularly a multi-vCPU one, to repeatedly
invoke this operation, thus causing a leak of - over time - unbounded
amounts of memory.
A malicious domain can mount a denial of service attack affecting the
whole system.
Comments (none posted)
xorg-x11-drv-qxl: denial of service
Package(s): xorg-x11-drv-qxl
CVE #(s): CVE-2013-0241
Created: February 1, 2013
Updated: February 7, 2013
Description:
From the Red Hat advisory:
A flaw was found in the way the host's qemu-kvm qxl driver and the guest's
X.Org qxl driver interacted when a SPICE connection terminated. A user able
to initiate a SPICE connection to a guest could use this flaw to make the
guest temporarily unavailable or, potentially (if the sysctl
kernel.softlockup_panic variable was set to "1" in the guest), crash the
guest. (CVE-2013-0241)
Comments (none posted)
zim: multiple vulnerabilities
Package(s): Zim
CVE #(s):
Created: February 5, 2013
Updated: February 6, 2013
Description:
Zim 0.59 fixes multiple bugs.
Comments (none posted)
Page editor: Jake Edge
Kernel development
Brief items
The current development kernel is 3.8-rc6,
released on February 1. "
I have a
CleverPlan(tm) to make *sure* that rc7 will be better and much
smaller. That plan largely depends on me being unreachable for the next
week due to the fact that there is no internet under water." Once he
returns from diving, Linus plans to be very aggressive about accepting only
patches that "
fix major security issues, big user-reported
regressions, or nasty oopses".
The code name for the release has
changed;
it is now "Unicycling Gorilla".
Stable updates:
3.0.62, 3.4.29, and 3.7.6 were released on February 3;
3.2.38 was released on February 6.
Comments (1 posted)
Paraphrasing the Alien films: "Under water, nobody can read your
email".
—
Linus Torvalds
Tonight’s mainline Linux kernel contains about 100,000 instances of
the keyword “goto”. The most deeply nested use of goto that I could
find is
here,
with a depth of 12. Unfortunately this function is kind of
hideous.
Here’s
a much cleaner example with depth 10.
Here are the goto targets that appear more than 200 times:
out (23228 times)
error (4240 times)
err (4184 times)
fail (3250 times)
done (3179 times)
exit (1825 times)
bail (1539 times)
out_unlock (1219 times)
err_out (1165 times)
out_free (1053 times)
[...]
—
John Regehr
diff --git a/Documentation/SubmittingPatches b/Documentation/SubmittingPatches
--- a/Documentation/SubmittingPatches
+++ b/Documentation/SubmittingPatches
@@ -93,7 +93,9 @@ includes updates for subsystem X. Please apply."
The maintainer will thank you if you write your patch description in a
form which can be easily pulled into Linux's source code management
-system, git, as a "commit log". See #15, below.
+system, git, as a "commit log". See #15, below. If the maintainer has
+to hand-edit your patch, you owe them the beverage of their choice the
+next time you see them.
—
Greg Kroah-Hartman
"a beverage".
Pilsener, please.
—
Andrew Morton
Comments (26 posted)
At long last, the code implementing RAID 5 and 6 has been merged into an
experimental branch in the Btrfs repository; this is an important step
toward its eventual arrival in the mainline kernel. The initial benchmark
results look good, but there are a few issues yet to be ironed out before
this code can be considered stable. Click below for the announcement,
benchmark information, and some discussion of how higher-level RAID works
in Btrfs. "
This does sound quite a lot like MD raid, and that's because it is. By
doing the raid inside of Btrfs, we're able to use different raid levels
for metadata vs data, and we're able to force parity rebuilds when crcs
don't match. Also management operations such as restriping and
adding/removing drives are able to hook into the filesystem
transactions. Longer term we'll be able to skip reads on blocks that
aren't allocated and do other connections between raid56 and the FS
metadata."
Full Story (comments: 24)
Kernel development news
By Jonathan Corbet
February 6, 2013
The kernel's
locking validator (often known
as "lockdep") is one of the community's most useful pro-active debugging
tools. Since its introduction in 2006, it has eliminated most
deadlock-causing bugs
from the system. Given that deadlocks can be extremely difficult
to reproduce and diagnose, the result is a far more reliable kernel and
happier users. There
is a shortage of equivalent tools for user-space programming, despite the
fact that deadlock issues can happen there as well. As it happens, making
lockdep available in user space may be far easier than almost anybody might
have thought.
Lockdep works by adding wrappers around the locking calls in the kernel.
Every time a
specific type of lock is taken or released, that fact is noted, along with
ancillary details like whether the processor was servicing an interrupt at
the time. Lockdep also notes which other locks were already held when the
new lock is taken; that is the key to much of the checking that lockdep is
able to perform.
To illustrate this point, imagine that two threads each need to acquire two
locks, called A and B. If one thread acquires A first while the other grabs
B first, each thread ends up holding one of the two locks; when each thread
then goes for the lock it lacks, the system is in trouble.
Each thread will now wait forever for the other to release the lock it
holds; the system is now deadlocked. Things may not come to this point
often at all; this deadlock requires each thread to acquire its lock at
exactly the wrong time. But, with computers, even highly unlikely events
will come to pass sooner or later, usually at a highly inopportune time.
This situation can be avoided: if both threads adhere to a rule
stating that A must always be acquired before B, this
particular deadlock (called an "AB-BA deadlock" for obvious reasons) cannot
happen. But, in a system with a large number of locks, it is not always
clear what the rules for locking are, much less that they are consistently
followed. Mistakes are easy to make. That is where lockdep comes in:
by tracking the order of lock acquisition, lockdep can raise the
alarm anytime it sees a thread acquire A while already holding
B. No actual deadlock is required to get a "splat" (a report of a
locking problem) out of lockdep,
meaning that even highly unlikely deadlock situations can be found before
they ruin somebody's day. There is no need to wait for that one time when
the timing is exactly wrong to see that there is a problem.
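As a concrete (if contrived) illustration of the AB-BA pattern, here is a
minimal user-space sketch using POSIX mutexes; the lock names and thread
bodies are invented for the example:

    #include <pthread.h>

    static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

    static void *thread1(void *arg)
    {
        pthread_mutex_lock(&lock_a);    /* A, then B */
        pthread_mutex_lock(&lock_b);
        pthread_mutex_unlock(&lock_b);
        pthread_mutex_unlock(&lock_a);
        return NULL;
    }

    static void *thread2(void *arg)
    {
        pthread_mutex_lock(&lock_b);    /* B, then A: inconsistent ordering */
        pthread_mutex_lock(&lock_a);
        pthread_mutex_unlock(&lock_a);
        pthread_mutex_unlock(&lock_b);
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;

        pthread_create(&t1, NULL, thread1, NULL);
        pthread_create(&t2, NULL, thread2, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }

Most runs of this program will complete without incident; only an unlucky
interleaving will hang it, which is exactly why a tool that flags the
inconsistent ordering itself, rather than waiting for the hang, is so
valuable.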
Lockdep is able to detect more complicated deadlock scenarios than the one
described above. It can also detect related problems, such as locks that
are not interrupt-safe being acquired in interrupt context. As one might
expect, running a kernel with lockdep enabled tends to slow things down
considerably; it is not an option that one would enable on a production
system. But enough developers test with lockdep enabled that most problems
are found before they make their way into a stable kernel release. As a
result, reports of deadlocks on deployed systems are now quite rare.
Kernel-based tools often do not move readily to user space; the kernel's
programming environment differs markedly from a normal C environment, so
kernel code can normally only be expected to run in the kernel itself. In
this case, though, Sasha Levin noticed that there is not much in the
lockdep subsystem that is truly kernel-specific. Lockdep collects data and
builds graphs describing observed lock acquisition patterns; it is code
that could be run in a non-kernel context relatively easily.
So Sasha proceeded to put
together a patch set creating a lockdep
library that is available to programs in user space.
Lockdep does, naturally, call a number of kernel functions, so a big part
of Sasha's patch set is a long list of stub implementations shorting out
calls to functions like local_irq_enable() that have no meaning in
user space. An abbreviated version of struct task_struct is
provided to track threads in user space, and functions like
print_stack_trace() are substituted with user-space equivalents
(backtrace_symbols_fd() in this case). The kernel's internal locks
(those used by lockdep itself) are reimplemented using POSIX thread ("pthread")
mutexes. Stub versions of
the include files used by the lockdep code are provided in a special
directory. And so on. Once all that is
done, the lockdep code can be built directly out of the kernel tree and
turned into a library.
User-space code wanting to take advantage of the lockdep library needs to
start by including <liblockdep/mutex.h>, which, among other
things, adds a set of wrappers around the pthread_mutex_t and
pthread_rwlock_t types and
the functions that work with them. A call to liblockdep_init() is
required; each thread should also make a call to
liblockdep_set_thread() to set up information for any problem
reports. That is about all that is required; programs that are
instrumented in this way will have their pthreads mutex and
rwlock usage checked by lockdep.
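To make that concrete, an instrumented program might look like the following
sketch, based purely on the interface described above; the details could
change before the patches are merged:

    #include <liblockdep/mutex.h>   /* wraps pthread_mutex_t and friends */
    #include <pthread.h>

    static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        liblockdep_set_thread();        /* identify this thread in reports */

        pthread_mutex_lock(&lock_b);    /* inconsistent ordering: expect */
        pthread_mutex_lock(&lock_a);    /* a lockdep splat here          */
        pthread_mutex_unlock(&lock_a);
        pthread_mutex_unlock(&lock_b);
        return NULL;
    }

    int main(void)
    {
        pthread_t t;

        liblockdep_init();              /* one-time setup */
        liblockdep_set_thread();

        pthread_mutex_lock(&lock_a);    /* establish the A-then-B ordering */
        pthread_mutex_lock(&lock_b);
        pthread_mutex_unlock(&lock_b);
        pthread_mutex_unlock(&lock_a);

        pthread_create(&t, NULL, worker, NULL);
        pthread_join(t, NULL);
        return 0;
    }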
As a proof of concept, the patch adds instrumentation to the (thread-based)
perf tool contained within the kernel source tree.
One of the key aspects of Sasha's patch is that it requires no changes to
the in-kernel lockdep code at all. The user-space lockdep library can be
built directly out of the kernel tree. Among other things, that means that
any future lockdep fixes and enhancements will automatically become
available to user space with no additional effort required on the
kernel developers' part.
In summary, this patch looks like a significant win for everybody involved;
it is thus not surprising that opposition to its inclusion has been hard to
find. There has been a call for some
better documentation, explicit mention that the resulting user-space
library is GPL-licensed, and a runtime toggle for lock validation (so that
the library could be built into applications but not actually track locking
unless requested). Such
details should not be hard to fill in, though. So, with luck, user space
should have access to lockdep in the near future, resulting in more
reliable lock usage.
Comments (5 posted)
By Jonathan Corbet
February 6, 2013
The kernel's "IDR" layer is a curious beast. Its job is conceptually
simple: it is charged with the allocation of integer ID numbers used with
device names, POSIX timers, and more. The implementation is somewhat less
than simple, though, for a straightforward reason: IDR functions are often
called from performance-critical code paths and must be able to work in
atomic context. These constraints, plus some creative programming, have
led to one of the stranger subsystem APIs in the kernel. If Tejun Heo has
his way, though, things will become rather less strange in the future —
though at least one reviewer disagrees with that conclusion.
Strangeness notwithstanding, the IDR API has changed little since it was documented here in 2004. One includes
<linux/idr.h>, allocates an idr structure, and
initializes it with idr_init(). Thereafter, allocating a new
integer ID and binding it to an internal structure is a matter of calling
these two functions:
int idr_pre_get(struct idr *idp, gfp_t gfp_mask);
int idr_get_new(struct idr *idp, void *ptr, int *id);
The call to idr_pre_get() should happen outside of atomic context;
its purpose is to perform all the memory allocations necessary to ensure
that the following call to idr_get_new() (which returns the newly
allocated ID number and associates it with the given ptr) is able
to succeed. The
latter call can then happen in atomic context, a feature needed by many IDR
users.
There is just one little problem with this interface, as Tejun points out
in the introduction to his patch set: the
call to idr_get_new() can still fail. So code using the IDR layer
cannot just ask for a new ID; it must, instead, execute a loop that retries
the allocation until it either succeeds or returns a failure code other than
-EAGAIN. That leads to the inclusion of a lot of
error-prone boilerplate code in well over 100 call sites in the kernel; the
2004 article and Tejun's patch both contain
examples of what this code looks like.
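The idiom in question generally takes a form something like this sketch,
with a hypothetical my_idr protected by my_lock:

    #include <linux/idr.h>
    #include <linux/spinlock.h>

    static DEFINE_IDR(my_idr);          /* hypothetical ID space */
    static DEFINE_SPINLOCK(my_lock);    /* protects my_idr */

    static int old_style_get_id(void *ptr)
    {
        int id, ret;

    again:
        if (!idr_pre_get(&my_idr, GFP_KERNEL))
            return -ENOMEM;             /* could not preallocate */

        spin_lock(&my_lock);
        ret = idr_get_new(&my_idr, ptr, &id);
        spin_unlock(&my_lock);

        if (ret == -EAGAIN)
            goto again;                 /* the preallocated memory was consumed */
        if (ret)
            return ret;

        return id;                      /* "id" is now bound to "ptr" */
    }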
Failure can happen for a number of reasons, but the most likely cause is
tied to the fact that the memory preallocated by idr_pre_get() is
a global resource. A call to idr_pre_get() simply ensures that a
minimal amount of memory is available; calling it twice will not increase
the amount of preallocated memory. So, if two processors simultaneously call
idr_pre_get(), the amount of memory allocated will be the same as
if only one processor had made that call. The first processor to call
idr_get_new() may then consume all of that memory, leaving nothing
for the second caller. That second caller will then be forced to drop out
of atomic context and execute
the retry loop — a code path that is unlikely to have been well tested by
the original developer.
Tejun's response is to change the API, basing it on three new functions:
void idr_preload(gfp_t gfp_mask);
int idr_alloc(struct idr *idp, void *ptr, int start, int end, gfp_t gfp_mask);
void idr_preload_end(void);
As with idr_pre_get(), the new idr_preload() function is
charged with allocating the memory necessary to satisfy the next allocation
request. There are some interesting differences, though. The attentive
reader will note that there is no struct idr argument to
idr_preload(),
suggesting that the preallocated memory is not associated with any
particular ID number space. It is, instead, stored in a single per-CPU
array. Since this memory is allocated for the current CPU, it is not
possible for any other processor to slip in and steal it — at least, not if
the current thread is not preempted. For that reason,
idr_preload() also disables preemption. Given that, the existence
of the new idr_preload_end() function is easy to explain: it is
there to re-enable preemption once the allocation has been performed.
A call to idr_alloc() will actually allocate an integer ID. It
accepts upper and lower bounds for that ID to accommodate code that can
only cope with
a given range of numbers — code that uses the ID as an array index, for
example. If need be, it will attempt to allocate memory using the given
gfp_mask. Allocations will be unnecessary if
idr_preload() has been called, but, with the new interface,
preallocation is no longer necessary. So code that can call
idr_alloc() from process context can dispense with the
idr_preload() and idr_preload_end() calls altogether.
Either way, the only way
idr_alloc() will fail is with a hard memory allocation failure;
there is no longer any need to put a loop around allocation attempts. As a
result, Tejun's 62-part patch set, touching 78 files, results in the net
deletion of a few hundred lines of code.
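Under the proposed interface, the same allocation from atomic context would
reduce to something like the following sketch (reusing the hypothetical
my_idr and my_lock from the earlier example; the exact meaning of the bound
arguments is as documented in the patch set, so the range used here is only
illustrative):

    static int new_style_get_id(void *ptr)
    {
        int id;

        idr_preload(GFP_KERNEL);        /* may sleep; disables preemption */
        spin_lock(&my_lock);
        id = idr_alloc(&my_idr, ptr, 0, 256, GFP_NOWAIT);  /* ask for a small ID */
        spin_unlock(&my_lock);
        idr_preload_end();              /* re-enable preemption */

        return id;                      /* negative only on hard allocation failure */
    }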
Most of the developers whose code was changed by Tejun's patch set
responded with simple Acked-by lines. Eric Biederman, though, didn't like the API; he said "When
reading code with idr_preload I get this deep down creepy feeling. What is
this magic that is going on?" As can be seen in Tejun's response, one developer's magic is
another's straightforward per-CPU technique. As of this writing, that
particular discussion has not reached any sort of public resolution. Your
editor would predict, though, that the simplification of this heavily-used
API will be sufficiently compelling that most developers will be able to
get past any resulting creepy feelings. So the IDR API may be changing in
a mainline kernel in the not-too-distant future.
Comments (5 posted)
By Michael Kerrisk
February 6, 2013
The Linux kernel developers have long been aware of the need for better
testing of the kernel. That testing can take many forms, including testing for performance regressions and testing
for build and boot regressions.
As the term suggests, regression testing is concerned with detecting cases
where a new kernel version causes problems in code or
features that already existed in previous versions of the kernel.
Of course, each new kernel release also adds new features. The Trinity fuzz tester
is a tool that aims to improve testing of one class of new (and existing)
features: the system call interfaces that the kernel presents to user
space.
Insufficient testing of new user-space interfaces is a long-standing issue in kernel
development. Historically, it has been quite common that significant bugs
are found in new interfaces only a considerable time after those interfaces
appear in a stable kernel—examples include epoll_ctl(),
kill(),
signalfd(),
and utimensat().
The problem is that, typically, a new interface is tested
by only one person (the developer of the feature) or at most a handful
of people who have a close interest in the interface. A common problem that
occurs when developers write their own tests is a bias toward tests which
confirm that expected inputs produce expected results. Often, of
course, bugs are found when software is used in unexpected ways that test
little-used code paths.
Fuzz testing is
a technique that aims to reverse this testing bias. The general idea is to
provide unexpected inputs to the software being tested, in the form of
random (or semi-random) values. Fuzz testing has two obvious
benefits. First, employing unexpected inputs means that rarely used code
paths are tested. Second, the generation of random inputs and the tests
themselves can be fully automated, so that a large number of tests can be
quickly performed.
History
Fuzz testing has a
history that stretches back to at least the 1980s, when fuzz testers
were used to test command-line utilities. The history of system call fuzz
testing is nearly as long.
During his talk at linux.conf.au 2013 [ogv video, mp4 video], Dave Jones, the developer of
Trinity, noted that the earliest
system call fuzz tester that he had heard of was Tsys, which was
created around 1991 for System V Release 4. Another early example was a fuzz
tester [postscript] developed at the University of Wisconsin in the
mid-1990s that was run against a variety of kernels, including Linux.
Tsys was an example of a "naïve" fuzz tester: it simply generated random
bit patterns, placed them in appropriate registers, and then executed a
system call. About a decade later, the kg_crashme tool was developed to
perform fuzz testing on Linux. Like Tsys, kg_crashme was a naïve fuzz
tester.
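To make the idea concrete, a naïve fuzzer of the sort described above need be
little more than the following sketch (the range of system call numbers is an
arbitrary guess, and actually running something like this on a machine you
care about is a bad idea):

    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    int main(void)
    {
        srandom(getpid());
        for (;;) {
            long nr = random() % 512;   /* a random system call number */

            /* Purely random arguments: most calls will fail with
               EINVAL, EFAULT, or EBADF long before anything
               interesting happens. */
            syscall(nr, random(), random(), random(),
                    random(), random(), random());
        }
    }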
Naïve fuzz testers are capable of finding some kernel bugs, but the use of purely
random inputs greatly limits their efficacy. To see why this is, we can
take the example of the madvise() system call, which allows a
process to advise the kernel about how it expects to use a region of
memory. This system call has the following prototype:
int madvise(void *addr, size_t length, int advice);
madvise() places certain constraints on its arguments:
addr must be a page-aligned memory address, length must
be non-negative, and advice
must be one of a limited set of small integer values. When any of these
constraints is violated, madvise() fails with the error
EINVAL. Many other system calls impose analogous checks on their
arguments.
A naïve fuzz tester that simply passes random bit patterns to
the arguments of madvise() will,
almost always, perform uninteresting tests that fail with the (expected)
error EINVAL. As well as wasting time, such naïve testing reduces
the chances of generating a more interesting test input that reveals an
unexpected error.
Thus, a few projects started in the mid-2000s with the aim of bringing
more sophistication to the fuzz-testing process. One of these projects,
Dave's scrashme, was started in 2006. Work on that project languished for a
few years, and only picked up momentum starting in late 2010, when Dave
began to devote significantly more time to its development. In December
2010, scrashme was renamed Trinity. At around the same time, another quite
similar tool, iknowthis,
was also developed at Google.
Intelligent fuzz testing
Trinity performs intelligent fuzz testing by incorporating specific
knowledge about each system call that is tested. The idea is to reduce the
time spent running "useless" tests, thereby reaching deeper into the tested
code and increasing the chances of testing a more interesting case that may
result in an unexpected error. Thus, for example, rather than passing
random values to the advice argument of madvise(),
Trinity will pass one of the values expected for that argument.
Likewise, rather than passing random bit patterns to address arguments,
Trinity will restrict the bit pattern so that, much of the
time, the supplied address is page aligned. However, some system
calls that accept address arguments don't require page-aligned
addresses. Thus, when generating a random address for testing, Trinity will
also favor the creation of "interesting" addresses, for example, an address
that is off a page boundary by the value of sizeof(char) or
sizeof(long). Addresses such as these are likely candidates for
"off by one" errors in the kernel code.
In addition, many system calls that expect a
memory address require that address to point to memory that is actually
mapped. If there is no mapping at the given address, then these system
calls fail (the typical error is ENOMEM or EFAULT). Of
course, in the large address space available on modern 64-bit
architectures, most of the address space is unmapped, so that even if a
fuzz tester always generated page-aligned addresses, most of the resulting
tests would be wasted on producing the same uninteresting error. Thus,
when supplying a memory address to a system call, Trinity will favor
addresses for existing mappings. Again, in the interests of triggering
unexpected errors, Trinity will pass the addresses of "interesting"
mappings, for example, the address of a page containing all zeros or all
ones, or the starting address at which the kernel is mapped.
In order to bring intelligence to its tests, Trinity must have some
understanding of the arguments for each system call. This is accomplished
by defining structures that annotate each system call. For example, the
annotation file for madvise() includes the following lines:
struct syscall syscall_madvise = {
    .name = "madvise",
    .num_args = 3,
    .arg1name = "start",
    .arg1type = ARG_NON_NULL_ADDRESS,
    .arg2name = "len_in",
    .arg2type = ARG_LEN,
    .arg3name = "advice",
    .arg3type = ARG_OP,
    .arg3list = {
        .num = 12,
        .values = { MADV_NORMAL, MADV_RANDOM, MADV_SEQUENTIAL, MADV_WILLNEED,
                    MADV_DONTNEED, MADV_REMOVE, MADV_DONTFORK, MADV_DOFORK,
                    MADV_MERGEABLE, MADV_UNMERGEABLE, MADV_HUGEPAGE, MADV_NOHUGEPAGE },
    },
    ...
};
This annotation describes the names and types of each of the three
arguments that the system call accepts. For example, the first argument is
annotated as ARG_NON_NULL_ADDRESS, meaning that Trinity should
provide an intelligently selected, semi-random, nonzero address for this
argument. The last argument is annotated as ARG_OP, meaning that
Trinity should randomly select one of the values in the corresponding list
(the MADV_* values above).
The second madvise() argument is annotated ARG_LEN,
meaning that it is the length of a memory buffer. Again, rather than
passing purely random values to such arguments, Trinity attempts to
generate "interesting" numbers that are more likely to trigger errors—for
example, a value whose least significant bits are
0xfff might find an off-by-one error in the logic of some system call.
Trinity also understands a range of other annotations, including
ARG_RANDOM_INT, ARG_ADDRESS (an address that can be
zero), ARG_PID (a process ID), ARG_LIST (for bit masks
composed by logically ORing values randomly selected from a specified
list), ARG_PATHNAME, and ARG_IOV (a
struct iovec of the kind passed to system calls such as
readv()). In each case, Trinity uses the annotation to generate a
better-than-random test value that is more likely to trigger an unexpected
error. Another interesting annotation is ARG_FD, which causes
Trinity to pass an open file descriptor to the tested system call. For this
purpose, Trinity opens a variety of file descriptors, including descriptors
for pipes, network sockets, and files in locations such as /dev,
/proc, and /sys. The open file descriptors are randomly
passed to system calls that expect descriptors. By now, it might start to
become clear that you don't want to run Trinity on a system that has the
only copy of your family photo albums.
In addition to annotations, each system call can optionally have a
sanitise routine (Dave's code employs the British
spelling) that performs further fine-tuning of the arguments for the
system call. The sanitise routine can be used to construct arguments that
require special values (e.g., structures) or to correctly initialize the
values in arguments that are interdependent. It can also be
used to ensure that an argument has a value that won't cause an expected
error. For example, the sanitise routine for the madvise() system
call is as follows:
static void sanitise_madvise(int childno)
{
    shm->a2[childno] = rand() % page_size;
}
This ensures that the second (length) argument given to
madvise() will be no larger than the page size, preventing the
ENOMEM error that would commonly result when a large length
value causes madvise() to touch an unmapped area of
memory. Obviously, this means that the tests will never exercise the case where
madvise() is applied to regions larger than one page. This
particular sanitise routine could be improved by sometimes
allowing length values that are larger than the page size.
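One possible variant along those lines, purely illustrative and using only
the names already seen above, might keep most tests within a single page but
occasionally pass a multi-page length:

    static void sanitise_madvise(int childno)
    {
        if (rand() % 4)
            /* usual case: stay within one page */
            shm->a2[childno] = rand() % page_size;
        else
            /* occasionally cover multi-page regions too */
            shm->a2[childno] = rand() % (16 * page_size);
    }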
Running trinity
The Trinity home
page has links to the Git repository as well as to the latest stable
release (Trinity 1.1, which was released in
January 2013). Compilation from source is straightforward; then Trinity can
be invoked with a command line as simple as:
$ ./trinity
With no arguments, the program repeatedly tests
randomly chosen system calls. It is also possible to test selected system
calls using one or more instances of the -c command-line
option. This can be especially useful when testing new system calls.
Thus, for example, one could test just the madvise() system call
using the following command:
$ ./trinity -c madvise
In order to perform its work, the trinity program creates a
number of processes:
The main process performs various initializations (e.g.,
opening the file descriptors and creating the memory mappings used for
testing) and then kicks off a number (default: four) of child processes
that perform the system call tests. A shared memory region (created by the
initial trinity process) is used to record various pieces of
global information, such as open file descriptor numbers, total system
calls performed, and number of system calls that succeeded and failed. The
shared memory region also records various information about each of the
child processes, including the PID, and the system call number and
arguments for the system call that is currently being executed as well as
the system call that was previously executed.
The watchdog process ensures that the test system is still
working correctly. It checks that the children are progressing (they may be
blocked in a system call), and kills them if they are not; when the
main process detects that one of its children has terminated
(because the watchdog killed it, or for some other reason), it
starts a new child process to replace it. The watchdog also
monitors the integrity of the memory region that is shared between the
processes, in case some operation performed by one of the children has
corrupted the region.
Each of the child processes writes to a separate log file, recording
the system calls that it performs and the return values of those system
calls. The file is synced just before each system call is performed, so
that if the system panics, it should be possible to determine the cause of
the panic by looking at the last recorded system call in each of the log
files. The log file contains lines such as the following, which show the
PID of the child process, a sequential test number, and the system call with
its arguments and result:
[17913] [0] mmap(addr=0, len=4096, prot=4, flags=0x40031, fd=-1, off=0) = -1 (Invalid argument)
[17913] [1] mmap(addr=0, len=4096, prot=1, flags=0x25821, fd=-1, off=0x80000000) = -541937664
[17913] [2] madvise(start=0x7f59dff7b000, len_in=3505, advice=10) = 0
...
[17913] [6] mmap(addr=0, len=4096, prot=12, flags=0x20031, fd=-1, off=0) = -1 (Permission denied)
...
[17913] [21] mmap(addr=0, len=4096, prot=8, flags=0x5001, fd=181, off=0) = -1 (No such device)
Trinity can be used in a number of ways. One possibility is simply to
leave it running until it triggers a kernel panic and then look at the
child logs and the system log in order to discover the cause of the
panic. Dave has sometimes left systems running for hours or days in order
to discover such failures. New system calls can be exercised using the
-c command-line option described above. Another possible use is to
discover unexpected (or undocumented) failure modes of existing system
calls: suitable scripting on the log files can be used to obtain summaries
of the various failures of a particular system call.
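For instance, assuming the per-child logs are named trinity-child*.log in the
current directory (the exact names may differ), a rough summary of the
madvise() results from a run could be produced with something like:

$ grep -h 'madvise(' trinity-child*.log | sed 's/.*= //' | sort | uniq -c | sort -rn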
Yet another way of using the trinity program is with the
-V (victim files) option. This option takes a directory argument:
the program will randomly open files in that directory and pass the
resulting file descriptors to system calls. This can be useful for
discovering failure modes in a particular filesystem type. For example,
specifying an NFS mount point as the directory argument would exercise
NFS. The -V flag can also be used to perform a limited kind of
testing of user-space programs. During his linux.conf.au
presentation, Dave demonstrated the use of the following command:
$ ./trinity -V /bin -c execve
This command has the effect of executing random programs in /bin with random
string arguments. Looking at the system log revealed a large number of
programs that crashed with a segmentation fault when given unexpected arguments.
Results
Trinity has been rather successful at finding bugs. Dave
reports that he himself found more than 150 bugs in 2012, and many more were
found by other people who were using Trinity. Trinity usually finds bugs in
new code quite quickly. It tends to find the same bugs repeatedly, so that
in order to find other bugs, it is probably necessary to fix the already
discovered bugs first.
Interestingly, Trinity has found bugs not just in system call code. Bugs have
been discovered in many other parts of the kernel, including the networking
stack, virtual memory code, and drivers. Trinity has found many error-path
memory leaks and cases where system call error paths failed to release kernel locks. In
addition, it has discovered a number of pieces of kernel code that had poor
test coverage or indeed no testing at all. The oldest bug that Trinity has
so far found dates back to 1996.
Limitations and future work
Although Trinity is already quite an effective tool for finding bugs,
there is scope for a lot more work to make it even better. An ongoing task
is to add support for new system calls and new system call flags as they
are added to the kernel. Only about ten percent of system calls currently
have sanitise routines. Probably many other system calls could do with
sanitise routines so that tests would get deeper into the code of those
system calls without triggering the same common and expected errors.
Trinity supports many network protocols, but that support could be further
improved and there are other networking protocols for which support could
be added.
Some system calls are annotated with an AVOID_SYSCALL flag,
which tells Trinity to avoid testing that system call. (The --list
option causes Trinity to display a list of the system calls that it knows
about, and indicates those system calls that are annotated with
AVOID_SYSCALL.) In some cases, a system call is avoided because it
is uninteresting to test—for example, system calls such as
fork() have no arguments to fuzz and exit() would simply
terminate the testing process. Some other system calls would interfere with
the operation of Trinity itself—examples include close(),
which would randomly close test file descriptors used by child processes,
and nanosleep(), which might put a child process to sleep for a
long time.
However, there are other system calls such as ptrace() and
munmap() that are currently marked with AVOID_SYSCALL,
but which probably could be candidates for testing by adding more
intelligence to Trinity. For example, munmap() is avoided because
it can easily unmap mappings that are needed for the child to
execute. However, if Trinity added some bookkeeping code that recorded
better information about the test mappings that it creates, then (only)
those mappings could be supplied in tests of munmap(), without
interfering with other mappings needed by the child processes.
Currently, Trinity randomly invokes system calls. Real programs demonstrate
common patterns for making system calls—for example,
opening, reading, and closing a file. Dave would like to add test support for
these sorts of commonly occurring patterns.
An area where Trinity currently provides poor coverage is the
multiplexing ioctl() system call, "the worst interface known
to man". The problem is that ioctl() is really a mass of
system calls masquerading as a single API. The first argument is a file
descriptor referring to a device or another file type, the second argument is
a request type that depends on the type of file or device referred to by
the first argument, and the data type of the third argument depends on the
request type. To achieve good test support for ioctl() would
require annotating each of the request types to ensure that it is
associated with the right type of file descriptor and the right data type
for the third argument. There is an almost limitless supply of work here,
since there are hundreds of request types; thus, in the first instance, this
work would probably be limited to supporting a subset of more interesting
request types.
There are a number of other improvements that Dave would like to see in
Trinity; the source code tarball contains a lengthy TODO
file. Among these improvements are better support for "destructors" in the
system call handling code, so that Trinity does not leak memory, and
support for invoking (some) system calls as root. More generally,
Trinity's ability to find further kernel bugs is virtually limitless: it
simply requires adding ever more intelligence to each of its tests.
Comments (7 posted)
Patches and updates
Kernel trees
Core kernel code
Development tools
Device drivers
Documentation
Filesystems and block I/O
Networking
Architecture-specific
Security-related
Miscellaneous
Page editor: Jonathan Corbet
Distributions
By Jake Edge
February 6, 2013
Linux containers, which are implemented using kernel namespaces and control groups, allow
processes to operate
in an isolated manner, so that the interactions with other processes and
kernel services are limited. That makes containers attractive for a
variety of tasks, including many that might have once been done using
chroot(). As namespace support in the kernel matures, tools
to set up and use containers are becoming more prevalent—and easier
to use. A feature
proposed for Fedora 19 will make use of systemd to create and manage
containers.
At first blush, systemd does not really seem like a
container-management tool. In fact, detractors might see that as feature
creep. But systemd already has infrastructure to spawn containers in
the form of the systemd-nspawn
command. In addition, creating a new process ID (PID) namespace means that
an init program (i.e. PID 1) is needed, which is, of course, the
role that systemd normally fills.
Beyond that, systemd is designed around the idea of "socket activation", so
that services can be started when the first connection is made to them.
That idea can be applied to containers, so that a new container gets
started when a connection is made to a certain port. This "container
activation" feature is reminiscent of a similar idea in the SELinux-based secure containers feature that
was added to Fedora 18. Unlike the secure containers, though, those
created with systemd-nspawn are not primarily intended for
security. With proper care and feeding, however, they can
provide another layer of "defense in depth".
One goal of the "systemd lightweight containers" feature is to make it easy
to run an unmodified Fedora 19 inside the containers created by
systemd-nspawn. But it isn't just Fedora that could run in those
containers; Debian is another candidate, and other distributions are possible
too. By installing a minimal system
into a directory somewhere—using yum or
debootstrap for example—and
then pointing systemd-nspawn at it, a usable version of the
distribution can be run. Users can log into it from the "console", set
up a service or services to run inside of it, and so on. Rudimentary
directions on setting that up are part of the feature proposal.
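As a rough illustration only (the paths, package set, and release used here
are invented, and the feature proposal has the authoritative procedure),
creating such a tree might look something like:

yum -y --releasever=19 --nogpgcheck --installroot=/srv/f19 install systemd passwd yum fedora-release vim-minimal
debootstrap --arch=amd64 wheezy /srv/debian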
By default, systemd-nspawn sets up separate PID, mount, IPC
(inter-process communication), and
UTS (host and domain name) namespaces, and executes the given command
inside of them. If invoked with the -b option, it will search for
an init binary to execute, and pass any arguments to that
program. This command:
systemd-nspawn -bD /srv/rawhide 3
would start a container with a root filesystem at
/srv/rawhide,
execute the
init found there (which would be Rawhide's
version of systemd) and pass the runlevel "3" to it. Note that due to a
bug in
Fedora's audit support (or the kernel, or
systemd-nspawn,
depending on who you talk to), auditing needs to be disabled in the kernel
by booting with "
audit=0". Even then, some systems will still
experience problems unless they give the container extra capabilities using
a command like:
systemd-nspawn --capability=cap_audit_write,cap_audit_control -bD /srv/rawhide 3
Presumably, that particular problem will be shaken out before long, as
giving those capabilities to the container allows it to control auditing in
the host—just the kind of thing a container is meant to avoid.
With a simple unit file, the container can be turned into a service that
can be started, stopped, and monitored with systemctl. Fans of
the systemd journal can use the -j option of
systemd-nspawn to effectively export
the container's journal to the host. A "journalctl -m"
command on the host will then show merged journal entries from the host and any
containers.
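Such a unit file might look roughly like the following minimal sketch; the
container name, path, and options are illustrative rather than taken from the
feature page:

[Unit]
Description=Rawhide container

[Service]
ExecStart=/usr/bin/systemd-nspawn -jbD /srv/rawhide 3

[Install]
WantedBy=multi-user.target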
Multiple containers can be started using the same directory
and they won't be able to see each other. Changes to the filesystem will
be immediately visible in any container using it, but processes in one container cannot interact with processes
in another, nor with the processes on the host.
Using the techniques described in "systemd
for Administrators, Part XX", these containers can easily be made
socket activated. An incoming connection on a particular host port would
spawn the container, which would have unit files that recognized the
incoming connection to start the right service on the inside. Users will
likely also want to set up
sshd inside the container to run on a different port (the host
presumably already uses
22) for ease of accessing the container.
There is also an option to run the container in a separate network
namespace (--private-network), which essentially turns off
networking for the container. Only the loopback interface is available to
the container, so no network connections of any kind can be made, though
it could still read and write using socket file descriptors that were
passed to it. That would be a way to isolate an internet-facing service,
for example.
There are a number of different use cases for the feature, but it also
looks like something that will be built upon in the future. Allowing for tightened
security, possibly using user ID namespaces, would be one possibility.
Adding support for network namespaces that have more than just the loopback
interface could be interesting as well. Since FESCo approved the feature
for Fedora 19 at its February 6 meeting, more users of the feature can be
expected. That means that
more use cases will be found, which seems likely to lead to expanded
functionality, but it's a useful feature as it stands.
Comments (10 posted)
Brief items
I tend to think that when a project is hurting its users instead of helping them, even with good intentions, something is very wrong about that project.
--
Lionel Dricot
Anaconda didn’t just shed its skin
--
Ryan
Lerch
the real solution to all these problems is openCDE, which I look
forward to proposing as default in the F20 cycle
--
Jef Spaleta
Comments (none posted)
The wait for a Fedora 18 build for ARM systems is over. "
The Fedora 18 for ARM release includes pre-built images for Versatile Express (QEMU),
Trimslice (Tegra), Pandaboard (OMAP4), GuruPlug (Kirkwood), and Beagleboard (OMAP3)
hardware platforms. Fedora 18 for ARM also includes an installation tree in the
yum repository which may be used to PXE-boot a kickstart-based installation on
systems that support this option, such as the Calxeda EnergyCore
(HighBank)." See
the
release announcement for more information.
Full Story (comments: 4)
Linaro 13.01 has been
released. Linaro is a project that focuses on "
consolidating and optimizing open source software for the ARM architecture". Linaro provides a common foundation of system software (kernel, etc.) and tools for various
ARM distributions to use. Detailed information on 13.01 can be found in the
release notes. "
The Developer Platform Team has enabled 64bit HipHop VM development in OpenEmbedded, continued to merge ARMv8 support into the OpenEmbedded platform and upstream, engaged initial support for the Arndale board and released Linux Linaro 3.8-rc4 2013.01."
Comments (none posted)
Distribution News
Red Hat Enterprise Linux
Red Hat has issued an advisory that Red Hat Enterprise Linux 3
will reach the end of its Extended Lifecycle Support on January 30, 2014.
Full Story (comments: none)
Newsletters and articles of interest
Comments (none posted)
ExtremeTech
takes
a look at five Ubuntu derivatives: BackBox, Bio Linux, PinguyOS, Poseidon, and XBMCbuntu. "
Although BackTrack Linux is generally-considered the de facto distribution for penetration testing, BackBox has emerged as a promising Ubuntu alternative. The latest release is BackBox Linux 3 and it features an Ubuntu base with Linux kernel 3.2, a customized XFCE 4.8 desktop, and a number of computer forensics tools. The project began as a small project led by Raffaele Forte approximately three years ago."
Comments (1 posted)
Page editor: Rebecca Sobol
Development
By Jake Edge
February 7, 2013
For three days just prior to FOSDEM,
GNOME contributors gathered in Brussels to discuss and work on the
"developer experience". The developer
experience hackfest was well attended, attracting people from all
over the world. It was also quite productive, judging from various blog
postings about the meeting. The headline decision—making JavaScript the preferred language for developing
GNOME applications—was certainly noteworthy, but there were
other plans made as well.
The JavaScript choice makes sense on a number of levels, but it also
has clearly caused a fair amount of disdain to be directed at the GNOME project. Any
language chosen would have likely resulted in much the same level of
discontent. As Travis Reitter noted in his announcement: "The
important thing was that we had to make a decision." That helps in
having
a single answer to the "how do I develop a GNOME
application?" question, which is part of the justification for the move:
- It allows us to focus when we write developer documentation, fixing bugs in the development environment and the development of tools. This reduces our [maintenance] costs and enables us to be vastly more efficient.
- It enables code and knowledge sharing to occur, so that people can easily copy and paste code from existing applications, or find information about common problems and challenges.
- It provides a coherent and easy-to-follow path for new developers.
- It allows us to include the full GNOME framework within the language itself.
None of the existing supported languages are being deprecated because of the
decision; it's really just a matter of focus for the project. While a fair
number of complaints about the choice have been heard, here at LWN and
elsewhere, it's not clear how many GNOME contributors are among the
disgruntled. On the other hand, some GNOME developers who might seem like
candidates for grumbling are on board with the change. John "J5" Palmieri
is a big Python fan who
worked on GObject Introspection for that language, but is pleased with the
decision:
Day in and day out I work with many computer languages. While I may hold my
favorites close to me, I have also come to recognize there are times when
even languages I may not be fond of are a better fit for a particular
problem space. Like it or not, JavaScript is pervasive and really is the
way forward for rapid development in GNOME. It must have been a tense
moment when the decision was made but I applaud that a hard decision was
made and we can now move forward with a clear vision of delivering a great
developer story for the GNOME desktop.
Another interesting discussion took place in the "application distribution
and sandboxing" subgroup, as reported
by Alexander Larsson. The group considered two different pieces of the
application infrastructure puzzle: how to deploy (i.e. create, install, and run)
application bundles and how to protect the rest of the user's
session from application misbehavior.
For deployment, GNOME is considering having applications declare their
library dependencies, in either a coarse-grained or a fine-grained manner, and then
installing and running them in an environment that guarantees those
dependencies regardless of what the system itself is running. That would
be done using containers that provide a private view of the
platform dependencies that the application says it requires. As Larsson
describes, there are some benefits to that approach:
With this kind of isolation we guarantee two things, first of all there
will never be any accidental leaks of dependencies. If your app depends on
anything not in the supported platform it will just not run and the
developer will immediately notice. Secondly, any incompatible changes in
system libraries are handled and the app will still get a copy of the older
versions when it runs.
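To make the "private view" idea a bit more concrete, here is a hypothetical sketch, in C, of the kind of plumbing such a container could use: a new mount namespace in which one specific platform runtime is bind-mounted over the path the application links against. The paths used (/opt/gnome-platform-3.8 and /app/lib) are invented for illustration and are not part of any published GNOME design; the program must be run with CAP_SYS_ADMIN and assumes both directories already exist.

    #define _GNU_SOURCE
    #include <sched.h>        /* unshare(), CLONE_NEWNS */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mount.h>    /* mount(), MS_BIND, MS_REC, MS_PRIVATE */
    #include <unistd.h>

    int main(int argc, char *argv[])
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s /path/to/app-binary [args...]\n", argv[0]);
            exit(EXIT_FAILURE);
        }

        /* New mount namespace: mounts made below are invisible to the host. */
        if (unshare(CLONE_NEWNS) == -1) {
            perror("unshare");
            exit(EXIT_FAILURE);
        }

        /* Stop mount events from propagating back to the host namespace. */
        if (mount("none", "/", NULL, MS_REC | MS_PRIVATE, NULL) == -1) {
            perror("mount MS_PRIVATE");
            exit(EXIT_FAILURE);
        }

        /* Bind the declared platform version over the app's library path,
           regardless of what the host system has installed. */
        if (mount("/opt/gnome-platform-3.8", "/app/lib", NULL, MS_BIND, NULL) == -1) {
            perror("bind mount");
            exit(EXIT_FAILURE);
        }

        /* Run the application inside this private view. */
        execv(argv[1], &argv[1]);
        perror("execv");
        exit(EXIT_FAILURE);
    }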
Beyond that, the idea of application isolation can be extended to provide a
fully isolated sandbox for untrusted applications. It would require D-Bus
routing in the kernel, which is proving to be a hard sell, but
"hopefully this will work out this time", Larsson said. There
is interest in adding a facility like the Android
Intents system to allow sandboxed applications to communicate. Since
that kind of communication implies a security domain transition, the group
came up with the name "Portals" for this new feature. The discussion
continues post-hackfest, he said, "hopefully we can start [to] implement
parts of this soon".
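Nothing had been implemented at the time of the hackfest, but the shape of such a "portal" request is easy to imagine. The sketch below uses GLib's GDBus API to call a purely hypothetical portal service; the bus name, object path, interface, and method (org.example.Portal, OpenFile) are invented for illustration only. The point is that the privileged work, such as showing a file chooser, happens in a service outside the sandbox, and only the result crosses back over the security boundary.

    #include <gio/gio.h>

    int main(void)
    {
        GError *error = NULL;
        GDBusConnection *bus;
        GVariant *reply;
        const char *path;

        bus = g_bus_get_sync(G_BUS_TYPE_SESSION, NULL, &error);
        if (bus == NULL) {
            g_printerr("session bus: %s\n", error->message);
            return 1;
        }

        /* Ask the (imaginary) portal service to run a file chooser on the
           application's behalf; only the chosen path is returned. */
        reply = g_dbus_connection_call_sync(bus,
                                            "org.example.Portal",   /* hypothetical name */
                                            "/org/example/Portal",
                                            "org.example.Portal",
                                            "OpenFile",
                                            g_variant_new("(s)", "Select a document"),
                                            G_VARIANT_TYPE("(s)"),
                                            G_DBUS_CALL_FLAGS_NONE,
                                            -1, NULL, &error);
        if (reply == NULL) {
            g_printerr("portal call failed: %s\n", error->message);
            return 1;
        }

        g_variant_get(reply, "(&s)", &path);
        g_print("portal returned: %s\n", path);

        g_variant_unref(reply);
        g_object_unref(bus);
        return 0;
    }

(Building the sketch requires the gio-2.0 development files, e.g. gcc portal.c $(pkg-config --cflags --libs gio-2.0).)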
The JavaScript decision was also helpful for the documentation subgroup, as
Meg
Ford and Allan
Day reported. Those two, along with several others, started reworking
the GNOME developer web site,
including adding new tutorials for first-time
application developers. Significant redesign of the site with several new
features was discussed. Ford described some of those:
We had several new ideas which I think will really improve the resources
for coding, including making all of the API documentation available as a
single download (perhaps available in the SDK download). Another idea that
I thought was particularly good is adding code search to the online API
documentation so developers can easily find existing code in git.gnome.org
to use as a reference. The idea here is to possibly use the
re2 library,
which Google open sourced after it shut down Google Code Search.
It would seem that the GNOME project has chosen the direction for its
"developer story". That story is an important piece of the puzzle when
trying to
attract application developers to a platform. One could argue about the
choices made, but it is significant (and important, as Reitter said) that a
choice was made. JavaScript is certainly popular, and many
developers are already comfortable using the language. That may prove
helpful in attracting some of those developers, but having one place to
point when someone asks about developing a GNOME application is likely to
be more important still.
Comments (1 posted)
Brief items
While this change is "obviously correct", every programmer has also had
the experience of spending hours trying to find a bug, only to discover
it was an invisible one-character typo. Thus every programmer should
also know that "obviously correct" and "correct" are not quite the same.
—
Matt Mackall
What we do to deprecate functionality in X is we break it accidentally, we
wait three or four years, see if we've gotten any bug reports—if we've
gotten any bug reports we may actually go fix it; if we've gotten no
bug reports we silently delete the feature.
— Keith Packard, at LCA 2013, shortly before suggesting that the kernel adopt the same approach.
If you are an upstream of a software that uses autoconf - Please run autoreconf against autotools-dev 20120210.1 or later, and make a release of your software.
Aarch64 porters will be grateful as updated software trickles down to distributions.
—
Riku Voipio
Comments (23 posted)
The KDE Community has announced the 4.10 releases of KDE Plasma Workspaces,
Applications and Development Platform. "
This release combines the
latest technologies along with many improvements to bring users the premier
collection of Free Software for home and professional use."
Full Story (comments: none)
Version 1.2.0 of Barman, the Backup and Recovery Manager for PostgreSQL, has been released. This release introduces "automated support for retention policies based on redundancy of periodical backups or recovery window." Such policies are "integrated by a safety mechanism that allows administrators to specify a minimum number of periodical backups that must exist at any time for a server."
Full Story (comments: none)
The Topaz project, which
is creating a new Ruby implementation written in RPython, has
announced
its existence. "
Because Topaz builds on RPython, and thus much
of the fantastic work of the PyPy developers, it comes out of the box with
a high performance garbage collector, and a state of the art JIT
(just-in-time) compiler. What does this mean? Out of the box Topaz is
extremely fast."
Comments (13 posted)
Version 5.6.10 of the MySQL database server has been released. Despite the x.x.10 version number, this is the first stable release of the 5.6 series to officially be declared "general availability" (GA). The set of changes included is extensive; the project has a full changelog available on its site.
Full Story (comments: none)
Firefox 18.0.2 has been released. See the
release
notes for details.
Comments (3 posted)
Version 2.6 of Krita, the KDE-flavored painting and natural-media simulation application, has been released. This version includes improvements to OpenRaster support, Photoshop (.PSD) file export, and new support for the OpenColorIO color-management system used in some professional video workflows.
Full Story (comments: none)
Newsletters and articles
Comments (none posted)
Travis Reitter
reports that the
GNOME project has settled on JavaScript as the primary language for
application development. "
It's critical that everyone understands
this decision as a plan to elevate the language, bindings, tools, and
documentation to a level of quality we have not yet achieved. It is not a
decision to abandon any other language bindings. We will continue to
distribute other bindings and documentation as we do now and compatibility
for the other languages will continue to be developed as they are today by
the developers involved with those modules."
Comments (159 posted)
Sourcefabric, creator of open source journalism applications such as the Airtime radio station management system, has released a newsletter that rounds up recent developments in its software frameworks. This edition includes the addition of Apache Solr searching in the Newscoop CMS and Booktype's appearance at the Tools of Change conference.
Full Story (comments: none)
According to blog postings from both the
Chrome
and
Mozilla
projects, the Chrome and Firefox browsers have achieved an
interoperability milestone: WebRTC video calls can now run directly
between the two browsers, with the media streams flowing peer-to-peer
rather than through an intervening server.
"
RTCPeerConnection (also known simply as PeerConnection or PC)
interoperability means that developers can now create Firefox WebRTC
applications that make direct audio/video calls to Chrome WebRTC
applications without having to install a third-party plugin. Because the
functionality is now baked into the browser, users can avoid problems with
first-time installs and buggy plugins, and developers can deploy their apps
much more easily and universally."
Comments (6 posted)
Page editor: Nathan Willis
Announcements
Brief items
The GNOME Foundation has
announced the acceptance of 25 women into the next round of its Outreach Program for Women internships at ten different free software organizations. "
Interns from the most recent completed round of the Outreach Program for Women added to the long list of accomplishments of the 47 women who took part in the program since December 2010.
[...]
The dramatic increase in participation of women in GNOME and our experience with the Outreach Program for Women show that there are many women interested in contributing to Free Software and that reaching out to them with targeted opportunities is an effective way to help them get involved. We anticipate the expansion of the program will create a big shift in the demographic of Free Software contributors."
Comments (136 posted)
The Free Software Foundation's licensing team has posted
a
brief report on what it did in 2012. "
We responded and resolved
over 400 reports of suspected license violations and over 600 general
licensing and compliance questions."
Comments (none posted)
The linux.conf.au organizers have started putting up videos of the talks
from the 2013 event; they are available in
Ogg or
MP4 format.
Daniel Stone's
Wayland
talk may be of special interest to some; he has a number of messages for
those who post comments about Wayland and X on LWN.
Comments (16 posted)
The first FOSDEM videos are
now
available on YouTube. (Thanks to Peter Sztanojev)
Comments (none posted)
Articles of interest
This issue of the Free Software Foundation newsletter covers RIP Aaron
Swartz, Interview with Matthieu Aubry of Piwik, Lulu drops DRM, What can we
ask of the USPTO?, Where in the world is RMS? community contest, a new
edition of the Emacs manual, and several other topics.
Full Story (comments: none)
Completing the set, Koen Vervloesem has put out the last
three speaker interviews for FOSDEM 2013, which starts on Saturday, February 2 in Brussels, Belgium. In this edition:
Kohsuke Kawaguchi on "How we made the Jenkins community",
Jeremy Allison on Samba4, and
Morgan Quigley on "ROS: towards open source in robotics".
Comments (none posted)
Linux.com has posted a series (
part
1,
part
2) on 3D printers and Linux. "
Since Linux.com reported on 3D
printing a year ago, the industry has exploded. At the time, it was hard to
find a preassembled open source 3D printer, but there are now over a dozen
models available in both finished and kit form. Most are based on RepRap,
but there are also some original open source designs ranging from the
low-cost, $400-$800 Printrbot and Solidoodle printers to more feature-rich
$1,400-$1,800 models like Type A and Ultimaker."
Comments (none posted)
Matthew Garrett has posted
a summary of currently-known
problems with UEFI-based machines and Linux. "
Some Lenovos will
only boot Windows or Red Hat Enterprise Linux. I recommend drinking,
because as far as I know they haven't actually got around to doing anything
useful about this yet."
Meanwhile, James Bottomley has put up a
report on his work with the Linux Foundation's secure boot loader.
"The upshot of all of this is you can now use Pre-BootLoader with
Gummiboot (as demoed at LCA2013). To boot, you have to add two hashes: one
for Gummiboot itself and one for the kernel you’re booting, but actually
this is a good thing because now you have a single security policy
controlling all of your boot sequence. Gummiboot itself has also been
patched to recognise a failure due to secure boot and pop up a helpful
message telling you which hash to enrol."
Comments (14 posted)
Calls for Presentations
The submission deadline for the Linux Audio Conference has been extended
until February 17. The conference will take place May 9-12 in Graz, Austria.
Full Story (comments: none)
The 20th Annual Tcl/Tk Conference (Tcl'2013) will take place September
23-27 in New Orleans, Louisiana. The call for papers deadline is August
5. "
The program committee is asking for papers and
presentation proposals from anyone using or developing with Tcl/Tk
(and extensions)."
Full Story (comments: none)
Upcoming Events
If you are at SCALE (Southern California Linux Expo) February 22-23,
consider creating or attending a BoF. "
BoFs should be as informal as
possible with group interaction -- not just one person giving a
presentation. Even better is if there is positive inter-group interaction
such as multiple local user groups meeting together or various database
groups meeting to discuss future database and big data needs in the FLOSS
ecosystem."
Full Story (comments: none)
The Linux Professional Institute (LPI) will be participating in
CeBIT (March 5-9). "
During the
conference, LPI representatives will present on the subject of advancing
IT workforce development in Linux and Open Source, specifically during
the conference's activities for CeBIT's Partner Country of Poland. CeBIT
has a 20+ year history as one of the world's largest conferences and
trade shows for IT and telecommunications solutions and LPI has been a
regular participant for several years."
Full Story (comments: none)
Registration is open for LibrePlanet, March 23-24, in Cambridge, MA. "
This year, the conference focuses on bringing together the diverse voices that have a stake in free software, from software developers to activists, academics to computer users. The theme is called "Commit Change," and it's about drawing ideas from everyone to create the software freedom we need."
Full Story (comments: none)
The North American IPv6 Summit will take place April 17-19 in Denver,
Colorado. "
This
career-enhancing event will help attendees master the transition to IPv6,
offer IPv6 certification, and ensure network professionals stay relevant
and ahead of the curve in their profession through the next 10 years. The
educational event includes an optional pre-conference tutorial session and
a 2-day general session on IPv6 related topics."
Full Story (comments: none)
Events: February 7, 2013 to April 8, 2013
The following event listing is taken from the
LWN.net Calendar.
| Date(s) | Event | Location |
| February 15–February 17 | Linux Vacation / Eastern Europe 2013 Winter Edition | Minsk, Belarus |
| February 18–February 19 | Android Builders Summit | San Francisco, CA, USA |
| February 20–February 22 | Embedded Linux Conference | San Francisco, CA, USA |
| February 22–February 24 | Mini DebConf at FOSSMeet 2013 | Calicut, India |
| February 22–February 24 | FOSSMeet 2013 | Calicut, India |
| February 22–February 24 | Southern California Linux Expo | Los Angeles, CA, USA |
| February 23–February 24 | DevConf.cz 2013 | Brno, Czech Republic |
| February 25–March 1 | ConFoo | Montreal, Canada |
| February 26–February 28 | ApacheCon NA 2013 | Portland, Oregon, USA |
| February 26–February 28 | O’Reilly Strata Conference | Santa Clara, CA, USA |
| February 26–March 1 | GUUG Spring Conference 2013 | Frankfurt, Germany |
| March 4–March 8 | LCA13: Linaro Connect Asia | Hong Kong, China |
| March 6–March 8 | Magnolia Amplify 2013 | Miami, FL, USA |
| March 9–March 10 | Open Source Days 2013 | Copenhagen, DK |
| March 13–March 21 | PyCon 2013 | Santa Clara, CA, US |
| March 15–March 16 | Open Source Conference | Szczecin, Poland |
| March 15–March 17 | German Perl Workshop | Berlin, Germany |
| March 16–March 17 | Chemnitzer Linux-Tage 2013 | Chemnitz, Germany |
| March 19–March 21 | FLOSS UK Large Installation Systems Administration | Newcastle-upon-Tyne, UK |
| March 20–March 22 | Open Source Think Tank | Calistoga, CA, USA |
| March 23 | Augsburger Linux-Infotag 2013 | Augsburg, Germany |
| March 23–March 24 | LibrePlanet 2013: Commit Change | Cambridge, MA, USA |
| March 25 | Ignite LocationTech Boston | Boston, MA, USA |
| March 30 | Emacsconf | London, UK |
| March 30 | NYC Open Tech Conference | Queens, NY, USA |
| April 1–April 5 | Scientific Software Engineering Conference | Boulder, CO, USA |
| April 4–April 5 | Distro Recipes | Paris, France |
| April 4–April 7 | OsmoDevCon 2013 | Berlin, Germany |
| April 6–April 7 | international Openmobility conference 2013 | Bratislava, Slovakia |
If your event does not appear here, please
tell us about it.
Page editor: Rebecca Sobol