In a CloudOpen Japan talk that included equal parts advocacy and information,
Rackspace's Muharem Hrnjadovic looked at OpenStack, one of the entrants in the
crowded open source "cloud" software derby. In the "tl;dr" that he
helpfully provided, Hrnjadovic posited that "cloud computing is the future" and
that OpenStack is the "cloud of the future". He backed those statements up
with lots of graphs and statistics, but the more interesting piece was the
introduction to what cloud computing is all about, as well as where
OpenStack fits in that landscape.
Just fashion?
Is "cloud computing" just a fashion trend, or is it something else, he
asked. He believes that it is no mere fashion, but that cloud computing
will turn the IT world "upside-down". To illustrate why, he put up a graph
from an Amazon presentation that showed how data centers used to be built
out. It was a step-wise function as discrete parts of the data center were
added to handle new capacity, with each step taking a sizable chunk of
capital. Overlaying that graph was the actual demand for the services, which
would sometimes be above the build-out (thus losing customers) or below it
(thus wasting money on unused capacity). The answer, he said, is elastic
capacity and the ability to easily increase or decrease the amount of
computation available based on the demand.
There are other reasons driving the adoption of cloud computing, he said.
The public cloud today has effectively infinite scale. It is also "pay as
you go", so you don't have sink hundreds of thousands of dollars into a
data center, you just get a bill at the end of the month. Cloud computing
is "self-service" in that one can get a system ready to use without going
through the IT department, which can sometimes take a long time.
Spikes in the need for capacity over a short period of time (like for a
holiday sale) are good uses of cloud resources, rather than building more
data center capacity to handle a one-time (or rare) event. Finally, by
automating the process of configuring servers, storage, and the like, a
company will become more efficient, so it either needs fewer people or can
retrain some of those people to "new tricks". Cloud computing creates a
"data center
with an API", he said.
OpenStack background
There are lots of reasons to believe that OpenStack is the cloud of the
future, Hrnjadovic said. OpenStack has been called the "Linux of the cloud"
because it is following the Linux growth path. In just three years,
support for OpenStack from companies in the IT sector has "exploded". It
was originally started by the US
National Aeronautics and Space Administration (NASA) and Rackspace, though
NASA eventually withdrew because OpenStack didn't fit with its organizational
goals. When that happened, an independent foundation was created to
"establish a level playing field". That made OpenStack into a credible
project, he said, which helped get more companies on board.
The project is "vibrant", with an active community whose size is
"skyrocketing". The graph of the number of contributors to OpenStack shows
the
classic "hockey stick" shape that is so pleasing to venture capitalists and
other investors. Some of those graphs come from this blog post. There were 500+
contributors to the latest "Grizzly"
release, which had twice as many changes as the "Essex" release one
year earlier. The contributor base is a "huge force", he said; "think of
what you could do with 500 developers at your disposal".
Where do these developers come from? Are they hobbyists? No, most of them
are earning their paycheck by developing OpenStack, Hrnjadovic said. When
companies enter the foundation, they have to provide developers to help
with the project, which is part of why the project is progressing so quickly.
Another indication of OpenStack's momentum is the demand for OpenStack
skills in the job market. Once again, that graph shows "hockey stick"
growth. Beyond that, Google Trends shows that OpenStack has good
mindshare, which means that if you want to use OpenStack, you will be able
to find answers to your
questions, he said.
OpenStack consists of more than 330,000 lines of Python code broken up into
multiple components. That includes the Nova compute component, various
components for storage
(block, image, and object), an identity component
for authentication and authorization, a network
management component, and a web-based
dashboard to configure and control the cloud resources.
There is an incubation process to add new components to OpenStack proper.
Two features went through the incubation process in the Grizzly cycle and
are now being integrated into OpenStack: Heat,
which is an orchestration service to specify and manage multi-tier
applications, and Ceilometer, which
allows measuring and metering resources. Several other projects (Marconi, Reddwarf, and Moniker) are in various
stages of the incubation process now. The project is "developing at a fast
clip", Hrnjadovic said.
There are a number of advantages that OpenStack has, he said. It is free,
so you don't have to ask anyone to start using it. It is also open source
(Apache licensed), so you "can look under the hood". It has a nice
community where everyone is welcomed. The project is moving fast, both in
squashing bugs and adding features. It is written in Python, which is "much
more expressive" than C or Java.
A revolution
"There are some early warning signs that what we have here is a revolution",
Hrnjadovic said. Cloud computing is an equalizer that allows individuals or startups
to "play the same games" as big companies. Because it has a low
barrier to entry, you can "bootstrap a startup on a very low budget".
Another sign that there is a revolution underway is that cloud computing is
disruptive; the server industry is being upended. He quoted Jim Zemlin's
keynote that for every $1 consumed in cloud services, there is $4 not being
spent on data centers. Beyond that, there is little or no waiting for
cloud servers, unlike physical servers that need to be installed in a data
center, which can take some time. Lastly, cloud technologies provide "new
possibilities" and allow doing things "you couldn't do otherwise".
In the face of a revolution, "you want to be on the winning side".
Obviously, Hrnjadovic thinks that is OpenStack, but many of his arguments in the
talk could equally apply to other open source cloud choices (Eucalyptus,
CloudStack, OpenNebula, ...).
These days, everything is scaling
horizontally (out) rather than vertically (up), because it is too expensive
to keep upgrading to more and more powerful servers. So, people are
throwing "gazillions" of machines—virtual machine instances, bare
metal, "whatever"—at the problems.
That many machines requires automation, he said. You can take care of five
machines without automating things, but you can't handle 5000 machines that
way.
Scaling out also implies "no more snowflakes". That means there are no special
setups for servers; they are all stamped out the same. An analogy he has
heard is that it is the difference between pets
and cattle. If a pet gets injured, you take it to the veterinarian to get
it fixed, but if one of a herd of cattle is injured, you "slaughter it
brutally and move on". That's just what you do with a broken server in the
cloud scenario; it "sounds brutal" but is the right approach.
Meanwhile, by picking OpenStack, you can learn about creating applications
on an "industrial
strength" operating system like Linux, learn how to automate repetitive
tasks with Chef or Puppet, and pick up a bit of Python programming along
the way. It is a versatile system that can be installed on anything from
laptops to servers and can be deployed as a public or private cloud.
Hybrid clouds are possible as well, where the base demand is handled by a
private cloud and any overage in demand is sent to the public cloud; a
recent slogan he has heard: "own the base and rent the spike".
Hrnjadovic finished with an example of "crazy stuff" that can be done with
OpenStack. A German company called AoTerra is selling
home heating systems that actually consist of servers running
OpenStack. It is, in effect, a distributed OpenStack cloud that uses its
waste heat to affordably heat homes. AoTerra was able to raise €750,000 via
crowd funding to create one of the biggest OpenStack clouds in Germany—and
sell a few heaters in the deal.
He closed by encouraging everyone to "play with" OpenStack. Developers,
users, and administrators would all be doing themselves a service by
looking at it.
[I would like to thank the Linux Foundation for travel assistance to Tokyo
for CloudOpen Japan.]
Comments (33 posted)
When one is trying to determine if there are compliance problems in a body
of
source code—either code from a device maker or from someone in the supply chain
for a device—the sheer number of files to consider can be a difficult
hurdle. A simple technique can reduce the search space
significantly, though it does require a bit of a "leap of faith", according
to Armijn Hemel. He presented his technique, along with a
case study and a war story or two, at LinuxCon Japan.
Hemel was a longtime core contributor to the gpl-violations.org project before retiring
to a volunteer role. He is currently using his compliance background
in his own company, Tjaldur
Software Governance Solutions, where he consults with clients on
license compliance issues. Hemel and Shane Coughlan also created the Binary Analysis Tool (BAT)
to look inside binary blobs
for possible compliance problems.
Consumer electronics
There are numerous license problems in today's consumer electronics market,
Hemel said. There are many products containing GPL code with no
corresponding
source code release. Beyond that, there are products with only a partial
release of the source code, as well as products that release the wrong
source code. He mentioned a MIPS-based device that provided kernel source
with a configuration file that chose the ARM architecture. There is no way
that code could have run on the device using that configuration, he said.
That has led to quite a few cases of license enforcement in various
countries, particularly Germany, France, and the US. There have been many
cases handled by gpl-violations.org in Germany, most of which were settled
out of court. Some went to court and the copyright holders were always
able to get a judgment upholding the GPL. In the US, it is the Free
Software Foundation, Software
Freedom Law Center, and Software Freedom
Conservancy that have been
handling the GPL enforcement.
The origin of the license issues in the consumer electronics space is the
supply chain. This chain can be quite long, he said; one he was involved
in was four or five layers deep and he may not have reached the end of it.
Things can go wrong at each step in the supply chain as software gets
added, removed, and changed. Original design manufacturers (ODMs) and
chipset vendors are notoriously sloppy, though chipset makers are slowly
getting better.
Because it is a "winner takes all" market, there is tremendous pressure to
be faster than the competition in supplying parts for devices. If a vendor
in the supply chain can deliver a few days earlier than its competitors at
the same price point, it can dominate. That leads to companies cutting
corners. Some do not know they are violating licenses, but others do not
care that they are, he said. Their competition is doing the same thing and
there is a low chance of getting caught, so there is little incentive to
actually comply with the licenses of the software they distribute.
Amount of code
Device makers get lots of code from all the different levels of
the supply chain and they need to be able to determine whether the licenses
on that code are being followed.
While business relationships should be based on trust, Hemel said, it is
also important to verify the code that is released with an incorporated
part. Unfortunately, the number of files being distributed can make that
difficult.
If a company receives a letter from a lawyer requesting a
response or fix
in two weeks, for example, the sheer number of files might make that
impossible to do.
For example, BusyBox, which is often distributed with embedded systems, is
made up of 1700 files. The kernel used by Android has grown
from 30,000 files (Android 2.2 "Froyo") to 36,000 (Android 4.1 "Jelly
Bean")—and the 3.8.4 kernel has
41,000 files. Qt 5 is 62,000 files. Those are just some of the
components on a device; when you add it all up, an
Android system consists of "millions of files in total", he said. The
number of lines of code in just the C source files is similarly eye-opening,
with 255,000 lines in BusyBox and 12 million in the 3.8.4 kernel.
At LinuxCon Europe in 2011, the
long-term support initiative was
announced. As part of that, the Yaminabe
project to detect duplicate work in the kernel was also introduced.
That project focused on the changes that various companies were making to
the kernel, so it ignored all files that were unchanged from the upstream
kernel sources as "uninteresting". It found that 95% of the source code
going into Android handsets was unchanged. Hemel realized that the same
technique could be applied to make compliance auditing easier.
Hemel's method starts with a simple assumption: everything that an upstream
project has published is safe, at least from a compliance point of view.
Compliance audits should focus on those files that aren't from an
upstream distribution. This is not a mechanism to find code snippets that
have been copied into the source (and might be dubious, license-wise),
as there are clone detectors for that purpose. His method can be used as a
first-level pre-filter, though.
Why trust upstream?
Trusting the upstream projects can be a little bit questionable from a license
compliance perspective. Not all of them are diligent about the license on
each and every file they distribute. But the project members (or the
project itself) are the copyright holders and the project chose its
license. That means that only the project or its contributors can sue for
copyright infringement, which is something they are unlikely to do on files
they distributed.
Most upstream code is used largely unmodified, so using upstream projects
as a reference makes sense, but you have to choose which upstreams
to trust. For example, the Linux kernel is a "high trust" upstream, Hemel
said, because of its development methodology, including the developer's
certificate of origin and the "Signed-off-by" lines that accompany
patches. There is still some kernel code that is licensed as GPLv1-only,
but there is "no chance" you will get sued by Linus Torvalds, Ted Ts'o, or
other early kernel developers
over its use, he said.
BusyBox is another high trust project as it has been the subject of various
highly visible court cases over the years, so any license oddities have
been shaken out. Any code from the GNU project is also code that he treats
as safe.
On the other hand, the central repository for the Maven build tool for Java
is an example of a low- or no-trust upstream. It is an "absolute mess" that
has become a dumping ground for Java code, with unclear copyrights, unclear
code origins, and so on. Hemel "cannot even describe how bad" the Maven
central repository is; it is a "copyright time bomb waiting to explode", he said.
For his own purposes, Hemel chooses to put a lot of trust in upstreams like
Samba, GNOME, or KDE, while not putting much in projects that pull in a
lot of upstream code, like OpenWRT, Fedora, or Debian. The latter two are
quite diligent about the origin and licenses of the code they distribute, but he
conservatively chooses to trust upstream projects directly, rather than
projects that collect code from many other projects.
Approach
So, his approach is simple and straightforward: generate a database of
source code file checksums (really, SHA256 hashes) from upstream projects.
When faced with a large body of code with unknown origins, the SHA256 of
the files is computed and compared to the database. Any that are in the
database can be ignored, while those that don't match can be analyzed or further
scanned.
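The core of that approach is simple enough that a minimal sketch fits in a
few lines of Python. This is an illustration of the idea only, not the
actual BAT code; the paths are hypothetical:

    # Illustrative sketch of the hash-based pre-filter (not the BAT code).
    # Build a set of SHA256 hashes from known-good upstream trees, then
    # flag only those files in an unknown tree whose hashes are not known.
    import hashlib
    import os

    def sha256_of(path):
        h = hashlib.sha256()
        with open(path, 'rb') as f:
            for chunk in iter(lambda: f.read(1 << 20), b''):
                h.update(chunk)
        return h.hexdigest()

    def hash_tree(top):
        hashes = set()
        for dirpath, _dirnames, filenames in os.walk(top):
            for name in filenames:
                hashes.add(sha256_of(os.path.join(dirpath, name)))
        return hashes

    # "Database" built from a trusted upstream release (hypothetical path).
    known = hash_tree('/srv/upstream/linux-3.8.4')

    # Scan a vendor source drop; anything not matching needs a closer look.
    for dirpath, _dirnames, filenames in os.walk('/srv/vendor/gpl-drop'):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if sha256_of(path) not in known:
                print('needs review:', path)

A real database would cover many releases of many upstream projects and be
stored persistently rather than rebuilt on every run, but the filtering
step itself is just this kind of membership test.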
In terms of reducing the search space, the method is "extremely effective",
Hemel said. It takes about ten minutes for a scan of a recent kernel, which
includes running Ninka and FOSSology on source
files that do not match the hashes in the database. Typically, he finds that
only 5-10% of files are modified, so the search space is quickly reduced by
90% or more.
There are some caveats.
Using the technique requires a "leap of faith" that the upstream is doing
things well, and not every upstream is worth trusting. A good database that
contains multiple upstream versions is time-consuming to create and to keep up to
date. In addition, it cannot help with non-source-related compliance
problems (e.g. configuration files). But it is a good tool to help prioritize
auditing efforts, even if the upstreams are not treated as trusted.
He has used the technique for Open
Source Automation Development Lab (OSADL) audits and for other
customers with great success.
Case study
Hemel presented something of a case study that looked at the code on a
Linux-based router made by a "well-known Chinese router manufacturer". The
wireless chip came from a well-known chipset vendor as well. He looked at
three components of the router: the Linux kernel, BusyBox, and the U-Boot
bootloader.
The kernel source had around 25,000 files, of which just over 900 (or 4%)
were not found in any kernel.org kernel version. 600 of those turned out
to be just changes made by the version control system (CVS/RCS/Perforce
version numbers, IDs, and the like). Some of what was left were
proprietary files from the chipset or device manufacturers. Overall, just
300 files (1.8%) were left to look at
more closely.
For BusyBox, there were 442 files and just 62 (14%) that were not in the
database. The changed files were mostly just version control identifiers
(17 files), device/chipset files, a modified copy of bridge-utils, and a
few bug fixes.
The situation was much the same for U-Boot: 2989 files scanned with 395
(13%) not in the database. Most of those files were either chipset vendor
files or ones with Perforce changes, but there were several with different
licenses than the GPL (which is what U-Boot uses). But there is also a
file with the text: "Permission
granted for non-commercial use"—not something that the router could
claim. As it turned out, the file was just present in the U-Boot directory
and was not used in the binary built for the device.
Scripts to create the database are available in BAT version
14; a basic scanning script is coming in BAT 15, but it is already
available in the Subversion
repository for the project. Fancier tools are available to Hemel's
clients, he said. One obvious opportunity for collaboration, which did not
come up in the talk, would be to collectively create and maintain a
database of hash values for high-profile projects.
How to convince the legal department that this is a valid approach was the
subject of some discussion at the end of the talk. It is a problem, Hemel
said, because legal teams may not feel confident about the technique even
though it is a "no brainer" for developers. Another audience member suggested
that giving examples of others who have successfully used the technique is
often the
best way to make the lawyers comfortable with it. Also, legal calls, where
lawyers can discuss the problem and possible solutions with other lawyers
who have already been down that path, can be valuable.
Working with the upstream projects to clarify any licensing ambiguities is
also useful. It can be tricky to get those projects to fix files with an
unclear license, especially
when the project's intent is clear. In many ways, "git pull"
(and similar commands) has made it much easier to pull in code from
third-party projects, but sometimes that adds complexity on the legal side.
That is something that can be overcome with education and working with
those third-party projects.
[I would like to thank the Linux Foundation for travel assistance to Tokyo
for LinuxCon Japan.]
Comments (6 posted)
At Texas Linux Fest 2013 in Austin, Rikki Endsley from the USENIX
Association spoke about a familiar topic—diversity in
technology companies and software projects—but from a different
angle. Specifically, she looked at how companies recruit new team
members, and the sorts of details that can unintentionally keep
applicants away. Similarly, there are practices that companies can
engage in to help them retain more of their new hires, particularly
those that come from a different background than their co-workers.
A lot of what Endsley said was couched in terms of "hiring," but
she said that it applies to recruiting volunteers to open source
projects as well. As most people are aware, demographic diversity in
technical fields is lower than in the population at large, she said,
and it is particularly low in free software projects. Of course,
these days paid employees do a large share of the work on free
software projects; for companies that manage or produce open source
code, the diversity problem is indeed one of finding, hiring, and
retaining people.
Everyone understands the value of hiring a diverse team, Endsley
said, but a fairly common refrain in technology circles is "we don't
have any women on our team because none applied." Obviously there are
women out there, she noted; the challenge is just to make sure that
they know about one's company and its job opportunities. This can be a
problem in any scientific and engineering field, she said, but it is
particularly troublesome in open source, where the demand for
developers already exceeds the supply. In a job-seeker's market,
companies need to "sell" their company to the employee, not
vice-versa, so if your company is not getting the applicants it would
like to see, you ought to look closely at how you sell yourself, and
be adaptable.
Endsley said that she did not have all of the answers to how to
recruit more diverse applicants, but she did at least have a number of
things that a concerned company could try. Most of her observations
dealt directly with recruiting women, but she said that the advice
applied in general to other demographics as well. She offered
examples that addressed other diversity angles, including ethnicity
and age.
The hunt
Recruiting really begins with identifying what a company needs, she
said. It is tempting to come up with a terse notion of what the new
recruit will do (e.g., "a Python programmer"), but it is better to
consider other facets of the job: representing the company at events,
helping to manage teams and projects, etc. The best plan, though, is
to come up with not one, but three or four "talent profiles," then go
out and change recruiting practices to find the people that fit.
Where one looks for new talent is important. Not everyone who
meets the talent profile is reading job board sites like Monster.com.
Companies can find larger and more diverse pools of potential talent
at events like trade shows and through meetups or personal networking
groups.
In short, "think about where people engage" and go there. After all,
not everyone that you might want to hire is out actively looking for a
job. It can also help to reach out on social networks (where, Endsley
noted, it is the "word of mouth" by other people spreading
news that your company is hiring that offers the real value) and to
create internship programs.
Apart from broadening the scope of the search, Endsley said that a
company's branding can greatly influence who responds to job ads.
Many startups, she said, put a lot of emphasis on the corporate
culture—particularly being the "hip" place to work and having games
and a keg in the break room. But that image only appeals to a narrow
slice of potential recruits. What comes across as hip today is only
likely to appeal to Millennials, not
to those in Generation X or
earlier. In contrast, she showed Google's recruiting slogan, "Do cool
things that matter." It is simple and, she said, "who doesn't want to
do cool things that matter?"
Companies should also reconsider the criteria that they post for
their open positions, she said. She surveyed a number of contacts
in the technology sector and asked them what words they found to be a
turn-off in job ads. On the list of negatives were "rock star,"
"ninja," "expert," and "top-notch performer." The slang terms again
appeal only to a narrow age range, while the survey respondents said
all of them suggest an atmosphere where "all my colleagues will be
arrogant jerks." Similarly, the buzzwords "fast-paced and dynamic"
were often interpreted to mean "total lack of work/life balance and
thoughtless changes in direction." The term "passionate" suggested
coworkers likely to lack professionalism and argue loudly, while the
phrase "high achievers reap great rewards" suggested back-stabbing
coworkers ready to throw you under the bus to get ahead.
Endsley showed a number of real-world job ads (with the names of
the companies removed, of course) to punctuate these points. There
were many that used the term "guys" generically or "guys and gals", which
she said would not turn off all female applicants, but would
reasonably turn off quite a few. There were plenty of laughably bad
examples, including one ad that devoted an entire paragraph to
advertising the company's existing diversity—but did so by
highlighting various employees' interests in fishing, motorcycle-racing,
and competitive beard-growing. Another extolled the excitement of
long work days "in a data center with a rowdy bunch of guys."
Honestly, Endsley observed, "that's really not even going to appeal
to many other guys."
Onboarding and retention
After successfully recruiting an employee, she said, there is still
"onboarding" work required to get the new hire adjusted to the
company, engaged in
the job, and excited about the work. Too often, day one involves
handing the new hire an assignment and walking away. That is
detrimental because research shows that most new hires decide within a
matter of months whether or not they want to stay with a company long
term (although Endsley commented that in the past she has decided
within a few hours that a new company was not for her).
She offered several strategies to get new hires acclimated and
connected early. One is to rotate the new employee through the whole
company a few days or weeks at a time before settling into a permanent
team. This is particularly helpful for a new hire who is in the
minority at the office; for instance, the sole female engineer on a
team would get to meet other women in other teams that she otherwise
might not get to know at all. Building those connections makes the
new hire more likely to stay engaged. It is also helpful to get the
new hire connected to external networks, such as going to conferences
or engaging in meetups.
Retaining employees is always an area of concern, and Endsley
shared several strategies for making sure recent hires are
happy—because once an at-risk employee is upset, the chances are
much higher that the company has already lost the retention battle.
One idea is to conduct periodic motivation checks; for example, in the
past USENIX has asked her what it would take for her to leave for
another job. Checks like these need to be done more than once, she
noted, since the factors that determine whether an employee stays or
leaves change naturally over time. Companies can also do things to
highlight the diversity of their existing employees, she said; Google
is again a good example of doing this kind of thing right, holding
on-campus activities and events to celebrate different employees'
backgrounds, and cultivating meetup and interest groups.
Another important strategy is to have a clear and fair reward
system in place. No one likes finding out that a coworker makes more
money for doing the same work solely because they negotiated
differently during the interview. And it is important that there be
clear ways to advance in the company. If developers cannot advance
without shifting into management, they may not want to stay. Again,
most of these points are valuable for all employees, but their impact
can be greater on an employee who is in the minority—factors
like "impostor syndrome" (that is, the feeling that everyone else in
the group is more qualified and will judge you negatively) can be a
bigger issue for an employee who is
already the only female member of the work group.
The audience asked quite a few questions at the end of the
session. One was from a man who had concerns that hiring for
diversity can come across as hiring a token member of some demographic
group. Endsley agreed that it can certainly be interpreted that way—if done wrong. But her point
was not to give advice to someone who would think "I need two more
women on my team," but to someone who is interested in hiring from a
diverse pool of applicants. That is, someone who says "I have no
women on my team, and none are applying; what am I doing wrong?" Most
people these days seem to agree on the benefits of having a diverse
team, but most people still have plenty of blind spots that can be
improved upon. But with demand for developers working on open source
code exceeding supply, successfully reaching the widest range of
possible contributors is a wise move anyway.
Comments (15 posted)
Page editor: Jonathan Corbet
Security
At the 2013 Tizen
Developer Conference (TDC) in San Francisco, several sessions
dealt with the challenges of implementing security on mobile phones
and other consumer electronics devices. Among the security sessions, Casey
Schaufler and Michael Demeter described their work applying Smack to Tizen, including the
effort required to develop a sensible set of security domains that are
not too complicated to maintain.
Bringing the Smack down
Schaufler and Demeter presented a session that can only be
described as part talk, part vaudeville act—complete with props,
audience participation, and Three Stooges clips. They began with a
brief overview of Smack itself, but quickly moved on to how it should
be used; that is, how application developers should determine what sorts
of Smack rules their code needs. They also discussed the difficulty
of finding the right system-level policies for a platform like
Tizen, and presented case studies comparing two
approaches—including the one they officially adopted.
Smack implements mandatory access control, they explained, starting
with the basic rule "what's mine is mine; what's yours is yours."
That rule, of course, means that no processes can talk to each other,
so working with Smack in practice is a matter of defining the right
exceptions to that basic premise, which are known as Smack rules.
Each Smack rule grants a particular outside party one "access mode" on an object. To
illustrate the key exceptions needed by applications, Schaufler and
Demeter went through a series of Abbott and Costello–esque
routines mimicking processes trying (usually unsuccessfully) to share
data via a notebook, while arguing about what permissions were needed. They
then pulled a volunteer from the audience on stage, and mimicked
processes trying to exchange packets by tossing rubber balls from one
side of the stage to the other.
For example, Schaufler would announce that he had write access to
send a packet to the audience member on the other end of the stage,
then throw the ball, at which point Demeter would shout
"Smack!" and slap it to the crowd mid-flight, explaining that the
audience member also needed read access in order for the packet to be
delivered. With read and write permissions for files and network
message-passing thus artfully explained, the two moved on to the
problem of granularity in Smack rule sets, which they illustrated with
an attempt to build a tower out of dice.
The issue is that the identifiers used to distinguish "mine" and
"yours" in Smack are simple text labels, and generally the identifier used
is chosen to be the same as the name of the application's main
process. In Tizen, they explained, installable
HTML5 applications declare their Smack identifier in their .manifest
file. The Tizen application installer is a privileged process, and
sets up the Smack identifier when it installs the app. The manifest
essentially allows each application to create its own unique Smack
domain. But that approach quickly becomes unworkable because using a
domain for every application means each application must specify Smack
rules for every other application on the system—or, at least,
the sheer number of Smack rules can grow that fast. Too many rules to
manage means more mistakes, and more resources used up on the system.
However, Schaufler explained, there are about a dozen different
definitions of what a "security domain" is, so trying to reduce the
number of Smack rules by grouping applications and services into fewer
domains is not trivial. They described a "case study" of an unnamed
mobile OS that used the one-domain-per-app model (an approach which had
previously been announced as the plan for Tizen). In that "naive"
setup, just 600 applications generated more than 20,000 Smack rules.
Furthermore, there were numerous rules in the rule set that were
obviously bad (such as apps that wanted write access to the system
clock). And, by and large, the huge corpus of rules simply sketched
out an "everybody shares with everybody else" platform, which is
functionally equivalent to not having the one-domain-per-app
configuration, but is unmaintainable.
Less is more
Considering these results, eventually the Tizen security team
decided to reduce the granularity of the Smack configuration problem
as much as possible, primarily by putting all end-user apps into a
single security domain by default, though individual apps can
still create a domain of their own if necessary.
The concept stratifies the system into three levels. The lowest is
called the "floor" domain, which includes those filesystem locations that
every process must have read access to: /, /dev/,
/bin/sh, system libraries, and so on; read-only access is
available to all applications. Above "floor" sits the "system"
domain, which is reserved for system processes like systemd, D-Bus,
and DNS, which need read/write access to platform resources. The
"application" domain sits at level furthest from the floor; by default
all apps belong to this domain and can exchange messages and files.
It is possible that the team will add another domain specifically for
telephony apps, they said, but they are being cautious about any such
expansions. The three-tiered system still requires a set of Smack
rules to define access to "system" domain items; there will just be
fewer of them required.
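To make the rule format concrete, Smack rules are simple "subject-label
object-label access-modes" triples; the rules below are an illustrative
sketch only, not Tizen's actual policy, and the labels are hypothetical:

    # subject      object       modes  (r=read, w=write, x=execute,
    #                                   a=append, t=transmute)
    System         Application  rwxat  # system services may manage app files
    Application    System       rw     # apps may talk to system services

Two processes carrying the same label (for example, two apps in the shared
"application" domain) can always access each other's objects without any
rule at all, which is exactly why collapsing all apps into one domain
shrinks the rule set so dramatically; the floor domain corresponds to
Smack's built-in "_" label, to which read and execute access is always
granted.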
Any application can still declare its own Smack domain,
so those applications that need to protect data (such as a password
storage app) can do so. The decision to lump all end-user apps into
one domain shifts more of the responsibility for protecting the device
onto the maintainers of the application store. Defining how any
device vendor implements its store is out of scope for Tizen itself,
although the project does discuss
many of the angles in its documentation. The most common scenario is
expected to be an app creating a domain of its own (and listing the
access rules for other apps); related apps can share a single Smack
domain, with reviewers at the vendor's application store testing
submissions to find apps that ask for questionable access.
The upcoming Tizen 3.0 release will be the first to implement the
three-domain system, and will include a base set of Smack rules. The
Tizen 2.1 release used much finer granularity; its rule set contains
41,000 access rules. At this point the focus is on making the default
rule set "more correct," with "smaller" still to come, but it is
already considerably smaller and easier to understand than the
competition. Schaufler and Demeter said that while Tizen 2.1's Smack
rule set contained 41,000 rules, the SELinux reference policy is over
900,000 lines.
[The author wishes to thank the Linux Foundation for travel
assistance to Tizen Dev Con.]
Comments (21 posted)
Brief items
One thing is pretty much certain, however. Passwords as we've traditionally known them are on the way out. They are doomed. The sooner we're rid of them, the better off we're all going to be.
Especially if your password is "12345" ...
—
Lauren Weinstein
There's another, more strategic reason why wholesale Internet
disconnections are pretty unlikely in Turkey. Turkey's international
telecommunications networks play a key role in interconnecting Syria, Iraq,
Iran, Georgia, Azerbaijan, and the Gulf States to the greater Internet.
Turkey's domestic Internet market is large, but the international market
whose consumers could be reached by Turkish-hosted content is even larger.
Turkey finds itself at a decision point today: take the necessary steps to
encourage large content providers to host Middle Eastern content in Turkey,
and reap the benefits of becoming the regional Internet hub, or let that
opportunity pass.
—
Jim Cowie
in the Renesys blog
Could Google tip an election by manipulating what comes up from search
results on the candidates?
[...] Turns out that it could. And, it wouldn't even be illegal for Google
to do it.
—
Bruce
Schneier
To register their vote on-line, Parisians were supposed to make a
credit-card payment of €3 and give the name and address of someone on the
city's electoral roll. Metronews said that one of its journalists had
managed to vote five times, paying with the same credit card, using names,
including that of Nicolas Sarkozy.
—
The
Independent
Comments (20 posted)
New vulnerabilities
kernel: code execution
Package(s): Linux kernel
CVE #(s): CVE-2013-2850
Created: May 31, 2013
Updated: July 1, 2013
Description:
From the SUSE advisory:
CVE-2013-2850: Incorrect strncpy usage in the network
listening part of the iscsi target driver could have been
used by remote attackers to crash the kernel or execute
code.
This required the iscsi target running on the machine
and the attacker able to make a network connection to it
(aka not filtered by firewalls).
Comments (3 posted)
libtirpc: denial of service
Package(s): libtirpc
CVE #(s): CVE-2013-1950
Created: May 31, 2013
Updated: June 5, 2013
Description:
From the Red Hat advisory:
A flaw was found in the way libtirpc decoded RPC requests. A
specially-crafted RPC request could cause libtirpc to attempt to free a
buffer provided by an application using the library, even when the buffer
was not dynamically allocated. This could cause an application using
libtirpc, such as rpcbind, to crash. (CVE-2013-1950)
Comments (none posted)
mesa: code execution
Package(s): mesa
CVE #(s): CVE-2013-1872
Created: June 4, 2013
Updated: July 17, 2013
Description:
From the Red Hat advisory:
An out-of-bounds access flaw was found in Mesa. If an application using
Mesa exposed the Mesa API to untrusted inputs (Mozilla Firefox does
this), an attacker could cause the application to crash or, potentially,
execute arbitrary code with the privileges of the user running the
application.
Comments (none posted)
nagios-plugins: should be built with PIE flags
Package(s): nagios-plugins
CVE #(s):
Created: June 3, 2013
Updated: June 5, 2013
Description:
From the Red Hat bugzilla:
http://fedoraproject.org/wiki/Packaging:Guidelines#PIE says that "you MUST
enable the PIE compiler flags if your package has suid binaries...".
However, currently nagios-plugins is not being built with PIE flags. This is a clear violation of the packaging guidelines.
Comments (none posted)
python-keystoneclient: PKI token expiration botch
Package(s): python-keystoneclient
CVE #(s): CVE-2013-2104
Created: June 4, 2013
Updated: August 12, 2013
Description:
From the Ubuntu advisory:
Eoghan Glynn and Alex Meade discovered that python-keystoneclient did not
properly perform expiry checks for the PKI tokens used in Keystone. If
Keystone were setup to use PKI tokens (the default in Ubuntu 13.04), a
previously authenticated user could continue to use a PKI token for longer
than intended.
Comments (none posted)
qemu-kvm: unauthorized file access
Package(s): qemu-kvm
CVE #(s): CVE-2013-2007
Created: June 4, 2013
Updated: July 15, 2013
Description:
From the CVE entry:
The qemu guest agent in Qemu 1.4.1 and earlier, as used by Xen, when started in daemon mode, uses weak permissions for certain files, which allows local users to read and write to these files.
Comments (none posted)
slock: should be built with PIE flags
Package(s): slock
CVE #(s):
Created: June 5, 2013
Updated: June 5, 2013
Description:
From the Red Hat bugzilla:
http://fedoraproject.org/wiki/Packaging:Guidelines#PIE says that "you MUST
enable the PIE compiler flags if your package has suid binaries...".
However, currently slock is not being built with PIE flags. This is a
clear violation of the packaging guidelines.
Comments (none posted)
telepathy-gabble: man-in-the-middle attack
Package(s): telepathy-gabble
CVE #(s): CVE-2013-1431
Created: June 4, 2013
Updated: June 18, 2013
Description:
From the Debian advisory:
Maksim Otstavnov discovered that the Wocky submodule used by
telepathy-gabble, the Jabber/XMPP connection manager for the Telepathy
framework, does not respect the tls-required flag on legacy Jabber
servers. A network intermediary could use this vulnerability to bypass
TLS verification and perform a man-in-the-middle attack.
Comments (none posted)
transifex-client: no HTTPS certificate validation
Package(s): transifex-client
CVE #(s): CVE-2013-2073
Created: June 3, 2013
Updated: June 5, 2013
Description:
From the Fedora advisory:
transifex-client does not validate HTTPS server certificate. Fixed in version 0.9.
Comments (none posted)
wireshark: multiple vulnerabilities
Package(s): wireshark
CVE #(s): CVE-2013-3555, CVE-2013-3557, CVE-2013-3558, CVE-2013-3559, CVE-2013-3560, CVE-2013-3562
Created: June 3, 2013
Updated: September 30, 2013
Description:
From the CVE entries:
epan/dissectors/packet-gtpv2.c in the GTPv2 dissector in Wireshark 1.8.x before 1.8.7 calls incorrect functions in certain contexts related to ciphers, which allows remote attackers to cause a denial of service (application crash) via a malformed packet. (CVE-2013-3555)
The dissect_ber_choice function in epan/dissectors/packet-ber.c in the ASN.1 BER dissector in Wireshark 1.6.x before 1.6.15 and 1.8.x before 1.8.7 does not properly initialize a certain variable, which allows remote attackers to cause a denial of service (application crash) via a malformed packet. (CVE-2013-3557)
The dissect_ccp_bsdcomp_opt function in epan/dissectors/packet-ppp.c in the PPP CCP dissector in Wireshark 1.8.x before 1.8.7 does not terminate a bit-field list, which allows remote attackers to cause a denial of service (application crash) via a malformed packet. (CVE-2013-3558)
epan/dissectors/packet-dcp-etsi.c in the DCP ETSI dissector in Wireshark 1.8.x before 1.8.7 uses incorrect integer data types, which allows remote attackers to cause a denial of service (integer overflow, and heap memory corruption or NULL pointer dereference, and application crash) via a malformed packet. (CVE-2013-3559)
The dissect_dsmcc_un_download function in epan/dissectors/packet-mpeg-dsmcc.c in the MPEG DSM-CC dissector in Wireshark 1.8.x before 1.8.7 uses an incorrect format string, which allows remote attackers to cause a denial of service (application crash) via a malformed packet. (CVE-2013-3560)
Multiple integer signedness errors in the tvb_unmasked function in epan/dissectors/packet-websocket.c in the Websocket dissector in Wireshark 1.8.x before 1.8.7 allow remote attackers to cause a denial of service (application crash) via a malformed packet. (CVE-2013-3562)
Comments (none posted)
xmp: code execution
Package(s): xmp
CVE #(s): CVE-2013-1980
Created: May 31, 2013
Updated: June 5, 2013
Description:
From the Red Hat bug report:
A heap-based buffer overflow flaw was found in the way xmp, the extended module player, a modplayer for Unix-like systems that plays over 90 mainstream and obscure module formats, loaded certain Music And Sound Interface (MASI) files. A remote attacker could provide a specially-crafted MASI media file that, when opened, would lead to xmp binary crash or, potentially, arbitrary code execution with the privileges of the user running the xmp executable.
Comments (none posted)
Page editor: Jake Edge
Kernel development
Brief items
The current development kernel is 3.10-rc4,
released on June 2. Linus says:
"
Anyway, rc4 is smaller than rc3 (yay!). But it could certainly be
smaller still (boo!). There's the usual gaggle of driver fixes (drm,
pinctrl, scsi target, fbdev, xen), but also filesystems (cifs, xfs, with
small fixes to reiserfs and nfs)."
Stable updates: 3.2.46 was released
on May 31.
Comments (none posted)
Our review process is certainly not perfect when you have to wait
for stuff to break in linux-next before you get people to notice
the problems.
—
Arnd Bergmann
I have recently learned, from a very reliable source, that ARM
management seriously dislikes the Lima driver project. To put it
nicely, they see no advantage in an open source driver for the
Mali, and believe that the Lima driver is already revealing way too
much of the internals of the Mali hardware. Plus, their stance is
that if they really wanted an open source driver, they could simply
open up their own codebase, and be done.
Really?
—
Luc Verhaegen
Comments (1 posted)
Kernel development news
By Jonathan Corbet
June 5, 2013
The kernel's block layer is charged with managing I/O to the system's block
("disk drive") devices. It was designed in an era when a high-performance
drive could handle hundreds of I/O operations per second (IOPs); the fact
that it tends to fall down with modern devices, capable of handling
possibly millions of IOPs, is thus not entirely surprising. It has been
known for years that significant changes would need to be made to enable
Linux to perform well on fast solid-state devices. The shape of those
changes is becoming clearer as the multiqueue block layer patch set,
primarily the work of Jens Axboe and Shaohua Li, gets closer to being ready
for mainline merging.
The basic structure of the block layer has not changed a whole lot since it
was described for 2.6.10 in Linux Device
Drivers. It offers two ways for a block driver to hook into the
system, one of which is the "request" interface. When run in this mode,
the block layer maintains a simple request queue; new I/O requests are
submitted to the tail of the queue and the driver receives requests from
the head. While requests sit in the queue, the block layer can operate on
them in a number of ways: they can be reordered to minimize seek
operations, adjacent requests can be coalesced into larger operations, and
policies for fairness and bandwidth limits can be applied, for example.
This request queue turns out to be one of the biggest bottlenecks in the
entire system. It is protected by a single lock which, on a large system,
will bounce frequently between the processors. It is a linked list, a
notably cache-unfriendly data structure especially when modifications must
be made —
as they frequently are in the block layer. As a result, anybody who is
trying to develop a driver for high-performance storage devices wants to do
away with this request queue and replace it with something better.
The second block driver mode — the "make request" interface — allows a
driver to do exactly that. It hooks the driver into a much higher part
of the stack, shorting out the request queue and handing I/O requests
directly to the driver. This interface was not originally intended for
high-performance drivers; instead, it is there for stacked drivers (the MD
RAID implementation, for example) that need to process requests before
passing them on to the real, underlying device. Using it in other
situations incurs a substantial cost: all of the other queue processing
done by the block layer is lost and must be reimplemented in the driver.
The multiqueue block layer work tries to fix this problem by adding a third
mode for drivers to use. In this mode, the request queue is split into a
number of separate queues:
- Submission queues are set up on a per-CPU or per-node basis. Each CPU
submits I/O operations into its own queue, with no interaction with the
other CPUs. Contention for the submission queue lock is thus
eliminated (when per-CPU queues are used) or greatly reduced (for
per-node queues).
- One or more hardware dispatch queues simply buffer I/O requests for
the driver.
While requests are in the submission queue, they can be operated on by the
block layer in the usual manner. Reordering of requests for locality
offers little or no benefit on solid-state devices; indeed, spreading
requests out across the device
might help with the parallel processing of requests. So reordering will
not be done, but coalescing requests will reduce the total number of I/O
operations, improving performance somewhat. Since the submission queues
are per-CPU, there is no way to coalesce requests submitted to different
queues. With no empirical evidence whatsoever, your editor would guess
that adjacent requests are most likely to come from the same process and,
thus, will automatically find their way into the same submission queue, so
the lack of cross-CPU coalescing is probably not a big problem.
The block layer will move requests from the submission queues into the
hardware queues up to the maximum number specified by the driver. Most
current devices will have a single hardware queue, but high-end devices
already support multiple queues to increase parallelism. On such a device,
the entire submission and completion path should be able to run on the same
CPU as the process generating the I/O, maximizing cache locality (and,
thus, performance). If desired, fairness or bandwidth-cap policies can be
applied as requests move to the hardware queues, but there will be an
associated performance cost. Given the speed of high-end devices, it may
not be worthwhile to try to ensure fairness between users; everybody should
be able to get all the I/O bandwidth they can use.
This structure makes the writing of a high-performance block driver
(relatively) simple. The driver provides a queue_rq() function to
handle incoming requests and calls back to the block layer when requests
complete. Those wanting to look at an example of how such a driver would
work can see null_blk.c in the
new-queue branch of Jens's block repository:
git://git.kernel.dk/linux-block.git
In the current patch set, the multiqueue mode is offered in addition to the
existing two modes, so current drivers will continue to work without
change. According to this
paper on the multiqueue block layer design [PDF], the hope is that drivers will
migrate over to the multiqueue API, allowing the eventual removal of the
request-based mode.
This patch set has been significantly reworked in the last month or so; it
has gone from a relatively messy series into something rather
cleaner.
Merging into the mainline would thus appear to be on the agenda for the
near future. Since use of this API is optional, existing drivers should
continue to work and this merge could conceivably happen as early as 3.11.
But, given that the patch set has not yet been publicly posted to any
mailing list and does not appear in linux-next, 3.12 seems like a more
likely target. Either way, Linux seems likely to have a much better block
layer by the end of the year or so.
Comments (10 posted)
By Jonathan Corbet
June 5, 2013
A visit from the kernel's out-of-memory (OOM) killer is usually about as
welcome as a surprise encounter with the tax collector. The OOM killer is
called in when the system runs out of memory and cannot progress without
killing off one or more processes; it is the embodiment of a
frequently-changing set of heuristics describing which processes can be killed for
maximum memory-freeing effect and minimal damage to the system as a whole.
One would not think that this would be a job that is amenable to handling
in user space, but there are some users who try to do exactly that, with
some success. That said, user-space OOM handling is not as safe as some users
would like, but there is not much consensus on how to make it more robust.
User-space OOM handling
The heaviest user of user-space OOM handling, perhaps, is Google. Due to
the company's desire to get the most out of its hardware, Google's internal
users tend to be packed
tightly into their servers. Memory control groups (memcgs) are used to
keep those users from stepping on each others' toes. Like the system as a
whole, a memcg can go into the OOM condition, and the kernel responds in
the same way: the OOM killer wakes up and starts killing processes in the
affected group. But, since an OOM situation in a memcg does not threaten
the stability of the system as a whole, the kernel allows a bit of
flexibility in how those situations are handled. Memcg-level OOM killing
can be disabled altogether, and there is a mechanism by which a user-space
process can request notification when a memcg hits the OOM wall.
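As a rough sketch of what that notification interface looks like from user
space, the following uses the cgroup-v1 memory controller's eventfd-based
mechanism; the memcg path is hypothetical and Python 3.10 or later is
assumed for os.eventfd():

    # Minimal sketch of registering a user-space OOM handler for one memcg,
    # using the cgroup-v1 memory controller's eventfd notification interface.
    # The cgroup path is hypothetical; run with appropriate privileges.
    import os

    MEMCG = '/sys/fs/cgroup/memory/myjob'       # hypothetical memcg

    # Optionally disable the kernel's per-memcg OOM killer so that user
    # space gets a chance to handle the situation itself.
    with open(os.path.join(MEMCG, 'memory.oom_control'), 'w') as f:
        f.write('1')

    # Register for OOM notifications: create an eventfd and tie it to
    # memory.oom_control via cgroup.event_control.
    efd = os.eventfd(0)                         # Python 3.10+
    ofd = os.open(os.path.join(MEMCG, 'memory.oom_control'), os.O_RDONLY)
    with open(os.path.join(MEMCG, 'cgroup.event_control'), 'w') as f:
        f.write('%d %d' % (efd, ofd))

    while True:
        os.eventfd_read(efd)                    # blocks until an OOM event
        # A real handler would now raise limits, move tasks, or kill a
        # process; here we just note the event.
        print('memcg hit its limit; OOM condition reported')

Note that if the handler itself runs inside the memcg it is watching, as in
the Google setup described next, even a print() or a /proc read can
allocate memory and block on the very condition it is meant to resolve.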
Said notification mechanism is designed around the needs of a global, presumably
privileged process that manages a bunch of memcgs on the system; that
process can respond by raising memory limits, moving processes to different
groups, or doing some targeted process killing of its own. But Google's
use case turns out to be a little different: each internal Google user is
given the ability
(and responsibility) to handle OOM conditions within that user's groups.
This approach can work, but there are a couple of traps that make it less
reliable than some might like.
One of those is that, since users are doing their own OOM handling, the OOM
handler process itself will be running within the affected memcg and will
be subject
to the same memory allocation constraints. So if the handler needs to
allocate memory while responding to an OOM problem, it will block and be
put on the
list of processes waiting for the OOM situation to be resolved; this is,
essentially, a deadlocking of the entire memcg. One can try to avoid this
problem by locking pages into memory and such, but, in the end, it is quite
hard to write a user-space program that is guaranteed not to cause memory
allocations in the kernel. Simply reading a /proc file to get a
handle on the situation can be enough to bring things to a halt.
The other problem is that the process whose allocation puts the memcg into
an OOM condition in the first place may be running fairly deeply within the
kernel and may hold any number of locks when it is made to wait. The
mmap_sem semaphore seems to be especially problematic, since it
is often held in situations where memory is being allocated — page fault
handling, for example. If the OOM handler process needs to do anything
that might acquire any of the same locks, it will block waiting for exactly the
wrong process, once again creating a deadlock.
The end result is that user-space OOM killing is not 100% reliable and,
arguably, can never be. As far as Google is concerned, somewhat unreliable OOM
handling is acceptable, but deadlocks when OOM killing goes bad are not.
So, back in 2011, David Rientjes posted a
patch establishing a user-configurable OOM killer delay. With that
delay set, the (kernel) OOM killer will wait for the specified time for an OOM
situation to be resolved by the user-space handler before it steps in and
starts killing off processes. This
mechanism gives the user-space handler a window within which it can try to
work things out; should it deadlock or otherwise fail to get the job done
in time, the kernel will take over.
David's patch was not merged at that time; the general sentiment seemed to
be that it was just a workaround for user-space bugs that would be better
fixed at the source. At the time, David said that Google would carry the patch
internally if need be, but that he thought others would want the same
functionality as the use of memcgs increased. More than two years later,
he is trying again, but the response is not
necessarily any friendlier this time around.
Alternatives to delays
Some developers responded that running the OOM handler within the control
group it manages is a case of "don't do that," but, once David explained
that users are doing their own OOM handling, they seemed to back down a bit
on that one. There still seems to be a sentiment that the OOM handler
should be locked into memory and should avoid performing
memory allocations. In particular, OOM time seems a bit late to be
reading /proc files to get a picture of which processes are
running in the system. The alternative, though, is to trace process
creation in each memcg, which has performance issues of its own.
Some constructive thoughts came from Johannes Weiner, who had a couple of
ideas for improving the current situation. One of those was a patch intended to solve the problem of
processes waiting for OOM resolution while holding an arbitrary set of
locks. This patch makes two changes, the first of which comes into play
when a problematic allocation is the direct result of a system call. In
this case, the allocating process will not be placed in the OOM wait queue
at all; instead, the system call will simply fail with an ENOMEM error.
This solves most of the problem, but at a cost: system calls that might
once have worked will start returning an error code that applications might
not be expecting. That could cause strange behavior, and, given that the
OOM situation is rare, such behavior could be hard to uncover with testing.
The other part of the patch changes the page fault path. In this case,
just failing with ENOMEM is not really an option; that would result in the
death of the faulting process. So the page fault code is changed to
make a note of the fact that it hit an OOM situation and return; once the
call stack has been unwound and any locks are released, it will wait for
resolution of the OOM problem. With these changes in place, most (or all)
of the lock-related deadlock problems should hopefully go away.
That doesn't solve the other problem, though: if the OOM handler itself
tries to allocate memory, it will be put on the waiting list with everybody else
and the memcg will still deadlock. To address this issue, Johannes suggested that the user-space OOM handler
could more formally declare its role to the kernel. Then, when a process
runs into an OOM problem, the kernel can check if it's the OOM handler
process; in that case, the kernel OOM handler would be invoked immediately
to deal with the situation. The end result in this case would be the same
as with the timeout, but it would happen immediately, with no need to wait.
Michal Hocko favored Johannes's changes, but had an additional suggestion: implement a global
watchdog process. This process would receive OOM notifications at the same
time the user's handler does; it would then start a timer and wait for the OOM
situation to be resolved. If time runs out, the watchdog would kill the
user's handler and re-enable kernel-provided OOM handling in the affected
memcg. In
his view, the problem can be handled in user space, so that's where the
solution should be.
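A minimal sketch of such a watchdog, reusing the cgroup v1 notification
interface shown earlier, might look like the following; the group path,
the handler's process ID, and the ten-second timeout are all assumptions
for illustration, and error handling is omitted. The key step is the final
write of "0" to memory.oom_control, which re-enables the kernel's OOM
killer for the group.

/* Minimal watchdog sketch along the lines Michal described: wait for an
 * OOM notification, give the user's handler a grace period, then kill it
 * and hand the group back to the kernel.  The group path, handler PID,
 * and timeout are assumptions; error handling is omitted. */
#include <fcntl.h>
#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/eventfd.h>
#include <sys/types.h>
#include <unistd.h>

#define MEMCG   "/sys/fs/cgroup/memory/mygroup"
#define TIMEOUT 10      /* seconds to wait for the user-space handler */

/* memory.oom_control reports "under_oom 1" while the group is blocked. */
static int under_oom(void)
{
    char buf[256];
    int fd = open(MEMCG "/memory.oom_control", O_RDONLY);
    ssize_t n = read(fd, buf, sizeof(buf) - 1);

    close(fd);
    buf[n > 0 ? n : 0] = '\0';
    return strstr(buf, "under_oom 1") != NULL;
}

int main(void)
{
    pid_t handler = 12345;      /* hypothetical user-space OOM handler */
    char buf[64];
    uint64_t events;
    int i;

    /* Register for notifications exactly as in the earlier sketch. */
    int efd = eventfd(0, 0);
    int oom_fd = open(MEMCG "/memory.oom_control", O_RDONLY);
    int ctl_fd = open(MEMCG "/cgroup.event_control", O_WRONLY);
    snprintf(buf, sizeof(buf), "%d %d", efd, oom_fd);
    write(ctl_fd, buf, strlen(buf));

    for (;;) {
        read(efd, &events, sizeof(events));   /* block until an OOM event */

        /* Grace period for the user's handler to sort things out. */
        for (i = 0; i < TIMEOUT && under_oom(); i++)
            sleep(1);

        if (under_oom()) {
            /* The handler failed or deadlocked: kill it and re-enable
             * kernel OOM handling by writing "0" to memory.oom_control. */
            kill(handler, SIGKILL);
            int fd = open(MEMCG "/memory.oom_control", O_WRONLY);
            write(fd, "0", 1);
            close(fd);
        }
    }
}

Since the watchdog runs outside the affected group, it is not subject to
that group's allocation limits, which is what makes this approach workable
in user space.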
With some combination of these changes, it is possible that the problems
with user-space OOM-handler deadlocks will be solved. In that case,
perhaps, Google's delay mechanism will no longer be needed. Of course,
that will not be the end of the OOM-handling discussion; as far as your
editor can tell, that particular debate is endless.
Comments (29 posted)
By Jonathan Corbet
June 5, 2013
As mobile and embedded processors get more complex — and more numerous —
the interest in improving the power efficiency of the scheduler has
increased. While
a number of power-related
scheduler patches exist, none seem all that close to merging into the
mainline. Getting something upstream always looked like a daunting task;
scheduler changes are hard to make in general, these changes come from a
constituency that the scheduler maintainers are not used to serving, and
the existence of competing patches muddies the water somewhat. But now it
seems that the complexity of the situation has increased again, to the
point that the merging of any power-efficiency patches may have gotten even
harder.
The current discussion started at the end of May, when Morten Rasmussen
posted some performance measurements
comparing a few of the existing patch sets. The idea was clearly to push
the discussion forward so that a decision could be made regarding which of
those patches to push into the mainline. The numbers were useful, showing
how the patch sets differ over a small set of workloads, but the apparent
final result is unlikely to be pleasing to any of the developers involved:
it is entirely possible that none of those patch sets will be merged in
anything close to their current form, now that Ingo Molnar has posted a
strongly worded "line in the sand" message on how power-aware scheduling
should be designed.
Ingo's complaint is not really about the current patches; instead, he is
unhappy with how CPU power management is implemented in the kernel now.
Responsibility for CPU power management is currently divided among three
independent components:
- The scheduler itself clearly has a role in the system's power usage
characteristics. Features like deferrable timers and suppressing the timer tick when idle have
been added to the scheduler over the years in an attempt to improve
the power efficiency of the system.
- The CPU frequency ("cpufreq") subsystem regulates the clock frequency
of the processors in response to each processor's measured idle time.
If the processor is idle much of the time, the frequency (and, thus,
power consumption) can be lowered; an always-busy processor, instead,
should run at a higher frequency if possible. Most systems probably
use the on-demand cpufreq governor,
but others exist (a small sketch of querying this policy from user space
follows this list). The big.LITTLE switcher operates at this level by
presenting a paired "big" and "little" processor as a single CPU with a
wide range of frequency options.
- The cpuidle subsystem is charged with
managing processor sleep states. One might be tempted to regard
sleeping as just another frequency option (0Hz, to be exact), but
sleep is rather more complicated than that. Contemporary processors
have a wide range of sleep states, each of which differs in the amount
of power consumed, the damage inflicted upon CPU caches, and the time
required to enter and leave that state.
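As a small illustration of how independent these policy knobs are from the
scheduler, the cpufreq governor for each CPU can be inspected (and, with
sufficient privileges, changed) purely through sysfs, with no scheduler
involvement at all. The sketch below assumes a typical system with cpufreq
enabled and the usual sysfs paths.

/* Minimal sketch: read CPU 0's cpufreq governor and current frequency
 * from the standard sysfs interface.  Paths assume a typical system
 * with cpufreq enabled. */
#include <stdio.h>

static void show(const char *path)
{
    char buf[64];
    FILE *f = fopen(path, "r");

    if (f && fgets(buf, sizeof(buf), f))
        printf("%s: %s", path, buf);
    if (f)
        fclose(f);
}

int main(void)
{
    show("/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor");
    show("/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq");
    /* Writing a governor name (e.g. "ondemand") to scaling_governor
     * switches policy behind the scheduler's back - which is exactly
     * the sort of split Ingo is complaining about. */
    return 0;
}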
Ingo's point is that splitting the responsibility for power management
decisions among three components leads to a situation where no clear policy
can be implemented:
Today the power saving landscape is fragmented and sad: we just
randomly interface scheduler task packing changes with some idle
policy (and cpufreq policy), which might or might not combine
correctly. Even when the numbers improve, it's an entirely random,
essentially unmaintainable property: because there's no clear split
(possible) between 'scheduler policy' and 'idle policy'.
He would like to see a new design wherein the responsibility for all of
these aspects of CPU operation has been moved into the scheduler itself.
That, he claims, is where the necessary knowledge about the current
workload and CPU topology lives, so that is where the decisions should be
made. Any power-related patches, he asserts, must move the system in that
direction:
This is a "line in the sand", a 'must have' design property for any
scheduler power saving patches to be acceptable - and I'm NAK-ing
incomplete approaches that don't solve the root design cause of our
power saving troubles.
Needless to say, none of the current patch sets include a fundamental
redesign of the scheduler, cpuidle, and cpufreq subsystems. So, for all
practical purposes, all of
those patches have just been rejected in their current form — probably not
the result the developers of those patches were hoping for.
Morten responded with a discussion of the
kinds of issues that an integrated power-aware scheduler would have to deal
with. It starts with basic challenges like defining scheduling policies
for power-efficient operation and defining a mechanism by which a specific
policy can be chosen and implemented. There would be a need to represent
the system's power topology within the scheduler; that topology might not
match the cache hierarchy represented by the existing scheduling domains data structure. Thermal
management, which often involves reducing CPU frequencies or powering down
processors entirely, would have to be factored in. And so on. In summary,
Morten said:
This is not a complete list. My point is that moving all policy to
the scheduler will significantly increase the complexity of the
scheduler. It is my impression that the general opinion is that
the scheduler is already too complicated. Correct me if I'm wrong.
In his view, the existing patch sets are part of an incremental solution to
the problem and a step toward the overall goal.
Whether Ingo will see things the same way is, as of this writing, unclear.
His words were quite firm, but lines in the sand are also relatively easy
to relocate. If he holds fast to his expressed position, though, the
addition of power-aware scheduling could be delayed indefinitely.
It is not unheard of for subsystem maintainers to insist on improvements to
existing code as a precondition to merging a new feature. At past kernel
summits, such requirements have been seen as unfair, but they
sometimes persist anyway. In this case, Ingo's message, on its face,
demands a redesign of one of the most complex core kernel
subsystems before (more) power awareness can be added. That is a
significant raising of the bar for developers who were already struggling to
get their code looked at and merged. A successful redesign on that scale
is unlikely to happen unless the current scheduler maintainers put a fair
amount of their own time into the effort.
The cynical among us could see this position as an easy way to simply make
the power-aware scheduling work go away. That is certainly an incorrect
interpretation, though. The more straightforward
explanation — that the scheduler maintainers want to see the code get
better and more maintainable over time — is far more likely. What has to
happen now is the identification of a path toward that better scheduler
that allows for power management improvements in the short term. The
alternative is to see the power-aware scheduler code relegated to vendor
and distributor trees, which seems like a suboptimal outcome.
Comments (27 posted)
Patches and updates
Kernel trees
- Sebastian Andrzej Siewior: 3.8.13-rt10 (June 3, 2013)
Core kernel code
Development tools
Device drivers
Filesystems and block I/O
Memory management
Networking
Architecture-specific
Miscellaneous
Page editor: Jonathan Corbet
Distributions
By Jake Edge
June 5, 2013
Debian's recent announcement that it would
stop backporting security fixes into Iceweasel—Debian's version of
Firefox—is not much of a surprise at some level. While the famously stable
distribution is loath to change its software versions midstream, keeping an
older version of Firefox up to date with the latest security fixes is a
huge job. In addition, Mozilla has created the Extended Support Release
(ESR) for its products, which gives roughly one year of support for selected
releases. One year is, of course, not long by Debian standards, but using
the ESR releases may in fact result in more stability—at least from a
security perspective.
It is not just Iceweasel that is affected by this change; all of the
Debian-ized versions of Mozilla products—Icedove (Thunderbird) and Iceape
(Seamonkey)—will be treated similarly. Actually, Iceape/Seamonkey is not
truly a Mozilla product any more, as it has been a community-maintained
project since 2005, but it shares much of its code with Firefox and
Thunderbird. Seamonkey doesn't follow the same version scheme as Firefox
and Thunderbird, but does seem to follow the Firefox release schedule.
Most other distributions switched to using the ESRs for the Mozilla products
some time
ago, but Debian had continued trying to support whatever version was
incorporated into its stable release.
The current ESR version is Firefox and Thunderbird 17, which was released
in November 2012. It will continue to be supported until December 2013,
when version 26 is released. In the meantime, the next ESR will be version
24, which is slated for September 2013. Mozilla releases are done every
six weeks, and there is a two-cycle overlap where two ESRs are supported to
allow time for the newest to stabilize.
The recently released Debian 7.0 ("wheezy") will carry version 17 of the
Mozilla products. Toward the end of the year, it will move to version 24,
which will force users either to forgo updates or to take a new version of
the browser and mail client. That may come as a surprise to Debian users
since the user interface and other aspects of the browser (e.g. add-ons)
will suddenly change.
In another year, presumably version 31 (or whatever the next ESR is) will
be picked up for wheezy. In the perhaps unlikely scenario of a "jessie"
(8.0) release in that time frame, it would start with version 24 as well.
Web browsers, and to a slightly lesser extent mail clients, are
particularly sensitive bodies of code. Browsers are directly exposed to
the Internet, thus subject to whatever tricks malicious attackers have up
their
sleeves. Mail clients should generally not be directly handling executable
content from the web (e.g. JavaScript, Java applets)—by default,
Thunderbird doesn't—but will render HTML and CSS, which can sometimes lead
to security problems. Sadly, some users may require bouncing cows in their
email as well as their browser, so they may override the default. HTML5
content is also quite JavaScript-dependent in many cases, so rendering
email that contains it may also require rendering "active" content.
In any case, though, the core of the problem remains the same: a large,
complex body of code that evolves quickly doesn't necessarily mesh well
with a distribution intent on version stability. But Debian was the last
major holdout that tried to continue taking fixes from later versions and
backport them into the version in the stable distribution. It seems to
be a question of a lack of developer time to do those (sometimes difficult)
backports.
In fact, the current plan is to stop doing updates entirely for Iceweasel
in the
"oldstable" (Debian 6.0 or "squeeze") release if volunteers cannot be
found. That Iceweasel is based on
Firefox 3.5.16, which was released late in 2010 (before Mozilla started its
six-week major-version-incrementing regimen). Given how far Mozilla has
moved in the interim, there are likely to be many undiscovered security
holes in that release because Mozilla and others have focused their
testing and review on more relevant (to them) versions.
One could argue that there is an inherent flaw in the idea of maintaining
software packages long after the upstream project has moved on. Large
organizations with paid staff (e.g. the enterprise distribution vendors)
may be able to handle the load, but smaller, volunteer-driven projects like
Debian are sometimes going to struggle. Upstream projects with smaller
code bases, slower moving development, and installation in a less hostile
environment—an office suite or photo editing tool, say—may be more amenable
to being maintained that way. Firefox and Thunderbird seem to just be a
bit too far of a reach.
On the other hand, the Debian kernel is maintained throughout the
life of the release. The wheezy kernel is 3.2, which Debian developer Ben
Hutchings is maintaining as a stable kernel. It is not clear what will
happen with the 2.6.32-based kernel in squeeze going forward.
Much of the reason that Debian created the non-branded versions of Firefox
and Thunderbird stemmed from its insistence on backporting security fixes.
Since that is changing, is there really any need for Iceweasel, Icedove,
and Iceape? The Mozilla
trademark guidelines do not allow modified versions of its products to
carry names like Firefox without written permission from Mozilla. It
is too soon to say, and Debian may have other changes it puts into the
Mozilla code base, but it seems at least possible that Debian may be
distributing Firefox rather than Iceweasel in the not-too-distant future.
Comments (13 posted)
Brief items
I wish we had a better system where some, but not all errors would latch
and need acknowledgment, there would be correlation (between hosts and
between messages, so if the router's down, you get a message about data
centre A not being able to successfully complete $process, rather than a
zillion individual messages), there would be merging of identical
messages, so I get a message about $process being broken for the last
$time period (or having a failure rate above $threshold), rather than a
thousand mails because of some error.
Oh, and a pony. Don't forget the pony. Or an otter, I like otters.
--
Tollef Fog Heen
Comments (1 posted)
Distribution News
Debian GNU/Linux
The May "Bits from the Debian Project Leader" posting includes a notice
that the debian-multimedia.org domain — once the site of a popular Debian
package repository — has expired and been grabbed by an unknown entity. If
any Debian users have references to that site in their APT configurations,
now would be a good time to take them out. As Lucas Nussbaum says:
"
This is a good example of the importance of the use of cryptography
to secure APT repositories (and of the importance of not blindly adding
keys)."
Full Story (comments: 11)
Fedora
For those of you wanting to play with Fedora 19 in a different setting,
there is now an installer for the Nexus 4 handset available. "So if you
have an n4 and a bit of free space, you can play around with accelerated
open-source gpu goodness." Good backups are recommended.
Comments (none posted)
Red Hat Enterprise Linux
Red Hat has retired Red Hat Enterprise Linux 6.1 Extended Update Support.
"
In accordance with the Red Hat Enterprise Linux Errata Support
Policy, Extended Update Support for Red Hat Enterprise Linux 6.1 was
retired on May 31, 2013, and support is no longer provided. Accordingly,
Red Hat will no longer provide updated packages, including critical impact
security patches or urgent priority bug fixes, for Red Hat Enterprise Linux
6.1 EUS. In addition, technical support through Red Hat's Global Support
Services is no longer provided."
Full Story (comments: none)
Other distributions
Allan McRae
cautions
that the /usr/bin merge will require manual intervention for Arch Linux
users. "The update merges all binaries into a unified /usr/bin
directory. This step removes a distinction that has been meaningless for
Arch systems and simplifies package maintenance for the development
team. See this
post for more explanation of the reasoning behind this change."
Comments (none posted)
Newsletters and articles of interest
On his blog, Andy Grover has some
thoughts on how to make Fedora more relevant for servers. Because of the 13-month supported lifespan of a Fedora release, administrators are typically wary of using it, but new deployment schemes make it more viable. "
Let's come back to the odd fact that Fedora is both a precursor to RHEL, and yet almost never used in production as a server OS. I think this is going to change. In a world where instances are deployed constantly, instances are born and die but the herd lives on. Once everyone has their infrastructure encoded into a configuration management system, Fedora's short release cycle becomes much less of a burden. If I have service foo deployed on a Fedora X instance, I will never be upgrading that instance. Instead I'll be provisioning a new Fedora X+1 instance to run the foo service, start it, and throw the old instance in the proverbial bitbucket once the new one works."
Comments (27 posted)
LinuxGizmos
looks
at the 3.0 release of Enea Linux, an embedded Linux distribution
compatible with Yocto Project 1.4 code. "
Enea Linux 3.0 arrives with Yocto Project certification but not yet with the CGL certification Enea last year suggested would come in 2013. Version 3.0 moves up to Yocto Project v1.4 (“Dylan”), offering improvements including support for Linux kernel 3.8, decreased build-times, and Enea’s automated test framework, called Ptest. The latter integrates test suites from all open source projects whose applications are used in Yocto Project, enabling it to vastly increase the amount of tests that are performed on Yocto Project Linux packages, says Enea."
Comments (none posted)
LinuxInsider
covers Mozillux, a
live DVD/USB Lubuntu-based distribution that hails from France. "
As its name suggests, Mozillux promotes Mozilla software and is designed as a complete software suite. Many computer users are familiar with various Mozilla cross-platform applications such as browsers and email clients -- Firefox and Thunderbird, in particular. In similar fashion, the Mozillux OS is an ideal Linux distro for both beginners and intermediate users."
Comments (none posted)
Page editor: Rebecca Sobol
Development
Healthcare is a popular subject in the open source software
community today—as it is in the proprietary software world—with a
number of high-profile projects tackling problems like electronic
health records (EHR) and hospital management. But, as Neetu Jain
explained in her talk at Texas Linux Fest 2013 in
Austin, thus far open source developers are not addressing the
needs of the most at-risk patients in developing countries. She
highlighted several successful but closed-source projects already
deployed in Africa and India, which have taken off because they center
around mobile phones rather than desktop systems, and she encouraged
more open source developers to get involved.
Mobile healthcare (or "mHealth") can encompass a range of different
subjects, from measurement, to diagnostics, to patient treatment, to
large-scale global initiatives. Measurement includes sensor-based
patient monitoring and personal (that is, non-automated) monitoring that is often focused on data-mining or research. Diagnostics is
focused on tools for doctors and other healthcare providers, such as
point-of-care decision-making or portable imaging devices. Treatment
projects include everything from personal calorie counting to clinical
trial management. The global initiatives include an array of large-scale
efforts, from information dissemination to disease surveillance to
data collection.
First-world and third-world challenges
But within the rather large scope of mHealth, there is a big
disconnect between mHealth services in the developed countries and
those in the developing world. For starters, developed countries
focus on different healthcare topics: personal fitness, chronic
disease management, and aging, for example. Initiatives in developing
countries focus on basic healthcare service access, prenatal and
childhood health, and infectious disease control. Both have their
place, of course; she highlighted several mHealth projects that assist
the elderly, such as "smart" medicine bottles that sync with a mobile
phone to help the patient remember to take medication.
There are also technical differences. Most mHealth projects in
developed countries are built on smartphone platforms and are tied to
always-on, ubiquitous Internet access. Both are rarely found in poor
and developing countries. Nevertheless, "dummy phones" with cellular
network access are widespread, she said, citing a United
Nations report that there are now more people with access to cell
phones than people with access to toothbrushes or to clean toilets.
No matter how poor people are, Jain said, they recognize the value of
mobile phone communications, although in many cases entire families or
even multiple families share a single device. mHealth projects have
taken advantage of this situation largely by building software systems
that rely on SMS text-messaging as the communication medium, which is
a system exploited only rarely in developed countries whose smartphone
and tablet users prefer apps using data connections and WiFi.
The other major difference that distinguishes mHealth in developed
and developing countries is that the majority of mHealth initiatives
in developing countries receive no corporate backing. That is a stark
contrast with the investment support that surrounds the startup
culture in many developed nations, and it makes mHealth projects in
developing countries a good opportunity for open-source projects.
Yet there are relatively few open source developers volunteering,
perhaps in part because so many open source developers live in developed
regions like Europe and North America.
Success stories
Jain then discussed a series of successful mHealth initiatives
deployed in developing countries. HealthLine is an interactive "help
line" that connects callers to a call center staffed by physicians.
mDhil is a healthcare information service in India that sends out
broadcast messages over SMS. Sproxil is an anti-counterfeiting system
deployed in Nigeria, with which patients can verify the authenticity
of medication by texting a code number from the bottle. TextToChange
is a program used in Uganda that tracks patient satisfaction after
treatment. Changamka is a project that helps poor people in Kenya
save money for healthcare expenses by making small deposits through a
mobile phone. Project Masiluleke is a service that uses South
Africa's free "public service" SMS system to distribute information
about HIV and tuberculosis, connecting users with counselors and
clinics.
There are many more examples, but Jain went on to describe the two
projects with which she volunteers. The first is Raxa, an open source health information
system (HIS) that has been deployed in India for two or three years.
Raxa consists of several related components, such as an EHR system,
patient tracking, and in-the-field doctor support tools. Raxa is
based on the open source OpenMRS
platform, which is used in a variety of other EHR projects as well.
But Raxa is different in that it focuses on building mobile client
software in HTML5, rather than desktop applications.
The second project Jain is involved with is Walimu, a nonprofit
organization working with the largest hospital in Uganda. In the past
the organization built and deployed a low-cost severe-illness
diagnostic kit for doctors, but it is currently working on building a
clinical decision support system. The software project is currently
in the nascent stage, Jain said, so more help is welcome.
Jain also suggested that interested developers visit the "get involved"
section of the mHealth Alliance web site, which helps people find
projects and initiatives that they can contribute to.
There are a lot of challenges facing any mHealth initiative in the
developing world, Jain said, but open source developers are capable of
helping on a number of fronts. The funding problem means that volunteers
are needed to work on development and on administering system
infrastructure. There are also cultural challenges, such as the fact
that an SMS-based application in India needs to support more than 400
languages. Most mHealth initiatives face other cultural issues (such
as the complexity introduced by large groups of people sharing one
phone) that do not have development solutions, and they face
regulatory challenges, but more volunteers can help ease the burden.
The Q&A session after the talk was lively; one member of the
audience asked a question that revealed yet another complexity in
mHealth development. Asked why so many of the initiatives discussed
were deployed in just a single region, Jain responded that the two
biggest developing-nation regions are sub-Saharan Africa and the
Indian subcontinent, but that the same project rarely succeeds in both
places. The two are similar in one respect—the constraints on
resources—but in practice the linguistic, cultural, and
regulatory differences between them mean that a solution usually needs
to be re-implemented to work in both regions.
mHealth projects in the developing world, like most humanitarian
software projects, are relatively easy to "sell" as a Good Thing. But
that fact, naturally, does not make the technical hurdles (nor the
regulatory or administrative ones) go away. Fortunately, interested
developers have already seen the value of utilizing SMS messaging to
work around the connectivity problem in developing countries;
hopefully the community will continue to find practical solutions to such
unique problems.
Comments (none posted)
Brief items
We don't test in EFL, we just assume things work.
—
Tom Hacohen (hat tip to Olav Vitters)
implementing UNO IDL support in doxygen: 9 days of work
converting IDL file comments to doxygen: 5 days of work
removing 57k lines of unmaintained buggy autodoc, bespoke String and File classes: priceless
—
Michael
Stahl, on a not-insignificant commit to LibreOffice.
(hat tip
to Cesar Eduardo Barros)
Comments (none posted)
The GCC 4.8.1 release is out. It is primarily a bug-fix release, but it is
not limited to that: "
Support for C++11 ref-qualifiers has been added
to GCC 4.8.1, making G++ the first C++ compiler to implement all the
major language features of the C++11 standard."
Full Story (comments: 39)
Version 4.0 of the PulseAudio audio server is out. Changes include better
low-latency request handling, improved JACK integration, a new role-based
audio "ducking" module, various performance improvements, and more; see
the
release notes for details.
Full Story (comments: 28)
Version 3.0 of the PyTables package for wrangling large datasets has been released. This version is the first to support Python 3, and as the announcement notes, almost all of the core numeric/scientific packages for Python already support Python 3 and thus are immediately usable. Other changes include support for in-memory image files, basic HDF5 drivers, and the extended precision floating point data types Float96, Float128, Complex192 and Complex256.
Full Story (comments: none)
Version 2013.05 of the Buildroot tool for creating embedded Linux systems has been released. The release notes indicate there are 84 new packages, as well as support for multiple OpenGL providers. The default compiler has changed to GCC 4.7, a new external Microblaze toolchain has been added, and there are both new CPU architectures supported and a few old architectures dropped.
Full Story (comments: none)
Newsletters and articles
The H
looks
at the Processing 2.0 release. "
The new version of the language,
which has been in development since mid-2011, brings OpenGL rendering to
the core of the platform, replacing the older software-based P2D and P3D
renderers with new OpenGL-accelerated P2D and P3D renderers. A new OpenGL
library, based on work done on the Android version of Processing, has also
been incorporated and OpenGL is now part of the core of Processing."
For some background on Processing, see
this LWN
article from last October.
Comments (none posted)
At his personal blog, Mozilla's Robert O'Callahan offers
some criticism of Google's Portable Native Client (PNaCl) project,
which it was recently announced would be enabled in Google's "Blink"
rendering engine. At issue is that PNaCl support seems to go against
Google's pledge that Blink would stick to supporting open standards. "PNaCl and Pepper are not open standards, and there are not even any proposals on the table to standardize them in any forum. They have documentation, but for the details one must defer to the large bundle of Chrome code that implements them. Other vendors wishing to support PNaCl or Pepper would have to adopt or reverse-engineer this code."
Comments (none posted)
At his blog, K Lars Lohn argues
for a stylesheet-like approach to source code formatting, which can be
restyled to adhere to any of several coding styles, rather than the
rigid-format approach of today. Lohn's discussion stems from Python style (the PEP8 style guide in particular), but he seems to have broader applicability in mind. "I want to be able to load that method into my editor and see it styled in the manner that works for me. If someone else wants to see it in the PEP 8 style, that ought to be an option on the editor. Our source code should express functionality and functionality alone. Like the relationship between HTML and CSS, source code style should be left to some presentation layer."
Comments (4 posted)
Page editor: Nathan Willis
Announcements
Brief items
CIOL
reports
that Atul Chitnis has passed away. "
His was a name
that was synonymous with open source. He championed its cause for a major
part of his life. Finally, his fruitful existence, touching millions of
lives, was to be stolen away by cancer." Your editor had a number
of encounters with Atul over the years, including one visit to FOSS.in; he
will be much missed.
Comments (4 posted)
Articles of interest
The Free Software Foundation's newsletter for May 2013 is available. Topics
include freeing JavaScript; SFC fund raising for accounting software;
Google abandons XMPP; OPW, GSoC and MediaGoblin; GNU/Linux flag at the top
of the Americas; GNU/Linux chosen by the ISS; GNU Hackers Meeting 2013;
DRM; and much more.
Full Story (comments: none)
New Books
Rocky Nook has released "GIMP 2.8 for Photographers" by Klaus Goelker.
Full Story (comments: none)
Calls for Presentations
The Tcl/Tk conference will take place September 23-27, in New Orleans,
Louisiana. The proposal deadline has been extended until July 6.
"
The program committee is asking for papers and presentation
proposals from anyone using or developing with Tcl/Tk (and
extensions)."
Full Story (comments: none)
Upcoming Events
Events: June 6, 2013 to August 5, 2013
The following event listing is taken from the
LWN.net Calendar.
| Date(s) | Event | Location |
| June 6 - June 9 | Nordic Ruby | Stockholm, Sweden |
| June 7 - June 8 | CloudConf | Paris, France |
| June 7 - June 9 | SouthEast LinuxFest | Charlotte, NC, USA |
| June 8 - June 9 | AdaCamp | San Francisco, CA, USA |
| June 9 | OpenShift Origin Community Day | Boston, MA, USA |
| June 10 - June 14 | Red Hat Summit 2013 | Boston, MA, USA |
| June 13 - June 15 | PyCon Singapore 2013 | Singapore, Republic of Singapor |
| June 17 - June 18 | Droidcon Paris | Paris, France |
| June 18 - June 20 | Velocity Conference | Santa Clara, CA, USA |
| June 18 - June 21 | Open Source Bridge: The conference for open source citizens | Portland, Oregon, USA |
| June 20 - June 21 | 7th Conferenza Italiana sul Software Libero | Como, Italy |
| June 22 - June 23 | RubyConf India | Pune, India |
| June 26 - June 28 | USENIX Annual Technical Conference | San Jose, CA, USA |
| June 27 - June 30 | Linux Vacation / Eastern Europe 2013 | Grodno, Belarus |
| June 29 - July 3 | Workshop on Essential Abstractions in GCC, 2013 | Bombay, India |
| July 1 - July 5 | Workshop on Dynamic Languages and Applications | Montpellier, France |
| July 1 - July 7 | EuroPython 2013 | Florence, Italy |
| July 2 - July 4 | OSSConf 2013 | Žilina, Slovakia |
| July 3 - July 6 | FISL 14 | Porto Alegre, Brazil |
| July 5 - July 7 | PyCon Australia 2013 | Hobart, Tasmania |
| July 6 - July 11 | Libre Software Meeting | Brussels, Belgium |
| July 8 - July 12 | Linaro Connect Europe 2013 | Dublin, Ireland |
| July 12 | PGDay UK 2013 | near Milton Keynes, England, UK |
| July 12 - July 14 | 5th Encuentro Centroamerica de Software Libre | San Ignacio, Cayo, Belize |
| July 12 - July 14 | GNU Tools Cauldron 2013 | Mountain View, CA, USA |
| July 13 - July 19 | Akademy 2013 | Bilbao, Spain |
| July 15 - July 16 | QtCS 2013 | Bilbao, Spain |
| July 18 - July 22 | openSUSE Conference 2013 | Thessaloniki, Greece |
| July 22 - July 26 | OSCON 2013 | Portland, OR, USA |
| July 27 | OpenShift Origin Community Day | Mountain View, CA, USA |
| July 27 - July 28 | PyOhio 2013 | Columbus, OH, USA |
| July 31 - August 4 | OHM2013: Observe Hack Make | Geestmerambacht, the Netherlands |
| August 1 - August 8 | GUADEC 2013 | Brno, Czech Republic |
| August 3 - August 4 | COSCUP 2013 | Taipei, Taiwan |
If your event does not appear here, please
tell us about it.
Page editor: Rebecca Sobol