The latest round of OpenSSL bugs was disclosed to the public on June 5, but it is clear that some organizations and distributions had earlier knowledge of the flaws. That is fairly typical for security holes of this sort; distributions get some time to fix the flaws before they are made public (typically simultaneously with the release of the updates). But OpenBSD was not one of the organizations notified in advance; why that is, and whose fault it is, have been much in dispute since.
OpenBSD project leader Theo de Raadt complained about the lack of early notice in a message to the OpenBSD misc and tech mailing lists. OpenBSD has famously forked the OpenSSL code post-Heartbleed into a new library called LibReSSL (or LibreSSL). Both its OpenSSL and LibReSSL packages were affected by the bugs, though, so it is unsurprising that de Raadt was unhappy to first hear about the bugs several days after others had been informed.
According to a timeline published by OpenSSL project member Mark J. Cox, who handled the issues for the project, the distros mailing list was notified of the problem on June 2. That allowed members of that private list—restricted to security representatives from Linux distributions and the BSDs—to request the patches and a copy of the draft advisory. OpenBSD is conspicuously absent from the roster of that list's participants.
As it turns out, de Raadt had been asked if he wanted to join the distros list back in early May. A different OpenSSL problem led Red Hat security response team member Kurt Seifried to CC de Raadt on the report and ask if he or some other OpenBSD member would like to join the list. The distros mailing list is meant to disclose and discuss security problems that affect the entire Unix ecosystem (rather than those that just affect Linux, for which there is a linux-distros mailing list). In characteristic fashion, de Raadt replied:
We don't get paid. And therefore, I don't know where I should find the time to be on another mailing list. It is not like I would have sent a mail to anyone. In general our processes are simply commit & publish. So I'll decline.
Once Cox's timeline made it clear that most other distributions (both Linux and BSD) had been given an advance heads-up about the issue, de Raadt and other OpenBSD developers accused OpenSSL of knowingly keeping the knowledge of the bugs from the project: "Unfortunately I find myself believing reports that the OpenSSL people intentionally asked others for quarantine, and went out of their way to ensure this information would not come to OpenBSD and LibreSSL."
For his part, Cox states that OpenSSL chose the distros mailing list as its means of disclosing the bug early to the various affected operating systems. Because OpenBSD was not on the list, it didn't find out, he said in a comment on his timeline post. Furthermore, "OpenBSD have approached us to be notified about future issues and we've asked them to join the list as they certainly would qualify and would find it beneficial not just for any future OpenSSL issues."
The timeline shows that there were some organizations that received early warning of the bugs, including a few well in advance of the distros posting. Others were notified at around the same time as the posting, but without any details. Whether OpenSSL considered notifying OpenBSD separately from the mailing list is not clear. The project is certainly aware of the LibReSSL effort (and likely unhappy with how its code has been characterized by the OpenBSD crowd), and aware that LibReSSL would likely be affected by these problems. But it is entirely possible that notifying OpenBSD just slipped through the cracks as well.
The conversation fairly quickly degenerated. It is clear that de Raadt and others do not see the distros list as the appropriate venue for early disclosure of vulnerabilities; they believe, it seems, that affected organizations and projects should be contacted individually. Regardless of whether anyone at OpenBSD gets paid to read security mailing lists, though, having a representative on the list would undeniably have gotten the project the early disclosure it is looking for.
The conversation is also a bit hard to follow since various participants, including Seifried and distros/linux-distros administrator Solar Designer (Alexander Peslyak), sent private mail to de Raadt that he responds to publicly. In addition, de Raadt's emails don't seem to thread correctly for some reason. But he makes it abundantly clear that he is livid about the issue and he lashes out at Peslyak, Seifried, and Cox.
But, ultimately, it is de Raadt's opposition to embargoes (which typically come with early disclosure) that is part of the reason no one from OpenBSD is on the relevant list. Peslyak said that de Raadt had been invited to join the list in 2012, but declined not just for himself but for the entire OpenBSD project. Peslyak, who has been a voice of reason throughout (for example, he has encouraged OpenSSL to contact LibreSSL directly in the future), also said that de Raadt's anti-embargo stance contributed to the current situation:
It is most unfortunate for their users that OpenBSD and LibReSSL did not get the extra few days to fix the problems found in OpenSSL. It is not exactly clear who is most "to blame" for that, but it is clear that things could be done better (by both OpenBSD and OpenSSL) in the future. For some on the OpenBSD/LibReSSL "side", this episode is evidence of why those projects cannot work with OpenSSL. That may be, but the tone and contents of the emails from de Raadt and others may have also made it obvious (again) why it is hard for anyone outside of the OpenBSD clique to work with that project. It is a project that does a lot of good work, but it is not one that is known for getting along with others.
Every now and then, one finds oneself in a place where the near-ubiquitous Internet connectivity of today is absent, unusably slow, or prohibitively expensive. Some network functionality (like email) may be worth the hassle and expense, while other functionality (like streaming media) is not. Somewhere in between, though, lies reference data, which would be nice to cache locally for offline access, if it were technically feasible. To that end, some "open content" projects, such as OpenStreetMap, make configuring offline access relatively painless, but many others do not. For Wikipedia and the related Wikimedia projects (Wiktionary, Wikivoyage, etc.), the combination of an exceptionally large data set, constant editing, and multiple languages makes for a more challenging target—and a niche has developed for offline Wikipedia access software.
Of course, the "correct" solution to providing offline Wikipedia access would arguably be to run a mirror of the real site, which it is certainly possible to do. But, even then, mirrors start with a hefty Wikipedia database dump that requires considerable storage space: around 44GB for the basic text of the English Wikipedia site, without the "talk" or individual user pages. The media content is larger still; around 40TB are currently in Wikimedia's Commons, of which roughly 37TB is still images. Moreover, the database-import method does not allow a mirror to keep up with ongoing edits, although doing so would consume considerable system resources anyway.
On the other hand, in many cases, Wikipedia's usefulness as a general-purpose reference does not depend on having the absolute newest version of each article. Wikimedia makes periodic database dumps, which can suffice for weeks or even months at a time, depending on the subject. It is probably no surprise, then, that the most popular offline-Wikipedia tools focus on turning these periodic database releases into an approximation of the live site. Many also take steps to conserve space—usually by storing a compressed version of the data, but in some cases also by omitting major sections of the content. There are two actively developed open-source tools for desktop Linux systems at present: XOWA and Kiwix. Both support storing compressed, searchable archives of multiple Wikimedia sites, although they differ on quite a few of the details.
Kiwix uses the openZIM file format for its content storage. The Wikipedia database dump is converted into static HTML beforehand, then compressed into the ZIM format. The basic ZIM format includes a metadata index that supports searching article titles, but to enable full-text search, the file must be indexed. The Kiwix project offers both indexed and unindexed archives for download; the indexed files are (naturally) larger, and they also come bundled with the Windows build of Kiwix. The ZIM format is designed with this usage in mind; its development is spearheaded by Switzerland's Wikimedia CH.
As far as content availability is concerned, the Kiwix project periodically updates its official ZIM releases for Wikipedia only—albeit in multiple languages (69 at present, not counting image-free variants available for a handful of the larger editions). In addition, volunteers produce ZIM files for other sites, at the moment including Wikivoyage, Wikiquote, Wiktionary, and Project Gutenberg, with TED and other efforts still in the works.
Kiwix itself is a GPLv3-licensed, standalone graphical application that most closely resembles a "help browser" or e-book reader. The content displayed is HTML, of course, but the user interface is limited to the content installed in the local "library." Users can search for new ZIM content from within the application as well as check for updates to the installed files.
Interestingly enough, there are many more ZIM archives listed within Kiwix's available-files browser than there are listed on the project's web site; why any particular offering is listed in the application is not clear, since some of the options appear to be personal vanity-publishing works. Searching and browsing installed archives is simple and fast; type-ahead search suggestions are available and one can bookmark individual pages. There are also built-in tools for checking the integrity of downloaded archives and exporting pages to PDF.
In broad strokes, XOWA offers much the same experience as Kiwix: one installs a browser-like standalone application (AGPL-licensed, in this case), for which individual offline-site archives must be manually installed. Like Kiwix, XOWA can download and install content from its own, official archives. But while Kiwix archives contain indexed, pre-generated HTML, XOWA archives include XML from the original database dumps (stored in SQLite files), which is then dynamically rendered into HTML whenever a new page is opened.
In theory, the XML in the Wikipedia database dumps is the original Wiki markup of the articles, so it should be more compact than the equivalent rendered HTML. In practice, though, such a comparison is less simple. The latest Kiwix ZIM file for the English Wikipedia is 42GB with images, 12GB without, whereas the latest XOWA releases are 89.6GB with images and 14.6GB without. But XOWA also makes a point of the fact that it includes not only the basic articles, but also the "Category," "Portal," and "Help" namespaces, as well as multiple sizes of the included images.
When comparing the two approaches, it is also important to note that XOWA is specifically designed for use with Wikimedia database dumps, a choice that has both pros and cons. In the pro column, virtually any compatible database dump can be used with the application; XOWA offers Wikipedia for 30 languages and a much larger selection of the related sites (Wiktionary, Wikivoyage, Wikiquote, Wikisource, Wikibooks, Wikiversity, and Wikinews, which are bundled together for most languages). XOWA's releases also tend to be more up-to-date; at present none is older than a few months, while some of the less-popular Kiwix archives are several years old.
The downsides, though, start with the fact that only Wikimedia-compatible content is supported. Thus, there is no Project Gutenberg archive available, nor could your favorite Linux news site generate a handy offline article archive should it feel compelled to do so. But perhaps more troubling is the fact that XOWA archives do not support full-text searching. Lookup by title is supported, but that may not always be sufficient for research.
The browsing experience of the XOWA application is similar to Kiwix; both HTML renderers use Mozilla's XULRunner. XOWA also supports bookmarking pages and library maintenance. XOWA gains a point for allowing the user to seamlessly jump between installed wikis; a Wikipedia link to a Wiktionary page works automatically in XOWA, while a Kiwix user must return to the "library" screen and manually open up a second archive in order to change sites.
On the other hand, XOWA does not support printing or PDF export, and there is a noticeable lag between clicking on a link and seeing the page load. The status bar at the bottom of the window is informative enough to indicate that the delay is due to XOWA's JTidy-based parser; it reports the loading of the page content as well as each template and navigation element used. The parser can also still trip up in its XML-to-HTML conversion. If one is concerned about the accuracy of the conversion, of course, Kiwix's pre-generated HTML offers no guarantees either, but at least its results are static and will not crash on an odd bit of Wiki-markup syntax.
Ultimately, though, if the question is whether XOWA or Kiwix generates pages more like those one sees in the web browser from the live Wikimedia site, neither standalone application is perfect. But users may chafe at the very need to run a separate application to read Wikipedia to begin with. Fortunately, both projects are also pursuing another option: serving up their content with an embedded web server, which permits users to access the offline archives from any browser they choose.
XOWA's server can be started with:
java -jar /xowa/xowa_linux.jar --app_mode http_server --http_server_port 8080
Kiwix's server (which, like Kiwix, is written in C++) can be started from the command line with:
kiwix-serve --port=8000 wikipedia.zim
or launched from the application's "Tools" menu. A nice touch for those experimenting with both is that Kiwix defaults to TCP port 8000, XOWA to port 8080. The XOWA project also offers a Firefox extension that directs xowa: URIs to the local XOWA web server process.
Moving forward, it will be interesting to watch how both projects are affected by changes to Wikimedia's infrastructure. The XOWA internal documentation notes that Wikipedia is, at some point, planning to implement diff-style database update releases in addition to its full-database dumps. Incremental updates are one of the factors that makes OpenStreetMap so usable in offline mode, and the lack of such updates is the most painful part of using Kiwix and XOWA: waiting for those multi-gigabyte downloads to finish.
As unsatisfying as it may seem, neither application emerges as the clear winner for someone inspired to head off to a rustic cabin in the mountains and read Wikipedia at length. At its most basic, the trade-off would seem to be Kiwix's support for non-Wikimedia sites and its full-text search versus XOWA's cross-wiki link support and more predictable update process. Either will likely serve the casual user well.
The relationship between CentOS and Red Hat has always been interesting. Red Hat provides the source packages that, after removal of branding elements, are built into the CentOS release. By one measure, CentOS is sustaining freeloaders who want to benefit from Red Hat's work without paying for it. By another, CentOS helps Red Hat by bringing users into its ecosystem; some of those users eventually become paying Red Hat customers. So it is not surprising that users can see the recent acquisition of CentOS by Red Hat in two different lights: it's either an attempt to squash a competing distribution or an effort to sustain that distribution with much-needed support.
Either way, changes were always going to happen after the acquisition. CentOS users will certainly be happy about the first of those changes: support for CentOS developers so they can work on the distribution full time, and support for the infrastructure needed to keep CentOS going. But when CentOS project leader Karanbir Singh proposed a change to the seemingly trivial issue of version numbers, users were quick to express their disapproval.
Traditionally, CentOS releases have used the same version number as the RHEL release they are based on; CentOS 6.5 is a rebuild of the RHEL 6.5 release, for example. The CentOS developers now want to change to a scheme where the major number matches the RHEL major number, but the minor number is generated from the release date. So, if the CentOS version of RHEL 7.0 were to come out in July 2014, it might have a version number like 7.1407. Derivative releases from CentOS special interest groups (SIGs) would have an additional, SIG-specific tag appended to that number.
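The proposed scheme is simple enough to sketch. The following is an illustrative Python rendering of the numbering rule as described in the proposal; the function name and interface are invented here, not taken from any CentOS tooling:

```python
from datetime import date

def centos_version(rhel_major, release_date, sig_tag=None):
    # The minor number encodes the release date as YYMM,
    # e.g. a July 2014 release of the RHEL 7 rebuild becomes 7.1407
    version = f"{rhel_major}.{release_date:%y%m}"
    # SIG derivatives would append an additional, SIG-specific tag
    return f"{version}.{sig_tag}" if sig_tag else version

print(centos_version(7, date(2014, 7, 1)))            # 7.1407
print(centos_version(7, date(2014, 7, 1), "cloud"))   # 7.1407.cloud
```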
To the CentOS developers, this change offers a number of advantages. The close tie with RHEL version numbers, it is claimed, can confuse users into believing that a release is supported with security updates when it is not; see this detailed message from Johnny Hughes for an explanation of the reasoning there. Putting the release date into the version number makes the age of a release immediately obvious, presumably inspiring users to upgrade to current releases. This scheme would also make it easier to create releases that are not directly tied to RHEL releases; that is something that the SIGs, in particular, would like to be able to do.
Supporting the SIGs is a big part of the project's plan for the future in general. Karsten Wade described it this way:
So it seems that CentOS wants to follow Red Hat into the cloud. Simply providing a rebuild of RHEL is not as exciting as it once was, so the project wants to expand into other areas where, it is hoped, more users are to be found.
It should be possible to expand in this way as long as the core CentOS distribution remains what it has always been. Unfortunately, some users are worried that things will not be that way. Ljubomir Ljubojevic, the maintainer of the CentOS Facebook page, described his feelings about the change:
A large number of "me too" posts made it clear that Ljubomir is not alone in feeling this way. There is a lot of concern that the project might break the core distribution and that adopting a new version numbering scheme looks like a first step in that direction.
For their part, the CentOS developers have tried to address that concern. Karanbir stated directly that there is no plan to change how the core distribution is managed:
For the most part, the users in the discussion seemed to accept that promise, but that made them no happier about the version numbering change. The date-based numbers, they say, make it harder to know which version of RHEL a CentOS release is based on, and it can make it harder to justify installations (or upgrades) to management. All told, it was hard to find a single supportive voice for this change outside of the CentOS core developers.
Those developers have not said anything about what changes, if any, they might make to their plans in response to the opposition on the list. They are in a bit of a difficult position: they want to make changes aimed at attracting a broader set of users, but those changes appear threatening to their existing users, most of whom are quite happy with the distribution as it is now and are not asking for anything different. If the existing users start to feel that their concerns are not being heard, they may start to look for alternatives. In this case, the powers that be at CentOS may want to make a show of listening to those users and finding a way to resolve their version number concerns that doesn't appear to break the strong connection between RHEL and CentOS releases.
Like all computing platforms that allow users to install arbitrary applications, the Tizen project has expended considerable effort designing its security framework. At the 2014 Tizen Developer Conference (TDC), Casey Schaufler and Tomasz Świerczek presented a talk on the latest iteration of Tizen's application-security design, which introduces a privilege-checking service called Cynara.
The fundamental problem that Tizen faces in application security, Schaufler said, is that privileges are specified (in documents like the W3C APIs that Tizen supports for app development) with respect to abstract services, such as "telephony," rather than with respect to system components, such as network devices. All security policies attempt to bridge this gap by writing a set of rules and exceptions that map these abstract services onto specific devices and filesystem locations.
In Tizen 2.x, he explained, the system's security policy was written as a set of Smack rules that attempted to isolate individual applications from each other by creating a separate Smack domain for each installed application. Each app package includes a manifest file detailing the files and directories it creates, and the API privileges it requests. At install time, the system's package manager would read the manifest, create a domain for the new app, and assign a Smack label for that domain to each file and directory installed. It would also compute the new Smack rules that correspond to the new app's combination of privileges and its Smack domain, and add those rules to the system Smack policy.
The problem is that this level of granularity resulted in a huge policy database that was difficult to maintain. "It was almost as big as an SELinux policy," Schaufler said; "I had to go apologize to people at the Security Summit." The upcoming Tizen 3.0 changes things dramatically, however—starting with a simplified, three-domain policy model, which puts all installed apps at one basic privilege level, the "User" domain. It also defines a "Floor" domain for static system data that will not change and a "System" domain for basic system services. This model defines a well-known set of Smack rules (such as allowing all processes to access /tmp and /dev/null) that do not need to be appended to for every installed app.
The Tizen security team decided to revisit how the app privilege framework was implemented as well, so it held a "policy-off" face-to-face meeting at which representatives from Intel and Samsung offices each presented their ideas. When the two offices presented essentially the same design, they decided to move forward with it.
The centerpiece of the new plan is a policy "service" called Cynara. Each installed app is still assigned its own unique Smack label (to protect its private files and directories), but rather than creating a new set of Smack rules and exceptions for each privilege an app requests, Cynara creates a shorter record of the label and its privileges. The complicated mapping between the set of available privileges and the system's resources is created beforehand and is implemented in the Smack rule set, but does not grow for every new app.
When a running app requests access to a system component (for example, the current geolocation reading), the component sends a cynara_check() query to Cynara, including the app's Smack label, the user ID that the app is running as, and the name of the privilege the app is requesting. The Cynara service returns either ALLOW or DENY, based on whether the policy database indicates that the combination of Smack label and privilege is allowed. Other return values are also supported, the speakers said, such as "Ask the user," but the essence is that a straightforward yes-or-no question is answered.
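The talk described the interaction only at a high level. As a rough sketch of the logic—the labels, privilege names, and data structures below are invented for illustration, and the real Cynara service is a C library, not Python—the check amounts to a lookup keyed on the triple the component passes in:

```python
ALLOW, DENY = "ALLOW", "DENY"

# Hypothetical policy database: (Smack label, user ID, privilege) -> allowed.
# The label and privilege strings here are made up for illustration.
policy = {
    ("User::App::maps-app", "5001", "location"): True,
}

def cynara_check(label, user, privilege):
    # A straightforward yes-or-no answer, keyed on the app's Smack
    # label, the user ID it runs as, and the requested privilege
    return ALLOW if policy.get((label, user, privilege), False) else DENY

print(cynara_check("User::App::maps-app", "5001", "location"))  # ALLOW
print(cynara_check("User::App::maps-app", "5001", "contacts"))  # DENY
```

The key point is that the per-app record stays short: one row per granted privilege, rather than a new set of Smack rules per app.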
Thus, the Cynara API is quite simple, but the real benefits come from maintaining the simpler database of allowed privileges. In performance testing, Świerczek said, the average response time was under 10ms, as opposed to more than 30ms for some of the alternative solutions they explored—such as PolKit. He also noted that PolKit performance suffers due to some design decisions, such as its use of D-Bus for communication and its use of JSON and XML to store the policy database. That database format meant that the entire policy had to be read and parsed for every call; Cynara, in contrast, stores its database in SQLite.
The two then described the current state of Cynara development and outlined a rough roadmap. The core privilege-checking library is operational, they said, but is not yet working as a full-fledged service. That milestone would likely be reached by the end of June, as would the utilities for updating the policy database. The essential tools necessary for deployment should be in place by the end of July, after which the team would work on adding an asynchronous privilege-checking API and a mechanism for adding extensions to the system's security policy.
There were several questions from the audience, many of which concerned how Tizen uses Smack labels. For example, one audience member asked whether there was a possibility that two apps could accidentally or maliciously get assigned the same Smack labels when installed—which would cause several security problems. Schaufler explained that apps do not choose or assign their own Smack labels; the package manager does. In the Tizen 2.2 release, the Smack label is created from the app's cryptographic signature, so it is guaranteed to be unique (barring collisions, of course).
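The talk did not detail how the label is derived from the signature; a plausible sketch of such a derivation—the function, label prefix, and truncation length below are all hypothetical, not Tizen's actual implementation—is simply to hash the signature into a fixed-length string:

```python
import base64
import hashlib

def label_from_signature(signature: bytes) -> str:
    # Hashing the package signature yields a fixed-length label that is,
    # barring hash collisions, unique per app; the "App::" prefix and the
    # 16-character truncation are illustrative choices, not Tizen's
    digest = hashlib.sha256(signature).digest()
    return "App::" + base64.urlsafe_b64encode(digest)[:16].decode("ascii")
```

Because the label is computed by the package manager from data the app cannot freely choose, two apps cannot simply declare the same label for themselves.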
Perhaps the most difficult aspect of the system to grasp in a 40-minute conference talk is how the Cynara approach to storing security privileges compares in real-world terms to the older Tizen approach of storing a longer, more convoluted set of Smack rules. The time available made a detailed comparison impossible, and the real-world test will have to wait for the deployment of actual apps—some of which, no doubt, will test the security framework in ways its creators have not yet contemplated. Cynara, however, promises a simpler way to keep track of privileges and access-control rules, so hopefully it will also make it simpler to catch—and fix—problems.
[The author would like to thank the Tizen Association for travel assistance to attend TDC 2014.]
|Created:||June 10, 2014||Updated:||June 11, 2014|
|Description:||From the Red Hat bugzilla:
LSE Leading Security Experts GmbH discovered that the Check_MK agent (Nagios plugin) processed files from the /var/lib/check_mk_agent/job directory which had 1777 permissions. The mk-job program did not check whether any files in this directory were symbolic or hard links. Due to the permissions of this directory, any user could add a symbolic or hard link to any file on the filesystem, and because the Check_MK agent ran as the root user, it could expose arbitrary files via the agent, which exposes all the contents of this directory on TCP port 6556 by default.
This can be worked around by setting mode 0755 on /var/lib/check_mk_agent/job (removing the sticky bit).
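The missing check is easy to sketch: a privileged program reading from a world-writable (1777) directory must verify what it is about to open. The following Python fragment (mk-job itself is not written in Python; this only illustrates the test that was absent) rejects both symbolic and hard links:

```python
import os
import stat

def safe_to_process(path):
    # lstat() does not follow symlinks, so a planted symlink shows up as
    # a link rather than a regular file; st_nlink > 1 reveals a hard link
    # to a file that also lives elsewhere on the filesystem
    st = os.lstat(path)
    return stat.S_ISREG(st.st_mode) and st.st_nlink == 1
```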
|Package(s):||dpkg||CVE #(s):||CVE-2014-3864 CVE-2014-3865|
|Created:||June 9, 2014||Updated:||July 21, 2014|
|Description:||Multiple vulnerabilities were discovered in dpkg that allow file modification through path traversal when unpacking source packages with especially-crafted patch files.|
|Created:||June 6, 2014||Updated:||April 10, 2015|
From the Gentoo advisory:
A boundary error exists within the "TLS_readline()" function, which can be exploited to overflow a global buffer by sending an overly long encrypted HTTP reply to Echoping. Also, a similar boundary error exists within the "SSL_readline()" function, which can be exploited in the same manner.
A remote attacker could send a specially crafted HTTP reply, possibly resulting in a Denial of Service condition.
|Created:||June 11, 2014||Updated:||June 11, 2014|
|Description:||From the CVE entry:
Multiple stack-based buffer overflows in Icinga before 1.8.5, 1.9 before 1.9.4, and 1.10 before 1.10.2 allow remote authenticated users to cause a denial of service (crash) and possibly execute arbitrary code via a long string to the (1) display_nav_table, (2) page_limit_selector, (3) print_export_link, or (4) page_num_selector function in cgi/cgiutils.c; (5) status_page_num_selector function in cgi/status.c; or (6) display_command_expansion function in cgi/config.c. NOTE: this can be exploited without authentication by leveraging CVE-2013-7107.
|Created:||June 5, 2014||Updated:||July 23, 2014|
|Description:||From the Debian advisory:
Pinkie Pie discovered an issue in the futex subsystem that allows a local user to gain ring 0 control via the futex syscall. An unprivileged user could use this flaw to crash the kernel (resulting in denial of service) or for privilege escalation.
|Created:||June 6, 2014||Updated:||September 23, 2014|
From the Red Hat bug report:
Linux kernel built with the system-call auditing support (CONFIG_AUDITSYSCALL) is vulnerable to a kernel crash or information disclosure flaw caused by out of bounds memory access. It could occur when system call audit rules are configured on a system. Administrative privileges are required to add such audit rules.
When system call audit rules are present on a system, an unprivileged user/program could use this flaw to leak kernel memory bytes or crash the system, resulting in a DoS.
|Package(s):||kfreebsd-9||CVE #(s):||CVE-2014-1453 CVE-2014-3000 CVE-2014-3880|
|Created:||June 6, 2014||Updated:||June 11, 2014|
From the Debian advisory:
CVE-2014-1453: A remote, authenticated attacker could cause the NFS server to become deadlocked, resulting in a denial of service.
CVE-2014-3000: An attacker who can send a series of specifically crafted packets with a connection could cause a denial of service situation by causing the kernel to crash.
Additionally, because the undefined on-stack memory may be overwritten by other kernel threads, while difficult, it may be possible for an attacker to construct a carefully crafted attack to obtain a portion of kernel memory via a connected socket. This may result in the disclosure of sensitive information such as login credentials, etc. before or even without crashing the system.
CVE-2014-3880: A local attacker can trigger a kernel crash (triple fault) with potential data loss, related to the execve/fexecve system calls. Reported by Ivo De Decker.
|Created:||June 11, 2014||Updated:||September 17, 2014|
|Description:||From the Ubuntu advisory:
It was discovered that Libav incorrectly handled certain malformed media files. If a user were tricked into opening a crafted media file, an attacker could cause a denial of service via application crash, or possibly execute arbitrary code with the privileges of the user invoking the program.
|Created:||June 5, 2014||Updated:||June 11, 2014|
|Description:||From the Debian advisory:
Several security issues have been corrected in multiple demuxers and decoders of the libav multimedia library. A full list of the changes is available at http://git.libav.org/?p=libav.git;a=blob;f=Changelog;hb=refs/tags/v0.8.12
|Created:||June 6, 2014||Updated:||June 13, 2014|
From the Mageia advisory:
XSS vulnerability in MediaWiki before 1.22.7, due to usernames on Special:PasswordReset being parsed as wikitext. The username on Special:PasswordReset can be supplied by anyone and will be parsed with wgRawHtml enabled. Since Special:PasswordReset is whitelisted by default on private wikis, this could potentially lead to an XSS crossing a privilege boundary (CVE-2014-3966).
|Created:||June 10, 2014||Updated:||March 29, 2015|
|Description:||From the Red Hat bugzilla:
Steve Kemp discovered the _rl_tropen() function in readline, a set of libraries to handle command lines, insecurely handled a temporary file. This could allow a local attacker to perform symbolic link attacks. As noted in the CVE request, _rl_tropen() is typically only called during debugging.
|Package(s):||firefox thunderbird seamonkey||CVE #(s):||CVE-2014-1533 CVE-2014-1538 CVE-2014-1541|
|Created:||June 11, 2014||Updated:||August 11, 2014|
|Description:||From the CVE entries:
Multiple unspecified vulnerabilities in the browser engine in Mozilla Firefox before 30.0, Firefox ESR 24.x before 24.6, and Thunderbird before 24.6 allow remote attackers to cause a denial of service (memory corruption and application crash) or possibly execute arbitrary code via unknown vectors. (CVE-2014-1533)
Use-after-free vulnerability in the nsTextEditRules::CreateMozBR function in Mozilla Firefox before 30.0, Firefox ESR 24.x before 24.6, and Thunderbird before 24.6 allows remote attackers to execute arbitrary code or cause a denial of service (heap memory corruption) via unspecified vectors. (CVE-2014-1538)
Use-after-free vulnerability in the RefreshDriverTimer::TickDriver function in the SMIL Animation Controller in Mozilla Firefox before 30.0, Firefox ESR 24.x before 24.6, and Thunderbird before 24.6 allows remote attackers to execute arbitrary code or cause a denial of service (heap memory corruption) via crafted web content. (CVE-2014-1541)
|Package(s):||iceweasel firefox thunderbird seamonkey||CVE #(s):||CVE-2014-1545|
|Created:||June 11, 2014||Updated:||July 17, 2014|
|Description:||From the CVE entry:
Mozilla Netscape Portable Runtime (NSPR) before 4.10.6 allows remote attackers to execute arbitrary code or cause a denial of service (out-of-bounds write) via vectors involving the sprintf and console functions.
|Package(s):||firefox thunderbird seamonkey||CVE #(s):||CVE-2014-1534 CVE-2014-1536 CVE-2014-1537 CVE-2014-1540 CVE-2014-1542|
|Created:||June 11, 2014||Updated:||January 26, 2015|
|Description:||From the CVE entries:
Multiple unspecified vulnerabilities in the browser engine in Mozilla Firefox before 30.0 allow remote attackers to cause a denial of service (memory corruption and application crash) or possibly execute arbitrary code via unknown vectors. (CVE-2014-1534)
The PropertyProvider::FindJustificationRange function in Mozilla Firefox before 30.0 allows remote attackers to execute arbitrary code or cause a denial of service (out-of-bounds read) via unspecified vectors. (CVE-2014-1536)
Use-after-free vulnerability in the mozilla::dom::workers::WorkerPrivateParent function in Mozilla Firefox before 30.0 allows remote attackers to execute arbitrary code or cause a denial of service (heap memory corruption) via unspecified vectors. (CVE-2014-1537)
Use-after-free vulnerability in the nsEventListenerManager::CompileEventHandlerInternal function in the Event Listener Manager in Mozilla Firefox before 30.0 allows remote attackers to execute arbitrary code or cause a denial of service (heap memory corruption) via crafted web content. (CVE-2014-1540)
Buffer overflow in the Speex resampler in the Web Audio subsystem in Mozilla Firefox before 30.0 allows remote attackers to execute arbitrary code via vectors related to a crafted AudioBuffer channel count and sample rate. (CVE-2014-1542)
|Package(s):||MySQL||CVE #(s):||CVE-2013-4316 CVE-2013-5860 CVE-2013-5881 CVE-2013-5882 CVE-2013-5894 CVE-2014-0427 CVE-2014-0430 CVE-2014-0431 CVE-2014-0433 CVE-2014-2434 CVE-2014-2435 CVE-2014-2442 CVE-2014-2444 CVE-2014-2450 CVE-2014-2451|
|Created:||June 9, 2014||Updated:||June 11, 2014|
|Description:||Another batch of unspecified vulnerabilities in MySQL.|
|Package(s):||openssl||CVE #(s):||CVE-2014-0195 CVE-2014-0221 CVE-2014-3470|
|Created:||June 5, 2014||Updated:||August 14, 2014|
|Description:||From the Red Hat advisory:
A buffer overflow flaw was found in the way OpenSSL handled invalid DTLS packet fragments. A remote attacker could possibly use this flaw to execute arbitrary code on a DTLS client or server. (CVE-2014-0195)
A denial of service flaw was found in the way OpenSSL handled certain DTLS ServerHello requests. A specially crafted DTLS handshake packet could cause a DTLS client using OpenSSL to crash. (CVE-2014-0221)
A NULL pointer dereference flaw was found in the way OpenSSL performed anonymous Elliptic Curve Diffie Hellman (ECDH) key exchange. A specially crafted handshake packet could cause a TLS/SSL client that has the anonymous ECDH cipher suite enabled to crash. (CVE-2014-3470)
|Created:||June 5, 2014||Updated:||July 24, 2014|
|Description:||From the Red Hat advisory:
It was found that OpenSSL clients and servers could be forced, via a specially crafted handshake packet, to use weak keying material for communication. A man-in-the-middle attacker could use this flaw to decrypt and modify traffic between a client and a server. (CVE-2014-0224)
More information is available in this blog post by Masashi Kikuchi, who discovered the bug.
|Created:||June 5, 2014||Updated:||August 15, 2014|
|Description:||From the Debian advisory:
It was discovered that Bottle, a WSGI framework for Python, performed too-permissive detection of JSON content, resulting in a potential bypass of security mechanisms.
|Package(s):||qemu||CVE #(s):||CVE-2014-0222 CVE-2014-0223 CVE-2014-3461|
|Created:||June 10, 2014||Updated:||September 15, 2014|
|Description:||From the Red Hat bugzilla:
CVE-2014-0223: Qemu block driver for the QCOW version 1 image format is vulnerable to an integer overflow flaw. It occurs due to weak input validations or logic errors. Such integer overflow could lead to buffer overflows, memory corruption or crash in Qemu instance.
A user able to alter the Qemu disk image files loaded by a guest could use this flaw to crash the Qemu instance, resulting in a DoS, or corrupt QEMU process memory on the host, which could potentially result in arbitrary code execution on the host with the privileges of the QEMU process.
CVE-2014-0222: Qemu block driver for the QCOW version 1 image format is vulnerable to an integer overflow flaw. It occurs due to weak input validations or logic errors. Such integer overflow could lead to buffer overflows, memory corruption or crash in Qemu instance.
A user able to alter the Qemu disk image files loaded by a guest could use this flaw to crash the Qemu instance, resulting in a DoS, or corrupt QEMU process memory on the host, which could potentially result in arbitrary code execution on the host with the privileges of the QEMU process.
CVE-2014-3461: Correct post load checks:
|Created:||June 6, 2014||Updated:||March 29, 2015|
|Description:||From the Slackware advisory:
This release fixes one security related bug by properly closing file descriptors (except stdin, stdout, and stderr) before executing programs. This bug could enable local users to interfere with an open SMTP connection if they can execute their own program for mail delivery (e.g., via procmail or the prog mailer).
Page editor: Jake Edge
Brief items
The 3.15 kernel was released on June 8. Headline features in 3.15 include some significant memory management improvements, the renameat2() system call, file-private POSIX locks, a new device mapper target called dm-era, faster resume from suspend, and more.
The 3.16 merge window remains open as of this writing; see the separate summary below for details of what has been merged. Linus noted that, while overlapping the 3.16 merge window with the final 3.15 stabilization worked well enough, he is not necessarily inclined to do it every time. "I also don't think it was such a wonderful experience that I'd want to necessarily do the overlap every time, without a good specific reason for doing so. It was kind of nice being productive during the last week or rc (which is usually quite boring and dead), but I think it might be a distraction when people should be worrying about the stability of the rc."
So perhaps we should be using robust software engineering processes rather than academic peer review as the model for our code review process?
If you (vendors [...]) do not want to play (and be explicit and expose how your hardware functions) then you simply will not get power efficient scheduling full stop.
There's no rocks to hide under, no magic veils to hide behind. You tell _in_public_ or you get nothing.
Kernel development news
This is the second installment of our coverage of the 3.16 merge window. See last week's article for a rundown of what happened in the first few days of the window. Since then, Linus Torvalds has returned to the master branch of his repository after merging back 6800 or so non-merge commits from his next branch. At this point, he has merged 8179 patches for 3.16, an increase of 2831 since last week's article.
Here are some of the larger changes visible to users:
Kernel developers will see the following changes:
We should be most of the way through the merge window at this point, but there may still be merges of interest in the next few days. Stay tuned for next week's thrilling conclusion.

Recently posted for review once again is the "budget fair queuing" (BFQ) I/O scheduler, which brings some interesting new ideas to this part of the kernel.
BFQ, which has been developed and used out of tree for some years, is, in many ways, modeled after the "completely fair queuing" (CFQ) I/O scheduler currently found in the mainline kernel. CFQ puts each process's I/O requests into a separate queue, then rotates through the queues, trying to divide the available bandwidth as fairly as it can. CFQ does a reasonably good job and is normally the I/O scheduler of choice for rotating drives, but it is not without its problems. The code has gotten more complex over the years as attempts have been made to improve its performance, but, despite the added heuristics, it can still create I/O latencies that are longer than desired.
The BFQ I/O scheduler also maintains per-process queues of I/O requests, but it does away with the round-robin approach used by CFQ. Instead, it assigns an "I/O budget" to each process. This budget is expressed as the number of sectors that the process is allowed to transfer when it is next scheduled for access to the drive. The calculation of the budget is complicated (more on this below), but, in the end, it is based on each process's "I/O weight" and observations of the process's past behavior. The I/O weight functions like a priority value; it is set by the administrator (or by default) and is normally constant. Processes with the same weight should all get the same allocation of I/O bandwidth. Different processes may get different budgets, but BFQ tries to preserve fairness overall, so a process getting a smaller budget now will get another turn at the drive sooner than a process that was given a large budget.
When it comes time to figure out whose requests should be serviced, BFQ examines the assigned budgets and, to simplify a bit, it chooses the process whose I/O budget would, on an otherwise idle disk, be exhausted first. So processes with small I/O budgets tend not to wait as long as those with large budgets. Once a process is selected, it has exclusive access to the storage device until it has transferred its budgeted number of sectors, with a couple of exceptions. Those are:
There is still the question of how each process's budget is assigned. In its simplest form, the algorithm is this: each process's budget is set to the number of sectors it transferred the last time it was scheduled, subject to a systemwide maximum. So processes that tend to do small transfers then stop for a while will get small budgets, while I/O-intensive processes will get larger budgets. The processes with the smaller budgets, which often tend to be more sensitive to latency, will be scheduled more often, leading to a more responsive system. The processes doing a lot of I/O may wait a bit longer, but they will get an extended time slice with the storage device, allowing the transfer of a large amount of data and, hopefully, good throughput.
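The budget mechanism described above can be illustrated with a toy model. This is a sketch of the idea only, not the kernel's implementation; the names, numbers, and selection rule are simplified inventions for illustration:

```python
# Toy model of BFQ-style budget scheduling: each process's next budget
# is the number of sectors it transferred last time, capped at a
# systemwide maximum.  The scheduler services the queue whose budget
# would be exhausted first, so light, latency-sensitive I/O waits less.

MAX_BUDGET = 4096  # sectors; an invented systemwide cap

class Queue:
    def __init__(self, name, budget):
        self.name = name
        self.budget = budget

def pick_next(queues):
    # Simplified selection: the queue whose budget would be exhausted
    # first on an otherwise idle disk.
    return min(queues, key=lambda q: q.budget)

def complete_service(queue, sectors_transferred):
    # The next budget is based on observed behavior, subject to the cap.
    queue.budget = min(sectors_transferred, MAX_BUDGET)

queues = [Queue("editor", budget=64), Queue("backup", budget=4096)]
first = pick_next(queues)       # the small-budget, interactive queue wins
complete_service(first, 32)     # it transferred little, so its budget shrinks
```

In this model, a process doing small, intermittent transfers keeps a small budget and is scheduled often, while a bulk writer earns a large budget and, with it, long exclusive (and throughput-friendly) turns at the device.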
Some experience with BFQ has evidently shown that the above-described algorithm can yield good results, but that there is room for improvement in a number of areas. The current posting of the code has, in response, added a set of heuristics intended to push the behavior of the system in the desired direction. These include:
The list of heuristics is longer than this, but one should get the idea: tuning the I/O patterns of a system to optimize for a wide range of workloads is a complex task. From the results posted by BFQ developer Paolo Valente, it seems that a fair amount of success has been achieved. The task of getting this code into the mainline may be just a little bit harder, though.
If BFQ does have a slow path into the mainline, it will not be because the kernel developers dislike it; indeed, almost all of the comments have been quite positive. The results speak for themselves, but there was also a lot of happiness about how the scheduler has been studied and all of the heuristics have been extensively described and tested. The CFQ I/O scheduler also contains a lot of heuristics, but few people understand what they are or how they work. BFQ appears to be a cleaner and much better documented alternative.
What the kernel developers do not want to see, though, is the merging of another complex I/O scheduler that tries to fill the same niche as CFQ. Instead, they would like to see a set of patches that evolves CFQ into BFQ, leaving the kernel with a single, improved I/O scheduler. As Tejun Heo put it:
Changing CFQ in an evolutionary way would also help when the inevitable performance regressions turn up. Finding the source of regressions in BFQ could be challenging; bisecting a series of changes to CFQ would, instead, point directly to the offending change.
The BFQ scheduler has been around for a while, and has seen a fair amount of use. Distributions like Sabayon and OpenMandriva ship it, as does CyanogenMod. It seems to be a well-proven technology. All that's needed is some time put into packaging it properly for inclusion into the mainline. Once that has been done, more extensive performance testing can be done. After any issues found there are resolved, this scheduler could replace CFQ (or, more properly, become the future CFQ) in the kernel relatively quickly.
(See this paper [PDF] for a lot more information on how BFQ works.)

The reworking of the control group subsystem has been under discussion for years; see this article from early 2012, for example. However, that talk has not yet translated into much in the way of user-visible changes to the kernel. That situation will change in the 3.16 release, which will include the new unified control group hierarchy code. This article will be an overview of how the unified hierarchy will work at the user level.
At its core, the control group subsystem is simply a way of organizing processes into hierarchies; controllers can then be applied to the hierarchies to enforce policies on the processes contained therein. From the beginning, control groups have allowed the creation of multiple hierarchies, each of which can contain a different mix of processes. So one could, for example, create one hierarchy and attach the CPU scheduler controller to it. Another hierarchy could be created for the memory controller; it could contain the same processes, but with a different organization. That would allow memory usage policy to be applied to different groupings of the same processes.
This flexibility has a certain appeal, but it has its costs. It can be expensive for the kernel to keep track of all the controllers that apply to a given process. Controllers also cannot effectively cooperate with each other, since they may be operating on entirely different hierarchies. In some cases (memory and block I/O bandwidth control, for example), better cooperation is needed to effectively control resource use. And, in the end, there has been little real-world use of this feature. So the plan has long been to get rid of the multiple-hierarchy feature, though it has always been known that this change would take a long time to effect fully.
Work on the unified control group hierarchy has been underway for some time, with much of the preparatory work being merged into the 3.14 and 3.15 kernels. In 3.16, this feature will be available, but only to users who ask for it explicitly. To use the unified hierarchy, the new control group virtual filesystem should be mounted with a command like:
mount -t cgroup -o __DEVEL__sane_behavior cgroup <mount-point>
Obviously, the __DEVEL__sane_behavior option is not intended to be a permanent fixture. It may still be some time, though, before the unified hierarchy becomes available as a default feature.
It is worth noting that the older, multiple-hierarchy mode continues to work even if the unified hierarchy mode is used; it will be kept around for as long as it seems to be needed. The unified hierarchy can be instantiated alongside older hierarchies, but controllers cannot be shared between the unified hierarchy and any others. The care that has been taken in this area should allow users to experiment with the unified mode while avoiding changes that would break existing systems.
In current kernels, controllers are attached to control groups by specifying options to the mount command that creates the hierarchy. In the unified hierarchy world, instead, all controllers are attached to the root of the hierarchy. (Strictly speaking that's not quite true; controllers attached to old-style hierarchies will not be available in the unified hierarchy, but that's a detail that can be ignored for now). Controllers can be enabled for specific subtrees of the hierarchy, subject to a small set of rules. For the purposes of illustrating these rules, imagine a control group hierarchy like the one shown on the right; groups A and B live directly under the root control group, while C and D are children of B.
Each control group in the hierarchy has (in its associated control directory) a file called cgroup.controllers that lists the controllers that can be enabled for children of that group. Another file, cgroup.subtree_control, lists the controllers that are actually enabled; writing to that file can turn controllers on or off. It is worth repeating that these files manage the controllers attached to the children of the group; in the unified hierarchy, a control group is thought of as delegating its resources to subgroups for management. There are some interesting implications resulting from this design.
One of those is that a control group must apply a controller to all of its children or none. If the memory controller is enabled in B's cgroup.subtree_control file, it will apply to both C and D; there is no way (from B's point of view) to apply the controller to only one of those subgroups. Further, a controller can only be enabled in a specific control group if it is enabled in that group's parent; a controller cannot be enabled in group C unless it is already enabled in group B. That suggests that all controllers that are actually meant to be used must be enabled in the root control group, at which point they will apply to the entire hierarchy. It is, however, possible to disable a controller at a lower level. So, if the CPU controller is enabled in the root, it can be disabled in group A, exempting all of A's descendant groups from CPU control.
Another new rule is that the cgroup.subtree_control file can only be used to change the set of active controllers if the associated group contains no processes. So, for example, if group B has controllers enabled in its cgroup.subtree_control file, it cannot contain any processes; those processes must all be placed into group C or D. This rule prevents situations where processes in the parent control group are competing with those in the child groups — situations that current controllers handle inconsistently and, often, badly. The one exception to the "no processes" rule is the root control group.
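The rules described above can be captured in a small model. This is a sketch of the semantics only, not kernel code; the class and method names are invented, and real control happens by writing strings like "+memory" to the cgroup.subtree_control file:

```python
# Model of two unified-hierarchy rules: a controller may be enabled for
# a group's children only if the parent has enabled it for this group,
# and a group with processes cannot delegate (the root is exempt).

class CGroup:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.subtree_control = set()  # controllers enabled for children
        self.processes = set()

    def controllers(self):
        # cgroup.controllers: what may be enabled for our children --
        # whatever the parent delegated to us (everything at the root).
        if self.parent is None:
            return {"cpu", "memory", "io"}
        return set(self.parent.subtree_control)

    def enable(self, controller):
        if controller not in self.controllers():
            raise ValueError("not enabled in the parent group")
        if self.processes and self.parent is not None:
            raise ValueError("a group with processes cannot delegate")
        self.subtree_control.add(controller)

root = CGroup("root")
root.enable("memory")
b = CGroup("B", parent=root)
b.enable("memory")   # allowed: memory is enabled in the parent's subtree_control
```

Trying to enable a controller in B that the root has not delegated, or enabling anything in B once it contains processes, fails in this model, mirroring the constraints the unified hierarchy enforces.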
One other control file found in the unified hierarchy is called cgroup.populated; reading it will return a nonzero value if there are any processes in the group (or its descendants). By using poll() on this file, a process can be notified if a control group becomes completely empty; the process would presumably respond by cleaning up and removing the group. Current kernels, instead, create a helper process to provide the notification; this technique has been frowned on for years.
The unified hierarchy will allow a privileged process to delegate access to control group functionality by changing the owner of the associated control files. But this delegation only works to an extent: an unprivileged process with access to the control files can create child control groups and move processes between groups, but it cannot change any controller settings. This policy is there partly to keep unprivileged processes from disrupting the system, but the intent is also to restrict access to the more advanced control knobs. These knobs are currently deemed to expose too much information about the kernel's internals, so there is a desire to avoid having applications depend on them.
All of this work has been extensively discussed for years, with most of the major users of control groups having had their say. So it should be suitable for most of the known uses today, but that is no substitute for actually seeing things work. The 3.16 kernel will provide an opportunity for interested users to try out the new mode and find out which problems remain; actual migration by users to the new scheme cannot be expected to happen for a few more development cycles at the earliest, though. But, at some point, the control group rework will cease being something that's mostly talked about and become just another big job that eventually got done.
Patches and updates
Core kernel code
Filesystems and block I/O
Page editor: Jonathan Corbet
Having a bunch of people with laptops and smartphones in fairly close proximity (e.g. a user group meeting or family reunion) would seem like an opportunity to share data among them, but that generally is not quite as simple as it ought to be. Without a WiFi access point of some kind, the devices probably won't even talk to each other—besides which, without a server, there's no easy way to actually share the data. An access point connected to the internet might solve both problems, but introduces others—a lack of privacy and anonymity to start with. An access point that can also act as a server, but is not connected to the internet, takes care of those problems—so that's exactly what the PirateBox project has set out to create.
The project is not making hardware and, due to the existence of the OpenWrt project, it doesn't actually have to write much software. But pulling together those pieces and adding some tweaks for ease-of-installation and configuration of the services leads to a simple way to get a WiFi mini-server up and running quickly—and cheaply. All that's needed is one of three TP-Link wireless router models and a USB flash drive—all of which can be had for $35 or less.
Installing PirateBox is quite straightforward. I tried it on the TP-Link MR3040, which went smoothly. The process is easy to follow, requiring an install_piratebox.zip file and the appropriate squashfs filesystem for the device. Once it is powered up, connecting to the device over ethernet and logging into the administrative interface (which was on 192.168.1.1 for the MR3040, contrary to the PirateBox instructions) allows you to upgrade the firmware using the squashfs image. Once that completes, you can log into the new PirateBox using telnet, set the password (which will enable sshd), and continue on from there.
That new installation procedure is one of the headline features for the PirateBox 1.0 release that was made at the end of May. In addition, the "distribution" now includes a Universal Plug and Play (UPnP) media server and a "4chan-style" image and message board. Beyond that, the in-browser chat and file-sharing functionality is still present from earlier versions. All of that comes in a portable, battery-driven package that allows sharing and collaboration in a remote location for up to five hours.
There are a few manual steps required to finish the installation, but that is mostly just customization (passwords, UPnP display name, and so forth). The PirateBox is a working OpenWrt installation, so additional customization via new packages is also possible. Ebook library servers, wikis, OpenStreetMap servers, and so on, are all possibilities. It's only a question of what kind of data you want the PirateBox to share.
On the MR3040, which is meant to be used with a USB cellular modem, the flash drive that provides the storage for the server occupies the USB interface, which means that the device can't provide its usual service of routing traffic to the internet. That is clearly by design, as the PirateBox is meant to enable anonymity. Users simply connect to the "PirateBox - Share Freely" SSID and open a browser, which will be redirected to the main server page (seen at right). There are no logins or passwords and any names associated with chats or message board postings are completely up to the user. The PirateBox does not log any user information, either. Because it is not connected to the internet, there is no easy way for information to leak, even if personally identifiable information is entered.
Many kinds of files can be stored and retrieved from the PirateBox using a variety of mechanisms. The UPnP server allows streaming media (video and audio) directly to devices, either dedicated playback devices or smartphones and computers running UPnP clients. I tried both MediaHouse UPnP/DLNA Browser and Slick UPnP on my Android phone, which seemed to work just fine.
Smartphone browsers can connect to the PirateBox too, of course. But perhaps the most interesting smartphone initiative is the PirateBox app for Android. It turns an Android phone into a PirateBox, though without the UPnP server. It is a perfect way to repurpose an old SIM-less Android phone that you may have hanging around. Another possibility is the Raspberry Pi(rate)Box, which puts the PirateBox software onto the ubiquitous Raspberry Pi single-board computer.
PirateBox has a forum to discuss both developing and using the tool. The PirateBox team consists of creator David Darts, lead developer Matthias Strubel, and a handful of other core developers. Others are encouraged to participate via the Forum, IRC channel, and PirateBox Camp, which will be held July 12-13 in Lille, France.
One area that could use some attention is security updates. There seems to be little mention of that at the PirateBox site. It is based on OpenWrt, which has its own set of package update problems. Reinstalling new versions may be the only sensible route to updating packages with security problems. In any case, explaining that and documenting the process would certainly be helpful.
Once you start thinking about uses for the PirateBox, more and more ideas seem to spring up. A FAQ entry lists a number of places where it has been used, including by musicians to share their music at gigs, by emergency response workers to publish information and updates, and by conference organizers for conference materials and local comments. The FAQ also describes efforts to use mesh networking between PirateBox systems; an alpha version of that feature is slated for the next release. Overall, it is an interesting and useful tool that is worth a look for your local file sharing needs.
Newsletters and articles of interest
Page editor: Rebecca Sobol
One of the more entertaining aspects of the 2014 Tizen Developer Conference (TDC) was the opportunity to explore applications written for "wearable" devices. These days, the term "wearable" is often code for "smartwatch"—although as Google Glass shows, other classes of hardware device are certainly possible. The first Tizen-based smartwatch to reach the market is Samsung's Gear 2 series, which is supported by a special version of the project's SDK. Between the SDK and the wearable-devices track at the conference, it is now possible to see what device manufacturers have in mind for smartwatch software development. The focus is primarily on tethering to other devices (starting with smartphones and tablets), but there is also a framework in place for standalone apps—a prospect that probably makes executives in the "health tracker" industry break into a cold sweat.
The Tizen SDK for Wearable was first released in beta form in March; the most recent update landed on April 22. As with the other Tizen SDKs, the integrated development environment (IDE) is based on Eclipse, bundled with the appropriate Tizen templates, frameworks, emulator targets, and documentation. While the SDK does include support for the APIs defined in the official Tizen Wearable device profile, developing apps for the Gear 2 smartwatch requires an additional step: installing Samsung's Mobile SDK. It includes the component needed for developing the Android side of apps that tether the Gear 2 to a smartphone or tablet.
As one would expect, the Tizen SDK is cross-platform, although the only officially supported Linux options are relatively old Ubuntu releases (12.04 and 12.10). The IDE is also a bit picky about Java releases, preferring Oracle's Java 6, and may require some additional tweaking on newer Ubuntu releases. The licensing of the SDK can also present some issues; it is based primarily on Eclipse (as mentioned above), but it also bundles in several proprietary components from Samsung. The complications that result for free-software developers have been raised in the past, so far without much in the way of resolution.
The SDK also includes a suite of sample applications to experiment with, either in the built-in device emulator or on a USB-connected smartwatch. Tizen's wearable apps are entirely HTML5—while the other device profiles include native app frameworks, too, the assumption seems to be that resource constraints of smartwatch hardware make HTML5 the only viable option.
At this point, the Tizen Wearable profile uses the WebKit-based web runtime engine that shipped with the most recent Tizen release, version 2.2. Interestingly enough, at TDC, there were several sessions devoted to rolling out a new web runtime called Crosswalk, which is based largely on Google's Blink with several additions pulled in from other projects. The Tizen 3.0 release scheduled for later this year will include the shift to Crosswalk; how this will affect the SDK for wearables is not clear.
Wearable devices in Tizen support a rather limited subset of the same HTML5 APIs found in the project's mobile device profile. For example, from the W3C Device APIs (a set that encompasses quite a few interfaces), only the Touch Events, DeviceOrientation, Battery Status, and Vibration APIs are supported. Multimedia support includes both audio and video, and, notably, includes media input via getUserMedia (in order to support smartwatch cameras, which some Gear 2 models include). In addition, there are a number of Tizen-specific APIs available, covering alarms, filesystem access, time and time-zone information, and multimedia content discovery.
The Tizen project has always maintained that original APIs like these are mere placeholders, and will be replaced with W3C-approved APIs when and if applicable standard APIs are developed. What is a bit less clear is whether that same stance applies to some of the Samsung-specific APIs offered on the Gear 2.
At the moment there are three such APIs: one covering motion-sensing data (e.g., for pedometer usage), one covering infrared transmitters (to support the Gear 2's IR LED, which is used by several TV remote-control apps), and the Samsung Accessory Protocol (SAP). For practical purposes, SAP is the most noteworthy of the three, since it encapsulates how the Gear 2 watch communicates with a tethered mobile device. The connection between the devices is run over Bluetooth, but, it should be noted, SAP is not a standard Bluetooth profile.
On the second day of TDC, a session delivered by Samsung's Piotr Karny and Konrad Lipner explored the basic SAP functionality and the general landscape of Gear 2 app development. In essence, Samsung has defined three distinct classes of wearable app: Linked, Integrated, and Standalone. The Linked and Integrated classes are the two variations of tethered app, designed to work in concert with a smartphone.
A Linked app, they explained, is one with a component that runs on the tethered smartphone and a component that runs on the smartwatch, but in which the watch side of the app merely relays information from the phone side. A simple example would be a missed-call or SMS log: the calls and messages go to the phone, but the information is relayed to the watch for convenience's sake—without the watch, the phone continues to function. In an Integrated app, by contrast, the phone and watch sides are both required. Examples include most of the fitness-monitoring apps, which leverage the motion and heart-rate sensors in the watch; without them, the phone app does not function.
But Samsung is also supporting Standalone apps, which run only on the watch. In fact, there are several standalone apps available from Samsung itself: a calendar, calculator, even a music player. Naturally, the limited processing power of a smartwatch restricts what can be expected of a standalone app. Karny and Lipner noted that several of the Samsung-specific APIs found in the Gear 2 were created to fill in for pieces missing from the relevant W3C standards. For example, the Gear 2's camera supports autofocus, which can be accessed through the autoFocus() method of the CameraControl object.
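As a sketch of how such a gap-filling extension might be used in app code: the autoFocus() method on CameraControl is the one named above, but how an app obtains the CameraControl object, the takePicture() capture call, and the callback shape are all assumptions made for illustration, not verified Gear 2 SDK details.

```javascript
// Sketch only: autoFocus() on CameraControl is the vendor extension
// mentioned in the text; takePicture() and the onDone callback are
// hypothetical names used for illustration.
function focusThenCapture(cameraControl, onDone) {
  var focused = false;
  try {
    focused = cameraControl.autoFocus(); // vendor extension, per the article
  } catch (e) {
    focused = false; // runtime without the extension
  }
  if (focused) {
    cameraControl.takePicture();
  }
  onDone(focused);
}
```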
SAP, they explained, is designed specifically for data connections between tethered apps. It operates over Bluetooth 4.0 Low Energy, and provides a fixed set of services: pop-up style notifications, alarms, calendar events, file transfer, music playback, and "context" management (which covers the device's motion and position sensors).
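In code, the consumer (watch) side of a SAP connection follows a find-then-connect pattern. The sketch below is modeled on the webapis.sa interface as commonly described for the Gear web SDK; every name in it (requestSAAgent, findPeerAgents, and so on) should be treated as an assumption rather than verified API:

```javascript
// Hedged sketch of a SAP consumer: request an agent, find the peer
// agent on the tethered phone, then ask for a service connection.
// All names here are assumptions about the Samsung Accessory SDK.
function connectToPhone(sa, onSocket, onError) {
  sa.requestSAAgent(function (agents) {
    var agent = agents[0];
    agent.setPeerAgentFindListener({
      onpeeragentfound: function (peer) {
        agent.setServiceConnectionListener({
          onconnect: onSocket,  // yields a socket for send/receive
          onerror: onError
        });
        agent.requestServiceConnection(peer);
      },
      onerror: onError
    });
    agent.findPeerAgents();     // asynchronous peer discovery
  });
}
```

The design is entirely callback-driven: discovery, connection, and data delivery all arrive through listeners, which fits the event-loop model of an HTML5 runtime.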
The Gear 2 also introduces its own user interface framework for apps, named the Tizen Advanced UI (or TAU) framework. The speakers explained that it was designed from scratch to fit the specific needs of smartwatch apps, most notably fast start-up time and usability on the significantly smaller screen size of a watch. On a desktop system, they said, users will forgive a three-second startup time, but on a watch, that same amount of time becomes unbearable. TAU attempts to optimize startup performance by pre-building as many HTML widgets as possible when the app is built in the IDE; the result is a larger HTML5 app package (and one filled with auto-generated id attributes and <div> elements), but it starts up much faster than a smartphone app that builds the UI at launch time. They also commented that although TAU was designed initially for small, low-resolution displays on smartwatches, it will be usable on other device profiles, too, including the company's forthcoming line of Tizen-powered TVs.
The Gear 2 is, ultimately, a Samsung product, and one should be careful not to generalize too much about Tizen's wearable device profile from it. For instance, other device makers may opt to skip SAP entirely and offer a more general-purpose Bluetooth API. Nevertheless, the Gear 2 is a real-world product already in the hands (or, if one prefers, on the wrists) of consumers all over the world. As such, it is interesting to note how the three classes of app break down.
Is the fact that most fitness apps require tethering to a smartphone, for example, simply an artifact of the app vendors' desire to have a presence on every device a person owns? Samsung offers a standalone music-player app for the Gear 2, and recording pedometer and heart-rate data is certainly less intensive than audio playback. Once the smartwatch app marketplace breaks open in a big way, vendors that insist on requiring both devices to be present and running may find themselves at a competitive disadvantage. It is also interesting to note what ideas the smartwatch app developers have come up with so far. Most duplicate functionality already found in smartphone apps (which is not surprising), offering the slight increase in convenience of finding the information on one's wristwatch rather than in one's pocket.
On the other hand, there is already a wealth of fitness-related apps that leverage the built-in sensors of a wearable device. If smartwatch prices fall to reasonable levels, that would seem to put the squeeze on vendors who sell standalone fitness trackers like the Fitbit, most of which already require tethering to a phone or connecting to a computer with USB. Fundamentally, of course, the smartwatch is just another generic computing device: here at the beginning stages of its popularity there is a lot of differentiation to be found versus smartphones and other platforms, but surely one day there will be terminal emulators and web servers available for installation as well. Tizen for Wearable, at least, is a nice on-ramp to development for that platform.
[The author would like to thank the Tizen Association for travel assistance to attend TDC 2014.]
How about just signing keys with people you would actually say you know well enough to trust? It's not the Web of Amateur ID Checking.
Version 3.0 of the GNU Nettle cryptographic library has been released. This is a major update, incorporating several interface changes that break ABI compatibility with older releases. New interfaces are provided for DSA, AES, and Camellia, providing access to several new parameters and structures. All users are encouraged to study the new release carefully, as "there may be some problems in the new interfaces and new features which really need incompatible fixes. It is likely that there will be an update in the form of a 3.1 release in the not too distant future, with small but incompatible changes, and if that happens, bugfix-only releases 3.0.x are unlikely."
Pump.io and StatusNet creator Evan Prodromou writes about the future of pump.io development. Since Prodromou is no longer working full-time on pump.io, the pace of development has slowed, but he has recently been working on reducing the administrative overhead needed for the public pump.io servers, which will ultimately leave more time for coding. "Over the next week, I'm going to take the current state of pump.io and release it as version 0.3 and start the 0.4 development," he says. "In the same time, I'm going to deal with the long list of pull requests and open issues with pump.io. The PRs will either get a reply, get pulled to 0.4, or closed." A 0.4 release could arrive as soon as September.
Newsletters and articles
Libre Graphics World has an interview with Alexandre Gauthier (the developer behind the open-source video compositor Natron) as well as an overview of the most recent release. Gauthier addresses the at times controversial decision to build an interface similar to that of proprietary applications that also support the OpenFX plugin standard: "when you implement an application which will be used by professionals who potentially have a lot of background in the usage of such software, you want to make sure you don't break all their habits, otherwise they won't bother. When you have an entire keyboard layout in mind and you need to switch to another, this is a lot of pain. When you have to spend afternoons just to find how to configure the same plug-in but on another application this can be very frustrating." Among other topics, the interview also delves into the complex history behind Natron and other OpenFX applications.

A separate post describes a redesign of the GNOME 3 notification mechanisms. It includes a new Message Tray design as well as reworking the lock-screen notifications and the notification banners themselves. "The final goal is one that was at the core of the original design, and which is central to the design of GNOME 3 as a whole: that is, to be noticable and useful without being distracting. Wherever possible with GNOME 3, we have tried to produce a distraction-free experience which helps you concentrate on the task in hand. This requires a fine balancing act, which can be tricky to get right. With the new designs, we want to change that balance slightly, by making notifications a bit more noticable and by providing more effective reminders, but we still want to retain the emphasis on avoiding distraction."
Page editor: Nathan Willis
Brief items

The GNOME Foundation is governed by a seven-member board of directors who are elected annually. The just-completed vote had eleven people vying for those seats. Unless there is a challenge to the voting process, the new board members are: Sriram Ramkrishna, Ekaterina Gerasimova, Karen Sandler, Andrea Veri, Jeff Fortin, Tobias Mueller, and Marina Zhurakhinskaya. We looked at the question of corporate involvement in GNOME as one of the election issues being discussed in last week's edition.

"We've long been supporters of Tor, and we're pumped to join our allies in promoting it. As we write this, there have already been 370 new relays set up in the past two days. [Let's help double that!]"

Krita is an open source digital painting application. The project has announced a Kickstarter campaign to fund development for the next version, 2.9. The base goal of the campaign is to fund one developer to work full time on the 2.9 release; there is a stretch goal to fund an additional full-time developer.

A fundraiser to support the Randa Meetings is underway. "Participants donate their time to help improve the software you love and this is why we need money to cover hard expenses like accommodation and travel to get the volunteer contributors to Randa. If you are not attending, you can still support the Randa Meetings by making a donation. As in the past, the Randa Meetings will benefit everyone who uses KDE software."
Articles of interest

A recent essay tells free software projects that they need not worry about contributor license agreements. "Thus, I encourage those considering a CLA to look past the 'nice assurances we'd like to have — all things being equal' and focus on the 'what legal assurances our FLOSS project actually needs to assure it thrives'. I've spent years doing that analysis; I've concluded quite simply: in this regard, all a project and its legal home actually need is a clear statement and/or assent from the contributor that they offer the contribution under the project's known FLOSS license."
Calls for Presentations

Linux.conf.au 2015 will be held January 12-16 in Auckland, New Zealand. The call for papers has just gone out; submissions will be accepted through July 13. Submissions are not limited to traditional talks: you could propose a performance, art installation, debate, or anything else.
|CFP deadline||Event date||Event||Location|
|June 20||August 18||Linux Security Summit 2014||Chicago, IL, USA|
|June 30||November 18||Open Source Monitoring Conference||Nuremberg, Germany|
|July 1||September 5||BalCCon 2k14||Novi Sad, Serbia|
|July 4||October 31||Free Society Conference and Nordic Summit||Gothenburg, Sweden|
|July 5||November 7||Jesień Linuksowa||Szczyrk, Poland|
|July 7||August 23||Debian Conference 2014||Portland, OR, USA|
|July 11||October 13||CloudOpen Europe||Düsseldorf, Germany|
|July 11||October 13||Embedded Linux Conference Europe||Düsseldorf, Germany|
|July 11||October 13||LinuxCon Europe||Düsseldorf, Germany|
|July 11||October 15||Linux Plumbers Conference||Düsseldorf, Germany|
|July 14||August 15||GNU Hackers' Meeting 2014||Munich, Germany|
|July 15||October 24||Firebird Conference 2014||Prague, Czech Republic|
|July 20||January 12||linux.conf.au 2015||Auckland, New Zealand|
|July 21||October 21||PostgreSQL Conference Europe 2014||Madrid, Spain|
|July 24||October 6||Qt Developer Days 2014 Europe||Berlin, Germany|
|July 24||October 24||Ohio LinuxFest 2014||Columbus, Ohio, USA|
|July 25||September 22||Lustre Administrators and Developers workshop||Reims, France|
|July 27||October 14||KVM Forum 2014||Düsseldorf, Germany|
|July 27||October 24||Seattle GNU/Linux Conference||Seattle, WA, USA|
|July 30||October 16||GStreamer Conference||Düsseldorf, Germany|
|July 31||October 23||Free Software and Open Source Symposium||Toronto, Canada|
|August 1||August 4||CentOS Dojo Cologne, Germany||Cologne, Germany|
If the CFP deadline for your event does not appear here, please tell us about it.
|Ubuntu Online Summit 06-2014||online, online|
|State of the Map EU 2014||Karlsruhe, Germany|
|Texas Linux Fest 2014||Austin, TX, USA|
|2014 USENIX Federated Conferences Week||Philadelphia, PA, USA|
|USENIX Annual Technical Conference||Philadelphia, PA, USA|
|SouthEast LinuxFest||Charlotte, NC, USA|
|AdaCamp Portland||Portland, OR, USA|
|YAPC North America||Orlando, FL, USA|
|LF Enterprise End User Summit||New York, NY, USA|
|Open Source Bridge||Portland, OR, USA|
|Automotive Linux Summit||Tokyo, Japan|
|Libre Software Meeting||Montpellier, France|
|Tails HackFest 2014||Paris, France|
|SciPy 2014||Austin, Texas, USA|
|July 8||CHAR(14)||near Milton Keynes, UK|
|July 9||PGDay UK||near Milton Keynes, UK|
|2014 Ottawa Linux Symposium||Ottawa, Canada|
|GNU Tools Cauldron 2014||Cambridge, England, UK|
|Conference for Open Source Coders, Users and Promoters||Taipei, Taiwan|
|OSCON 2014||Portland, OR, USA|
|EuroPython 2014||Berlin, Germany|
|Gnome Users and Developers Annual Conference||Strasbourg, France|
|PyCon Australia||Brisbane, Australia|
|August 4||CentOS Dojo Cologne, Germany||Cologne, Germany|
|Flock||Prague, Czech Republic|
|August 9||Fosscon 2014||Philadelphia, PA, USA|
If your event does not appear here, please tell us about it.
Page editor: Rebecca Sobol
Copyright © 2014, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds