
Leading items

Accessibility and the graphics stack

By Jake Edge
October 22, 2014

X.Org Developers Conference

At the 2014 X.Org Developers Conference, Samuel Thibault led off three days of presentations with a look at accessibility and how it relates to the X and Wayland graphics stacks. Accessibility is the blanket term for making computers more usable by people with specific needs, not necessarily handicaps. Those needs range from vision problems to loss of motor control, which might be temporary (e.g. a broken arm) or more permanent (e.g. blindness, deafness, or Parkinson's disease).

[Samuel Thibault]

Introduction

Thibault started the talk with a slide of gnuplot output and asked what accessibility problems it illustrated (slides [PDF]). Since the two functions plotted used red and green lines, the problem is for those with color blindness. He asked those who could not see one or more of the colors to raise their hands; three of the roughly fifty people in the room did. Roughly 8% of males and 0.5% of females are affected by color blindness.

Accessibility means helping people who have special needs when interacting with a computer. Blindness is an obvious candidate, but there are also people who have low vision or color blindness. Deafness is not really an issue for X.Org itself, though it clearly requires changes elsewhere in the stack to assist those affected by it. There are people who can only use one hand ("try to press ctrl-alt-backspace with one hand", he said) or who have motor-control issues. Elderly people often have several different needs all at once.

But, he asked, why make the graphical user interface (GUI) accessible when text mode is so much easier to handle? There are many things that are not available in text mode, however; JavaScript support is one example. There are also business applications that are GUI-oriented. Beyond that, non-technical people need to be able to get help from others, but that is much harder if the two are not running the same applications.

One approach might be to make dedicated software for those with special needs, but that is generally a bad idea, Thibault said. Typically, those types of applications are oriented toward a single disability. His example was edbrowse, which is a combination editor and browser targeted at blind users. That means it is not helpful for those with other disabilities.

Developers would need to work on both the browser (e.g. JavaScript, Flash, and CSS) and on office suite features such as compatibility with Microsoft Office and OpenOffice. With limited developer resources, that becomes an unneeded duplication of effort. Also, when both disabled and non-disabled people are working together, it would be best to be using the same software.

So, one of the design principles for the accessibility work is that it uses the same software, but makes it accessible. That makes it easier to work together and to get help from others. Another principle is that it should support "synchronized work" for two people working together; it should just alternate input and output mechanisms as needed to support both users. Finally, accessible software should be pervasive; it should not require a special software installation or configuration, it should just be available to be enabled on "all platforms all the time".

The status of accessibility support in free software is a bit of a mixed bag. Text mode is generally accessible, he said, but is not good for beginners. GNOME is "quite accessible", though GNOME 3 was a restart from scratch on accessibility. But free software is late compared to Windows, which has a many-year lead on accessibility. Compared to Apple, though, free software support is "Stone Age". The accessibility in Apple's products is easily available and well integrated.

Pressing "a"

Next up was a look at input and output in X, with an eye toward where accessibility fits in. When you press an "a" on a keyboard, it doesn't matter what is printed on that key, it is the first key on the third row and gets converted to a scancode (0x1e) that represents that location. The keyboard driver in the kernel turns that into an internal kernel value (KEY_A, which is 30 or 0x1e) and passes it to the input driver and then to the evdev driver. The evdev driver creates an event that it hands to the input-evdev driver in the X server, which adds 8 (for reasons that were not explained in the talk) before pushing it out over the wire to the X client as a KeyPress.
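The numeric hand-offs in that pipeline can be sketched in a few lines. This is an illustration of the values described in the talk, not actual driver or X server code:

```python
# Illustrative sketch: follow an "a" key press through the stack's
# numeric identifiers, as described in the talk.

SCANCODE_A = 0x1e          # hardware scancode: first key, third row
KEY_A = 30                 # kernel input keycode (happens to equal 0x1e)
X_KEYCODE_OFFSET = 8       # X's input-evdev driver adds 8 to kernel codes

def kernel_to_x_keycode(kernel_code):
    """Convert a kernel input keycode to the keycode carried by an
    X KeyPress event on the wire."""
    return kernel_code + X_KEYCODE_OFFSET

x_keycode = kernel_to_x_keycode(KEY_A)
print(x_keycode)  # 38 -- still a keyboard position, not yet an "a"
```

Only after the client's toolkit hands this keycode to Xkb does it become the KeySym for "a".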

It is important to recognize that it is still just a location on the keyboard (first key of third row) at that point. The client toolkit will get the event, recognize that it is a KeyPress, and hand it off to Xkb, which finally translates it into a KeySym: an "a". At that point, the KeySym is sent to the widget it was directed at, which may decide to do something with it.

Perhaps the widget wants to append the "a" to its text. These days, clients will typically use a library like Pango to render the text. That generates a PixMap, which is pushed to the driver and the video card to display. It is important to notice that the PixMap is generated early, in the client. But, there may not be a screen to direct the PixMap to. Blind people do not have screens and do not want them. There need to be alternatives on the output side so that blind people do not have to buy screens they will not use.

Input

Certain people can only use certain kinds of input devices. There are those who can only use keyboards; they need to use keyboard shortcuts and to move the cursor using it. Others can only use a joystick (as a mouse, essentially) or only a mouse—to type they have to use some kind of virtual keyboard. There are many more kinds of input devices, because "you never know what people may need".

If you can only use one hand and it is a permanent condition, you may want a faster way to input text. Using a standard keyboard requires a lot of hand motion. One way to deal with that is to have a key that "mirrors" the keyboard, effectively switching between the keys on each side without moving the hand. Thibault initially said that it might be difficult to do that in X, but an audience member said that Xkb can already do those kind of remappings.

AccessX adds a number of features to Xkb and X clients that can be used for accessibility. StickyKeys allows someone to press modifier keys (e.g. ctrl, alt, shift) multiple times to make them "stick" so they can input things like "ctrl-alt-backspace". MouseKeys turns the keyboard into a mouse. For those who have trouble moving their hands quickly or often initially miss the key they wanted, SlowKeys will wait for a bit before deciding a key has been pressed. BounceKeys is useful for those with Parkinson's or other motor control problems; it will see only one key press even if there are several of them in a short period of time. There are others as well.
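As a rough illustration of the BounceKeys idea described above, the filter can be thought of as dropping repeated presses of the same key that arrive within a debounce window. The event format and window length here are invented for the example:

```python
# Hedged sketch of the BounceKeys behavior: deliver a key press only if
# the same key was not already delivered within the debounce window.

BOUNCE_WINDOW = 0.5  # seconds; an arbitrary illustrative value

def filter_bounces(events, window=BOUNCE_WINDOW):
    """events: list of (timestamp, key) tuples, sorted by timestamp.
    Returns the events a BounceKeys-style filter would deliver."""
    delivered = []
    last_time = {}  # key -> timestamp of the last delivered press
    for t, key in events:
        if key in last_time and t - last_time[key] < window:
            continue  # a bounce: same key, too soon -- drop it
        last_time[key] = t
        delivered.append((t, key))
    return delivered

presses = [(0.00, "a"), (0.10, "a"), (0.80, "a"), (0.85, "b")]
print(filter_bounces(presses))  # [(0.0, 'a'), (0.8, 'a'), (0.85, 'b')]
```

SlowKeys is the complementary filter: instead of dropping rapid repeats, it delays accepting a press until the key has been held for long enough.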

A virtual keyboard is available with xvkbd. It injects physical positions as KeyPress events on the wire, directly to the client. Some braille devices have regular PC keyboards, so brltty just hands kernel keycodes (e.g. KEY_A) to the kernel input driver.

Actual braille keyboards are a harder problem. These have eight keys that correspond to the eight dots in braille. That gives 256 possibilities, but the mapping between characters and their braille representations is not standard. "a" to "z" are standard, but the rest depend on both language and country: Belgian French, Canadian French, and French in France do not share the same symbol mappings. Because a key press is not really a physical position on a standard keyboard, brltty produces a KeySym rather than a KeyPress. Xkb is then used to back-translate the "a" into a keycode that can be sent into the client as a KeyPress.
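The eight-dot, 256-combination arithmetic maps directly onto Unicode's braille-pattern block: pattern characters start at U+2800, with dot N setting bit N-1. A small sketch:

```python
# The eight braille dots map naturally onto eight bits, giving the 256
# combinations mentioned above. Unicode encodes exactly this layout in
# its braille-patterns block starting at U+2800.

def dots_to_char(dots):
    """dots: iterable of dot numbers 1-8; returns the Unicode braille cell."""
    mask = 0
    for d in dots:
        mask |= 1 << (d - 1)
    return chr(0x2800 + mask)

# Dot 1 alone is "a" in the standard Latin braille alphabet:
print(dots_to_char([1]))     # ⠁ (U+2801)
print(dots_to_char([1, 2]))  # ⠃ (U+2803) -- "b": dots 1 and 2
```

Which character a given cell *means*, of course, is exactly the language- and country-dependent part that is not standardized.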

There are a number of other issues with braille, including users wanting to use certain keys on a regular keyboard to represent the dots from a braille keyboard, composing certain characters (e.g. "ô"), and handling braille abbreviations (which sometimes map to regular characters, with rules to disambiguate). Handling all of that is complex, but some of it will be moving to use ibus soon. Support for ibus was just completed in the most recent Google Summer of Code cycle.

He then asked the audience: what about Wayland? He wondered if it was passing keyboard codes or KeySyms. Ideally, he said, it would handle both which would help remove some of the existing complexity. Someone replied that Wayland was using keyboard codes currently, but it might be possible to change that.

Output

On the output side, there are already a fair number of features that are available. People with low vision can tweak the DPI setting which will result in larger icons and fonts. XRandR can be used for panning and to do some basic zooming; there is also support for gamma tuning and color inversion for assisting those with color blindness. In addition, GTK 3 has a method to do perfect zooming by rendering into a much larger PixMap.

All of that doesn't really help blind people, however. In order to provide information for them, the application must help. In the past, snooping the X protocol was used to help determine what text was being displayed and its context, but that no longer works as PixMaps are sent over the wire. So applications need to have an abstract representation of the information they are displaying that can be passed to a screen reader over a bus. Then the screen reader can "render" the information in a sensible fashion.

So the widget that receives the "a" and is displaying it will also send a signal via the Accessibility Toolkit (ATK) to the screen reader. But that information also needs some context, including the hierarchy of widgets, windows, and so on. Similar to CSS, the application needs to describe its windows, menus, labels, buttons, etc. in a way that the screen reader can query. It can then do the right thing for the user when it puts it on a braille device, speaks the data aloud, or outputs it in some other way.
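As a hedged illustration of that abstract representation (this is not the actual ATK/AT-SPI API; the classes here are invented to show the idea), an application exposes a hierarchy of roles and names that a screen reader can walk instead of pixels:

```python
# Invented sketch of the widget hierarchy an application would expose
# to assistive technology, and how a screen reader might "render" it.

class Widget:
    def __init__(self, role, name, children=()):
        self.role = role          # "window", "menu", "button", "label", ...
        self.name = name          # the text an assistive tool would present
        self.children = list(children)

def render_for_reader(widget, depth=0):
    """Flatten the hierarchy into lines a screen reader could speak
    or send to a braille device."""
    lines = ["  " * depth + f"{widget.role}: {widget.name}"]
    for child in widget.children:
        lines.extend(render_for_reader(child, depth + 1))
    return lines

app = Widget("window", "Editor", [
    Widget("menu", "File"),
    Widget("label", "Untitled document"),
    Widget("button", "Save"),
])
print("\n".join(render_for_reader(app)))
```

The point is that the screen reader gets semantics (a button named "Save" inside a window named "Editor"), not a PixMap.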

Making applications accessible from the start is fairly easy, Thibault said. It is a matter of designing without the GUI in mind. If you use the standard widgets, they have already been made accessible. In addition, images should always have some alternate text that the screen reader can use.

There is a tool for testing an application for accessibility called Accerciser. It will show the application and the tree of widgets, so that you can see what a screen reader would get.

Thibault concluded by saying that accessibility has diverse requirements that cause it to be plugged in at various levels. When moving to Wayland, there should be no regressions in terms of accessibility, which means that developers need to be paying attention to that now. In the end, accessibility needs the semantics from a desktop application, not just the rendering, so it is important to separate form from content.

For those interested, there is a video [WebM] and, interestingly, a transcript of Thibault's talk.

[ I would like to thank the X.Org Foundation for travel assistance to Bordeaux for XDC. ]


Tizen developers contemplate the Internet of Things

By Nathan Willis
October 22, 2014

Tizen Developer Summit

At the Tizen Developer Summit in Shanghai, several presentations dealt with hardware makers' desire to develop "smart appliance"-style devices in the vein of the Internet of Things (IoT), but there was not a single vision from the project on how exactly Tizen would best fit into an IoT product. One team advocated developing an IoT-specific protocol stack that could be attached to existing Tizen device profiles, while another proposed stripping additional layers off of the usual Tizen profiles, leaving a minimalist system behind. Reconciling the two approaches may not be a simple task.

OIC

On the first day of the event, Intel's Martin Xu presented an IoT strategy that was centered around the Open Interconnect Consortium (OIC), a newly announced industry group founded to develop networking protocols for IoT products. Xu began by outlining the challenges to making IoT's oft-cited promise of "connected everything" a reality. "Everything," naturally, encompasses a wide range of devices that vary in computing power and in networking capabilities.

[Martin Xu at TDS 2014]

But therein, evidently, lies the difficulty. The tiniest IoT devices tend to be equipped with Bluetooth Low Energy (BLE) radios, while even slightly more powerful devices step up to different network technologies altogether, such as WiFi, and only the fullest-featured, most-PC-like devices end up with the hardware and software to speak both BLE and WiFi. Similarly, the small, simple devices typically run embedded Linux distributions that are stripped down to the bare minimum, sometimes essentially booting to a single application. Thus, the more feature-rich devices designed to talk to these simple devices (for example, a smartphone that needs to connect to a house thermostat or lighting controller) end up with a separate application stack dedicated to each IoT device they interact with. All too often, the protocols used by the IoT devices are either not supported in smartphone hardware (like ZigBee), or the manufacturer uses an open standard like BLE to wrap around binary, proprietary data packets.

The OIC, Xu said, was started to address these problems by providing an open-source software stack that is transport- and OS-independent. The group's founding members include Intel, Samsung, MediaTek, Broadcom, and Wind River. The OIC reference stack would be available for Tizen, he said, but would also be usable in other Linux distributions and other operating systems.

OIC's architecture (which was shown in a slide) requires multiple layers. At the top level is a resource-oriented API; applications would interact with IoT devices by requesting resources and sending resource-modification commands. The layers beneath provide an abstraction of what networking protocols are actually used as the transport between any given pair of devices. Xu also showed a two-by-two grid depicting various combinations of "client" and "server" packages that any one OIC-capable device would need to implement. A smartphone would be client-only, since it does not offer any IoT resources, but merely connects to the various "things." A traditional embedded device, though, would be server-only, since it does not offer a direct user interface, while other devices might both offer a service and a user-facing interface.
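Since no concrete API was shown, the following is a purely hypothetical sketch of what a resource-oriented interface of the kind Xu described might look like; every name in it is invented for illustration:

```python
# Invented sketch of a resource-oriented IoT interface: a "server"
# device exposes named resources, and a "client" (e.g. a phone app)
# manipulates it only through resource reads and writes, never through
# a device-specific protocol. Not based on any published OIC API.

class Thermostat:
    """A server-only device exposing one resource."""
    def __init__(self):
        self._resources = {"/temperature": {"target": 20.0, "unit": "C"}}

    def get(self, path):
        return dict(self._resources[path])

    def put(self, path, **changes):
        self._resources[path].update(changes)
        return self.get(path)

hub = Thermostat()
hub.put("/temperature", target=22.5)
print(hub.get("/temperature"))  # {'target': 22.5, 'unit': 'C'}
```

In this model the transport underneath (BLE, WiFi, ZigBee) would be hidden by the lower layers of the stack.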

Unfortunately, what the OIC presentation lacked was any specific detail about what the resource-oriented API would actually be—a fact that several audience members seemed to pick up on and subsequently raised questions about during the Q&A period at the end of the session. One audience member commented that what was described in the diagrams amounted to a good approach to API design, but was not an API itself. Two others asked how what OIC intended to do differed from existing IoT efforts like AllSeen (from Qualcomm and others) or Google's Android APIs (which, while not IoT-specific, are nevertheless used by quite a few smart-device products already on the market).

Answers to these questions were not forthcoming. Xu declined to comment on how OIC compared to AllSeen and Android on the grounds that he did not wish to criticize the competition, but said only that OIC's stack, when released, would be more complete than the alternatives. Another audience member asked when OIC's reference implementation would be available, and was told in reply that development was very active, hopefully with something to be released by the end of the year.

Based on the questions asked, more than a few people in the audience found the lack of detail frustrating, especially given that Intel and Samsung, the two main driving forces behind Tizen, were also said to be key players in OIC. After all, the challenges to turning IoT from a hodgepodge of individual devices that do not speak a common protocol into a set of well-connected nodes that interoperate smoothly are well-known; the existence of other solutions like AllSeen is evidence of that.

Tizen Micro

[Bingwei Liu at TDS 2014]

A far more concrete approach to building software for IoT devices was presented in a pair of talks about Tizen Micro, a stripped-down profile designed for "headless" products. Intel's Bingwei Liu presented an overview of Tizen Micro in a session on the second day of the event, and was followed by a joint talk from Biao Lu and Austin Zhang on how Tizen Micro is built and deployed using Yocto.

Liu's session also started with a description of the challenges of making a reusable IoT software platform, although he emphasized that, even within a given product class, the specifics of a device can make for drastically different system specifications. An "IP camera," for example, could describe anything from a simple room monitor that streams video back to a single source all the way up to a truly smart system that does object and facial recognition and responds to gesture input.

Despite the common perception that IoT devices were machines of modest resources, Liu argued that they still required many of the same system components found in more feature-rich consumer electronics devices. For instance, although IoT devices are generally "headless" in the sense that they do not have an attached display, many of them still require a multimedia processing stack—either because they come with cameras (as in the IP camera examples), or provide microphones and speakers for voice interaction. As a result, while they can dispense with a compositor, they may still need the majority of the GStreamer components used by a device with a graphical user interface.

Similarly, while IoT products are often associated with small storage, memory, and CPU requirements, those measurements can be misleading. The small storage and memory footprints often stem from a desire to reserve as much space as possible for user-installed files and applications, so the total amount of memory and filesystem space to be managed by the OS may not be all that small. And while each running process may be modest, consumers expect most IoT devices to support multi-tasking: no one wants the video stream from their smart-home server to slow to a crawl whenever someone else in the house adjusts the climate controls running on the same device.

[Biao Lu at TDS 2014]

Liu and the others on the Tizen Micro team set out to adapt the base Tizen system for such headless, yet still full-featured, systems. The effort is currently in the proposal stage; even if it is accepted by the Tizen Steering Group, it is not yet clear whether Tizen Micro would be treated as yet another device profile (alongside smart TVs, smartphones, in-vehicle infotainment systems, and wearables) or as something different.

The Tizen Micro "bill of materials" (BOM), as Liu described it, is essentially Tizen Common (Tizen's base layer) with certain packages removed. It includes the kernel, BusyBox, the dropbear SSH server, SQLite, nginx, node.js, Python, and various networking modules (including BLE). Interestingly enough, Liu commented that it may also include OIC as soon as there is an OIC release to include.

Lu and Zhang's presentation focused on how the team actually builds Tizen Micro test images. Lu, who led most of the session, started off by saying that developing software for small embedded systems like IoT products is notoriously difficult, but it is difficult for known reasons that can be worked around. Setting up the development environment is complicated (and small errors on the development system can cause large problems on the final device), developers need to possess an understanding of the entire software stack (from kernel drivers right up through middleware and applications), and the stability demanded of embedded systems means rigorous verification and testing are required.

Tizen Micro can alleviate some of these problems by building on the results of other Tizen profiles, he said: the middleware and development environment are well-tested and reliable, which is often not the case when building an embedded Linux stack from scratch. To handle the other challenges, the Tizen Micro team has focused on using Yocto as the build system, rather than git-build-system (GBS), which is used by the existing Tizen profiles.

The team has developed a Yocto meta-layer (called "meta-micro-common") that it tests on two Intel platforms: the Galileo and MinnowBoard MAX single-board computers. He explained the build process, which is more or less a standard Yocto build. That said, the team has so far only worked with one hypothetical IoT device in mind: a "smart hub" that serves as a router, file server, and a connection point to home-automation devices. Users interested in building Tizen Micro for other classes of device would need to modify the BitBake configuration files, but that, too, is a standard Yocto development chore.

[Austin Zhang at TDS 2014]

Zhang responded to most of the audience's questions. Attendees wanted to know what the roadmap was for Tizen Micro, to which Zhang replied that so far, all of the work has taken place within Intel—the next step will be to reach out to interested parties at other Tizen partners (most notably Samsung) and test the tools on ARM systems. Only after that would Tizen Micro be a candidate to become an official Tizen project.

Zhang told another audience member that the team has not yet received permission to release its work to the public, but that he hopes to do so soon. The team would also like to add support for GBS and the rest of the standard Tizen build tools, but Yocto made it easier to build a minimal system image, which was the goal. He also chuckled when asked if the team had tested Tizen Micro on Intel's diminutive Edison boards: apparently they are in such high demand that not even everyone at Intel has gotten the opportunity to use one yet.

Less is more

Competing approaches to IoT within one project or company may not sound like an encouraging sign, but in reality it is probably par for the course. IoT is a popular buzzword but, like most tech industry buzzwords (think "cloud computing"), a lot of the buzzing stems from the fact that few people agree on precisely what the term means. Liu highlighted the wide range of embedded platforms that could qualify as an IoT product, and that sentiment was echoed by many of the speakers and audience members. No one is quite sure where the cut-off between IoT and traditional embedded Linux systems lies, so there are naturally a lot of different ideas about what the ideal IoT operating system should contain.

When it comes to Tizen specifically, the Tizen Micro idea certainly seems more fully realized than OIC. But it also may be too far removed from the standard Tizen use case to fit neatly into the overall project. If there are too many levels of "base Tizen" to choose from, eventually some of them may not really be Tizen anymore. Tizen Micro strips out a lot; it remains to be seen whether having the system developed within the auspices of Tizen is a sufficient selling point over a run-of-the-mill Yocto build.

[The author would like to thank the Tizen Association for travel assistance to attend TDS 2014.]


Metrics for free-software communities

October 22, 2014

This article was contributed by Deb Nicholson

In September, I attended the first Free and Open Source Software Expo and Technology Conference or Fossetcon in Orlando, Florida, and went to a talk by James Falkner, Liferay's Orlando-based Community Manager, about making better use of metrics in free-software projects. Measuring the size and activity level of a project's community (usually in code contributions and participation in communication channels) is a popular task within free software, but doing it right isn't always simple.

The session was entitled "Metrics are fun, but which ones really matter?". In the talk summary, Falkner promised to help us learn to distinguish between "vanity metrics" and actual, useful metrics. He also finished up with a bit of advice on what to do with those metrics once you've got them.

I do some work with OpenHatch, an organization dedicated to welcoming new people to free software. The metrics that OpenHatch tracks are very user-focused, which makes them a bit different than what you might track for a traditional free-software project. Still, I figured it couldn't hurt to learn some new ways to use numbers to improve our metrics at OpenHatch and at some of the other projects I work with.

Liferay is a free-software project that provides a suite of tools known as a portal. It includes social office tools, built-in website-management themes, and gadgets—both for internal and external sites. A community edition and a separate (paid) enterprise edition are available. Thus, the Liferay community is a mix, including do-it-yourselfers who are looking to optimize their experience with the community edition and full-time paid staffers from companies that use one or more of the enterprise products. In addition, while many of Liferay's projects are designed to be used by people with no programming experience, there are other users busy tweaking their "portlets" and sharing their changes back with the community.

Knowing what to measure

Falkner first talked about how you can improve your project by understanding your contributors better—but not just by looking at the source code repository. For several years now, Ohloh has been able to show you what's going on with your project in Git. Certainly, lots and lots of work happens in the repository, but many of us know that it might not be the whole story. For instance, we often reward producing lines of code, but it is clear that longer code contributions aren't necessarily a good thing.

Falkner stressed that it is important to carefully figure out what you want to measure, rather than settling for what might seem easy to measure. He illustrated this distinction with a few funny stories where the wrong things were measured, leading to the wrong things being encouraged.

One particularly illustrative example was a historical story of city officials trying to control the rat population. The officials offered their citizens money for rat tails. But they had mistakenly assumed that a rat tail represented a dead rat. It doesn't. So the first problem they created was a lot of tail-less rats.

Furthermore, the city officials had made a second mistake when they incentivized more tails, but not fewer rats. Once they started offering money, many people began to farm rats for their tails. So, measuring the wrong thing led to more rats—now without tails—and to the city officials having a giant pile of rat tails, which is both gross and ineffective.

Falkner offered a few other examples of people either creating more of the problem that the "numbers folks" had been trying to reduce, or managing to create brand new problems—but it is hard to top the visual image of a city inundated with tailless rats.

Two other lessons that I took away were (1) pick a metric that you might be able to do something about, and (2) work hard to make sure you understand any factors that affect the accuracy of your measurements. For example, it does you no good to correlate weather and forum activity, since you can't control the weather. However, figuring out how long people stay in your community is something you should be able to impact.
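As a hedged sketch of that "how long do people stay" metric, one simple approach is to take each contributor's first and last activity dates and compute the median tenure. The data below is invented for illustration; a real analysis would pull these dates from the forum or repository history:

```python
# Illustrative retention metric: median contributor tenure in days,
# computed from invented first-seen/last-seen dates.

from datetime import date
from statistics import median

activity = {
    "alice": (date(2013, 1, 5), date(2014, 9, 1)),
    "bob":   (date(2014, 2, 1), date(2014, 3, 1)),
    "carol": (date(2012, 6, 1), date(2014, 10, 1)),
}

def median_tenure_days(spans):
    """spans: name -> (first_seen, last_seen) dates."""
    return median((last - first).days for first, last in spans.values())

print(median_tenure_days(activity))
```

Unlike the weather, this is a number a community manager can actually try to move, for instance by improving how newcomers are welcomed.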

Metrics and context

Falkner presented lots of graphs from the Liferay community. And, sure enough, the graphs told a story—but knowing the context of the graphs was critical to understanding what actually happened in the community. For example, an audience member asked about a bump in Liferay's contributor activity a few years back. Falkner explained that the bump wasn't due to any exciting community management trick that was later lost to the ether. Instead, it represented a giant influx of Sun employees all signing up for the forums when Sun decided to partner with Liferay.

Initially, lots of Sun employees registered on the Liferay forums, but over time they didn't end up behaving like other new forum members. Without looking at that event in context, though, it looked like lots of new people got interested in Liferay, then ended up just lurking. A lesson there is that potential contributors behave very differently when their employer tells them to sign up "just in case," as opposed to those users who find a forum because they need it. But, without the context, a person examining the data might waste a lot of time wondering what was so magical about 2007.

Software for metrics

The tools that Liferay uses to integrate activity data from its forums, mailing lists, and code repositories all come from a project called Bitergia. Bitergia itself is free software, largely written in Python, plus a few bits in other languages (CSS, JavaScript, R, etc.). The tools are all licensed under either GPLv2 or GPLv3.

In addition to the utilities used by Liferay, Bitergia has also written tools for looking at MediaWiki changes, IRC activity, and various bug-trackers. Pretty much any way you could think of to quantify participation in a free-software project is covered. An active spin-off of Bitergia's suite of tools is also available on GitHub; it is known as Metrics Grimoire.

The Bitergia project grew out of a research group based at University Rey Juan Carlos in Madrid. The group developed the tools to analyze free-software work and communities for themselves first. But, like many things in FLOSS, it turned out that their tool was useful to others. Although the team is based in Spain, they have been traveling recently to promote their work around the world; they visited the US for OSCON and the Community Leadership Summit, and Sweden for the Open Automotive event hosted by the GENIVI Alliance.

Falkner quoted Simon Phipps, longtime FOSS advocate, as saying "you become what you measure"—which means it is pretty important to get metrics right. Done correctly, with full buy-in and solid planning, metrics can help you pilot your community toward the place that you want to go, instead of that place you happened to end up in. I'm looking forward to trying some of these tools out in my projects, and I am curious to see what other projects are able to learn with them.


Page editor: Jonathan Corbet


Copyright © 2014, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds