
Leading items

The 2016 Debian Project Leader election

By Nathan Willis
March 16, 2016

The Debian Project Leader (DPL) is the project's democratically elected leader; each year, the Debian Developers vote, and whichever of the candidates comes out on top is deemed the winner. At least, that is the way it usually works; this year, the process is a bit different, because there is only one candidate. Unless something peculiar happens, then, candidate Mehdi Dogguy will take over as DPL on April 17. It may sound a tad unusual from the outside but, apart from the actual vote, the election process has proceeded as normal, with Dogguy publishing a candidate platform and taking questions from project members on the election mailing list. Some are beginning to worry, however, that the paucity of candidates indicates that serving as DPL has become too demanding, a problem that Debian will need to address in the long term.

Debian project secretary Kurt Roeckx sent out the call for nominations on March 5, six weeks before current DPL Neil McGovern's term ends. As usual, the election was open to all Debian developers, who must nominate themselves as candidates. McGovern declined to run for a second term, and Dogguy was the only candidate who stepped forward prior to the March 12 deadline. The DPL election method can be complicated when there are many candidates, but unless the majority of the voters select "none of the above" (which is a ballot option), Dogguy will almost certainly become the new DPL.

Platforming

According to the election rules, March 13 through April 2 is reserved for the "campaign," during which project members can examine each candidate's platform and ask questions on the debian-vote mailing list.

Dogguy's platform centers on his vision for the project. He began by noting that the project has grown to the point where it complicates collaboration:

Debian has grown so much that it has become a federation of team-sized, smaller projects. As a consequence, we are having a hard time making solutions that scale up to the size of the bigger project. This becomes an even more challenging problem when the number of packages grows more rapidly than we’re able to onboard new contributors.

He proposed a review of processes and tools to identify bottlenecks and points of friction between teams, and said he will "work on collecting and compiling a repository of Debian use cases that can be used by contributors to find their way more easily into the project." In a related point, Dogguy highlighted the recruitment of new contributors as a task he will work on. Debian has successfully participated in third-party internship programs like Outreachy and Google Summer of Code, he said, "but we should also think about sponsoring such programs or make our own." Unlike outside efforts, such a program could emphasize Debian-specific goals:

We lack a program that focuses on (simply) getting more people familiar with the project, its philosophy, its community, its processes and its work-flow. I would like to encourage initiatives like Debian Women Mentoring Program and Mentoring of the Month (MoM) by the DebianMed team and generalize it to not focus purely on packaging tasks. I see this as an opportunity to join efforts and create a more generalized and project-wide mentoring program.

Dogguy also proposed two initiatives that would alter how Debian operates with respect to the outside world. The first is writing and publishing a project roadmap (which Debian does not currently do). Publishing a roadmap would help the various teams within Debian publicize their work, and enable the project as a whole to shine more light on its original work, beyond simply packaging and delivering upstream code. As DPL, he would describe each roadmap item using S.M.A.R.T. criteria (that is, "Specific, Measurable, Assignable, Realistic, and Time-related") and make sure that progress is made.

The second initiative is pushing Debian to innovate and embrace new challenges. As an example, he cited installation media:

While our biggest sponsors in the past were manufacturers and hosters, today cloud actors joined us and usage of virtualized systems became very common. Still, we are only shipping installer images, but not pre-built system images (in various formats) or virtual system images. Aurélien Jarno has been providing QEMU images for quite some time now but I think that such initiatives could be more official and advertized. The status of system images for various Cloud providers is also not so clear and would deserve some attention.

We got used to what we have. We should work on innovating and making sure the way we do Debian is still relevant to the world. We have to make sure that the way we install and deploy Debian is relevant to our users, because they are our priority. We should make sure that our users’ concerns are fulfilled!

Among the other new challenges, he listed improving security, making upgrades "unbreakable", and improving usability.

Debate

So far, there have been no questions about Dogguy's platform on the debian-vote list. This is not too surprising; the platform does not advocate for what one would call radical change, so it might not have generated debate in a year with multiple candidates, either.

But the scarcity of DPL candidates was raised in a question to Dogguy from Paul Wise. Wise noted that the only prior occasion when a candidate ran unopposed was in 2011, when then DPL Stefano Zacchiroli ran for a second term, and asked whether "this situation reflects on the health of the Debian project".

In his reply, Dogguy countered that while single-candidate elections are rare (he himself was surprised no one else volunteered, he said), most DPL elections have a small slate. He also said that he would "not generalize this as a symptom of an unhealthy situation" for Debian as a whole, seeing it instead as a sign that the role of DPL is difficult.

Wise also asked Dogguy if he thought voters should collectively choose the "none of the above" option, in hopes of triggering a new election that would attract more candidates. Dogguy replied that such a tactic would make the situation worse.

IMHO, standing up for a DPL election requires preparation and serious thinking. You don't usually decide within the nomination week, but start preparing it a while before. I am not convinced that waiting for another week will help us to magically find another candidate.

If people didn't want to nominate themselves for DPL, then we should not force them to do so. Having "fake" candidates is not doing the project any favor. No one wants an inactive DPL. No one wants a DPL that is unprepared for the job.

Dogguy also noted that he had run for DPL in 2015 (coming close to winning) and said that "I don't think my candidacy would be more serious if [there] were two candidates." Nevertheless, he concluded, if project members do not want him as DPL, they are free to choose "none of the above."

Non-candidates

The question of whether or not this year's slim ballot indicates a problem within the Debian project drew replies from several others on the mailing list. Daniel Pocock responded that perhaps the public self-nomination process is to blame, and that nominations should be secret.

But Pocock and others also expressed concern that the role of DPL is simply too time-consuming, and that the level of commitment it demands is scaring off potential candidates. Ian Jackson raised the idea of replacing the lone DPL position with a board, although he worried that "decision making would be too slow if everything had to be done by committee."

Martin Krafft was skeptical, asking what sorts of powers such a board would have. Jackson listed budgetary issues and working with Debian's legal advisors, but pointed out that:

The DPL's very broad powers and strong legitimacy mean that they are often called on to give an informal opinion in circumstances where a board member who needed approval of other board members to do anything would have less authority.

The board idea did not gain much traction, but everyone seemed to agree that DPLs would do well to delegate tasks to other project members—which can be difficult to do in practice. Debian, like many free-software projects, is driven by volunteers, and volunteers are notoriously short on time. Dogguy noted in his platform that his employer will permit him to spend a small portion of each week working on DPL-related jobs. Such leeway with employers is not unusual (even just among prior DPLs), but, if anything, these arrangements increase the pressure on the DPL to take on tasks that the volunteer community may be slow to complete.

In the long term, Debian's growth as a project may mean that the DPL role becomes more and more of a time commitment. Whether that will mean redefining it or supplementing it with other leadership roles remains to be seen. At present, however, the lack of DPL candidates has only brought the issue to the forefront as a topic of potential concern. As Dogguy pointed out on the mailing list, there has rarely been a long line of volunteers willing to take on the task.

In spite of the larger concerns raised about the process itself, however, Dogguy seems to be regarded as a good candidate. There are no signs that there is a movement to reset the election process. This means that Debian, barring some unforeseen turn of events, already knows who its next project leader will be—and that new DPL has a solid understanding of the task ahead.


Managing heterogeneous environments with ManageIQ

March 16, 2016

This article was contributed by Geert Jansen

ManageIQ is an open-source project that allows administrators to control and manage today's diverse, heterogeneous environments that have many different cloud and container instances spread out all over the world. It can automatically discover these environments wherever they are running and bring them all under one management roof. Beyond that, it can simplify life for users by allowing them to choose new virtual machines (VMs) and containers and have them immediately "spun up" and available for use.

Discovery

[Discovered topology]

The first step in managing a complex environment is to discover what is actually there. ManageIQ does this by accessing the APIs of the virtualization systems, public clouds, and other management systems that make up the environment. Using the APIs it will download the lists of VMs, hypervisors, containers, networks, and whatever else is relevant to the system in question. All these "things" it discovers are called "managed elements" and are stored and tracked in the Virtual Management Database (VMDB). Currently the VMDB schema consists of over 200 entities and relationships. It defines elements such as "Virtual Machine" and "Hypervisor", ensures that a "Virtual Machine" has a "name" attribute, and that a "Virtual Machine" is related to a "Hypervisor" by a "runs on" relationship. The individual management systems are called "element managers" in ManageIQ parlance, and the pieces of code that connect to the APIs are called "Providers".
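The entity-and-relationship idea behind the VMDB can be sketched in a few lines of Python. This is purely illustrative—the class and attribute names below are invented for the example, not ManageIQ's actual schema—but it shows what it means for a "Virtual Machine" to have a "name" attribute and a "runs on" relationship to a "Hypervisor":

```python
from dataclasses import dataclass, field

# Illustrative sketch only: these names are invented, not ManageIQ's
# actual VMDB schema (which has over 200 entities and relationships).
@dataclass
class Hypervisor:
    name: str
    vms: list = field(default_factory=list)

@dataclass
class VirtualMachine:
    name: str                      # the schema guarantees a "name" attribute
    runs_on: Hypervisor = None     # the "runs on" relationship

    def place_on(self, hv: Hypervisor):
        self.runs_on = hv
        hv.vms.append(self)

hv = Hypervisor("esx-host-01")
vm = VirtualMachine("web-frontend")
vm.place_on(hv)
print(f"{vm.name} runs on {vm.runs_on.name}")
```

In the real VMDB these objects are rows in a relational database, so the relationships can be traversed in both directions, as the toy `vms` list suggests.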

After initial discovery, ManageIQ uses the APIs to listen for events that might indicate a managed element has changed, and uses those to refresh the VMDB. The result is that the ManageIQ VMDB is almost always up to date with respect to what is actually present in the environment, even if changes are made outside of ManageIQ. Most of today's APIs (but unfortunately not all) support these change notifications. In addition to the on-demand refresh, a full refresh is also scheduled every 24 hours.

The discovered inventory is visualized through the ManageIQ web interface, which shows all of the discovered elements and their relationships. For example, when used together with VMware, it will show a list of virtual machines, their attributes, the hypervisor they run on, the connected networks, etc. The inventory can also be visualized in reports, which can be scheduled and emailed, or displayed as dashboards.

One interesting thing about the VMDB is that it allows an abstract approach to management. The advantage of that was seen recently when support for containers was added. It involved extending the VMDB schema with elements including "Container" and "Pod", and creating a provider that connects to the container management system (Kubernetes in this case).

Operational management

After discovery, ManageIQ provides for ongoing operational management. This covers quite a few disciplines, and we'll look at the most important ones below.

[VM management]

ManageIQ provides control actions for the things it manages. For example, VMs and Instances have "power on" and "power off" actions. Not every possible action is covered, but the goal is to expose the most common actions so that the usual day-to-day management can be completely done within ManageIQ.

Change management is another operational management discipline. ManageIQ can show reports of what attributes (e.g. memory, disks, network devices of a VM, or even installed software versions when using SmartState Analysis, which is described below) of an entity have changed, and when. Attributes can be compared across different objects of the same type, for example, to compare the state of a VM against a golden image. It can also compare the configuration of the same object against itself from an earlier time. This is called drift tracking.
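At its core, drift tracking amounts to comparing two attribute snapshots of the same element and reporting what changed. A minimal sketch of the idea (not ManageIQ's implementation; the attribute names are made up):

```python
def drift(old, new):
    """Report attributes that differ between two snapshots of the same element."""
    changed = {}
    for key in sorted(old.keys() | new.keys()):
        if old.get(key) != new.get(key):
            changed[key] = (old.get(key), new.get(key))
    return changed

# Invented attributes: what a VM looked like on two different days.
snap_monday = {"memory_mb": 4096, "disks": 2, "nics": 1}
snap_friday = {"memory_mb": 8192, "disks": 2, "nics": 2}
print(drift(snap_monday, snap_friday))
# {'memory_mb': (4096, 8192), 'nics': (1, 2)}
```

Comparing a VM against a golden image is the same operation with the golden image's attributes standing in for the "old" snapshot.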

A third discipline is capacity management. ManageIQ providers track various utilization metrics such as CPU, memory, and disk. These metrics can be visualized in charts, and aggregated to understand when capacity will run out. Modeling "what if" scenarios is possible as well.
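Projecting when capacity will run out can be as simple as fitting a linear trend to the collected metrics and extrapolating. The sketch below is illustrative only (invented numbers, and simpler than whatever model ManageIQ actually uses):

```python
def days_until_full(samples, capacity):
    """Fit a linear trend to daily usage samples and project when it
    crosses capacity; returns None if usage is flat or shrinking."""
    n = len(samples)
    mean_x = (n - 1) / 2
    mean_y = sum(samples) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(samples))
    den = sum((x - mean_x) ** 2 for x in range(n))
    slope = num / den          # average daily growth
    if slope <= 0:
        return None
    return (capacity - samples[-1]) / slope

# One week of disk usage (GB) against a 500 GB datastore -- invented numbers.
usage = [300, 310, 322, 330, 341, 350, 360]
print(round(days_until_full(usage, 500)))  # 14
```

A "what if" scenario is then just a matter of adjusting the inputs—adding the projected load of ten more VMs to the samples, say—and re-running the projection.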

Financial management is another area where ManageIQ can help operations staff. It can be used to create a cost model for elements that it discovers. For example, a certain cost can be allocated to VM memory and disk. Reports can then be generated to show the total cost of the various groups in the system.
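A cost model of this kind boils down to multiplying allocated resources by per-unit rates and aggregating by group. A hedged sketch, with rates and groups invented for the example:

```python
# Hypothetical chargeback rates: dollars per allocated GB per month.
RATES = {"memory_gb": 5.00, "disk_gb": 0.25}

def monthly_cost(vm):
    """Cost of one VM under the rate card above."""
    return sum(vm[resource] * rate for resource, rate in RATES.items())

vms_by_group = {
    "engineering": [{"memory_gb": 8, "disk_gb": 100},
                    {"memory_gb": 4, "disk_gb": 50}],
    "marketing":   [{"memory_gb": 2, "disk_gb": 20}],
}
for group, vms in vms_by_group.items():
    print(group, sum(monthly_cost(vm) for vm in vms))
# engineering 97.5
# marketing 15.0
```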

Self-service

[Self-service catalog]

Self-service allows an administrator to maintain a catalog of requests that can be ordered by regular users, for example, to provision a single VM or an application stack. Self-service is good for both the administrator and the end user: it saves a lot of time for the administrator while end users get their service going much faster.

Self-service is one of the more powerful use cases of ManageIQ. It starts with an administrator creating a "service bundle," which is a collection of "service items". Each service item is a "thing" that ManageIQ knows how to create, for example, a VM or a container. There is also a generic service item that can call into the ManageIQ workflow engine (more on that below), and can be used to provision arbitrary things by invoking arbitrary actions. The order in which items in a bundle are provisioned is specified by the administrator.

Services typically require some amount of input. For example, if the request is to provision a VM, then a typical question would be the size of the memory and the disk. This information can be requested from the user through a dialog, which can be created using a built-in dialog editor.

Once the service bundle and the dialog are created, they need to be associated with an "entry point" in the ManageIQ workflow engine (called "Automate"). The entry point defines the process to provision the bundle. There is a default entry point to provision bundles, but this entry point can be changed so that custom logic can be invoked. Workflows are Ruby-based and can be edited through a built-in integrated development environment. (Many actions in ManageIQ are actually workflows in Automate; they can be inspected and also modified by the administrator.) Aside from provisioning, the workflow engine is also used to run an approval process before the provisioning takes place.

With the bundle definition, dialog, and entry point, the request can be published in a service catalog, which then enables users to order the service.

Once a service is deployed, the user will see it under the "Services" tab in the web interface. While a service is operational, a user can interact with it. For example, if configured to do so, ManageIQ will allow a user to start and stop VMs comprising the service, or to get a console for them. Custom actions can also be created by adding menu items that can be connected to entry points in Automate. An example of a custom action would be to back up a service, or to run it on more nodes (i.e. scale it out).

The self-service model in ManageIQ also includes a process for termination. The administrator can specify a lifetime for the service. Once the lifetime has expired, the service can (optionally) be decommissioned automatically. Users can be given the privilege to extend the lifetime and to get warnings about upcoming expirations via email.

Compliance

ManageIQ allows administrators to define compliance policies and apply those against elements that are discovered. This is especially useful when users are deploying their own systems through self-service as it gives a certain amount of control back to the administrator.

[Policy engine]

Compliance policies consist of a number of rules, and are enforced by the ManageIQ policy engine, which is called "Control". The policy engine is modeled on the Event-Condition-Action model. If a certain event happens, a condition is evaluated, which, if true, results in an action. For compliance purposes, the event is usually "element discovered" or "element updated", the conditions are the sets of rules to enforce, and the action is "update compliance status", optionally combined with an automated remediation workflow in Automate. Control can also be used to invoke automation based on any type of event. For example, a high load can trigger a scale-out action.
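The Event-Condition-Action model is easy to sketch in code. The following is an illustration of the pattern only—not ManageIQ's Control API—and the event name and OpenSSH rule are invented:

```python
# Illustrative event-condition-action engine; not ManageIQ's Control API.
policies = []

def add_policy(event, condition, action):
    policies.append((event, condition, action))

def fire(event, element):
    """On an event, evaluate each matching condition; if true, run the action."""
    for ev, condition, action in policies:
        if ev == event and condition(element):
            action(element)

# Invented rule: a discovered VM whose SSH daemon is older than 7.0
# is marked non-compliant.
add_policy("vm_discovered",
           condition=lambda vm: vm.get("openssh_version", 99.0) < 7.0,
           action=lambda vm: vm.update(compliant=False))

vm = {"name": "legacy-app", "openssh_version": 6.6}
fire("vm_discovered", vm)
print(vm["compliant"])  # False
```

In ManageIQ, the action could instead hand off to a remediation workflow in Automate, or—as in the scale-out example—to any other piece of automation.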

A nice thing about compliance in ManageIQ is that it works on more than just the metadata of the items discovered through the various APIs. It is also possible to define rules for the contents of VMs, hypervisors, and containers. Extracting these contents is done by a process called SmartState Analysis (SSA).

SSA can discover configuration files, event logs, and package databases; it stores that information in the VMDB. Interestingly, SmartState is a fully agent-less technology. It works by accessing the disks remotely over platform-specific APIs, usually snapshot and/or backup APIs. As the disks are untrusted and potentially concurrently updated, they cannot be safely mounted by a Linux kernel. To get around this, ManageIQ contains Ruby-based read-only filesystem and volume manager implementations that access the disks from user space.

The benefit of the agent-less approach is that it doesn't require cooperative guests, which means that it also works with VMs that are deployed through self-service, vendor-provided "black box" appliances, or VMs that predate the implementation of the cloud management platform. Another benefit of being agent-less is that it also works for VMs that are shut down.

SmartState can give a lot of insight into the environment. A nice example of a compliance policy based on data from SmartState is this policy that checks if a Red Hat operating system is vulnerable to the recently discovered DROWN attack.

Supported providers

ManageIQ ships with a number of providers that are listed below. If there is a commercial variant of an open-source project it is given in parentheses.

  • VMware vSphere
  • oVirt (Red Hat Enterprise Virtualization)
  • OpenStack and TripleO (Red Hat Enterprise Linux OpenStack Platform)
  • Microsoft System Center Virtual Machine Manager
  • Amazon Web Services
  • Microsoft Azure
  • Kubernetes (Red Hat Enterprise Linux Atomic and Red Hat OpenShift)
  • The Foreman and Katello (Red Hat Satellite)

Other providers are in progress in the master branch, such as providers for the Google Cloud Platform, Ansible Tower, and software-defined networking.

Community

ManageIQ is developed by the ManageIQ community. Development happens on GitHub using a pull-request-based development model. Discussions of development topics happens on Gitter, and users interact with each other on the talk.manageiq.org forum. ManageIQ is available under the Apache 2 license.

Despite its young age as an open-source project, ManageIQ has a large and mature code base. The code for ManageIQ was originally developed by ManageIQ Inc., starting in 2006. This company was acquired by Red Hat in December 2012, which released the ManageIQ code in June 2014. The code base weighs in at over 200,000 lines of code excluding tests and gem-ified components, and is built with the Ruby on Rails framework.

Since being released as open source, almost 6,000 pull requests have been merged, by over 100 contributors. While the majority of contributions are from Red Hat staff, the project is actively growing and seeking outside contributions. Recently, companies like Booz Allen Hamilton, Produban/Banco Santander, and Google have been making contributions to the project.

Releases happen approximately every 6 months and are named after chess grandmasters. Most recently the project made its third release named "Capablanca." ManageIQ is also the upstream project for the Red Hat CloudForms product.

The project also holds an annual design summit where users and developers from all over the world come together to exchange ideas and establish the development roadmap. The second ever design summit will be held June 6-7 in Mahwah, NJ.

Getting started

ManageIQ is distributed as a Linux-based virtual appliance that is a little over 1GB in size. After downloading it and importing it into a supported solution (e.g. QEMU/KVM on Linux), the web interface will start up. The first task is to configure a provider by connecting to an element manager such as oVirt, OpenStack, or Amazon Web Services, then waiting a couple of minutes for discovery to complete. The steps are documented in the ManageIQ documentation, with video walkthroughs of the Top Tasks. After the basic inventory has been discovered, it is possible to create custom dashboards, define offerings for self-service, or create compliance policies.

ManageIQ is a big project with a lot of features. It is quite powerful, and is even fun to use. It is best to start simply by focusing on a single objective, for example self-service or reporting. If you get stuck, or even just want to say hello, please contact the community at the talk.manageiq.org forum.

[Geert Jansen is the manager of the CloudForms product at Red Hat.]


The Car Hacker's Handbook

By Nathan Willis
March 16, 2016

No Starch Press recently released a book about working with automotive software systems: The Car Hacker's Handbook: A Guide for the Penetration Tester, written by Craig Smith. The book is an expansion of Smith's popular and widely circulated e-book of the same title. The old version remains available online at no cost, but there is considerably more content in the new revision—enough to make it a tempting purchase not just for automotive-software fans in general, but for those interested in embedded-device security and in reverse engineering other classes of consumer product.

Roadmap

As the subtitle suggests, the book is written as an overview of reverse engineering and security testing car computers—meaning all of the embedded and user-facing computer systems running in a modern vehicle. It covers interacting with embedded sensors and controllers, in-vehicle infotainment (IVI) dash units, the powertrain control modules (PCMs) that control engine operation, and various wireless systems distributed throughout a vehicle. The book is a comfortable length (278 pages), and retails for $49.95.

[Car Hacker's Handbook]

The approach Smith takes is to consider the vehicle a security target like any other computer system that one might analyze for vulnerabilities. He constructs a threat model, itemizing and rating every attack surface; he then explores the threats in a systematic fashion. For instance, the controller area network (CAN) bus is one of the easiest entry points to the system, since it connects all of the networked modules in the car. Thus, he first examines the on-wire format of CAN bus traffic, moves on to sniffing and understanding the higher-level message formats transported over CAN, and eventually considers which CAN messages are interesting and how to generate them.

That said, it is clear from the outset that The Car Hacker's Handbook is not intended as a guide to exploiting the vehicles of other people. Smith is a co-founder of the automotive hacker community Open Garages, and the book is peppered with examples of how hacking on one's own car is a valuable skill to possess. Apart from tweaking engine characteristics to improve performance or fuel efficiency (which are the two most common goals), being able to break into a car's computer network is increasingly necessary to swap in aftermarket parts, replace broken or missing components, or simply to understand why something is not behaving as expected.

On this front, the book is quite successful. The text goes out of its way to de-mystify a number of car-computing topics, from reading and decoding the Diagnostic Trouble Codes (DTCs) emitted by electronic control units (ECUs), to capturing messages in various undocumented and proprietary formats, and even to reverse engineering the read-only memory (ROM) found in the PCM. All of these are topics that various industry players (from car makers to the manufacturers of overpriced diagnostic gadgets) take pains to keep from the public eye. In addition, many of the automotive standards and specifications involved are not freely available and must be purchased—usually at high cost. The information in the book on these topics, especially when coupled with the pointers to additional online resources, levels the playing field quite a bit.

Naturally, the same could be said of most of the software discussed; Open Garages is a project driven by open-source software ideals, and Linux is the easiest platform for interfacing with automotive computer systems. Nevertheless, Smith does highlight several cross-platform and web-based tools that will be of interest to Windows users. Smith also discusses several Linux-based and open-source automotive projects, but the focus of the book is on getting into real-world systems, regardless of whether they run Linux, QNX, Windows, or some peculiar, one-off operating system from an automotive subcontractor.

Along for the ride

To briefly outline the topics covered in the book, Smith starts out with a discussion of the networking protocols used in car computing. The aforementioned CAN bus is a low-level transport protocol; he also describes the packet formats of ISO-TP, CANopen, and GMLAN (all of which are protocols that run on top of raw CAN). He also explains SAE J1850, Keyword Protocol 2000, Local Interconnect Network (LIN), Media Oriented Systems Transport (MOST), and FlexRay; each of these protocols is found in a limited subset of cars, but recognizing them in the field is important. Similarly, there are a variety of DTC messaging protocols; Smith describes the major formats and conventions (many of which do not have formal names or specifications).

The book then discusses how to work with the kernel's SocketCAN interface, providing some valuable tips on the CAN utilities available and on how capturing CAN traffic differs from sniffing Ethernet or WiFi packets. For example, Wireshark performs poorly as a CAN sniffer because of how many times noisy CAN modules repeat their messages; the specialized can-utils package is better at filtering the flood of messages and catching only important changes. The book also discusses some tools for capturing, analyzing, and replaying CAN traffic, like Kayak and caringcaribou, plus Smith's own CAN-traffic generator, ICSim.
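For readers who have not used can-utils, its candump tool can write captures in a compact log format of the form `(timestamp) interface ID#DATA`. A small parsing sketch (the frame shown here is made up for illustration):

```python
def parse_candump(line):
    """Parse one line of candump's log format: '(timestamp) iface ID#DATA'."""
    ts, iface, frame = line.split()
    can_id, data = frame.split("#")
    return {
        "timestamp": float(ts.strip("()")),
        "interface": iface,
        "id": int(can_id, 16),        # 11-bit (or 29-bit extended) arbitration ID
        "data": bytes.fromhex(data),  # up to 8 payload bytes for classic CAN
    }

# An invented frame, in the shape candump would log it.
frame = parse_candump("(1458150261.138170) can0 166#D0320018")
print(hex(frame["id"]), frame["data"].hex())  # 0x166 d0320018
```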

This section of the book is focused on how one might discover and isolate the CAN message that performs a specific function in the car (say, unlocking the doors). One of the later chapters then picks up the topic again, explaining how to develop a compact program to use this message of interest, writing the unlock signal out to the bus (drowning out other, contradictory messages if necessary). It also describes how to adapt the necessary code for use with the Metasploit penetration-testing tool. Perhaps the most interesting aspect of that discussion is that it detours into how to fingerprint the make and model of a car by passively observing its CAN traffic; Open Garages is in the process of developing a fingerprinting tool called CAN of Fingers (c0f) for this purpose.
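A common way to isolate such a message is to record traffic at rest and again while performing the action, then diff the two captures. A sketch of that idea, assuming captures are represented as lists of id/payload dicts (the unlock frame below is entirely hypothetical):

```python
def new_frames(baseline, action_capture):
    """(id, payload) pairs seen while performing the action but not at rest."""
    at_rest = {(f["id"], f["data"]) for f in baseline}
    return sorted({(f["id"], f["data"]) for f in action_capture} - at_rest)

# Hypothetical captures: the 0x19b frame only appears while unlocking.
baseline = [{"id": 0x166, "data": "d0320018"},
            {"id": 0x158, "data": "00"}]
unlock   = baseline + [{"id": 0x19b, "data": "00000e"}]
print(new_frames(baseline, unlock))  # [(411, '00000e')]
```

In practice several passes are needed, since periodic chatter varies between captures; repeating the action and intersecting the candidate sets narrows things down quickly.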

The other major section of the book deals with reverse engineering ECUs. While a complete description of reverse engineering an embedded system would be a topic vast enough to fill multiple books, Smith does an admirable job of outlining the basic process and pointing readers in the right direction. He discusses side-channel attacks, brute-force attacks, and the comparatively straightforward process of dumping an EPROM and analyzing the contents. He looks at some specific tools, mostly of the hardware variety, but with an emphasis on open-hardware options like ChipWhisperer. If all one wants to do is adjust the engine timing or fuel mixture, he notes, little or no code decompilation may be needed: simply finding the right data tables and altering them can be simple.

From a practical standpoint, car hackers have it easier than some reverse engineers, since so many cars (especially older ones) use small CPUs with modest amounts of memory. But this is changing fast as car makers catch up to other electronics companies. It is particularly interesting to note that this section of the car-hacking space is the one with the most missing pieces from a free-software standpoint: most of the ECU reverse-engineering tools, it seems, are still of the proprietary flavor.

In addition to these big topics, there are several self-contained chapters that cover smaller subjects, including vehicle-to-vehicle networking, sourcing test components, attacking wireless systems (like key-fob remotes and tire-pressure monitors), and attacking vehicles via the IVI unit. Some of these discussions are surprisingly brief, but that is in line with the feel of the book, which emphasizes getting into the car's network in order to do something interesting. The IVI unit and wireless interfaces are enormous attack surfaces, but they are primarily of interest to the car hacker precisely because one can go through them to get to the vehicle's other systems. Fiddling around with the IVI system itself to alter or side-load apps may be fun, but is somewhat tangential. A lot has been written elsewhere about vulnerabilities in IVI units and wireless interfaces; Smith points readers to other resources rather than repeat their information.

The book closes with an encouragement for readers to start up their own Open Garages local group (or form a similar car-hacking meet-up), whether attached to a hackerspace or as a stand-alone entity. At the moment, there are only a handful of such groups anywhere in the world, but interest in car hacking as a topic is certainly on the rise. It may take time to catch up with "maker" subcultures like wearable electronics and 3D printing, but Open Garages is a good reminder of how much value a community can add. Its members have developed a lot of code, including projects of vital importance like c0f.

In the rear-view mirror

As a reviewer, I may be personally biased in favor of any book that addresses car hacking, because it is a hobby I find personally interesting (although most of my involvement has been in the Linux-based IVI development camp) and a topic on which there are precious few long-form resources to be found. Setting that aside, however, I remain convinced that The Car Hacker's Handbook is well worth reading in its own right.

For starters, the practical information on automotive networks and protocols is invaluable. Not only are many of the protocols poorly documented (if at all) elsewhere, but many of them are in the "legacy code" bin: companies have moved on and have no interest in continuing to discuss them, but car owners still own a lot of the systems and will for many years to come.

At a more fundamental level, though, the book also works as an introduction to car computer systems for those who are already experienced at hacking other types of devices. And it addresses the various security issues and approaches without assuming too much prior engineering experience, which should make the book useful for shade-tree mechanics with little hacking experience, too. A reader cannot learn how to reverse-engineer ECU firmware in a single chapter, but Smith puts that task in context, explains when it is necessary, and puts the reader on the right path forward. All things considered, that is what one wants from a hacker's handbook.

[The author would like to thank No Starch Press for sending a review copy of the book.]


Page editor: Jonathan Corbet


Copyright © 2016, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds