
Leading items

What's new in Scribus 1.5

By Nathan Willis
June 17, 2015

The Scribus project released version 1.5.0 in late May. The 1.5 release is unusual in that it is designated as an unstable version. However, it has been so long since the last new stable release of the desktop publishing (DTP) application that the Scribus team decided to highlight it anyway, asking users to test and review the numerous changes in what will eventually become the 1.6 release.

The announcement notes that Scribus 1.5 should not be considered production-ready, but it suggests that users test out the new release to get a feel for the features added and the changes made since the 2012 release of version 1.4.0. We looked at 1.4.0 around that time, as well as at the 1.4.3 release in 2013. The source code for 1.5.0 is available for download through the project's Subversion repository. At the moment, the only binary builds for Linux systems appear to be those in the project's Ubuntu personal package archive (PPA), although if history is any guide, builds for other distributions should arrive in due course.

As the 1.5.0 release notes explain, there are both technical and user-interface changes to be found in the new release. The biggest difference found in the technical column is the port to Qt 5 as the application toolkit; as is often the case, the new major version of the toolkit makes extensive testing a wise move. A slightly less obvious change, however, is that Scribus 1.5.0 is pushing more and more functionality out into external libraries, which makes for a larger set of package dependencies.

For example, in 1.5.0, several new file types are supported through the UniConvertor library, rather than through internal import code. Similarly, several new features make use of GraphicsMagick and OpenSceneGraph. For regular Scribus users, it is also vital to note that the 1.5.0 release makes several changes to the native file format—and the project is not offering guarantees that the new format will not change again between now and the eventual 1.6.0 release.

These changes are not arbitrary, however; they are tied to new, user-visible features. Using UniConvertor allows Scribus 1.5 to open the native files produced by other DTP applications (such as Adobe InDesign and PageMaker, Microsoft Publisher, and Apple Pages). This is a feature that has been requested for years, so many users will be happy to see it available at last.

Along the lines of "oft-requested" features, the new file format adds support for several key DTP constructs of interest to Scribus users, such as footnotes, endnotes, and sidenotes. Since Scribus allows users to create text frames and place them anywhere on the page, it has always been possible to approximate footnotes and the like, but having real, first-order support is another matter entirely: the user no longer has to manually keep track of which note belongs on which page, nor renumber the notes when one is added, removed, or rearranged.

[Scribus 1.5 picture browser]

Another major addition is support for "real" tables. Here, too, the earlier versions of Scribus allowed for a workaround that would suffice for simple documents, but it was a hack. Earlier Scribus tables were nothing more than a grid of generic text frames grouped together. The new implementation is a rewrite that makes the table behave more like the "table" objects users are accustomed to in word processors: rows, columns, and entire tables can be selected, styled, and acted upon as units. Rows and columns can be inserted or removed, and their dimensions adjusted on the fly.

The new release also allows control over typographic orphans and widows: isolated lines from the beginning or end of a longer paragraph that get stranded on a page by themselves due to an inconveniently placed page break. It is also now possible to link objects (such as images or text frames) together while retaining the ability to edit them individually. Previously, using Scribus's "group" feature would lock all of the selected objects as-is. This is similar to how grouping works in Inkscape and many other applications, but it can be inconvenient. Version 1.5 adds a "weld" feature that links selected objects in this new, still-editable fashion.

Several features have been added in this release that offer new functionality even though they have not been the subject of repeated feature requests from users. One is the "Picture browser," an asset-management aid that lets users create libraries of tagged images. Later, someone designing a document can open the picture browser, pull up images matching a particular tag, and drag them into the document. This feature has many applications for teams of designers, as well as for those users with a habit of forgetting where they store their images.

[Scribus 1.5 symbols]

Version 1.5 also adds a "Symbol" feature akin to the "Clone" feature found in Inkscape. Essentially, the user can select an object as the master copy and create linked clones of it; subsequently, whenever the user alters the master, all of the clones get updated as well. Calling this feature "Symbol" is a bit of an odd choice, but it will no doubt come in handy for many.

Many smaller additions make their debut in this release as well: there are now "arc" and "spiral" tools for creating additional vector shapes, there is a drop shadow effect tool (complete with full control over color, opacity, blur, and other parameters), and there is support for gradient fills and color palettes created in other applications (like GIMP and Adobe Photoshop). Several new options for rendering objects are supported, such as transparency and cross-hatch fills.

Finally, the UI itself has been reconfigured in a number of places. The most obvious change is that all tool and option palettes can now be docked to the active window, rather than floating (where they often need to be dragged out of the way). The document preferences dialog has been redesigned, as has the right-click context menu. The release notes say both of these redesigns were undertaken with an eye toward simplification.

It will, however, be up to Scribus users to provide feedback to the project regarding how useful the simplifications are. The project is soliciting such feedback from those willing to test 1.5.0. Most users may be able to get along fine with the stable 1.4.x series, but those who make heavy use of footnotes, tables, and large image collections might just be tempted to make the jump to 1.5 now, stable or otherwise.

Testing the ColorHug 2

By Nathan Willis
June 17, 2015

Richard Hughes's ColorHug device made a splash in free-software circles when it was released in 2012 (at which point we took a look). Although there were other colorimeters supported by software projects like Argyll CMS, the ColorHug was open hardware designed with first-class Linux support in mind. Users could create color profiles for their displays using free software and free firmware. But the ColorHug had its share of weaknesses, too, and Hughes soon turned his attention to designing a more ambitious spectrometer device. That spectrometer has yet to arrive, so it was a bit of a surprise when, in mid-April, Hughes announced the first test batch of ColorHug 2 devices: a serious refresh of the original ColorHug colorimeter that looks poised to close most of that generation's lingering bugs and feature requests—but without the strikingly higher price tag of a true spectrometer.

To recap, the first ColorHug is a tristimulus colorimeter, which profiles the output of a display device by measuring the color of a bank of on-screen color patches precisely selected to sample the entire color space. The heart of the device is a set of light sensors behind filters that limit them to sensing specific frequency ranges. That works well enough as long as the red, green, and blue sub-pixels of the display device match up with the filters. But the method starts to break down for devices with significantly different sub-pixel components.

That was one of the two major complaints heard about the original ColorHug. It worked great for CRT displays (which are increasingly hard to find) and for many LCD displays with cold-cathode backlights. But it did not produce good results for the distinctly different illumination characteristics of newer LED-backlit LCD monitors. The other issue cited with the original ColorHug device was that it was—by necessity—calibrated against an ideal sRGB color space. For many individual monitors, generating an optimal color profile would require loading a color-correction matrix (CCMX) file to replace the generic sRGB matrix. Unfortunately, creating the CCMX file was not possible using the ColorHug, since that would be akin to trying to measure a ruler with itself—one needs an impartial reference.
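
A CCMX is, at its heart, just a 3x3 matrix that maps the device's raw tristimulus readings onto corrected XYZ values for one particular display. As a rough illustration of the arithmetic involved (the reading and matrix below are invented for the example, not taken from colord or any real CCMX file), the correction amounts to a single matrix multiplication:

    # Illustrative sketch only: the raw reading and the correction
    # matrix are made up, not from any real device or CCMX file.
    raw_xyz = [41.2, 21.3, 1.9]          # raw X, Y, Z from the sensor

    ccmx = [[ 1.02, -0.03,  0.01],       # hypothetical per-display matrix
            [ 0.04,  0.97,  0.02],
            [-0.01,  0.05,  1.01]]

    corrected = [sum(ccmx[i][j] * raw_xyz[j] for j in range(3))
                 for i in range(3)]
    print(corrected)                     # corrected X, Y, Z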

Initially, Hughes concluded that the right fix for both problems was to design and build a different class of hardware device altogether: a spectrophotometer (or spectrometer for short). The key difference is the sensor used; a spectrometer measures across the entire range of visible wavelengths by breaking the light sample up into a spectrum (as a prism does) and then taking measurements along the whole spectrum. That makes it immune to the differences between the RGB primaries used in a display, whereas if a colorimeter's filters do not line up conveniently with those primaries, the measurements can be garbage. It also makes spectrometers a lot more expensive; at Libre Graphics Meeting 2014, Hughes speculated that such a device could run £300.
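
To make that immunity concrete: a spectrometer's XYZ values come from weighting the measured spectrum by the CIE color-matching functions, so the shape of the display's primaries never enters into the computation. A toy sketch (the ten-sample arrays below are invented stand-ins for the real tables, which cover roughly 380 to 730 nanometers at fine spacing):

    # Toy sketch: derive XYZ by integrating a measured spectrum against
    # the CIE 1931 color-matching functions. All values are invented;
    # real tables have hundreds of entries.
    spectrum = [0.1, 0.3, 0.7, 1.0, 0.9, 0.6, 0.4, 0.3, 0.2, 0.1]
    x_bar    = [0.0, 0.1, 0.3, 0.2, 0.1, 0.3, 0.6, 0.8, 0.5, 0.2]
    y_bar    = [0.0, 0.0, 0.1, 0.3, 0.7, 1.0, 0.9, 0.6, 0.3, 0.1]
    z_bar    = [0.1, 0.5, 1.2, 0.8, 0.3, 0.1, 0.0, 0.0, 0.0, 0.0]

    X = sum(s * x for s, x in zip(spectrum, x_bar))
    Y = sum(s * y for s, y in zip(spectrum, y_bar))
    Z = sum(s * z for s, z in zip(spectrum, z_bar))
    print(X, Y, Z)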

A price in that range did not seem to engender much interest in the community, but Hughes kept looking into it, periodically noting ways to reduce the up-front manufacturing costs. He generally referred to this device as the ColorHug+, but in social media and general conversation others variously called it the ColorHug2 or ColorHug v2, which led to some confusion when Hughes announced something called the ColorHug 2 in April. The announcement may have eluded many readers because it was posted to the colorhug-users discussion list, rather than on Hughes's blog.

The new device is a prototype; just 24 were built in the initial batch. But it indicates that, regardless of whether or not a spectrometer product is ever released, Hughes has found ways to cope with the original ColorHug's limitations, at least for the majority of users. The ColorHug 2 includes a substantially upgraded color sensor, an on-board temperature sensor, and enough RAM to let the firmware take more complex readings.

The sensor module used in the original ColorHug was the TCS3200. The ColorHug 2 uses a JENCOLOR sensor with spectral sensitivity characteristics that match the CIE 1931 XYZ color space. This color space is specifically tailored to correspond to human vision; while the sensor still does not read the entire visible spectrum like a spectrometer would, it does produce output that works without a CCMX file and is accurate for any reasonably normal display type. The new sensor also adds nearly $30 (£19) to the price of the device, Hughes said. The prototypes were sold for £75.

The inclusion of a temperature sensor will allow the ColorHug 2 firmware to compensate for readings taken outside of the JENCOLOR sensor's calibrated range. Hughes cites that range as 20 to 40 degrees Celsius, and real-world conditions can fall outside either end of it (more likely with a laptop being toted from place to place than with a desktop machine kept indoors). Firmware support for the feature is still in development.

Last but not least, the new device should be capable of performing some additional measurements thanks to some on-board RAM. While the first-generation ColorHug could output only instantaneous sensor readings, the new device can measure a display's latency by measuring the time it takes for the pixels to change color. For color purists, the measurement of interest would likely be the rise and fall response times of the display hardware (which could even be different), although interesting data could also be collected about the overall performance of the entire graphics stack in use.
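
How the firmware will expose such measurements is not yet settled, but the computation itself is simple. Assuming a hypothetical stream of timestamped luminance samples taken during a black-to-white transition (both the interface and the data below are invented for illustration), the conventional 10%-to-90% rise time falls out directly:

    # Sketch: 10%-90% rise time from timestamped luminance samples.
    # The sample data and the idea of streaming readings from the
    # device are hypothetical; firmware support is still taking shape.
    samples = [(0.000, 0.02), (0.004, 0.05), (0.008, 0.30),
               (0.012, 0.65), (0.016, 0.88), (0.020, 0.97)]

    lo, hi = samples[0][1], samples[-1][1]
    t10 = next(t for t, v in samples if v >= lo + 0.1 * (hi - lo))
    t90 = next(t for t, v in samples if v >= lo + 0.9 * (hi - lo))
    print("rise time: %.1f ms" % ((t90 - t10) * 1000.0))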

As of today, using the ColorHug 2 is no different from using the original model. The devices are identical in size and function the same way when used with GNOME Color Manager. When the device is attached via USB, the "Calibrate" button in GNOME Color Manager becomes clickable. Creating a color profile involves securing the ColorHug device to the center of the display (a long elastic band is included to help with this, although holding it in place is certainly possible for quick profiles) while the calibration routine cycles through a series of color patches.

The result of the process is a .icc color profile file, which GNOME Color Manager automatically adds to the list of profiles available for the display. It also lists the age of each available profile; re-calibrating periodically is regarded as a good idea since display characteristics degrade over time.

I first attempted to test the ColorHug 2 using the live USB stick included in the package (which was running a beta release of Fedora 22). Unfortunately, this live image did not boot on my machine, which is a problem others have reported as well. Compiling the latest release of colord did not help either; the failure seems to be related to problems reported with the nouveau video driver: on newer GPUs, nouveau may fail to initiate 3D mode at startup, which prevents GNOME Color Manager from correctly detecting the display.

Ultimately, though, the proprietary NVIDIA driver did work and allowed me to create a new profile. For the laptop that I tested the device with, the ColorHug 2 profile did not differ significantly from the one created with the original ColorHug. For a recently purchased external display, however, the difference was stark: the new profile removed a strong teal cast, most likely due to the particular characteristics of that display.

Many users may not be convinced that color calibration is worth the effort, of course, and there is probably little that can be said to change their minds. But for anyone who is interested in display accuracy or in matching the output of multiple monitors, keeping an eye out for the full production run of the ColorHug 2 is a good idea. We may yet see a true spectrometer, which would be a boon for printer calibration and other tasks, but in the meantime, this update of the original ColorHug corrects most of the problems users encountered with the first-generation device. And, naturally, it will be interesting to see what new firmware features Hughes—and others—manage to come up with to leverage the expanded hardware capabilities.

Micro Python on the pyboard

By Jake Edge
June 17, 2015

A 2013 Kickstarter project brought us Micro Python, which is a version of Python 3 for microcontrollers, along with the pyboard to run it on. Micro Python is a complete rewrite of the interpreter that avoids some of the CPython (the canonical Python interpreter written in C) implementation details that don't work well on microcontrollers. I recently got my hands on a pyboard and decided to give it—and Micro Python—a try.

All of the core Python language has been implemented in Micro Python, as well as a number of the standard libraries. Some C-language standard modules have not been ported, at least not yet. Beyond that, Micro Python developer Damien George has added a library that simplifies access to the pyboard and its peripherals (e.g. LEDs, buttons, the accelerometer).

[pyboard front]

The key to running Python on a microcontroller (such as the STM32F405 used on the pyboard) is to keep memory usage low while still providing good performance. The footprint for Micro Python is 260KB (as detailed in the "official end" message for the Kickstarter back in April), though removing floating point support brings it down to 240KB. A stripped-down version of the interpreter can be as small as 75KB (and run in 8KB of RAM), but leaves out support for SD cards, FAT filesystems, USB serial ports, USB mass storage, and so on.

While Micro Python will run any Python 3.4 code, it is not byte-code compatible with CPython. In particular, it avoids heap allocations for integer operations and for calling methods. Heap allocations inevitably lead to garbage collection, which can cause problems for time-critical processing and for code running from interrupts (since garbage collection is not reentrant).

So Micro Python encodes "small" integers in the top 31 bits of "pointers" on the stack (with the low bit set to one to distinguish them from real pointers). It also adds a new byte code (CALL_METHOD) that retrieves the information it needs from the stack, rather than creating a new object (which requires a heap allocation) as CPython does. The latter is a technique adopted from PyPy. Information about all of this is, unfortunately, buried in the first FAQ entry on the Kickstarter page and is not separately linkable.
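
The tagging trick itself is easy to illustrate. Here is a sketch, modeled in Python for readability, of roughly what the interpreter does with machine words (the real implementation is in C, operating on raw 32-bit values):

    # Sketch of small-integer tagging. Heap pointers are word-aligned,
    # so their low bit is always 0; a word with the low bit set to 1 is
    # instead a small integer stored in the upper 31 bits.
    def int_to_word(n):
        return (n << 1) | 1      # shift the value up, set the tag bit

    def word_is_int(w):
        return w & 1 == 1        # tag bit distinguishes ints from pointers

    def word_to_int(w):
        return w >> 1            # arithmetic shift recovers the value

    w = int_to_word(21)
    assert word_is_int(w) and word_to_int(w) == 21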

Micro Python has four different "emitters" that generate different kinds of code from the compiler. The default byte-code emitter does what its name implies: generates byte codes to run on the Micro Python virtual machine. The other emitters are accessed using Python decorators. The native emitter (@micropython.native) translates each byte code to ARM Thumb (the native code for the pyboard CPU) machine code. The viper emitter (@micropython.viper) also generates ARM Thumb code, but it further optimizes integer operations to bypass the normal Python binary operation runtime code, which speeds things up considerably. Viper does not support all of the Python language, however. The final emitter (@micropython.asm_thumb) allows for inline assembly code using a Python-like syntax. The emitters are detailed in some of the Kickstarter update entries: "The 3 different code emitters", "The 3 different code emitters, part 2", and "Inline assembler".

The emitters are targeted at situations where better performance might be needed for a time-critical section. Both the native and viper emitters generate faster code, but they also generate larger code. Viper actually generates somewhat smaller and faster (sometimes much faster) code than native, but the viper emitter does not support all of the language types and constructs (though it is still under development), so it may not be the best choice for all cases. Using the emitters is as simple as decorating a function:

    @micropython.native
    def foo(bar):
        ...
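
A viper function is decorated the same way, though viper also accepts type annotations so that values can be handled as raw machine integers. A small sketch, following the documented viper syntax (this particular function is invented for illustration):

    @micropython.viper
    def count_to(n: int) -> int:
        # n and total are machine integers here, so the loop runs
        # without allocating Python integer objects
        total = 0
        for i in range(n):
            total += i
        return total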

The asm_thumb emitter allows Micro Python to easily interface with assembly code; it handles putting function arguments into registers according to the ARM embedded-application binary interface (EABI), having the function return the contents of register r0, and so on. Python's syntax is used so that there are no changes needed to the parser to support asm_thumb. Here is a snippet from the example in the Kickstarter update posting:

    @micropython.asm_thumb
    def delay(r0):
        b(loop_entry)      # branch to 'loop_entry'
        label(loop1)       # 'loop1' label
        movw(r1, 55999)    # move 55999 to register 1
        ...

Instead of the usual assembly syntax, branches, labels, and opcodes are written as calls to functions that the emitter interprets.

The documentation for Micro Python is fairly extensive, covering the standard libraries that come with it, as well as certain libraries that have been "micro-fied" in keeping with Micro Python's focus on small size. Beyond that, there are some additional libraries that are not part of the standard Micro Python distribution, but that can be downloaded and added as needed. The documentation also covers the pyboard hardware and the pyb library written to access various features of the device.

[pyboard back]

The board itself is perhaps two large postage stamps (or two large coins) in size. It is powered by USB, which also acts as its means of communication (as a serial port and a mass storage device). The device effectively has Python as its operating system; connecting to it with a terminal program leads to a Micro Python prompt. The board can also boot into a Python program by putting a suitable main.py in the boot filesystem.
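
A minimal main.py, using nothing beyond the pyb library covered by the documentation, might just blink an LED forever (this particular program is an invented example, not something shipped with the board):

    # main.py: run automatically at boot; blink the first LED forever
    import pyb

    led = pyb.LED(1)
    while True:
        led.toggle()
        pyb.delay(500)    # delay is in milliseconds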

There are two choices for the boot filesystem, either the small internal partition in the microcontroller's 1MB flash or a micro SD card that has been inserted into a slot on the board. If there is a micro SD card present, it takes precedence. There are also two safe modes that are accessible by holding down buttons at boot time: one that can bypass the startup scripts (both a boot.py and the main.py that get run by default) and another that resets the filesystem in the flash partition to its factory default state.

A third way to run code on the device is to use pyboard.py to transfer a file over the serial connection and run it on the board. So if you get tired of typing a program at the prompt, or uploading it to the flash or SD card and rebooting, you can edit it locally, then send and run it all at once.
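
pyboard.py can also be used as a module from a host-side script. A minimal sketch, following the tool's documentation and assuming the board shows up as /dev/ttyACM0 (details may vary between versions):

    # Host-side sketch: run code on the board over the serial link.
    # Assumes the pyboard enumerated as /dev/ttyACM0.
    import pyboard

    pyb = pyboard.Pyboard('/dev/ttyACM0')
    pyb.enter_raw_repl()               # take over the board's REPL
    output = pyb.exec('print(1 + 1)')  # run code on the board
    pyb.exit_raw_repl()
    print(output)                      # b'2\r\n'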

The pyboard has two buttons (one reset, RST, and one user-definable, USR), four LEDs, an accelerometer, a realtime clock, and 30 general-purpose I/O (GPIO) pins. The pyb library makes it easy to access this functionality. For example:

    Micro Python v1.3.10 on 2015-02-13; PYBv1.0 with STM32F405RG
    
    >>> leds = [pyb.LED(n) for n in range(1,5)]  # LEDs are numbered 1-4
    >>> for i in range(10):
    ...     for l in leds:
    ...         l.on()
    ...         pyb.delay(1000)
    ...         l.off()
    ... 

That program will blink each LED in turn for one second (pyb.delay() is in milliseconds) ten times. A somewhat more complex example reads the USR button to stop the loop:

    >>> sw = pyb.Switch()
    >>> flag = True
    >>> while flag:
    ...     for l in leds:
    ...         l.on()
    ...         pyb.delay(200)
    ...         l.off()
    ...         if sw():
    ...             flag = False
    ... 

Hitting the RST button instead of USR (as I did, twice, sadly) will only lead to having to type the code in again (or to using one of the other means for running the code). One final example:

    >>> acc = pyb.Accel()
    >>> while True:
    ...     print(acc.x(), acc.y(), acc.z())
    ...     pyb.delay(200)
    ... 
    2 0 22
    -8 5 17
    -11 4 18
    -16 4 12
    ...

The accelerometer returns signed values from -30 to 30 for each of the three axes. The accelerometer tutorial page has some additional examples. In fact, the Micro Python tutorial has much of interest (e.g. turning the pyboard into a USB mouse).

Overall, the pyboard is an interesting device to play with. One can imagine various kinds of gadgets that could be built (or prototyped) using it. The recently completed WiPy Kickstarter envisions Micro Python as an entrant into the Internet of Things (IoT) sweepstakes. The Micro Python store also has devices of various sorts that can be hooked up to the pyboard.

Micro Python and the pyboard are in some ways reminiscent of the Python in the GRUB bootloader project (known as the BIOS Implementation Test Suite or BITS) that was presented at PyCon. One big difference is that Micro Python only implements Python 3; another is the size of the systems targeted. But getting a Python prompt when booting a system is a reminder of days gone by—when a BASIC (or DOS) prompt would greet computer users.

There are plenty of other things to investigate with the pyboard: updating the firmware to something more recent than the February build, for example, or messing around with interrupts from the switch rather than polling it (as sketched below). For those looking for an easily programmed microcontroller to use in a project, the pyboard may fit the bill well. As mentioned, the tutorial has some interesting ideas, but others are also starting to use the board (e.g. to control a hexapod robot). The Raspberry Pi and other single-board computers also support Python, of course, but they are a far more heavyweight solution. For smaller tasks, especially those that need to be battery operated, the pyboard is definitely an attractive option.
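
For instance, instead of polling sw() as in the earlier loop, a handler can be attached to the switch; this follows the switch example in the tutorial:

    >>> sw = pyb.Switch()
    >>> # toggle the blue LED whenever USR is pressed; no polling loop
    >>> sw.callback(lambda: pyb.LED(4).toggle())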

Page editor: Jonathan Corbet


Copyright © 2015, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds