Fedora mulls ARM as a primary architecture
The ARM architecture is growing in popularity and is expected to expand its reach beyond the mobile and "small embedded" device space it currently occupies. Over the next few years, we are likely to see ARM servers and, potentially, desktops. Fedora has had at least some ARM support for the last few years, but always as a secondary architecture (SA), which meant that the support lagged that of the distribution's two primary architectures (32- and 64-bit x86). Recently, though, there has been discussion of "elevating" ARM to a primary architecture (PA), but, so far, there is significant resistance to such a move.
The subject came up at a meeting of the Fedora engineering steering committee (FESCo) on March 19. Adding ARM as a primary architecture for Fedora 18 was a late addition to the agenda, which annoyed some, but the discussion was largely to "start the ball rolling and collect feedback from everyone", as Kevin Fenzi put it. There will be many other opportunities to discuss the idea, he said. The meeting log bears that out, as the only vote taken (or even proposed) was to ask for input from various teams (QA, release engineering, kernel, and infrastructure) about the impact of a change like that.
The difference between primary and secondary architectures for Fedora is rather large. Releases cannot be made without all of the packages building and working for each primary architecture, whereas secondary architecture packages can languish. In fact, the current release of Fedora for ARM is based on Fedora 14—though there are alphas of Fedora 15 and 17—a release that is already past its end of life on x86.
The meeting discussion focused mostly on the motivation for making ARM a PA and why the project's goals couldn't be met, at least for now, by remaining as an SA. Much of the motivation, it would seem, is for Fedora to get out ahead of the curve on ARM support. Making ARM a "first class citizen" would increase its visibility and put the full weight of the Fedora community behind the effort. Therein, it seems, lies part of the problem.
One could argue that Fedora has already fallen behind the curve with respect to ARM given what Ubuntu and Debian are doing to support the architecture. There is a lot going on with Linux on ARM, and Fedora may well find itself becoming less relevant if support for ARM does not improve. But the question seems to be whether that support needs to improve as an SA before even considering whether it can be a new PA.
Based on the discussion in the meeting, Matthew Garrett posted an RFC draft of the requirements to promote an architecture to a PA. So far, no architecture has transitioned from an SA to a PA, so some kind of ground rules need to be established. Garrett lists seven potential criteria.
Much of the response to that posting concerns the amount of time it (currently) takes to build ARM packages (vs. the time for x86 and other architectures). Jakub Jelinek noted that GCC builds for 64-bit x86 are on the order of two hours, while building for ARM takes much longer. He followed that up with actual numbers from GCC 4.7 builds for Fedora 17, which ranged from one-and-a-half hours for x86_64 to more than 26 hours for armv5tel (and more than 24 for armv7hl). Brendan Conoboy pointed out that plans for newer "enterprise hardware" would cut those ARM build times in half, but that still leaves a substantial gap.
Slow builds are not just an annoyance; there are real impacts on the distribution if package building takes "too long". Josh Boyer lists two. If a package builds for the x86 family, but then fails to build for ARM, the x86 build will have to be resubmitted after the ARM problem is fixed. In addition, an update (for a security issue, for example) has to wait for the slowest build to finish before it can go out. Adam Williamson also notes another problem that could arise in the release verification process:
If builds get significantly slower, that could have a concrete impact on the release validation process: it's plausible that we'd either need to extend the validation period somewhat - earlier freezes - or we would have to eat a somewhat higher likelihood of release slippages.
Build speed is a technical issue that can presumably be overcome (eventually) with faster hardware. Other possibilities, like cross-compiling on faster x86_64 servers or parallelizing the Koji build system (perhaps using something like distcc), seem to have been ruled out by Fedora release engineering or the Fedora ARM team. While some remain unconvinced, Conoboy is adamant that cross-compilation is not a good solution.
But some question the wisdom of even having criteria for promoting SAs to PAs, whether it makes sense for Fedora to even consider ARM as a PA, or both. Kevin Kofler is definitely in the last category, as he believes that the current list of PAs "should be set in stone unless a MAJOR change in hardware landscape happens". Some would argue that the change is already happening. But he is concerned that additional PAs put a burden on all of the package maintainers, so that it should require an extraordinary event (like "x86 gets discontinued by the hardware manufacturers and everyone uses ARM instead") before any change like that is even considered. He continues:
The focus should be on finding ways to make secondary architecture releases more timely (i.e. it's not acceptable that e.g. the stable ARM release is still Fedora 14 which doesn't even get security updates anymore), not to cheat around the problem by making ARM a primary architecture (which does not help all the other secondary architectures).
Kofler harps on the same points throughout the thread, belittling the ARM market share (at least in the market segments that he thinks should be targeted) and finding the build times for ARM packages to be untenable. He considers the large existing base of ARM devices to be unsuitable for installing Fedora, at least at this point. But, as Richard W.M. Jones points out, that is changing rapidly:
My £400 tablet has plenty enough power, storage and whatever else to run Fedora. Fedora works pretty well on £200 Trim Slice servers. Fedora is going to be shipped with £25 Raspberry Pi devices in the near future.
Others were also skeptical of the current ARM hardware being a good target for Fedora, but Williamson points out that getting Fedora ARM running does more than just target those devices. The ARM project is looking toward the future, both on servers and mobile devices. Getting the distribution running on one is a big step toward having it available for the other.
But the speed of the build system is just one symptom of the problems that another PA will bring. One of the bigger questions, which remains largely unanswered as yet, is what making ARM a PA would do for Fedora as a distribution. It's reasonably clear why it would help the Fedora ARM project to have ARM as a primary, but the advantages to the distribution, at least at this point, are less clear. As Garrett put it:
The only reward you'll get from being a primary architecture is basking in the knowledge that the project thinks your work is good enough to be a primary architecture. The better you can demonstrate that in advance, the easier the process will be.
Peter Robinson outlined many of the advantages that the Fedora ARM team sees in another fedora-devel thread. Essentially, it would spread the load of responsibility throughout the Fedora community. That is, of course, the underlying concern of many posters in the threads. But Robinson sees it this way:

I'm fully aware that Primary Arch isn't the perfect panacea...
There is, of course, nothing stopping the ARM team from achieving most of its goals while staying as secondary architecture. It will be more difficult and likely require more volunteers, but the Fedora project as a whole needs to be convinced of the advantages of taking on the "burden" of ARM as a primary. So far, the ARM project doesn't seem to have made a convincing case for that, but, given the importance of the architecture going forward, one might guess that the situation may change in the next year or two. In the meantime, setting some goal posts for any secondary architecture that wants to be promoted seems like a good first step.
Posted Mar 22, 2012 2:32 UTC (Thu)
by yarikoptic (guest, #36795)
[Link] (5 responses)
Just a fact: Debian has supported (included in the official stable release) the ARM architecture since the woody 3.0 release (July 19, 2002).
source: http://www.debian.org/ports/arm/
Posted Mar 22, 2012 4:50 UTC (Thu)
by rahulsundaram (subscriber, #21946)
[Link] (4 responses)
Posted Mar 22, 2012 8:35 UTC (Thu)
by rvfh (guest, #31018)
[Link] (3 responses)
Posted Mar 22, 2012 11:47 UTC (Thu)
by rahulsundaram (subscriber, #21946)
[Link] (2 responses)
Posted Mar 22, 2012 15:56 UTC (Thu)
by rvfh (guest, #31018)
[Link] (1 responses)
You said: "Fedora does the job better than anyone else including being at the forefront of developing new technologies [blah, blah, blah...]" hence my remark that this remains to be proven.
Posted Mar 22, 2012 16:49 UTC (Thu)
by JoeBuck (subscriber, #2330)
[Link]

I don't think it would be hard to make the case that Fedora has been a leader in introducing any number of new technologies into GNU/Linux distributions. This sometimes makes Fedora a poor choice for users who aren't willing to put up with flakiness, as sometimes technologies are introduced before they are ready. But there really isn't much of a question about Fedora's leadership in that area.
Posted Mar 22, 2012 2:38 UTC (Thu)
by slashdot (guest, #22014)
[Link] (37 responses)
Why the heck are they even considering using native compilation, which is just absurd, given that it gives no advantage, and that of course all non-x86-64 CPUs just suck horribly?
And what does many core have to do with it? Compilation is mostly embarrassingly parallel, so there's no issue with SMP or clusters.
Just run distcc on an x86-64 cluster and cross-build everything...
BTW, in case they are worried about it, if a build system tries to execute a program it builds (= it's broken), just have qemu-arm set up to run it automatically.
If really needed, they can even just run a whole ARM distribution via qemu-arm on x86-64, and simply substitute gcc, as and ld with native x86-64 binaries, since they are usually the only performance critical programs.
Sometimes one wonders how things work at all, when the people in charge are so clueless.
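For what it's worth, the qemu-arm workaround being proposed can be sketched in a few lines of Python; the sysroot path below is an illustrative assumption, not something from the thread:

    #!/usr/bin/env python3
    # Minimal sketch of the workaround described above: when a build step
    # insists on executing a freshly built ARM binary on an x86-64 host,
    # run it under the qemu-arm user-mode emulator instead.
    import subprocess

    def run_target_binary(path, *args):
        # -L points qemu-arm at an ARM sysroot (hypothetical path) so the
        # binary can find its dynamic linker and shared libraries.
        return subprocess.run(
            ["qemu-arm", "-L", "/usr/arm-linux-gnueabi", path, *args],
            check=True)

    # e.g. a configure script probing a just-built test program:
    # run_target_binary("./conftest")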
Posted Mar 22, 2012 2:54 UTC (Thu)
by corbet (editor, #1)
[Link] (17 responses)

Come on, you don't need to talk about the people who are working to build free distributions that way. They have good reasons for being concerned about cross compilation and distcc. Maybe they'll eventually find ways around some of them, but they have thought about this stuff. "Clueless" is not fair and not conducive to a productive conversation.
Posted Mar 22, 2012 3:32 UTC (Thu)
by jcm (subscriber, #18262)
[Link] (16 responses)
Fedora (and other distros) haven't traditionally *done* cross-compilation for everything. We would have to fix a /lot/ of assumptions from the get go. So many that in fact I think we'd still be looking into that without the releases we have seen so far. Further, we want to be *boring* in ARM. We want to look and smell as much like x86 where it makes sense to do so. That means unless x86 is going to be cross-compiled (on a future 64-bit ARM system perhaps - and I'm not serious), we're not going to do something totally out of whack. No, we have to live with the world in which we live.
Jon.
Posted Mar 22, 2012 5:29 UTC (Thu)
by mjthayer (guest, #39183)
[Link] (2 responses)
In theory cross-compilation should be just as good as native, and helping to turn that into practice by creating heavy users would actually be lovely. Sadly this has become a lot harder (before it becomes easier one would hope) on Ubuntu 12.04 with their new multi-architecture system - at least many X11 development packages currently won't install for several architectures in parallel. At least in this case the fixes are not too hard, so I haven't yet given up hope for a fix before the actual release!
Back to Fedora though - if x86_64 is better at compiling and ARM at saving power, it seems to me that being able to build on x86_64 is a good example of what a cross-platform system is good for.
Posted Mar 22, 2012 6:50 UTC (Thu)
by ttonino (guest, #4073)
[Link] (1 responses)
What if the distribution wants to run on a DSP based architecture with hard memory and storage limits?
In a sense, it is nearly always cross-compilation unless you compile Gentoo-style on the target box.
I'd say use the best tool for the box. Being able to cross-compile well also means that if ARM128 or PowerSparcWhatever would arrive, it could be used to do those x86-64 builds on.
So cross-compilation really solves tomorrow's problems too, if it means 'compile on the fastest system' and not 'compile on x86-64'.
Posted Mar 23, 2012 20:11 UTC (Fri)
by filipjoelsson (guest, #2622)
[Link]
I can see how the package/build system could get in your way if you're not constantly refining it (as Gentoo has a natural incentive to do). However, distcc goes out of its way to make these things easy, and I really do wonder if it has to be all that hard. I mean, the build can run on an ARM-box, which may share some of the build with an amd64-box by way of distcc. The way you'd achieve it is by putting the path with cc linked to distcc before the cc->gcc in the PATH. For builds that can't be distributed, just use the full path.
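The PATH arrangement described here is simple to script; below is a minimal sketch (directory and binary paths are illustrative assumptions) relying on distcc's masquerade mode:

    # Sketch of the PATH trick described above; paths are illustrative.
    import os

    shim = os.path.expanduser("~/distcc-shim")
    os.makedirs(shim, exist_ok=True)
    cc_link = os.path.join(shim, "cc")
    if not os.path.lexists(cc_link):
        # distcc "masquerade" mode: invoked as cc, it finds the real
        # compiler later in PATH and farms the compile out to the cluster.
        os.symlink("/usr/bin/distcc", cc_link)

    env = dict(os.environ, PATH=shim + os.pathsep + os.environ["PATH"])
    # Distributable compiles now go through distcc; anything invoking the
    # compiler by full path still gets the plain local compiler.
    # subprocess.run(["make", "-j64"], env=env, check=True)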
Posted Mar 22, 2012 10:56 UTC (Thu)
by slashdot (guest, #22014)
[Link] (12 responses)
First of all, you are supposed to be releasing good source code as well as binaries, and if cross compilation is so broken, you are supposed to fix it.
Second, even if you are totally lazy and don't want to fix it, YOU CAN WORK AROUND IT.
Again, simply build packages in a full ARM system (either native or running on x86 with qemu-arm), and then replace gcc, as and ld either with a distcc (or other remote) client, or if running in qemu, with native cross-compilers.
Of course, this should only be done if a package fails to build with normal cross compilation.
The idea of running a compile farm on an embedded architecture like ARM is just insane and idiotic.
I mean, if the next architecture you want to support only comes inside a dishwasher, do you purchase a cluster of dishwashers to build the distribution?
Posted Mar 22, 2012 13:14 UTC (Thu)
by khim (subscriber, #9252)
[Link] (11 responses)
> First of all, you are supposed to be releasing good source code as well as binaries, and if cross compilation is so broken, you are supposed to fix it.

Citation is needed really badly. I see no such requirements in GPLv2, GPLv3 or any other sane license. And I, for one, just flat out refuse to spend my time doing useless work. Sometimes cross-compilation makes sense (for example if I build a package for Arduino), but often it's just easier to use a native build.

> Again, simply build packages in a full ARM system (either native or running on x86 with qemu-arm), and then replace gcc, as and ld either with a distcc (or other remote) client, or if running in qemu, with native cross-compilers.

This may break native compilation instead. Sometimes it's not a big deal (for example some ARM packages here can not be built on ARM because the linker needs >4GB of address space), but presumably they want to keep this possibility - and then it must be tested.

> The idea of running a compile farm on an embedded architecture like ARM is just insane and idiotic.

Sure. As long as ARM remains “an embedded architecture”. But in this case it makes little sense to promote it to “primary architecture” status. The whole tempest started as an admission that ARM is no longer just an embedded architecture. Well, if that's true then it should be capable enough to build itself.
Posted Mar 22, 2012 13:50 UTC (Thu)
by slashdot (guest, #22014)
[Link] (10 responses)
It's not a legal requirement, but having a non-broken build system (which includes cross build capability) is a part of releasing open source quality software.
> This may break native compilation instead
Then test native compilation too once in a release cycle.
> Sure. As long as ARM remains “an embedded architecture”. But in this case it makes little sense to promote it to “primary architecture” status
Embedded architectures can be extremely important, since everyone uses a cell phone for instance; that doesn't mean it's a good idea to use them as build farm servers.
Maybe ARM servers will be viable eventually, but definitely not now.
For instance according to Wikipedia, Cortex A15 has 1/3 the Dhrystone IPC of Core i7 2600k, which isn't a good sign, although it could perhaps still win on system price/performance (kind of unlikely though).
Posted Mar 22, 2012 15:53 UTC (Thu)
by pboddie (guest, #50784)
[Link] (9 responses)
And this is a concern right now for anyone doing development for things like mobile phones. The attitude amongst the people who are supposedly most active (and best funded) has apparently mostly been to ignore the need for a decent cross-compilation workflow and instead either claim that running the compiler on a phone is "not that bad" or use second-rate workarounds like qemu, wasting an absurd amount of CPU time and energy emulating a native compiler and toolchain.

It should be completely possible to cross-build a distribution: very little code is actually in an architecture-specific machine language, and the build process should be using portable languages as well. I have high hopes that stuff like multiarch will help to work around the issues with tools and their liking for specific, immutable filesystem paths. Claims that cross-building cannot be done for a distribution (various Debian-related efforts seem to undermine such claims) shouldn't be an excuse for not doing anything about it.

It has been over two decades since ARM had any kind of performance advantage over mainstream architectures. Although you can certainly go and buy a bunch of ARM-based devices to do parallel native builds, and although the performance per core of ARM-based devices is improving, it's wasteful and absurd to suggest that people would go and buy a cluster of modestly performing gadgets when the machine on their desk/lap/rack could race through the process in comparatively little time.

To label anyone as clueless may be rude, but it is fair to state that the situation is ridiculous.
Posted Mar 22, 2012 18:05 UTC (Thu)
by wookey (guest, #5501)
[Link] (5 responses)
Debian has of course been doing this (native building on build farms for all arches) for 15 years. The distro isn't broken because of it; in fact, code quality is dramatically improved as a result, but yes, security updates do take longer. And yes, you have to do some things differently (like not doing regular complete rebuilds on all arches).

Cross-building will never be as reliable as native-building. You can't run tests*, and for some things at least you can't ask the machine: you have to ask some config, which is more likely to get stale than the machine. That's _why_ it's good CS practice to run configure- and build-time tests to check things, which is why builds increasingly do this. Calling it 'broken' is hopelessly simplistic. This trend does annoy the hell out of cross-compiling peeps.
On the other hand cross-building remains useful even as ARM hardware gets faster, and it could work a whole heap better than it does now (in Debian/Ubuntu). Multiarch is indeed a big part of making that reliable, but there will always be tradeoffs, as anyone who's actually done some will be able to tell you.
If anyone _is_ interested in making ARM cross-building work better in the multi-arch context, then do please look at my multiarch cross-autobuilder and fix things (as you'll see, there is plenty to fix right now; the state could reasonably be summarised as 'mostly broken'; it's particularly bad today as there is version skew on libc, so almost no cross-build-deps can be installed):

http://people.linaro.org/~wookey/buildd/precise/sbuild-ma...
Of course that's just Ubuntu (Debian unstable will be added shortly now that multiarch dpkg has finally arrived). Fedora is not (yet?/ever?) using multiarch so would need to use a different cross-build mechanism anyway, such as the Open Build System or Scratchbox models ('fake-native' building using emulation and native back-end tools such as the compiler), or classical/sysroot mechanisms.
Personally I think it's important that cross-building works as well as it reasonably can in a distro, for the same reasons that it's good for a distro to build properly on lots of architectures. It improves code quality, roots out things that are just plain wrong, and makes things like bootstrapping new ports and rebuilding for optimisations _much_ easier.
But suggesting that a whole distro should be built this way does not make much sense outside of some relatively small use-cases (e.g. arches where you have no choice such as AVR, and subset rebuilds for speed reasons, and for build-time configurability of the sort that Yocto/OE/buildroot/openbricks/ptxdist take advantage of). Native-building must be considered canonical for large binary-package distros, and cross-building will remain subsidiary to that.
All IMHO of course, but I claim to have some clue.
* You might be able to run tests via qemu, but that may not be available for the target arch, and it may not give the same results.
Posted Mar 22, 2012 19:30 UTC (Thu)
by jcm (subscriber, #18262)
[Link]
Posted Mar 22, 2012 22:35 UTC (Thu)
by pboddie (guest, #50784)
[Link] (3 responses)
I don't disagree with most of what you've written, but I will dispute the following:

> Cross-building will never be as reliable as native-building. You can't run tests*, and for some things at least you can't ask the machine: you have to ask some config, which is more likely to get stale than the machine.

There are configuration-level tests, which I accept are difficult to manage unless you're building natively, and there are validation-level tests, which are actually needed to show that the executables really do work on the target hardware; but I see no need to punctuate the build process with the latter and permeate the process with native executable dependencies. It's a bit like the multi-stage aspect of things like debootstrap: a certain amount of the work can be done without depending on the target environment, and then stuff that actually requires the target environment can be done there.

Where ample resources are available to do the heavy lifting of compilation on one architecture, it's wasteful to disregard those and to put the burden on the target architecture. I look forward to the point where people try to build PyPy for ARM - that's already a very heavy (and not parallelised) exercise on x86(-64) requiring a fast CPU and lots of RAM - so the case for cross-compilation isn't a curious historical anomaly just yet.
Posted Mar 23, 2012 1:02 UTC (Fri)
by wookey (guest, #5501)
[Link] (1 responses)
We probably could fix this - I must admit I've not really looked at it in any detail. Maybe it's not even too difficult?
Posted Mar 23, 2012 5:30 UTC (Fri)
by khim (subscriber, #9252)
[Link]
It's not that difficult to fix, but it's a PITA to keep fixed - unless you are doing a cross-compilation-geared project like Android (in which case on-target compilation bitrots instead).
Posted Apr 4, 2012 20:09 UTC (Wed)
by cmsj (guest, #55014)
[Link]
https://launchpad.net/ubuntu/+source/pypy/1.8+dfsg-2/+bui...
Posted Mar 23, 2012 12:16 UTC (Fri)
by nhippi (subscriber, #34640)
[Link] (2 responses)
The main target of cross-building is developers, who want to run their edit - build - run - debug cycle as fast as possible.
For building a distro, native building enables the eat-our-own-dogfood testing of all the build tools. It is not just the testsuites, but everything that is run during builds.
Posted Mar 23, 2012 23:53 UTC (Fri)
by dlang (guest, #313)
[Link] (1 responses)
This makes me doubt the 'eat your own dogfood' justification. There is some of that, but it's a rule they break in so many other cases (because it's so much work to compile everything).
Posted Mar 24, 2012 0:07 UTC (Sat)
by rahulsundaram (subscriber, #21946)
[Link]
Posted Mar 22, 2012 8:42 UTC (Thu)
by rvfh (guest, #31018)
[Link]
We only cross-compile the kernel for development, but then native-compile it for the releases.
What is more, given the performance of the new CPUs (for example the Cortex-A15 you can find in TI's OMAP5 and Samsung's Exynos 5250), cross-compiling is indeed yesterday's problem.
Posted Mar 22, 2012 10:04 UTC (Thu)
by mpr22 (subscriber, #60784)
[Link] (1 responses)
Out of interest, what's "broken" about writing the parsers that turn your human-readable data into the deliverable binaries in lex and yacc instead of $runtime_parsed_language (and would you care to explain it to the Nethack Dev Team)?
Posted Mar 22, 2012 14:05 UTC (Thu)
by Yorick (guest, #19241)
[Link]
Perhaps most of these obstacles can be overcome by interposing an emulator into the build system for the execution of target code, but even with a fully-functional emulator, such an undertaking requires substantial effort and expertise for every package. And running the entire build inside an emulator is likely to be no faster than using native hardware.
I'm not fond of autoconf and its associated tools, but mainly because of their cruftiness (shell scripts, m4 macros) and how they seem to be designed for solving yesterday's problems. A modern software configuration and build system that makes it easy to do cross-buildable packages would be useful.
Posted Mar 22, 2012 13:10 UTC (Thu)
by juliank (guest, #45896)
[Link] (1 responses)
Running build-time tests is surely not broken, but good practice. And using qemu to run them is broken, as the emulation is not good enough and may actually run more programs than native ARM hardware would (i.e. unaligned memory accesses are not caught in qemu). Thus, tests could run fine on the emulated build system while the code fails to run on the native system.
Posted Mar 23, 2012 16:29 UTC (Fri)
by BenHutchings (subscriber, #37955)
[Link]
Posted Mar 22, 2012 13:16 UTC (Thu)
by hanwen (subscriber, #4329)
[Link] (10 responses)
I've written a lot of code to make common linux packages cross-compile, and I think the fedora team is completely right to avoid it.
Packages that build and then execute programs during their own build are broken for cross-compilation, but unfortunately, they are also exceedingly common. Any package that is some sort of platform needs to read its own platform definitions/programs, either to prepare the platform or to document it. Examples: ghostscript, python (and all other interpreters), every compiler (look at the horrendous three-phase build of GCC), document processing (e.g. LaTeX), programs that work with interface definitions (like protocol buffers and IDL files). You can run all these binaries on an emulator, but that will hardly be faster than running on the target system to start with.
Getting packages to work for cross compiling is a painful process of infinite recompiles, where you have to figure out which parameters (e.g. the ones from autoconf configure) should come from the target, and which ones from the host, then the same for the object files. The root cause of the problem is the make/autoconf build tools, which work on arbitrary files and arbitrary variables, so the distinction between host and target is not made explicit.
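One way to make that host/target distinction concrete is the autoconf-style compile-only probe: ask the cross compiler about the target rather than executing target code. A minimal sketch, assuming a hypothetical arm-linux-gnueabi-gcc cross compiler:

    # Sketch: probe a target property without running target code, in the
    # spirit of autoconf's compile-only checks.
    import subprocess

    def target_sizeof_long(cc="arm-linux-gnueabi-gcc"):
        # A negative array size is a compile error, so compilation
        # succeeds only for the correct value of sizeof(long) -- no
        # target binary ever needs to run.
        probe = "int x[(int)sizeof(long) == %d ? 1 : -1];"
        for n in (4, 8):
            r = subprocess.run(
                [cc, "-c", "-xc", "-", "-o", "/dev/null"],
                input=(probe % n).encode(),
                stderr=subprocess.DEVNULL)
            if r.returncode == 0:
                return n
        raise RuntimeError("could not determine target sizeof(long)")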
Posted Mar 22, 2012 13:36 UTC (Thu)
by dlang (guest, #313)
[Link] (6 responses)
That being said, I am amazed that anything more than a one-person distro isn't using distcc or similar to spread the compile load over a farm of machines.
Getting lots of ARM machines for this sort of thing is pretty easy, and power-wise it is far more efficient than any x86 machine. People make off-the-shelf boxes which have dozens of ARM SoC systems plugged into them. Individually they are poor, but as a compile farm they would be very efficient (the final serialized link step would still be a bottleneck, but that's a small part of the overall CPU time).
Posted Mar 22, 2012 15:10 UTC (Thu)
by rwmj (subscriber, #5474)
[Link] (5 responses)
The problem is that you get hit by Amdahl's law: single builds are simply not very parallelizable. Recursive Makefiles have to be run sequentially. Even a large project may only have dozens of C files, but to really exploit multicore ARM you need hundreds of parallel tasks. Tests have to run sequentially (at least, they do when using automake). There's a fixed "top and tail" overhead of unpacking the tarball and constructing the final package.
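A back-of-the-envelope Amdahl's-law calculation illustrates the ceiling; the 20% serial fraction below is a made-up number for illustration, not a measurement from the thread:

    # Amdahl's law: speedup = 1 / (s + (1 - s) / N) for serial fraction s
    # and N cores.
    def amdahl_speedup(serial_fraction, cores):
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

    for cores in (4, 16, 64, 256):
        print(cores, round(amdahl_speedup(0.20, cores), 2))
    # -> 2.5x, 4.0x, 4.71x, 4.92x: if even 20% of a package build
    #    (configure, link, packaging) is serial, no number of cores
    #    pushes a single build past a 5x speedup.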
Posted Mar 22, 2012 15:42 UTC (Thu)
by dlang (guest, #313)
[Link]
As for the "top and tail" overhead, that's one place where I would say to use an amd64 machine: use it to unpack the tarball onto a network-accessible drive and then package up the result. Ideally you do this on the machine providing the network-accessible drive so that it's all local I/O.
This isn't going to scale linearly with the number of machines for any one package build, but there are a LOT of packages that need to be built, so overall you should be able to keep dozens, if not hundreds of cores busy.
Posted Mar 22, 2012 19:33 UTC (Thu)
by nix (subscriber, #2304)
[Link]
Posted Mar 25, 2012 3:33 UTC (Sun)
by ndye (guest, #9947)
[Link] (1 responses)
With horizontal scaling defining the future, should we consider an archive format with a random-access catalog (like ZIP) to replace linear tar?
(Just an idea.)
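The contrast is easy to demonstrate with Python's standard library; the archive and member names below are hypothetical:

    import tarfile, zipfile

    # tar: member headers are interleaved with file data, so locating one
    # member means walking the stream from the beginning.
    with tarfile.open("source.tar") as t:
        data = t.extractfile(t.getmember("pkg/Makefile")).read()

    # zip: the central directory at the end of the archive maps names to
    # offsets, so a single member can be read with a couple of seeks.
    with zipfile.ZipFile("source.zip") as z:
        data = z.read("pkg/Makefile")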
Posted Mar 26, 2012 4:19 UTC (Mon)
by mathstuf (subscriber, #69389)
[Link]
Posted Apr 4, 2012 19:53 UTC (Wed)
by cmsj (guest, #55014)
[Link]
(which is to say that all of the currently available ARM hardware is extremely unreliable under continuous duress, and none of the hardware is built for low friction remote management. This will change when ARM servers are a real thing, but for now I welcome the work being done by Linaro folks to enable Ubuntu building ARM packages with qemu!)
Posted Mar 22, 2012 15:34 UTC (Thu)
by slashdot (guest, #22014)
[Link] (1 responses)
Posted Mar 22, 2012 20:42 UTC (Thu)
by oak (guest, #2786)
[Link]
Scratchbox2 is designed to deal also with that:

http://maemo.gitorious.org/scratchbox2

It uses LD_PRELOAD and other techniques to map file accesses to host binaries, runs cross-compiled code through Qemu (again, with path mapping) and so on. It's used by Tizen, MeeGo's Mer successor, was available for building Maemo stuff, and is packaged in Debian (since Lenny) & Ubuntu (since Hardy).
Posted Mar 22, 2012 16:02 UTC (Thu)
by pboddie (guest, #50784)
[Link]
There's actually hardly any reason to do what the Python build process does, which is to run the built executable in order to perform a bunch of tasks that could in many cases be done by a suitable host-native executable: compiling .py files to .pyc files merely demands an executable supporting the same Python version, not the specific executable to be run in the target environment; copying files into a particular location does not depend on executing ARM code just because the target device happens to use an ARM CPU.
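As a minimal sketch of that point (the rootfs path is hypothetical): byte-compiling needs only a host interpreter of the matching version, because .pyc files are architecture-independent bytecode:

    # Run on the fast x86-64 build host; the .pyc files produced are
    # architecture-independent (version-specific bytecode, not machine
    # code) and can be shipped to the ARM target unchanged.
    import compileall

    compileall.compile_dir("build/arm-rootfs/usr/lib/python2.7", quiet=1)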
Posted Mar 22, 2012 17:24 UTC (Thu)
by wookey (guest, #5501)
[Link] (2 responses)

Well, of course all architectures must be cross compiled on x86-64!

Everyone else seems to have taken this comment seriously, and carefully rebutted it. I assumed it to be obvious satire. But of course if satire is correctly pitched it can be quite hard to tell if it is or not...
Posted Mar 22, 2012 19:34 UTC (Thu)
by jcm (subscriber, #18262)
[Link]
Jon.
Posted Mar 22, 2012 23:39 UTC (Thu)
by giraffedata (guest, #1954)
[Link]
I would call that incorrectly pitched. Satire is supposed to make a point and if readers don't recognize it as satire, it doesn't. In fact, it does worse than say nothing at all.
And it's a well known fact that sarcasm doesn't work in written discussions like this one. Seasoned participants know not to try it.
My personal policy is always to assume a person in a forum like this said what he meant, even if I have a strong suspicion it was meant sarcastically. It's more confusing, not to mention disrespectful, to do otherwise.
Posted Mar 22, 2012 3:25 UTC (Thu)
by jcm (subscriber, #18262)
[Link]
Posted Mar 22, 2012 11:05 UTC (Thu)
by etienne (guest, #25256)
[Link] (3 responses)

Something like what we have in ia32/amd64 PCs?
Posted Mar 22, 2012 19:12 UTC (Thu)
by jmorris42 (guest, #2203)
[Link] (2 responses)
Nah, I'll tell ya what THE problem is. ARM isn't one arch and no current shipping hardware makes it close to easy to install a new OS. When the Pi ships that counter will increment to one but it is very pitiful and suitable only for very limited tasks.
That first point is important. Ubuntu won't run on the Pi because it doesn't support ARMs that old. But the problem is worse. The word ARM encompasses several related arches, some of which even run big endian. Some have hardware float, some don't. It makes the variations in x86 between i386 and current seem somewhat manageable.
Now we take this multitude of similar but incompatible processors and marry them to an equally bewildering array of memory management, DMA, and interrupt controllers in the various SoC solutions ARMs are almost always packaged into. Add in binary blobs to get video, and the CPU playing second banana behind another controller in charge of DRM with its own blob, none of which can be distributed in Fedora and all of which are 100% required to have a bootable computer.
Then finally take these SoC chips and stick em in a bewildering array of phones, tablets, wall bricks, NAS boxes, WiFi routers, servers, netbooks, whatever that almost all boot in different, mostly undocumented ways and over half employ DRM to outright prevent loading an alternate OS. Where does Anaconda fit into this picture? Or for that matter Fedora itself?
That is why it won't be promoted to primary arch anytime soon. Before that happens a couple of things have to happen that are outside Fedora or RedHat's control.
First there needs to be hardware available suitable for running it. By suitable I mean hardware that is capable enough to run Fedora in both the server and desktop roles. Pi is just too little in the CPU and RAM departments. Imagine Firefox 11 hauling itself onto a GNOME 3 desktop hosted on a Pi. Now imagine clicking a link to a .doc file and firing up OO.o and the horrific swapfest to an SD card that would trigger.
Second, that hardware needs to be designed to either allow the end user to replace the OS or come preloaded. Fedora, with its lifespan more similar to an insect's than a mammal's, isn't likely to be picked by an OEM for a preload. Sorry, just stating facts here. So we are left with a major OEM making a capable device with an easy way to load.
Third, that 'easy' way to load an OS needs to be standardized enough that Fedora won't need a separate OS download and set of install instructions to create, test and maintain for each vendor (or worse, each product).
Until all three of those things happen you can't promote ARM to primary, because a random developer CAN'T be expected to test their software on Fedora running on ARM. You can't use what you can't buy, and developers will rightly resist/ignore any directive otherwise. Exotic developer boards that cost more than a whole x86_64 developer's station do not count.
And yes, the build time problem is a real issue as well. The fastest and hottest ARM currently available is running in Intel Atom/VIA Epia territory, with 1GB the maximum RAM load you can readily obtain. Try building OO.o, kde-base, or any other C++ horror on that and get back to me.
Posted Mar 22, 2012 20:08 UTC (Thu)
by oak (guest, #2786)
[Link]
GCC 4.7 news just mentioned managing to get 64-bit Firefox LTO link time memory usage down to 3GB from 8GB... Building "C++ horrors" can demand surprising amounts of RAM.
Posted Mar 23, 2012 10:48 UTC (Fri)
by etienne (guest, #25256)
[Link]
I think that was what happened during the 8086/68000 processor war: plenty of hardware that was ten times more powerful on paper, but not backward compatible.

Until a big company described what they supported, imposed standards, and described what they required for the next generation of their OS. And this big company still does it today, even if those requirements do not seem to be available without an NDA.
Posted Mar 22, 2012 14:02 UTC (Thu)
by pjones (subscriber, #31722)
[Link] (2 responses)

While I have commented on this thread, and hopefully meaningfully, this quote, and the referenced post, are from the inimitable Peter Robinson.
Posted Mar 22, 2012 14:21 UTC (Thu)
by jake (editor, #205)
[Link] (1 responses)
oh my, that's a mistake I shouldn't have made, sorry to both of you, fixed now, thanks!
jake
Posted Mar 22, 2012 14:26 UTC (Thu)
by pjones (subscriber, #31722)
[Link]
Posted Mar 22, 2012 23:44 UTC (Thu)
by giraffedata (guest, #1954)
[Link] (6 responses)

What is included in the 2 hour / 26 hour build? Is this building every package from source?
Posted Mar 23, 2012 1:17 UTC (Fri)
by mgedmin (subscriber, #34497)
[Link] (5 responses)
Posted Mar 23, 2012 5:07 UTC (Fri)
by jcm (subscriber, #18262)
[Link] (4 responses)
Posted Mar 23, 2012 10:59 UTC (Fri)
by etienne (guest, #25256)
[Link] (1 responses)
With the ARM instruction set, unless using Thumb, each instruction is coded in 32 bits, so 256 assembly instructions weigh 1 Kbyte. You will need a lot more code cache than an ia32 processor to compete, probably around twice the amount.
Posted Mar 24, 2012 5:19 UTC (Sat)
by BenHutchings (subscriber, #37955)
[Link]
Posted Mar 25, 2012 0:25 UTC (Sun)
by jzbiciak (guest, #5246)
[Link] (1 responses)
I wonder what the compile time might look like on this beast when it comes out.
(Full disclosure: I work on the team making that chip, so mine is more than just a passing interest.)
Posted Mar 28, 2012 12:29 UTC (Wed)
by stevem (subscriber, #1512)
[Link]
Seriously, we're always looking for donations of newer/faster hardware for building and testing with.
Posted Apr 4, 2012 20:08 UTC (Wed)
by cmsj (guest, #55014)
[Link]
https://launchpad.net/ubuntu/+source/gcc-4.6/4.6.3-1ubunt...