
GitHub?

Posted May 17, 2017 17:51 UTC (Wed) by mathstuf (subscriber, #69389)
In reply to: GitHub? by Cyberax
Parent article: A proposal to move GNOME to GitLab

Yeah, but we still need macOS tested too :/ . I also think those GPUs tend to be homogeneous, which makes testing a wider range of hardware less viable. If we wanted to go full-cloud for testing, it'd be really expensive (either keeping persistent machines around to take advantage of incremental builds, or doing full builds every time), and I think we'd have to manage things across 3 or 4 providers. Just having the hardware local makes build management uniform, even if maintenance is more of a burden: all of the builder descriptions live in one place and use a uniform description "language", rather than some being Docker containers, others being VM images, plus whatever one uses for provisioning macOS testers.

And one project supports platforms that will never be in the cloud (VS 2008, macOS 10.7 (or so?), HP-UX, AIX, Solaris, etc.), so we're still back to some kind of local test infrastructure management solution.



GitHub?

Posted May 17, 2017 18:50 UTC (Wed) by excors (subscriber, #95769) [Link] (2 responses)

You could have short-lived VMs with persistent storage, so you can still do incremental builds despite starting a fresh VM each time. Or use separate machines for building and for running the tests. EC2's on-demand GPU instances cost ~50%-100% more than reserved ones per hour, so on-demand works out cheaper if you're only using the instance <50% of the time (after rounding up each use to an integer number of hours).

("Cheaper" is relative of course, it looks like EC2's cheapest modern one (with half of a GRID K520) is around $3K/year reserved, and the ones with more compute power cost more, so probably not worthwhile if you could get away with a cheap consumer GPU and don't need any of the other cloud features.)

GitHub?

Posted May 17, 2017 19:18 UTC (Wed) by mathstuf (subscriber, #69389) [Link] (1 responses)

For clean builds, our build/test cycle takes about 20–40 minutes (depending on platform) on machines with 10-core hyperthreaded CPUs for one project; another project takes at least 45 minutes on the same hardware. Incremental builds can be as short as 5 minutes, or 15 minutes for the larger project. The machines cost ~$2500 up front and can run multiple builds at once (depending on the project). I don't know what the electricity costs. There's also a benefit in being able to sit down at a machine to see what's gone wrong on it (helpful when you're dealing with things like rendering differences).
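
As a back-of-the-envelope comparison against the reserved-instance figure above (the three-year amortization is my assumption, and electricity and admin time are ignored because nobody has quantified them):

    # Local ~$2500 builder vs. the ~$3K/year reserved GPU instance above.
    # Amortization period is an assumption, not a figure from the thread.
    LOCAL_MACHINE = 2500.0      # up-front hardware cost
    CLOUD_PER_YEAR = 3000.0     # reserved GPU instance, per year
    YEARS = 3                   # assumed useful life of the local machine

    print(f"local: ~${LOCAL_MACHINE / YEARS:,.0f}/yr (hardware only, over {YEARS} years)")
    print(f"cloud: ~${CLOUD_PER_YEAR:,.0f}/yr per reserved instance")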

GitHub?

Posted May 17, 2017 20:00 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link]

Yes, for GPU it might make sense to do local builds.

BTW, one of our clients uses pre-built containers to run stuff on expensive instances (with 1 TB of RAM). The build is handled on cheap instance types, and only the final containers are run on the expensive instances, which are spun down once the calculation is over.
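
A minimal sketch of that flow, assuming Docker plus boto3 for the instance lifecycle (the registry, image tag, and instance ID below are made-up placeholders, not their actual setup):

    # Build on the cheap instance, push, then briefly run the big instance.
    import subprocess
    import boto3

    IMAGE = "registry.example.com/team/calc:latest"   # hypothetical image
    BIG_INSTANCE = "i-0123456789abcdef0"               # hypothetical 1 TB RAM instance

    # On the cheap build machine:
    subprocess.run(["docker", "build", "-t", IMAGE, "."], check=True)
    subprocess.run(["docker", "push", IMAGE], check=True)

    # Bring the expensive instance up only for the run, then stop it again.
    ec2 = boto3.client("ec2")
    ec2.start_instances(InstanceIds=[BIG_INSTANCE])
    ec2.get_waiter("instance_running").wait(InstanceIds=[BIG_INSTANCE])
    # ... on that instance: docker pull + docker run the pre-built image ...
    ec2.stop_instances(InstanceIds=[BIG_INSTANCE])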

You can also sometimes get GPU instances very cheaply on the EC2 spot market.

