
Puppet fork OpenVox makes first release

The Vox Pupuli project has announced the first release of OpenVox, a "soft-fork" of the Puppet automation framework. The intention to fork was announced in December 2024.

OpenVox 8.11 is functionally equivalent to Puppet and should be a drop-in replacement. Be aware, of course, that even though you can type the same commands, use all the same modules and extensions, and configure the same settings, OpenVox is not yet tested to the same standard that Puppet is. [...]

Please don't use these packages on critical production infrastructures yet, unless you're comfortable with troubleshooting and reporting back on the silly errors we've made while rebranding and rebuilding.




Puppet use today

Posted Jan 22, 2025 17:05 UTC (Wed) by yodermk (subscriber, #3803) [Link] (16 responses)

Anyone here still use Puppet? I haven't since 2017. It's definitely a more pleasant language than Ansible for config management, but Ansible is just so much easier to get started with. Plus, in the cloud, custom images largely replace the need for this kind of thing, and K8s has Helm. I suppose it's still useful in traditional datacenter deployments though.

Puppet use today

Posted Jan 22, 2025 18:18 UTC (Wed) by legoktm (subscriber, #111994) [Link] (1 responses)

Yes, all of Wikimedia's servers rely on it: https://gerrit.wikimedia.org/g/operations/puppet/

Puppet use today

Posted Jan 23, 2025 9:40 UTC (Thu) by yoe (guest, #25743) [Link]

Puppet use today

Posted Jan 22, 2025 22:33 UTC (Wed) by docontra (guest, #153758) [Link]

I have a very small puppet deployment (~10 puppet clients), running since circa 2011-2012, for semi-automatic, long-lived/persistent full-system Ubuntu-based lxc[1] containers. For deployment, I set up the base container, change the uid 1000 password, install the puppet agent and connect it to my server, let it do its thing (mostly installing packages, registering the containers with my nagios/icinga2 instance[2], and creating the local accounts I need), and wrap up by manually configuring LDAP authentication (nowadays via sss). Although it hasn't had anywhere near perfect backwards/forwards/"API" compatibility, it mostly never got in my way, and the language(s) are a breeze to use. Not a fan of the Java-based puppetserver/puppetdb for resource usage, but it's still within reason.
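
For a sense of scale, that kind of setup can boil down to a node manifest of just a few resources. A minimal, hypothetical sketch (the package names, the user, and the monitoring::host defined type are invented placeholders, not the actual code; the exported resource assumes puppetdb and a monitoring server that collects it):

    node 'container42.example.org' {
      # Baseline packages for every container.
      package { ['htop', 'byobu', 'sssd']:
        ensure => installed,
      }

      # Local admin account created before LDAP is wired up.
      user { 'localadmin':
        ensure     => present,
        managehome => true,
        shell      => '/bin/bash',
      }

      # Export a host entry for the monitoring server to collect;
      # 'monitoring::host' stands in for whatever the icinga2 module
      # or a local wrapper actually provides.
      @@monitoring::host { $facts['networking']['fqdn']:
        address => $facts['networking']['ip'],
      }
    }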

Didn't know puppet had discontinued/closed-sourced their community edition; gonna have to migrate these coming weeks.

[1]: Have been trying incus on a different deployment and have loved it so far, next refresh cycle (2026) I'll most likely migrate to that.
[2]: nagios 3 integration was amazing; icinga2 has lots of moving parts and has forced me to use puppetdb, although I admit my monitoring solution for a loooong time has been a well-stocked byobu session...

Puppet use today

Posted Jan 23, 2025 0:53 UTC (Thu) by himi (subscriber, #340) [Link] (7 responses)

My experience is that the various tools in this space (Ansible, Puppet, Chef, etc) are all just as good/bad as each other, in different ways that generally result from the different models that they use. Ansible definitely has a lower barrier to entry than Puppet, and has advantages when it comes to managing clustered systems with interconnected state, but once you get to more complex deployments it doesn't matter what you use - it's a hideously complex problem space, and there's no magic bullet that will make it easy and simple.

The biggest thing is really the module library that's available - yes, you can write your own Ansible/Puppet/etc code to manage your particular deployment of whatever, but almost all of the time it's a hell of a lot faster to use code written by someone else. This means the breadth and quality of the module library, and how well it's maintained and kept up to date, are among the key differentiating factors for the different tools. There are also more intangible things like coolness and "momentum" that influence other factors - Ansible is better known and has more momentum these days than Puppet, so people are more likely to use it if they're starting from scratch, regardless of whether it's an objectively better choice; that feeds back into the maintenance and development time that people put into the module library.

Having made a choice at some point, though, you pretty quickly end up committed to that choice - it's extraordinarily difficult to change the configuration management system you're using without basically rebuilding everything from scratch. Where I work we've been using Puppet to manage our in-house OpenStack deployment for ten years - that's a lot of history and domain-specific knowledge that would need to be replicated if we wanted to move to Ansible, even though the OpenStack project's Ansible code is better maintained than their Puppet code.

Custom images are kind of orthogonal to this, though only kind of - the ad-hoc image build scripts that are used to build the vast majority of custom images (big piles of shell code in most cases) are really just another different attempt at tackling the same problems Puppet and Ansible attempt to solve. The only real advantage is that they're generally more discrete, only tackling a single application rather than a whole system; at the same time, you then require lots of other tooling around those images to pull everything together into a full software stack, at which point you get all the complexity of the various container orchestration systems that are out there. And where do you run your Kubernetes deployment? If it's bare metal, you're probably using Ansible to manage everything underneath Kubernetes . . .

Puppet use today

Posted Jan 23, 2025 11:40 UTC (Thu) by taladar (subscriber, #68407) [Link] (1 responses)

> The biggest thing is really the module library that's available - yes, you can write your own Ansible/Puppet/etc code to manage your particular deployment of whatever, but almost all of the time it's a hell of a lot faster to use code written by someone else.

This is not my experience at all using Puppet. It is usually much faster and easier to write something custom than to attempt to figure out which of the badly written or maintained modules publicly available are suitable for all the platforms we use, will be maintained long enough to be usable, and so on.

The custom solution also tends to be much better in this space, because a large part of config management is hard-coding all the configurable parameters a piece of software allows that are the same across your entire administrative domain, so that you only have to specify the things that actually differ per system - and avoiding specifying those same values multiple times across implementation, monitoring, and other resources.
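
In Puppet terms that usually ends up as a thin site-specific wrapper class. A hedged sketch (the 'someapp' module and the mysite::monitoring::tcp_check defined type are invented names, not real modules):

    # Everything identical across the administrative domain is
    # hard-coded here; only genuinely per-system values are parameters.
    class mysite::someapp (
      String  $listen_address,      # differs for every system
      Integer $listen_port = 8080   # sensible default, overridable
    ) {
      class { 'someapp':
        listen_address => $listen_address,
        listen_port    => $listen_port,
        # organisation-wide policy, the same everywhere:
        log_level      => 'warning',
        admin_group    => 'ops',
      }

      # The same values feed the monitoring resource, so nothing has
      # to be specified twice for implementation and monitoring.
      mysite::monitoring::tcp_check { 'someapp':
        host => $listen_address,
        port => $listen_port,
      }
    }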

Puppet use today

Posted Jan 24, 2025 1:54 UTC (Fri) by himi (subscriber, #340) [Link]

> This is not my experience at all using Puppet. It is usually much faster and easier to write something custom than to attempt to figure out which of the badly written or maintained modules publicly available are suitable for all the platforms we use, will be maintained long enough to be usable, and so on.

That's where my point about the quality of the module library comes in - there are quite a lot of excellent modules out there, and there are projects that develop and maintain their own decent quality modules (a good example being OpenStack), and that can get you a long way very quickly . . . if your use case fits. Though there are also examples of key modules provided by Puppetlabs that are a massive pain in the arse - I keep being bitten by the apt module, for example, since my use case is more complex than that module wants to allow for. I do definitely find that the collection of custom code that's replacing or working around modules keeps growing, as well as the number of cases where I've had to fork upstream modules (generally temporarily, thankfully).

I think it's also a question of how complex your systems are, and where that complexity sits in the stack - if you're deploying OpenStack there's a mess of complexity in configuring all the various services, but for most use cases only a fairly small part of that configuration needs anything beyond the default value, and the upstream puppet modules make it really easy to just configure those important bits while setting defaults everywhere else. Custom coding all of that is a lot more work than using the upstream modules - I've done it from scratch, it's a lot of scut work but perfectly doable if you have the patience. Configuring other types of software stack would have different trade-offs that weigh more towards custom coding - e.g. I've found that wrangling systemd configuration is easier with custom code than with upstream modules, because so much of that is a matter of drop-in files and so forth.
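
For the systemd case in particular, the custom-code version often amounts to little more than a file resource for a drop-in plus a daemon-reload. A minimal sketch (the unit and the override value are arbitrary examples, not a recommendation):

    # Drop-in directory and override for an arbitrary unit.
    file { '/etc/systemd/system/nginx.service.d':
      ensure => directory,
    }

    file { '/etc/systemd/system/nginx.service.d/limits.conf':
      ensure  => file,
      content => "[Service]\nLimitNOFILE=65536\n",
      notify  => Exec['systemd-daemon-reload'],
    }

    # Pick up the new drop-in without restarting the unit itself.
    exec { 'systemd-daemon-reload':
      command     => '/bin/systemctl daemon-reload',
      refreshonly => true,
    }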

Ultimately, though, it gets down to the same core problem: automation of generalised system configuration is a hideously complex and difficult problem space, all real-world attempts to solve it are going to end up being ugly and full of hacks, whether they're hacks you've written yourself or hacks you're reusing from someone else. If anyone thinks their particular solution is perfect I'm pretty confident in saying they're deluding themselves, and probably missing all the nasty edge cases that make this such an unpleasant task . . .

Puppet use today

Posted Jan 23, 2025 15:55 UTC (Thu) by mbunkus (subscriber, #87248) [Link] (4 responses)

> Having made a choice at some point, though, you pretty quickly end up committed to that choice - it's extraordinarily difficult to change the configuration management system you're using without basically rebuilding everything from scratch.

There's no "basically" about it; it's exactly "rebuilding everything from scratch". Over the last 20+ years I've been doing this first with cfengine, then SaltStack & now Ansible, and both migrations required full rewrites. First of all, the configuration languages used are all different. Next, the way things are organized is radically different. Lastly, the templating languages used are different.

For example, SaltStack & Ansible both (can) use YAML as the format for their configuration. However, in SaltStack _the whole YAML file is run through the Jinja templating language_ whereas in Ansible Jinja is only used _in certain values of data structures_ (e.g. in the values of hashes, but not in the keys). This alone means that loops in SaltStack are loops in the template, generating multiple instances of whole YAML blocks, whereas in Ansible loops are part of the task description DSL. It's really easy to do things such as data modification in SaltStack as you have the whole expressivity of Jinja at your fingertips, whereas in Ansible you have to rely on built-in support for loops or use really wacky things such as json_query().
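
Roughly, the difference looks like this (the package list is invented for illustration; the Salt snippet is a state file that Jinja renders before the YAML is parsed, the Ansible snippet is an ordinary task):

    # SaltStack: the whole .sls file goes through Jinja first, so the
    # loop stamps out one complete YAML block per package.
    {% for pkg in ['nginx', 'postgresql', 'redis'] %}
    install_{{ pkg }}:
      pkg.installed:
        - name: {{ pkg }}
    {% endfor %}

    # Ansible: Jinja only appears inside values; iteration is part of
    # the task DSL via the "loop" keyword.
    - name: Install base packages
      ansible.builtin.package:
        name: "{{ item }}"
        state: present
      loop:
        - nginx
        - postgresql
        - redis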

What you can use the old installations for is as a kind of reminder of what to do for each topic: which files to roll out, which packages to install in which circumstances, etc.

For us the journey was well worth it, mostly due to three facts:

1. cfengine's execution model & configuration language are so archaic that migrating to any other modern system is worth it for that reason alone.
2. The amount of content out there for Ansible vastly outweighs that for SaltStack: documentation, pre-made roles/collections, third-party modules etc.
3. The agent-less way of doing things in Ansible suits us better than SaltStack's/Puppet's model of an installed agent: as an MSP we share admin responsibilities with multiple customers, several of whom use Ansible as well (on the same hosts we use Ansible on; it's just a matter of communicating who configures what), and that wouldn't work with systems that employ agents tied to certain control nodes.

If there's something I miss about SaltStack, though, it's its incredible speed. Even if you try things in Ansible such as using Mitogen (which breaks often enough), you can still end up with playbook runtimes that are 10× as long as the corresponding execution in SaltStack, or worse.

Puppet use today

Posted Jan 24, 2025 2:26 UTC (Fri) by himi (subscriber, #340) [Link]

> There's no "basically" about it; it's exactly "rebuilding everything from scratch". Over the last 20+ years I've been doing this first with cfengine, then SaltStack & now Ansible, and both migrations required full rewrites. First of all, the configuration languages used are all different. Next, the way things are organized is radically different. Lastly, the templating languages used are different.

Yeah, "basically" there is really a weasel word; it's very much rebuilding everything. The main reason for not being more emphatic is that there's a definite difference between replicating the functionality of an existing system and building a whole new system from scratch - a whole lot of decisions have already been made, and it's a matter of figuring out how to implement those decisions via the new tooling.

Though that can actually make it harder, since things like module libraries are generally opinionated (even if they're not being explicit about it they have defaults and make assumptions about how things are done), and if there's a mismatch you can end up having to fight the new tooling to replicate what was the default behaviour of the old tooling.

I think the big takeaway is this: systems administration is a mug's game, automating most of it is the only sane thing to do; automating systems administration is /also/ a mug's game, though, so the only sane thing to do is knock off early and enjoy a nice relaxing drink . . .

Puppet use today

Posted Jan 28, 2025 10:01 UTC (Tue) by taladar (subscriber, #68407) [Link] (2 responses)

In my experience sharing admin responsibilities always leads to trouble. The problem is complex enough without multiple organizations, each with potentially (and unknowingly) incompatible policies and workflows, trying to share it.

Only managing part of a system also removes a large advantage of config management: being able to reliably create another system that works in exactly the same way, and knowing that everyone using the config management tool will produce identical results in all the small details that otherwise diverge when you tell five different admins to perform the same task.

Puppet use today

Posted Jan 28, 2025 10:38 UTC (Tue) by mbunkus (subscriber, #87248) [Link]

That's a pretty purist view that simply doesn't apply to my own situation here, with our customers. I'm not saying it's ideal, but monetary constraints require compromises.

Note that I'm not advocating for this type of setup. My earlier statement only said that using agent-less approaches such as Ansible's makes this type of cooperation possible, whereas agent-based approaches such as SaltStack's simply don't work.

Puppet use today

Posted Jan 29, 2025 0:13 UTC (Wed) by mathstuf (subscriber, #69389) [Link]

Eh. Sysadmin here deploys the "base" machine (OS, monitoring, vulnerability scanning, etc.) and then hands it over to us for our CI deployment configuration layer. It's worked well so far (probably mostly because the former is once-and-done for the most part).

Puppet use today

Posted Jan 23, 2025 11:52 UTC (Thu) by taladar (subscriber, #68407) [Link]

We still use Puppet for most of our systems (100+ Linux servers, mostly Debian, some RHEL/CentOS).

Having used Helm and K8s for a bit in some projects, I sort of like K8s even though it has some flaws, but I would rather gouge out my eyes than debug Helm templates at the scale needed to replace all our Puppet deployments. K8s also seems quite unsuitable for relatively small deployments, where we currently use something like the smallest (or slightly larger) cloud servers Hetzner offers for each production and development system.

Personally I would change quite a few things about Puppet, almost none of which are any better in K8s: mainly stronger static checks, and better ways to use a single source of truth for data that is used on multiple systems (e.g. a system, the server setting up DNS for it, the server setting up a backup storage location for it, the monitoring for it,...) and to coordinate the timing of those changes across the various systems.

Speaking of timing, K8s is also even slower than Puppet to deploy, especially if you take the image build into account; but even ignoring that, restarting containers tends to take on the order of 10 times as long as rebooting one of the cloud servers, and probably 2-3 times as long as even the longer Puppet runs on those servers.

Puppet use today

Posted Jan 23, 2025 12:29 UTC (Thu) by zdzichu (guest, #17118) [Link]

There are environments where direct login to root (or sudo-to-root capable) accounts is not possible. Admin logins are possible, but must be done through session-recording proxies with MFA. This tends to work poorly with Ansible's model.

On the other hand, such environments are fine with an agent-based system, where agents connect to a central mothership to get instructions on what to do. Like Puppet.
Periodic, automatic config-drift remediation is a nice bonus with Puppet, too.

Puppet use today

Posted Jan 26, 2025 6:24 UTC (Sun) by ssmith32 (subscriber, #72404) [Link] (1 responses)

Wasn't sure if you were implying Ansible makes sense in the cloud? Does it?

The last three companies I worked at used Terraform. Before that, the offerings in the cloud were a bit lower-level than nowadays, and it was Salt all the way.

I don't particularly like (or dislike) Terraform, so I'm wondering about the alternatives.

Puppet use today

Posted Jan 26, 2025 10:01 UTC (Sun) by mbunkus (subscriber, #87248) [Link]

We usually use OpenTofu (Open Source fork of Terraform after their license change) for handling the infrastructure level (creating/reconfiguring VMs, DNS entries etc.) & Ansible for the software level (everything after VMs are reachable via ssh), both on premises & in the cloud. We also use Ansible for a lot of switch configuration as there's a lot of third-party support for Ansible from vendors.

Sure, both tools have overlap, but their strengths are in different areas. Use them both where they're strongest.

Puppet use today

Posted Jan 29, 2025 12:37 UTC (Wed) by zigo (subscriber, #96142) [Link]

Yes. All of OpenStack public cloud infrastructure is maintained using Puppet.

working for me

Posted Jan 23, 2025 1:00 UTC (Thu) by robert.cohen@anu.edu.au (subscriber, #6281) [Link]

We have a serverless puppet installation on about 200 boxes running code developed over a decade by multiple people.
We mostly use ansible for creating VMs then installing puppet on them :-)
I tried openvox on a test box. It just worked.

Have to admit I find puppet much easier than ansible, but that's mostly because I know it better. It is a different mindset, and until you grok the puppet way of doing things, you can hit issues where things that seem like they ought to be easy are just about impossible.

The biggest hurdle is the fact that puppet is not designed to react to the current config of a box. It's designed to impose config on a box.

So things like "if this file exists, do this thing" are very clumsy to achieve in puppet, unless you can derive the fact that the file exists either from facter or hiera.
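
The usual workaround is to surface the local state as a fact and branch on that. A hedged sketch (the fact name, marker file, and package are invented for illustration):

    # Hypothetical external fact: an executable dropped into facter's
    # facts.d directory on the node, emitting key=value output, e.g.
    #
    #   #!/bin/sh
    #   if [ -e /etc/legacy-app.conf ]; then
    #     echo 'has_legacy_config=true'
    #   else
    #     echo 'has_legacy_config=false'
    #   fi
    #
    # The manifest then branches on the fact rather than on the file
    # (external facts of this kind arrive as strings, hence 'true'):
    if $facts['has_legacy_config'] == 'true' {
      package { 'legacy-compat':
        ensure => installed,
      }
    }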

