
Home Assistant deprecates the "core" and "supervised" installation modes

Our recent article on Home Assistant observed that the project emphasizes installations using its own Linux distribution or within containers. The project has now made that emphasis rather stronger with this announcement of the deprecation of the "core" and "supervised" installation modes, which allowed Home Assistant to be installed as an ordinary application on a Linux system.

These are advanced installation methods, with only a small percentage of the community opting to use them. If you are using these methods, you can continue to do so (you can even continue to update your system), but in six months time, you will no longer be supported, which I'll explain the impacts of in the next section. References to these installation methods will be removed from our documentation after our next release (2025.6).

Support for 32-bit Arm and x86 architectures has also been deprecated.


Bugger

Posted May 22, 2025 22:04 UTC (Thu) by gerdesj (subscriber, #5446) [Link] (6 responses)

I look after rather a lot of Supervised HAs. This will cause me a lot of work migrating.

Do I start the Openhome Assistant project? Hmmm.

Bugger

Posted May 23, 2025 5:17 UTC (Fri) by pbonzini (subscriber, #60935) [Link] (5 responses)

As I understand it, Supervised will still be there but only as part of HAOS?

Bugger

Posted May 23, 2025 6:12 UTC (Fri) by WolfWings (subscriber, #56790) [Link] (4 responses)

Which still means 'Supervised' installs will need to be re-done entirely, since they're not on HAOS currently or they'd be HAOS installs not 'Supervised' installs.

Bugger

Posted May 23, 2025 6:14 UTC (Fri) by smurf (subscriber, #17840) [Link] (2 responses)

Apparently that's easy enough, just back up your current config and restore it onto HAOS.

Bugger

Posted May 26, 2025 1:58 UTC (Mon) by gerdesj (subscriber, #5446) [Link] (1 response)

"Apparently that's easy enough, just back up your current config and restore it onto HAOS."

That's spot on but misses the point of why I generally prefer Supervised in the first place!

An "elderly" laptop has a battery (a built-in UPS), a few USB interfaces, and is generally capable of being optimised for a bit of power sippin'. You don't have to fire up the GPU unless you want to. With an SSD, you've got a pretty decent platform that can be powerful if needed but also quite modest, power-wise.

OK, that is still valid for HAOS. However, HAOS is a cut-down distro with no frills, so you can't run anything else that isn't already covered by a blessed HA container: an "addon".

I run quite a few out-of-band things, such as SMS daemons, which are so uncool but so useful to my customers and to me.

It's all very well trying to reduce the support load, but I don't need any guarantees; I'd just like some understanding. The last deprecation effort shoved a middle finger at Debian derivatives (Ubuntu). OK, I got it and rolled with it. Fairly major re-installation efforts done.

Now you are telling me I have to do all that again. Well, thank you very much. Is this the last one? It won't even be classed as a breaking change by the project, but it will fuck over quite a few implementations.

I will roll with it all again because I have to but it isn't a good look and I don't see why it is necessary this time.

Bugger

Posted May 26, 2025 2:27 UTC (Mon) by smurf (subscriber, #17840) [Link]

> That's spot on but misses the point of why I generally prefer Supervised in the first place!

Me too, to be honest.

> I will roll with it all again because I have to but it isn't a good look and I don't see why it is necessary this time.

I won't. I'll keep on running it in Supervised mode no matter what. They said it'll continue to work, just not be end-user documented+supported, and I'm taking them at their word.

Bugger

Posted May 23, 2025 6:27 UTC (Fri) by pbonzini (subscriber, #60935) [Link]

The code is out there; you will have to gather the info yourself (and someone will surely do it), and they don't want to hear from you, but it's possible to use it.

That said, Supervised has the advantage of managing add-ons but it's indeed tricky to debug. Containers are much easier to manage if you don't need add-ons.

Well …

Posted May 23, 2025 6:12 UTC (Fri) by smurf (subscriber, #17840) [Link] (9 responses)

I can understand why they don't want to deal with people who botch up their Supervised installation and then cry for help.

On the other hand, I hate blowing additional resources on my HA setup. A separate Raspberry Pi costs money and eats power (this matters if you need to deal with power outages), and a Docker or some-other-VM installation isn't feature complete. Also, it wastes memory and CPU.

Seems like "unsupported but continues to work" is the best we can hope for.

Well …

Posted May 23, 2025 6:25 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link]

The container mode is still supported. It doesn't support automatic installation of add-ons from the store, but that doesn't work reliably in supervised mode either.

Well …

Posted May 23, 2025 7:23 UTC (Fri) by merge (subscriber, #65339) [Link]

Similar situation here, but as I understand it, I can now "buy" convenience/support by setting up a (libvirt) VM and forwarding port 8123. My server is an old laptop, and dedicating one of four CPUs to HA kind of hurts, but I guess I'll prepare for that. AFAIUI that would, feature-wise, be the same as the current "supervised" installation. Am I wrong?

Well …

Posted May 23, 2025 18:20 UTC (Fri) by aphedges (subscriber, #171718) [Link] (6 responses)

> it wastes memory and CPU

How does Docker waste memory and CPU? From my understanding, the overhead is quite low because it's just namespaces. I've never actually seen benchmarks to empirically test that, though.

Or maybe I misunderstood and you were talking about a VM wasting memory and CPU?

Well …

Posted May 24, 2025 15:46 UTC (Sat) by smurf (subscriber, #17840) [Link] (5 responses)

Docker images have their own copy of the C library, the Python interpreter, et al.
It's not quite as bad as a VM which also has its own kernel, but on systems that don't have swap space I want to keep resource usage to a minimum.

Well …

Posted May 26, 2025 1:35 UTC (Mon) by aphedges (subscriber, #171718) [Link] (4 responses)

I don't run applications on tiny systems (I always have at least 2 GB of RAM), but I don't think the duplicated libc and CPython matter much when everything else is taken into account.

Looking on Docker Hub, homeassistant/home-assistant:2025.5 for linux/amd64 has a "compressed size" of 626.41 MB, but a separate copy of the same distro and same Python version, python:3.13.3-alpine3.21, has a "compressed size" of only 15.89 MB.

This matches my experience that most of the size of a Python-based OCI image is taken up by Python libraries, not any of the core runtime or system utilities.

Well …

Posted May 26, 2025 8:27 UTC (Mon) by geert (subscriber, #98403) [Link] (3 responses)

> I don't run applications on tiny systems (I always have least have 2 GBs of RAM)

Do you need 2 GiB of RAM to monitor the house?

Well …

Posted May 26, 2025 8:48 UTC (Mon) by zdzichu (subscriber, #17118) [Link] (1 responses)

That's not what they wrote. 2 GiB is a tiny system; it's the absolute minimum for a usable system. We are a quarter of the way into the 21st century.

Well …

Posted May 26, 2025 22:31 UTC (Mon) by aphedges (subscriber, #171718) [Link]

I agree! I'm obviously not advocating for wasting RAM, but most systems you can purchase now are going to have 2 GB or more. It's just not worth limited developer time trying to shave a couple MB off that usage unless you are working at a large scale, which I doubt anyone is doing with Home Assistant.

The only exception I've seen to having these larger amounts of RAM is more special-purpose hardware like routers, but the OpenWRT One sells for a price comparable to standard consumer routers (at least in the US) and still has 1 GB of RAM.

Well …

Posted May 26, 2025 22:25 UTC (Mon) by aphedges (subscriber, #171718) [Link]

When I built my machine, I ended up deciding on 16 GB of RAM because the cost was only marginally higher than a smaller amount. Plus, I wanted to run other applications on it and not worry about upgrading hardware for years.

I expect that you won't find much in the way of general-purpose compute hardware with less than 2 GB of RAM. I was curious about how much is available on the Raspberry Pi 5, and the smallest version they sell has 2 GB.

An ongoing theme with the developers

Posted May 24, 2025 16:23 UTC (Sat) by donbarry (guest, #10485) [Link] (23 responses)

A very unfortunate but predictable decision. The developers are indifferent to any packaging considerations other than their own: always chasing the latest Python version, deemphasizing (and in some cases removing) YAML alternatives for configuration, becoming less and less an interoperable tool and more a monolithic OS of its own. Soon it'll have a mail client within.

One can certainly speculate that the trimming reflects the economic interests of the company behind it and the desire to keep the client population focused behind its offerings, rather than, say, an "apt-get install homeassistant" world.

Too many corporately-trained agile developers today have forgotten old mantras like "be liberal in what you consume and conservative in what you offer" and its related library restatement to not rely on recent library innovations when you can avoid it but do the innovation in your own product until the outside interfaces are stable and widely deployed in the libraries.

A considerate community development methodology would use conservative and stable APIs to external interfaces and be able to run on at least the current stable Debian (to use an example). But the direction from the top is pretty hostile to such suggestions. Several years ago answering their objections to providing a version of their Android remote client not dependent on Google's proprietary play services led to my being blocked from their Facebook page. (They later added this alternative after enough other protest!)

Still, it's the best alternative in this domain and I'll continue to use it, and to use it without their OS and hated containers.

An ongoing theme with the developers

Posted May 24, 2025 17:49 UTC (Sat) by smurf (subscriber, #17840) [Link] (14 responses)

Yes, it'd be nice if somebody would do the work to make "apt-get install homeassistant" happen. Shouldn't be too much work, to be honest, as the first step (after adding a HASS user) is to create a Python virtualenv. The biggest problem is that it's going to be a year or two out of date on your stable distro. (Unless step two is to download the current version and then auto-update it. At that point you don't need apt-get any more: you just want a comprehensive installation script.)

It'd be even nicer if that could work without a venv, but there are far too many supported integrations for that to happen, given that each one needs its own Python package (with wildly varying actual version specificity) *and* basically no automated PyPI import to Debian.
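The venv route described above can be sketched in a few commands. This is only a sketch: the paths are examples, the `hass` entry point and flags should be checked against the current Home Assistant Core install docs, and the actual install step is left commented because it pulls from PyPI.

```shell
# Create a dedicated virtualenv for Home Assistant; /tmp/hass-demo stands
# in for a real location such as /srv/homeassistant.
python3 -m venv /tmp/hass-demo

# pip (and later hass) live inside the venv, isolated from system Python.
/tmp/hass-demo/bin/pip --version

# The actual install and run steps, commented out here (network access):
# /tmp/hass-demo/bin/pip install homeassistant
# /tmp/hass-demo/bin/hass --config /srv/homeassistant/config
```

This is essentially what the "comprehensive installation script" mentioned above would automate, plus user creation and a systemd unit.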

An ongoing theme with the developers

Posted May 25, 2025 13:25 UTC (Sun) by philh (subscriber, #14797) [Link] (12 responses)

> Yes, it'd be nice if somebody would do the work to make "apt-get install homeassistant" happen. Shouldn't be too much work ...

See: https://wiki.debian.org/Python/HomeAssistant

That page states that there were 675 missing Python packages required when they started the recent push to package Home Assistant. The aim was to finish packaging it this year, and they seem to have already done well over 400, so I guess, strictly speaking, it will not in fact be "too much" effort in the end ;-)

Hopefully that will then give people a place to gather in order to be able to maintain an installable version.

Regarding the out-of date aspect of packaging such a thing, one could maintain a fresh version in `fasttrack.debian.net`, but personally I'd trade something that had proven reliable over a couple of years for whatever this week's latest feature is when it comes to controlling my home when I'm not in the building.

An ongoing theme with the developers

Posted May 27, 2025 17:48 UTC (Tue) by Sesse (subscriber, #53779) [Link] (1 response)

I find it amusing that anything that depends on 600+ Python packages and wants 2GB+ RAM “for a basic setup” describes itself as “very lean”. I mean, what stuff exists in this space that has higher requirements?

An ongoing theme with the developers

Posted May 27, 2025 18:09 UTC (Tue) by smurf (subscriber, #17840) [Link]

It depends on 600 Python packages only if you activate each and every one of the gazillion integrations that come with Home Assistant. Fortunately the typical home uses far fewer than 100 different kinds of device.

The reason you want more than enough RAM isn't that you can't set up a usable system with less; one of my test systems runs perfectly well in a 400 MB VM. The reason is that, if there's even a hint of memory pressure, the system might need to swap something in when you hit a light switch. Few things in a home-automation setup are more annoying to users, especially the not-computer-savvy ones, than random delays.

An ongoing theme with the developers

Posted May 27, 2025 20:36 UTC (Tue) by jdulaney (subscriber, #83672) [Link] (5 responses)

Needing 600 dependencies just means you don't know how to code.

An ongoing theme with the developers

Posted May 27, 2025 20:53 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link]

Real coders have 6000 dependencies!

HA has a rule that integrations should not have any device-interaction logic inside them; they should farm it out to a library. That's where most of the 600 dependencies come from.

This makes it easier to develop device integrations, and forces a clean separation between device handlers and HA core logic.

An ongoing theme with the developers

Posted May 28, 2025 8:00 UTC (Wed) by taladar (subscriber, #68407) [Link]

Counterpoint, having only a few dependencies in a large project means your language makes it unreasonably hard to start a new library so you end up with kitchen sink dependencies like Qt instead of the many independent ones you get in a language with better tooling.

An ongoing theme with the developers

Posted May 28, 2025 8:17 UTC (Wed) by smurf (subscriber, #17840) [Link] (2 responses)

Not in this case.

Those 600 deps mean that HA uses existing and hopefully-tested-on-actual-hardware Python libraries to talk to $THING, which can be stubbed out for unit testing rather easily, instead of re-inventing 200 wheels that can't be tested without the actual hardware.

An ongoing theme with the developers

Posted May 28, 2025 9:54 UTC (Wed) by mathstuf (subscriber, #69389) [Link] (1 responses)

> which can be stubbed out for unit testing rather easily, instead of re-inventing 200 wheels that can't be tested without the actual hardware.

And with hardware, you can test it directly instead of needing an HA installation to drive it.

An ongoing theme with the developers

Posted May 28, 2025 19:13 UTC (Wed) by smurf (subscriber, #17840) [Link]

… and without hardware *and* without an abstraction layer, you can't test anything and are SOL.

You get to guess how many CI runners, or users for that matter, have 100 different kinds of home automation hardware attached to them.

An ongoing theme with the developers

Posted May 29, 2025 10:43 UTC (Thu) by Rigrig (subscriber, #105346) [Link]

> I'd trade something that had proven reliable over a couple of years for whatever this week's latest feature is

Unfortunately "this week's latest feature" is often "talking to devices with this week's firmware update". Even if the manufacturer is decent enough to give you the choice, that still means choosing between "works with x-year old Home Assistant" and "latest (security) fixes".

An ongoing theme with the developers

Posted May 29, 2025 20:06 UTC (Thu) by zigo (subscriber, #96142) [Link] (2 responses)

Hi,

I'm one of the 2 crazy guys that started this effort. Packaging all of these libraries, I found out that some are very well maintained, contain unit and functional tests, and some are really bad, with only a few lines of Python. So quality varies a lot.

What is pushing me to do this packaging work is mostly that I kind of hate the way HomeAssistant is delivered by upstream. I also hate that it's updating every few days: I don't need that, I want my home automation to be STABLE, in the sense that I don't want it to be a constantly moving target. I don't need new features more often than Debian Stable (i.e. once every 2 years is really enough).

I also hate having hundreds of containers and having very poor access to them. I prefer all of these Python things to be installed flat out on my system, together with other things. I also don't want a specialized VM (which is what I'm currently using because ... no other viable choice), and I would like to be able to install other things on my home server than just HomeAssistant.

I also don't want HomeAssistant to use the root of my home web server, or to use a specialized port; I would like it to just share port 443, like every other app in Debian, and just use /homeassistant (or /ha ?), and leave the rest of the namespace for *ME* to decide what to host in my web server/haproxy/whatever. Simple things like having SSL support in HA are just horrible. :(

So far, it takes us (me and the other DD) about 15 to 20 minutes to package a Python library, thanks to highly automated tooling.

I have to admit that I did a lot from July to October, and then did other stuff. I intend to restart the effort this summer, hopefully a bit during DebConf in July. However, help would be highly appreciated: if we get 20 people doing 10 packages each, we're done! Finishing the 200+ remaining packages with only two people WILL take a lot of time. Then, once we're done, the fun part begins: we'll start playing with the core stuff; it's going to be fun.

Hopefully, we can have something that works well enough for Debian 14 (aka: Forky). Also hopefully: it's going to be practical in Debian Testing before then.

An ongoing theme with the developers

Posted May 29, 2025 22:08 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link]

> I also don't want HomeAssistant to use the root of my home web server, or to use a specialized port; I would like it to just share port 443, like every other app in Debian, and just use /homeassistant (or /ha ?), and leave the rest of the namespace for *ME* to decide what to host in my web server/haproxy/whatever. Simple things like having SSL support in HA are just horrible. :(

You can do that just fine with HA and containers. Have HA run on a port like 8080, and set up your front-end webserver/proxy to redirect everything under the '/ha/*' path to that port.

HA itself works fine with URLs that are not rooted, I believe you just need to set the `external_url` property in the YAML config for that.
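As a sketch of that front-end setup, assuming nginx as the proxy, HA listening on 127.0.0.1:8080, and HA's own URL settings configured to accept the /ha/ prefix (whether a path prefix works smoothly depends on the HA side, as noted above):

```nginx
location /ha/ {
    # Hand everything under /ha/ to the Home Assistant container.
    proxy_pass http://127.0.0.1:8080/;
    # The HA frontend uses websockets, so the Upgrade headers must be
    # forwarded or the UI will not load.
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}
```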

An ongoing theme with the developers

Posted May 30, 2025 6:43 UTC (Fri) by smurf (subscriber, #17840) [Link]

> I prefer all of these python things to be installed flat out, on my system, together with other things.

Amen, brother.

> I also don't want a specialized VM (which is what I'm currently using because ... no other viable choice),

I'm using a simple venv, and a startup script that activates it and then starts the HA that's been pip-installed into it. Works perfectly fine. So I wonder: what do you need the VM for?

An ongoing theme with the developers

Posted Aug 9, 2025 22:51 UTC (Sat) by Rudd-O (guest, #61155) [Link]

HA won't be happening as a traditional system-wide apt-get or dnf install, because HA insists on installing various versions of various packages in an env it very much owns. Those packages are only installed at runtime, when an integration is added by the user.

Difference in perspective

Posted May 24, 2025 18:17 UTC (Sat) by DemiMarie (subscriber, #164188) [Link] (7 responses)

Another perspective is that nowadays, containers and VMs have made it possible for an application to take advantage of recent versions of its dependencies, without having to worry about them being unavailable on the target platform. From that perspective, they are a huge improvement.

Difference in perspective

Posted May 24, 2025 21:02 UTC (Sat) by ballombe (subscriber, #9523) [Link] (6 responses)

No, this just creates a house-of-cards situation where bugs cannot be fixed.

Difference in perspective

Posted May 24, 2025 23:25 UTC (Sat) by jkingweb (subscriber, #113039) [Link] (5 responses)

Could you elaborate? I don't see how your statement follows.

Difference in perspective

Posted May 25, 2025 8:18 UTC (Sun) by smurf (subscriber, #17840) [Link] (4 responses)

So you have a container with a mountain of hard-coded dependencies, one of which is buggy, and all of this lives in a Docker image which you got from the net.

It's not at all easy to replace that module there. In a "real" installation you can just update the requirements file to the version with the bugfix and restart HomeAssistant.

That being said, this is for end users. A user won't muck about with Docker images, but then they wouldn't know how to edit their Supervised installation either (not without messing it up anyway). The user files the bug, somebody else notifies upstream with a fix, and you get that in the next regular update.

Actually *running* HA in Supervised mode will continue to work because they (and we) need that mode for development and testing. It just won't be end-user-supported by them. After all, you can't rebuild a Docker image or re-provision and re-boot your appliance every time you fix a typo; that'd take much too long.
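The "update the requirements file" fix mentioned above, as a runnable sketch. The file path and package name are made up for the demo; the steps that touch a live install are commented out.

```shell
# A hypothetical requirements file pinning a buggy release:
printf 'buggylib==1.2.3\n' > /tmp/requirements-demo.txt

# Bump the pin to the release carrying the bugfix:
sed -i 's/^buggylib==1.2.3$/buggylib==1.2.4/' /tmp/requirements-demo.txt
cat /tmp/requirements-demo.txt

# In a real install, reinstall into the venv and restart HA afterwards:
# pip install -r requirements.txt
# systemctl restart home-assistant
```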

Difference in perspective

Posted May 25, 2025 9:49 UTC (Sun) by ballombe (subscriber, #9523) [Link]

And in addition, this encourages supporting only the latest version of the dependencies instead of the most tested and most reliable ones. So anybody whose system is slightly different from upstream's CI will hit bugs that upstream will flag as "nonreproducible".

Difference in perspective

Posted May 25, 2025 11:36 UTC (Sun) by pizza (subscriber, #46) [Link] (1 response)

> After all, you can't rebuild a Docker image or re-provision and re-boot your appliance every time you fix a typo; that'd take much too long.

On the contrary, it's increasingly common these days that the only build artifact produced is a docker image.

Difference in perspective

Posted May 25, 2025 18:52 UTC (Sun) by smurf (subscriber, #17840) [Link]

Python is a scripting language. No sane developer is going to wait for a Docker build on every edit-test-debug cycle.

Difference in perspective

Posted May 25, 2025 18:56 UTC (Sun) by Cyberax (✭ supporter ✭, #52523) [Link]

> Actually *running* HA in Supervised mode will continue to work because they (and we) need that mode for development and testing. It just won't be end-user-supported by them. After all, you can't rebuild a Docker image or re-provision and re-boot your appliance every time you fix a typo; that'd take much too long.

The way the HA environment is set up, you can create "patches" in the `config/custom_components` folder, the Python file loader looks there first. So you can replicate the directory structure of the affected package and patch individual files. Once you're done, you can do a full-blown build.

And even for the core development, you don't need the full Supervised mode. Just clone the repo and install dependencies for the core and the parts that you want to change.
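The `config/custom_components` trick described above works because of Python's ordinary import-path precedence: a module earlier on the path shadows a same-named module later on it. A minimal demonstration (the directory names are invented for the demo, not HA's real layout):

```shell
# Two copies of the same module name, one "stock" and one "patched".
mkdir -p /tmp/ha-overlay/patched /tmp/ha-overlay/stock
echo "VERSION = 'stock'"   > /tmp/ha-overlay/stock/widget.py
echo "VERSION = 'patched'" > /tmp/ha-overlay/patched/widget.py

# The patched directory is listed first on PYTHONPATH, so its copy wins:
PYTHONPATH=/tmp/ha-overlay/patched:/tmp/ha-overlay/stock \
    python3 -c 'import widget; print(widget.VERSION)'
# prints: patched
```

HA's loader applies the same principle by consulting `config/custom_components` before the installed package tree.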

Shouldn't have wasted my time

Posted May 27, 2025 20:38 UTC (Tue) by jdulaney (subscriber, #83672) [Link] (2 responses)

HAOS doesn't work on my hardware.

Shouldn't have wasted my time

Posted May 28, 2025 8:20 UTC (Wed) by smurf (subscriber, #17840) [Link] (1 response)

So what's the problem with your hardware? It's a bunch of Python. It should work out of the box anywhere there's a current Python interpreter.

Shouldn't have wasted my time

Posted May 29, 2025 11:17 UTC (Thu) by kpfleming (subscriber, #23250) [Link]

HAOS is not HA, and is far more than a bunch of Python.

MythTV

Posted May 27, 2025 22:00 UTC (Tue) by linuxrocks123 (subscriber, #34648) [Link] (2 responses)

Reading the Home Assistant article reminded me a lot of MythTV. You want to make a DVR: okay, that requires reading a database with a TV schedule, recording video from a tuner, and presenting an interface and playing back the video. Oh, and, for remote players, you'll need NFS or an FTP/HTTP server running on the storage dir.

Pretty simple problem. If I were writing such a program, I'd string together about three executables with shell script and cron and be finished pretty quickly.

That's not the approach MythTV took. MythTV is a bloated catastrophe. Everything is integrated with everything else. Nothing is supported when used by itself. They didn't use any standard tooling, and there's NIH all over the place. They even created their own file sharing API for no reason.

MythTV is absolutely terrible. Oracle couldn't have done worse.

With HomeAssistant, I want to control lights and doohickeys in my house. Pretty simple problem. I'll need some commands to query and toggle the states of the lights. Maybe some cron jobs to switch them on/off at certain times. Maybe a watchdog program that hangs until one of the sensors detects something so I can run that in a loop taking whatever actions are needed.

That's pretty much what I did when setting up my old house with X-10, and it's what I'll do again whenever I finally have time to set up stuff at my current house. It's pretty simple, and I was done pretty quickly.

Now I read an article about HomeAssistant, which solves this problem with a Python monstrosity with so many tentacles that it can't even run without taking over the entire operating system. That's even worse than MythTV, and the problem HomeAssistant is solving is even simpler. That's horrifying.
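The cron-and-scripts approach described earlier in this comment can be sketched as follows. Note that `lightctl` is an invented stand-in for whatever CLI the hardware actually ships (X-10 users often used heyu, for instance); here it is stubbed out so the sketch runs anywhere.

```shell
# Stub standing in for the hardware's real CLI:
lightctl() { echo "light $1 -> $2"; }

lightctl porch on     # prints: light porch -> on

# The corresponding crontab entries (on at dusk, off late evening):
#   30 18 * * * lightctl porch on
#   30 23 * * * lightctl porch off
```

A "watchdog" loop as described would just block on the sensor command and call `lightctl` when it returns.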

MythTV

Posted May 28, 2025 8:39 UTC (Wed) by smurf (subscriber, #17840) [Link] (1 responses)

HA isn't MythTV. The HA core is reasonably lean and doesn't depend on much (even less if you turn off some defaults that almost everybody wants). It's also discoverable, meaning you don't have to remember where the timeout for your outside light is: it's right there in the UI, and it's all in one place. A switch is a switch in HA, no matter which tech it's physically connected to, so moving your light from X-10 to WiFi and hooking up an additional switch via EnOcean is *easy*.

Yes, it can run without taking over a whole box, quite easily actually. The point is that they'll stop supporting end users who use Supervised mode, simply because there are all sorts of ways you can get that wrong in not-quite-obvious ways. The point is not, and they say so quite clearly, that Supervised mode will stop working.

Case in point: I'm the maintainer of knxd, a reasonably ancient C++ daemon that routes KNX datagrams. I took it over and rewrote the internals ten years ago. Until two years ago, the nonsense from botched, outdated installation scripts that somebody found on some blog somewhere outnumbered *real* bugs and questions. And that's with a README, with build instructions and everything, right alongside the source code, for a limited-domain daemon that does *one* job.

MythTV

Posted May 28, 2025 17:57 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link]

> Case in point: I'm the maintainer of knxd, a reasonably ancient C++ daemon that routes KNX datagrams.

Thank you for that! I've been experimenting with KNX and I love it!

Core REPRESENT

Posted Aug 9, 2025 22:48 UTC (Sat) by Rudd-O (guest, #61155) [Link]

I will continue running HA Core because HAOS is not an option when running HA under Qubes OS (there's actually a fuck ton of things that just aren't an option, like Umbrel, for example). I've been dealing with Python since Python 1.4; I can help myself to solve issues. I will just be filing fewer bug reports with the project. *shrug*


Copyright © 2025, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds