From the article:
"At this point, an audience member opined that the classical UNIX init process was designed to be as small and robust as possible, but that this seemed to contrast with where systemd was going. Is systemd at risk of becoming a large single point of failure? Lennart pointed out that while systemd replaces many earlier tools, it performs those tasks in separate processes. However, the notable difference is that the various components operate in a much more integrated fashion."
Which presumably only goes halfway toward answering the audience member's question. Doing everything in separate processes should certainly make it easier to keep the init process small and robust, provided that the overall system is well tested for how it copes with failures of individual components. Does anyone know if that is being done?
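To make the point concrete, here is a minimal sketch (not systemd's actual code, and the `supervise` helper is purely illustrative) of why separate processes help: the supervisor survives a component's crash and can restart it, so a failing component need not take down PID 1.

```python
import subprocess
import sys

MAX_RESTARTS = 3

def supervise(cmd, max_restarts=MAX_RESTARTS):
    """Run cmd; restart it each time it exits nonzero, up to max_restarts.

    The supervisor itself never crashes when the child does -- it just
    observes the exit status and decides whether to retry. This is the
    basic property that keeps an init process robust while real work
    happens in separate processes.
    """
    restarts = 0
    while restarts < max_restarts:
        rc = subprocess.run(cmd).returncode
        if rc == 0:
            break  # component finished cleanly; nothing to restart
        restarts += 1  # component failed; supervisor survives and retries
    return restarts

# A "component" that always crashes: the supervisor retries it
# MAX_RESTARTS times and then gives up, without itself failing.
crashing = [sys.executable, "-c", "raise SystemExit(1)"]
print(supervise(crashing))
```

Testing exactly this behavior — deliberately crashing components and checking the supervisor's response — is the kind of failure-injection testing the comment is asking about.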