
PipeWire: The Linux audio/video bus

Posted Mar 4, 2021 16:06 UTC (Thu) by floppus (guest, #137245)
In reply to: PipeWire: The Linux audio/video bus by dancol
Parent article: PipeWire: The Linux audio/video bus

The trouble is, that breaks everything that uses select(2).

Difficult to see how you could avoid that without requiring applications to opt in to a higher limit... which is what distros seem to be doing lately: increasing the hard limit to 262144 or 1048576 while leaving the soft limit at 1024.
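
A minimal sketch of what that opt-in might look like in practice (the function name is just for illustration, and it assumes neither the program nor any shared library it loads still calls select()):

    /* Opt in to the larger distro-provided hard limit by raising the
     * soft RLIMIT_NOFILE up to whatever the hard limit is. */
    #include <stdio.h>
    #include <sys/resource.h>

    int raise_fd_limit(void)
    {
        struct rlimit rl;

        if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
            perror("getrlimit");
            return -1;
        }

        /* The soft limit is typically still 1024; the hard limit is
         * whatever the distribution (or systemd) configured, e.g. 512K. */
        rl.rlim_cur = rl.rlim_max;

        if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
            perror("setrlimit");
            return -1;
        }

        printf("soft RLIMIT_NOFILE raised to %llu\n",
               (unsigned long long)rl.rlim_cur);
        return 0;
    }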



PipeWire: The Linux audio/video bus

Posted Mar 4, 2021 21:39 UTC (Thu) by zuki (subscriber, #41808)

Yep, systemd has done that since v240:

> The Linux kernel's current default RLIMIT_NOFILE resource limit for userspace processes is set to 1024 (soft) and 4096 (hard). Previously, systemd passed this on unmodified to all processes it forked off. With this systemd release the hard limit systemd passes on is increased to 512K, overriding the kernel's defaults and substantially increasing the number of simultaneous file descriptors unprivileged userspace processes can allocate. Note that the soft limit remains at 1024 for compatibility reasons: the traditional UNIX select() call cannot deal with file descriptors >= 1024 and increasing the soft limit globally might thus result in programs unexpectedly allocating a high file descriptor and thus failing abnormally when attempting to use it with select() (of course, programs shouldn't use select() anymore, and prefer poll()/epoll, but the call unfortunately remains undeservedly popular at this time).
>
> This change reflects the fact that file descriptor handling in the Linux kernel has been optimized in more recent kernels and allocating large numbers of them should be much cheaper both in memory and in performance than it used to be. Programs that want to take benefit of the increased limit have to "opt-in" into high file descriptors explicitly by raising their soft limit. Of course, when they do that they must acknowledge that they cannot use select() anymore (and neither can any shared library they use — or any shared library used by any shared library they use and so on).
>
> Which default hard limit is most appropriate is of course hard to decide. However, given reports that ~300K file descriptors are used in real-life applications we believe 512K is sufficiently high as new default for now. Note that there are also reports that using very high hard limits (e.g. 1G) is problematic: some software allocates large arrays with one element for each potential file descriptor (Java, …) — a high hard limit thus triggers excessively large memory allocations in these applications. Hopefully, the new default of 512K is a good middle ground: higher than what real-life applications currently need, and low enough for avoid triggering excessively large allocations in problematic software. (And yes, somebody should fix Java.)
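
A minimal sketch of the kind of poll()-based replacement the quote has in mind, for the simple case of waiting for one descriptor to become readable (the wrapper name is just illustrative):

    /* Wait for one descriptor to become readable with poll() instead of
     * select(); poll() takes an array of struct pollfd and has no
     * FD_SETSIZE ceiling.
     * Returns 1 if readable, 0 on timeout, -1 on error. */
    #include <poll.h>
    #include <stdio.h>

    int wait_readable(int fd, int timeout_ms)
    {
        struct pollfd pfd = { .fd = fd, .events = POLLIN };

        int n = poll(&pfd, 1, timeout_ms);
        if (n < 0) {
            perror("poll");
            return -1;
        }
        if (n == 0)
            return 0;               /* timed out */
        return (pfd.revents & POLLIN) ? 1 : -1;
    }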

PipeWire: The Linux audio/video bus

Posted Mar 6, 2021 1:01 UTC (Sat) by plugwash (subscriber, #29694)

Technically it's not select() per se that is the problem; it's the types and macros used alongside it, which use a fixed-size array with no bounds checking.
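
For example, glibc's fd_set is a fixed bit array of FD_SETSIZE (1024) bits and FD_SET() does no range check, so code that must keep using select() has to enforce the bound itself; a small illustrative wrapper might look like:

    /* FD_SET() blindly sets bit 'fd' in a fixed FD_SETSIZE-bit array, so a
     * descriptor numbered >= FD_SETSIZE silently writes out of bounds.
     * A bounds-checked wrapper (illustrative only): */
    #include <stdio.h>
    #include <sys/select.h>

    int fd_set_checked(int fd, fd_set *set)
    {
        if (fd < 0 || fd >= FD_SETSIZE) {
            fprintf(stderr, "fd %d does not fit in an fd_set (FD_SETSIZE=%d)\n",
                    fd, FD_SETSIZE);
            return -1;
        }
        FD_SET(fd, set);
        return 0;
    }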

