The trouble with symbolic links
Posted Jul 15, 2022 6:54 UTC (Fri) by sven_wagner (guest, #114232)
Parent article: The trouble with symbolic links
The problem starts when higher-privileged accounts use user data to do tasks with higher privileges, e.g. when an admin is archiving (or restoring) user homes, or syncing them from one server to another (e.g. with rsync).
Or when a build process (in one user account) acts on another user's data tree.
Then TOCTOU becomes interesting.
A few possible solutions:
In nearly all cases (I believe) the root job/cron job should either have been run under lesser privileges or should have been split into two phases:
- archive/restore only the user home folders (/home/*) but NOT their contents, as at that directory level root privileges are needed to create the folders with their owners/permissions, and
- use separate archives per home folder and archive/restore them from within user space (sudo -u $user tar -xf $archive).
Then the root task would not need symlink checking of user input, as the restore task runs with user permissions only.
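A minimal dry-run sketch of that two-phase restore (the user names, the archive location, and the run wrapper are illustrative assumptions, not from the comment):

```shell
#!/bin/sh
set -eu
# Dry-run wrapper: print commands instead of executing them unless
# APPLY=1 is set, so the sketch is safe to run anywhere.
run() { if [ "${APPLY:-0}" = 1 ]; then "$@"; else echo "+ $*"; fi; }

ARCHIVE_DIR=/var/backups/homes   # assumed archive location
USERS="alice bob"                # assumed user list

# Phase 1 (root): recreate the top-level home directories with the
# right owner and mode, but none of their contents.
for user in $USERS; do
    run mkdir -p "/home/$user"
    run chown "$user:$user" "/home/$user"
    run chmod 700 "/home/$user"
done

# Phase 2 (per user): unpack each archive with that user's own
# privileges, so a malicious symlink inside the archive cannot reach
# anything the user could not write to anyway.
for user in $USERS; do
    run sudo -u "$user" tar -xpf "$ARCHIVE_DIR/$user.tar" -C "/home/$user"
done
```

Run without APPLY=1 it only prints the command lines, which makes the two phases easy to review before doing them for real.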
The build process, for example, should not be able to write its own config/scripts, so that a TOCTOU attack could only alter the data actually used for the build, which could be checked after a recursive copy into the build-process user space.
Another way to prevent TOCTOU when reading (as root) user-provided data (that resides behind symlinks) would be to set the immutable flag on the directories in the tree for the duration of the root task. The user himself cannot set the flag, so the admin already knows when this flag is set somewhere (and needs to stay), or otherwise can set it blindly and just remove it afterwards.

Symlinks cannot be made immutable, but setting the directory immutable removes the user's ability to remove or alter the symlinks or directories within it. So after setting the immutable flag on the directories in the user tree, the admin can recheck once that all directories have the flag, and then has no TOCTOU problems with directory/symlink changes until he removes the flags again after finishing the task. Maybe the user could still use FUSE to overmount directories in his space, but the thread is about symlinks. (To prevent collisions with user scripts or cron jobs, flock could help.)
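A dry-run sketch of that freeze, recheck, work, unfreeze sequence (the target path and the rsync backup command are illustrative assumptions; chattr needs root and a filesystem that supports the immutable flag, e.g. ext4):

```shell
#!/bin/sh
set -eu
# Dry-run wrapper: print commands instead of executing them unless
# APPLY=1 is set, so the sketch is safe to run anywhere.
run() { if [ "${APPLY:-0}" = 1 ]; then "$@"; else echo "+ $*"; fi; }

HOME_DIR=/home/alice   # assumed target tree

# 1. Freeze: mark every directory immutable. Symlinks themselves cannot
#    carry the flag, but an immutable parent directory stops the user
#    from replacing or retargeting anything inside it.
run find "$HOME_DIR" -type d -exec chattr +i {} +

# 2. Recheck that the flag really is set (a real script would inspect
#    the lsattr output), then do the privileged work.
run lsattr -d "$HOME_DIR"
run rsync -a "$HOME_DIR/" /backup/alice/

# 3. Unfreeze when the task is done.
run find "$HOME_DIR" -type d -exec chattr -i {} +
```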
Using chroot could also solve the "overwriting /etc/passwd" risk.
Yet another way:
SELinux provides a way to allow a script to have root privileges (as in setting the owner of arbitrary files) while, by design, it needs a "whitelist" of which group of files/dirs (sockets, etc.) it is allowed to modify (and how exactly) before it can do so. A process that can write to home_t or samba_share_t (which I assume often needs chown privileges) would not be able to write to etc_t. Instead of being tricked by the user with changing symlinks into writing to /etc/passwd, it would create an audit record of this attack vector being used (and prevented).
On the other hand, the cron job that runs as root would also not need to read shadow_t and (as it is not whitelisted) could not leak data when tricked by TOCTOU attacks.
Symlinks are not a problem here; administration with only(!) decades-old security mechanisms actually is. There is a reason for SELinux to exist, and there are hundreds (or more) of reasons to use it. =)
Rather than preventing symbolic links from being used to trick some other process by removing the whole symlink mechanism, SELinux prevents all non-whitelisted accesses at the syscall level AND also reports the already-prevented attack.
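The whitelist idea can be pictured as a few type-enforcement rules. This is only a sketch: the backup_t domain below is hypothetical, while home_t, etc_t and shadow_t are real reference-policy types.

```
# Hypothetical domain for a privileged backup job.
type backup_t;

# Whitelist: it may read and change attributes (e.g. the owner) of
# files and directories labeled home_t ...
allow backup_t home_t:dir  { read search getattr setattr };
allow backup_t home_t:file { read getattr setattr };
allow backup_t self:capability { chown fowner };

# ... and nothing is granted for etc_t or shadow_t, so a symlink race
# that redirects the job to /etc/passwd or /etc/shadow is denied at the
# syscall and logged as an AVC denial in the audit log.
```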
The trouble with symbolic links
Posted Jul 15, 2022 10:57 UTC (Fri) by ma4ris5 (guest, #151140)

I remember when I started studying at the "Helsinki University of Technology (HUT)" in 1992. It is now named "Aalto University"; this is different from the "University of Helsinki", where Linus studied. There were VT computer terminals with UART serial ports, attached to Unix desktops for shared use, with private home folders on NFS. I remember learning Gopher, and also the Mosaic browser; maybe it was an early version of Mosaic (according to Wikipedia, the first release was in 1993).

The IT administrators told us that the Unix vendors had fixed the TOCTTOU soft-link race conditions: some students in universities had been using soft links in /tmp/ folders to gain root access before the fixes. The Unix administrators at HUT forbade trying such attacks on the computers: attempts would be noticed and there were severe consequences. Fortunately, symbolic link races were history: students were safe, and we learned to prevent the race conditions in the future.

The trouble with symbolic links
Posted Jul 15, 2022 22:19 UTC (Fri) by ma4ris5 (guest, #151140)

TOCTTOU races were (partially) fixed; for the remaining ones, there was logging and software-level booby traps. If somebody tried to search for the remaining race conditions, alerts would be raised on one of the first attempts, and the user would be thrown out and blocked. IT departments were in control again.

The world would look quite different on the Linux side if universities had demanded that Unixes remove soft-link TOCTTOU races at the file-system level before 1992. Something like this might happen in the next 10 years:

Security scanners:
- open(), chmod(), chcon(), stat(), etc.: TOCTTOU-hazard libc functions will be marked as insecure at the source-code level.
- All applications that use those functions will need to be altered (just like the multi-year Samba project to fix one CVE) to avoid unsafe calls; otherwise those projects will become obsolete.
- All commands whose source code contains such calls (like the "setfacl -R" command mentioned in the Samba video) are marked as vulnerable because of possibly existing TOCTTOU races.

Other attack mitigations:
- Security-aware application container deployments must use the latest OS, because older OS versions don't get the (massive, multi-year) fixes.
- Container (non-root-path) mounts that use data disks which follow soft links by default will be marked as insecure.
- Of course, SELinux might come into containers too as mandatory, when an OS is present at the file-system level.
- Single-binary containers take more ground (examples: Rust, Go). There is only the host kernel; a read-only binary, configuration and root disk; a data partition with no soft-link support; and no vulnerable OS binaries on the file system (only a minimal set like "/work/", "/dev/", "/proc" and "/sys"). No suid binaries. The container runs as a non-root user.

Converting all existing code bases to safety should be as easy as possible (the 10-year project). Creating new safe applications should be the easy (default) way; creating vulnerable code is fine to be a bit harder to do.

The trouble with symbolic links
Posted Jul 16, 2022 8:08 UTC (Sat) by sven_wagner (guest, #114232)

Before starting the job, move the complete user-space folder into another one where the user cannot reach it, check for still-open file handles of the user, then do your job without fear of TOCTOU by evil users, and at the end move the folder back in place where the user can work on it.

Another option would be to temporarily disable login for the user, SIGSTOP all of his processes and disable his cron/at jobs, let the privileged task be done, and afterwards SIGCONT the processes again.

Similarly for shares: disable the user, check that no current connection is alive, do the job, and enable the user again. Or just completely shut down Samba while the higher-privileged job runs.

Those who want the user to be able to work while the higher-privileged job is ongoing would just add more insecurities, residing within the programs that work on the files but don't expect a file to be changed while being read or maybe even mmapped. At the very least, whatever the higher-privileged task does with the data cannot be assumed to be consistent.

If you let the user change anything inside his user space while higher-privileged tasks are running, the user might just add data to the end of a file currently being read by root and punch holes in the data at the beginning so that he does not exceed his quota. The process reading the file (into the / partition?) could end up trying to read something like 16 TiB before the user has to try using collapse instead of punch to see whether the root cron job continues to read even more of the file. (Fortunately the user cannot directly see the offset of the root process's file handle, so he must rather guess where the other process currently is to do this type of attack.)

Is this attack vector now caused by PUNCH or COLLAPSE? Do we have to remove them just to be able to run root commands on user-space data while the user is able to work on it, as is suggested here with symlinks?
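The quota side of the punch trick is easy to observe with util-linux fallocate(1). This sketch works on a temp file of its own, not real user data, and assumes a Linux filesystem that supports hole punching (ext4, XFS, tmpfs):

```shell
#!/bin/sh
set -eu
# Write 1 MiB of real data, then punch out all but the last 4 KiB:
# the logical size a reader sees stays the same, while the allocated
# blocks (what quota charges) shrink.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=4096 count=256 2>/dev/null
before=$(stat -c '%s %b' "$f")    # "size blocks" before punching

# FALLOC_FL_PUNCH_HOLE implies keep-size, so st_size is unchanged.
fallocate --punch-hole --offset 0 --length $((4096 * 255)) "$f"
after=$(stat -c '%s %b' "$f")     # same size, far fewer blocks

echo "before: $before"
echo "after:  $after"
rm -f "$f"
```

A root process reading the file still has the full logical size ahead of it, while the user has paid quota for only one block.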