A rough start for ksmbd
Posted Oct 7, 2021 19:31 UTC (Thu) by developer122 (guest, #152928)
In reply to: A rough start for ksmbd by flussence
Parent article: A rough start for ksmbd
Maybe I just want to deploy nfs on my network after already having set up a bunch of computers, and maybe they all just happen to use slightly different usernames and IDs for the same users. Right now the only thing I can do is squash all the users from any particular host down to one user, but I'd really like to be able to map them. It works so long as I'm the only one using it, but I can see a multiuser future on the horizon (eg guests/family).
/nfs_gripe
Posted Oct 7, 2021 20:27 UTC (Thu)
by ejr (subscriber, #51652)
[Link] (1 responses)
Posted Oct 8, 2021 14:48 UTC (Fri)
by bfields (subscriber, #19510)
[Link]
In the absence of kerberos, the server's mostly just trusting the clients to correctly represent who's accessing the filesystem anyway.
I seem to recall one or two people making an attempt at adding this kind of mapping, and it turning out to be more complicated than expected. But that was a while ago. I wonder if any of the container work done since then would be useful here.
Anyway, there'd have to be someone willing to look into it and do the work.
Posted Oct 8, 2021 7:32 UTC (Fri)
by Wol (subscriber, #4433)
[Link]
How hard is it to set up LDAP? Can't you implement some simple "single sign on"?
Cheers,
Wol
Posted Oct 8, 2021 9:09 UTC (Fri)
by geert (subscriber, #98403)
[Link]
Less tech-savvy people ("family") just use "Connect to Server" with "sftp://nas/..." in the GUI file manager. No further setup needed.
Posted Oct 9, 2021 11:50 UTC (Sat)
by ballombe (subscriber, #9523)
[Link] (1 responses)
Posted Oct 11, 2021 13:19 UTC (Mon)
by bfields (subscriber, #19510)
[Link]
Looking at old documentation: the old userspace nfsd daemon (which preceded both Ganesha and knfsd) supported a "map_daemon" export option. When that was set, it would query the client's rpc.ugidd for id mappings using an rpc protocol. So you ran rpc.ugidd on the client.
No distribution carries rpc.ugidd any more, and the map_daemon export option was never supported by knfsd.
Might be interesting to know more of the history. Digging through old nfs-utils tarballs (it predates git) might be one way to figure it out.
If we were to support uid/gid mapping today, we'd do it some other way.
Posted Oct 9, 2021 16:24 UTC (Sat)
by Baughn (subscriber, #124425)
[Link]
It works.
Posted Oct 25, 2021 10:12 UTC (Mon)
by roblucid (guest, #48964)
[Link] (21 responses)
Posted Oct 25, 2021 17:37 UTC (Mon)
by nybble41 (subscriber, #55106)
[Link] (20 responses)
That advice just makes NFS utterly impractical in any situation where you don't have absolute control over UID & GID assignments for every system you want to export files to. (You want to export NFS shares to Android without remapping IDs? Good luck with that…)
Every so often I start thinking that it would be nice to have a network filesystem without the overhead of FUSE, but the cost of setting up Kerberos (or doing without ID mapping) and the headaches of making that work reliably and securely when the systems may not always be on the same local network always send me back to SSHFS.
Posted Oct 26, 2021 17:45 UTC (Tue)
by bfields (subscriber, #19510)
[Link] (19 responses)
I'd think instead you want to map everyone to one user, and export with something like (all_squash,anonuid=MYID,anongid=MYGID).
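For anyone wanting to try that, a minimal /etc/exports sketch would look something like this (the path, network, and IDs are placeholders, not anything specific to this thread):

    /srv/media  192.168.1.0/24(rw,all_squash,anonuid=1000,anongid=1000)

followed by "exportfs -ra" on the server. Every client access then arrives as UID/GID 1000 regardless of who made it, which is exactly the single-shared-identity model.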
Posted Oct 27, 2021 2:58 UTC (Wed)
by nybble41 (subscriber, #55106)
[Link] (18 responses)
Actually what I would want is not squashing all requests down to one UID/GID per export, but rather performing all accesses as the particular user whose credentials were used to authenticate to the server (like SSHFS does, or NFS with LDAP and Kerberos, or mount.cifs) without making any assumptions about the UID/GID (or username / group name(s)) on the client. There should also be options to control how the UIDs, GIDs, and permissions of files from the server are presented locally (again, like SSHFS with -o uid=X,gid=Y,umask=NNN).
Or perhaps what I really want is just SSHFS with less overhead. (ksshfs?) Until something like that is available, the FUSE implementation works well enough that I don't really see a need for NFS.
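(For reference, the kind of SSHFS invocation being described is roughly the following, with made-up host, path, and IDs:

    sshfs alice@fileserver:/srv/data /mnt/data -o uid=1000,gid=1000,umask=022,reconnect

where uid/gid/umask only affect how the remote files are presented locally; the server still enforces its own permissions for the "alice" account.)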
Posted Oct 27, 2021 7:06 UTC (Wed)
by mbunkus (subscriber, #87248)
[Link] (9 responses)
Posted Oct 27, 2021 11:18 UTC (Wed)
by mathstuf (subscriber, #69389)
[Link] (1 responses)
Posted Oct 27, 2021 11:28 UTC (Wed)
by mbunkus (subscriber, #87248)
[Link]
You can look into the "unix extensions" parameter on Samba.
My comment was really just an answer to nybble41's requirements, not a general endorsement to use CIFS as the one and only network file system. That being said, I'm still envious of the various fine-grained controls Samba offers whenever I run into the various limitations of what NFS can do.
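(For the record, the parameter in question is a one-line global setting, something like:

    [global]
        unix extensions = yes

but note that it only applies to the old SMB1 dialect; the SMB3.1.1 POSIX extensions discussed further down are a separate mechanism that was still experimental in Samba at the time.)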
Posted Oct 27, 2021 16:43 UTC (Wed)
by nybble41 (subscriber, #55106)
[Link] (6 responses)
If the security were comparable to SSH (including authentication via public keys rather than passwords) then I would agree, CIFS has most of the other properties I'm looking for. You can even set up a multi-user mount point and have the kernel track per-user login credentials using the cifscreds utility.
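The intended flow is roughly this, at least in principle (server and account names invented):

    # once, by root (or from fstab):
    mount -t cifs //fileserver/data /mnt/data -o multiuser,sec=ntlmssp,vers=3.0
    # then each user caches their own credentials in their session keyring:
    cifscreds add -u alice fileserver

though, as described further down, getting that first mount established without handing the system some account's credentials turns out to be the sticking point.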
Posted Oct 27, 2021 17:24 UTC (Wed)
by mbunkus (subscriber, #87248)
[Link]
It surely isn't trivial as you absolutely must restrict the protocol version to the latest one (SMB3.1 or so) due to security issues in earlier versions. Then again, you have to do that with all the usual services; one shouldn't run HTTPS with SSL 1.0 anymore either, after all. And no, most Apache & nginx default installations on current popular server distributions do not come with the best/tightest SSL/TLS security settings either.
Things are… complicated. What I don't get, though, is the naysayers offering things such as Nextcloud/Owncloud (web-based file hosting services) as a supposedly secure alternative. What's more secure about it? Both run over protocols for which older versions have security issues. Samba-the-project has had a couple of well-known security issues, but then again so do Nextcloud/Owncloud. Both usually authenticate via user & password (unless the server belongs to a company environment where Kerberos is used for Samba & maybe SAML for NC/OC). They're both… roughly identical. What am I missing here?
I do use it regularly for backups that are encrypted on the client side, accessing my hosting provider's storage via CIFS. That gives me two different layers (factors) of security, which suffices for me; the alternatives are NFS without any kind of transport-layer security, or sshfs being its usual slow and sometimes unreliable self. Meh.
Posted Oct 28, 2021 0:44 UTC (Thu)
by nybble41 (subscriber, #55106)
[Link] (3 responses)
> Can you use SMB/CIFS over the Internet these days (without a VPN), or is that still considered insecure?
The only notable example I found of anything similar was the use of SMB 3.1.1 in Microsoft Azure, which isn't exactly "over the Internet" but comes fairly close. But everywhere else the consensus seemed to be "don't use SMB, even SMB 3, over the Internet without a VPN."
> You can even set up a multi-user mount point and have the kernel track per-user login credentials using the cifscreds utility.
Despite the warnings, I spent a few hours configuring the most secure Samba configuration I could come up with for Linux-to-Linux file sharing (forcing SMB 3.1.1, inhibiting anonymous / guest logins, disabling netbios) and attempted to make this work.
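A [global] section along these lines would express that kind of lockdown (a sketch rather than the exact configuration used; the parameter names are real Samba options, the values are the obvious strict choices):

    [global]
        server min protocol = SMB3_11
        client min protocol = SMB3_11
        smb encrypt = required
        restrict anonymous = 2
        map to guest = Never
        disable netbios = yes
        smb ports = 445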
The first obstacle I encountered was that Samba (or at least the latest version available in any Debian release: 4.13) doesn't support Unix extensions in SMB 3 mode—or the POSIX extensions which are meant to replace them. The Linux kernel supports them, but the server does not. Easy enough to work around—just mount without Unix or POSIX extensions. But this means certain features are unavailable.
The real problem, though, was that there does not appear to be any way to set up a mount point for a SMB 3 share in multiuser mode without providing a username and password at mount time for an account with access to that share. This completely defeats the point of the "multiuser" option. The credentials which can access the share(s) should only be provided by individual users via the cifscreds utility—they aren't available when the share is mounted from /etc/fstab or a systemd mount unit. Which implies that the kernel should just set up the mount point locally and not actually connect to the server until a user comes along with login credentials, but in practice the kernel tries to connect immediately. Allowing that connection to succeed so that it will create the mount point would mean either storing one user's credentials for the entire system to use or else opening up the share to guest users on the server, neither of which is an attractive option.
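To make that concrete, an fstab entry for such a mount looks roughly like this (names invented):

    //fileserver/data  /mnt/data  cifs  multiuser,vers=3.1.1,sec=ntlmssp,credentials=/etc/smb-mount.cred  0  0

and it's that credentials= file (or a username=/password= pair) which has to name a working account on the share just to get the mount established, even though per-user access afterwards is supposed to go through cifscreds.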
Anyway, it was an interesting challenge and I learned a lot about configuring modern Samba versions, but I'll be sticking with SSHFS for the foreseeable future.
Posted Nov 2, 2021 13:00 UTC (Tue)
by JanC_ (guest, #34940)
[Link] (1 responses)
Posted Nov 2, 2021 20:48 UTC (Tue)
by nybble41 (subscriber, #55106)
[Link]
Naturally. But if you're setting up a mount point with -o multiuser then you're probably doing so as root (with or without /etc/fstab) and not as one of the (locally) unprivileged users with the login credentials for that share on the server. The mechanics of -o multiuser are that when a user accesses the local mount point the kernel gets the credentials from that user's keychain and establishes a new connection to the server for that user. It doesn't make sense to require "default credentials" to set up the mount point.
The alternative is to install mount.cifs with the SUID bit enabled and let each user mount their own shares, which works (more or less, if you're okay with the Windows version of the SMB3 protocol without POSIX extensions) but isn't as nice as having a common multi-user mount point.
Posted Nov 2, 2021 13:20 UTC (Tue)
by mbunkus (subscriber, #87248)
[Link]
I've never set that up without Kerberos, though.
[1] Maybe that initial mount could also be done via automounting, not at boot, though I don't know whether or not that works when the initial request for a currently unmounted directory comes from a user process.
Posted Oct 29, 2021 3:49 UTC (Fri)
by Fowl (subscriber, #65667)
[Link]
https://techcommunity.microsoft.com/t5/itops-talk-blog/sm...
Posted Oct 28, 2021 1:51 UTC (Thu)
by neilbrown (subscriber, #359)
[Link] (7 responses)
This doesn't mean anything for NFS. NFS doesn't authenticate a connection, it authenticates each request.
With NFSv4, there is a "first" request (EXCHANGE_ID I think in v4.1 and v4.2) and almost all other requests inherit a "state" from that. This is mostly used for clear ordering and exactly-once semantics.
You seem to be suggesting that the credentials used to authenticate all subsequent requests should be ignored, and the credentials of the "first" request should be used throughout.
I don't think that would be useful with any current NFS client, as they use "machine" credentials to authenticate the state management, and that doesn't necessarily map to any UID. Obviously you could change the NFS client to behave differently, but then you would just change it to send the credential you want the server to honour.
What precisely is it that you want to achieve? I'm in favour of making NFS useful for more use-cases, but we would need a clear description of what the use-case is.
Posted Oct 28, 2021 17:19 UTC (Thu)
by nybble41 (subscriber, #55106)
[Link] (6 responses)
Not exactly. The CIFS/SMB multiuser model is a closer fit, where the kernel maintains the credentials for each server in a per-user keyring. One would need to do something about the flaw that SMB multiuser mounts still require valid credentials for an account with access to the share at mount time[0], though perhaps an NFS-based equivalent wouldn't have that problem. It doesn't really matter whether there is a single connection or multiple connections as long as the credentials are not tied to a specific shared UID or username between the client and the server and all access checks are enforced on the server (i.e. the client can be untrusted). And of course I'd rather have POSIX/Linux filesystem semantics like NFS as opposed to a protocol originally designed around the Windows VFS. The protocol would obviously need to be hardened and encrypted to be a suitable alternative to SSHFS (SFTP) over the Internet and not just LANs. Regarding authentication, I currently require public keys for all SSH logins on my server, and I'd rather not go back to passwords.
The full use case is basically this: Given any random Linux server which can be accessed through SSH, I would like to be able to mount a filesystem from this server from a separately-administered client machine using a kernel-based filesystem module, with the full POSIX semantics available from NFSv4 mounts and without the overhead and limitations of FUSE. The same mount point should be available to multiple users on the client, with each user accessing files on the server through their own existing SSH login credentials. In other words: Starting with SMB-style multiuser mounts, allow mounting without any default credentials, use the NFS protocol for the actual filesystem operations, and add public-key authentication and secure encryption akin to SSH.
(One option for the authentication would be to actually perform an SSH login in userspace when adding the credentials with a fixed command which, on success, registers a temporary session key which can be loaded into the client's keyring and used for all further requests. This seems like it would be fairly ergonomic and wouldn't require the kernel to implement all the different authentication types supported by SSH.)
The existing SMB3 support would probably be "good enough", though not ideal due to limited POSIX support, if it weren't for the issue of requiring mount-time credentials. I could even emulate SSH authentication by scripting a remote smbpasswd command with a temporary password, though that only allows one client machine at a time for each account and might involve running smbpasswd as root (with restricted options) to allow a new temporary password to be set without knowing the old one.
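That emulation could be as small as a wrapper along these lines (entirely hypothetical, and assuming the restricted sudo rule for smbpasswd mentioned above):

    #!/bin/sh
    # rotate a temporary SMB password on the server, then cache it locally
    TMPPW=$(openssl rand -base64 16)
    printf '%s\n%s\n' "$TMPPW" "$TMPPW" | ssh fileserver sudo smbpasswd -s "$USER"
    cifscreds add -u "$USER" fileserver    # prompts for the password just set

but that would be a workaround for the lack of key-based authentication, not a substitute for it.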
Posted Oct 28, 2021 22:17 UTC (Thu)
by nix (subscriber, #2304)
[Link] (4 responses)
A place to start on the server side of this is already written in the form of the sftp subsystem, though it doesn't implement remotely enough operations and probably the serialization protocol should be rethought, since we are not at all wedded to the sftp protocol. The biggest problem is that by default this thing would be single-threaded, but a multithreaded version is perfectly possible that fires up multiple worker threads (possibly in an expanding-as-needed thread pool), kicks off separate ssh -s's for each one, and lets things rip accordingly.
Nobody has written any of this, but it's purely userspace coding, likely fairly humdrum, and the performance impact of FUSE is probably going to be ignorable compared to the unavoidable performance impact of, well, using SSH (and honestly for all but really big bulk ops or ops on machines with slow CPUs I think you won't even notice that).
... oh dammit I want to write this thing now. (Assuming nobody already has. I haven't even looked, but given the number of people who seem to be even *aware* of SSH subsystems, let alone how damn useful they are for things like this, I do strongly suspect that nothing like this exists.)
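(For anyone who hasn't bumped into SSH subsystems before: the server side is a one-line sshd_config entry and the client asks for it by name with -s, roughly like this; the sshfs-server entry is purely hypothetical:

    # on the server, in /etc/ssh/sshd_config:
    Subsystem  sftp          /usr/lib/openssh/sftp-server
    Subsystem  sshfs-server  /usr/libexec/sshfs-server     # hypothetical variant

    # on the client, request a subsystem instead of a shell command:
    ssh -s fileserver sftp

so the client never has to know where the helper binary lives on the server.)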
Posted Oct 29, 2021 6:04 UTC (Fri)
by nybble41 (subscriber, #55106)
[Link] (3 responses)
An SSHFS equivalent using something like the NFS protocol (without any NFS authentication, just acting as the logged-in user) through an SSH tunnel instead of SFTP would be an interesting design, though it doesn't address my main design goal of migrating the filesystem away from FUSE and into the kernel.
Posted Oct 29, 2021 12:51 UTC (Fri)
by nix (subscriber, #2304)
[Link] (2 responses)
A true multiuser permission-respecting filesystem... well, I guess if you ssh as root it could setfsuid as needed as requests came in. That's what the fsuid is for, after all.
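A minimal sketch of that per-request identity switch, ignoring supplementary groups and error handling (the function name and its use are illustrative only):

    #define _GNU_SOURCE
    #include <sys/fsuid.h>
    #include <sys/types.h>
    #include <fcntl.h>
    #include <unistd.h>

    /* Open a file with the requesting user's filesystem identity,
     * then switch back to root for the next request. */
    static int open_as_user(const char *path, uid_t uid, gid_t gid)
    {
        setfsuid(uid);
        setfsgid(gid);
        int fd = open(path, O_RDONLY);   /* permission check uses uid/gid */
        setfsuid(0);
        setfsgid(0);
        return fd;
    }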
Posted Oct 29, 2021 14:54 UTC (Fri)
by nybble41 (subscriber, #55106)
[Link] (1 responses)
The code in sshfs.c[0] appears to pass "-s sftp" to the SSH command by default (i.e. using the subsystem mechanism) unless the sftp_server option is set (with a path) or the SSHv1 protocol is selected.
> A true multiuser permission-respecting filesystem... well, I guess if you ssh as root it could setfsuid as needed as requests came in.
The kernel SMB3 implementation creates a separate connection for each user, and I'd probably do the same thing here. Many systems, my own included, don't allow direct root logins via SSH; ssh as root + setfsuid on the server would essentially mean trusting the client machine with root access to the server, and even with restrictions such as only allowing this one approved subsystem it could be used to bypass SSH login policies.
The FUSE filesystem would need to be set up by root on the client with the allow_other option to permit shared access. You could have an interface for users to link their ssh-agent to the FUSE filesystem so it can connect on their behalf (using keys), though I'm sure there would be all sorts of interesting security and UX implications.
Posted Oct 29, 2021 17:32 UTC (Fri)
by nix (subscriber, #2304)
[Link]
OK I'm too tired to think then, or simply can't read. It really is there and really obvious :) I guess that shows I was thinking of the right design, since sshfs is already doing it!
OK, so the right thing to do is to soup up sftp-server until it can do everything FUSE can be asked for, then soup up sshfs to talk to it and add a thread pool etc. to it :) If this doesn't work (rejected by upstream), sshfs could ship its own variant (under another name: sshfs-server) and use it if set up on a remote system.
Posted Oct 29, 2021 4:14 UTC (Fri)
by neilbrown (subscriber, #359)
[Link]
I interpret your problem description as "You want a key distribution protocol based on ssh rather than kerberos, and you want NFS to be able to work with the keys thus distributed".
NFS is designed to have pluggable authentication systems, but krb5 wrapped in rpcsec/gss is the only one that is actually implemented. The kernel "knows" about krb5 certificates and the encryption scheme, but out-sources the distribution of those certificates and keys to user space.
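(For context, that krb5 support surfaces as the sec= option on both ends; for example, with made-up paths:

    # server /etc/exports:
    /srv/data  *(rw,sec=krb5p)
    # client:
    mount -t nfs4 -o sec=krb5p fileserver:/srv/data /mnt/data

which is the part that only works once a KDC and keytabs are in place.)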
I wonder if it would be possible to use an ssh-based scheme to distribute keys. I have no knowledge of the internals of krb5 certificates, but my guess is that it isn't completely out of the question. You would need to modify or replace gssproxy on the server and rpc.gssd on the client.
An alternate possible direction involves NFS over TLS. There is a draft standard for this, and I think there is prototype code. Whether the standard allows the credential for the connection to be used for FS requests, I don't know. If it did, then this might be a direction that could be standards-compliant and so more likely to be implemented widely.
The kernel "knows" about krb5 certificates and encryption scheme, but out-sources to user-space for distributing those certificates and keys.
