
A rough start for ksmbd

Posted Oct 7, 2021 19:31 UTC (Thu) by developer122 (guest, #152928)
In reply to: A rough start for ksmbd by flussence
Parent article: A rough start for ksmbd

nfsd would be a lot more friendly if it let you map arbitrary users at arbitrary hosts to arbitrary local users....without having to set up a whole kerberos scheme to "authenticate" them.

Maybe I just want to deploy nfs on my network after already having set up a bunch of computers, and maybe they all just happen to use slightly different usernames and IDs for the same users. Right now the only thing I can do is squash all the users from any particular host down to one user, but I'd really like to be able to map them. It works so long as I'm the only one using it, but I can see a multiuser future on the horizon (eg guests/family).

/nfs_gripe



A rough start for ksmbd

Posted Oct 7, 2021 20:27 UTC (Thu) by ejr (subscriber, #51652) [Link] (1 responses)

So if any one is compromised, it opens *every* user's files by changing that local mapping.

A rough start for ksmbd

Posted Oct 8, 2021 14:48 UTC (Fri) by bfields (subscriber, #19510) [Link]

I'm not sure I understand your threat model here.

In the absence of kerberos, the server's mostly just trusting the clients to correctly represent who's accessing the filesystem anyway.

I seem to recall one or two people making an attempt at adding this kind of mapping, and it turning out to be more complicated than expected. But that was a while ago. I wonder if any of the container work done since then would be useful here.

Anyway, there'd have to be someone willing to look into it and do the work.

A rough start for ksmbd

Posted Oct 8, 2021 7:32 UTC (Fri) by Wol (subscriber, #4433) [Link]

> nfsd would be a lot more friendly if it let you map arbitrary users at arbitrary hosts to arbitrary local users....without having to set up a whole kerberos scheme to "authenticate" them.

How hard is it to set up LDAP? Can't you implement some simple "single sign on"?

Cheers,
Wol

A rough start for ksmbd

Posted Oct 8, 2021 9:09 UTC (Fri) by geert (subscriber, #98403) [Link]

I only use NFS for root file systems on development boards.
Less tech-savvy people ("family") just use "Connect to Server" with "sftp://nas/..." in the GUI file manager. No further setup needed.

A rough start for ksmbd

Posted Oct 9, 2021 11:50 UTC (Sat) by ballombe (subscriber, #9523) [Link] (1 responses)

Was it not done by rpc.ugidd? What happened to that?

A rough start for ksmbd

Posted Oct 11, 2021 13:19 UTC (Mon) by bfields (subscriber, #19510) [Link]

Well, that's interesting. I've been working on nfs for almost 20 years and I don't remember hearing about rpc.ugidd.

Looking at old documentation: the old userspace nfsd daemon (which preceded both Ganesha and knfsd) supported a "map_daemon" export option. When that was set, it would query the client's rpc.ugidd for id mappings using an rpc protocol. So you ran rpc.ugidd on the client.

No distribution carries rpc.ugidd any more, and the map_daemon export option was never supported by knfsd.

Might be interesting to know more of the history. Digging through old nfs-utils tarballs (it predates git) might be one way to figure it out.

If we were to support uid/gid mapping today, we'd do it some other way.

A rough start for ksmbd

Posted Oct 9, 2021 16:24 UTC (Sat) by Baughn (subscriber, #124425) [Link]

It doesn't *really* answer your understandable gripe, but assuming you're willing to run NixOS everywhere there's at least one fix — use a central user database with assigned UIDs, like I do here: https://github.com/Baughn/machine-config/blob/master/modu...

It works.

A rough start for ksmbd

Posted Oct 25, 2021 10:12 UTC (Mon) by roblucid (guest, #48964) [Link] (21 responses)

Using the same [UG]IDs in your whole network is a far better idea; allowing arbitrary user remapping is asking for a huge steaming mess

A rough start for ksmbd

Posted Oct 25, 2021 17:37 UTC (Mon) by nybble41 (subscriber, #55106) [Link] (20 responses)

> Using the same [UG]IDs in your whole network is a far better idea…

That advice just makes NFS utterly impractical in any situation where you don't have absolute control over UID & GID assignments for every system you want to export files to. (You want to export NFS shares to Android without remapping IDs? Good luck with that…)

Every so often I start thinking that it would be nice to have a network filesystem without the overhead of FUSE, but the cost of setting up Kerberos (or doing without ID mapping) and the headaches of making that work reliably and securely when the systems may not always be on the same local network always send me back to SSHFS.

A rough start for ksmbd

Posted Oct 26, 2021 17:45 UTC (Tue) by bfields (subscriber, #19510) [Link] (19 responses)

No expert, but my understanding was that Android manages UIDs in a pretty non-traditional way, dynamically allocating one UID per app. So there's probably not any sensible way to map those individually.

I'd think instead you want to map everyone to one user, and export with something like (all_squash,anonuid=MYID,anongid=MYGID).
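
For example, an /etc/exports entry along those lines (path, network, and IDs here are just placeholders) might look like:

    /srv/media  192.168.1.0/24(rw,all_squash,anonuid=1000,anongid=1000)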

A rough start for ksmbd

Posted Oct 27, 2021 2:58 UTC (Wed) by nybble41 (subscriber, #55106) [Link] (18 responses)

> I'd think instead you want to map everyone to one user, and export with something like (all_squash,anonuid=MYID,anongid=MYGID).

Actually what I would want is not squashing all requests down to one UID/GID per export, but rather performing all accesses as the particular user whose credentials were used to authenticate to the server (like SSHFS does, or NFS with LDAP and Kerberos, or mount.cifs) without making any assumptions about the UID/GID (or username / group name(s)) on the client. There should also be options to control how the UIDs, GIDs, and permissions of files from the server are presented locally (again, like SSHFS with -o uid=X,gid=Y,umask=NNN).
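
For instance, something like the following (placeholder host, paths, and IDs) presents every remote file as owned by the given local UID/GID, with permissions filtered through the umask:

    sshfs alice@server:/srv/data /mnt/data -o uid=1000,gid=1000,umask=022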

Or perhaps what I really want is just SSHFS with less overhead. (ksshfs?) Until something like that is available, the FUSE implementation works well enough that I don't really see a need for NFS.

A rough start for ksmbd

Posted Oct 27, 2021 7:06 UTC (Wed) by mbunkus (subscriber, #87248) [Link] (9 responses)

All of what you want is something Samba (the project) can offer, and much more (e.g. forcing authenticated access to use a certain group or certain bits in the file permissions, which makes setting up a shared directory where all group members have full access to all files trivial).
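
As a rough illustration (share name, path, and group are made up), an smb.conf share doing that kind of forcing might look like:

    [shared]
        path = /srv/shared
        valid users = @family
        read only = no
        force group = family
        create mask = 0660
        directory mask = 0770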

A rough start for ksmbd

Posted Oct 27, 2021 11:18 UTC (Wed) by mathstuf (subscriber, #69389) [Link] (1 responses)

When I set up my shared filesystem, I first used CIFS; however, the lack of support for arbitrary filenames and the munging of permissions made it unsuitable. Is there a way to say "I don't care about Windows, please do not mangle filenames it does not accept" and "please store and expose proper Unix permissions"?

A rough start for ksmbd

Posted Oct 27, 2021 11:28 UTC (Wed) by mbunkus (subscriber, #87248) [Link]

I honestly don't know the answer to that, as I'm usually working in mixed environments, forcing me to forgo characters in file names not supported on Windows anyway. And I don't usually use traditional Unix permission setups for stuff on CIFS shares.

You can look into the "unix extensions" parameter on Samba.

My comment was really just an answer to nybble41's requirements, not a general endorsement of CIFS as the one and only network file system. That being said, I'm still envious of the fine-grained controls Samba offers whenever I run into the various limitations of what NFS can do.

A rough start for ksmbd

Posted Oct 27, 2021 16:43 UTC (Wed) by nybble41 (subscriber, #55106) [Link] (6 responses)

Can you use SMB/CIFS over the Internet these days (without a VPN), or is that still considered insecure? I used to run a Samba server for interoperability with Windows clients, just over the LAN, but the authentication requirements kept changing (on the Windows side) and I eventually decided it wasn't worth the security risk. I always heard that one shouldn't allow SMB connections outside the local network, but perhaps that's changed.

If the security were comparable to SSH (including authentication via public keys rather than passwords) then I would agree: CIFS has most of the other properties I'm looking for. You can even set up a multi-user mount point and have the kernel track per-user login credentials using the cifscreds utility.
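
A sketch of that setup (untested; server, share, and paths are placeholders):

    # root sets up the shared mount point once
    mount -t cifs //server/share /mnt/share -o multiuser,vers=3.1.1,seal,credentials=/etc/cifs.cred
    # each user then stores their own credentials for that server
    cifscreds add server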

A rough start for ksmbd

Posted Oct 27, 2021 17:24 UTC (Wed) by mbunkus (subscriber, #87248) [Link]

Not sure, hopefully someone else can chime in. A quick Google search suggests that most everyone still recommends against it. I'm not aware of anything like ssh public key authentication for Samba (or the SMB protocol), even though it does support Kerberos, which is rather involved to set up manually.

It surely isn't trivial as you absolutely must restrict the protocol version to the latest one (SMB3.1 or so) due to security issues in earlier versions. Then again, you have to do that with all the usual services; one shouldn't run HTTPS with SSL 1.0 anymore either, after all. And no, most Apache & nginx default installations on current popular server distributions do not come with the best/tightest SSL/TLS security settings either.

Things are… complicated. What I don't get, though, is the naysayers offering things such as Nextcloud/Owncloud (web-based file hosting services) as a supposedly secure alternative. What's more secure about it? Both run over protocols for which older versions have security issues. Samba-the-project has had a couple of well-known security issues, but then again so do Nextcloud/Owncloud. Both usually authenticate via user & password (unless the server belongs to a company environment where Kerberos is used for Samba & maybe SAML for NC/OC). They're both… roughly identical. What am I missing here?

I do use it regularly for backups that are encrypted on the client side, accessing my hosting provider's storage via CIFS. There are two different layers (factors) of security there, and that suffices for me; the other alternatives are NFS without any kind of transport-layer security, or sshfs, being its usual slow and sometimes unreliable self. Meh.

A rough start for ksmbd

Posted Oct 28, 2021 0:44 UTC (Thu) by nybble41 (subscriber, #55106) [Link] (3 responses)

Replying to myself for follow-up.

> Can you use SMB/CIFS over the Internet these days (without a VPN), or is that still considered insecure?

The only notable example I found of anything similar was the use of SMB 3.1.1 in Microsoft Azure, which isn't exactly "over the Internet" but comes fairly close. But everywhere else the consensus seemed to be "don't use SMB, even SMB 3, over the Internet without a VPN."

> You can even set up a multi-user mount point and have the kernel track per-user login credentials using the cifscreds utility.

Despite the warnings, I spent a few hours configuring the most secure Samba configuration I could come up with for Linux-to-Linux file sharing (forcing SMB 3.1.1, inhibiting anonymous / guest logins, disabling netbios) and attempted to make this work.
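
For reference, that kind of hardening in smb.conf involves parameters along these lines (illustrative values, not the exact configuration described above):

    [global]
        server min protocol = SMB3_11
        smb encrypt = required
        disable netbios = yes
        smb ports = 445
        map to guest = Never
        restrict anonymous = 2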

The first obstacle I encountered was that Samba (or at least the latest version available in any Debian release: 4.13) doesn't support Unix extensions in SMB 3 mode—or the POSIX extensions which are meant to replace them. The Linux kernel supports them, but the server does not. Easy enough to work around—just mount without Unix or POSIX extensions. But this means certain features are unavailable.

The real problem, though, was that there does not appear to be any way to set up a mount point for a SMB 3 share in multiuser mode without providing a username and password at mount time for an account with access to that share. This completely defeats the point of the "multiuser" option. The credentials which can access the share(s) should only be provided by individual users via the cifscreds utility—they aren't available when the share is mounted from /etc/fstab or a systemd mount unit. Which implies that the kernel should just set up the mount point locally and not actually connect to the server until a user comes along with login credentials, but in practice the kernel tries to connect immediately. Allowing that connection to succeed so that it will create the mount point would mean either storing one user's credentials for the entire system to use or else opening up the share to guest users on the server, neither of which is an attractive option.

Anyway, it was an interesting challenge and I learned a lot about configuring modern Samba versions, but I'll be sticking with SSHFS for the foreseeable future.

A rough start for ksmbd

Posted Nov 2, 2021 13:00 UTC (Tue) by JanC_ (guest, #34940) [Link] (1 responses)

You don't have to automount from fstab to use a remote filesystem…

A rough start for ksmbd

Posted Nov 2, 2021 20:48 UTC (Tue) by nybble41 (subscriber, #55106) [Link]

> You don't have to automount from fstab to use a remote filesystem…

Naturally. But if you're setting up a mount point with -o multiuser then you're probably doing so as root (with or without /etc/fstab) and not as one of the (locally) unprivileged users with the login credentials for that share on the server. The mechanics of -o multiuser are that when a user accesses the local mount point the kernel gets the credentials from that user's keyring and establishes a new connection to the server for that user. It doesn't make sense to require "default credentials" to set up the mount point.

The alternative is to install mount.cifs with the SUID bit enabled and let each user mount their own shares, which works (more or less, if you're okay with the Windows version of the SMB3 protocol without POSIX extensions) but isn't as nice as having a common multi-user mount point.

A rough start for ksmbd

Posted Nov 2, 2021 13:20 UTC (Tue) by mbunkus (subscriber, #87248) [Link]

I think the way "-o multiuser" is supposed to work is without automounting but with Kerberos credentials. The initial mount attempt will have to be made with the machine's Kerberos key (keytab). All subsequent accesses by users to said mount point will then be made with the user's Kerberos credentials, though.
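
A rough sketch of that Kerberos-based setup (untested; server and share names are placeholders, and it assumes cifs.upcall is configured to use the host keytab):

    # initial mount authenticated with the machine credential
    mount -t cifs //server/share /mnt/share -o multiuser,sec=krb5,vers=3.1.1
    # users who have run kinit then access /mnt/share with their own tickets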

I've never set that up without Kerberos, though.

[1] Maybe that initial mount could also be done via automounting, not at boot, though I don't know whether or not that works when the initial request for a currently unmounted directory comes from a user process.

A rough start for ksmbd

Posted Oct 29, 2021 3:49 UTC (Fri) by Fowl (subscriber, #65667) [Link]

Well "SMB over QUIC" is now a thing apparently.

https://techcommunity.microsoft.com/t5/itops-talk-blog/sm...

A rough start for ksmbd

Posted Oct 28, 2021 1:51 UTC (Thu) by neilbrown (subscriber, #359) [Link] (7 responses)

> but rather performing all accesses as the particular user whose credentials were used to authenticate to the server

This doesn't mean anything for NFS. NFS doesn't authenticate a connection, it authenticates each request.

With NFSv4, there is a "first" request (EXCHANGE_ID I think in v4.1 and v4.2) and almost all other requests inherit a "state" from that. This is mostly used for clear ordering and exactly-once semantics.
You seem to be suggesting that the credentials used to authenticate all subsequent requests should be ignored, and the credentials of the "first" request should be used throughout.

I don't think that would be useful with any current NFS client, as they use "machine" credentials to authenticate the state management, and that doesn't necessarily map to any UID. Obviously you could change the NFS client to behave differently, but then you would just change it to send the credential you want the server to honour.

What precisely is it that you want to achieve? I'm in favour of making NFS useful for more use-cases, but we would need a clear description of what the use-case is.

A rough start for ksmbd

Posted Oct 28, 2021 17:19 UTC (Thu) by nybble41 (subscriber, #55106) [Link] (6 responses)

> NFS doesn't authenticate a connection, it authenticates each request. … You seem to be suggesting that the credentials used to authenticate all subsequent requests should be ignored, and the credentials of the "first" request should be used throughout.

Not exactly. The CIFS/SMB multiuser model is a closer fit, where the kernel maintains the credentials for each server in a per-user keyring. One would need to do something about the flaw that SMB multiuser mounts still require valid credentials for an account with access to the share at mount time[0], though perhaps an NFS-based equivalent wouldn't have that problem. It doesn't really matter whether there is a single connection or multiple connections as long as the credentials are not tied to a specific shared UID or username between the client and the server and all access checks are enforced on the server (i.e. the client can be untrusted). And of course I'd rather have POSIX/Linux filesystem semantics like NFS as opposed to a protocol originally designed around the Windows VFS. The protocol would obviously need to be hardened and encrypted to be a suitable alternative to SSHFS (SFTP) over the Internet and not just LANs. Regarding authentication, I currently require public keys for all SSH logins on my server, and I'd rather not go back to passwords.

The full use case is basically this: Given any random Linux server which can be accessed through SSH, I would like to be able to mount a filesystem from this server from a separately-administered client machine using a kernel-based filesystem module, with the full POSIX semantics available from NFSv4 mounts and without the overhead and limitations of FUSE. The same mount point should be available to multiple users on the client, with each user accessing files on the server through their own existing SSH login credentials. In other words: Starting with SMB-style multiuser mounts, allow mounting without any default credentials, use the NFS protocol for the actual filesystem operations, and add public-key authentication and secure encryption akin to SSH.

(One option for the authentication would be to actually perform an SSH login in userspace when adding the credentials with a fixed command which, on success, registers a temporary session key which can be loaded into the client's keyring and used for all further requests. This seems like it would be fairly ergonomic and wouldn't require the kernel to implement all the different authentication types supported by SSH.)

The existing SMB3 support would probably be "good enough", though not ideal due to limited POSIX support, if it weren't for the issue of requiring mount-time credentials. I could even emulate SSH authentication by scripting a remote smbpasswd command with a temporary password, though that only allows one client machine at a time for each account and might involve running smbpasswd as root (with restricted options) to allow a new temporary password to be set without knowing the old one.

[0] https://lwn.net/Articles/874180/

A rough start for ksmbd

Posted Oct 28, 2021 22:17 UTC (Thu) by nix (subscriber, #2304) [Link] (4 responses)

Another way to emulate this stuff would be doing what sftp does: use SSH subsystems. An ordinary command that forks ssh -s $subsystem_name implements the client side, using FUSE; it serializes the requests, echoes them into the ssh -s process's stdin. The server side of this is an ordinary filter forked by sshd just as it forks sftp-server; this takes the serialized requests from the client on stdin, does... whatever on the filesystem (using ordinary fs ops: no need for special permission magic or fsuids because you are *already* the right user, properly authenticated by ssh), then serializes the results over its stdout. The client-side program receives them from its ssh -s invocation and hands them back via FUSE.
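
The plumbing for that is small; roughly (the subsystem name and paths are hypothetical):

    # server side: one line in sshd_config
    Subsystem   netfsd   /usr/local/libexec/netfsd-server

    # client side: the FUSE helper spawns
    ssh -s user@host netfsd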

A place to start on the server side of this is already written in the form of the sftp subsystem, though it doesn't implement remotely enough operations and probably the serialization protocol should be rethought, since we are not at all wedded to the sftp protocol. The biggest problem is that by default this thing would be single-threaded, but a multithreaded version is perfectly possible that fires up multiple worker threads (possibly in an expanding-as-needed thread pool), kicks off separate ssh -s's for each one, and lets things rip accordingly.

Nobody has written any of this, but it's purely userspace coding, likely fairly humdrum, and the performance impact of FUSE is probably going to be ignorable compared to the unavoidable performance impact of, well, using SSH (and honestly for all but really big bulk ops or ops on machines with slow CPUs I think you won't even notice that).

... oh dammit I want to write this thing now. (Assuming nobody already has. I haven't even looked, but given the number of people who seem to be even *aware* of SSH subsystems, let alone how damn useful they are for things like this, I do strongly suspect that nothing like this exists.)

A rough start for ksmbd

Posted Oct 29, 2021 6:04 UTC (Fri) by nybble41 (subscriber, #55106) [Link] (3 responses)

I believe you're describing SSHFS[0], though perhaps with a richer subsystem than SFTP. SSHFS is great; I use it all the time. But it does tend to have some issues. FUSE filesystems are rarely as performant as their native equivalents. If nothing else you need several extra context switches for each operation (app -> kernel -> FUSE -> kernel -> app), and in my experience large file transfers without explicit bandwidth limits can make the rest of the filesystem non-responsive. The latter issue may be more of an implementation issue with SSHFS or SFTP rather than FUSE itself. It's not strictly single-threaded, so you can still access other files, but it doesn't seem to load-balance fairly. FUSE filesystems also run as ordinary users while servicing requests from the kernel, perhaps from other users or even root, which means they have some security issues to mitigate which may not apply to an in-kernel filesystem. And it would be difficult (though not impossible) to implement something like SMB3 multiuser mounts via FUSE where all local users see the same paths but access them with their own remote credentials.

An SSHFS equivalent using something like the NFS protocol (without any NFS authentication, just acting as the logged-in user) through an SSH tunnel instead of SFTP would be an interesting design, though it doesn't address my main design goal of migrating the filesystem away from FUSE and into the kernel.

[0] https://github.com/libfuse/sshfs

A rough start for ksmbd

Posted Oct 29, 2021 12:51 UTC (Fri) by nix (subscriber, #2304) [Link] (2 responses)

Sort of. To minimize installation difficulties (since subsystems have to be configured on the server side with one line in sshd_config), sshfs doesn't use the subsystem mechanism but implements its own transport, which means it has to encode everything passing over the wire and relies on the far side's shell being set up sanely and the like. But sshfs is probably a good place to start from!

A true multiuser permission-respecting filesystem... well, I guess if you ssh as root it could setfsuid as needed as requests came in. That's what the fsuid is for, after all.

A rough start for ksmbd

Posted Oct 29, 2021 14:54 UTC (Fri) by nybble41 (subscriber, #55106) [Link] (1 responses)

> sshfs doesn't use the subsystem mechanism but implements its own transport

The code in sshfs.c[0] appears to pass "-s sftp" to the SSH command by default (i.e. using the subsystem mechanism) unless the sftp_server option is set (with a path) or the SSHv1 protocol is selected.

> A true multiuser permission-respecting filesystem... well, I guess if you ssh as root it could setfsuid as needed as requests came in.

The kernel SMB3 implementation creates a separate connection for each user, and I'd probably do the same thing here. Many systems, my own included, don't allow direct root logins via SSH; ssh as root + setfsuid on the server would essentially mean trusting the client machine with root access to the server, and even with restrictions such as only allowing this one approved subsystem it could be used to bypass SSH login policies.

The FUSE filesystem would need to be set up by root on the client with the allow_other option to permit shared access. You could have an interface for users to link their ssh-agent to the FUSE filesystem so it can connect on their behalf (using keys), though I'm sure there would be all sorts of interesting security and UX implications.

[0] https://github.com/libfuse/sshfs/blob/master/sshfs.c

A rough start for ksmbd

Posted Oct 29, 2021 17:32 UTC (Fri) by nix (subscriber, #2304) [Link]

> The code in sshfs.c[0] appears to pass "-s sftp" to the SSH command by default (i.e. using the subsystem mechanism) unless the sftp_server option is set (with a path) or the SSHv1 protocol is selected.

OK I'm too tired to think then, or simply can't read. It really is there and really obvious :) I guess that shows I was thinking of the right design, since sshfs is already doing it!

OK, so the right thing to do is to soup up sftp-server until it can do everything FUSE can be asked for, then soup up sshfs to talk to it and add a thread pool etc. to it :) If this doesn't work (rejected by upstream), sshfs could ship its own variant (under another name: sshfs-server) and use it if set up on a remote system.

A rough start for ksmbd

Posted Oct 29, 2021 4:14 UTC (Fri) by neilbrown (subscriber, #359) [Link]

Thanks for the extra context.

I interpret your problem description as "You want a key distribution protocol based on ssh rather than kerberos, and you want NFS to be able to work with the keys thus distributed".

NFS is designed to have pluggable authentication systems, but krb5 wrapped in rpcsec/gss is the only one that is actually implemented.
The kernel "knows" about krb5 certificates and encryption scheme, but out-sources to user-space for distributing those certificates and keys.

I wonder if it would be possible to use an ssh-based scheme to distribute keys. I have no knowledge of the internals of krb5 certificates, but my guess is that it isn't completely out of the question. You would need to modify or replace gssproxy on the server and rpc.gssd on the client.

An alternate possible direction involves NFS over TLS. There is a draft standard for this, and I think there is prototype code. Whether the standard allows the credential for the connection to be used for FS requests, I don't know. If it did, then this might be a direction that could be standards-compliant and so more likely to be implemented widely.

