LWN: Comments on "A rough start for ksmbd" https://lwn.net/Articles/871866/ This is a special feed containing comments posted to the individual LWN article titled "A rough start for ksmbd". A rough start for ksmbd https://lwn.net/Articles/901954/ https://lwn.net/Articles/901954/ llamafilm <div class="FormattedComment"> One of the main reasons stated for including it in the kernel is better RDMA support. This is essential for speeds above 10Gbps. SMB Direct on a Windows server can get pretty close to saturating a 100GbE connection with fairly low CPU usage. (I haven&#x27;t tried more than one 100Gb connection yet). I&#x27;m curious to hear if anyone has experience with high throughput RDMA in Samba vs ksmbd.<br> </div> Thu, 21 Jul 2022 01:24:16 +0000 A rough start for ksmbd https://lwn.net/Articles/874867/ https://lwn.net/Articles/874867/ nybble41 <div class="FormattedComment"> <font class="QuotedText">&gt; You don&#x27;t have to automount from fstab to use a remote filesystem…</font><br> <p> Naturally. But if you&#x27;re setting up a mount point with -o multiuser then you&#x27;re probably doing so as root (with or without /etc/fstab) and not as one of the (locally) unprivileged users with the login credentials for that share on the server. The mechanics of -o multiuser are that when a user accesses the local mount point the kernel gets the credentials from that user&#x27;s keyring and establishes a new connection to the server for that user. It doesn&#x27;t make sense to require &quot;default credentials&quot; to set up the mount point.<br> <p> The alternative is to install mount.cifs with the SUID bit enabled and let each user mount their own shares, which works (more or less, if you&#x27;re okay with the Windows version of the SMB3 protocol without POSIX extensions) but isn&#x27;t as nice as having a common multi-user mount point.<br> </div> Tue, 02 Nov 2021 20:48:02 +0000 A rough start for ksmbd https://lwn.net/Articles/874736/ https://lwn.net/Articles/874736/ mbunkus <div class="FormattedComment"> I think the way &quot;-o multiuser&quot; is supposed to work is without automounting but with Kerberos credentials. The initial mount attempt will have to be made with the machine&#x27;s Kerberos key (keytab). All subsequent accesses by users to said mount point will then be made with the user&#x27;s Kerberos credentials, though.<br> <p> I&#x27;ve never set that up without Kerberos, though.<br> <p> [1] Maybe that initial mount could also be done via automounting, not at boot, though I don&#x27;t know whether or not that works when the initial request for a currently unmounted directory comes from a user process.<br> </div> Tue, 02 Nov 2021 13:20:03 +0000 A rough start for ksmbd https://lwn.net/Articles/874734/ https://lwn.net/Articles/874734/ JanC_ <div class="FormattedComment"> You don&#x27;t have to automount from fstab to use a remote filesystem…<br> </div> Tue, 02 Nov 2021 13:00:49 +0000 A rough start for ksmbd https://lwn.net/Articles/874392/ https://lwn.net/Articles/874392/ nix <div class="FormattedComment"> <font class="QuotedText">&gt; The code in sshfs.c[0] appears to pass &quot;-s sftp&quot; to the SSH command by default (i.e. using the subsystem mechanism) unless the sftp_server option is set (with a path) or the SSHv1 protocol is selected.</font><br> <p> OK I&#x27;m too tired to think then, or simply can&#x27;t read.
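<p> (To spell the plumbing out for anyone following along: the server side of a subsystem is a single line in sshd_config, and the client simply asks for it by name. The binary path below is Debian&#x27;s; it varies by distribution.)<br>
<pre>
# server: one line in /etc/ssh/sshd_config (path varies by distro)
Subsystem sftp /usr/lib/openssh/sftp-server

# client: roughly what sshfs runs for you under the hood
ssh -s user@host sftp
</pre>
<p>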
It really is there and really obvious :) I guess that shows I was thinking of the right design, since sshfs is already doing it!<br> <p> OK, so the right thing to do is to soup up sftp-server until it can do everything FUSE can be asked for, then soup up sshfs to talk to it and add a thread pool etc to it :) If this doesn&#x27;t work (rejected by upstream), sshfs could ship its own variant (under another name: sshfs-server) and use it if set up on a remote system.<br> </div> Fri, 29 Oct 2021 17:32:06 +0000 A rough start for ksmbd https://lwn.net/Articles/874362/ https://lwn.net/Articles/874362/ nybble41 <div class="FormattedComment"> <font class="QuotedText">&gt; sshfs doesn&#x27;t use the subsystem mechanism but implements its own transport</font><br> <p> The code in sshfs.c[0] appears to pass &quot;-s sftp&quot; to the SSH command by default (i.e. using the subsystem mechanism) unless the sftp_server option is set (with a path) or the SSHv1 protocol is selected.<br> <p> <font class="QuotedText">&gt; A true multiuser permission-respecting filesystem... well, I guess if you ssh as root it could setfsuid as needed as requests came in.</font><br> <p> The kernel SMB3 implementation creates a separate connection for each user, and I&#x27;d probably do the same thing here. Many systems, my own included, don&#x27;t allow direct root logins via SSH; ssh as root + setfsuid on the server would essentially mean trusting the client machine with root access to the server, and even with restrictions such as only allowing this one approved subsystem it could be used to bypass SSH login policies.<br> <p> The FUSE filesystem would need to be set up by root on the client with the allow_other option to permit shared access. You could have an interface for users to link their ssh-agent to the FUSE filesystem so it can connect on their behalf (using keys), though I&#x27;m sure there would be all sorts of interesting security and UX implications.<br> <p> [0] <a href="https://github.com/libfuse/sshfs/blob/master/sshfs.c">https://github.com/libfuse/sshfs/blob/master/sshfs.c</a><br> </div> Fri, 29 Oct 2021 14:54:33 +0000 A rough start for ksmbd https://lwn.net/Articles/874336/ https://lwn.net/Articles/874336/ nix <div class="FormattedComment"> Sort of. To minimize installation difficulties (since subsystems have to be configured on the server side with one line in sshd_config), sshfs doesn&#x27;t use the subsystem mechanism but implements its own transport, which means it has to encode everything passing over the wire and relies on the far side&#x27;s shell being set up sanely and the like. But sshfs is probably a good place to start from!<br> <p> A true multiuser permission-respecting filesystem... well, I guess if you ssh as root it could setfsuid as needed as requests came in. That&#x27;s what the fsuid is for, after all.<br> </div> Fri, 29 Oct 2021 12:51:12 +0000 A rough start for ksmbd https://lwn.net/Articles/874315/ https://lwn.net/Articles/874315/ nybble41 <div class="FormattedComment"> I believe you&#x27;re describing SSHFS[0], though perhaps with a richer subsystem than SFTP. SSHFS is great; I use it all the time. But it does tend to have some issues. FUSE filesystems are rarely as performant as their native equivalents. If nothing else you need several extra context switches for each operation (app -&gt; kernel -&gt; FUSE -&gt; kernel -&gt; app), and in my experience large file transfers without explicit bandwidth limits can make the rest of the filesystem non-responsive.
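<p> (For reference, the kind of mount I&#x27;m talking about is nothing exotic; something like the following, with made-up names. Newer sshfs can also spread requests over several SFTP connections, which helps a little when one big transfer starves everything else.)<br>
<pre>
# a typical shared sshfs mount; allow_other needs user_allow_other in /etc/fuse.conf
sshfs alice@fileserver:/srv/data /mnt/data -o allow_other,reconnect

# sshfs 3.7+ only: use up to four SSH connections instead of one
sshfs alice@fileserver:/srv/data /mnt/data -o allow_other,max_conns=4
</pre>
<p>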
The latter issue may be more of an implementation issue with SSHFS or SFTP than with FUSE itself. It&#x27;s not strictly single-threaded, so you can still access other files, but it doesn&#x27;t seem to load-balance fairly. FUSE filesystems also run as ordinary users while servicing requests from the kernel, perhaps from other users or even root, which means they have some security issues to mitigate which may not apply to an in-kernel filesystem. And it would be difficult (though not impossible) to implement something like SMB3 multiuser mounts via FUSE where all local users see the same paths but access them with their own remote credentials.<br> <p> An SSHFS equivalent using something like the NFS protocol (without any NFS authentication, just acting as the logged-in user) through an SSH tunnel instead of SFTP would be an interesting design, though it doesn&#x27;t address my main design goal of migrating the filesystem away from FUSE and into the kernel.<br> <p> [0] <a href="https://github.com/libfuse/sshfs">https://github.com/libfuse/sshfs</a><br> </div> Fri, 29 Oct 2021 06:04:39 +0000 A rough start for ksmbd https://lwn.net/Articles/874310/ https://lwn.net/Articles/874310/ neilbrown <div class="FormattedComment"> Thanks for the extra context.<br> <p> I interpret your problem description as &quot;You want a key distribution protocol based on ssh rather than kerberos, and you want NFS to be able to work with the keys thus distributed&quot;.<br> <p> NFS is designed to have pluggable authentication systems, but krb5 wrapped in rpcsec/gss is the only one that is actually implemented.<br> The kernel &quot;knows&quot; about krb5 certificates and encryption scheme, but outsources to user-space for distributing those certificates and keys.<br> <p> I wonder if it would be possible to use an ssh-based scheme to distribute keys. I have no knowledge of the internals of krb5 certificates, but my guess is that it isn&#x27;t completely out of the question. You would need to modify or replace gssproxy on the server and rpc.gssd on the client.<br> <p> An alternate possible direction involves NFS over TLS. There is a draft standard for this, and I think there is prototype code. Whether the standard allows the credential for the connection to be used for FS requests, I don&#x27;t know. If it did, then this might be a direction that could be standards-compliant and so more likely to be implemented widely.<br> <p> </div> Fri, 29 Oct 2021 04:14:58 +0000 A rough start for ksmbd https://lwn.net/Articles/874309/ https://lwn.net/Articles/874309/ Fowl <div class="FormattedComment"> Well &quot;SMB over QUIC&quot; is now a thing apparently.<br> <p> <a href="https://techcommunity.microsoft.com/t5/itops-talk-blog/smb-over-quic-files-without-the-vpn/ba-p/1183449">https://techcommunity.microsoft.com/t5/itops-talk-blog/sm...</a><br> </div> Fri, 29 Oct 2021 03:49:19 +0000 A rough start for ksmbd https://lwn.net/Articles/874296/ https://lwn.net/Articles/874296/ nix <div class="FormattedComment"> Another way to emulate this stuff would be doing what sftp does: use SSH subsystems. An ordinary command that forks ssh -s $subsystem_name implements the client side, using FUSE; it serializes the requests, echoes them into the ssh -s process&#x27;s stdin. The server side of this is an ordinary filter forked by sshd just as it forks sftp-server; this takes the serialized requests from the client on stdin, does...
whatever on the filesystem (using ordinary fs ops: no need for special permission magic or fsuids because you are *already* the right user, properly authenticated by ssh), then serializes the results over its stdout. The client-side program receives them from its ssh -s invocation and hands them back via FUSE.<br> <p> A place to start on the server side of this is already written in the form of the sftp subsystem, though it doesn&#x27;t implement remotely enough operations and probably the serialization protocol should be rethought, since we are not at all wedded to the sftp protocol. The biggest problem is that by default this thing would be single-threaded, but a multithreaded version is perfectly possible that fires up multiple worker threads (possibly in an expanding-as-needed thread pool), kicks off separate ssh -s&#x27;s for each one, and lets things rip accordingly.<br> <p> Nobody has written any of this, but it&#x27;s purely userspace coding, likely fairly humdrum, and the performance impact of FUSE is probably going to be ignorable compared to the unavoidable performance impact of, well, using SSH (and honestly for all but really big bulk ops or ops on machines with slow CPUs I think you won&#x27;t even notice that).<br> <p> ... oh dammit I want to write this thing now. (Assuming nobody already has. I haven&#x27;t even looked, but given the number of people who seem to be even *aware* of SSH subsystems, let alone how damn useful they are for things like this, I do strongly suspect that nothing like this exists.)<br> </div> Thu, 28 Oct 2021 22:17:11 +0000 A rough start for ksmbd https://lwn.net/Articles/874222/ https://lwn.net/Articles/874222/ nybble41 <div class="FormattedComment"> <font class="QuotedText">&gt; NFS doesn&#x27;t authenticate a connection, it authenticates each request. … You seem to be suggesting that the credentials used to authenticate all subsequent requests should be ignored, and the credentials of the &quot;first&quot; request should be used throughout.</font><br> <p> Not exactly. The CIFS/SMB multiuser model is a closer fit, where the kernel maintains the credentials for each server in a per-user keyring. One would need to do something about the flaw that SMB multiuser mounts still require valid credentials for an account with access to the share at mount time[0], though perhaps an NFS-based equivalent wouldn&#x27;t have that problem. It doesn&#x27;t really matter whether there is a single connection or multiple connections as long as the credentials are not tied to a specific shared UID or username between the client and the server and all access checks are enforced on the server (i.e. the client can be untrusted). And of course I&#x27;d rather have POSIX/Linux filesystem semantics like NFS as opposed to a protocol originally designed around the Windows VFS. The protocol would obviously need to be hardened and encrypted to be a suitable alternative to SSHFS (SFTP) over the Internet and not just LANs. Regarding authentication, I currently require public keys for all SSH logins on my server, and I&#x27;d rather not go back to passwords.<br> <p> The full use case is basically this: Given any random Linux server which can be accessed through SSH, I would like to be able to mount a filesystem from this server from a separately-administered client machine using a kernel-based filesystem module, with the full POSIX semantics available from NFSv4 mounts and without the overhead and limitations of FUSE.
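<p> (A purely hypothetical sketch of that workflow; none of these commands, options, or helpers exist, and all names are invented:)<br>
<pre>
# hypothetical: root creates the shared mount point once, with no credentials at all
mount -t sshnfs server.example.com:/srv/data /mnt/data -o multiuser

# hypothetical helper, analogous to cifscreds: each user attaches their own
# SSH key or agent, and the client then opens a per-user session to the server
sshnfs-creds add server.example.com
</pre>
<p>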
The same mount point should be available to multiple users on the client, with each user accessing files on the server through their own existing SSH login credentials. In other words: Starting with SMB-style multiuser mounts, allow mounting without any default credentials, use the NFS protocol for the actual filesystem operations, and add public-key authentication and secure encryption akin to SSH.<br> <p> (One option for the authentication would be to actually perform an SSH login in userspace when adding the credentials with a fixed command that, on success, registers a temporary session key that can be loaded into the client&#x27;s keyring and used for all further requests. This seems like it would be fairly ergonomic and wouldn&#x27;t require the kernel to implement all the different authentication types supported by SSH.)<br> <p> The existing SMB3 support would probably be &quot;good enough&quot;, though not ideal due to limited POSIX support, if it weren&#x27;t for the issue of requiring mount-time credentials. I could even emulate SSH authentication by scripting a remote smbpasswd command with a temporary password, though that only allows one client machine at a time for each account and might involve running smbpasswd as root (with restricted options) to allow a new temporary password to be set without knowing the old one.<br> <p> [0] <a href="https://lwn.net/Articles/874180/">https://lwn.net/Articles/874180/</a><br> </div> Thu, 28 Oct 2021 17:19:55 +0000 A rough start for ksmbd https://lwn.net/Articles/874181/ https://lwn.net/Articles/874181/ neilbrown <div class="FormattedComment"> <font class="QuotedText">&gt; but rather performing all accesses as the particular user whose credentials were used to authenticate to the server</font><br> <p> This doesn&#x27;t mean anything for NFS. NFS doesn&#x27;t authenticate a connection, it authenticates each request.<br> <p> With NFSv4, there is a &quot;first&quot; request (EXCHANGE_ID I think in v4.1 and v4.2) and almost all other requests inherit a &quot;state&quot; from that. This is mostly used for clear ordering and exactly-once semantics.<br> You seem to be suggesting that the credentials used to authenticate all subsequent requests should be ignored, and the credentials of the &quot;first&quot; request should be used throughout.<br> <p> I don&#x27;t think that would be useful with any current NFS client, as they use &quot;machine&quot; credentials to authenticate the state management, and that doesn&#x27;t necessarily map to any UID. Obviously you could change the NFS client to behave differently, but then you would just change it to send the credential you want the server to honour.<br> <p> What precisely is it that you want to achieve? I&#x27;m in favour of making NFS useful for more use-cases, but we would need a clear description of what the use-case is.<br> </div> Thu, 28 Oct 2021 01:51:39 +0000 A rough start for ksmbd https://lwn.net/Articles/874180/ https://lwn.net/Articles/874180/ nybble41 <div class="FormattedComment"> Replying to myself for follow-up.<br> <p> <font class="QuotedText">&gt; Can you use SMB/CIFS over the Internet these days (without a VPN), or is that still considered insecure?</font><br> <p> The only notable example I found of anything similar was the use of SMB 3.1.1 in Microsoft Azure, which isn&#x27;t exactly &quot;over the Internet&quot; but comes fairly close.
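<p> (As far as I can tell the Azure arrangement is just a stock CIFS mount with encryption made mandatory; going by Microsoft&#x27;s documentation it amounts to something like this, with placeholder names. The &quot;seal&quot; option is what requests SMB-layer encryption.)<br>
<pre>
sudo mount -t cifs //mystorageacct.file.core.windows.net/myshare /mnt/azure \
    -o vers=3.1.1,username=mystorageacct,password=$STORAGE_KEY,seal,serverino
</pre>
<p>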
But everywhere else the consensus seemed to be &quot;don&#x27;t use SMB, even SMB 3, over the Internet without a VPN.&quot;<br> <p> <font class="QuotedText">&gt; You can even set up a multi-user mount point and have the kernel track per-user login credentials using the cifscreds utility.</font><br> <p> Despite the warnings, I spent a few hours crafting the most secure Samba configuration I could come up with for Linux-to-Linux file sharing (forcing SMB 3.1.1, inhibiting anonymous / guest logins, disabling netbios) and attempted to make this work.<br> <p> The first obstacle I encountered was that Samba (or at least the latest version available in any Debian release: 4.13) doesn&#x27;t support Unix extensions in SMB 3 mode—or the POSIX extensions which are meant to replace them. The Linux kernel supports them, but the server does not. Easy enough to work around—just mount without Unix or POSIX extensions. But this means certain features are unavailable.<br> <p> The real problem, though, was that there does not appear to be any way to set up a mount point for an SMB 3 share in multiuser mode without providing a username and password at mount time for an account with access to that share. This completely defeats the point of the &quot;multiuser&quot; option. The credentials which can access the share(s) should only be provided by individual users via the cifscreds utility—they aren&#x27;t available when the share is mounted from /etc/fstab or a systemd mount unit. Which implies that the kernel should just set up the mount point locally and not actually connect to the server until a user comes along with login credentials, but in practice the kernel tries to connect immediately. Allowing that connection to succeed so that it will create the mount point would mean either storing one user&#x27;s credentials for the entire system to use or else opening up the share to guest users on the server, neither of which is an attractive option.<br> <p> Anyway, it was an interesting challenge and I learned a lot about configuring modern Samba versions, but I&#x27;ll be sticking with SSHFS for the foreseeable future.<br> </div> Thu, 28 Oct 2021 00:44:25 +0000 A rough start for ksmbd https://lwn.net/Articles/874158/ https://lwn.net/Articles/874158/ mbunkus <div class="FormattedComment"> Not sure, hopefully someone else can chime in. A quick Google search suggests that mostly everyone still recommends against it. I&#x27;m not aware of anything like ssh public key authentication for Samba (or the SMB protocol), even though it does use Kerberos — which is rather involved to set up manually.<br> <p> It surely isn&#x27;t trivial, as you absolutely must restrict the protocol version to the latest one (SMB3.1 or so) due to security issues in earlier versions. Then again, you have to do that with all the usual services; one shouldn&#x27;t run HTTPS with SSL 1.0 anymore either, after all. And no, most Apache &amp; nginx default installations on current popular server distributions do not come with the best/tightest SSL/TLS security settings either.<br> <p> Things are… complicated. What I don&#x27;t get, though, is the naysayers offering things such as Nextcloud/Owncloud (web-based file hosting services) as a supposedly secure alternative. What&#x27;s more secure about it? Both run over protocols for which older versions have security issues. Samba-the-project has had a couple of well-known security issues, but then again so do Nextcloud/Owncloud.
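<p> (For the protocol-version point above: the smb.conf side of such a lockdown is only a handful of lines. Roughly this, using parameter names from the Samba 4.13 era; check smb.conf(5) for your version.)<br>
<pre>
[global]
    server min protocol = SMB3_11
    smb encrypt = required
    disable netbios = yes
    smb ports = 445
    map to guest = Never
    restrict anonymous = 2
</pre>
<p>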
Both Samba and Nextcloud/Owncloud usually authenticate via user &amp; password (unless the server belongs to a company environment where Kerberos is used for Samba &amp; maybe SAML for NC/OC). They&#x27;re both… roughly identical. What am I missing here?<br> <p> I do use it regularly for backups that are encrypted on the client side, accessing my hosting provider&#x27;s storage via CIFS. There are two different layers (factors) of security, and that suffices for me, and the other alternatives are NFS without any type of transport layer security and sshfs, being its usual slow and sometimes unreliable self. Meh.<br> </div> Wed, 27 Oct 2021 17:24:45 +0000 A rough start for ksmbd https://lwn.net/Articles/874155/ https://lwn.net/Articles/874155/ nybble41 <div class="FormattedComment"> Can you use SMB/CIFS over the Internet these days (without a VPN), or is that still considered insecure? I used to run a Samba server for interoperability with Windows clients, just over the LAN, but the authentication requirements kept changing (on the Windows side) and I eventually decided it wasn&#x27;t worth the security risk. I always heard that one shouldn&#x27;t allow SMB connections outside the local network, but perhaps that&#x27;s changed.<br> <p> If the security were comparable to SSH (including authentication via public keys rather than passwords) then I would agree, CIFS has most of the other properties I&#x27;m looking for. You can even set up a multi-user mount point and have the kernel track per-user login credentials using the cifscreds utility.<br> </div> Wed, 27 Oct 2021 16:43:29 +0000 A rough start for ksmbd https://lwn.net/Articles/874098/ https://lwn.net/Articles/874098/ mbunkus <div class="FormattedComment"> I honestly don&#x27;t know the answer to that, as I&#x27;m usually working in mixed environments, forcing me to forgo characters in file names not supported on Windows anyway. And I don&#x27;t usually use traditional Unix permission setups for stuff on CIFS shares.<br> <p> You can look into the &quot;unix extensions&quot; parameter on Samba.<br> <p> My comment was really just an answer to nybble41&#x27;s requirements, not a general endorsement to use CIFS as the one and only network file system. That being said, I&#x27;m still envious of the various fine-grained controls Samba offers whenever I run into the various limitations of what NFS can do.<br> </div> Wed, 27 Oct 2021 11:28:29 +0000 A rough start for ksmbd https://lwn.net/Articles/874096/ https://lwn.net/Articles/874096/ mathstuf <div class="FormattedComment"> When I set up my shared filesystem, I first used CIFS; however, the lack of support for arbitrary filenames and the munging of permissions made it unsuitable. Is there a way to say &quot;I don&#x27;t care about Windows, please do not mangle filenames it does not accept&quot; and &quot;please store and expose proper Unix permissions&quot;?<br> </div> Wed, 27 Oct 2021 11:18:57 +0000 A rough start for ksmbd https://lwn.net/Articles/874086/ https://lwn.net/Articles/874086/ mbunkus <div class="FormattedComment"> All of what you want is what Samba (the project) can offer, and much more (e.g.
forcing authenticated access to use a certain group or to use certain bits in the file permissions, making setting up a shared directory where all group members have full access to all files trivial).<br> </div> Wed, 27 Oct 2021 07:06:56 +0000 A rough start for ksmbd https://lwn.net/Articles/874083/ https://lwn.net/Articles/874083/ nybble41 <div class="FormattedComment"> <font class="QuotedText">&gt; I&#x27;d think instead you want to map everyone to one user, and export with something like (all_squash,anonuid=MYID,anongid=MYGID).</font><br> <p> Actually what I would want is not squashing all requests down to one UID/GID per export, but rather performing all accesses as the particular user whose credentials were used to authenticate to the server (like SSHFS does, or NFS with LDAP and Kerberos, or mount.cifs) without making any assumptions about the UID/GID (or username / group name(s)) on the client. There should also be options to control how the UIDs, GIDs, and permissions of files from the server are presented locally (again, like SSHFS with -o uid=X,gid=Y,umask=NNN).<br> <p> Or perhaps what I really want is just SSHFS with less overhead. (ksshfs?) Until something like that is available, the FUSE implementation works well enough that I don&#x27;t really see a need for NFS.<br> </div> Wed, 27 Oct 2021 02:58:01 +0000 A rough start for ksmbd https://lwn.net/Articles/874054/ https://lwn.net/Articles/874054/ bfields <div class="FormattedComment"> No expert, but my understanding was that Android manages UIDs in a pretty non-traditional way, dynamically allocating UIDs one per app. So there&#x27;s probably not any sensible way to map those individually.<br> <p> I&#x27;d think instead you want to map everyone to one user, and export with something like (all_squash,anonuid=MYID,anongid=MYGID).<br> <p> </div> Tue, 26 Oct 2021 17:45:29 +0000 A rough start for ksmbd https://lwn.net/Articles/873974/ https://lwn.net/Articles/873974/ nybble41 <div class="FormattedComment"> <font class="QuotedText">&gt; Using the same [UG]IDs in your whole network is a far better idea…</font><br> <p> That advice just makes NFS utterly impractical in any situation where you don&#x27;t have absolute control over UID &amp; GID assignments for every system you want to export files to. (You want to export NFS shares to Android without remapping IDs? Good luck with that…)<br> <p> Every so often I start thinking that it would be nice to have a network filesystem without the overhead of FUSE, but the cost of setting up Kerberos (or doing without ID mapping) and the headaches of making that work reliably and securely when the systems may not always be on the same local network always send me back to SSHFS.<br> </div> Mon, 25 Oct 2021 17:37:57 +0000 A rough start for ksmbd https://lwn.net/Articles/873894/ https://lwn.net/Articles/873894/ roblucid <div class="FormattedComment"> Using the same [UG]IDs in your whole network is a far better idea; allowing arbitrary user remapping is asking for a huge steaming mess.<br> </div> Mon, 25 Oct 2021 10:12:19 +0000 Supporting Microsoft filesystem server but not multicast IPC https://lwn.net/Articles/873792/ https://lwn.net/Articles/873792/ HelloWorld <div class="FormattedComment"> Binder has been part of the mainline kernel since 3.19.<br> </div> Sat, 23 Oct 2021 13:54:51 +0000 A rough start for ksmbd https://lwn.net/Articles/872771/ https://lwn.net/Articles/872771/ xophos <div class="FormattedComment"> To me it&#x27;s a distinction without a difference anyway.
I don&#x27;t think that something this complex can be implemented in a secure way.<br> </div> Wed, 13 Oct 2021 06:36:21 +0000 A rough start for ksmbd https://lwn.net/Articles/872583/ https://lwn.net/Articles/872583/ jra <div class="FormattedComment"> It&#x27;s a natural mistake. Even Microsoft web pages have occasionally referenced &quot;the Samba protocol&quot; :-) :-). One of the hazards of being a well-known implementation I guess :-).<br> <p> <p> </div> Mon, 11 Oct 2021 18:55:45 +0000 A rough start for ksmbd https://lwn.net/Articles/872573/ https://lwn.net/Articles/872573/ slowfranklin <div class="FormattedComment"> Yes, the latest tests with io_uring show Samba is actually *faster* than ksmbd at this point. Still working out the details, but generally I would say for streaming IO performance Samba and ksmbd should achieve similar numbers. For metadata-oriented workloads ksmbd is going to come out ahead.<br> </div> Mon, 11 Oct 2021 17:29:40 +0000 A rough start for ksmbd https://lwn.net/Articles/872485/ https://lwn.net/Articles/872485/ bfields <div class="FormattedComment"> Well, that&#x27;s interesting. I&#x27;ve been working on nfs for almost 20 years and I don&#x27;t remember hearing about rpc.ugidd.<br> <p> Looking at old documentation: the old userspace nfsd daemon (which preceded both Ganesha and knfsd) supported a &quot;map_daemon&quot; export option. When that was set, it would query the client&#x27;s rpc.ugidd for id mappings using an rpc protocol. So you ran rpc.ugidd on the client.<br> <p> No distribution carries rpc.ugidd any more, and the map_daemon export option was never supported by knfsd.<br> <p> Might be interesting to know more of the history. Digging through old nfs-utils tarballs (it predates git) might be one way to figure it out.<br> <p> If we were to support uid/gid mapping today, we&#x27;d do it some other way.<br> </div> Mon, 11 Oct 2021 13:19:52 +0000 A rough start for ksmbd https://lwn.net/Articles/872438/ https://lwn.net/Articles/872438/ ballombe <div class="FormattedComment"> s/SAMBA/the SMB protocol/<br> Do not shoot the implementer...<br> </div> Mon, 11 Oct 2021 08:47:58 +0000 A rough start for ksmbd https://lwn.net/Articles/872415/ https://lwn.net/Articles/872415/ Rudd-O <div class="FormattedComment"> Thanks for the amazing software you guys create! 🤘<br> </div> Sun, 10 Oct 2021 14:06:02 +0000 A rough start for ksmbd https://lwn.net/Articles/872409/ https://lwn.net/Articles/872409/ iainn <div class="FormattedComment"> TrueNAS (formerly: FreeNAS) uses Ganesha.<br> <p> TrueNAS is FreeBSD-based, so using an in-kernel Linux implementation wouldn&#x27;t work so well.<br> <p> However, TrueNAS now also has a Linux port.
(Of course, it&#x27;ll be easier to share the NFS config code between Linux and FreeBSD, by sticking with Ganesha.)<br> </div> Sun, 10 Oct 2021 10:51:58 +0000 A rough start for ksmbd https://lwn.net/Articles/872368/ https://lwn.net/Articles/872368/ Baughn <div class="FormattedComment"> It doesn&#x27;t *really* answer your understandable gripe, but assuming you&#x27;re willing to run NixOS everywhere there&#x27;s at least one fix — use a central user database with assigned UIDs, like I do here: <a href="https://github.com/Baughn/machine-config/blob/master/modules/users.nix">https://github.com/Baughn/machine-config/blob/master/modu...</a><br> <p> It works.<br> </div> Sat, 09 Oct 2021 16:24:53 +0000 A rough start for ksmbd https://lwn.net/Articles/872356/ https://lwn.net/Articles/872356/ ballombe <div class="FormattedComment"> Was it not done by rpc.ugidd? What happened to that?<br> </div> Sat, 09 Oct 2021 11:50:56 +0000 A rough start for ksmbd https://lwn.net/Articles/872349/ https://lwn.net/Articles/872349/ xophos <div class="FormattedComment"> Samba is best kept out of the network. Putting it into kernel space is completely bonkers.<br> </div> Sat, 09 Oct 2021 09:32:23 +0000 Supporting Microsoft filesystem server but not multicast IPC https://lwn.net/Articles/872323/ https://lwn.net/Articles/872323/ atai <div class="FormattedComment"> Consider that Android&#x27;s binder is not in the standard kernel either... it is not too bad.<br> </div> Fri, 08 Oct 2021 22:43:51 +0000 Supporting Microsoft filesystem server but not multicast IPC https://lwn.net/Articles/872295/ https://lwn.net/Articles/872295/ bluca <div class="FormattedComment"> Because we can&#x27;t have nice things. Instead, we need to do bad things in userspace to work around kernel deficiencies - like reinventing a different transport protocol from scratch (varlink) instead of having reliable kernel-backed early-boot IPC primitives.<br> </div> Fri, 08 Oct 2021 16:08:39 +0000 A rough start for ksmbd https://lwn.net/Articles/872287/ https://lwn.net/Articles/872287/ bfields <div class="FormattedComment"> I&#x27;m not sure I understand your threat model here.<br> <p> In the absence of kerberos, the server&#x27;s mostly just trusting the clients to correctly represent who&#x27;s accessing the filesystem anyway.<br> <p> I seem to recall one or two people making an attempt at adding this kind of mapping, and it turning out to be more complicated than expected. But that was a while ago. I wonder if any of the container work done since then would be useful here.<br> <p> Anyway, there&#x27;d have to be someone willing to look into it and do the work.<br> <p> </div> Fri, 08 Oct 2021 14:48:10 +0000 A rough start for ksmbd https://lwn.net/Articles/872258/ https://lwn.net/Articles/872258/ bfields <div class="FormattedComment"> Thanks for the link, I hadn&#x27;t read those slides and they&#x27;re interesting.<br> <p> On p. 3, under &quot;Kernel based implementation&quot;, it says &quot;Better I/O performance.&quot;<br> <p> The other goals listed on that and the following slide--I don&#x27;t know; some look like they could be easier if the file server was in-kernel, while for some (ACLs?) I don&#x27;t see offhand why that would help.<br> <p> Historically I think it&#x27;s true that in the case of knfsd the motivation has been the difficulty of providing correct semantics without deeper integration with the filesystems.
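<p> (Even a crude comparison would be informative: export the same directory via the in-kernel and user-space servers in turn, mount it, and run identical fio jobs against the mount point. A sketch with made-up paths; only fio is assumed.)<br>
<pre>
# streaming I/O: large sequential reads and writes over the mounted share
fio --name=seq --directory=/mnt/share --rw=readwrite --bs=1M --size=4G

# small random I/O over the same mount
fio --name=rand --directory=/mnt/share --rw=randrw --bs=4k --size=256M
</pre>
A genuinely metadata-heavy comparison (create/stat/delete storms) would need something like mdtest or smallfile on top of that.
<p>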
But I haven&#x27;t seen a good performance comparison recently.<br> <p> <p> </div> Fri, 08 Oct 2021 14:36:42 +0000 A rough start for ksmbd https://lwn.net/Articles/872252/ https://lwn.net/Articles/872252/ geert <div class="FormattedComment"> I only use NFS for root file systems on development boards.<br> Less tech-savvy people (&quot;family&quot;) just use &quot;Connect to Server&quot; with &quot;sftp://nas/...&quot; in the GUI file manager. No further setup needed.<br> </div> Fri, 08 Oct 2021 09:09:07 +0000 A rough start for ksmbd https://lwn.net/Articles/872249/ https://lwn.net/Articles/872249/ Wol <div class="FormattedComment"> <font class="QuotedText">&gt; nfsd would be a lot more friendly if it let you map arbitrary users at arbitrary hosts to arbitrary local users....without having to set up a whole kerberos scheme to &quot;authenticate&quot; them.</font><br> <p> How hard is it to set up LDAP? Can&#x27;t you implement some simple &quot;single sign on&quot;?<br> <p>
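(For the record, the client side of such a setup is genuinely small. Roughly this sssd.conf, with a made-up domain, plus adding &quot;sss&quot; to the passwd and group lines in /etc/nsswitch.conf:)<br>
<pre>
# /etc/sssd/sssd.conf (illustrative values)
[sssd]
services = nss, pam
domains = example

[domain/example]
id_provider = ldap
ldap_uri = ldap://ldap.example.com
ldap_search_base = dc=example,dc=com
</pre>
<p> Cheers,<br> Wol<br> </div> Fri, 08 Oct 2021 07:32:29 +0000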