A rough start for ksmbd
Why create an in-kernel SMB server at this point? In a sense, ksmbd is not meant to compete with Samba; indeed, it has been developed in cooperation with the Samba project. It is, however, meant to be a more performant and focused solution than Samba, which at this point includes a great deal of functionality beyond simple file serving. Ksmbd claims significant performance improvements on a wide range of benchmarks; the graphs on this page show a doubling of performance on some tests. An in-kernel server is an easier place to support variants like SMB Direct, which uses RDMA to transfer data between systems. Merging into the mainline may also facilitate faster development in general by drawing more eyes to the code. One other reason — which tends to be spoken rather more quietly — is that a new implementation can be licensed under GPLv2, while Samba is GPLv3.
Ksmbd was first posted for review (as "cifsd") by Namjae Jeon in late March; the eighth revision came out just before the opening of the 5.15 merge window in late August. The last version received no review comments, but previous versions had clearly been looked at by a number of developers. Nobody objected when Steve French asked Linus Torvalds to pull ksmbd into the mainline on August 29.
It is not unusual for a new subsystem to receive a lot of fixes after its entry into the mainline kernel. Merging tends to draw a lot more attention to the code, and the number of testers normally goes up, leading to the discovery of more problems. That is what the stabilization period after the merge window is for, after all. That said, the nature of the fixes being applied can give some insight into the quality of the underlying code, and the indications for ksmbd are not entirely good.
The commit history for ksmbd shows a steady stream of fixes, as expected. Worryingly, though, many of the problems being fixed are clearly security issues — not a good thing in a network filesystem implementation. Examples include:
- The code to change ownership and permissions did not check existing file permissions first.
- Failure to validate data lengths could lead to access to invalid data.
- The server would blindly follow symbolic links during pathname lookup (see the sketch below for one way a server can prevent that).
- Numerous failures to validate buffer lengths, such as this one or this one.
All of those fixes were applied after ksmbd landed in the mainline; there are others that came before. Currently, twelve fixes to ksmbd credit Coverity scans in their changelogs.
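To make that flaw class concrete, here is a minimal user-space sketch (not ksmbd's actual code) of the kind of confined pathname resolution a file server needs. Since Linux 5.6, the openat2() system call can refuse both ".." escapes and symbolic links during lookup; the share path and client path below are made up.

    /* sketch: resolve a client-supplied path strictly inside a share */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <linux/openat2.h>
    #include <unistd.h>

    /* share_fd is an O_PATH descriptor for the share's root directory */
    static int open_in_share(int share_fd, const char *client_path)
    {
            struct open_how how = {
                    .flags   = O_RDONLY,
                    /* stay beneath share_fd; never follow symlinks */
                    .resolve = RESOLVE_BENEATH | RESOLVE_NO_SYMLINKS,
            };

            return syscall(SYS_openat2, share_fd, client_path,
                           &how, sizeof(how));
    }

    int main(void)
    {
            int share_fd = open("/srv/share", O_PATH | O_DIRECTORY);

            /* "../../etc/passwd", or a symlink pointing outside the
             * share, now fails (EXDEV/ELOOP) instead of escaping */
            if (open_in_share(share_fd, "../../etc/passwd") < 0)
                    perror("open_in_share");
            return 0;
    }

In-kernel code has the equivalent LOOKUP_BENEATH and LOOKUP_NO_SYMLINKS flags; the point is that the confinement must be enforced on every lookup, not bolted on by scanning for known-bad strings like "../".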
Again, it would not be surprising for a security issue or three to turn up in a new network-filesystem implementation. But ksmbd has shown enough problems to raise a few eyebrows in the kernel community, though the discussion of those problems was evidently held in private for some time. When French pushed another set of ksmbd fixes in mid-September, Kees Cook took the discussion public:
I was looking through the history[1] of the ksmbd work, and I'm kind of surprised at some of the flaws being found here. This looks like new code being written, too, I think (I found[0])? Some of these flaws are pretty foundational filesystem security properties[2] that weren't being tested for, besides the upsetting case of having buffer overflows[3] in an in-kernel filesystem server.

I'm concerned about code quality here, and I think something needs to change about the review and testing processes.
French replied that he was surprised by some of the problems too. He pointed to a wiki page describing the ongoing security review for this code, which seems to have acquired a new urgency. A number of new procedures are being instituted, he said, and there will be testing done at various interoperability events. French said he was "pleased with the progress that is being made", but also conceded that ksmbd "is not ready for production use yet". There are also some things to look forward to on the security front, he continued:

There is some good news (relating to security), once Namjae et al get past these buffer overflow etc. patches.

The NTLMv1 removal has since been merged into the mainline. On reading French's message, Cook responded: "Thanks for making these recent changes; I feel much better about ksmbd's direction".

The work on cleaning up ksmbd proceeds; French pushed another 11 fixes on October 1. At this point, there is little doubt that ksmbd will be properly reviewed and cleaned up; there are eyes on the code, and ksmbd itself is small enough that a comprehensive review should be feasible. At that point, the kernel should have an SMB implementation that is feature-rich, performant, and secure. That said, waiting another kernel development cycle or two for the developers to "get past these buffer overflow etc. patches" before deploying it might well be prudent.

This is all good, but it is still a little worrisome that this code got as far as it did in the condition it was in. It seems clear that security concerns were not at the forefront when this code was being developed and that the review it received before being merged failed in this regard as well. The addition of security features is great, but they do not help much in the absence of a secure implementation. If we ever want to reach a point where we are not adding more security problems to the kernel than we are fixing, we will need to do better than this.
Posted Oct 7, 2021 15:02 UTC (Thu) by geert (subscriber, #98403)
Anyone who remembers khttpd? When it was merged, it was more performant than any existing userspace web server. It was removed from the kernel when userspace web servers had improved, surpassing khttpd's performance.
Posted Oct 7, 2021 15:24 UTC (Thu) by bfields (subscriber, #19510)
There's also an actively developed userspace NFS server, Ganesha. My impression is that it's mainly focused on exporting userspace filesystem implementations. It's also capable of exporting filesystems implemented in the kernel--that's what we have name_to_handle_at() and open_by_handle_at() syscalls for--but I think that's still a little more difficult for it. Most people exporting xfs or ext4 or btrfs are probably using knfsd.
I believe Solaris has in-kernel NFS and SMB servers too.
It's not a simple decision, there are tradeoffs.
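For reference, that syscall pair works roughly like this (a minimal sketch; the paths are made up, and open_by_handle_at() requires CAP_DAC_READ_SEARCH):

    /* sketch: convert a path to a stable handle, then reopen it later */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
            struct file_handle *fh = malloc(sizeof(*fh) + MAX_HANDLE_SZ);
            int mount_id;

            fh->handle_bytes = MAX_HANDLE_SZ;

            /* path -> opaque handle; roughly what an NFS or SMB file
             * handle is on the wire */
            if (name_to_handle_at(AT_FDCWD, "/srv/share/file.txt",
                                  fh, &mount_id, 0) < 0) {
                    perror("name_to_handle_at");
                    return 1;
            }

            /* later, possibly after a server restart: handle -> fd.
             * mount_fd is any descriptor on the same filesystem. */
            int mount_fd = open("/srv/share", O_RDONLY | O_DIRECTORY);
            int fd = open_by_handle_at(mount_fd, fh, O_RDONLY);
            if (fd < 0)
                    perror("open_by_handle_at");
            return 0;
    }

This is what lets a userspace server hand out handles that survive its own restarts, much as an in-kernel server's file handles do.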
Posted Oct 8, 2021 5:15 UTC (Fri) by zdzichu (guest, #17118)
https://www.snia.org/sites/default/orig/sdc_archives/2008...
It's interesting how it compares to the ksmbd rationale in Linux, over a decade later.
Posted Oct 8, 2021 14:36 UTC (Fri) by bfields (subscriber, #19510)
On p. 3, under "Kernel based implementation", it says "Better I/O performance."
The other goals listed on that and the following slide--I don't know; some look like they could be easier if the file server were in-kernel, though for others (ACLs?) I don't see offhand why that would help.
Historically I think it's true that in the case of knfsd the motivation has been the difficulty of providing correct semantics without deeper integration with the filesystems. But I haven't seen a good performance comparison recently.
Posted Oct 10, 2021 10:51 UTC (Sun) by iainn (guest, #64312)
TrueNAS is FreeBSD based, so using an in-kernel Linux implementation wouldn't work so well.
However, TrueNAS now also has a Linux port. (Of course, it'll be easier to share the NFS config code between Linux and FreeBSD, by sticking with Ganesha.)
Posted Oct 7, 2021 19:31 UTC (Thu) by developer122 (guest, #152928)
Maybe I just want to deploy nfs on my network after already having set up a bunch of computers, and maybe they all just happen to use slightly different usernames and IDs for the same users. Right now the only thing I can do is squash all the users from any particular host down to one user, but I'd really like to be able to map them. It works so long as I'm the only one using it, but I can see a multiuser future on the horizon (eg guests/family).
/nfs_gripe
Posted Oct 8, 2021 14:48 UTC (Fri) by bfields (subscriber, #19510)
In the absence of kerberos, the server's mostly just trusting the clients to correctly represent who's accessing the filesystem anyway.
I seem to recall one or two people making an attempt at adding this kind of mapping, and it turning out to be more complicated than expected. But that was a while ago. I wonder if any of the container work done since then would be useful here.
Anyway, there'd have to be someone willing to look into it and do the work.
Posted Oct 8, 2021 7:32 UTC (Fri) by Wol (subscriber, #4433)
How hard is it to set up LDAP? Can't you implement some simple "single sign on"?
Cheers,
Wol
Posted Oct 8, 2021 9:09 UTC (Fri) by geert (subscriber, #98403)
Less tech-savvy people ("family") just use "Connect to Server" with "sftp://nas/..." in the GUI file manager. No further setup needed.
Posted Oct 11, 2021 13:19 UTC (Mon) by bfields (subscriber, #19510)
Looking at old documentation: the old userspace nfsd daemon (which preceded both Ganesha and knfsd) supported a "map_daemon" export option. When that was set, it would query the client's rpc.ugidd for id mappings using an rpc protocol. So you ran rpc.ugidd on the client.
No distribution carries rpc.ugidd any more, and the map_daemon export option was never supported by knfsd.
Might be interesting to know more of the history. Digging through old nfs-utils tarballs (it predates git) might be one way to figure it out.
If we were to support uid/gid mapping today, we'd do it some other way.
Posted Oct 9, 2021 16:24 UTC (Sat) by Baughn (subscriber, #124425)
It works.
Posted Oct 25, 2021 17:37 UTC (Mon) by nybble41 (subscriber, #55106)
That advice just makes NFS utterly impractical in any situation where you don't have absolute control over UID & GID assignments for every system you want to export files to. (You want to export NFS shares to Android without remapping IDs? Good luck with that…)
Every so often I start thinking that it would be nice to have a network filesystem without the overhead of FUSE, but the cost of setting up Kerberos (or doing without ID mapping) and the headaches of making that work reliably and securely when the systems may not always be on the same local network always send me back to SSHFS.
Posted Oct 26, 2021 17:45 UTC (Tue) by bfields (subscriber, #19510)
I'd think instead you want to map everyone to one user, and export with something like (all_squash,anonuid=MYID,anongid=MYGID).
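In /etc/exports terms, that would be something like the following (path, network, and IDs here are hypothetical):

    # squash every remote user to one local account
    /srv/media  192.168.1.0/24(rw,all_squash,anonuid=1000,anongid=1000)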
Posted Oct 27, 2021 2:58 UTC (Wed) by nybble41 (subscriber, #55106)
Actually what I would want is not squashing all requests down to one UID/GID per export, but rather performing all accesses as the particular user whose credentials were used to authenticate to the server (like SSHFS does, or NFS with LDAP and Kerberos, or mount.cifs) without making any assumptions about the UID/GID (or username / group name(s)) on the client. There should also be options to control how the UIDs, GIDs, and permissions of files from the server are presented locally (again, like SSHFS with -o uid=X,gid=Y,umask=NNN).
Or perhaps what I really want is just SSHFS with less overhead. (ksshfs?) Until something like that is available, the FUSE implementation works well enough that I don't really see a need for NFS.
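For reference, the SSHFS invocation being described looks like this (host, paths, and IDs are placeholders):

    # present every remote file as local uid/gid 1000, modes filtered by umask
    sshfs alice@server:/srv/files /mnt/files -o uid=1000,gid=1000,umask=022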
Posted Oct 27, 2021 11:28 UTC (Wed) by mbunkus (subscriber, #87248)
You can look into the "unix extensions" parameter on Samba.
My comment was really just an answer to nybble41's requirements, not a general endorsement to use CIFS as the one and only network file system. That being said, I'm still envious of the various fine-grained controls Samba offers whenever I run into the various limitations of what NFS can do.
Posted Oct 27, 2021 16:43 UTC (Wed) by nybble41 (subscriber, #55106)
If the security were comparable to SSH (including authentication via public keys rather than passwords) then I would agree, CIFS has most of the other properties I'm looking for. You can even set up a multi-user mount point and have the kernel track per-user login credentials using the cifscreds utility.
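The shape of that setup (server, share, and account names are placeholders):

    # root sets up the shared mount point once; note that credentials
    # for some account with access to the share are needed at mount time
    mount -t cifs //server/share /mnt/share \
          -o multiuser,vers=3.1.1,seal,username=setupuser

    # each user then stores their own credentials in their session
    # keyring; the kernel uses them for that user's accesses
    cifscreds add server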
Posted Oct 27, 2021 17:24 UTC (Wed) by mbunkus (subscriber, #87248)
It surely isn't trivial as you absolutely must restrict the protocol version to the latest one (SMB3.1 or so) due to security issues in earlier versions. Then again, you have to do that with all the usual services; one shouldn't run HTTPS with SSL 1.0 anymore either, after all. And no, most Apache & nginx default installations on current popular server distributions do not come with the best/tightest SSL/TLS security settings either.
Things are… complicated. What I don't get, though, is the naysayers offering things such as Nextcloud/Owncloud (web-based file hosting services) as a supposedly secure alternative. What's more secure about it? Both run over protocols for which older versions have security issues. Samba-the-project has had a couple of well-known security issues, but then again so do Nextcloud/Owncloud. Both usually authenticate via user & password (unless the server belongs to a company environment where Kerberos is used for Samba & maybe SAML for NC/OC). They're both… roughly identical. What am I missing here?
I do use it regularly for backups that are encrypted on the client side, accessing my hosting provider's storage via CIFS. There are two different layers (factors) of security there, which suffices for me; the alternatives are NFS, without any kind of transport-layer security, and sshfs, being its usual slow and sometimes unreliable self. Meh.
Posted Oct 28, 2021 0:44 UTC (Thu) by nybble41 (subscriber, #55106)
> Can you use SMB/CIFS over the Internet these days (without a VPN), or is that still considered insecure?
The only notable example I found of anything similar was the use of SMB 3.1.1 in Microsoft Azure, which isn't exactly "over the Internet" but comes fairly close. But everywhere else the consensus seemed to be "don't use SMB, even SMB 3, over the Internet without a VPN."
> You can even set up a multi-user mount point and have the kernel track per-user login credentials using the cifscreds utility.
Despite the warnings, I spent a few hours configuring the most secure Samba configuration I could come up with for Linux-to-Linux file sharing (forcing SMB 3.1.1, inhibiting anonymous / guest logins, disabling netbios) and attempted to make this work.
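The [global] settings involved would be something along these lines (a sketch of the hardening just described, not necessarily the exact configuration):

    [global]
        # require SMB 3.1.1 with encryption on every connection
        server min protocol = SMB3_11
        smb encrypt = required
        # no anonymous or guest access
        restrict anonymous = 2
        map to guest = Never
        # TCP port 445 only; no NetBIOS
        disable netbios = yes
        smb ports = 445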
The first obstacle I encountered was that Samba (or at least the latest version available in any Debian release: 4.13) doesn't support Unix extensions in SMB 3 mode—or the POSIX extensions which are meant to replace them. The Linux kernel supports them, but the server does not. Easy enough to work around—just mount without Unix or POSIX extensions. But this means certain features are unavailable.
The real problem, though, was that there does not appear to be any way to set up a mount point for a SMB 3 share in multiuser mode without providing a username and password at mount time for an account with access to that share. This completely defeats the point of the "multiuser" option. The credentials which can access the share(s) should only be provided by individual users via the cifscreds utility—they aren't available when the share is mounted from /etc/fstab or a systemd mount unit. Which implies that the kernel should just set up the mount point locally and not actually connect to the server until a user comes along with login credentials, but in practice the kernel tries to connect immediately. Allowing that connection to succeed so that it will create the mount point would mean either storing one user's credentials for the entire system to use or else opening up the share to guest users on the server, neither of which is an attractive option.
Anyway, it was an interesting challenge and I learned a lot about configuring modern Samba versions, but I'll be sticking with SSHFS for the foreseeable future.
Posted Nov 2, 2021 20:48 UTC (Tue) by nybble41 (subscriber, #55106)
Naturally. But if you're setting up a mount point with -o multiuser then you're probably doing so as root (with or without /etc/fstab) and not as one of the (locally) unprivileged users with the login credentials for that share on the server. The mechanics of -o multiuser are that when a user accesses the local mount point the kernel gets the credentials from that user's keychain and establishes a new connection to the server for that user. It doesn't make sense to require "default credentials" to set up the mount point.
The alternative is to install mount.cifs with the SUID bit enabled and let each user mount their own shares, which works (more or less, if you're okay with the Windows version of the SMB3 protocol without POSIX extensions) but isn't as nice as having a common multi-user mount point.
Posted Nov 2, 2021 13:20 UTC (Tue) by mbunkus (subscriber, #87248)
I've never set that up without Kerberos, though.
[1] Maybe that initial mount could also be done via automounting, not at boot, though I don't know whether or not that works when the initial request for a currently unmounted directory comes from a user process.
Posted Oct 29, 2021 3:49 UTC (Fri) by Fowl (subscriber, #65667)
https://techcommunity.microsoft.com/t5/itops-talk-blog/sm...
Posted Oct 28, 2021 1:51 UTC (Thu) by neilbrown (subscriber, #359)
This doesn't mean anything for NFS. NFS doesn't authenticate a connection, it authenticates each request.
With NFSv4, there is a "first" request (EXCHANGE_ID I think in v4.1 and v4.2) and almost all other requests inherit a "state" from that. This is mostly used for clear ordering and exactly-once semantics.
You seem to be suggesting that the credentials used to authenticate all subsequent requests should be ignored, and the credentials of the "first" request should be used throughout. I don't think that would be useful with any current NFS client, as they use "machine" credentials to authenticate the state management, and that doesn't necessarily map to any UID. Obviously you could change the NFS client to behave differently, but then you would just change it to send the credential you want the server to honour.
What precisely is it that you want to achieve? I'm in favour of making NFS useful for more use-cases, but we would need a clear description of what the use-case is.
Posted Oct 28, 2021 17:19 UTC (Thu) by nybble41 (subscriber, #55106)
Not exactly. The CIFS/SMB multiuser model is a closer fit, where the kernel maintains the credentials for each server in a per-user keyring. One would need to do something about the flaw that SMB multiuser mounts still require valid credentials for an account with access to the share at mount time[0], though perhaps an NFS-based equivalent wouldn't have that problem. It doesn't really matter whether there is a single connection or multiple connections as long as the credentials are not tied to a specific shared UID or username between the client and the server and all access checks are enforced on the server (i.e. the client can be untrusted). And of course I'd rather have POSIX/Linux filesystem semantics like NFS as opposed to a protocol originally designed around the Windows VFS. The protocol would obviously need to be hardened and encrypted to be a suitable alternative to SSHFS (SFTP) over the Internet and not just LANs. Regarding authentication, I currently require public keys for all SSH logins on my server, and I'd rather not go back to passwords.
The full use case is basically this: Given any random Linux server which can be accessed through SSH, I would like to be able to mount a filesystem from this server from a separately-administered client machine using a kernel-based filesystem module, with the full POSIX semantics available from NFSv4 mounts and without the overhead and limitations of FUSE. The same mount point should be available to multiple users on the client, with each user accessing files on the server through their own existing SSH login credentials. In other words: Starting with SMB-style multiuser mounts, allow mounting without any default credentials, use the NFS protocol for the actual filesystem operations, and add public-key authentication and secure encryption akin to SSH.
(One option for the authentication would be to actually perform an SSH login in userspace when adding the credentials with a fixed command which, on success, registers a temporary session key which can be loaded into the client's keyring and used for all further requests. This seems like it would be fairly ergonomic and wouldn't require the kernel to implement all the different authentication types supported by SSH.)
The existing SMB3 support would probably be "good enough", though not ideal due to limited POSIX support, if it weren't for the issue of requiring mount-time credentials. I could even emulate SSH authentication by scripting a remote smbpasswd command with a temporary password, though that only allows one client machine at a time for each account and might involve running smbpasswd as root (with restricted options) to allow a new temporary password to be set without knowing the old one.
Posted Oct 28, 2021 22:17 UTC (Thu) by nix (subscriber, #2304)
A place to start on the server side of this is already written in the form of the sftp subsystem, though it doesn't implement remotely enough operations and probably the serialization protocol should be rethought, since we are not at all wedded to the sftp protocol. The biggest problem is that by default this thing would be single-threaded, but a multithreaded version is perfectly possible that fires up multiple worker threads (possibly in an expanding-as-needed thread pool), kicks off separate ssh -s's for each one, and lets things rip accordingly.
Nobody has written any of this, but it's purely userspace coding, likely fairly humdrum, and the performance impact of FUSE is probably going to be ignorable compared to the unavoidable performance impact of, well, using SSH (and honestly for all but really big bulk ops or ops on machines with slow CPUs I think you won't even notice that).
... oh dammit I want to write this thing now. (Assuming nobody already has. I haven't even looked, but given the number of people who seem to be even *aware* of SSH subsystems, let alone how damn useful they are for things like this, I do strongly suspect that nothing like this exists.)
Posted Oct 29, 2021 6:04 UTC (Fri) by nybble41 (subscriber, #55106)
An SSHFS equivalent using something like the NFS protocol (without any NFS authentication, just acting as the logged-in user) through an SSH tunnel instead of SFTP would be an interesting design, though it doesn't address my main design goal of migrating the filesystem away from FUSE and into the kernel.
Posted Oct 29, 2021 12:51 UTC (Fri) by nix (subscriber, #2304)
A true multiuser permission-respecting filesystem... well, I guess if you ssh as root it could setfsuid as needed as requests came in. That's what the fsuid is for, after all.
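Concretely, that per-request identity switch is only a few lines (a sketch; error handling omitted, and the server must retain CAP_SETUID):

    /* sketch: a privileged server thread briefly adopts a user's
     * filesystem identity; setfsuid()/setfsgid() affect only the
     * calling thread, which suits a thread-pool server */
    #define _GNU_SOURCE
    #include <sys/fsuid.h>
    #include <fcntl.h>
    #include <unistd.h>

    static int open_as_user(uid_t uid, gid_t gid, const char *path)
    {
            setfsgid(gid);
            setfsuid(uid);    /* permission checks now use uid/gid */

            int fd = open(path, O_RDONLY);

            /* restore root's filesystem identity; a real server would
             * also switch supplementary groups with setgroups() */
            setfsuid(0);
            setfsgid(0);
            return fd;
    }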
Posted Oct 29, 2021 14:54 UTC (Fri) by nybble41 (subscriber, #55106)
The code in sshfs.c[0] appears to pass "-s sftp" to the SSH command by default (i.e. using the subsystem mechanism) unless the sftp_server option is set (with a path) or the SSHv1 protocol is selected.
> A true multiuser permission-respecting filesystem... well, I guess if you ssh as root it could setfsuid as needed as requests came in.
The kernel SMB3 implementation creates a separate connection for each user, and I'd probably do the same thing here. Many systems, my own included, don't allow direct root logins via SSH; ssh as root + setfsuid on the server would essentially mean trusting the client machine with root access to the server, and even with restrictions such as only allowing this one approved subsystem it could be used to bypass SSH login policies.
The FUSE filesystem would need to be set up by root on the client with the allow_other option to permit shared access. You could have an interface for users to link their ssh-agent to the FUSE filesystem so it can connect on their behalf (using keys), though I'm sure there would be all sorts of interesting security and UX implications.
Posted Oct 29, 2021 17:32 UTC (Fri) by nix (subscriber, #2304)
OK I'm too tired to think then, or simply can't read. It really is there and really obvious :) I guess that shows I was thinking of the right design, since sshfs is already doing it!
OK, so the right thing to do is to soup up sftp-server until it can do everything FUSE can be asked for, then soup up sshfs to talk to it and add a thread pool etc to it :) if this doesn't work (rejected by upstream), sshfs could ship its own variant (under another name: sshfs-server) and use it if set up on a remote system.
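The wiring for that is only a couple of lines; the subsystem name and server path here are the hypothetical ones from above:

    # on the server, in /etc/ssh/sshd_config:
    Subsystem  sshfs-server  /usr/libexec/sshfs-server

    # on the client, the filesystem starts its transport with:
    ssh -s user@host sshfs-server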
Posted Oct 29, 2021 4:14 UTC (Fri) by neilbrown (subscriber, #359)
I interpret your problem description as "You want a key distribution protocol based on ssh rather than kerberos, and you want NFS to be able to work with the keys thus distributed".
NFS is designed to have pluggable authentication systems, but krb5 wrapped in rpcsec/gss is the only one that is actually implemented. The kernel "knows" about krb5 certificates and encryption scheme, but out-sources to user-space for distributing those certificates and keys.
I wonder if it would be possible to use an ssh-based scheme to distribute keys. I have no knowledge of the internals of krb5 certificates, but my guess is that it isn't completely out of the question. You would need to modify or replace gssproxy on the server and rpc.gssd on the client.
An alternate possible direction involves NFS over TLS. There is a draft standard for this, and I think there is prototype code. Whether the standard allows the credential for the connection to be used for FS requests, I don't know. If it did, then this might be a direction that could be standards-compliant and so more likely to be implemented widely.
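(The version of this that later landed in mainline kernels uses an xprtsec= mount option, with the TLS handshake delegated to a user-space daemon, tlshd from ktls-utils; roughly:

    # requires a kernel with RPC-over-TLS support and a running tlshd
    mount -t nfs -o vers=4.2,xprtsec=tls server:/export /mnt

)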
Posted Oct 7, 2021 19:44 UTC (Thu) by jokeyrhyme (subscriber, #136576)
> On the other hand, might the rumored sudden merge of the ksmdb driver (https://lwn.net/Articles/871098/) have been due to the implicit threat of its being rewritten in Rust?
I wasn't aware that ksmbd had been "rushed", but I hope fear of Rust isn't a motivating factor in contributing C code to the kernel, haha
Posted Oct 7, 2021 21:54 UTC (Thu) by jra (subscriber, #55261)
A small history lesson :-). The first bug I spotted in ksmbd (using ../../../ to escape from a share) I remember was demonstrated by tridge to Microsoft against a Windows NT server back in Redmond in 1996. They fixed it by checking for the specific "../" characters he was using - he swiftly demonstrated to them this wasn't enough :-). That was a different time, and a different Microsoft.
Posted Oct 8, 2021 0:56 UTC (Fri) by jra (subscriber, #55261)
See here:
https://www.youtube.com/watch?v=eYxp8yJHpik
for an excellent talk by Metze (Stefan Metzemacher) for the latest Samba performance data with io_uring.
Posted Oct 8, 2021 3:03 UTC (Fri) by willy (subscriber, #9762)
I don't have time to do the thorough review I'd like to do. Sorry.
Posted Oct 8, 2021 4:28 UTC (Fri) by alison (subscriber, #63752)
https://lwn.net/Articles/504970/
As far as I know, the kernel still lacks a built-in multicast IPC facility. Among the notable failed attempts are not only AF_BUS linked above, but also the subsequent KDBUS. The principal objector the last time was Andy Lutomirski:
https://lkml.org/lkml/2015/6/23/22
Part of the objection may have been to integration with systemd, which was more controversial in 2015 than now. Lutomirski concluded:
"I think that a very high quality implementation of the
Perhaps Lutomirski, whom we should thank for years of hard work as a maintainer, did not have a chance to read the ksmbd patchset.