Ladybird browser spreads its wings
Ladybird is an open-source project aimed at building an independent web browser, rather than yet another browser based on Chrome. It is written in C++ and licensed under a two-clause BSD license. The effort began as part of the SerenityOS project, but developer Andreas Kling announced on June 3 that he was "forking" Ladybird as a separate project and stepping away from SerenityOS to focus his attention on the browser completely. Ladybird is not ready to replace Firefox or Chrome for regular use, but it is showing great promise.
Kling started working on SerenityOS in 2018 as a therapy project after completing a substance-abuse rehabilitation program. The SerenityOS name is a nod to the serenity prayer. Prior to working on the project, he had worked on WebKit-based browsers at Apple and Nokia. Eventually he made SerenityOS his full-time job, and funded the work through donations, sales of SerenityOS merchandise, and income from YouTube. (Kling posts monthly updates to his YouTube channel about Ladybird, as well as hacking videos where he walks through working on various components in the browser, such as the JavaScript JIT compiler.)
Taking flight
Kling announced the Ladybird project in September 2022. He said that the project got its start while he was creating a Qt GUI for SerenityOS's LibWeb browser engine. He decided to target Linux as well as SerenityOS so that it would be easier for people to work on and debug the engine from Linux. In the post announcing his intent to work solely on Ladybird, he noted that he had been focusing all of his attention on the Linux version of Ladybird. With that realization, he decided to step down as "benevolent dictator for life" (BDFL) of SerenityOS so its development would not be held back:
Before anyone asks, there is no drama behind this change. It's simply recognizing that there have been two big projects packed uncomfortably into a single space for too long, and I'm doing what I believe will make life better for everyone involved.
Ladybird's governance is similar to SerenityOS's: Kling is the BDFL, with a group of maintainers (currently ten) who can approve and merge pull requests. The contributing guide notes that maintainership is "by invitation only and does not correlate with any particular metric". Project development discussions are held on a Discord server (account required).
Now independent, Ladybird has dropped SerenityOS as a development target, and has moved to its own GitHub repository. In addition, Kling has relaxed his self-imposed policy of excluding "not invented here" (NIH) code that had applied to SerenityOS, which means that the Ladybird project will be able to make use of existing libraries rather than writing from scratch.
Comparing the README file in the standalone Ladybird repository against the README file in the SerenityOS repository, the goal has evolved from creating "a standards-compliant, independent web browser with no third-party dependencies" to developing an independent browser "using a novel engine based on web standards".
The changes to the section that enumerates the core libraries for Ladybird provide some hints about Kling's plans to use existing libraries rather than continuing to reinvent the wheel. The core support libraries for the project include homegrown libraries for cryptography, TLS, 2D-graphics rendering, archive-file format support, Unicode, and audio and video playback. In the pre-fork documentation, they were described as alternatives to other software. For example, Ladybird's TLS (LibTLS) and cryptography (LibCrypto) libraries are "Cryptography primitives and Transport Layer Security (rather than OpenSSL)". The "rather than" language has been removed in the journey to the standalone repository, and the LibSQL library from SerenityOS has already been stripped out in favor of sqlite3.
In a discussion in the project's Discord instance on June 5, Kling indicated that font rendering would likely be replaced with a third-party library. A user asked on June 6 what would determine whether a component would be developed in-house versus using a third-party library. Kling responded that if it implements a web standard, "i.e DOM, HTML, JavaScript, CSS, Wasm, etc. then we build it in house." Otherwise, the project would look to alternatives "unless we believe we can build something better ourselves".
Status
Ladybird is still in early development ("pre-alpha") today. It currently runs on Linux, macOS, and other UNIX-like operating systems. It's also possible to use it on Windows with Windows Subsystem for Linux (WSL) version 2, but there appears to be no effort to target Windows natively at this time. At the moment, the project does not provide binaries for any platform; interested users will need to grab the source and follow the build instructions. Users will need GCC 13+ or Clang 17, plus Qt6 development packages, to play along at home. Ladybird compiles and runs on, for example, Fedora 40 without a problem, but it is a long way from being suitable for regular use.
One might expect that the browser would be more usable with sites that have simpler layouts and little to no JavaScript (e.g. LWN) than with those that have complex layouts and a fair amount of JavaScript (e.g. GitHub). However, this isn't always the case: Ladybird rendered GitHub and many other sites well, if slowly. Browsing LWN anonymously worked well, but logging into LWN consistently proved to be too much for the application. Each time, it essentially froze on the front page, and clicking links to articles did nothing.
Somewhat ironically, it was not possible to log into Discord using Ladybird. It does a fair job of rendering pages, but speed and stability are still wanting. Each Ladybird tab has its own render process, which is sandboxed as a security measure to prevent malicious pages from affecting the rest of the system. However, that does not yet suffice to keep a single page from crashing the browser entirely. That's to be expected from a project that's still considered pre-alpha, though.
The current feature set is, not surprisingly, minimal. Ladybird has a URL/search bar, page reload, tabs, zooming in and out on content, screenshots, and (of course) backward and forward navigation. It does not, however, have bookmarks, a history display, extensions, password management, printing, or even the ability to save an image. WebRTC does not seem to be supported yet. CSS support seems relatively robust: Ladybird passes 100% of the CSS Selectors tests for levels 1-3, for example, using this test. It gets a 53% score for level 4, while Firefox gets 71%, so it is not a terrible showing at all. JavaScript support seems solid, but slow: the examples here work, but they load slowly.
On the other hand, Ladybird does have tools for developers, such as inspectors for the document object model (DOM) tree and accessibility tree, as well as the ability to create dumps of various things: the DOM tree and layout tree, computed styles, and so forth. It also has the ability to spoof the User-Agent string sent by the browser so that testers can try to get around sites that refuse to work with "unknown" browsers. Toggling the User-Agent wasn't enough to get past Google's gatekeeping to sign into Gmail, however; it's unclear whether that means Ladybird wasn't sending the string correctly or that Google is using other means to fingerprint non-approved browsers.
Suffice it to say, Ladybird is not ready for mainstream use but it does show potential. In the past month, the project has had more than 880 commits from 49 authors. If the project maintains that kind of momentum, or picks up steam, it could become a usable alternative to mainstream browsers before too long.
Posted Jun 7, 2024 18:50 UTC (Fri)
by riking (subscriber, #95706)
[Link]
Posted Jun 7, 2024 19:04 UTC (Fri)
by python (guest, #171317)
[Link] (29 responses)
I looked at the source code for LadyBird and I am surprised that they are not using C++20 modules (it's been supported by clang for 5 years). I always found maintaining header files (eg. having to write the function definitions twice, once in the .h and once in the .cpp file, etc.) to be a great bother. I can only assume they have much more pressing issues like implementing missing features and trying to comply with all of the modern web standards.
As I've watched Firefox and Microsoft try to compete with Chrome, I find it progressively more difficult to get excited about attempts at new browsers when there appears to be nothing terribly new or innovative about how or what they are trying to achieve. It's a lot of work to build even a small portion of the machinery underlying a web browser like Firefox or Chrome. It will probably take a decade for something like LadyBird to be competitive (assuming that the web "remains still enough" to catch up). I fear that they might get mired down in the complexity of what they are trying to achieve (with their chosen tools, and limited manpower) before they ever reach that goal.
Posted Jun 7, 2024 19:11 UTC (Fri)
by atai (subscriber, #10977)
[Link] (2 responses)
Posted Jun 8, 2024 15:13 UTC (Sat)
by ejr (subscriber, #51652)
[Link]
Posted Jun 30, 2024 12:22 UTC (Sun)
by dmytrish (guest, #85653)
[Link]
Posted Jun 7, 2024 19:39 UTC (Fri)
by willy (subscriber, #9762)
[Link] (3 responses)
https://wiki.mozilla.org/Oxidation is a bit outdated (not modified in 4 years) but clearly shows how Firefox is being gradually rewritten in Rust.
Posted Jun 7, 2024 19:47 UTC (Fri)
by atnot (guest, #124910)
[Link]
Posted Jun 9, 2024 7:23 UTC (Sun)
by dilinger (subscriber, #2867)
[Link] (1 responses)
Posted Jun 21, 2024 10:00 UTC (Fri)
by pjmlp (subscriber, #168573)
[Link]
==> How Chromium Will Support the Use of Rust
https://security.googleblog.com/2023/01/supporting-use-of...
Posted Jun 7, 2024 20:34 UTC (Fri)
by mathstuf (subscriber, #69389)
[Link] (5 responses)
Clang has not supported C++20 modules for 5 years; 2 is probably more accurate, and MSVC is probably close to 3. You're thinking of "Clang modules", which served as a prototype for C++20's modules but are not what got standardized (standard modules scale far better according to ISO-reported benchmarks). Even with that, (released) build-system support for C++20 modules only landed in CMake in October 2023 (FD: I'm the main developer of that support). Now that it does exist, I expect many issues to need fixing over the next few years before it is "battle hardened". Other build systems do have support (build2, xmake), but others are still pending (Meson) or have no known progress (autotools, Tup).
FWIW, I had a prototype that worked with CMake and GCC (both locally patched) in 2019, but what landed is a far better situation than what I had then.
> I always found maintaining header files (eg. having to write the function definitions twice, once in the .h and once in the .cpp file, etc.) to be a great bother.
Sure, you *could* do one-file modules, but build performance will vastly prefer to continue having the split (just now between module interface and implementation units). The main benefit is that you're now in control of what interface you provide consumers via the `export` keyword instead of "whatever the preprocessor happened to come across" while you included headers for what you needed as well.
Posted Jun 7, 2024 21:12 UTC (Fri)
by madscientist (subscriber, #16861)
[Link] (4 responses)
There was also an idea of using an LSP-like model, where a separate server process would keep the module dependency information and respond to requests from clients at runtime.
It seems the first has the most momentum behind it. Maybe the second idea has fallen by the wayside.
Posted Jun 7, 2024 21:45 UTC (Fri)
by NYKevin (subscriber, #129325)
[Link] (1 responses)
The README says it is extensible "to support new languages and custom rule sets." I wonder if it can be extended for C++ modules? Then setting up a C++ project (that uses modules) in Bazel should be much easier.
Posted Jun 8, 2024 3:23 UTC (Sat)
by mathstuf (subscriber, #69389)
[Link]
I can't say it with 100% confidence, but I don't think Gazelle can handle C++ modules reliably with the existing interfaces. See my other reply in this subthread for details on (CMake's) C++ module compilation strategy (to avoid splitting the discussion).
Posted Jun 8, 2024 4:30 UTC (Sat)
by mathstuf (subscriber, #69389)
[Link] (1 responses)
There are 2 plausible strategies AFAIK ("explicit" and "implicit"). I consider only 1 ("explicit") viable in the grand scheme of the state of the C++ ecosystem where even file extensions cannot be agreed upon. There are tradeoffs of each. For CMake, I chose the one that gives the most reliable results with the widest support because I detest tracking down heisenbuild bugs and CMake ends up having to support all kinds of things anyways.
> some tool parses the module files and generates module description information, which can then be considered by the build system, which I think is the way CMake has gone (it's good for them because they already have a separate "configure the build" step).
This is inaccurate. CMake does this during the build, not during "configure". If it were done during the configure step, any change to a C++ module-using or -providing source would require a rerun of the configure to update the build graph. Note that it must also be performed at build time in order to support generated sources (that don't exist to scan at configure time). This is obviously not optimal.
Some definitions (for clarity):
- build system: a model of a project that represents artifacts and the dependencies between them (e.g., CMake, Bazel, Meson)
For background, see this paper[1] I presented to ISO C++, which describes how CMake supports Fortran modules (which are isomorphic to C++ modules for build-graph purposes). The core issue is that the module name has no (forced) relation to the filename on disk, so one needs to determine the dependencies between compilations *dynamically*. This is distinct from "discovered" dependencies reported by the likes of `-showIncludes` or `-MF`, where the files discovered during the course of the tool's execution are reported so that, if any change, the tool can be reexecuted. The main point there is that they can be discovered during the first execution (since the output doesn't exist, it needs to run anyways). Modules are different because, at the point of `import M;` during translation, we need the BMI of M to continue compilation, as the next line might use a type from that module. The compiler needs to know where that BMI is, and that is the primary goal of the build system for modules: to provide BMI paths during compilation and ensure they're ready when compiling anything that needs them.
Dynamic dependencies instead are dependencies between entities expressed in the *content* of the sources and nowhere else. To support this, the contents need to be "scanned" to report what is needed during compilation and generated during the compilation of the source itself. This is reported in the P1689R5[2] format that is meant to describe these things. These dependency files (CMake uses the `.ddi` extension: "dynamic dependency information") are then given to a "collator" that reads them, the collated information of any dependent libraries, and some details from the CMake, to do the following:
- see file X makes module P
There is some flexibility with this model (also covered in the paper):
- scan individually or scan in batches?
Smaller scans are better for incremental (developer) builds (e.g., if 20 files are scanned in a batch, changing any one will force scanning of all 20); batching is better for one-shot (CI) builds. Collating after scanning simplifies the tool (the collator doesn't need to understand $LANG syntax), but is probably slower (lots of tiny P1689 files flitting about). CMake currently has individual scans with separate collation. I don't foresee collation being merged into scanning, but an option to batch scans may make sense given performance measurements.
Restrictions can also be enforced here. CMake doesn't care about file extensions, basenames matching module names, or anything like that. If one wants to do that (I suspect Meson might do so), that's fine. Configure-time scanning is fine if the configure is fast enough and generated sources (module-providing sources for sure; module-consuming sources may be able to be deferred-scanned, but once you have that support…just simpler to scan everything at build time if you ask me) aren't of any concern.
> There was also an idea of using an LSP-like model, where a separate server process would keep the module dependency information and respond to requests from clients at runtime.
This is, AFAIK, what build2 does (it is a build system and a build tool). It may implement it to the level of being an "explicit" build, but I've not dug in to know. There are patches to GNU Make to act as a module mapper as well; that patch only works with GCC today, though Clang may add support for libcody in the future. Any "implicit" implementation has a major flaw in the generic case: I don't know how to meaningfully give visibility to modules. Note that the `-fmodule-directory=` Clang flag, where Clang just reads and writes files in the given directory based on module names, is this strategy as well (the filesystem is the "module mapper" here).
I find this to be problematic in practice because it means the state of the build graph depends on the runtime state of some running process, not just mtimes, content fingerprints (for tools which are hash-the-content-based rather than mtime-based), or some metadata on disk (e.g., `.ninja_log`). I can't imagine how debugging this is expected to work when those not well-versed in build systems are on the front lines.
Beyond that, there are corner cases to consider:
- When a request to import module Q comes in, how many processes do we expect to launch before the rule that exports module Q is found? What if it lies beyond our `-j` limit? Do we suspend processes while we wait, doing job-server shenanigans? What if it doesn't exist? What if we discover a module import cycle?
These are not easy questions to answer and that's on top of two-way communication with compilers you're launching.
[0] There is a wholly unnecessary MSVC extension where it is allowed (IIUC, due to a misreading of the standard that, TBF, we also had when starting CMake's implementation); easily avoided and made portable: drop the partition name.
Posted Jun 9, 2024 0:58 UTC (Sun)
by buck (subscriber, #55985)
[Link]
You should have saved it up for an LWN contributed article in its own right, though, maybe.
Posted Jun 7, 2024 20:47 UTC (Fri)
by q3cpma (subscriber, #120859)
[Link]
Posted Jun 8, 2024 12:42 UTC (Sat)
by vadim (subscriber, #35271)
[Link]
Writing a web engine is an enormous project, I'm sure they could use more donations.
Posted Jun 15, 2024 9:17 UTC (Sat)
by iteratedlateralus (guest, #102183)
[Link]
Posted Jun 21, 2024 12:17 UTC (Fri)
by rc00 (guest, #164740)
[Link] (12 responses)
Posted Jun 21, 2024 13:37 UTC (Fri)
by atnot (guest, #124910)
[Link] (11 responses)
What? That statement was barely justifiable half a decade ago. In 2024 it just seems petty and delusional.
Posted Jun 23, 2024 19:00 UTC (Sun)
by rc00 (guest, #164740)
[Link] (10 responses)
You're free to provide actual counterexamples of worthwhile projects that don't have more feature-complete non-Rust alternatives. The fact of the matter is, once you wade through the Rust rewrite announcements, there is little more than an academic exercise afoot. And there's nothing wrong with that, but "delusional" is pretending that there is a real purpose beyond marketing, ego-stroking, or some combination thereof. Rust, just like the crypto-scamming boondoggles it was previously associated with, is perennially a solution in search of a problem. Languages like Go, Zig, and yes, even C++ have proven to be more suitable and reasonable languages for when real work needs to be done and marketing fails to suffice. I'm all ears for counterexamples.
Posted Jun 23, 2024 20:04 UTC (Sun)
by atnot (guest, #124910)
[Link] (9 responses)
surely that deserves a listing over Zig, a language with, to my knowledge, currently zero notable users? Or you're just trolling. Actually, I prefer that option.
Posted Jun 23, 2024 20:47 UTC (Sun)
by rc00 (guest, #164740)
[Link] (8 responses)
The criteria:
You seem to have missed the overarching criteria here but it's nice to see that you consume the marketing material. Firefox is still mostly written in C. Ripgrep is not more feature complete than grep (https://github.com/BurntSushi/ripgrep?tab=readme-ov-file#...). Firecracker (https://hocus.dev/blog/qemu-vs-firecracker/). Cloud-hypervisor is not as feature complete as the non-Rust alternatives. Same with InfluxDB versus something like TimescaleDB. And so on and so forth. I can keep spamming links if needed to make further points. These are academic exercises that have been marketed into production environments by zealots and proselytizers but fortunately, we are nearing the end of Rust's hype cycle and getting to the trough of despair that is the reality of where Rust belongs.
> surely that deserves a listing over Zig, a language with, to my knowledge, currently zero notable users? Or you're just trolling. Actually, I prefer that option.
Zig, while very early, has already proven to be productive instead of academic. I'm not sure how you would define notable but River [https://codeberg.org/river/river] comes to mind quickly. There are other projects but I would like to hear the criteria. It sounds like you want marketing material instead of engineering projects.
Posted Jun 24, 2024 6:56 UTC (Mon)
by farnz (subscriber, #17727)
[Link] (7 responses)
River fails by your own criteria - it's just a less feature-complete clone of sway. Do you have examples of Zig projects that aren't just less feature-complete clones of Rust, C++ or C projects, or is it also just an "academic exercise"?
Posted Jun 24, 2024 10:18 UTC (Mon)
by ceplm (subscriber, #41334)
[Link] (1 responses)
Posted Jun 24, 2024 10:47 UTC (Mon)
by rc00 (guest, #164740)
[Link]
Agreed. I stated previously that it was early for Zig. Yet Zig has shown actual potential in frameworks like Bun. One can imagine a 1.0 release with a mature feature set would see significantly more adoption since it is quicker to learn, read, and write with than a counterpart like Rust. The fact that productivity isn't lost for a few nice language concepts means future adoption could be high. See Mojo here for another example that is early but following a similar paradigm to Zig.
Zig is also still something of an under the radar language but I hope that if/when a hype cycle emerges, that it passes quickly. Using Python as an example, it only began being taken more seriously once it had moved past the hype cycle phase of growth and it has now settled well into finding its place.
While early, I think a Venn diagram of Zig and Go will dominate any long-term potential usage for Rust. For where memory needs to be manually managed and performance is of the utmost importance, one could in the future make a case for Zig without sacrificing productivity or forcing a steep learning curve. Where garbage collection and asymptotic performance are sufficient, Go already shines. And C/C++ will not be going anywhere. :)
Posted Jun 24, 2024 10:30 UTC (Mon)
by rc00 (guest, #164740)
[Link] (4 responses)
This is a piteous attempt at trolling while dodging the original point entirely. River and Sway are both tiling window managers, but River does not set out to be a clone of Sway and therefore has an entirely different feature set along with custom protocols implemented. The proper analogy would be a Venn diagram and based on preference, one would choose which to run.
To attempt to derive some value from this thread, there's the JavaScript/TypeScript framework Bun (https://github.com/oven-sh/bun). This dominates the Node.js counterpart which is written in C++ and JavaScript if I recall correctly. This is what an engineering effort with purpose looks like versus an academic exercise.
I am still eagerly awaiting your Rust-based counterexamples.
Posted Jun 24, 2024 13:23 UTC (Mon)
by atnot (guest, #124910)
[Link] (3 responses)
I assume cosmic desktop is also no true software somehow, unlike river. Because it's pre 1.0, making it academic? Unlike zig, which is also pre 1.0, but not academic. Genuinely looking forward to how you'll do this.
Look, I've checked out of this discussion already, can't reason someone out of a position they clearly didn't reason themselves into. But if you're trying not to sound delusional and bitter, you're not doing a very good job.
Posted Jun 24, 2024 16:00 UTC (Mon)
by rc00 (guest, #164740)
[Link] (2 responses)
Posted Jun 24, 2024 16:15 UTC (Mon)
by corbet (editor, #1)
[Link]
Posted Jun 24, 2024 17:04 UTC (Mon)
by atnot (guest, #124910)
[Link]
Posted Jun 7, 2024 21:07 UTC (Fri)
by Cyberax (✭ supporter ✭, #52523)
[Link] (9 responses)
It would be much better to work with Mozilla to improve Firefox. One thing that's sorely missing is the embedding API. It's dead easy to embed Chromium, and this gives it so much advantage.
Posted Jun 8, 2024 8:40 UTC (Sat)
by b7j0c (guest, #27559)
[Link] (8 responses)
Mozilla as an organization is a spent force. They have jettisoned their primary means for making significant game-changing progress (Rust/Servo) and are instead just managing the inertia of their uncompetitive, aged, dying platform.
Mozilla squandered their future in exchange for shallow activism. Did any of their activist goals actually achieve anything other than me-too virtue signalling? I see a decade+ of superficial, low/zero impact activism while Chrome basically ate their entire market share.
Kling is doing us a favor by paving the way for a post-Mozilla future.
Posted Jun 8, 2024 12:05 UTC (Sat)
by roc (subscriber, #30627)
[Link] (7 responses)
I just watched a video on a streaming service. That's basic functionality normal users require. To get that to work well you need a ton of stuff, including
To be competitive with modern browsers on security you need to have the site JS+CSS+HTML in its own sandboxed process. GPU rendering and A/V decoding need to be in separate sandboxed processes (could be the same process in a pinch).
Firefox does this all very well. It took many talented engineers (and me) years to build and a lot more work to maintain since then. Maybe Mozilla is a spent force --- I hope not --- but don't underestimate what they had and still have, and how difficult it will be for a new project to even catch up to where they are now.
Posted Jun 8, 2024 13:10 UTC (Sat)
by PengZheng (subscriber, #108006)
[Link] (1 responses)
> A user asked on June 6 what would determine whether a component would be developed in-house versus using a third-party library. Kling responded that if it implements a web standard, "i.e DOM, HTML, JavaScript, CSS, Wasm, etc. then we build it in house."
As a developer familiar with multimedia streaming, I would say even re-implementing WebRTC alone will be a huge challenge for the small team.
Seriously, calling Firefox a dying platform will not add to one's own value; it just pisses off faithful users of Firefox (there are a LOT here on LWN).
Posted Jun 22, 2024 15:55 UTC (Sat)
by circl (guest, #172114)
[Link]
The project moves faster and is more active than most people expect, it has implemented a large amount of the standards required for web browsing, especially LibJS which is now mostly probed for performance and compliance. I'm sure an implementation will be started at some point. :^)
Posted Jun 8, 2024 17:44 UTC (Sat)
by cytochrome (subscriber, #58718)
[Link]
Posted Jun 9, 2024 8:55 UTC (Sun)
by roc (subscriber, #30627)
[Link] (3 responses)
Posted Jun 15, 2024 2:06 UTC (Sat)
by himi (subscriber, #340)
[Link] (2 responses)
That said, something that targets the Electron niche (and does it well enough) likely has a lot of legs. Definitely still an uphill battle, but far less of one than targeting a full browser . . .
Posted Jun 15, 2024 2:27 UTC (Sat)
by Cyberax (✭ supporter ✭, #52523)
[Link]
Something like this would be extremely useful. Not a full browser engine, but a subset for UIs.
Posted Jun 15, 2024 14:02 UTC (Sat)
by Wol (subscriber, #4433)
[Link]
But seriously, to what extent is that the modern paradigm nowadays - the browser is the OS that runs your programs, and linux and Windows are just glorified hardware abstraction layers. For many people - especially at work - that is now not far from the reality ...
Cheers,
Posted Jun 7, 2024 21:19 UTC (Fri)
by flussence (guest, #85566)
[Link]
Posted Jun 7, 2024 22:07 UTC (Fri)
by roc (subscriber, #30627)
[Link] (1 responses)
Also, if you're not just doing it for fun, then to justify starting from near-scratch you should make some fundamental decisions differently from the existing engines. Building with Rust would be a good one, but apparently that's not it. Building in site isolation from the ground up (i.e. IFRAMEs in their own processes) would be a good one, but they're not doing that AFAICT. Going all-in on parallelism would be another interesting one but I don't see that either. The only answer here that I can see in interviews with Kling is that Ladybird is designed to hew close to the architecture of the specifications, which is good but not much.
Posted Jun 10, 2024 3:23 UTC (Mon)
by raven667 (subscriber, #5198)
[Link]
Posted Jun 7, 2024 22:18 UTC (Fri)
by cesarb (subscriber, #6266)
[Link] (8 responses)
I still remember using the single-digit milestone releases of Mozilla Suite (according to https://en.wikipedia.org/wiki/History_of_Mozilla_Applicat... this was back in 1999). Each page I loaded had a chance of crashing the whole browser, and that chance increased dramatically if I used more than one browser window (there were no browser tabs back then). It was fun. I stuck with it, later switching to its descendant Phoenix/Firebird/Firefox, which I still use to this day.
That is, history shows that crashing often, this early in the project's life, does not mean the project is going to fail.
Posted Jun 8, 2024 6:43 UTC (Sat)
by joib (subscriber, #8541)
[Link]
Posted Jun 10, 2024 7:16 UTC (Mon)
by edeloget (subscriber, #88392)
[Link] (4 responses)
Posted Jun 10, 2024 11:25 UTC (Mon)
by cesarb (subscriber, #6266)
[Link] (3 responses)
This was also a time when people bragged on slashdot about their Linux uptime. Rebooting the whole PC multiple times per day was IIRC more of a Windows thing back then; I was surprised when once I somehow managed to kernel panic Linux by doing something as simple as removing a floppy disk which was in use.
> so having a crash on a web browser, while annoying, was kind of expected :)
This, on the other hand, was IIRC more common, even in non-alpha browsers like the pre-open-sourcing Netscape Navigator 4.x. But they were not so unstable that one would be afraid to open more than one window at the same time, for fear of a crash closing all open browser windows.
Posted Jun 10, 2024 12:25 UTC (Mon)
by geert (subscriber, #98403)
[Link] (1 responses)
Posted Jun 14, 2024 23:28 UTC (Fri)
by ssmith32 (subscriber, #72404)
[Link]
But I remember being pretty handy with alt-shift-F1 and alt-shift-backspace, or whatever the hotkeys were to get to the virtual consoles and hard-kill X.
And X dying was almost as bad as a full reboot for a desktop where apps didn't do a terribly good job with checkpoints and session restores.
So, while restarts and blue screens and corrupted data is what moved me off of Windows in the early 2000s, the Linux Desktop was not nearly as stable as it is today.
Nowadays, I had one desktop issue in the past few years, and I had System76 support figure out the issue and send me a new driver (which weirdly was triggered by using Google Maps in Firefox). Worlds better than when I was running a Mandrake (RPM) Frankenstein where half the software was compiled from tarballs because the RPMs available were woefully out of date..
Posted Jun 13, 2024 5:40 UTC (Thu) by ceplm (subscriber, #41334)
Actually, surprisingly, it is still alive. That’s more than I expected.
Posted Jun 8, 2024 15:45 UTC (Sat) by drago01 (subscriber, #50715)
It doesn't do anything new or different to attract enough attention to compete with Chromium or Firefox.
Posted Jun 8, 2024 16:45 UTC (Sat) by rsidd (subscriber, #2582)
It's pretty impressive what Ladybird has achieved, and there's no reason to think it won't be 10× more useful a year from now, even if it still lags Chrome and Firefox in some respects. But even if not, this is what they want to do; good luck to them.
And it's false logic to say they should just work on Firefox. That's not how "scratch an itch" works.
Actually, given the change in policy that they will no longer insist on building everything themselves and will accept third-party code where useful, I expect the next 12 months to be pretty interesting.
Posted Jun 9, 2024 4:58 UTC (Sun) by chris_se (subscriber, #99706)
For that matter, I believe the current browser ecosystem is not healthy at all: there is Chrome (plus all its derivatives), Firefox (plus its derivatives), and then some niche projects. While the Chrome derivatives make some changes, they do not diverge significantly from the underlying browser engine, so in the end Google alone determines what's next in browser-engine development, regardless of whether you use Edge, Chrome, Chromium, Opera, ... The only usable alternative right now is Firefox, but Mozilla is not driving innovation in that space at the moment, unfortunately. (Which is a real shame; I like Firefox.)
So in that sense I welcome people being excited about and working on alternatives such as Servo, Ladybird, and others. I don't want the entire future of the web to rest in the hands of a single company; even if I never end up using these browsers myself, if they thrive they will bring a sorely needed breath of fresh air into the web ecosystem.
Posted Jun 9, 2024 14:44 UTC (Sun) by rgmoore (✭ supporter ✭, #75)
The current browser ecosystem is unhealthy because Google sees Chrome as a means to maintain its dominant market position in the lucrative search and online advertising businesses. They use their money from search and ads to squeeze profits out of the browser market and make it effectively impossible for anyone to make money by developing a competing browser from scratch. It's basically the IE situation from the late '90s all over again, and it won't improve until some government steps up antitrust enforcement the way the USA did in the '90s.
C++20 module support in build systems
- build tool: a tool which executes a DAG representing a build system's actions to perform the necessary work (e.g., Bazel, make, ninja)
- BMI: "binary module interface"; a representation of a module interface that is used when importing the module
- CMI: "compiled module interface"; same as BMI; preferred by some because they might not be "binary", but ASCII
- module interface unit: a source file with `export module X;` or `[export] module X:partition;`; may be imported (assuming language and build system visibility rules allow it)
- module implementation unit: a source file with `module X;` that provides implementations for module interfaces that have `module X` (with or without a partition name; implementation units never mention partitions[0])
- explicit module build: the compiler is told the exact BMI file to use for each module it reads or writes
- implicit module build: the compiler is given search paths to look for or to create BMIs by name (note that this includes when a build system just uses the module name as a key to look them up rather than considering the context of the importing source file: is the requested module visible? is it the right configuration?)
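The unit kinds defined above can be pictured with a minimal sketch (the `math` module, the file names, and the `.cppm` suffix are illustrative conventions, not anything mandated by the standard):

```cpp
// math.cppm: module interface unit, the one TU allowed to say "export module math;".
// Compiling it produces the BMI/CMI that importers consume.
export module math;
export int square(int x);

// math.cpp: module implementation unit ("module X;" without "export").
// It provides definitions for the interface, and itself needs math's BMI to compile.
module math;
int square(int x) { return x * x; }

// main.cpp: an importer; the build system must ensure math's BMI exists first.
import math;
int main() { return square(4) == 16 ? 0 : 1; }
```

With Clang, for example, this might be built explicitly as `clang++ -std=c++20 --precompile math.cppm -o math.pcm`, then compiling the remaining TUs with `-fmodule-file=math=math.pcm`; computing exactly which BMI path to hand each TU is the job the scanner and collator described below perform.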
After scanning, the collator:
- sees that file Y needs module P
- writes out a snippet for the build tool (ninja, make) to learn that Y's object rule depends on X's object rule (this is what the details from CMake are needed for: the collator needs to know the paths ninja knows these things by to make a valid dyndep file)
- writes out a representation of this information for any dependent libraries so that they may be consumed in other targets as well
- writes out a "modmap" file for each TU saying "I see you want modules E, F, G; here is the path for those modules *and no others*" (this is important to avoid accidental/stale module usage; note that Fortran doesn't have this because module files are found via `-I` flags implicitly…I uncovered a lot of issues with that when enforcing C++ strictness on the Fortran stuff in CMake)
- (there are some other tasks[3], but they're not relevant to the build graph)
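What the per-TU "modmap" amounts to can be sketched with Clang-style explicit flags (module names and paths here are hypothetical, and CMake's actual mechanism differs per compiler; GCC, for instance, uses a module-mapper file instead of per-module flags):

```sh
# The scanner determined that main.cpp imports modules E and F.
# The modmap therefore exposes exactly those two BMIs, and no search
# paths, so a stale or unrelated module G cannot be found by accident.
clang++ -std=c++20 -c main.cpp -o main.o \
  -fmodule-file=E=objs/E.pcm \
  -fmodule-file=F=objs/F.pcm
```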
- collate separately or after scanning?
- The LSP-model must have some concept of module visibility in mind. Library L links to library K; K's sources should not be able to import any modules that are part of L. Even within L, if a module is not "public", it shouldn't be importable from outside of the library (including transitively by its own module interface files).
- With the filesystem-as-mapper model, stale files are a serious issue. If I rename module R to S, what is in charge of cleaning up R's BMI file? How do I say "yes, you found it; it is radioactive"? What remembers that the now-scanned S creator used to make R to clean it up?
- Duplicate module names. While C++ forbids multiple named modules within a program, nothing prevents `export module test;` from appearing in separate binaries. Related: if the build graph has release and debug variants, one now needs to track the debug version of module M separately from the release version.
[1] https://mathstuf.fedorapeople.org/fortran-modules/fortran...
[2] https://wg21.link/p1689r5
[3] Namely writing install rules for the BMI files and CMake properties for exported targets.
Folks, this is not kindergarten. Can we stop this childish stuff right here and now, please?
Enough.
Support Firefox instead
-- <video> support, including the related DOM APIs
-- Integration with modern A/V codecs
-- Integration with hardware-accelerated decoding, where available
-- Support for Media Source Extensions
-- Support for EME
-- Reliable A/V sync
-- GPU-accelerated rendering
-- Off-main-thread compositing
-- Integration with the system compositor framework, where available (to minimize power usage)
(I'm sure I missed some stuff, it's been a while.)

![Ladybird browser with inspector](https://static.lwn.net/images/2024/ladybird-sm.png)