
Herman: Shipping Rust in Firefox

Dave Herman reports that with Firefox 48, Mozilla will ship its first Rust component to all desktop platforms. "One of the first groups at Mozilla to make use of Rust was the Media Playback team. Now, it’s certainly easy to see that media is at the heart of the modern Web experience. What may be less obvious to the non-paranoid is that every time a browser plays a seemingly innocuous video (say, a chameleon popping bubbles), it’s reading data delivered in a complex format and created by someone you don’t know and don’t trust. And as it turns out, media formats are known to have been used to trick decoders into exposing nasty security vulnerabilities that exploit memory management bugs in Web browsers’ implementation code. This makes a memory-safe programming language like Rust a compelling addition to Mozilla’s tool-chest for protecting against potentially malicious media content on the Web."


Herman: Shipping Rust in Firefox

Posted Jul 12, 2016 23:08 UTC (Tue) by flussence (guest, #85566) [Link] (30 responses)

>it’s reading data delivered in a complex format and created by someone you don’t know and don’t trust.
Along those lines, the X.509 parser would be a good candidate for replacement too...

Herman: Shipping Rust in Firefox

Posted Jul 13, 2016 0:04 UTC (Wed) by tialaramex (subscriber, #21167) [Link] (29 responses)

Mozilla (re)wrote their own certificate verification code relatively recently, as mozilla::pkix, and they still have maybe a dozen or so bugs open for cases where the Right Thing™ as originally implemented conflicts with the messy reality of the web PKI. So we have to wait while CAs stop doing the Wrong Thing, and then for 3 years (or maybe longer; no modern end-entity certificates last more than 39 months, but old ones were issued for a decade or more in some cases...) before Firefox can remove the workarounds.

For example ASN.1 defines a whole bunch of ways to write text. Nearly all of them are obsolete, either you want UTF8String (which is what it sounds like) or you'll be happy with IA5String (ASCII), plus the confusing PrintableString is fine if you really must insist on it despite it not doing what you probably wanted. But lots of CAs are still out there using BMPString and TeletexString and goodness knows what else. So mozilla::pkix has to carry around implementations of these incomplete and long obsolete text encodings, and, of course, they're all inevitably abused so that it's necessary to also carry workarounds for common "mistakes" like writing ISO-8859-N or 8-bit Windows encodings into a field that's supposedly PrintableString because your company never quite caught up to Unicode...

Anyway there probably isn't a great deal of enthusiasm for doing it all again so soon.

Herman: Shipping Rust in Firefox

Posted Jul 13, 2016 3:29 UTC (Wed) by lambda (subscriber, #40735) [Link]

Actually, one of the main guys who wrote mozilla::pkix is now working on ring (https://github.com/briansmith/ring), a port of the crypto primitives from BoringSSL to Rust (with the performance and timing sensitive portions still in assembler), along with webpki (https://github.com/briansmith/webpki), which is a PKI library inspired by his work on mozilla::pkix.

So yes, they are working on getting the ASN.1 parsing ported to Rust. Still has some ways to go before it's ready to deal with all of the crazy types of certificates out in the wild, but it is an active project.

Herman: Shipping Rust in Firefox

Posted Jul 13, 2016 19:20 UTC (Wed) by mjthayer (guest, #39183) [Link] (25 responses)

> Mozilla (re)wrote their own certificate verification code relatively recently, as mozilla::pkix, and they still have maybe a dozen or so bugs open for cases where the Right Thing™ as originally implemented conflicts with the messy reality of the web PKI. So we have to wait while CAs stop doing the Wrong Thing, and then for 3 years (or maybe longer; no modern end-entity certificates last more than 39 months, but old ones were issued for a decade or more in some cases...) before Firefox can remove the workarounds.

This is probably a really naive question, and off-topic to boot, but why can't trusted entities (preferably several, and independent) just keep databases of certificates which are known to be in use for some valid public purpose, and of known compromised certificates, rather than relying on PKI? I know that the second is already done by browser vendors to some extent, but I think it is more the exception than the rule (perhaps not). Then if say a web site had a self-signed certificate for doing HTTPS users would not need to click away warnings as long as it was in the database they were checking against, and marked as matching the site's address. People manage to maintain virus signature databases, which I don't think is such a different problem.

Herman: Shipping Rust in Firefox

Posted Jul 13, 2016 19:31 UTC (Wed) by flewellyn (subscriber, #5047) [Link] (24 responses)

>This is probably a really naive question, and off-topic to boot, but why can't trusted entities (preferably several, and independent) just keep databases of certificates which are known to be in use for some valid public purpose, and of known compromised certificates, rather than relying on PKI?

What you just described IS the PKI.

Herman: Shipping Rust in Firefox

Posted Jul 13, 2016 19:42 UTC (Wed) by mjthayer (guest, #39183) [Link] (23 responses)

michaeljt:
> This is probably a really naive question, and off-topic to boot, but why can't trusted entities (preferably several, and independent) just keep databases of certificates which are known to be in use for some valid public purpose, and of known compromised certificates, rather than relying on PKI?

flewellyn:
> What you just described IS the PKI.

Then surely it would be a decentralised version. If you decide that the database you are relying on is doing a bad job it would be a couple of mouse clicks or key presses to switch to a different one. Is that possible with PKI as it stands?

Herman: Shipping Rust in Firefox

Posted Jul 13, 2016 20:29 UTC (Wed) by flussence (guest, #85566) [Link]

Debian handles the certificate situation best out of all systems I've seen so far: when new certificates are added in an update, it actually *asks* the user to choose which ones they trust enough to keep. Those choices are then promptly disrespected by almost every browser on the system, because those bundle their own “trust stores” of hundreds of whitelisted CA certs, and the user interface to them is designed to be scary and opaque.

Ideally the browser itself would be up front and honest, giving the user an informed choice and the tools to act on it, instead of training them to click through interstitial scare pages. That kind of progress is a pipe dream though.

Herman: Shipping Rust in Firefox

Posted Jul 13, 2016 23:02 UTC (Wed) by tialaramex (subscriber, #21167) [Link] (21 responses)

I guess it's worth covering some fundamentals.

An X509 certificate, like most real world certificates, is a signed document. In X509 the signature is performed using a combination of a hash function (often SHA-256 today) and a public key algorithm (often RSA). Anyone who knows a public key can do the mathematics on the signature using that public key, get back the hash value and check it is the same as the hash of the certificate they were reading, if it is, this key signed the certificate.

So, every certificate is signed by exactly one key, we sometimes call this the "Issuer" of the certificate for obvious reasons. The contents of the certificate are a serial number, another public key (that of the "Subject" of the certificate), some identifying details for the Issuer (so you know if a public key you have should verify this certificate) together with whatever the Issuer is certifying about that Subject, such as (on the public web) their fully qualified domain name, or IP address and maybe the name of their business, the country in which it is registered and so on; and there will usually be some house-keeping stuff in there, which I'll get back to.

Of course you can sign your own certificate, so that the public key signed is the same one that verifies the signature. But why should anybody trust this certificate? How would anybody know which one is really "your" self-signed key, when anybody can claim the same?

In the traditional Web PKI the approach is that a relatively small number of trusted Certificate Authorities either act directly as Issuers, or they certify Issuers and your operating system and/or client software decides which CAs it will accept certificates from. If you don't trust, say, Symantec, too bad you now have no way to verify all the certificates they've issued.

What you're talking about is basically Moxie Marlinspike's "Convergence" in which you the end user ask one or more trusted third parties ("notaries") to check you've got the right certificate, and so anyone can issue any certificates but it won't matter unless the notaries say they're OK. There are some pretty awful problems with Convergence in practice and it is largely moribund.

1. It needs online real time verification. In practice on the web things break, servers go off-line, networks get partitioned. Loads of people who think they have "Internet access" actually don't, there's a middle box and it would of course get 100% carte blanche to lie about Convergence answers, or if it doesn't you need another entire PKI to manage Convergence and Now You Have Two Problems.

1a. The Web PKI already has (switched off) real time verification. OCSP is supposed to be a real time online verification of certificates. But in fact many browsers switch it off completely, others "soft fail", treating a certificate as OK unless they get a "Not OK" answer in a limited amount of time. We ended up with OCSP Stapling to work around that - clients still get an OCSP response, but from the server they were trying to connect to, which must be up for them to succeed anyway. But you can't staple Convergence.

2. The other reason the Web PKI moved away from OCSP: The privacy implications are pretty bad. Your notaries end up knowing exactly what you're looking at, because you have to tell them in order to make use of their services. This is worse than for OCSP because you tell only the OCSP server for a particular certificate (see, housekeeping info) that you saw the certificate, whereas you tell all your trusted notaries for every site with Convergence.

3. Who are all these trusted third parties anyway? So long as Convergence users are just a handful of crypto nerds, the reality is that the "trusted third parties" are a bunch of other crypto nerds. But if big players come into the game, they dominate, and soon the "agility" is all gone anyway.

4. In fact, who wants to be a trusted third party in this scenario anyway? End users won't pay you. That's an ugly fact but it's a fact. Ordinary Internet users are very resistant to paying. You and I are LWN subscribers, we're the rare minority. In the CA model the server operators pay to make this work. But in Convergence the reality was nobody pays, and as a result quality goes out the window.

The most viable part of Convergence survived, as HPKP. Key pinning (Marlinspike wanted to call something similar "TACK"). HPKP lets server operators promise that one of Convergence's assumptions is actually true for their site, that either their key, or a key that signed their key, is constant and won't change after your first visit. This is the "Trust on first use" model familiar from SSH. If you visited a HPKP-enabled site once, successfully, and no operators of that site ever make a bad mistake, then you're OK forever.

Now, as most SSH users soon find out, actually operators make a lot of mistakes, all the time. In SSH you work around these mistakes by manually editing a text file, or running an intimidating command-line tool. In HPKP it's much the same, except, how many web browser users do you know who are comfortable manually editing text files? Right. Fools deploying HPKP will effectively "brick" an entire FQDN, by ensuring that thousands or millions of users will only ever see an uncancellable error message when they try to visit it because they lost the key they need to pin the site. A bunch have been bricked already, more happen every day.

HPKP is a great idea for high value sites with a group of competent administrators. Google, Microsoft.com, there are dozens, maybe there are even thousands, but in practice even though anybody _could_ do it, most of us almost certainly shouldn't.

Now, as a result of Certificate Transparency we do actually have a database of (most of) the trusted certificates for the Web PKI. But because of the afore-mentioned network reliability + privacy situation, you definitely shouldn't just insist on examining the CT monitors every time you visit a web site.

Herman: Shipping Rust in Firefox

Posted Jul 14, 2016 1:21 UTC (Thu) by nybble41 (subscriber, #55106) [Link] (18 responses)

I think what is needed is not the elimination of the entire PKI/CA system, but rather two much more modest reforms:

1. No more "universal" CA certificates. Each CA certificate is valid for one, and only one, TLD.

This means that CNNIC's CA certificate can only be used to sign for ".cn" domains, not ".com" or ".gov". A CA could have multiple distinct certificates for multiple TLDs, of course, but with this change browser makers and users would be able to choose whether to trust the CA or not on a TLD-by-TLD basis.

2. It should be possible to get your site certificate signed by multiple CAs, and present both signatures to the browser. Browsers should trust the certificate provided it has been signed by at least one trusted CA.

Assuming site operators take advantage of this ability, it would make it easier to revoke the certificates of malicious or negligent (but influential and widely-used) CAs without breaking the Web, since legitimate sites of any significance would tend to have one or more other CA signatures as backups.

Herman: Shipping Rust in Firefox

Posted Jul 14, 2016 3:43 UTC (Thu) by raven667 (subscriber, #5198) [Link] (17 responses)

That first one is unenforceable, every CA already creates many sub-keys signed by their root for many purposes, all you are doing is making them create a few hundred signing keys for each TLD, but otherwise with the same security implications. It doesn't change the resulting risk at all.

Herman: Shipping Rust in Firefox

Posted Jul 14, 2016 12:58 UTC (Thu) by Wol (subscriber, #4433) [Link] (14 responses)

I think this has already been proposed, but the hash of the domain signing the key should be added to DNS. When you connect, you get the key used by the webserver with the DNS lookup.

And, (imho perfectly okay, but guaranteed to upset privacy nuts,) if you're using a proxy then the proxy can sign the connection! This would kick up a warning in the browser saying "you are behind a proxy wall, and the proxy is looking at your connection". It should NOT be acceptable for the browser to enforce privacy for the user against the wishes (or legal needs) of the owner of the computer/connection. The user can then decide whether or not he wants his employer to snoop on the contents of his browsing session.

And of course, the proxy can be configured to not proxy banking sites etc, but this proposal would allow companies to comply with the law and check on stuff going in and out their network, while allowing users to use secure sites securely!

Cheers,
Wol

Herman: Shipping Rust in Firefox

Posted Jul 14, 2016 16:45 UTC (Thu) by Lennie (subscriber, #49641) [Link] (13 responses)

"I think this has already been proposed, but the hash of the domain signing the key should be added to DNS. When you connect, you get the key used by the webserver with the DNS lookup."

Yep, that is called DANE. You put the public key of the HTTPS-certificate of your webserver in DNS and sign your domain with DNSSEC.

Otherwise any active attacker can just change your DNS-packets and point a website to a HTTPS-webserver they control. So that is why we have DNSSEC.

Organization-wise, DNSSEC is similar to having one CA (the DNS root nameservers & ICANN) with sub-CAs (the TLD operators), and the domain owners all have their own sub-CA.

Lots of people say: sorry, DNS-root is operated by the US. We don't trust the US to be the source of trust. So ICANN, because of this and other reasons like the Snowden documents, has done a lot of work to become independent of any country (they are in the process). The problem is obviously that it could become the next FIFA (corruption) if there is too little accountability.

One solution to that problem could be if people working at the standards organizations (W3C/IETF, etc.) could develop a protocol based on a blockchain (think of something like Bitcoin, which isn't controlled by anyone). Then maybe we could develop something that does not depend on trusting organizations. Namecoin tried to do something similar, but it isn't in widespread use.

Herman: Shipping Rust in Firefox

Posted Jul 14, 2016 17:34 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link] (12 responses)

> Lots of people say: sorry, DNS-root is operated by the US. We don't trust the US to be the source of trust.
The beauty of DNSSEC is that the US controls only the root domain (.). They do NOT control top-level domains.

To intercept the connection, NSA/FBI/whatever would have to create a fake certificate and key for the first-level domain that you're using (for example, .io), sign it with the real root key, and then use this fake keypair to MITM the requests.

This would be VERY visible, normally you would have only a handful of keys for each TLD. They can be easily pinned and checked.

> One solution to that problem could be if people working at the standards organizations (W3C/IETF, etc.) could develop a protocol based on a blockchain (think of something like Bitcoin which isn't controlled by anyone).
Not a good idea. With Bitcoin if you lose access to your wallet then that's it. There's no way to restore it, it's lost forever.

With domain names if you lose your domain credentials, you'll still be able to regain access. It might involve stacks of documents and tons of telephone calls, but it's doable.

Herman: Shipping Rust in Firefox

Posted Jul 14, 2016 17:48 UTC (Thu) by Lennie (subscriber, #49641) [Link] (9 responses)

> > Lots of people say: sorry, DNS-root is operated by the US. We don't trust the US to be the source of trust.
> The beauty of DNSSEC is that the US controls only the root domain (.). They do NOT control top-level domains.
>
> To intercept the connection, NSA/FBI/whatever would have to create a fake certificate and key for the first-level domain that you're using (for example, .io), sign it with the real root key, and then use this fake keypair to MITM the requests.
>
> This would be VERY visible, normally you would have only a handful of keys for each TLD. They can be easily pinned and checked.
>

That depends, let's say we would start to depend on such a system. You would have a validating DNS-resolver on your host (laptop/PC/phone). In that case most people wouldn't notice if NSA/FBI/whatever did a MITM between them and their upstream (caching) DNS-server as long as the NSA/FBI/whatever also generated fake TLD-signatures, which is easy to do. Obviously not easy to do at a large(r) scale, but you are still putting all your eggs in one basket, and that had better be a good basket.

> > One solution to that problem could be if people working at the standards organizations (W3C/IETF, etc.) could develop a protocol based on a blockchain (think of something like Bitcoin which isn't controlled by anyone).
> Not a good idea. With Bitcoin if you lose access to your wallet then that's it. There's no way to restore it, it's lost forever.
>
> With domain names if you lose your domain credentials, you'll still be able to regain access. It might involve stacks of documents and tons of telephone calls, but it's doable.
>

No, I meant something like Bitcoin for the root / TLDs might be a good idea.

Herman: Shipping Rust in Firefox

Posted Jul 14, 2016 18:01 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link] (2 responses)

> That depends, let's say we would start to depend on such a system. You would have a validating DNS-resolver on your host (laptop/PC/phone). In that case most people wouldn't notice if NSA/FBI/whatever did a MITM between them and their upstream (caching) DNS-server as long as the NSA/FBI/whatever also generated fake TLD-signatures.
Before the root zone was signed, it had been common to sign side chains. And it's still possible to use custom roots of trust for specific TLDs.

It makes little sense for .com (it's managed by the US anyway), but it makes more sense for smaller TLDs.

> No, I meant something like Bitcoin for the root / TLDs might be a good idea.
Might make sense.

Herman: Shipping Rust in Firefox

Posted Jul 14, 2016 18:28 UTC (Thu) by Lennie (subscriber, #49641) [Link] (1 responses)

> > That depends, let's say we would start to depend on such a system. You would have a validating DNS-resolver on your host (laptop/PC/phone). In that case most people wouldn't notice if NSA/FBI/whatever did a MITM between them and their upstream (caching) DNS-server as long as the NSA/FBI/whatever also generated fake TLD-signatures.
> Before the root zone was signed, it had been common to sign side chains. And it's still possible to use custom roots of trust for specific TLDs.
>
> It makes little sense for .com (it's managed by the US anyway), but it makes more sense for smaller TLDs.
>

I don't understand 100% what you mean, but if you are an attacker you won't be signing a whole TLD, if that was what you were implying; you would obviously be doing live signing.

Herman: Shipping Rust in Firefox

Posted Jul 14, 2016 19:52 UTC (Thu) by farnz (subscriber, #17727) [Link]

But, if your operation is to remain stealthy, you need to sign every response I see for the duration of the appropriate TTLs; thus, instead of only needing to MITM one Internet access session plus compromise one trusted CA (which is all you need in the current CA/B Forum PKI setup), you need to MITM every DNS query I send or receive for a week (that being the TTL of DS records in the root). If you don't, you run the risk that I'll see the "real" key, and discover that there's perfidy afoot.

Herman: Shipping Rust in Firefox

Posted Jul 14, 2016 18:31 UTC (Thu) by farnz (subscriber, #17727) [Link] (4 responses)

The challenge for the NSA is that I can cache keys in my validating resolver; if they want to (say) send me a fake .uk name, they've got to cope with the fact that I can cache a returned DS record for uk. for up to a week in the current setup. That means that either they've got to get the uk. key that I've cached, or they've got to maintain their spoof for at least a week before triggering their attack.

This doesn't make an attack impossible - but it considerably raises the bar.

Herman: Shipping Rust in Firefox

Posted Jul 14, 2016 18:54 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link] (3 responses)

> The challenge for the NSA is that I can cache keys in my validating resolver
Not just _cache_ them, but completely override them. You can in fact pull all of the TLD signatures and use them instead of querying the root name servers for them.

Herman: Shipping Rust in Firefox

Posted Jul 14, 2016 19:43 UTC (Thu) by farnz (subscriber, #17727) [Link] (2 responses)

True, but that implies that I'm paranoid enough to do that and keep updating the local copies when the TLD keys change (with appropriate verification).

The thing about automatic caching is that it's transparent to me, and it's a useful performance optimization (so I'd expect OSes to do a degree of it behind my back). If the NSA doesn't take it into account, they risk being unmasked by their own bad opsec.

Herman: Shipping Rust in Firefox

Posted Jul 14, 2016 20:01 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link]

> True, but that implies that I'm paranoid enough to do that and keep updating the local copies when the TLD keys change (with appropriate verification).
It's not too terribly complicated to package such keys in Fedora/Debian/... or provide a public service accessible over the Internet/TOR/...

Herman: Shipping Rust in Firefox

Posted Jul 14, 2016 21:59 UTC (Thu) by Wol (subscriber, #4433) [Link]

> True, but that implies that I'm paranoid enough to do that and keep updating the local copies when the TLD keys change (with appropriate verification).

But isn't that fairly easy? You pull down a set of "known good" TLD keys, and the system triggers an alert when they change, telling you to re-get the keys. Bit of a pain when they change unexpectedly, but the point is, not that it's secure or not, but that YOU ARE NOTIFIED when something changes.

Cheers,
Wol

Herman: Shipping Rust in Firefox

Posted Jul 15, 2016 14:50 UTC (Fri) by drag (guest, #31333) [Link]

> In that case most people wouldn't notice if NSA/FBI/whatever did a MITM between them and their upstream (caching) DNS-server as long as the NSA/FBI/whatever also generated fake TLD-signnatures.

That is what the certificate pinning of TLDs is for, as mentioned above.

When your localhost DNS connects to the network's DNS resolver it will obtain information for the TLD certificate. It will take note and remember the hash for that cert. If the cert changes later on for a MITM attack then the localhost DNS resolver will pick up on this and freak out.

Alternatively it wouldn't be too much of a burden for the distros to collect and ship TLD certificate information along with the localhost DNS resolver.

Cert pinning does work and it has caught MITM attacks against HTTPS when implemented in browsers.

The problem with this is that a TLD certificate compromise would be a NIGHTMARE. Pinning can backfire because it can make it more difficult to deal with legitimate changes to certs.

abuse of DNSSEC signing keys

Posted Jul 17, 2016 22:03 UTC (Sun) by dkg (subscriber, #55359) [Link] (1 responses)

Cyberax said:
To intercept the connection, NSA/FBI/whatever would have to create a fake certificate and key for the first-level domain that you're using (for example, .io), sign it with the real root key, and then use this fake keypair to MITM the requests.
afaict, this is not actually the case. I believe it's possible for a zone signing key to sign any RRSET within the zone at any level, so there is no need to create a "fake" secondary key for the next hop down the tree; the root zone signing keys can just go ahead and sign the records for www.example.com directly. (i'm not saying that an attacker in control of the root signing keys would necessarily want to do this, just that i think it should technically be valid from the perspective of a DNSSEC validator.)

Cyberax also said:

This would be VERY visible, normally you would have only a handful of keys for each TLD. They can be easily pinned and checked.
If this were true, we would have already seen people doing this publicly. However, public efforts in this direction (usually called "DNSSEC transparency") are only in their infancy. I welcome that sort of auditing work, though! The potential rate of turnover in these zones is very high -- RRSIGs often have a short lifespan -- so verifiably logging all RRs signed by a given DNSKEY over time is actually a potentially resource-intensive task. It only gets more expensive if you want to log the signatures from DNSKEYs in the subzones, too.

abuse of DNSSEC signing keys

Posted Jul 17, 2016 22:16 UTC (Sun) by Cyberax (✭ supporter ✭, #52523) [Link]

> afaict, this is not actually the case. I believe it's possible for a zone signing key to sign any RRSET within the zone at any level
Nope. DNSSEC keys are just that - keys. They don't use X.509 crap or anything complicated - the DNS standard directly defines the key encoding.

Signature validation is also straightforward - you get the parent's public key through a DNSKEY query and check your response. Then repeat it until you reach a locally available root of trust - it's completely hierarchical.

> If this were true, we would have already seen people doing this publicly. However, public efforts in this direction (usually called "DNSSEC transparency") are only in their infancy.
That's because nobody really cares, since DNSSEC is used only in a small number of domains. Even important domains like google.com are not signed.

Herman: Shipping Rust in Firefox

Posted Jul 14, 2016 16:20 UTC (Thu) by nybble41 (subscriber, #55106) [Link] (1 responses)

> ... all you are doing is making them [CAs] create a few hundred signing keys for each TLD, but otherwise with the same security implications.

The CAs can create all the certificates they want, just like anyone else; that's the easy part. These certificates won't be included in the operating system or browser's default trust stores unless the CA routinely issues proper certificates for the associated TLD. This changes the security implications considerably compared to the current situation where any CA can sign a certificate for any TLD and have it automatically trusted, even if the certificate is for "google.com" and the CA isn't considered trustworthy by anyone outside of China.

Herman: Shipping Rust in Firefox

Posted Jul 14, 2016 19:34 UTC (Thu) by raven667 (subscriber, #5198) [Link]

> These certificates won't be included in the operating system or browser's default trust stores unless the CA routinely issues proper certificates for the associated TLD

But of course they all do and would continue to do so, which is why the overall risk is unchanged.

Herman: Shipping Rust in Firefox

Posted Jul 15, 2016 7:24 UTC (Fri) by mjthayer (guest, #39183) [Link]

> There are some pretty awful problems with Convergence in practice and it is largely moribund.

For simplicity I will limit this to using certificates to establish web site identity. It seems to me that the first problem to be solved here is establishing that the site one is communicating with securely is really the one expected (modulo typing mistakes in URLs, such as "www.mybank.com.badsite.net"), and specifically not identifying the site's owners, nor the site's moral integrity. I would expect that this could be achieved using an off-line but regularly updated database mapping URLs to certificates, which could be built up by automated crawling, probably using some ranking algorithm to keep the size down. This would be the white list.

The second would be a similar database of certificates known to be in use for bad purposes, the black list. Building this seems to me to be a similar problem to building a virus signature database, and presumably a similar level of quality would be to be expected. The black list would obviously take priority over the white list.

Would you expect the price tag of achieving this to be higher than that of similar databases which have been created?

Herman: Shipping Rust in Firefox

Posted Jul 22, 2016 4:02 UTC (Fri) by ras (subscriber, #33059) [Link]

I've come here to thank tialaramex for his post. It's right up there with the best of the articles on LWN.

Herman: Shipping Rust in Firefox

Posted Jul 14, 2016 6:13 UTC (Thu) by briansmith (guest, #106424) [Link] (1 responses)

You kind of got it right, but actually mozilla::pkix was able to simplify the logic considerably so that we only parse UTF-8 and subsets of UTF-8. See the source code:

http://hg.mozilla.org/mozilla-central/diff/9c4424920d74/s...

mozilla::pkix does gloss over some parts of names as long as those names aren't security-relevant as far as Firefox's threat model is concerned. In particular, it might allow some weird or malformed encodings of things like "Organizational Unit" because those fields don't affect whether or not the networking stack will trust the certificate.

Herman: Shipping Rust in Firefox

Posted Jul 15, 2016 20:43 UTC (Fri) by tialaramex (subscriber, #21167) [Link]

Ah, very interesting. I was going on the Bugzilla ticket contents rather than the code, thanks for linking.

Rust for safety

Posted Jul 13, 2016 0:22 UTC (Wed) by ncm (guest, #165) [Link] (36 responses)

There are lots of memory-safe programming languages today. What makes Rust an easy choice is that, like C++, it imposes no overhead, and no runtime dependencies to complicate integration.

By following some easy rules, C++ coding can be about equally memory-safe, although it's hard to impose those rules on others' code. The place where C++ might never compete with Rust is in its implicit safety against data races -- threads misusing shared memory. These are extremely difficult to spot even for the most experienced coders, but the temptation of performance gains from unproven "lock-free" data structures is too strong to keep them out of common code, and it is all too easy to share memory accidentally, with undefined results.

Places that turn out to merit especial care, enforced by tools wherever possible, include decompression, media rendering, deserialization, and cryptosystem support apparatus. (The crypto itself poses unrelated challenges.)

Rust for safety

Posted Jul 13, 2016 5:14 UTC (Wed) by zlynx (guest, #2285) [Link] (24 responses)

I like C++, but if these rules are so easy why do I keep seeing bugs caused by taking pointers from temporaries?

void f(const string &s) {
g(s.substr(1, 7).c_str());
}

So innocent. So wrong.

Rust for safety

Posted Jul 13, 2016 6:00 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link] (15 responses)

There's nothing wrong here, unless g() saves the "const char*" pointer somewhere where it outlives its scope.

Rust for safety

Posted Jul 13, 2016 6:13 UTC (Wed) by zlynx (guest, #2285) [Link] (14 responses)

Yes, g saved it. And that worked great for years until another programmer added the substr() call. The string passed into f had enough of a lifetime to make that work. And the call chain was obfuscated enough that the pointer copy wasn't obvious.

Point is, Rust spots these bugs immediately.
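For comparison, here is a rough Rust analogue of the f()/g() shape above; the names and the stash are invented for illustration. The buggy variant (borrowing from a temporary) appears only as a comment, because rustc refuses to compile it.

```rust
// `g` stores the borrowed slice, just like the C++ g() stored the
// `const char*`. The lifetime annotation ties the stash to the source.
fn g<'a>(stash: &mut Option<&'a str>, s: &'a str) {
    *stash = Some(s);
}

fn main() {
    let s = String::from("a string!");
    let mut stash: Option<&str> = None;

    // The equivalent of the substr()/c_str() bug would be:
    //     g(&mut stash, &s[1..8].to_string());
    // rustc rejects it: "temporary value dropped while borrowed".

    // Borrowing from `s` itself is fine: the borrow checker proves the
    // stashed slice cannot outlive the String it points into.
    g(&mut stash, &s[1..8]);
    assert_eq!(stash, Some(" string"));
}
```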

Rust for safety

Posted Jul 13, 2016 11:59 UTC (Wed) by ianmcc (subscriber, #88379) [Link] (13 responses)

Bjarne Stroustrup and Herb Sutter are working on a set of guidelines that, eventually, they hope will get compiler support that would give errors when doing things like saving a pointer where you are not the owner.

http://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines

Rust for safety

Posted Jul 13, 2016 19:28 UTC (Wed) by mjthayer (guest, #39183) [Link]

> Bjarne Stroustrup and Herb Sutter are working on a set of guidelines that, eventually, they hope will get compiler support that would give errors when doing things like saving a pointer where you are not the owner.

I imported that into LibreOffice as a very dirty way of estimating the length: it comes to about 400 pages, and is still a work in progress. That does give me a bit of a bad feeling.

Rust for safety

Posted Jul 13, 2016 20:29 UTC (Wed) by roc (subscriber, #30627) [Link] (1 responses)

That work is, at best, speculative. It is currently unknown whether that approach can really work or what the resulting subset of C++ will look like. For example, we already know it rules out practically all use of global variables, a limitation that is not made obvious in the description. There are good reasons to be skeptical of this entire approach.

An appropriate term for "speculative product promise" is "vapourware", and I wrote a whole blog post about this situation: http://robert.ocallahan.org/2016/06/safe-c-subset-is-vapo...

Rust for safety

Posted Jul 21, 2016 9:43 UTC (Thu) by HelloWorld (guest, #56129) [Link]

> For example, we already know it rules out practically all use of global variables
Banning shared mutable state seems entirely reasonable to me.

Rust for safety

Posted Jul 14, 2016 4:55 UTC (Thu) by torquay (guest, #92428) [Link] (9 responses)

    ... and Herb Sutter are working on a set of guidelines ...

Anything proposed by Herb Sutter should be taken with a (large) grain of salt. He is pretty much the embodiment of the Ivory Tower establishment.

Firstly, his entire GoTW series perversely serves as a clear example of how C++ is overcomplicated and full of traps. Secondly, he is employed at Microsoft, a company that has a shockingly bad C++ "compiler" (MSVC), notorious for being full of bugs and severely lacking in standards compliance. To this day it doesn't properly support C++11, and its C++98 compliance still isn't complete. A company with this track record should be nowhere near an ISO C++ standards process. (Let's also not forget the manipulation of the Office Open XML "standard").

Rust for safety

Posted Jul 14, 2016 20:23 UTC (Thu) by epa (subscriber, #39769) [Link]

On the contrary, someone who has spent the last twenty years collecting the various traps and gotchas in C++ is ideally placed to suggest ways they could be eliminated. This is also a reason why Rust, to an interested observer, looks like such a good thing: it is written by programmers who have experience using C++ in a large, mature codebase, and have been well exposed to its strengths and weaknesses. (The same is true of Mono, which was also originally a "there must be something better than C++" project following the experience of writing Gnumeric.)

I don't think the failings of Microsoft's C++ compiler are particularly relevant to Stroustrup and Sutter's safe-subset proposal.

Rust for safety

Posted Jul 16, 2016 13:33 UTC (Sat) by mathstuf (subscriber, #69389) [Link] (3 responses)

> a shockingly bad C++ "compiler" (MSVC)

While it has its quirks, it does catch warnings that are not caught by GCC or Clang. Most of its standards shortfalls are documented as such and are, generally, not fixable due to it being a 35-year-old codebase (e.g., there are some edge cases where the parser just says "no" to valid constructs that will never be fixed due to the structure of the codebase; two-phase lookup (enable_if-area stuff) is also not implemented). There is now a Clang backend with a cl-compatible command-line interface, which ships with the most recent versions of Visual Studio.

> severely lacking in standards compliance

They also don't claim to be compliant. For contrast, see Apple's Clang release where they *ripped out* TLS support (supposedly it is also a runtime failure; it compiles just fine). And they still claim to be compliant.

> A company with this track record should be nowhere near an ISO C++ standards process

Have you been to an ISO C++ meeting? No one company runs the show. Not by any stretch.

Rust for safety

Posted Jul 16, 2016 16:37 UTC (Sat) by pizza (subscriber, #46) [Link]

> They also don't claim to be compliant.

Be that as it may, MSVC's "quirks" have historically made it quite challenging to maintain a cross-platform codebase.

Rust for safety

Posted Jul 17, 2016 10:17 UTC (Sun) by micka (subscriber, #38720) [Link] (1 responses)

> not fixable due to it being a 35 year old codebase

So it's roughly the same age as gcc. It's a shame those two old compilers can't be fixed!

Rust for safety

Posted Jul 17, 2016 12:09 UTC (Sun) by mathstuf (subscriber, #69389) [Link]

I think it's that the compiler phases don't pass all of the required information between them and fixing it is too invasive a change for the cases it fixes. If it were FOSS, there might be an enterprising contributor to tackle it, but I suspect there is higher ROI for things like getting the clang stuff released.

Rust for safety

Posted Jul 21, 2016 10:10 UTC (Thu) by HelloWorld (guest, #56129) [Link] (3 responses)

> Secondly, he is employed at Microsoft, a company that has a shockingly bad C++ "compiler" (MSVC), notorious for being full of bugs and severely lacking in standards compliance.
It's one of the better implementations according to Bjarne Stroustrup.
https://www.simple-talk.com/opinion/geek-of-the-week/bjar...

Rust for safety

Posted Jul 21, 2016 11:45 UTC (Thu) by pizza (subscriber, #46) [Link] (2 responses)

FWIW, that interview was from 2008 -- And when he says "It's getting very good actually both in terms of standard conformance and in code quality," the implication is that it had a reputation for not being either.

MS's C++ compiler _was_ by far the worst when it came to standards compliance and bugs -- not just in the compiler itself, but also in the standard [template] libraries that everyone's supposed to be able to rely on. It's quite a lot better now.

Meanwhile.

"...and deep in GNU C++ you find quite a few non-standard features. Whenever I can, I prefer to deal with ISO standard C++ and to access system-specific features through libraries with system-independent interfaces."

One of the nice things about the GNU toolchain is that you can disable all of those non-standard extensions by using compiler flags to force strict compliance (--std=c++03 --pedantic-errors) and still have a useful compiler. (And GCC doesn't provide any access to "system-specific features", beyond the mandated contents of the standard C/C++ libraries...)

Rust for safety

Posted Jul 21, 2016 17:54 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link] (1 responses)

Back in 2003, MSVC was _better_ than the C++ standard. For me the major advantage was that it didn't require template code to be littered with "typename" statements.

Rust for safety

Posted Jul 21, 2016 18:11 UTC (Thu) by pizza (subscriber, #46) [Link]

Most of my experience with MSVC was with C, not C++, and it wasn't until VS2015 that it finally supported C99 *syntax* sufficiently well to be able to use C99 at all on a cross-platform codebase.

Anyway. Back to Rust.

Rust for safety

Posted Jul 13, 2016 6:14 UTC (Wed) by ncm (guest, #165) [Link] (7 responses)

An illustrative example.

s.substr(1,7) returns a temporary string with a copy of the bytes, on which c_str() is called, yielding a pointer to its internal storage. The lifetime of that temporary extends to the end of the full expression it appears in, so if g() doesn't stash its argument away, no harm done. That said, numerous pitfalls of that sort are a trivial edit away. A big improvement coming in C++17, billed as something like "fix order of evaluation", eliminates many of those pitfalls. (Expect to see it implemented in a dot release of your current compiler.)

But one of the simple rules is "no raw pointers".

Rust for safety

Posted Jul 13, 2016 12:24 UTC (Wed) by torquay (guest, #92428) [Link] (6 responses)

    That said, numerous pitfalls of that sort are a trivial edit away. A big improvement coming in C++17, billed as something like "fix order of evaluation", eliminates many of those pitfalls.

I'm not sure how the "fix order of evaluation" proposal would help here? If you take a copy of the pointer, you're still screwed.

C++ is full of such pitfalls (experienced first hand), so much so that the "trivial edits" become rather cumbersome and non-trivial in aggregate. The coder needs to keep track of too many things in their head, to the point that it starts to resemble coding in assembler. A programming language is meant to make life easier, not to throw up traps left, right and center.

Sidenote: a few years ago I was surprised that "fix order of evaluation" wasn't actually part of the C++ standard. Code compiled with clang worked as intuitively expected, while under gcc it didn't. The gcc developers shrugged and pointed out that the standard doesn't say anything about the order of evaluation, so they exploited the loophole to implement alleged "optimizations". Timing tests indicated that there was very little difference between the code produced by clang and gcc. The problem here is that gcc developers chose to deliberately provide a non-intuitive implementation, which in all likelihood has caused latent bugs in many user codebases. Gcc's unintuitive behavior is probably more accurately described as a de-optimization, as it wastes everybody's time.

C++11, C++14 and C++17 are in reality an old language with new features retrofitted on top of it, causing yet more corner cases with associated traps. The core of C++ (as well as its insistence on compatibility with C) is essentially too rotten to fix. Rust hence has a huge advantage over C++: it's a clean-sheet design.

Rust for safety

Posted Jul 13, 2016 12:45 UTC (Wed) by pizza (subscriber, #46) [Link]

> C++ is full of such pitfalls (experienced first hand), so much so that the "trivial edits" become rather cumbersome and non-trivial in aggregate. The coder needs to keep track of too many things in their head, to the point that it starts to resemble coding in assembler. A programming language is meant to make life easier, not to throw up traps left, right and center.

Every time I've had to deal with C++ in recent memory it's been to hunt down a bug which turned out to be due to one of these pitfalls -- most recently in geeqie, which turned out to be an order of evaluation problem that used to "work" until it didn't any more...

> Gcc's unintuitive behavior is probably more accurately described as a de-optimization, as it wastes everybody's time.

As "quirky" as GCC has been over the years, it was (and still is) light-years beyond the eye-gouging issues one has to deal with when using what is still the dominant C++ compiler -- Microsoft's.

Rust for safety

Posted Jul 13, 2016 14:45 UTC (Wed) by ncm (guest, #165) [Link] (2 responses)

To be fair to C++, almost all of the pitfalls trap only people trying to be too clever. I.e., even if the stunt worked, the program would be better without it.

There really is no substitute for understanding what you're doing, and no language can protect against people who don't. It was an important discovery that there were things almost nobody needs to be doing, and so don't need to understand, and that the language design could make it hard to express trying to do those things. It is better to do something else that's easy to understand than to try to perfect something hard to understand.

Rust for safety

Posted Jul 13, 2016 19:30 UTC (Wed) by mjthayer (guest, #39183) [Link] (1 responses)

> To be fair to C++, almost all of the pitfalls trap only people trying to be too clever. I.e., even if the stunt worked, the program would be better without it.

I sometimes feel that C++ attracts that sort of person though.

Rust for safety

Posted Jul 13, 2016 23:56 UTC (Wed) by khim (subscriber, #9252) [Link]

I sometimes feel that C++ attracts that sort of person though.

Not exactly. These same people could write perfectly readable and reliable programs in Java or Python. The simple fact is: C (and C++) are low-level, dangerous languages (it's a good question which is more dangerous, though). If you write code in C++ and use designs which leave performance on the table… then why do you even bother?

Rust for safety

Posted Jul 13, 2016 21:22 UTC (Wed) by lsl (subscriber, #86508) [Link] (1 responses)

> Sidenote: a few years ago I was surprised that "fix order of evaluation" wasn't actually part of the C++ standard. Code compiled with clang worked as intuitively expected, while under gcc it didn't. The gcc developers shrugged and pointed out that the standard doesn't say anything about the order of evaluation, so they exploited the loophole to implement alleged "optimizations".

Or, maybe, they just picked one at random. That's what I would do, because I have no idea why you'd consider one specific evaluation order to be more intuitive than the other (except for some special cases, maybe). Am I just damaged by prolonged exposure to weak standards?

Rust for safety

Posted Jul 13, 2016 23:02 UTC (Wed) by JoeBuck (subscriber, #2330) [Link]

The reason a compiler will choose one order of evaluation over another, when the order is unspecified, is to generate better code. When optimization is off, some compilers will choose a right-to-left order (I think MSVC did at one time), others left to right. With optimization on, if evaluating A before B would require register spills that would not be required if B is evaluated first, B would be evaluated first, and the code runs faster.
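For what it's worth, Rust (the thread's subject) sidesteps this particular class of surprise by defining the order: operands and function arguments are evaluated strictly left to right. A minimal sketch, with invented names:

```rust
use std::cell::Cell;

// Each call bumps the shared counter and returns its new value, so the
// result depends entirely on evaluation order.
fn observe(counter: &Cell<i32>) -> i32 {
    counter.set(counter.get() + 1);
    counter.get()
}

fn main() {
    let counter = Cell::new(0);
    // Tuple elements, like function arguments, are evaluated left to
    // right by language definition, so this is always (1, 2).
    let pair = (observe(&counter), observe(&counter));
    assert_eq!(pair, (1, 2));
}
```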

Rust for safety

Posted Jul 13, 2016 20:32 UTC (Wed) by roc (subscriber, #30627) [Link] (2 responses)

> By following some easy rules, C++ coding can be about equally memory-safe, although it's hard to impose those rules on others' code.

What are these rules?

The C++ Core Guidelines people are still working on theirs and aren't done yet (see my vapourware comment above). Have you already solved it?

Rust for safety

Posted Jul 13, 2016 23:47 UTC (Wed) by madscientist (subscriber, #16861) [Link] (1 responses)

C++ Core Guidelines are about a LOT more than memory safety. If you look at only the bits related to memory safety it's much more manageable and people have been working on it for a lot longer. The basic requirement is that you never allow a raw pointer in your code anywhere, and you have a set of smart pointers that encode ownership information. As mentioned above, though, it's almost impossible to make these rules universal. And, I'm not sure I'd call it "easy" (unless you just mean the concept rather than the implementation, which always involves a lot of trade-offs).

Rust for safety

Posted Jul 14, 2016 3:34 UTC (Thu) by roc (subscriber, #30627) [Link]

I was specifically referring to the memory safety part of the Core C++ Guidelines. If you look at the discussion in my blog, regardless of how long they've been working on them, they're nowhere near done and it's not clear the approach is really going to work.

Rust for safety

Posted Jul 13, 2016 20:56 UTC (Wed) by lsl (subscriber, #86508) [Link] (2 responses)

> The place where C++ might never compete with Rust is in its implicit safety against data races -- threads misusing shared memory. These are extremely difficult to spot even for the most experienced coders

Spotting data races got really easy with ThreadSanitizer (included with recent LLVM and GCC). At least for normal userspace programs, all you need to do is compile with -fsanitize=thread.
It will immediately flag all races that occur during the execution of the program. The performance hit is negligible when compared with things like Valgrind.

Due to the way it works, tsan has zero false positives, making its output immensely useful. A possible catch is that it can only detect data races that actually happen during this particular run of the program, so a statically proven absence of data races would obviously be superior.

Really, everyone should use it on their code on a regular basis. The low performance hit makes that feasible and convenient. The other sanitizers, too, by the way.

Rust for safety

Posted Jul 14, 2016 1:49 UTC (Thu) by mathstuf (subscriber, #69389) [Link] (1 responses)

The problem with TSan is that it only exercises tested code paths. Are your error paths free of data races as well?

Rust for safety

Posted Jul 14, 2016 3:37 UTC (Thu) by roc (subscriber, #30627) [Link]

Right. Spotting some data races in some test runs is not nearly the same thing as knowing that the type system guarantees your program is data-race-free.
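A small Rust sketch of what that guarantee looks like in practice (the counter example is invented): sharing the counter across threads without synchronization is rejected at compile time, so the only version that builds at all is the correctly locked one.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Passing `&mut counter` into the spawned closures instead of this
    // Arc<Mutex<_>> would be a compile error: the closures would hold
    // overlapping mutable borrows, which the type system forbids.
    let counter = Arc::new(Mutex::new(0));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..1000 {
                    // The lock is the only way to reach the data.
                    *counter.lock().unwrap() += 1;
                }
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(*counter.lock().unwrap(), 4000);
}
```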

Rust for safety

Posted Jul 16, 2016 6:42 UTC (Sat) by marcH (subscriber, #57642) [Link]

> By following some easy rules, C++ coding can be about equally memory-safe, although it's hard to impose those rules on others' code. The place where C++ might never compete with Rust...

You meant: the other place where C++ might never compete with Rust either...

Rust for safety

Posted Jul 18, 2016 15:12 UTC (Mon) by ksandstr (guest, #60862) [Link] (3 responses)

>What makes Rust an easy choice is that, like C++, it imposes no overhead, and no runtime dependencies to complicate integration.

On the contrary, Rust imposes the worst kind of overhead possible: that on implementation. For programs written in C++ that are already down with the big-design-up-front philosophy this is insignificant, but unsurprisingly most software isn't written in Big Design C++.

That is to say, Rust's borrow-checking discipline fails to support random hacks. This is a major failure, since most software arises from said hacks in one way or another.

Rust for safety

Posted Jul 18, 2016 18:42 UTC (Mon) by rahulsundaram (subscriber, #21946) [Link] (1 responses)

> That's to say, Rust's borrow-checking discipline fails to support random hacks.

Not sure what you mean? Random hacks can be unsafe

https://doc.rust-lang.org/book/unsafe.html
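A minimal sketch of the escape hatch the linked chapter describes (the example itself is invented): the hack is still possible, but it has to be spelled `unsafe`, and the compiler holds you to the normal rules everywhere else.

```rust
fn main() {
    let x: u32 = 0x2a;
    // Creating a raw pointer is safe; only dereferencing it requires
    // an `unsafe` block, which marks exactly where the compiler is
    // trusting the programmer instead of the borrow checker.
    let p = &x as *const u32;
    let y = unsafe { *p };
    assert_eq!(y, 42);
}
```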

Rust for safety

Posted Jul 19, 2016 2:29 UTC (Tue) by mathstuf (subscriber, #69389) [Link]

The compiler will assume you've enforced the rules of Rust though. No different than playing around in Python's C API.

Rust for safety

Posted Jul 19, 2016 2:33 UTC (Tue) by mathstuf (subscriber, #69389) [Link]

On the contrary. I've coded up hacks in Haskell by prototyping in Haskell's type system. If it compiles, I'm much more confident it will work than with some open-coded hack. It can enforce that error codes are handled (or explicitly ignored). If my type signatures are correct, I know I am passing the right information without unnecessary leakage. A project which starts with this small boost is much easier to maintain in the future should it persist past the hack stage (which many projects do).

Holy bubbles batman.

Posted Jul 13, 2016 6:30 UTC (Wed) by oldtomas (guest, #72579) [Link] (19 responses)

My solution is: I long ago taught my browser to *not* show chameleons popping bubbles.

My main beef with mozilla is that they are making this path ever more difficult. The hoops one has to jump through to disable Javascript have become ridiculously baroque (I keep several profiles for that, wtf?), and the cookie thing is slowly getting out of sight more and more.

Correspondingly, attention dilution^W^W ad industry (which tends to call itself "content") more and more assumes that consumer's browsers are "full on", closing this virtuous circle.

If I have a beef with Mozilla, it's this complicity. I used to trust the Mozilla browser unconditionally (the intentions: not always necessarily the implementation). These days I don't.

Holy bubbles batman.

Posted Jul 13, 2016 7:12 UTC (Wed) by oever (guest, #987) [Link] (12 responses)

I'm using three extensions in Firefox: Noscript, RequestPolicy and CookieMonster. With these, fine-grained permissions for sites can be set via the context menu.

NoScript limits JavaScript execution to domains you white list.

RequestPolicy controls what cross-domain requests are allowed.

CookieMonster lets you turn on cookies or domain cookies on sites easily.

All three allow temporary white listing which expires when the browser quits.

Holy bubbles batman.

Posted Jul 13, 2016 8:44 UTC (Wed) by oldtomas (guest, #72579) [Link] (8 responses)

Yes, and thank you (really!) for this fine list.

What I was trying to do is to point at the more social/political problem. By slowly shifting the defaults, this creates more incentives to make "rich" webpages where it wouldn't be necessary, and this shifts control from the user to the Page that Be. And that, again, motivates the browser people to up the ante, thus gradually spiralling out of control.

Now what could be done better? How? I don't know yet. But watching Mozilla depresses me, because it is the only candidate to be on the user's side, and I miss this kind of thinking there.

I mean: "enlarge this image" needs Javascript these days! (yah, I know how to "view selection source" and pull the link to the bigger image from the mumbo-jumbo, but hey).

Holy bubbles batman.

Posted Jul 13, 2016 9:47 UTC (Wed) by josh (subscriber, #17465) [Link] (1 responses)

> By slowly shifting the defaults, this creates more incentives to make "rich" webpages where it wouldn't be necessary

*Not* shifting the defaults creates more incentives to replace the browser with one that does, or to add plugins. The web of 1999 wasn't any safer; it had Flash video instead of HTML5 video, and Flash had a much *larger* attack surface area. Today, it's completely reasonable for the vast majority of people to not have Flash installed at all.

Browsers are not going to help you stick with the web of 1999. People don't just build static web pages, they build web apps too. And there's no hard line between "web page" and "web app".

Holy bubbles batman.

Posted Jul 13, 2016 9:54 UTC (Wed) by josh (subscriber, #17465) [Link]

Also, closely related to those "shifting defaults": the reason Firefox made it significantly harder to disable JavaScript was that many people were going through the Firefox settings, turning off JavaScript, then later finding websites broken and not making any connection to the setting they changed. Various forms of telemetry helped identify this problem. The solution was to make it less likely that someone would disable JavaScript without understanding the implications.

Holy bubbles batman.

Posted Jul 13, 2016 9:50 UTC (Wed) by oever (guest, #987) [Link]

I write this as I'm taking a break from cleaning up some horrible JavaScript code using JSLint and Closure Compiler. Not only is the JavaScript on the web usually not necessary, in 99% of cases it is totally horrible code. The horror starts with the language itself.

The web is a public place. Anyone can set up a server. Anyone can claim to be a web developer. Browsers are more forgiving than the Pope. The result is a lot of diverse creations, often the result of copy and pasting another site. This is how the web has always lived.

A drop in the bucket, perhaps, but one thing we can do is advocate simple websites and send mails with helpful suggestions to sites that (over)use JavaScript. The JavaScript Trap [1] explains the problem quite well for those in the know about Free Software. I've not seen a convincing advocacy site for simple websites yet, but have not looked very hard either.

Thanks for the tip on 'view selection source'! It's a great Firefox feature. I never noticed that before and was using ctrl-u up till now.

[1] https://www.gnu.org/philosophy/javascript-trap.en.html

Holy bubbles batman.

Posted Jul 13, 2016 19:50 UTC (Wed) by mjthayer (guest, #39183) [Link] (4 responses)

oldtomas:
> What I was trying to do is to point at the more social/political problem. By slowly shifting the defaults, this creates more incentives to make "rich" webpages where it wouldn't be necessary, and this shifts control from the user to the Page that Be. [...] Now what could be done better?

PrivacyBadger blocks resources known to be bad for your privacy. AdBlockPlus blocks resources known to show you intrusive advertising. What about something to block known unnecessarily rich web pages and resources with a warning like "This page is badly designed. Do you really want to view it? Yes/no/always for this page." It would take a bit of effort to get the balance right though.

Holy bubbles batman.

Posted Jul 13, 2016 23:13 UTC (Wed) by JoeBuck (subscriber, #2330) [Link] (3 responses)

Clearly a nearly complete PC emulator in Javascript must be poor design, as must be the now-common trick that dynamically shows older content as you scroll off the bottom, decreasing server load from those users who only want to see the newest content (with the static web you have to choose how many blog entries or photo thumbnails appear per page).

The web has changed; turning off Javascript means that you won't see most of the content, or possibly you'll just see the bait set out for search engines. Sorry.

Holy bubbles batman.

Posted Jul 14, 2016 0:11 UTC (Thu) by flussence (guest, #85566) [Link]

>The web has changed; turning off Javascript means that you won't see most of the content, or possibly you'll just see the bait set out for search engines. Sorry.
“Turning off” means revoking default remote code execution privileges, not removing the concept from existence. The only ones punished here are websites that use active content out of malice or incompetence — they'll fail at whatever assault on security, battery life or good taste they were planning, with a corresponding loss of site visitors. As it should be.

Holy bubbles batman.

Posted Jul 14, 2016 15:50 UTC (Thu) by cwitty (guest, #4600) [Link] (1 responses)

Yes, showing older content as you scroll off the bottom is bad design, at least if that's the only way to navigate (which seems to be common). It makes it more or less impossible to see something from a year ago. (For another example, for years it's been effectively impossible for me to use the Barnes&Noble "My Library" to see what e-books I have... I'd have to sit there and hold page-down for an unbearably long time to see the whole list, and I suspect the web browser would crash first.)

Holy bubbles batman.

Posted Jul 14, 2016 16:41 UTC (Thu) by pizza (subscriber, #46) [Link]

> I'd have to sit there and hold page-down for an unbearably long time to see the whole list, and I suspect the web browser would crash first.

Things definitely misbehave and/or crash when you're using a memory-constrained system (eg smartphone or tablet). Of course, when you complain, you're just told "use the app instead" which makes me grind my teeth...

Holy bubbles batman.

Posted Jul 13, 2016 15:39 UTC (Wed) by sce (subscriber, #65433) [Link] (1 responses)

> NoScript limits JavaScript execution to domains you white list.
>
> RequestPolicy controls what cross-domain requests are allowed.
>
> CookieMonster lets you turn on cookies or domain cookies on sites easily.

Also worth mentioning is uMatrix (I use it instead of those three plugins).

Holy bubbles batman.

Posted Jul 13, 2016 20:14 UTC (Wed) by flussence (guest, #85566) [Link]

I use uMatrix on Chromium; it's an impressive hack, but this implementation leaves a lot to be desired. It works by injecting Content-Security-Policy headers to prevent things from ever loading or running, but doing so completely breaks how the browser handles <noscript> tags, since it thinks scripts are still enabled. I'm not sure if Firefox does it any better today, but it'll probably be running identical code soon as they deprecate their own APIs.

The cookie/localStorage handling is a bit lame too, it's no substitute for the Self-Destructing Cookies extension. Not enough of a difference to switch browsers over though.

Holy bubbles batman.

Posted Jul 15, 2016 22:37 UTC (Fri) by mathstuf (subscriber, #69389) [Link]

AFAIK, RequestPolicy is no longer maintained and the advanced setup for uBlock Origin is the preferred way. I also use Self-Destructing Cookies, but NoScript is replaced by uBlock Origin and uMatrix. Sure, noscript tags are broken, but it is one less extension to juggle too.

Holy bubbles batman.

Posted Jul 13, 2016 10:46 UTC (Wed) by Zack (guest, #37335) [Link] (5 responses)

> My main beef with mozilla is that they are making this path ever more difficult.

It's a logical extension of how Mozilla pushes a multi-tier model of the Web where you have "publishers" and mere "users" who should have different rights when it comes to controlling their content.

Firefox users are a product now. The sooner Mozilla disappears the sooner this cognitive dissonance that exists between the Firefox browser and "the open web" can be rectified.

Holy bubbles batman.

Posted Jul 13, 2016 11:24 UTC (Wed) by josh (subscriber, #17465) [Link] (4 responses)

I think you have Mozilla confused with *every other browser maker*, considering that Mozilla is specifically the one continuing to push for the Open Web. Making the web more capable, and enabling more interesting uses of the web, makes it more likely that people will build things with web technologies.

Holy bubbles batman.

Posted Jul 13, 2016 12:07 UTC (Wed) by Zack (guest, #37335) [Link] (3 responses)

The difference between Mozilla and every other browser maker is that whenever the open web is in jeopardy, they put out a press release explaining that this is not the hill to die on, and that resisting would cost them part of their userbase, which in turn would give them less clout to resist detrimental changes to the open web down the road; when those changes actually arrive, they become the next hill not to die on.

It's become a Ponzi scheme of goodwill, where existing users are reassured that the open web will eventually happen thanks to the extra clout that new users (attracted by implementing this or that popular restriction) will give them.

The only difference between Mozilla and their competitors is their mission statement, which is basically a PR tool at this point, riding on the coat-tails of its illustrious past.

Holy bubbles batman.

Posted Jul 13, 2016 12:21 UTC (Wed) by pizza (subscriber, #46) [Link]

> The only difference between Mozilla and their competitors is their mission statement, which is basically a PR tool at this point, riding on the coat-tails of its illustrious past.

Mozilla at least *attempts* to do the right thing most of the time.

The others have reached the point where they don't even try to hide the fact that they're screwing users over any more.

Holy bubbles batman.

Posted Jul 13, 2016 20:46 UTC (Wed) by roc (subscriber, #30627) [Link]

Mozilla is constantly taking concrete actions to help the open Web, e.g. helping to build better Web standards by providing independent implementations of new features that shake the ambiguities out of the specs, and providing a mobile browser engine on Android that isn't Webkit-derived. Another example: Mozilla recently spawned Let's Encrypt.

Holy bubbles batman.

Posted Jul 15, 2016 15:04 UTC (Fri) by drag (guest, #31333) [Link]

There is a limit to what Mozilla can do.

If you want to have a 'open web' then it's going to require new technologies (IPFS, etc) and the demand from web users to make it happen.

Herman: Shipping Rust in Firefox

Posted Jul 13, 2016 7:05 UTC (Wed) by eduard.munteanu (guest, #66641) [Link] (20 responses)

The way software *should* be written goes like this: first you come up with a safe implementation, then you optimize it. That means using a safe language for the most part and resorting to low-level code on an as-needed basis.

What happens instead is you write the whole thing in One True Language because, *obviously*, you need that matrix multiplication to be fast and you "know what you're doing" anyway. And when things blow up over and over, you finally consider "promoting" some code to a safe language. Quite the opposite way around, don't you think? Especially because you value correctness and security so much and all that.

And while we see projects getting completely rewritten because the code becomes unmanageable, I think we have yet to see optimization as the reason for one. (Well, you could cite things like systemd, but bash scripts don't really count as a sane safe language in the first place.)

Herman: Shipping Rust in Firefox

Posted Jul 13, 2016 7:48 UTC (Wed) by oever (guest, #987) [Link] (19 responses)

And before Rust, what safe language would that be?

I know of no other mainstream safe language. Java and C# avoid memory corruption but still allow plenty of ways to make simple errors. Any language without static typing is not safe.

Besides Rust, Haskell is the only usable (though not mainstream) language I know of that makes many errors impossible. The functional programming style has kept Haskell out of the mainstream.

Rust and Haskell are both great but do not come with safe GUI libraries yet. Interfacing between languages adds a lot of extra code with potential for bugs.

Herman: Shipping Rust in Firefox

Posted Jul 13, 2016 9:08 UTC (Wed) by ballombe (subscriber, #9523) [Link] (1 responses)

> The functional programming style has kept Haskell out of mainstream.

and every day we see mainstream programming languages aping Haskell features without understanding the underlying context.

Herman: Shipping Rust in Firefox

Posted Jul 13, 2016 9:49 UTC (Wed) by josh (subscriber, #17465) [Link]

That's one of many things I like about the Rust community: many of the core Rust developers have a strong understanding of Haskell, and intentionally design Rust to incorporate some of its best features.

Herman: Shipping Rust in Firefox

Posted Jul 13, 2016 12:03 UTC (Wed) by areilly (subscriber, #87829) [Link] (1 responses)

Much as I'm inclined to like rust, and I do hope it succeeds, you should remember that it is possible to encode the wrong algorithm in any language.

There are actually only a couple of languages that allow the buffer-overflow bug pattern (yes, they're popular), but bugs have still been written in all of the others. The Wirthian languages and Ada have counted strings and sized vectors, as do the lisps. The most popular language of today (arguably) does too (Java). There aren't as many that do it without garbage collection and the runtime problems that come with that.
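The counted-strings-and-sized-vectors point, in Rust terms: slices carry their length, so out-of-bounds access is caught rather than silently reading past the buffer. A small sketch:

```rust
fn main() {
    let buf = [10u8, 20, 30];
    // Checked access returns an Option instead of reading out of bounds.
    assert_eq!(buf.get(1), Some(&20));
    // Past the end: None, not a read of adjacent memory.
    assert_eq!(buf.get(7), None);
    // Plain indexing `buf[7]` would panic at runtime rather than overflow.
}
```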

Herman: Shipping Rust in Firefox

Posted Jul 13, 2016 20:56 UTC (Wed) by roc (subscriber, #30627) [Link]

Rust goes much much further than preventing the buffer-overflow bug pattern.

Apart from the memory-safety features, the ownership invariants give protection from iterator-invalidation and similar bugs, and from data-race bugs. And you can design APIs that provide their own protections, e.g. leveraging the fact that if a function takes a parameter of type "Foo", then it takes sole ownership of Foo and no surviving aliases to Foo can exist in the caller. This lets you write statically-checked stateful APIs.
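A minimal sketch of that by-value-parameter point, using a hypothetical builder API (not from any real library): each method consumes `self`, so no alias to the pre-call state can survive, and reuse after `finish` is a compile error.

```rust
// Hypothetical one-shot builder: ownership flows through each call.
struct MessageBuilder {
    parts: Vec<String>,
}

impl MessageBuilder {
    fn new() -> Self {
        MessageBuilder { parts: Vec::new() }
    }
    fn push(mut self, s: &str) -> Self {
        self.parts.push(s.to_string());
        self // the caller's old handle is gone; only this one remains
    }
    fn finish(self) -> String {
        // Consumes the builder; calling `finish` twice cannot compile.
        self.parts.join(" ")
    }
}

fn main() {
    let msg = MessageBuilder::new().push("hello").push("world").finish();
    assert_eq!(msg, "hello world");
    // Using the builder after `finish` would be rejected: "value moved",
    // which is exactly the alias-freedom described above.
}
```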

Also, Rust makes integer overflow an error, and by default checks for those errors in debug builds. Hopefully this will be extended to release builds if/when the performance penalty can be minimised. Of Rust's competitors, only Swift provides this.
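The overflow behaviour described above can also be made explicit in every build with the standard checked/wrapping integer methods; a small sketch:

```rust
fn main() {
    let big: u8 = 250;
    // checked_add reports overflow as an Option in debug *and* release builds.
    assert_eq!(big.checked_add(10), None);
    assert_eq!(big.checked_add(5), Some(255)); // fits exactly
    // wrapping_add opts back into two's-complement wraparound on purpose.
    assert_eq!(big.wrapping_add(10), 4);
    // A plain `big + 10` panics in debug builds, as described above.
}
```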

Herman: Shipping Rust in Firefox

Posted Jul 13, 2016 13:49 UTC (Wed) by jezuch (subscriber, #52988) [Link] (1 responses)

> Java and C# avoid memory corruption but still allow plenty of ways to make simple errors.

I don't know about C#, but more specifically: Java was (like, 20 years ago) advertised as having concurrency built in, but it doesn't have any real safeguards against improper use of that concurrency. It's all synchronization primitives, or classes like ConcurrentSkipListMap, which don't really make it easy to create race-free programs. You have to think hard, and I mean *hard*, about concurrency and synchronization and races. Java 8's parallel Streams are much, much better in this regard, though (streams are similar to what Rust calls iterators, and parallel streams are what the rayon crate implements).

I can only imagine that in C++ it's even worse :)
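The race-safety contrast above can be sketched in Rust without external crates: the compiler's Send/Sync checks make this scoped parallel sum race-free by construction. (The rayon crate mentioned above would spell roughly the same thing as `data.par_iter().sum()`; this dependency-free sketch uses `std::thread::scope` from Rust 1.63+.)

```rust
use std::thread;

// Split the slice and sum the halves on two scoped threads. The borrows of
// `left` and `right` are checked by the compiler: no data race is possible.
fn parallel_sum(data: &[u64]) -> u64 {
    let (left, right) = data.split_at(data.len() / 2);
    thread::scope(|s| {
        let l = s.spawn(|| left.iter().sum::<u64>());
        let r = s.spawn(|| right.iter().sum::<u64>());
        l.join().unwrap() + r.join().unwrap()
    })
}

fn main() {
    let data: Vec<u64> = (1..=100).collect();
    assert_eq!(parallel_sum(&data), 5050);
}
```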

Herman: Shipping Rust in Firefox

Posted Jul 19, 2016 8:49 UTC (Tue) by marcH (subscriber, #57642) [Link]

> I can only imagine that in C++ it's even worse :)

Despite[*] being much older, C and C++ got a proper [shared] memory model defined in their respective standards 7 years *after* Java did.

[*] or... because of it?!

Herman: Shipping Rust in Firefox

Posted Jul 13, 2016 14:47 UTC (Wed) by eduard.munteanu (guest, #66641) [Link]

Even Java would have saved people a whole lot of pain, security-wise. That's the stuff we care about most, namely arbitrary code execution (due to e.g. buffer overflows), not denial of service. The same is true for a lot of other languages, no matter how much I dislike them for other reasons.

We also have the MLs (ATS[1] deserves a big mention here) and a bunch of others. But I think that's beside the point, because if people cared more about safety, those ecosystems would certainly have developed better.

[1] https://en.wikipedia.org/wiki/ATS_(programming_language)

Herman: Shipping Rust in Firefox

Posted Jul 13, 2016 17:06 UTC (Wed) by tuna (guest, #44480) [Link] (1 responses)

"Rust and Haskell are both great but do not come with safe GUI libraries yet. Interfacing between languages adds a lot of extra code with potential for bugs."

I do not think anyone wants to rewrite Qt, GTK3, and other GUI libraries/platforms. If you want to build new languages and ecosystems, you have to be able to work with what exists today.

GUI less security sensitive

Posted Jul 14, 2016 4:40 UTC (Thu) by gmatht (guest, #58961) [Link]

GUI bugs are more likely to be annoying than a security hole. Secure codecs, servers etc. seem like a higher priority at the moment since their bugs are more obviously exploitable.

Herman: Shipping Rust in Firefox

Posted Jul 19, 2016 8:37 UTC (Tue) by marcH (subscriber, #57642) [Link] (9 responses)

> I know of no other mainstream safe language. Java and C# avoid memory corruption but still allow plenty of ways to make simple errors

Sure, what difference can one or two orders of magnitude more bugs make as far as security is concerned? A system is just secure or it's not, right? Nothing in between...

Herman: Shipping Rust in Firefox

Posted Jul 19, 2016 12:46 UTC (Tue) by mathstuf (subscriber, #69389) [Link] (8 responses)

So one thing I don't really understand is why memory is put on a pedestal in these languages. It's not the only resource that needs to be managed, so why doesn't the language help me with those resources at all?

Not that your point is invalid, but sometimes the resources you're managing aren't less important than memory, so proper RAII is more useful than a garbage collector.

Herman: Shipping Rust in Firefox

Posted Jul 19, 2016 15:05 UTC (Tue) by marcH (subscriber, #57642) [Link] (7 responses)

> on a pedestal

More like: in the foundation.

Memory is the lowest-level, most basic resource. The lower-level a programming language is, the closer it is to being a memory management system. C is little more than a (manual and tedious) memory manager.

Also, isn't memory the only local resource that the language has exclusive control over? Except for memory-mapped I/O, which is rarely seen in user space.

> sometimes the resources you're managing aren't less important than memory,

Even from a pure security standpoint?

Herman: Shipping Rust in Firefox

Posted Jul 19, 2016 15:57 UTC (Tue) by mathstuf (subscriber, #69389) [Link] (5 responses)

If I have a temporary directory, I might like it to be cleaned up no matter how the function exits (except things like poweroff, abort() or SIGKILL; not much anyone can do there). Currently, I have to explicitly try/catch or try/else for this to happen since the language doesn't let me use its memory management facilities for any other resource. Why can't I just have an object which cleans stuff up whenever it goes out of scope, not just when I remember to try/catch around its usage?

> Even from a pure security standpoint?

Yes. Memory problems indicate other errors in the program which can be fixed. An ideal program (which I admit is an extremely rare find) wouldn't care whether it sat on a GC or some RAII setup with respect to memory. However, the RAII setup allows me to ensure that my program cleans up after itself on the filesystem, over the network, or whatever other resource might need managing; a GC setup leaves me back in the world of C's memory management assistance for such things. I know D's GC doesn't guarantee destructors are called (genius, that; probably something about the undecidability of the proper call order at program shutdown), but C# and Java don't even *have* destructors for such things.

Herman: Shipping Rust in Firefox

Posted Jul 19, 2016 19:37 UTC (Tue) by Wol (subscriber, #4433) [Link] (1 responses)

> If I have a temporary directory, I might like it to be cleaned up no matter how the function exits (except things like poweroff, abort() or SIGKILL; not much anyone can do there). Currently, I have to explicitly try/catch or try/else for this to happen since the language doesn't let me use its memory management facilities for any other resource.

Which language is that? I remember (in the early 80s) using Fortran77 and getting exactly the functionality you're after (except it was buggy :-) The Ops staff swore at me because I accidentally deleted several important files before I realised what was happening and they had to restore them from backup ... :-) (You could declare a file as temporary, and it was deleted on close. The problem was, if you re-used that file descriptor, it ignored any attempt to declare the new file permanent, and deleted that on close, too :-)

The problem with programs is they like to open files anywhere. If they followed the standards and opened temporary files in /tmp, or c:\temp, or wherever they were supposed to, then the operating system would clean up behind them (any real OS that is, I don't think WinDos does it properly).

Cheers,
Wol

Herman: Shipping Rust in Firefox

Posted Jul 19, 2016 20:37 UTC (Tue) by mathstuf (subscriber, #69389) [Link]

I'm more thinking where there's a directory I created that should not persist outside of the function which created it; claiming ownership of files which already exist would probably not use such an RAII object the same way. `TempDir foo = TempDir::new()` versus `TempDir bar = TempDir::take(path)`.

Herman: Shipping Rust in Firefox

Posted Jul 20, 2016 6:55 UTC (Wed) by oever (guest, #987) [Link] (2 responses)

> If I have a temporary directory, I might like it to be cleaned up no matter how the function exits (except things like poweroff, abort() or SIGKILL; not much anyone can do there).

In C++, you can clean up in the destructor of an object. This is simple RAII. In Java, you could clean up in the finalize() function. The C++ destructor is called immediately when the scope ends; finalize() is called by the garbage collector, whenever it happens to run. In Java and C#, most variables are pointers to objects, and Java has no way to automatically call a function when the last pointer to an object leaves scope; neither does C#.

A refreshing aspect of Rust is that the syntax for dealing with objects is nice and safe. It has the best of both worlds: no need for manual memory deallocation but the ability to run any type of cleanup when the last reference to an object leaves scope. Defining a destructor is optional and done via the Drop trait.
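A minimal sketch of that Drop-based cleanup for the temporary-directory case above (`TempDir` here is a hypothetical type written inline, not a real crate): the directory is removed when the handle goes out of scope, however the function exits.

```rust
use std::fs;
use std::path::PathBuf;

// Hypothetical RAII handle: owns a directory on disk.
struct TempDir {
    path: PathBuf,
}

impl TempDir {
    fn new(path: &str) -> std::io::Result<TempDir> {
        fs::create_dir_all(path)?;
        Ok(TempDir { path: PathBuf::from(path) })
    }
}

impl Drop for TempDir {
    fn drop(&mut self) {
        // Runs on normal return *and* on panic-unwind.
        let _ = fs::remove_dir_all(&self.path);
    }
}

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("lwn_tempdir_demo");
    {
        let _dir = TempDir::new(path.to_str().unwrap())?;
        assert!(path.exists());
    } // `_dir` dropped here; the directory is gone
    assert!(!path.exists());
    Ok(())
}
```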

Herman: Shipping Rust in Firefox

Posted Jul 20, 2016 7:01 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link] (1 responses)

> In Java, you could clean up in the finalize() function.
No, you cannot. The finalize() method is only guaranteed to run some time in the future, like the next hour or the next day.

The way to manage resources in Java now is try-with-resources statement.

Herman: Shipping Rust in Firefox

Posted Jul 20, 2016 9:12 UTC (Wed) by farnz (subscriber, #17727) [Link]

It's not even guaranteed to be called in Java; it's called if the GC determines that this object has no references and is thus eligible for collection.

In practice, this means that if your process exits before the GC decides that your object is ready for collection, the finalizer is not called at all - e.g. because there happens to have been no memory pressure between the object being allocated, and the process exiting cleanly for a restart.

Herman: Shipping Rust in Firefox

Posted Jul 19, 2016 16:57 UTC (Tue) by excors (subscriber, #95769) [Link]

At the bottom level of language (/language runtime) implementation, dynamically-allocated memory is just a limited resource provided by the OS from sbrk/mmap syscalls. Similarly, file descriptors are a limited resource provided by the OS from open syscalls. Same for threads, GPU memory, network sockets, etc. Why should dynamically-allocated memory get so much more special treatment than any of those other resources that look so similar? (And that's before looking at resources provided by other processes rather than the OS.)

Garbage collection is simulating a computer with an infinite amount of memory. There are some languages that simulate a computer with a near-infinite capacity for threads, using green threads etc, but many don't. In theory a language could probably simulate support for an infinite number of file descriptors, but in practice they never seem to bother - if you try to open ~1024 files they'll happily return a "too many open files" error and let you solve the problem yourself. (Sometimes they'll go part way by GCing file objects to close ones they know can never be accessed in the future, but that only really works in languages with a refcounting GC or with RAII, and if you might access those files again then you're left to manually implement some pooling scheme yourself, so the language still isn't helping as much as it could.)

Nowadays virtual address space pretty much is infinite, and the cost of physical memory seems to still be decreasing exponentially, but many of the other limited resources have remained constant for ages. So I think an exclusive focus on managing memory is not good enough for a modern language - they really ought to assist with managing those other resources just as well.

Herman: Shipping Rust in Firefox

Posted Jul 13, 2016 8:17 UTC (Wed) by pabs (subscriber, #43278) [Link] (1 responses)

Anyone know how long the non-Rust ESR will be supported for?

Herman: Shipping Rust in Firefox

Posted Jul 13, 2016 9:50 UTC (Wed) by josh (subscriber, #17465) [Link]

That sounds like the kind of question often answered with "what are you trying to do?".

For instance, are you concerned about distribution packaging logistics? That's currently being worked on in several distributions. (And this'll be a nice forcing function to make sure Rust gets packaged everywhere.)

Herman: Shipping Rust in Firefox

Posted Jul 13, 2016 10:53 UTC (Wed) by adobriyan (subscriber, #30858) [Link] (3 responses)

Misspelling "OK()" is unforgivable.

Herman: Shipping Rust in Firefox

Posted Jul 13, 2016 20:48 UTC (Wed) by roc (subscriber, #30627) [Link] (1 responses)

Rust adopted the convention of capitalizing only the first character of acronyms. This keeps long identifiers containing acronyms readable, and consistency is very important.
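For the record, the identifier in question is the `Ok` variant of Rust's `Result`; a tiny example:

```rust
// `parse` returns Ok(port) on success or Err(e) on failure.
fn parse_port(s: &str) -> Result<u16, std::num::ParseIntError> {
    s.parse::<u16>()
}

fn main() {
    assert_eq!(parse_port("8080"), Ok(8080));
    assert!(parse_port("notaport").is_err());
}
```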

Herman: Shipping Rust in Firefox

Posted Jul 16, 2016 20:00 UTC (Sat) by JanC_ (guest, #34940) [Link]

They should have used small caps instead… :)

Herman: Shipping Rust in Firefox

Posted Jul 14, 2016 13:06 UTC (Thu) by Wol (subscriber, #4433) [Link]

> Misspelling "OK()" is unforgivable.

That's Okay()

Cheers,
Wol


Copyright © 2016, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds