Herman: Shipping Rust in Firefox
One of the first groups at Mozilla to make use of Rust was the Media Playback team. Now, it’s certainly easy to see that media is at the heart of the modern Web experience. What may be less obvious to the non-paranoid is that every time a browser plays a seemingly innocuous video (say, a chameleon popping bubbles), it’s reading data delivered in a complex format and created by someone you don’t know and don’t trust. And as it turns out, media formats are known to have been used to trick decoders into exposing nasty security vulnerabilities that exploit memory management bugs in Web browsers’ implementation code. This makes a memory-safe programming language like Rust a compelling addition to Mozilla’s tool-chest for protecting against potentially malicious media content on the Web.
Posted Jul 12, 2016 23:08 UTC (Tue)
by flussence (guest, #85566)
[Link] (30 responses)
Along those lines, the X.509 parser would be a good candidate for replacement too...
Posted Jul 13, 2016 0:04 UTC (Wed)
by tialaramex (subscriber, #21167)
[Link] (29 responses)
For example ASN.1 defines a whole bunch of ways to write text. Nearly all of them are obsolete: either you want UTF8String (which is what it sounds like) or you'll be happy with IA5String (ASCII), and the confusing PrintableString is fine if you really must insist on it, despite it not doing what you probably wanted. But lots of CAs are still out there using BMPString and TeletexString and goodness knows what else. So mozilla::pkix has to carry around implementations of these incomplete and long obsolete text encodings, and, of course, they're all inevitably abused so that it's necessary to also carry workarounds for common "mistakes" like writing ISO-8859-N or 8-bit Windows encodings into a field that's supposedly PrintableString because your company never quite caught up to Unicode...
Anyway there probably isn't a great deal of enthusiasm for doing it all again so soon.
Posted Jul 13, 2016 3:29 UTC (Wed)
by lambda (subscriber, #40735)
[Link]
So yes, they are working on getting the ASN.1 parsing ported to Rust. Still has some ways to go before it's ready to deal with all of the crazy types of certificates out in the wild, but it is an active project.
Posted Jul 13, 2016 19:20 UTC (Wed)
by mjthayer (guest, #39183)
[Link] (25 responses)
This is probably a really naive question, and off-topic to boot, but why can't trusted entities (preferably several, and independent) just keep databases of certificates which are known to be in use for some valid public purpose, and of known compromised certificates, rather than relying on PKI? I know that the second is already done by browser vendors to some extent, but I think it is more the exception than the rule (perhaps not). Then if, say, a web site had a self-signed certificate for doing HTTPS, users would not need to click away warnings as long as it was in the database they were checking against, and marked as matching the site's address. People manage to maintain virus signature databases, which I don't think is such a different problem.
Posted Jul 13, 2016 19:31 UTC (Wed)
by flewellyn (subscriber, #5047)
[Link] (24 responses)
What you just described IS the PKI.
Posted Jul 13, 2016 19:42 UTC (Wed)
by mjthayer (guest, #39183)
[Link] (23 responses)
flewellyn:
Then surely it would be a decentralised version. If you decide that the database you are relying on is doing a bad job it would be a couple of mouse clicks or key presses to switch to a different one. Is that possible with PKI as it stands?
Posted Jul 13, 2016 20:29 UTC (Wed)
by flussence (guest, #85566)
[Link]
Ideally the browser itself would be up front and honest, giving the user an informed choice and the tools to act on it, instead of training them to click through interstitial scare pages. That kind of progress is a pipe dream though.
Posted Jul 13, 2016 23:02 UTC (Wed)
by tialaramex (subscriber, #21167)
[Link] (21 responses)
An X509 certificate, like most real world certificates, is a signed document. In X509 the signature is performed using a combination of a hash function (often SHA-256 today) and a public key algorithm (often RSA). Anyone who knows a public key can do the mathematics on the signature using that public key, get back the hash value and check that it is the same as the hash of the certificate they were reading; if it is, this key signed the certificate.
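(For illustration, the check described above boils down to something like the following sketch; the Certificate type and the rsa_recover_digest/sha256 helpers are stand-ins for a real ASN.1/crypto library, not any particular API.)

    struct Certificate {
        tbs_bytes: Vec<u8>, // the "to be signed" portion covered by the signature
        signature: Vec<u8>, // the Issuer's signature over tbs_bytes
    }

    fn is_signed_by(cert: &Certificate, issuer_public_key: &[u8]) -> bool {
        // Undo the public-key operation on the signature to recover the digest...
        let recovered = rsa_recover_digest(issuer_public_key, &cert.signature);
        // ...and compare it with a fresh hash of the signed portion of the certificate.
        recovered == sha256(&cert.tbs_bytes)
    }

    // Placeholder stubs for the hypothetical crypto primitives used above.
    fn rsa_recover_digest(_key: &[u8], _sig: &[u8]) -> Vec<u8> { unimplemented!() }
    fn sha256(_data: &[u8]) -> Vec<u8> { unimplemented!() }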
So, every certificate is signed by exactly one key, we sometimes call this the "Issuer" of the certificate for obvious reasons. The contents of the certificate are a serial number, another public key (that of the "Subject" of the certificate), some identifying details for the Issuer (so you know if a public key you have should verify this certificate) together with whatever the Issuer is certifying about that Subject, such as (on the public web) their fully qualified domain name, or IP address and maybe the name of their business, the country in which it is registered and so on; and there will usually be some house-keeping stuff in there, which I'll get back to.
Of course you can sign your own certificate, so that the public key signed is the same one that verifies the signature. But why should anybody trust this certificate? How would anybody know which one is really "your" self-signed key, when anybody can claim the same?
In the traditional Web PKI the approach is that a relatively small number of trusted Certificate Authorities either act directly as Issuers, or they certify Issuers and your operating system and/or client software decides which CAs it will accept certificates from. If you don't trust, say, Symantec, too bad you now have no way to verify all the certificates they've issued.
What you're talking about is basically Moxie Marlinspike's "Convergence" in which you the end user ask one or more trusted third parties ("notaries") to check you've got the right certificate, and so anyone can issue any certificates but it won't matter unless the notaries say they're OK. There are some pretty awful problems with Convergence in practice and it is largely moribund.
1. It needs online real time verification. In practice on the web things break, servers go off-line, networks get partitioned. Loads of people who think they have "Internet access" actually don't, there's a middle box and it would of course get 100% carte blanche to lie about Convergence answers, or if it doesn't you need another entire PKI to manage Convergence and Now You Have Two Problems.
1a. The Web PKI already has (switched off) real time verification. OCSP is supposed to be a real time online verification of certificates. But in fact many browsers switch it off completely, others "soft fail", treating a certificate as OK unless they get a "Not OK" answer in a limited amount of time. We ended up with OCSP Stapling to work around that - clients still get an OCSP response, but from the server they were trying to connect to, which must be up for them to succeed anyway. But you can't staple Convergence.
2. The other reason the Web PKI moved away from OCSP: The privacy implications are pretty bad. Your notaries end up knowing exactly what you're looking at, because you have to tell them in order to make use of their services. This is worse than for OCSP because you tell only the OCSP server for a particular certificate (see, housekeeping info) that you saw the certificate, whereas you tell all your trusted notaries for every site with Convergence.
3. Who are all these trusted third parties anyway? So long as Convergence users are just a handful of crypto nerds the reality is that the "trusted third parties" are a bunch of other crypto nerds. But if big players come into the game, they dominate, and soon the "agility" is all gone anyway.
4. In fact, who wants to be a trusted third party in this scenario anyway? End users won't pay you. That's an ugly fact but it's a fact. Ordinary Internet users are very resistant to paying. You and I are LWN subscribers, we're the rare minority. In the CA model the server operators pay to make this work. But in Convergence the reality was nobody pays, and as a result quality goes out the window.
The most viable part of Convergence survived as HPKP, key pinning (Marlinspike wanted to call something similar "TACK"). HPKP lets server operators promise that one of Convergence's assumptions is actually true for their site: that either their key, or a key that signed their key, is constant and won't change after your first visit. This is the "trust on first use" model familiar from SSH. If you visited an HPKP-enabled site once, successfully, and the operators of that site never make a bad mistake, then you're OK forever.
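(For reference, an HPKP policy is delivered as an ordinary HTTP response header, roughly like the following; the pin values are placeholders for base64-encoded SPKI hashes, and the header is wrapped here only for readability.)

    Public-Key-Pins: pin-sha256="<base64 hash of current key>";
                     pin-sha256="<base64 hash of backup key>";
                     max-age=5184000; includeSubDomains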
Now, as most SSH users soon find out, operators actually make a lot of mistakes, all the time. In SSH you work around these mistakes by manually editing a text file, or running an intimidating command-line tool. In HPKP it's much the same, except, how many web browser users do you know who are comfortable manually editing text files? Right. Fools deploying HPKP will effectively "brick" an entire FQDN, ensuring that thousands or millions of users will only ever see an uncancellable error message when they try to visit it, because the operators lost the key the site is pinned to. A bunch have been bricked already, and more happen every day.
HPKP is a great idea for high value sites with a group of competent administrators. Google, Microsoft.com, there are dozens, maybe there are even thousands, but in practice even though anybody _could_ do it, most of us almost certainly shouldn't.
Now, as a result of Certificate Transparency we do actually have a database of (most of) the trusted certificates for the Web PKI. But because of the afore-mentioned network reliability + privacy situation, you definitely shouldn't just insist on examining the CT monitors every time you visit a web site.
Posted Jul 14, 2016 1:21 UTC (Thu)
by nybble41 (subscriber, #55106)
[Link] (18 responses)
1. No more "universal" CA certificates. Each CA certificate is valid for one, and only one, TLD.
This means that CNNIC's CA certificate can only be used to sign for ".cn" domains, not ".com" or ".gov". A CA could have multiple distinct certificates for multiple TLDs, of course, but with this change browser makers and users would be able to choose whether to trust the CA or not on a TLD-by-TLD basis.
2. It should be possible to get your site certificate signed by multiple CAs, and present both signatures to the browser. Browsers should trust the certificate provided it has been signed by at least one trusted CA.
Assuming site operators take advantage of this ability, it would make it easier to revoke the certificates of malicious or negligent (but influential and widely-used) CAs without breaking the Web, since legitimate sites of any significance would tend to have one or more other CA signatures as backups.
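(A minimal sketch of the kind of check a browser could do under these two rules; the types and the tld_of helper are invented for illustration, and real name handling and chain validation are rather more involved.)

    use std::collections::{HashMap, HashSet};

    // Trust store: CA identifier -> set of TLDs that CA is trusted for.
    type TrustStore = HashMap<String, HashSet<String>>;

    fn tld_of(host: &str) -> &str {
        host.rsplit('.').next().unwrap_or("")
    }

    // Accept the certificate if at least one of its signatures comes from a CA
    // that is trusted for the host's TLD.
    fn is_acceptable(store: &TrustStore, host: &str, signing_cas: &[String]) -> bool {
        let tld = tld_of(host);
        signing_cas
            .iter()
            .any(|ca| store.get(ca).map_or(false, |tlds| tlds.contains(tld)))
    }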
Posted Jul 14, 2016 3:43 UTC (Thu)
by raven667 (subscriber, #5198)
[Link] (17 responses)
Posted Jul 14, 2016 12:58 UTC (Thu)
by Wol (subscriber, #4433)
[Link] (14 responses)
And, (imho perfectly okay, but guaranteed to upset privacy nuts,) if you're using a proxy then the proxy can sign the connection! This would kick up a warning in the browser saying "you are behind a proxy wall, and the proxy is looking at your connection". It should NOT be acceptable for the browser to enforce privacy for the user against the wishes (or legal needs) of the owner of the computer/connection. The user can then decide whether or not he wants his employer to snoop on the contents of his browsing session.
And of course, the proxy can be configured to not proxy banking sites etc, but this proposal would allow companies to comply with the law and check on stuff going in and out of their network, while allowing users to use secure sites securely!
Cheers,
Wol
Posted Jul 14, 2016 16:45 UTC (Thu)
by Lennie (subscriber, #49641)
[Link] (13 responses)
Yep, that is called DANE. You put the public key of the HTTPS-certificate of your webserver in DNS and sign your domain with DNSSEC.
Otherwise any active attacker can just change your DNS-packets and point a website to a HTTPS-webserver they control. So that is why we have DNSSEC.
Organization-wise, DNSSEC is similar to having one CA (DNS-root-nameservers & ICANN) with sub-CAs (TLD-operators), and the domain owners all have their own sub-CA.
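(For illustration, the DANE association is published as a TLSA record alongside the name it protects, roughly like this; the angle-bracket part is a placeholder for the SHA-256 digest of the server's public key.)

    _443._tcp.www.example.com. IN TLSA 3 1 1 <hex digest of the server's SubjectPublicKeyInfo>
    ; usage 3 = DANE-EE, selector 1 = SPKI, matching type 1 = SHA-256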
Lots of people say: sorry, the DNS root is operated by the US; we don't trust the US to be the source of trust. So, because of this and other reasons (like the Snowden documents), ICANN has done a lot of work to become independent of any country (the process is still underway). The obvious problem is that it could become the next FIFA (corruption) if there is too little accountability.
One solution to that problem could be if people working at the standards organizations (W3C/IETF, etc.) could develop a protocol based on a blockchain (think of something like Bitcoin which isn't controlled by anyone). Then maybe we could develop something that does not depend on trusting organizations. Namecoin tried to do something similar, but it isn't in widespread use.
Posted Jul 14, 2016 17:34 UTC (Thu)
by Cyberax (✭ supporter ✭, #52523)
[Link] (12 responses)
The beauty of DNSSEC is that the US controls only the root domain (.). They do NOT control top-level domains.
To intercept the connection, NSA/FBI/whatever would have to create a fake certificate and key for the first-level domain that you're using (for example, .io), sign it with the real root key, and then use this fake keypair to MITM the requests.
This would be VERY visible, normally you would have only a handful of keys for each TLD. They can be easily pinned and checked.
> One solution to that problem could be if people working at the standards organizations (W3C/IETF, etc.) could develop a protocol based on a blockchain (think of something like Bitcoin which isn't controlled by anyone).
Not a good idea. With Bitcoin if you lose access to your wallet then that's it. There's no way to restore it, it's lost forever.
With domain names if you lose your domain credentials, you'll still be able to regain access. It might involve stacks of documents and tons of telephone calls, but it's doable.
Posted Jul 14, 2016 17:48 UTC (Thu)
by Lennie (subscriber, #49641)
[Link] (9 responses)
That depends; let's say we did start to depend on such a system. You would have a validating DNS-resolver on your host (laptop/PC/phone). In that case most people wouldn't notice if NSA/FBI/whatever did a MITM between them and their upstream (caching) DNS-server, as long as the NSA/FBI/whatever also generated fake TLD-signatures, which is easy to do. Obviously not easy to do at a large(r) scale, but you are still putting all your eggs in one basket, and that had better be a good basket.
> > One solution to that problem could be if people working at the standards organizations (W3C/IETF, etc.) could develop a protocol based on a blockchain (think of something like Bitcoin which isn't controlled by anyone).
No, I meant something like Bitcoin for the root / TLDs might be a good idea.
Posted Jul 14, 2016 18:01 UTC (Thu)
by Cyberax (✭ supporter ✭, #52523)
[Link] (2 responses)
Before the root zone was signed, it had been common to sign side chains. And it's still possible to use custom roots of trust for specific TLDs.
It makes little sense for .com (it's managed by the US anyway), but it makes more sense for smaller TLDs.
> No, I meant something like Bitcoin for the root / TLDs might be a good idea.
Might make sense.
Posted Jul 14, 2016 18:28 UTC (Thu)
by Lennie (subscriber, #49641)
[Link] (1 responses)
I don't understand 100% what you mean, but if you are an attacker you won't be signing a whole TLD, if that was what you were implying; you would obviously be doing live signing.
Posted Jul 14, 2016 19:52 UTC (Thu)
by farnz (subscriber, #17727)
[Link]
But, if your operation is to remain stealthy, you need to sign every response I see for the duration of the appropriate TTLs; thus, instead of only needing to MITM one Internet access session plus compromise one trusted CA (which is all you need in the current CA/B Forum PKI setup), you need to MITM every DNS query I send or receive for a week (that being the TTL of DS records in the root). If you don't, you run the risk that I'll see the "real" key, and discover that there's perfidy afoot.
Posted Jul 14, 2016 18:31 UTC (Thu)
by farnz (subscriber, #17727)
[Link] (4 responses)
The challenge for the NSA is that I can cache keys in my validating resolver; if they want to (say) send me a fake .uk name, they've got to cope with the fact that I can cache a returned DS record for uk. for up to a week in the current setup. That means that either they've got to get the uk. key that I've cached, or they've got to maintain their spoof for at least a week before triggering their attack.
This doesn't make an attack impossible - but it considerably raises the bar.
Posted Jul 14, 2016 18:54 UTC (Thu)
by Cyberax (✭ supporter ✭, #52523)
[Link] (3 responses)
Not just _cache_, but completely override them. You can in fact just pull all of the TLD signatures and just use them instead of querying the root name servers for them.
Posted Jul 14, 2016 19:43 UTC (Thu)
by farnz (subscriber, #17727)
[Link] (2 responses)
True, but that implies that I'm paranoid enough to do that and keep updating the local copies when the TLD keys change (with appropriate verification).
The thing about automatic caching is that it's transparent to me, and it's a useful performance optimization (so I'd expect OSes to do a degree of it behind my back). If the NSA doesn't take it into account, they risk being unmasked by their own bad opsec.
Posted Jul 14, 2016 20:01 UTC (Thu)
by Cyberax (✭ supporter ✭, #52523)
[Link]
It's not too terribly complicated to package such keys in Fedora/Debian/... or provide a public service accessible over the Internet/TOR/...
Posted Jul 14, 2016 21:59 UTC (Thu)
by Wol (subscriber, #4433)
[Link]
But isn't that fairly easy? You pull down a set of "known good" TLD keys, and the system triggers an alert when they change, telling you to re-get the keys. Bit of a pain when they change unexpectedly, but the point is, not that it's secure or not, but that YOU ARE NOTIFIED when something changes.
Cheers,
Wol
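(A minimal sketch of that pin-and-alert idea, assuming some out-of-band way to obtain the current key digest for a TLD; fetch_current_digest below is a hypothetical placeholder, not a real API.)

    use std::collections::HashMap;

    fn check_pins(pins: &mut HashMap<String, String>, tld: &str) -> Result<(), String> {
        let seen = fetch_current_digest(tld);
        if let Some(pinned) = pins.get(tld) {
            if *pinned == seen {
                return Ok(());
            }
            // The key changed: notify instead of silently accepting it.
            return Err(format!(
                "key for .{} changed: pinned {}, now seeing {}; re-verify out of band",
                tld, pinned, seen
            ));
        }
        // First sighting: trust on first use and remember the digest.
        pins.insert(tld.to_string(), seen);
        Ok(())
    }

    fn fetch_current_digest(_tld: &str) -> String {
        unimplemented!("placeholder for an actual DNSSEC lookup")
    }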
Posted Jul 15, 2016 14:50 UTC (Fri)
by drag (guest, #31333)
[Link]
That is what certificate pinning for TLDs is for, as mentioned above.
When your localhost DNS connects to the network's DNS resolver it will obtain information for the TLD certificate. It will take note and remember the hash for that cert. If the cert changes later on for a MITM attack then the localhost DNS resolver will pick up on this and freak out.
Alternatively it wouldn't be too much of a burden for the distros to collect and ship TLD certificate information along with the localhost DNS resolver.
Cert pinning does work and it has caught MITM attacks against HTTPS when implemented in browsers.
The problem with this is that TLD certificate compromises would be a NIGHTMARE. Pinning can backfire because it can make it more difficult to deal with legit changes to certs.
Posted Jul 17, 2016 22:03 UTC (Sun)
by dkg (subscriber, #55359)
[Link] (1 responses)
Cyberax said:
> To intercept the connection, NSA/FBI/whatever would have to create a fake certificate and key for the first-level domain that you're using (for example, .io), sign it with the real root key, and then use this fake keypair to MITM the requests.
afaict, this is not actually the case. I believe it's possible for a zone signing key to sign any RRSET within the zone at any level, so there is no need to create a "fake" secondary key for the next hop down the tree; the root zone signing keys can just go ahead and sign the records for www.example.com directly. (I'm not saying that an attacker in control of the root signing keys would necessarily want to do this, just that I think it should technically be valid from the perspective of a DNSSEC validator.)
Cyberax also said:
> This would be VERY visible, normally you would have only a handful of keys for each TLD. They can be easily pinned and checked.
If this were true, we would have already seen people doing this publicly. However, public efforts in this direction (usually called "DNSSEC transparency") are only in their infancy. I welcome that sort of auditing work, though! The potential rate of turnover in these zones is very high (RRSIGs often have a short lifespan), so verifiably logging all RRs signed by a given DNSKEY over time is actually a potentially resource-intensive task. It only gets more expensive if you want to log the signatures from DNSKEYs in the subzones, too.
Posted Jul 17, 2016 22:16 UTC (Sun)
by Cyberax (✭ supporter ✭, #52523)
[Link]
Nope. DNSSEC keys are just that - keys. They don't use X.509 crap or anything complicated - the DNS standard directly defines the key encoding.
Signature validation is also straightforward - you get the parent's public key through a DNSKEY query and check your response. Then repeat it until you reach a locally available root of trust - it's completely hierarchical.
> If this were true, we would have already seen people doing this publicly. However, public efforts in this direction (usually called "DNSSEC transparency") are only in their infancy.
That's because nobody really cares, since DNSSEC is used only in a small number of domains. Even important domains like google.com are not signed.
Posted Jul 14, 2016 16:20 UTC (Thu)
by nybble41 (subscriber, #55106)
[Link] (1 responses)
The CAs can create all the certificates they want, just like anyone else; that's the easy part. These certificates won't be included in the operating system or browser's default trust stores unless the CA routinely issues proper certificates for the associated TLD. This changes the security implications considerably compared to the current situation where any CA can sign a certificate for any TLD and have it automatically trusted, even if the certificate is for "google.com" and the CA isn't considered trustworthy by anyone outside of China.
Posted Jul 14, 2016 19:34 UTC (Thu)
by raven667 (subscriber, #5198)
[Link]
But of course they all do and would continue to do so, which is why the overall risk is unchanged.
Posted Jul 15, 2016 7:24 UTC (Fri)
by mjthayer (guest, #39183)
[Link]
For simplicity I will limit this to using certificates to establish web site identity. It seems to me that the first problem to be solved here is establishing that the site one is communicating with securely is really the one one expected (modulo typing mistakes in URLs, such as "www.mybank.com.badsite.net"), and specifically not identifying the site's owners, nor the site's moral integrity. I would expect that this could be achieved using an off-line but regularly updated database mapping URLs to certificates which could be built up by automated crawling, probably using some ranking algorithm to keep the size down. This would be the white list.
The second would be a similar database of certificates known to be in use for bad purposes, the black list. Building this seems to me to be a similar problem to building a virus signature database, and presumably a similar level of quality would be expected. The black list would obviously take priority over the white list.
Would you expect the price tag of achieving this to be higher than that of similar databases which have been created?
Posted Jul 22, 2016 4:02 UTC (Fri)
by ras (subscriber, #33059)
[Link]
Posted Jul 14, 2016 6:13 UTC (Thu)
by briansmith (guest, #106424)
[Link] (1 responses)
http://hg.mozilla.org/mozilla-central/diff/9c4424920d74/s...
mozilla::pkix does gloss over some parts of names as long as those names aren't security-relevant as far as Firefox's threat model is concerned. In particular, it might allow some weird or malformed encodings of things like "Organizational Unit" because those fields don't affect whether or not the networking stack will trust the certificate.
Posted Jul 15, 2016 20:43 UTC (Fri)
by tialaramex (subscriber, #21167)
[Link]
Posted Jul 13, 2016 0:22 UTC (Wed)
by ncm (guest, #165)
[Link] (36 responses)
By following some easy rules, C++ coding can be about equally memory-safe, although it's hard to impose those rules on others' code. The place where C++ might never compete with Rust is in its implicit safety against data races -- threads misusing shared memory. These are extremely difficult to spot even for the most experienced coders, but the temptation of performance gains from unproven "lock-free" data structures is too strong to keep them out of common code, and it is all too easy to share memory accidentally, with undefined results.
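(A tiny example of the kind of accidental sharing Rust refuses to compile; the fix is to state the sharing explicitly, for example with an Arc<Mutex<Vec<i32>>>.)

    use std::thread;

    fn main() {
        let mut v = vec![1, 2, 3];
        // Rejected at compile time: the spawned closure cannot simply borrow `v`
        // (it might outlive it), and moving `v` into the thread would make the
        // later push below a use-after-move error. Either way, the data race
        // never gets built.
        let t = thread::spawn(|| v.push(4));
        v.push(5);
        t.join().unwrap();
    }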
Places that turn out to merit especial care, enforced by tools wherever possible, include decompression, media rendering, deserialization, and cryptosystem support apparatus. (The crypto itself poses unrelated challenges.)
Posted Jul 13, 2016 5:14 UTC (Wed)
by zlynx (guest, #2285)
[Link] (24 responses)
void f(const string &s) {
    g(s.substr(1, 7).c_str());
}
So innocent. So wrong.
Posted Jul 13, 2016 6:00 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link] (15 responses)
Banning shared mutable state seems entirely reasonable to me.
Posted Jul 13, 2016 6:13 UTC (Wed)
by zlynx (guest, #2285)
[Link] (14 responses)
Point is, Rust spots these bugs immediately.
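(A rough Rust analogue of the snippet above; it does not compile, because the String created by to_string() is a temporary that is dropped at the end of the expression, so the returned &str would dangle.)

    fn g(p: &str) -> &str {
        p // imagine g hands the pointer back (or stashes it), as in the C++ case
    }

    fn f(s: &str) -> &str {
        // Rejected: the temporary String does not live long enough to be returned.
        g(s[1..8].to_string().as_str())
    }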
Posted Jul 13, 2016 11:59 UTC (Wed)
by ianmcc (subscriber, #88379)
[Link] (13 responses)
... and Herb Sutter are working on a set of guidelines ...
http://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines
Posted Jul 13, 2016 19:28 UTC (Wed)
by mjthayer (guest, #39183)
[Link]
I imported that into LibreOffice as a very dirty way of estimating the length - it came to about 400 pages, and it is still a work in progress. That does give me a bit of a bad feeling.
Posted Jul 13, 2016 20:29 UTC (Wed)
by roc (subscriber, #30627)
[Link] (1 responses)
An appropriate term for "speculative product promise" is "vapourware", and I wrote a whole blog post about this situation: http://robert.ocallahan.org/2016/06/safe-c-subset-is-vapo...
Posted Jul 21, 2016 9:43 UTC (Thu)
by HelloWorld (guest, #56129)
[Link]
It's one of the better implementations according to Bjarne Stroustrup.
https://www.simple-talk.com/opinion/geek-of-the-week/bjar...
Posted Jul 14, 2016 4:55 UTC (Thu)
by torquay (guest, #92428)
[Link] (9 responses)
Anything proposed by Herb Sutter should be taken with a (large) grain of salt. He is pretty much the embodiment of the Ivory Tower establishment.
Firstly, his entire GoTW series perversely serves as a clear example of how C++ is overcomplicated and full of traps.
Secondly, he is employed at Microsoft, a company that has a shockingly bad C++ "compiler" (MSVC), notorious for being full of bugs and severely lacking in standards compliance. To this day it doesn't properly support C++11, and its C++98 compliance still isn't complete. A company with this track record should be nowhere near an ISO C++ standards process. (Let's also not forget the manipulation of the Office Open XML "standard".)
Posted Jul 14, 2016 20:23 UTC (Thu)
by epa (subscriber, #39769)
[Link]
I don't think the failings of Microsoft's C++ compiler are particularly relevant to Stroustrup and Sutter's safe-subset proposal.
Posted Jul 16, 2016 13:33 UTC (Sat)
by mathstuf (subscriber, #69389)
[Link] (3 responses)
While it has its quirks, it does catch warnings that are not caught by GCC or Clang. Most of its standards shortfalls are documented as such and are, generally, not fixable due to it being a 35 year old codebase (e.g., there are some edge cases where the parser just says "no" to valid constructs that will never be fixed due to the structure of the codebase; two-phase lookup (enable_if-area stuff) is also not implemented). There is now a Clang backend which has a cl-compatible command line interface which ships with the most recent versions of Visual Studio.
> severely lacking in standards compliance
They also don't claim to be compliant. For contrast, see Apple's Clang release where they *ripped out* TLS support (supposedly it is also a runtime failure; it compiles just fine). And they still claim to be compliant.
> A company with this track record should be nowhere near a ISO C++ standards process
Have you been to an ISO C++ meeting? No one company runs the show. Not by any stretch.
Posted Jul 16, 2016 16:37 UTC (Sat)
by pizza (subscriber, #46)
[Link]
Be that as it may, MSVC's "quirks" have historically made it quite challenging to maintain a cross-platform codebase.
Posted Jul 17, 2016 10:17 UTC (Sun)
by micka (subscriber, #38720)
[Link] (1 responses)
So it's roughly the same age as gcc. It's a shame those two old compilers can't be fixed!
Posted Jul 17, 2016 12:09 UTC (Sun)
by mathstuf (subscriber, #69389)
[Link]
Posted Jul 21, 2016 10:10 UTC (Thu)
by HelloWorld (guest, #56129)
[Link] (3 responses)
Posted Jul 21, 2016 11:45 UTC (Thu)
by pizza (subscriber, #46)
[Link] (2 responses)
MS's C++ compiler _was_ by far the worst when it came to standards compliance and bugs -- not necessarily in the compiler itself, but also the standard [template] libraries that everyone's supposed to be able to rely on. It's quite a lot better now.
Meanwhile.
"...and deep in GNU C++ you find quite a few non-standard features. Whenever I can, I prefer to deal with ISO standard C++ and to access system-specific features through libraries with system-independent interfaces."
One of the nice things about the GNU toolchain is that you can disable all of those non-standard extensions by using compiler flags to force strict compliance (--std=c++03 --pedantic-errors) and still have a useful compiler. (And GCC doesn't provide any access to "system-specific features", beyond the mandated contents of the standard C/C++ libraries...)
Posted Jul 21, 2016 17:54 UTC (Thu)
by Cyberax (✭ supporter ✭, #52523)
[Link] (1 responses)
Posted Jul 21, 2016 18:11 UTC (Thu)
by pizza (subscriber, #46)
[Link]
Anyway. Back to Rust.
Posted Jul 13, 2016 6:14 UTC (Wed)
by ncm (guest, #165)
[Link] (7 responses)
s.substr(1,7) returns a temporary string with a copy of the bytes; c_str() is then called on that temporary, yielding a pointer to its internal storage. The lifetime of that temporary is to the end of the full expression it appears in, so if g() doesn't stash its argument away, no harm done. That said, numerous pitfalls of that sort are a trivial edit away. A big improvement coming in C++17, billed as something like "fix order of evaluation", eliminates many of those pitfalls. (Expect to see it implemented in a dot release of your current compiler.)
But one of the simple rules is "no raw pointers".
Posted Jul 13, 2016 12:24 UTC (Wed)
by torquay (guest, #92428)
[Link] (6 responses)
I'm not sure how the "fix order of evaluation" proposal would help here? If you take a copy of the pointer, you're still screwed.
C++ is full of such pitfalls (experienced first hand), so much so that the "trivial edits" become rather cumbersome and non-trivial in aggregate. The coder needs to keep track of too many things in their head, to the point that it starts to resemble coding in assembler. A programming language is meant to make life easier, not to throw up traps left, right and center.
Sidenote: a few years ago I was surprised that "fix order of evaluation" wasn't actually part of the C++ standard. Code compiled with clang worked as intuitively expected, while under gcc it didn't. The gcc developers shrugged and pointed out that the standard doesn't say anything about the order of evaluation, so they exploited the loophole to implement alleged "optimizations". Timing tests indicated that there was very little difference between the code produced by clang and gcc. The problem here is that gcc developers chose to deliberately provide a non-intuitive implementation, which in all likelihood has caused latent bugs in many user codebases. Gcc's unintuitive behavior is probably more accurately described as a de-optimization, as it wastes everybody's time.
C++11, C++14 and C++17 are in reality an old language with new features retrofitted on top of it, causing yet more corner cases with associated traps. The core of C++ (as well as its insistence of compatibility with C) is essentially too rotten to fix. Rust hence has a huge advantage over C++: it's a clean sheet design.
Posted Jul 13, 2016 12:45 UTC (Wed)
by pizza (subscriber, #46)
[Link]
Every time I've had to deal with C++ in recent memory it's been to hunt down a bug which turned out to be due to one of these pitfalls -- most recently in geeqie, which turned out to be an order of evaluation problem that used to "work" until it didn't any more...
> Gcc's unintuitive behavior is probably more accurately described as a de-optimization, as it wastes everybody's time.
As "quirky" as GCC has been over the years, it was (and still is) light-years beyond the eye-gouging issues one has to deal with when using what is still the dominant C++ compiler -- Microsoft's.
Posted Jul 13, 2016 14:45 UTC (Wed)
by ncm (guest, #165)
[Link] (2 responses)
There really is no substitute for understanding what you're doing, and no language can protect against people who don't. It was an important discovery that there were things almost nobody needs to be doing, and so don't need to understand, and that the language design could make it hard to express trying to do those things. It is better to do something else that's easy to understand than to try to perfect something hard to understand.
Posted Jul 13, 2016 19:30 UTC (Wed)
by mjthayer (guest, #39183)
[Link] (1 responses)
I sometimes feel that C++ attracts that sort of person though.
Posted Jul 13, 2016 23:56 UTC (Wed)
by khim (subscriber, #9252)
[Link]
Not exactly. These same people could write perfectly readable and reliable programs in Java or Python. The simple fact is: C (and C++) are low-level, dangerous languages (it's a good question which is more dangerous, though). If you write code in C++ and use designs which leave performance on the table… then why do you even bother?
Posted Jul 13, 2016 21:22 UTC (Wed)
by lsl (subscriber, #86508)
[Link] (1 responses)
Or, maybe, they just picked one at random. That's what I would do, because I have no idea why you'd consider one specific evaluation order to be more intuitive than the other (except for some special cases, maybe). Am I just damaged by prolonged exposure to weak standards?
Posted Jul 13, 2016 23:02 UTC (Wed)
by JoeBuck (subscriber, #2330)
[Link]
Posted Jul 13, 2016 20:32 UTC (Wed)
by roc (subscriber, #30627)
[Link] (2 responses)
What are these rules?
The C++ Core Guidelines people are still working on theirs and aren't done yet (see my vapourware comment above). Have you already solved it?
Posted Jul 13, 2016 23:47 UTC (Wed)
by madscientist (subscriber, #16861)
[Link] (1 responses)
Posted Jul 14, 2016 3:34 UTC (Thu)
by roc (subscriber, #30627)
[Link]
Posted Jul 13, 2016 20:56 UTC (Wed)
by lsl (subscriber, #86508)
[Link] (2 responses)
Spotting data races got really easy with ThreadSanitizer (included with recent LLVM and GCC). At least for normal userspace programs, all you need to do is compile with -fsanitize=thread.
Due to the way it works, tsan has zero false positives, making its output immensely useful. A possible catch is that it can only detect data races that actually happen during this particular run of the program, so a statically-proven absence of data races would obviously be superior.
Really, everyone should use it on their code on a regular basis. The low performance hit makes that feasible and convenient. The other sanitizers, too, by the way.
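(Concretely, for a standalone C++ test program that can look something like the following; exact flags vary by compiler and build system, and my_threads.cc is just a placeholder name.)

    g++ -g -O1 -pthread -fsanitize=thread my_threads.cc -o my_threads
    ./my_threads   # any races observed during this run are reported on stderr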
Posted Jul 14, 2016 1:49 UTC (Thu)
by mathstuf (subscriber, #69389)
[Link] (1 responses)
Posted Jul 14, 2016 3:37 UTC (Thu)
by roc (subscriber, #30627)
[Link]
It will immediately flag all races that occur during the execution of the program. Performance hit is negligible when compared with things like Valgrind.
Posted Jul 16, 2016 6:42 UTC (Sat)
by marcH (subscriber, #57642)
[Link]
You meant: the other place where C++ might never compete with Rust either...
Posted Jul 18, 2016 15:12 UTC (Mon)
by ksandstr (guest, #60862)
[Link] (3 responses)
On the contrary, Rust imposes the worst kind of overhead possible: that on implementation. For programs written in C++ that're already down with the big-design-up-front philosophy this is insignificant, but unsurprisingly most software isn't written in Big Design C++.
That's to say, Rust's borrow-checking discipline fails to support random hacks. This is a major failure since most software arises from said hacks in one way or another.
Posted Jul 18, 2016 18:42 UTC (Mon)
by rahulsundaram (subscriber, #21946)
[Link] (1 responses)
Not sure what you mean? Random hacks can be unsafe
Posted Jul 19, 2016 2:29 UTC (Tue)
by mathstuf (subscriber, #69389)
[Link]
Posted Jul 19, 2016 2:33 UTC (Tue)
by mathstuf (subscriber, #69389)
[Link]
Posted Jul 13, 2016 6:30 UTC (Wed)
by oldtomas (guest, #72579)
[Link] (19 responses)
My main beef with mozilla is that they are making this path ever more difficult. The hoops one has to jump through to disable Javascript have become ridiculously baroque (I keep several profiles for that, wtf?), and the cookie thing is slowly getting out of sight more and more.
Correspondingly, attention dilution^W^W ad industry (which tends to call itself "content") more and more assumes that consumer's browsers are "full on", closing this virtuous circle.
If I have a beef with Mozilla, it's this complicity. I used to trust the Mozilla browser unconditionally (the intentions: not always necessarily the implementation). These days I don't.
Posted Jul 13, 2016 7:12 UTC (Wed)
by oever (guest, #987)
[Link] (12 responses)
NoScript limits JavaScript execution to domains you white list.
RequestPolicy controls what cross-domain requests are allowed.
CookieMonster lets you turn on cookies or domain cookies on sites easily.
All three allow temporary white listing which expires when the browser quits.
Posted Jul 13, 2016 8:44 UTC (Wed)
by oldtomas (guest, #72579)
[Link] (8 responses)
What I was trying to do is to point at the more social/political problem. By slowly shifting the defaults, this creates more incentives to make "rich" webpages where it wouldn't be necessary, and this shifts control from the user to the Page that Be. And that, again, motivates the browser people to up the ante, thus gradually spiralling out of control.
Now what could be done better? How? I don't know yet. But watching Mozilla depresses me, because as the only candidate to be on user side, I miss this kind of thinking.
I mean: "enlarge this image" needs Javascript these days! (yah, I know how to "view selection source" and pull the link to the bigger image from the mumbo-jumbo, but hey).
Posted Jul 13, 2016 9:47 UTC (Wed)
by josh (subscriber, #17465)
[Link] (1 responses)
*Not* shifting the defaults creates more incentives to replace the browser with one that does, or add plugins. The web of 1999 wasn't any safer; it had flash video instead of HTML5 video, and Flash had a much *larger* attack surface area. Today, it's completely reasonable for the vast majority of people to not have Flash installed at all.
Browsers are not going to help you stick with the web of 1999. People don't just build static web pages, they build web apps too. And there's no hard line between "web page" and "web app".
Posted Jul 13, 2016 9:54 UTC (Wed)
by josh (subscriber, #17465)
[Link]
Posted Jul 13, 2016 9:50 UTC (Wed)
by oever (guest, #987)
[Link]
The web is a public place. Anyone can set up a server. Anyone can claim to be a web developer. Browsers are more forgiving than the Pope. The result is a lot of diverse creations, often the result of copy and pasting another site. This is how the web has always lived.
Advocating simple websites and sending mails with helpful suggestions to sites that (over)use JavaScript is only a drop in the bucket. The JavaScript Trap [1] explains the problem quite well for those in the know about Free Software. I've not seen a convincing advocacy site for simple websites yet, but have not looked very hard either.
Thanks for the tip on 'view selection source'! It's a great Firefox feature. I never noticed that before and was using ctrl-u up till now.
Posted Jul 13, 2016 19:50 UTC (Wed)
by mjthayer (guest, #39183)
[Link] (4 responses)
PrivacyBadger blocks resources known to be bad for your privacy. AdBlockPlus blocks resources known to show you intrusive advertising. What about something to block known unnecessarily rich web pages and resources with a warning like "This page is badly designed. Do you really want to view it? Yes/no/always for this page." It would take a bit of effort to get the balance right though.
Posted Jul 13, 2016 23:13 UTC (Wed)
by JoeBuck (subscriber, #2330)
[Link] (3 responses)
The web has changed; turning off Javascript means that you won't see most of the content, or possibly you'll just see the bait set out for search engines. Sorry.
Posted Jul 14, 2016 0:11 UTC (Thu)
by flussence (guest, #85566)
[Link]
“Turning off” means revoking default remote code execution privileges, not removing the concept from existence. The only ones punished here are websites that use active content out of malice or incompetence — they'll fail at whatever assault on security, battery life or good taste they were planning, with a corresponding loss of site visitors. As it should be.
Posted Jul 14, 2016 15:50 UTC (Thu)
by cwitty (guest, #4600)
[Link] (1 responses)
> What I was trying to do is to point at the more social/political problem. By slowly shifting the defaults, this creates more incentives to make "rich" webpages where it wouldn't be necessary, and this shifts control from the user to the Page that Be. [...] Now what could be done better?
Clearly a nearly complete PC emulator in Javascript must be poor design, as must be the now-common trick that dynamically shows older content as you scroll off the bottom, decreasing server load from those users who only want to see the newest content (with the static web you have to choose how many blog entries or photo thumbnails appear per page).
Posted Jul 14, 2016 16:41 UTC (Thu)
by pizza (subscriber, #46)
[Link]
Things definitely misbehave and/or crash when you're using a memory-constrained system (eg smartphone or tablet). Of course, when you complain, you're just told "use the app instead" which makes me grind my teeth...
Posted Jul 13, 2016 15:39 UTC (Wed)
by sce (subscriber, #65433)
[Link] (1 responses)
Also worth mentioning is uMatrix (I use it instead of those three plugins).
Posted Jul 13, 2016 20:14 UTC (Wed)
by flussence (guest, #85566)
[Link]
The cookie/localStorage handling is a bit lame too, it's no substitute for the Self-Destructing Cookies extension. Not enough of a difference to switch browsers over though.
Posted Jul 15, 2016 22:37 UTC (Fri)
by mathstuf (subscriber, #69389)
[Link]
Posted Jul 13, 2016 10:46 UTC (Wed)
by Zack (guest, #37335)
[Link] (5 responses)
It's a logical extension of how Mozilla pushes a multi-tier model of the Web where you have "publishers" and mere "users" who should have different rights when it comes to controlling their content.
Firefox users are a product now. The sooner Mozilla disappears the sooner this cognitive dissonance that exists between the Firefox browser and "the open web" can be rectified.
Posted Jul 13, 2016 11:24 UTC (Wed)
by josh (subscriber, #17465)
[Link] (4 responses)
Posted Jul 13, 2016 12:07 UTC (Wed)
by Zack (guest, #37335)
[Link] (3 responses)
It's become a Ponzi scheme run on goodwill, where existing users are reassured that the open web will eventually happen thanks to the extra clout that new users (attracted by implementing this or that popular restriction) will give them.
The only difference between Mozilla and their competitors is their mission statement, which is basically a PR tool at this point, riding on the coat-tails of its illustrious past.
Posted Jul 13, 2016 12:21 UTC (Wed)
by pizza (subscriber, #46)
[Link]
Mozilla at least *attempts* to do the right thing most of the time.
The others have reached the point where they don't even try to hide the fact that they're screwing users over any more.
Posted Jul 13, 2016 20:46 UTC (Wed)
by roc (subscriber, #30627)
[Link]
Posted Jul 15, 2016 15:04 UTC (Fri)
by drag (guest, #31333)
[Link]
If you want to have a 'open web' then it's going to require new technologies (IPFS, etc) and the demand from web users to make it happen.
Posted Jul 13, 2016 7:05 UTC (Wed)
by eduard.munteanu (guest, #66641)
[Link] (20 responses)
What happens instead is you write the whole thing in One True Language because, *obviously*, you need that matrix multiplication to be fast and you "know what you're doing" anyway. And when things blow up over and over, you finally consider "promoting" some code to a safe language. Quite the opposite way around, don't you think? Especially because you value correctness and security so much and all that.
And while we see projects getting completely rewritten because code gets unmanageable, I think we're yet to see optimization being a reason for that. (Well, you could cite things like systemd, but bash scripts don't really count as a sane safe language in the first place.)
Posted Jul 13, 2016 7:48 UTC (Wed)
by oever (guest, #987)
[Link] (19 responses)
I know of no other mainstream safe language. Java and C# avoid memory corruption but still allow plenty of ways to make simple errors. Any language without static typing is not safe.
Besides Rust, Haskell is the only usable, but not mainstream, language that I know that makes many errors impossible. The functional programming style has kept Haskell out of mainstream.
Rust and Haskell are both great but do not come with safe GUI libraries yet. Interfacing between languages adds a lot of extra code with potential for bugs.
Posted Jul 13, 2016 9:08 UTC (Wed)
by ballombe (subscriber, #9523)
[Link] (1 responses)
and every day we see mainstream programming languages aping Haskell features without understanding the underlying context.
Posted Jul 13, 2016 9:49 UTC (Wed)
by josh (subscriber, #17465)
[Link]
Posted Jul 13, 2016 12:03 UTC (Wed)
by areilly (subscriber, #87829)
[Link] (1 responses)
There are actually only a couple of languages that allow the buffer-overflow bug pattern (yes, they're popular), but bugs have still been written in all of the others. The Wirthian languages and Ada have counted strings and sized vectors, as do the lisps. The most popular language of today (arguably) does too (Java). There aren't as many that do it without garbage collection and the runtime problems that come with that.
Posted Jul 13, 2016 20:56 UTC (Wed)
by roc (subscriber, #30627)
[Link]
Apart from the memory-safety features, the ownership invariants give protection from iterator-invalidation and similar bugs, and from data-race bugs. And you can design APIs that provide their own protections, e.g. leveraging the fact that if a function takes a parameter of type "Foo", then it takes sole ownership of Foo and no surviving aliases to Foo can exist in the caller. This lets you write statically-checked stateful APIs.
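(A small illustration of that last point, with a made-up Connection type: once close() takes the value by ownership, the compiler rejects any further use of it in the caller.)

    struct Connection;

    fn close(conn: Connection) {
        // Takes ownership; when this function returns, no usable alias of
        // `conn` can remain in the caller.
        drop(conn);
    }

    fn main() {
        let conn = Connection;
        close(conn);
        // close(conn); // rejected at compile time: use of moved value `conn`
    }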
Also, Rust makes integer overflow an error, and by default checks for those errors in debug builds. Hopefully this will be extended to release builds if/when the performance penalty can be minimised. Of Rust's competitors, only Swift provides this.
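(For example, in a debug build the addition below panics with "attempt to add with overflow"; in a release build it currently wraps unless overflow checks are enabled in the profile, while checked_add() reports the overflow in either kind of build.)

    fn bump(x: u8) -> u8 {
        // Panics in a debug build when x == 255; wraps to 0 in a default release build.
        x + 1
    }

    fn main() {
        println!("{:?}", 255u8.checked_add(1)); // prints None in any build
        println!("{}", bump(255));
    }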
Posted Jul 13, 2016 13:49 UTC (Wed)
by jezuch (subscriber, #52988)
[Link] (1 responses)
I don't know about C#, but: more specifically, Java was (like, 20 years ago) advertised as having concurrency built in, yet it doesn't really have any real safeguards against improper use of concurrency. It's all synchronization primitives or classes like ConcurrentSkipListMap, which don't really make it easy to create race-free programs. You have to think hard, and I mean *hard*, about concurrency and synchronization and races. Java 8's parallel Streams are much, much better in this regard, though (streams are similar to what in Rust is called iterators, and parallel streams are what the rayon crate implements).
I can only imagine that in C++ it's even worse :)
Posted Jul 19, 2016 8:49 UTC (Tue)
by marcH (subscriber, #57642)
[Link]
Despite[*] being much older, C and C++ got a proper [shared] memory model defined in their respective standards 7 years *after* Java did.
[*] or... because of it?!
Posted Jul 13, 2016 14:47 UTC (Wed)
by eduard.munteanu (guest, #66641)
[Link]
We also have the MLs (ATS[1] deserves a big mention here) and a bunch others. But I think that's besides the point, because if people cared more about safety, those ecosystems would have certainly developed better.
[1] https://en.wikipedia.org/wiki/ATS_(programming_language)
Posted Jul 13, 2016 17:06 UTC (Wed)
by tuna (guest, #44480)
[Link] (1 responses)
I do not think anyone wants to rewrite QT, GTK3 and other gui libraries/platforms. If you want to build new languages and ecosystems you have to be able to work with what exists today.
Posted Jul 14, 2016 4:40 UTC (Thu)
by gmatht (guest, #58961)
[Link]
Posted Jul 19, 2016 8:37 UTC (Tue)
by marcH (subscriber, #57642)
[Link] (9 responses)
Sure, what difference can one or two orders of magnitude more bugs make as far as security is concerned? A system is just secure or it's not, right? Nothing in between...
Posted Jul 19, 2016 12:46 UTC (Tue)
by mathstuf (subscriber, #69389)
[Link] (8 responses)
Not that your point is invalid, but sometimes the resources you're managing aren't less important than memory, so proper RAII is more useful than a garbage collector.
Posted Jul 19, 2016 15:05 UTC (Tue)
by marcH (subscriber, #57642)
[Link] (7 responses)
More like: in the foundation.
Memory is the lowest-level, most basic resource. The lower level a programming language is, the closer it is to a memory management system. C is little more than a (manual and tedious) memory manager.
Also, isn't memory the only local resource that the language has exclusive control over? Except for memory mapped I/O which is rarely seen in user space.
> sometimes the resources you're managing aren't less important than memory,
Even from a pure security standpoint?
Posted Jul 19, 2016 15:57 UTC (Tue)
by mathstuf (subscriber, #69389)
[Link] (5 responses)
> Even from a pure security standpoint?
Yes. Memory problems indicate other errors in the program which can be fixed. An ideal program (which I admit is an extremely rare find) wouldn't care if it were on a GC or some RAII setup with respect to memory. However, the RAII setup allows me to ensure that my program cleans up after itself on the filesystem, over the network, or whatever other resource might need managing; a GC setup leaves me back in the world of C's memory management assistance instead for such things. I know D's GC doesn't guarantee destructors are called (genius that, but probably something about undecidability of the proper call order at program shutdown), but C# and Java don't even *have* destructors for such things.
If I have a temporary directory, I might like it to be cleaned up no matter how the function exits (except things like poweroff, abort() or SIGKILL; not much anyone can do there).
Posted Jul 19, 2016 19:37 UTC (Tue)
by Wol (subscriber, #4433)
[Link] (1 responses)
Which language is that? I remember (in the early 80s) using Fortran77 and getting exactly the functionality you're after (except it was buggy :-) The Ops staff swore at me because I accidentally deleted several important files before I realised what was happening and they had to restore them from backup ... :-) (You could declare a file as temporary, and it was deleted on close. The problem was, if you re-used that file descriptor, it ignored any attempt to declare the new file permanent, and deleted that on close, too :-)
The problem with programs is they like to open files anywhere. If they followed the standards and opened temporary files in /tmp, or c:\temp, or wherever they were supposed to, then the operating system would clean up behind them (any real OS that is, I don't think WinDos does it properly).
Cheers,
Wol
Posted Jul 19, 2016 20:37 UTC (Tue)
by mathstuf (subscriber, #69389)
[Link]
Posted Jul 20, 2016 6:55 UTC (Wed)
by oever (guest, #987)
[Link] (2 responses)
In C++, you can clean up in the destructor of an object. This is simple RAII. In Java, you could clean up in the finalize() function. The C++ destructor is called immediately when the scope ends. finalize() is called by the garbage collector which runs a few times per second. In Java and C# most variables are pointers to objects and Java does not have a way to automatically call a function directly when the last pointer to an object leaves scope and neither does C#.
A refreshing aspect of Rust is that the syntax for dealing with objects is nice and safe. It has the best of both worlds: no need for manual memory deallocation but the ability to run any type of cleanup when the last reference to an object leaves scope. Defining a destructor is optional and done via the Drop trait.
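(A minimal sketch of that, using the temporary-directory example from above; remove_dir_all here is deliberately best-effort cleanup.)

    use std::fs;
    use std::path::PathBuf;

    struct TempDir {
        path: PathBuf,
    }

    impl Drop for TempDir {
        fn drop(&mut self) {
            // Runs when the last owner goes out of scope, however the enclosing
            // function exits; errors are deliberately ignored.
            let _ = fs::remove_dir_all(&self.path);
        }
    }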
Posted Jul 20, 2016 7:01 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link] (1 responses)
No, you can not. Finalize method is only guaranteed to run some time in future - like next hour or next day.
The way to manage resources in Java now is try-with-resources statement.
Posted Jul 20, 2016 9:12 UTC (Wed)
by farnz (subscriber, #17727)
[Link]
It's not even guaranteed to be called in Java; it's called if the GC determines that this object has no references and is thus eligible for collection.
In practice, this means that if your process exits before the GC decides that your object is ready for collection, the finalizer is not called at all - e.g. because there happens to have been no memory pressure between the object being allocated, and the process exiting cleanly for a restart.
Posted Jul 19, 2016 16:57 UTC (Tue)
by excors (subscriber, #95769)
[Link]
At the bottom level of language (/language runtime) implementation, dynamically-allocated memory is just a limited resource provided by the OS from sbrk/mmap syscalls. Similarly, file descriptors are a limited resource provided by the OS from open syscalls. Same for threads, GPU memory, network sockets, etc. Why should dynamically-allocated memory get so much more special treatment than any of those other resources that look so similar? (And that's before looking at resources provided by other processes rather than the OS.)
Garbage collection is simulating a computer with an infinite amount of memory. There are some languages that simulate a computer with a near-infinite capacity for threads, using green threads etc, but many don't. In theory a language could probably simulate support for an infinite number of file descriptors, but in practice they never seem to bother - if you try to open ~1024 files they'll happily return a "too many open files" error and let you solve the problem yourself. (Sometimes they'll go part way by GCing file objects to close ones they know can never be accessed in the future, but that only really works in languages with a refcounting GC or with RAII, and if you might access those files again then you're left to manually implement some pooling scheme yourself, so the language still isn't helping as much as it could.)
Nowadays virtual address space pretty much is infinite, and the cost of physical memory seems to still be decreasing exponentially, but many of the other limited resources have remained constant for ages. So I think an exclusive focus on managing memory is not good enough for a modern language - they really ought to assist with managing those other resources just as well.
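(For instance, RAII already handles the file-descriptor case the way the parent comments describe: in Rust, a std::fs::File closes its descriptor when it is dropped.)

    use std::fs::File;
    use std::io::Read;

    fn read_config() -> std::io::Result<String> {
        let mut s = String::new();
        File::open("config.txt")?.read_to_string(&mut s)?;
        // The File (and its descriptor) is closed as soon as the statement above
        // finishes, whether we got here or bailed out early through `?`.
        Ok(s)
    }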
Posted Jul 13, 2016 8:17 UTC (Wed)
by pabs (subscriber, #43278)
[Link] (1 responses)
Posted Jul 13, 2016 9:50 UTC (Wed)
by josh (subscriber, #17465)
[Link]
For instance, are you concerned about distribution packaging logistics? That's currently being worked on in several distributions. (And this'll be a nice forcing function to make sure Rust gets packaged everywhere.)
Posted Jul 13, 2016 10:53 UTC (Wed)
by adobriyan (subscriber, #30858)
[Link] (3 responses)
Posted Jul 13, 2016 20:48 UTC (Wed)
by roc (subscriber, #30627)
[Link] (1 responses)
Posted Jul 16, 2016 20:00 UTC (Sat)
by JanC_ (guest, #34940)
[Link]
Posted Jul 14, 2016 13:06 UTC (Thu)
by Wol (subscriber, #4433)
[Link]
That's Okay()
Cheers,
Wol