
LibreSSL 4.0.0 released

Version 4.0.0 of the LibreSSL TLS/cryptography stack has been released. Changes include a cleanup of the MD4 and MD5 implementations, removal of unused DSA methods, changes in libtls protocol parsing to ignore unsupported TLSv1.1 and TLSv1.0 protocols, and many more internal changes and bug fixes.


From:  Brent Cook <busterb-AT-gmail.com>
To:  announce-AT-openbsd.org
Subject:  LibreSSL 4.0.0 Released
Date:  Tue, 15 Oct 2024 01:32:03 -0500
Message-ID:  <Zw4MYxowekkMctiF@santo.lan>
Archive-link:  Article

We have released LibreSSL 4.0.0, which will be arriving in the
LibreSSL directory of your local OpenBSD mirror soon. This is the
first stable release for the 4.0.x branch, also available with OpenBSD 7.6.

It includes the following changes from LibreSSL 3.9.2:

  * Portable changes
    - Added initial Emscripten support in CMake builds.
    - Removed timegm() compatibility layer since all uses were replaced
      with OPENSSL_timegm(). Cleaned up the corresponding test harness.
    - The mips32 platform is no longer actively supported.
    - Fixed Windows support for dates beyond 2038.
  * Internal improvements
    - Cleaned up parts of the conf directory. Simplified some logic,
      fixed memory leaks.
    - Simplified X509_check_trust() internals to be somewhat readable.
    - Removed last internal uses of gmtime() and timegm() and replaced
      them with BoringSSL's posix time conversion API.
    - Removed unnecessary stat calls in by_dir.
    - Split parsing and processing of TLS extensions to ensure that
      extension callbacks are called in a predefined order.
    - Cleaned up the MD4 and MD5 implementations.
    - Assembly functions are no longer exposed in the public API; they
      are all wrapped by C functions.
    - Removed assembly implementations of legacy ciphers on legacy
      architectures.
    - Merged most multi-file implementations of ciphers into one or two
      C files.
    - Removed the cache of certificate validity. This was added for
      performance reasons which no longer apply since BoringSSL's time
      conversion API isn't slow. Also, a recently added error check led
      to obscure, undesirable validation failures.
    - Stopped calling OPENSSL_cpuid_setup() from the .init section on
      amd64 and i386.
    - Rewrote various BN conversion functions.
    - Improved certification request internals.
    - Removed unused DSA methods.
    - Improved X.509v3 extension internals. Fixed various bugs and leaks
      in X509V3_add1_i2d() and X509V3_get_d2i(). Their implementations
      now vaguely resemble code.
    - Rewrote BN_bn2mpi() using CBB.
    - Made most error string tables const.
    - Removed handling for SSLv2 client hello messages.
    - Improvements in the openssl(1) speed app's signal handler.
    - Cleaned up various X509v3_* extension API.
    - Unified the X.509v3 extension methods.
    - Cleaned up cipher handling in SSL_SESSION.
    - Removed get_cipher from SSL_METHOD.
    - Rewrote CRYPTO_EX_DATA from scratch. The only intentional change of
      behavior is that there is now a hard limit on the number of indexes
      that can be allocated.
    - Removed bogus connect() call from netcat.
    - Uses of atoi() and strtol() in libcrypto were replaced with
      strtonum().
    - Introduced crypto_arch.h which will contain the architecture
      dependent code and defines rather than the public opensslconf.h.
    - OPENSSL_cpu_caps() is now architecture independent.
    - Reorganized the DES implementation to use fewer files and removed
      optimizations for ancient processors and compilers.
  * New features
    - Added CRLfile option to the cms command of openssl(1) to specify
      additional CRLs for use during verification.
  * Documentation improvements
    - Removed documentation of no longer existing API.
    - Unified the description of the obsolete ENGINE parameter that
      needs to remain in many functions and should always be NULL.
  * Testing and proactive security
    - Switched the remaining tests to new certs.
  * Compatibility changes
    - Protocol parsing in libtls was changed. The unsupported TLSv1.1
      and TLSv1.0 protocols are ignored and no longer enable or disable
      TLSv1.2 in surprising ways (see the sketches after this list).
    - The dangerous EVP_PKEY*_check(3) family of functions was removed.
      The openssl(1) pkey and pkeyparam commands no longer support the
      -check and -pubcheck flags.
    - The one-step hashing functions, MD4(), MD5(), RIPEMD160(), SHA1(),
      all SHA-2, and HMAC() no longer support returning a static buffer.
      Callers must pass in a correctly sized buffer (see the sketches
      after this list).
    - Support for Whirlpool was removed. Applications still using this
      should honor OPENSSL_NO_WHIRLPOOL.
    - Removed workaround for F5 middle boxes.
    - Removed the useless pem2.h, a public header that was added since
      it was too hard to add a single prototype to one file.
    - Removed conf_api.h and the public API therein.
    - Removed ssl2.h, ssl23.h and ui_compat.h.
    - Numerous conf and attribute functions were removed. Some unused
      types were removed, others were made opaque.
    - Removed the deprecated HMAC_Init() function.
    - Removed OPENSSL_load_builtin_modules().
    - Removed X509_REQ_{get,set}_extension_nids().
    - X509_check_trust() was removed and X509_VAL was made opaque.
    - Only specified versions can be set on certs, CRLs and CSRs.
    - Removed unused PEM_USER and PEM_CTX types from pem.h.
    - Removed typedefs for COMP_CTX, COMP_METHOD, X509_CRL_METHOD, STORE,
      STORE_METHOD, and SSL_AEAD_CTX.
    - i2d_ASN1_OBJECT() now returns -1 on error like most other i2d_*.
    - SPKAC support was removed from openssl(1).
    - Added TLS1-PRF support to the EVP interface.
    - Support for attributes in EVP_PKEYs was removed.
    - The X509at_* API is no longer public.
    - SSL_CTX_set1_cert_store() and SSL_CIPHER_get_handshake_digest()
      were added to libssl.
    - The completely broken UI_UTIL password API was removed.
    - The OpenSSL pkcs12 command and PKCS12_create() no longer support
      setting the Microsoft-specific Local Key Set and Cryptographic
      Service Provider attributes.
  * Bug fixes
    - Made ASN1_TIME_set_string() and ASN1_TIME_set_string_X509() match
      their documentation. They always set an RFC 5280 conformant time.
    - Improved standards compliance for supported groups and key shares
      extensions:
      - Duplicate key shares are disallowed.
      - Duplicate supported groups are disallowed.
      - Key shares must be sent in the order of supported groups.
      - Key shares will only be selected if they match the most
        preferred supported group by client preference order.
    - Fixed signed integer overflow in bnrand().
    - Prevent negative zero from being created via BN_clear_bit() and
      BN_mask_bits(). Avoids a one byte overread in BN_bn2mpi().
    - Added a guard to avoid contracting the number of linear hash
      buckets to zero, which could lead to a crash due to accessing a
      zero-sized allocation.
    - Fixed i2d_ASN1_OBJECT() with an output buffer pointing to NULL.
    - Implemented RSA key exchange in constant time. This is done by
      decrypting with RSA_NO_PADDING and checking the padding in libssl
      in constant time. This is possible because the pre-master secret
      is of known length based on the size of the RSA key.
    - Rewrote SSL_select_next_proto() using CBS, also fixing a buffer
      overread that wasn't reachable when used as intended from an
      ALPN callback.
    - Avoid pushing a spurious error onto the error stack in
      ssl_sigalg_select().
    - Made fatal alerts fatal in QUIC.
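
As a small illustration of the libtls compatibility note above, here is a
minimal protocol-selection sketch; the function name is made up for
illustration and error handling is abbreviated:

    #include <stddef.h>
    #include <stdint.h>

    #include <tls.h>

    /*
     * Restrict a configuration to TLSv1.2 and TLSv1.3. Per the note
     * above, "tlsv1.0" and "tlsv1.1" tokens in a protocol string are
     * now ignored instead of toggling TLSv1.2 in surprising ways.
     */
    struct tls_config *
    make_config(void)
    {
            struct tls_config *cfg;
            uint32_t protocols;

            if ((cfg = tls_config_new()) == NULL)
                    return NULL;
            if (tls_config_parse_protocols(&protocols, "tlsv1.2,tlsv1.3") == -1 ||
                tls_config_set_protocols(cfg, protocols) == -1) {
                    tls_config_free(cfg);
                    return NULL;
            }
            return cfg;
    }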
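
And as a sketch of the one-step digest change, the caller now always
supplies the output buffer; the message and key below are placeholders,
and the same pattern applies to MD4(), MD5(), RIPEMD160(), SHA1() and
the other SHA-2 functions:

    #include <stdio.h>

    #include <openssl/evp.h>
    #include <openssl/hmac.h>
    #include <openssl/sha.h>

    int
    main(void)
    {
            const unsigned char msg[] = "example message";
            const unsigned char key[] = "example key";
            unsigned char digest[SHA256_DIGEST_LENGTH];
            unsigned char mac[EVP_MAX_MD_SIZE];
            unsigned int mac_len;
            size_t i;

            /*
             * The caller must pass a correctly sized buffer;
             * SHA256(msg, len, NULL) no longer returns a pointer to a
             * static buffer.
             */
            SHA256(msg, sizeof(msg) - 1, digest);

            for (i = 0; i < sizeof(digest); i++)
                    printf("%02x", digest[i]);
            printf("\n");

            /* Likewise for HMAC(): pass an output buffer and length. */
            if (HMAC(EVP_sha256(), key, sizeof(key) - 1, msg,
                sizeof(msg) - 1, mac, &mac_len) == NULL)
                    return 1;

            return 0;
    }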

The LibreSSL project continues improvement of the codebase to reflect modern,
safe programming practices. We welcome feedback and improvements from the
broader community. Thanks to all of the contributors who helped make this
release possible.



Why support ancient broken algorithms

Posted Oct 15, 2024 17:53 UTC (Tue) by wittenberg (subscriber, #4473) [Link] (9 responses)

Why "cleanup" MD4 and MD5?

It's been possible to find MD4 collisions in a few seconds since the mid 1990s.

MD5 is a little less bad, but RFC 6151 (from 2011) says not to use it for signatures, and new designs should not use it for HMAC.

I would suggest a decent burial, not a cleanup.

--David

Why support ancient broken algorithms

Posted Oct 15, 2024 18:16 UTC (Tue) by raven667 (subscriber, #5198) [Link] (4 responses)

MD5 still exists out in the wild, in historical systems/data or just very old but live systems, so the implementation still needs to exist even if it's not used for anything new, and crypto-policies or equivalent prevent its use by default. If the code needs to exist, then it needs to be maintained, which may involve cleanup to work error-free with current compilers/libraries, to match changes to style in other internal libraries, etc.

Why support ancient broken algorithms

Posted Oct 16, 2024 15:14 UTC (Wed) by wittenberg (subscriber, #4473) [Link] (2 responses)

One has to consider the cost that supporting old standards (particularly in crypto) imposes. In addition to the obvious work, more code means a larger attack surface, and in the case of crypto algorithms makes "Poodle" style attacks easier. This decreases security for everyone. At what point does one simply say "that's too old"? There are still people riding horses, but we no longer have street sweepers cleaning up manure on the street.

I can see a case for MD5, but MD4 was already completely broken in the mid-1990s (i.e., since about the time CD-ROMs started to show up). That strikes me as too outdated to support.

--David

Why support ancient broken algorithms

Posted Oct 16, 2024 20:09 UTC (Wed) by ballombe (subscriber, #9523) [Link]

According to the Wikipedia MD4 article:
MD4 is used to compute NTLM password-derived key digests on Microsoft Windows NT, XP, Vista, 7, 8, 10 and 11.[4]

Why support ancient broken algorithms

Posted Oct 16, 2024 21:01 UTC (Wed) by wahern (subscriber, #37304) [Link]

MD4 is used by the unfortunately still common legacy protocol MS-CHAPv2. And MS-CHAPv2 is used for, among other things, PPTP username authentication in IKEv1+PPTP VPNs. MS-CHAPv2 is also commonly used in IKEv2 VPNs, alongside EAP-MD5, for username-based authentication setups. Yes, MS-CHAPv2 is completely broken, and EAP-MD5 isn't great, either, but that doesn't matter much in these cases. For IPsec-based VPNs the real security is provided by the separate, outer IKE authentication and IPSec encryption, so they're similar (ignoring salting issues) to plaintext passwords over encrypted channels. Moreover, these passwords are often generated and distributed per user for this specific service, in which cases general login account passwords aren't being put at risk. Strictly speaking there are better options, such as peer certificates, but for various reasons--interoperability, configuration convenience, know-how... basically the same reasons password-based authentication remains sticky elsewhere--these particular authentication setups remain very common.

Why support ancient broken algorithms

Posted Oct 16, 2024 15:49 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link]

MD5 is sometimes used as a data integrity checksum rather than for cryptographic purposes.
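
For that sort of non-security integrity check, the EVP interface is the
usual route; a minimal sketch (the function and data names are
placeholders, and this assumes MD5 support is compiled into the library):

    #include <stdio.h>

    #include <openssl/evp.h>

    /* Compute an MD5 checksum purely as an integrity check, not for
     * any security purpose. */
    int
    md5_checksum(const unsigned char *buf, size_t len,
        unsigned char *out, unsigned int *out_len)
    {
            return EVP_Digest(buf, len, out, out_len, EVP_md5(), NULL) == 1 ?
                0 : -1;
    }

    int
    main(void)
    {
            const unsigned char data[] = "archived payload";
            unsigned char md[EVP_MAX_MD_SIZE];
            unsigned int md_len, i;

            if (md5_checksum(data, sizeof(data) - 1, md, &md_len) == -1)
                    return 1;
            for (i = 0; i < md_len; i++)
                    printf("%02x", md[i]);
            printf("\n");
            return 0;
    }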

Why support ancient broken algorithms

Posted Oct 16, 2024 3:32 UTC (Wed) by WolfWings (subscriber, #56790) [Link]

A lot of the cleanup was merging things down to a single file, and removing a lot of #define INIT_DATA_A ... c->a = INIT_DATA_A; constructs, where a single-use value was buried in a #define elsewhere in the code instead of simply being written, with a comment, where it was actually used.

They also cleaned up the loops and updated the variable names to match the actual standard, so the code reads 1:1 against it: instead of repeated R0();HOST_c2l( data, l );X# = l; sequences, there are now simple blocks of md4_round1(), then a block of md4_round2(), and so on (the standard counts rounds from 1, but the old code counted from 0).

Multiple steps were verified to cause no change to the assembly output on various modern compilers, so it was very much an "if we have to keep this cruft, then throw out support for ancient compilers and make the code readable" process.
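
A rough sketch of the pattern being described (this is an illustration,
not the actual LibreSSL code; the function name is invented, and it
assumes MD4 support is still present in the build):

    #include <string.h>

    #include <openssl/md4.h>

    /*
     * Old style (roughly): a single-use value buried behind a #define
     * elsewhere in the file:
     *
     *     #define INIT_DATA_A (unsigned long)0x67452301L
     *     ...
     *     c->A = INIT_DATA_A;
     */

    /*
     * Cleaned-up style: write the constant where it is used and say
     * where it comes from.
     */
    int
    md4_init_sketch(MD4_CTX *c)
    {
            memset(c, 0, sizeof(*c));

            /* Initial state from RFC 1320, section 3.3. */
            c->A = 0x67452301UL;
            c->B = 0xefcdab89UL;
            c->C = 0x98badcfeUL;
            c->D = 0x10325476UL;

            return 1;
    }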

They are not entirely broken

Posted Oct 19, 2024 9:33 UTC (Sat) by cypherpunks2 (guest, #152408) [Link] (2 responses)

MD4 and MD5 are vulnerable to collision attacks (MD4 so badly that you can create collisions by hand: https://fortenf.org/e/crypto/2017/09/10/md4-collisions.html) but preimage attacks against them are not feasible. There are situations where only preimage resistance is required, and in that case both MD4 and MD5 are adequate as preimage attacks against them are well beyond what is technologically achievable (i.e. even an MD4 preimage requires ~2^100 hash operations whereas a collision attack requires <2).

Collision attacks attempt to generate two messages with the same digest (and a chosen-prefix collision attack does the same, but only by appending data to an existing message that the attacker does not control). A second preimage attack attempts to modify a message without causing the digest to change, and a first preimage attack attempts to generate an arbitrary message that results in a desired digest. MD4 and MD5 are only severely vulnerable to collision attacks. This doesn't mean they should be used in new applications, but it's not quite correct to say they are completely broken.

Before someone says that they're broken because they can be distinguished from a random oracle without using much computing power, remember that the same is true for SHA-512, as it is trivially vulnerable to length-extension attacks, as are all non-truncated hashes using the Merkle-Damgaard construction.

I wouldn't usually be such a pedant, but precision is particularly important when cryptography is involved.

They are not entirely broken

Posted Oct 21, 2024 0:24 UTC (Mon) by wittenberg (subscriber, #4473) [Link] (1 responses)

As you say, precision is important when discussing cryptography.
A major problem in discussing cryptography (and security in general) is that we can't show that something is secure (with a few exceptions like one-time pads and Shamir secret sharing, which are not relevant here). All we can do is discuss how much confidence we have in a system. It is sometimes possible to use a reduction to show that one system is at least as secure as another, but again that is not relevant here. We gain confidence in a system when many people we respect try and fail to find a weakness in it. We lose confidence when we learn of techniques which weaken related systems (perhaps a version with fewer rounds, or a system with a very similar structure). There are cases when we lose confidence almost instantly. We had fair confidence in SIKE in June of 2022, but no confidence in SIKE by August of 2022.

Which brings us to MD4 and MD5. MD4 has been known to be vulnerable to collision attacks for almost 30 years. That greatly reduces my confidence in its adequacy against pre-image attacks. MD5 appears to be stronger.

I am more confident in SHA-3 than I am in SHA-2 for several reasons:
1. It is not vulnerable to length-extension attacks (though they can be defended against in older hash functions by prepending the total length to the message before calculating the message digest).
2. SHA-3 was the result of an open competition, so I worry less about NSA doing something fishy. -- We all remember what they did to ECDH.
3. SHA-2 is close enough in design to SHA-1 to be worrisome.

Another reason for not supporting old algorithms is the cost. The cost in maintenance is hard to estimate in free software, but there is a cost in security which is borne by everyone in the community. More code almost inevitably means more bugs, and therefore more attack surface. All of us bear the increased risk in return for which a small group who have not updated gain convenience. That small group is more vocal, as the cost each of them pays is higher, but I think that the total cost for the large group is much larger.

--David

They are not entirely broken

Posted Oct 21, 2024 2:48 UTC (Mon) by cypherpunks2 (guest, #152408) [Link]

As far as I am aware, collision attacks in the MD4 through SHA2 construction do not imply anything about preimage resistance.

While the keccak core as used in SHAKE/SHA-3 is very good, it hasn't received anywhere near the amount of analysis that the MD4 through SHA-2 constructions have. I am not a cryptographer, but I trust that the weaknesses have been resolved. In particular, the construction of SHA-2 is very different from that of MD4, MD5, and SHA-1. The similarities it shares (unbalanced feistel-like construction, davies-meyer compression function, merkle-damgaard padding) are rather superficial. The germane difference is that the older designs used a single heterogeneous non-linear function that changed based on the iteration, whereas SHA-2 uses all four functions (choice, majority, sum 1, sum 2) on each iteration.

> though they can be defended against in older hash functions by prepending the total length to the message before calculating the message digest

The best way to mitigate it would be to use SHA-512/256, I'd think. Then the hash itself becomes resistant to the attacks and there is no need to change the way the hash is used to work around its own limitations. As an added bonus, on 64-bit machines, SHA-512 is faster than SHA-256 so SHA-512/256 is a more efficient way to get a 256 bit digest than SHA-256 itself.
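
For reference, a minimal sketch of computing a SHA-512/256 digest through
the EVP interface; this assumes EVP_sha512_256() is available, as it is
in recent OpenSSL and LibreSSL releases:

    #include <stdio.h>

    #include <openssl/evp.h>

    int
    main(void)
    {
            const unsigned char msg[] = "hello";
            unsigned char md[EVP_MAX_MD_SIZE];
            unsigned int md_len, i;

            /*
             * SHA-512/256 runs the 64-bit SHA-512 compression function
             * but truncates the output to 256 bits, so the digest does
             * not expose the full internal state and length extension
             * is not possible.
             */
            if (EVP_Digest(msg, sizeof(msg) - 1, md, &md_len,
                EVP_sha512_256(), NULL) != 1)
                    return 1;

            for (i = 0; i < md_len; i++)
                    printf("%02x", md[i]);
            printf("\n");
            return 0;
    }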

The primary reason that I am not a fan of SHA-3 is that it is designed to be extremely fast in hardware and not a general-purpose SHA-2 replacement. Its heavy use of bitwise transpositions makes it very efficient in silicon but less efficient in software. This makes it quite bad for slow KDFs because it gives an ASIC an inherent advantage (yes we should all be using a memory hard KDF, but non-memory hard KDFs will be around for a long time). SHA-2 on the other hand is optimized for 32/64 bit operations in software, and efficient silicon implementations are an afterthought.

