Security
Toward measured boot out of the box
Matthew Garrett began his Linux Security Summit talk by noting that the "security of the boot chain is vital" to having secure systems. It does not matter if the kernel can protect itself (the subject of the talk just prior to his); if the boot process can be manipulated, those protections are immaterial. So he wanted to present where things stand with regard to securing the boot chain.
In the Linux world, UEFI Secure Boot is the primary boot protection mechanism; it requires that the bootloader be signed by a key that is trusted by the firmware or the system won't boot. There are also various solutions for embedded devices that are typically implemented by the system on chip (SoC). The trust is rooted in the firmware in either case; if someone can modify the firmware, all bets are off.
![Matthew Garrett [Matthew Garrett]](https://static.lwn.net/images/2016/lss-garrett-sm.jpg)
Beyond that, most of the existing mechanisms provide no way to prove that the verification of the code to be booted has been done. The running kernel has no way to know that it is running on a base that has been integrity checked—or even whether the kernel itself has been tampered with—any query it could make could be answered with a fake "yes".
That kind of attack generally requires privileged access to the hardware, which is a hazard in its own right, so why would those kinds of attacks matter, he asked. One problem area is that there are providers of "bare metal" servers for users who want the convenience of the cloud without its usual performance penalty. Users of those systems will have root privileges, which will allow them to access the hardware, including potentially permanently changing the firmware to something malicious.
He posited a scenario where an attacker would take out a large number of short-term leases on hardware at a site that is known to be used by the victim. Each system is then infected with malicious firmware and "returned" to the pool at the hosting company. Some of those systems will eventually be picked up by the victim; "Secure Boot will not help you" in that situation, he said.
Another worrisome possibility is for laptops that are surrendered when passing through borders. Perhaps it is overly paranoid to be worried about permanent firmware changes being made at the border, he said, but it is at least worth thinking about. While there is not much that can be done to protect against hardware-based attacks (e.g. adding some malicious hardware to a laptop or server), most of the other kinds of attacks can be handled.
TPM to the rescue
The Trusted Platform Module (TPM) is a bit of hardware that can help. When it was first introduced, it got a bad reputation because it was "easy to portray it as a DRM mechanism", though it is difficult to deploy that way and no one has actually done so. TPMs are small chips, made by several different manufacturers, that are generally differentiated by their performance and the amount of NVRAM storage they provide. TPM implementations also have "a bewildering array of different bugs", Garrett said.
TPMs have several functions, but the one of interest for ensuring that the boot process has not been tampered with uses the platform configuration registers (PCRs). There are normally 16 to 24 of these registers; they are not directly accessible outside of the chip and all access is mediated by the rules of the TPM. PCRs are 20 bytes long in TPM 1.2, which is the length of an SHA-1 hash; TPM 2.0 allows for multiple hash algorithms, so the number and size of the PCRs change to support them.
Ensuring tamper-free boot means that each step of the process must be "measured", which effectively means calculating a cryptographic hash of the binary. Each step in the boot process would measure the next, so the firmware measures the bootloader, the bootloader measures the kernel and initial ramdisk (initrd), and so on. The PCRs provide a tamper-proof mechanism to assist in the measurement process.
One cannot store a value directly into a PCR; instead the TPM must be asked to store the value, which it does in a way that provides integrity to the result. Instead of just storing the value, which would allow any program with access to the hardware to set it to the "right" value, it concatenates the existing value in the PCR and the written value (typically the hash of the measured data) and hashes the result. So, in order to reproduce the value in a given PCR, the same measurements must be written to the register in the same order.
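A minimal Python sketch of that extend operation (using SHA-1, as in TPM 1.2) shows why the final value encodes both the measurements and their order; the component names here are just placeholders and the real operation happens entirely inside the TPM:

```python
import hashlib

def pcr_extend(current: bytes, measurement: bytes) -> bytes:
    # TPM 1.2 extend: new PCR value = SHA-1(old value || measurement digest)
    return hashlib.sha1(current + measurement).digest()

# An SHA-1 PCR starts out as 20 zero bytes; each boot stage extends it with
# the digest of the next component before handing control to it.
pcr = bytes(20)
for component in (b"bootloader image", b"kernel image", b"initrd image"):
    pcr = pcr_extend(pcr, hashlib.sha1(component).digest())

print("final PCR value:", pcr.hex())
```

Changing any component, or measuring the same components in a different order, yields a different final value.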
There is also a log associated with the TPM. Each measurement adds an entry to the log that records what was measured and what the hash was. While untrusted code can overwrite the log, he said, that turns out not to be as much of a problem as it sounds.
All x86 firmware has measurement capabilities, though sometimes there are problems with what they can measure. For example, there was firmware he encountered that would measure code that came from disk, but not code that came via a network boot, which kind of misses the point. But that firmware has since been fixed.
Bootloader support
There is no Linux bootloader that supports measurement, however. At one time, TrustedGRUB could be used, but it is now "old and busted"; it worked, but it "wasn't particularly nice", Garrett said. Rohde & Schwarz Cybersecurity has developed TrustedGRUB2, which supports using the TPM, but it has some shortcomings. In particular, it does not support UEFI or TPM 2.0. So Garrett and others have added code to GRUB 2 to support measuring the kernel and other components at boot time (in this GitHub repository).
There is more to measure than just the kernel, however. The booted state of the system is affected by many other components and configuration files. The kernel command line is relevant, as is the GRUB configuration, since GRUB has a scripting interface that can make hardware changes.
But putting each individual configuration piece into its own PCR does not scale because there are a limited number of them. So there is a need to reuse PCRs, but the final value of the PCR will depend on the order in which those items were measured. Trying to establish a strict ordering is something he would like to avoid. There is also the problem that unimportant changes to configuration files (e.g. comments) will still cause the final hash value to be different. For those and other reasons, using the PCRs that way is suboptimal, he said.
Instead, though, the log file can be used. It can be overwritten with invalid data, but that can be detected by replaying the log and calculating the hashes independently. There are two formatting possibilities for the log messages that Garrett described. The first would log a description of the binary and its hash, which is fine for a small number of binaries. That doesn't work so well for configuration information, though, because it may have unimportant changes that alter the hash. For those, the log entry would contain the text that has been hashed in conjunction with its hash.
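A rough sketch of such a replay check, assuming a simple list of (description, digest) entries, might look like this; the entry format is invented for the example, and the TPM-reported PCR value is what a forged log cannot account for:

```python
import hashlib

def replay_log(log_entries, reported_pcr: bytes) -> bool:
    # Recompute the PCR from scratch by replaying every logged digest in
    # order. The log lives in untrusted memory, but a tampered log will not
    # reproduce the value that the TPM itself reports for the PCR.
    pcr = bytes(20)
    for description, digest in log_entries:
        pcr = hashlib.sha1(pcr + digest).digest()
    return pcr == reported_pcr
```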
Then there needs to be a policy file that describes the acceptable hashes for binaries as well as the allowable text for configuration (using regular expressions for parameters and the like). Creating that policy may be rather troublesome, though. His employer, CoreOS, builds the policy automatically for each release. The policy is not complete, however, since it needs known-good hashes for the firmware on the system and no firmware vendor he knows provides that information. So CoreOS users must extract the values from a known-good system, which will work fine unless the firmware is upgraded at some point.
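One way to picture such a policy, with hypothetical field names and patterns rather than CoreOS's actual format, is a sketch along these lines:

```python
import hashlib
import re

# Hypothetical policy: exact digests for binaries, regular expressions for
# configuration text whose exact bytes may legitimately vary (comments,
# ordering of options, and so on).
KNOWN_GOOD_KERNEL = hashlib.sha1(b"placeholder kernel image").digest()

POLICY = {
    "binary_digests": {KNOWN_GOOD_KERNEL},
    "config_patterns": [re.compile(r"linux /vmlinuz-\S+ root=\S+ ro( quiet)?")],
}

def entry_ok(kind: str, text: bytes, digest: bytes) -> bool:
    if kind == "binary":
        return digest in POLICY["binary_digests"]
    if kind == "config":
        # Configuration entries carry the measured text alongside the hash,
        # so the hash can be verified while the text is matched loosely.
        return (hashlib.sha1(text).digest() == digest and
                any(p.fullmatch(text.decode()) for p in POLICY["config_patterns"]))
    return False
```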
While it is easy for CoreOS to provide an initial RAM filesystem (initramfs) and its hash, other distributions build the initramfs on the user's system when the kernel or other components are updated. Timestamps then get into the binary, which means the hash is different for each. Some kind of generic initramfs using reproducible build mechanisms would alleviate that problem.
There is also a question of where the boot data gets stored. If it is stored in the initramfs, that will change the hash, so he suggested using UEFI variables for some information and the TPM for keys. In a process known as "sealing", the TPM can store encrypted information that it will only decrypt if certain PCRs have the right values to show that the boot process has not been tampered with. Having sealed keys for tpmtotp (Time-based one-time password, TOTP, attestation using the TPM), disk encryption, or SSH would ensure that the data is only available to properly booted systems.
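The access-control property that sealing provides can be modeled with a toy class like the one below; it is not the real TPM protocol (a real TPM encrypts the blob under keys that never leave the chip), but it shows why a secret sealed against a PCR value becomes unavailable once any measurement differs:

```python
import hashlib
import hmac
import os

class ToyTPM:
    # Toy model of sealing: a secret is released only if the PCR holds the
    # same value it held when the secret was sealed. This only illustrates
    # the "right measurements or no secret" property.
    def __init__(self):
        self.pcr = bytes(20)
        self._key = os.urandom(32)  # stands in for a TPM-internal key

    def extend(self, digest: bytes) -> None:
        self.pcr = hashlib.sha1(self.pcr + digest).digest()

    def seal(self, secret: bytes):
        tag = hmac.new(self._key, self.pcr + secret, hashlib.sha256).digest()
        return self.pcr, secret, tag

    def unseal(self, blob) -> bytes:
        sealed_pcr, secret, tag = blob
        good = hmac.new(self._key, sealed_pcr + secret, hashlib.sha256).digest()
        if sealed_pcr != self.pcr or not hmac.compare_digest(tag, good):
            raise PermissionError("boot measurements do not match the sealed state")
        return secret
```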
One problem that has not yet been solved is handling firmware or operating system upgrades. There needs to be a mechanism to unseal values and reseal them based on the upgraded system. So far, no solution to that problem has been found.
Intel's Trusted Execution Technology (TXT) is supposed to make this all easier, he said, but that isn't the case. TXT is based on a dynamic root of trust, rather than the static root of trust used by TPM, which in theory would sidestep some of the problems that the TPM-based boot integrity has encountered. But TXT has "no meaningful support for Secure Boot" and it is also incompatible with runtime UEFI. In effect, Garrett said, TXT is not compatible with the way we boot operating systems.
To do
There are still things that need to be done before this gets into the hands of users. Support for it needs to ship in bootloaders; the firmware in desktop systems is likely to have lots of different bugs that may cause systems using this feature not to boot, so there is a lot of testing work to be done there. Firmware vendors and distributions will need to start shipping known-good measurements. The firmware upgrade process will need to be integrated with updating the measurement information and there will need to be ways to create initramfs images deterministically. But we are getting closer to having measured boot right out of the box.
One audience member wondered about the patches to GRUB 2 and whether those would be making their way upstream. Garrett said that he plans to do that; he has talked to Richard Stallman and convinced him that what was being done was "not intrinsically evil", which was met with audience applause. Garrett joked that he hoped that would find its way into his annual performance review.
GRUB 2 has a new maintainer who is more active, he said, which should help getting this work upstream. There is one problem, however, in that the GRUB 2 project requires copyright assignment and some of the code comes from TrustedGRUB, which he can't assign. He is looking to resolve that since he does not want out-of-tree patches.
[I would like to thank the Linux Foundation for travel support to attend the Linux Security Summit in Toronto.]
AMD memory encryption technologies
Today's virtual machines (VMs) have a variety of protections from their brethren, but hypervisor or kernel bugs can allow guests to access the memory of other guests. In addition, providers of VMs can see the memory of any of the guests, so users of public clouds have to place a lot of trust in their provider. AMD has some upcoming features in its x86 processors that will encrypt memory in ways that alleviate some of these problems. David Kaplan gave a presentation about these new technologies at the Linux Security Summit in Toronto.
![David Kaplan [David Kaplan]](https://static.lwn.net/images/2016/lss-kaplan-sm.jpg)
The motivation for these features is the cloud environment. Currently, the hypervisor must enforce the isolation between guests through a variety of means: hardware virtualization support, page tables, VM intercepts, and so on. But sometimes those break down, leading to various vulnerabilities that allow guests to access other guests, which is scary, he said.
But users are required to trust their cloud providers since they have full access to all guest memory to extract secrets or even to inject code into VMs. The cloud providers would rather not have that power, Kaplan said; they do not want to be able to see their customers' data. For one thing, that protects the providers from "rogue admin" attacks, where a disgruntled employee uses the unwanted access to attack a customer.
Attacks of that kind, as well as those where a guest gets access to another guest's memory, are "user-access attacks", he said. AMD is targeting those as well as "physical-access attacks", where someone with access to the hardware can probe the physical DRAM interface or freeze and steal the memory chips (e.g. a cold boot attack). How important it is to resist those and other, similar attacks depends on who you talk to, he said.
There are two separate features—Secure Memory Encryption (SME) and Secure Encrypted Virtualization (SEV)—that both use the same hardware support that will be provided in upcoming processors. That support includes an AES-128 hardware engine inline with the RAM and memory controller so that memory can be encrypted and decrypted on the way in and out of the processor with "minimal performance impact". The data inside the processor (e.g. registers, caches) will be in the clear; there will just be a "little extra latency" when RAM is involved.
All of the keys will be managed within the SoC by the AMD Secure Processor, which is a separate 32-bit ARM Cortex A5 that is present on recent SoCs. It runs a secure (closed source) operating system and enables hardware-validated boot. It is used in some laptops as a firmware Trusted Platform Module (TPM). The secure processor will only run AMD-signed code; it also provides cryptographic key generation and management functions.
Of the two features, SME is the simpler. It uses a single key that is generated at boot time using the random-number generator to transparently encrypt pages that have been marked with a special "encrypted" bit in the page-table entry. The operating system or hypervisor manages which pages will be encrypted in RAM by use of that bit. There is support for hardware devices to DMA to and from encrypted memory as well. SME is targeted at thwarting physical-access attacks, since the contents of memory will be unreadable without the key, which is never accessible outside of the secure processor.
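Conceptually, marking a page encrypted is just setting one bit in the page-table entry; in the sketch below the bit position is an assumed example, since the real "C-bit" location is reported by the processor rather than being a fixed constant:

```python
# Sketch of marking a page as encrypted under SME. Bit 47 is used here only
# as an example; the actual C-bit position is reported via CPUID, and the
# marking is done by the OS or hypervisor in its page-table code.
C_BIT = 1 << 47

def mark_encrypted(pte: int) -> int:
    # Set the encryption bit in a page-table entry value.
    return pte | C_BIT

def is_encrypted(pte: int) -> bool:
    return bool(pte & C_BIT)
```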
SEV, on the other hand, is more complicated. It has multiple encryption keys in the design and is meant to protect guests' memory from each other and from the hypervisor. The eventual goal, Kaplan said, is for the hypervisor to have no view into the guest.
There are keys for the hypervisor and for each VM, though groups of VMs could share keys and some VMs might be unsecured. SEV cryptographically isolates the guests and hypervisor to the point where cache lines (which are unencrypted) are tagged with an ID that specifies which address space they belong to; the processor will prevent guests from accessing the cache of other guests.
The owner of a guest is a "key player" in using SEV, Kaplan said. Information like secrets and policies will need to be transferred to the secure processor using the hypervisor to transport that data. Since the hypervisor is untrusted in this model (so that cloud providers do not have access to customer secrets), the guest owner will create a secure channel to the secure processor (through the hypervisor) using Diffie-Hellman (DH) key exchange.
Launching a guest is a somewhat complicated process; Kaplan's slides [PDF] may be of interest for those who want more details. The hypervisor begins by loading an unencrypted BIOS or OS image into memory. The guest owner then supplies their DH key and the hypervisor facilitates the creation of a secure channel between the guest owner and the secure processor (without being able to eavesdrop on the traffic). Part of that exchange will provide the guest owner with a certificate that allows them to prove that they are truly talking to the secure processor.
The hypervisor then allocates an "address space identifier" (ASID), which is what identifies the guest (and the key for that guest's memory). That ASID is provided to the secure processor with a request to generate or load a key into the AES engine and to encrypt the BIOS/OS image using that key. The hypervisor then sets up and runs the guest using the ASID assigned; the memory controller, AES engine, and secure processor will work together to ensure that the memory is encrypted and decrypted appropriately.
The hypervisor will also send a "launch receipt" to the user that includes a measurement (hash) of the image and some platform authentication information. If the user is provided with the right measurement, they can then provide secrets like disk encryption keys to the guest over a secure channel (e.g. TLS).
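The guest owner's side of that check might look roughly like the following sketch; the key and hash choices are assumptions for illustration, and the real SEV measurement covers more launch context than just the image:

```python
import hashlib
import hmac

def receipt_matches(image: bytes, reported_measurement: bytes,
                    session_key: bytes) -> bool:
    # Recompute the measurement the guest owner expects for the image it
    # asked to launch and compare it with what the receipt reports. The key
    # stands in for material negotiated with the secure processor during the
    # launch; secrets are only released to the guest if the values match.
    expected = hmac.new(session_key, hashlib.sha256(image).digest(),
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, reported_measurement)
```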
There are two levels of page tables: one for the guest and one for the hypervisor. The guest tables determine whether the memory is private or is shared with the hypervisor. All executable pages are private (no matter the setting), as are the guest's page tables. Data pages can be either, but DMA must use shared pages.
A common question concerns the ASID: couldn't the hypervisor "spoof" a different ASID? The answer is that it could, but it wouldn't really gain it anything. If it tries walking the guest page tables or executing code using the wrong key, it will not be particularly successful. SEV is meant to block a range of attacks, both physical and user access; the intent is to reduce the attack surface even more in coming years.
In order to use SEV, both hypervisors and guests will need to change to support it. There are a number of software components required, some that AMD expects to ship and others that it is working with the open-source community on. The secure processor firmware is distributed in binary form and the source is not public. There is a Linux driver to support the secure processor that has been posted for review. The open-source hypervisor support is also being worked on.
There was a question about why AMD had not used the TPM API for its secure processor. Kaplan said there was interest in a simpler API that focused on the VM launch cycle. The API is available, though it is only in beta at this point, so those who are interested should comment on it. Also, as is often the case with processor features, Kaplan was unable to say when SoCs with either feature would be available.
[I would like to thank the Linux Foundation for travel support to attend the Linux Security Summit in Toronto.]
Brief items
Security quotes of the week
A day or two later I had a fairly complicated self-modifying ROP chain to make the necessary C++ virtual calls to interact with other services from the new, heavily sandboxed, mediaextractor and I was ready to start working on the privilege elevation into system_server. However, every time I tested, attempts to lookup the system_server services failed - and looking in the logs I realised that I’d misunderstood the selinux policy. While the mediaextractor was allowed to make binder calls; it wasn’t permitted to lookup any other binder services! Privilege elevation on N would instead require exploiting an additional, distinct vulnerability.
Rob Fuller, a principal security engineer at R5 Industries, said the hack works reliably on Windows devices and has also succeeded on OS X, although he's working with others to determine if it's just his setup that's vulnerable. The hack works by plugging a flash-sized minicomputer into an unattended computer that's logged in but currently locked. In about 20 seconds, the USB device will obtain the user name and password hash used to log in to the computer. Fuller, who is better known by his hacker handle mubix, said the technique works using both the Hak5 Turtle ($50) and USB Armory ($155), both of which are USB-mounted computers that run Linux.
Building a new Tor that can resist next-generation state surveillance (ars technica)
Here's a lengthy ars technica article on efforts to replace Tor with something more secure. "As a result, these known weaknesses have prompted academic research into how Tor could be strengthened or even replaced by some new anonymity system. The priority for most researchers has been to find better ways to prevent traffic analysis. While a new anonymity system might be equally vulnerable to adversaries running poisoned nodes, better defences against traffic analysis would make those compromised relays much less useful and significantly raise the cost of de-anonymising users."
A bite of Python (Red Hat Security Blog)
On the Red Hat Security Blog, Ilya Etingof describes some traps for the unwary in Python, some that have security implications. "Being easy to pick up and progress quickly towards developing larger and more complicated applications, Python is becoming increasingly ubiquitous in computing environments. Though apparent language clarity and friendliness could lull the vigilance of software engineers and system administrators -- luring them into coding mistakes that may have serious security implications. In this article, which primarily targets people who are new to Python, a handful of security-related quirks are looked at; experienced developers may well be aware of the peculiarities that follow." (Thanks to Paul Wise.)
New vulnerabilities
389-ds-base: information disclosure
Package(s): 389-ds-base
CVE #(s): CVE-2016-4992
Created: September 7, 2016
Updated: November 3, 2016
Description: From the Red Hat bugzilla:
A vulnerability in 389-ds-base was found that allows to bypass limitations for compare and read operations specified by Access Control Instructions. When having LDAP sub-tree with some existing objects and having BIND DN which have no privileges over objects inside the sub-tree, unprivileged user can send LDAP ADD operation specifying an object in (supposedly) inaccessible sub-tree. The returned error messages discloses the information when the queried object exists having the specified value. Attacker can use this flaw to guess values of RDN component by repeating the above process.
canl-c: proxy manipulation
Package(s): canl-c
CVE #(s):
Created: September 2, 2016
Updated: September 8, 2016
Description: From the Fedora advisory: This is a hotfix for proxy DN manipulation vulnerabilities.
charybdis: incorrect SASL authentication
Package(s): charybdis
CVE #(s): CVE-2016-7143
Created: September 7, 2016
Updated: September 8, 2016
Description: From the Debian advisory:
It was discovered that incorrect SASL authentication in the Charybdis IRC server may lead to users impersonating other users.
chromium: multiple vulnerabilities
Package(s): chromium
CVE #(s): CVE-2016-5147 CVE-2016-5148 CVE-2016-5149 CVE-2016-5150 CVE-2016-5151 CVE-2016-5152 CVE-2016-5153 CVE-2016-5154 CVE-2016-5155 CVE-2016-5156 CVE-2016-5157 CVE-2016-5158 CVE-2016-5159 CVE-2016-5160 CVE-2016-5161 CVE-2016-5162 CVE-2016-5163 CVE-2016-5164 CVE-2016-5165 CVE-2016-5166 CVE-2016-5167
Created: September 2, 2016
Updated: September 13, 2016
Description: From the Arch Linux advisory:
CVE-2016-5147 CVE-2016-5148 (cross-site scripting): Universal XSS in Blink.
CVE-2016-5149 (script injection): Script injection in extensions.
CVE-2016-5150 (arbitrary code execution): Use after free in Blink.
CVE-2016-5151 (arbitrary code execution): Use after free in PDFium.
CVE-2016-5152 CVE-2016-5154 CVE-2016-5157 CVE-2016-5158 CVE-2016-5159 (arbitrary code execution): Heap overflow in PDFium.
CVE-2016-5153 (arbitrary code execution): Use after destruction in Blink.
CVE-2016-5155 CVE-2016-5163 (address bar spoofing): Address bar spoofing.
CVE-2016-5156 (arbitrary code execution): Use after free in event bindings.
CVE-2016-5160 CVE-2016-5162 (access restriction bypass): Extensions web accessible resources bypass.
CVE-2016-5161 (arbitrary code execution): Type confusion in Blink.
CVE-2016-5164 (address bar spoofing): Universal XSS using DevTools.
CVE-2016-5165 (script injection): Script injection in DevTools.
CVE-2016-5166 (smb relay attack): SMB Relay Attack via Save Page As.
CVE-2016-5167 (arbitrary code execution): Various fixes from internal audits, fuzzing and other initiatives.
ganglia: cross-site scripting
Package(s): ganglia
CVE #(s):
Created: September 6, 2016
Updated: September 8, 2016
Description: From the Red Hat bugzilla:
A reflected XSS issue was found in ganglia-web. This issue was fixed in the 3.7.2 release.
gd: out-of-bounds read
Package(s): gd
CVE #(s): CVE-2016-6905
Created: September 1, 2016
Updated: September 8, 2016
Description: From the openSUSE advisory: Out-of-bounds read in function read_image_tga in gd_tga.c.
icu: code execution
Package(s): icu
CVE #(s): CVE-2016-6293
Created: September 8, 2016
Updated: November 21, 2016
Description: From the Debian-LTS advisory:
This update fixes a buffer overflow in the uloc_acceptLanguageFromHTTP function in ICU.
java: unspecified vulnerability
Package(s): java-1_7_1-ibm
CVE #(s): CVE-2016-3485
Created: September 8, 2016
Updated: September 22, 2016
Description: From the SUSE CVE entry:
Unspecified vulnerability in Oracle Java SE 6u115, 7u101, and 8u92; Java SE Embedded 8u91; and JRockit R28.3.10 allows local users to affect integrity via vectors related to Networking.
jsch: path traversal
Package(s): jsch
CVE #(s): CVE-2016-5725
Created: September 6, 2016
Updated: September 22, 2016
Description: From the Debian LTS advisory:
It was discovered that there was a path traversal vulnerability in jsch, a pure Java implementation of the SSH2 protocol.
kernel: three vulnerabilities
Package(s): kernel
CVE #(s): CVE-2016-3857 CVE-2016-6480 CVE-2016-7118
Created: September 6, 2016
Updated: September 20, 2016
Description: From the CVE entries:
The kernel in Android before 2016-08-05 on Nexus 7 (2013) devices allows attackers to gain privileges via a crafted application, aka internal bug 28522518. (CVE-2016-3857)
Race condition in the ioctl_send_fib function in drivers/scsi/aacraid/commctrl.c in the Linux kernel through 4.7 allows local users to cause a denial of service (out-of-bounds access or system crash) by changing a certain size value, aka a "double fetch" vulnerability. (CVE-2016-6480)
fs/fcntl.c in the "aufs 3.2.x+setfl-debian" patch in the linux-image package 3.2.0-4 (kernel 3.2.81-1) in Debian wheezy mishandles F_SETFL fcntl calls on directories, which allows local users to cause a denial of service (NULL pointer dereference and system crash) via standard filesystem operations, as demonstrated by scp from an AUFS filesystem. (CVE-2016-7118)
kibana: two vulnerabilities
Package(s): Kibana
CVE #(s):
Created: September 8, 2016
Updated: September 8, 2016
Description: From the Red Hat advisory:
* A flaw was found in Kibana's logging functionality. If custom logging output was configured in Kibana, private user data could be written to the Kibana log files. A system attacker could use this data to hijack sessions of other users when using Kibana behind some form of authentication such as Shield.
* A cross-site scripting (XSS) flaw was found in Kibana. A remote attacker could use this flaw to inject arbitrary web script into pages served to other users.
libksba: denial of service
Package(s): libksba
CVE #(s):
Created: September 2, 2016
Updated: September 22, 2016
Description: From the Fedora bug report: It was found that an unproportionate amount of memory is allocated when parsing crafted certificates in libskba, which may lead to DoS. Moreover in libksba 1.3.4, allocated memory is uninitialized and could potentially contain sensitive data left in freed memory block.
libstorage: password disclosure
Package(s): libstorage
CVE #(s): CVE-2016-5746
Created: September 8, 2016
Updated: September 8, 2016
Description: From the openSUSE advisory:
This update for libstorage fixes the following issues:
- Use stdin, not tmp files for passwords (bsc#986971, CVE-2016-5746)
libtomcrypt: signature forgery
Package(s): libtomcrypt
CVE #(s): CVE-2016-6129
Created: September 7, 2016
Updated: November 7, 2016
Description: From the Debian LTS advisory:
It was discovered that the implementation of RSA signature verification in libtomcrypt is vulnerable to the Bleichenbacher signature attack. If an RSA key with exponent 3 is used it may be possible to forge a PKCS#1 v1.5 signature signed by that key.
mailman: password disclosure
Package(s): mailman
CVE #(s): CVE-2016-6893
Created: September 2, 2016
Updated: November 2, 2016
Description: From the Debian advisory: It was discovered that there was a CSRF vulnerability in mailman, a web-based mailing list manager, which could allow an attacker to obtain a user's password.
mozilla-thunderbird: unspecified vulnerabilities
Package(s): mozilla-thunderbird
CVE #(s):
Created: September 1, 2016
Updated: October 3, 2016
Description: The Slackware package of mozilla-thunderbird was updated to version 45.3, noting that "this release contains security fixes and improvements." So far, the upstream release has not specified which security fixes are included.
tiff3: two vulnerabilities
Package(s): tiff3
CVE #(s): CVE-2016-3623 CVE-2016-6223
Created: September 6, 2016
Updated: September 8, 2016
Description: From the Debian LTS advisory:
Several security vulnerabilities were discovered in tiff3, a library providing support for the Tag Image File Format (TIFF). An attacker could take advantage of these flaws to cause a denial-of-service against an application using the libtiff4 or libtiffxx0c2 library (application crash), or potentially execute arbitrary code with the privileges of the user running the application.
tomcat: redirect HTTP traffic
Package(s): tomcat
CVE #(s): CVE-2016-5388
Created: September 7, 2016
Updated: November 3, 2016
Description: From the CVE entry:
Apache Tomcat through 8.5.4, when the CGI Servlet is enabled, follows RFC 3875 section 4.1.18 and therefore does not protect applications from the presence of untrusted client data in the HTTP_PROXY environment variable, which might allow remote attackers to redirect an application's outbound HTTP traffic to an arbitrary proxy server via a crafted Proxy header in an HTTP request, aka an "httpoxy" issue. NOTE: the vendor states "A mitigation is planned for future releases of Tomcat, tracked as CVE-2016-5388"; in other words, this is not a CVE ID for a vulnerability.
Page editor: Jake Edge