Security
SpamAssassin 3.4.1 released
One occasionally sees articles suggesting that the volume of spam on the net is in decline, but nobody would be foolish enough to argue that the spam problem has gone away. Industrial-strength spam-filtering tools are still a necessity for anybody whose email address is known by more than about two other people. For the minority of us who have not given in and moved to Gmail, SpamAssassin tends to be the spam-filtering tool of choice. In recent years it sometimes seems like the spammers are moving more quickly than the SpamAssassin project, so the announcement of the Apache SpamAssassin 3.4.1 release on April 30 — the first in over a year — is naturally of interest. A version-number bump from 3.4.0 to 3.4.1 would not seem to indicate major changes, but, in truth, the SpamAssassin developers have been busy.
The "auto whitelist" (AWL) feature of SpamAssassin has long been one of that program's more annoying aspects. In theory it tracks the emails from each sender to get an overall sense of whether they are trustworthy; email from a trusted source will get a bonus score, while messages from apparent spammers will be penalized. The sad truth of the matter, in your editor's experience, is that a spammer need only get a small number of messages through to convince the AWL that everything else should be whitelisted. If SpamAssassin's other scoring mechanisms were perfect, this kind of AWL corruption would not be a problem — but then the AWL would not be needed at all. In a world where scoring is imperfect, the AWL often seems to make things worse.
In 3.4.1, the SpamAssassin developers have tried to address some of the problems with the AWL by replacing it with a new mechanism called TxRep. The basic idea remains the same: track each sender's activity and adjust the score of new messages toward the mean of what has been seen in the past. But a number of useful changes have been made in how this tracking is done, starting with an expansion of the set of data that is used. TxRep maintains reputation scores for the sending email address (as did the AWL), but also the sending domain name, the IP addresses of the originating system and the server that transferred the message, and the "HELO" string used by the last server. For any given message, each of these quantities is mixed in with its own (user-configurable, naturally) weight.
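To make the idea concrete, here is a minimal Python sketch of that kind of weighted reputation mixing. It is an illustration only: the attribute names, weights, and the adjustment factor are invented for the example and do not correspond to SpamAssassin's actual code or configuration options.

    # Conceptual sketch of a TxRep-style adjustment; names and values are
    # hypothetical, not SpamAssassin's implementation.
    def reputation_adjustment(message_score, reputations, weights):
        """Nudge a message's score toward the weighted mean of the stored
        reputations for the sender's tracked attributes."""
        total_weight = sum(weights[attr] for attr in reputations)
        if total_weight == 0:
            return 0.0
        weighted_mean = sum(reputations[attr] * weights[attr]
                            for attr in reputations) / total_weight
        # Pull the score part of the way toward the mean of past behavior.
        return (weighted_mean - message_score) / 2

    weights = {"email": 3.0, "domain": 1.0, "ip": 1.0, "relay_ip": 1.0, "helo": 0.5}
    reputations = {"email": -2.0, "domain": -1.5, "ip": 0.5, "relay_ip": 0.0, "helo": -1.0}
    print(reputation_adjustment(5.0, reputations, weights))   # roughly -3.1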
Another useful change is that the sa-learn utility (until now used only with the Bayesian filter) can be used to train TxRep, so the same command now works to update both filters. There is a "dilution" mechanism that causes newer messages to have more influence on a sender's score than older ones, making the system more responsive should, say, a spammer repent and start actually sending useful stuff (or should TxRep initially misjudge a sender). TxRep can be used to whitelist (or blacklist) senders or IP addresses outright — something that might be worth doing automatically for the most obvious of spam or for messages that have been explicitly classified by the recipient. There is also a mechanism to automatically whitelist the recipients of outgoing mail — though that could have undesired effects if one is prone to sending irate responses to spammers.
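The dilution idea amounts to giving older observations a geometrically shrinking weight. A rough sketch follows, under the same caveat that the constant and function names here are made up for illustration:

    # Sketch of a dilution-style update: each new observation displaces a
    # fixed fraction of the accumulated history. DILUTION is hypothetical.
    DILUTION = 0.9   # weight retained by the existing reputation

    def update_reputation(old_mean, old_count, new_score):
        """Fold a newly observed message score into a sender's stored
        reputation, letting recent messages count for more than old ones."""
        if old_count == 0:
            return new_score, 1
        new_mean = old_mean * DILUTION + new_score * (1 - DILUTION)
        return new_mean, old_count + 1

    mean, count = 0.0, 0
    for score in (8.0, 7.5, 9.0, -1.0):   # three spammy messages, then one ham
        mean, count = update_reputation(mean, count, score)
    print(round(mean, 2), count)          # 7.15 4

With a plain arithmetic mean, by contrast, a sender's stored score moves more and more slowly as history accumulates, which is exactly the behavior that made the old AWL hard to correct.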
With these changes, TxRep should be able to avoid some of the worst AWL pitfalls, though the documentation still recommends against turning on auto-learning until SpamAssassin as a whole has been tuned well. But the whole thing still seems to be built around the idea that people can be spammers part of the time and senders of legitimate email at others. Perhaps your editor is an excessively unforgiving character, but it seems like the sender of known spam should not get off lightly with a gradual tweaking of a reputation score; once a spammer, always a spammer. Trust is hard to earn but easy to lose; the TxRep mechanism still doesn't quite reflect that fact.
The PDFInfo module, which has long existed outside of the SpamAssassin mainline, has now been merged; PDFInfo, as its name would suggest, looks for spammy PDF attachments. There is one other new module, URILocalBL, which allows blacklisting of spammy links using a local database.
SpamAssassin 3.4.1 can do a more thorough and careful job of normalizing all messages to the UTF-8 character set before applying rules. That should help to eliminate various tricks using strange character sets to get around the spam-checking rules.
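A much-simplified sketch of the decode-and-normalize idea follows; it is not SpamAssassin's implementation (which is written in Perl and is far more careful about malformed input), and the NFKC folding shown here is just one way to illustrate how look-alike characters can be collapsed before rules are applied:

    import unicodedata

    def normalize_part(raw_bytes, declared_charset):
        """Decode a MIME part using its declared charset, fall back to a
        lenient decode, and normalize so rules see one representation."""
        try:
            text = raw_bytes.decode(declared_charset or "utf-8")
        except (LookupError, UnicodeDecodeError):
            # Unknown or dishonest charset declarations are a classic trick.
            text = raw_bytes.decode("utf-8", errors="replace")
        return unicodedata.normalize("NFKC", text)

    # Full-width letters are a common obfuscation; after normalization a
    # plain-ASCII rule matches again.
    subject = "ＶＩＡＧＲＡ".encode("utf-8")
    print("VIAGRA" in normalize_part(subject, "utf-8"))   # True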
An interesting addition to the Bayesian filter is the ability to hash MIME attachments and use the result as a filter token. If it works well, it should allow the filter to recognize often-repeated spam payloads as a whole. But, as the manual page notes, "not much experience has yet been gathered regarding its usefulness". It seems worth a try, in any case.
Beyond all of this work, of course, is the constant challenge of maintaining the rule base in the face of a changing spammer landscape. Spammers may now be more concerned with getting past Gmail's filters than SpamAssassin, but there are still signs that a subset of spam has been tested against SpamAssassin until the rules are unable to stop it. The Bayesian filter helps with that problem, but so does an ongoing effort to keep those rules current. It is thus unsurprising that a new SpamAssassin release contains a long list of rule changes that should help to keep its effectiveness up — until the spammers work around those as well.
Your editor has often heard the complaint that email is reaching a point of complete uselessness. Such claims overstate the reality — one need only watch how email keeps our development communities going to see that. But email has been under attack for many years, making life harder for both email users and those who are charged with running email systems. It is fair to say that SpamAssassin is one of a small set of tools that has helped email to survive the ongoing spammer onslaught, so it is good to see this tool continuing to evolve.
Brief items
Security quote of the week
On the other hand, plans to try use those sharp sticks and prods to try bully these sites into the https camp like cattle -- well, if you think the world has a mixed view of technologists now, if Mozilla gets its way we'll end up with a positive rating on par with politicians -- if we're lucky.
I very much want to see an Internet where all communications are securely encrypted, but only if it's done the right way, with sites and users treated as valued partners with a full understanding of their resource constraints and sensibilities -- and not as "losers" to be treated with what amounts fundamentally to arrogant contempt.
Unboxing Linux/Mumblehard: Muttering spam from your servers (WeLiveSecurity)
WeLiveSecurity reports that ESET researchers have revealed a family of Linux malware that stayed under the radar for more than 5 years. They are calling it Linux/Mumblehard. "There are two components in the Mumblehard malware family: a backdoor and a spamming daemon. They are both written in Perl and feature the same custom packer written in assembly language. The use of assembly language to produce ELF binaries so as to obfuscate the Perl source code shows a level of sophistication higher than average. Monitoring of the botnet suggests that the main purpose of Mumblehard seems to be to send spam messages by sheltering behind the reputation of the legitimate IP addresses of the infected machines."
Mozilla: Deprecating Non-Secure HTTP
The Mozilla community has declared its intent to phase out "non-secure" (not encrypted with TLS) web access. "Since the goal of this effort is to send a message to the web developer community that they need to be secure, our work here will be most effective if coordinated across the web community. We expect to be making some proposals to the W3C WebAppSec Working Group soon."
New vulnerabilities
ax25-tools: denial of service
Package(s): ax25-tools    CVE #(s): (none)
Created: April 30, 2015    Updated: May 6, 2015
Description: From the Fedora advisory:
Fixed crash when processing ROSE packets (by rose-fix patch)
clamav: multiple vulnerabilities
Package(s): clamav    CVE #(s): CVE-2015-2170 CVE-2015-2221 CVE-2015-2222 CVE-2015-2668
Created: May 4, 2015    Updated: May 13, 2015
Description: From the Arch Linux advisory:
CVE-2015-2170 (denial of service): A flaw has been found in the UPX decoder with crafted files. During unpacking there are two range checks which are implemented "manually"; those checks lack the overflow detection that the CLI_ISCONTAINED() macro provides.
CVE-2015-2221 (denial of service): Y0da cryptor/protector is a PE file encryptor; the executable is decrypted on startup, and clamav is able to decrypt such files in order to scan them. The decryptor includes an opcode emulator, and a specially crafted file can contain a jump opcode to a position that has already been interpreted, sending clamav into an endless loop.
CVE-2015-2222 (denial of service): Petite is a tool for compressing PE files on Windows, and clamav is able to unpack such files during scanning. Once a file has been identified as "petite"-compressed, a specially crafted file can tell clamav to read more data than it has allocated memory for before decompression starts; on glibc this leads to SIGABRT in free(), since glibc's malloc() detects the problem.
CVE-2015-2668 (denial of service): A flaw has been discovered that leads to an infinite-loop condition on a crafted "xz" archive file.
curl: information leak
Package(s): curl    CVE #(s): CVE-2015-3153
Created: April 30, 2015    Updated: May 28, 2015
Description: From the Debian advisory:
It was discovered that cURL, an URL transfer library, if configured to use a proxy server with the HTTPS protocol, by default could send to the proxy the same HTTP headers it sends to the destination server, possibly leaking sensitive information.
DirectFB: two vulnerabilities
Package(s): DirectFB    CVE #(s): CVE-2014-2977 CVE-2014-2978
Created: April 30, 2015    Updated: January 23, 2017
Description: From the CVE entries:
Multiple integer signedness errors in the Dispatch_Write function in proxy/dispatcher/idirectfbsurface_dispatcher.c in DirectFB 1.4.13 allow remote attackers to cause a denial of service (crash) and possibly execute arbitrary code via the Voodoo interface, which triggers a stack-based buffer overflow. (CVE-2014-2977)
The Dispatch_Write function in proxy/dispatcher/idirectfbsurface_dispatcher.c in DirectFB 1.4.4 allows remote attackers to cause a denial of service (crash) and possibly execute arbitrary code via the Voodoo interface, which triggers an out-of-bounds write. (CVE-2014-2978)
dnsmasq: information disclosure
Package(s): dnsmasq    CVE #(s): CVE-2015-3294
Created: May 5, 2015    Updated: December 18, 2015
Description: From the CVE entry:
The tcp_request function in Dnsmasq before 2.73rc4 does not properly handle the return value of the setup_reply function, which allows remote attackers to read process memory and cause a denial of service (out-of-bounds read and crash) via a malformed DNS request.
elasticsearch: directory traversal
Package(s): elasticsearch    CVE #(s): CVE-2015-3337
Created: April 30, 2015    Updated: May 6, 2015
Description: From the Debian advisory:
John Heasman discovered that the site plugin handling of the Elasticsearch search engine was susceptible to directory traversal.
erlang: man-in-the-middle attack
Package(s): erlang    CVE #(s): CVE-2015-2774
Created: May 6, 2015    Updated: February 22, 2016
Description: From the Mageia advisory:
Erlang's TLS-1.0 implementation failed to check padding bytes, leaving it vulnerable to an issue similar to POODLE.
fcgi: denial of service
Package(s): fcgi    CVE #(s): CVE-2012-6687
Created: April 30, 2015    Updated: March 3, 2016
Description: From the Red Hat bugzilla:
A stack-smashing bug in fcgi was reported to Ubuntu and subsequently patched in both Ubuntu and Debian. According to the bug report, if more than 1024 connections are received, a segfault can occur. A patch is provided with the Ubuntu bug report at https://bugs.launchpad.net/ubuntu/+source/libfcgi/+bug/933417 and with the corresponding Debian report.
FlightGear: unspecified vulnerability
Package(s): FlightGear    CVE #(s): (none)
Created: April 30, 2015    Updated: September 30, 2015
Description: From the Fedora advisory:
This update provides a security fix related to the Nasal scripting language.
From the Debian LTS advisory:
It was discovered that flightgear, a Flight Gear Flight Simulator game, did not perform adequate filesystem validation checks in its fgValidatePath routine.
ikiwiki: cross-site scripting
Package(s): ikiwiki    CVE #(s): CVE-2015-2793
Created: May 4, 2015    Updated: May 6, 2015
Description: From the Red Hat bugzilla:
Cross-site scripting flaw in the handling of the openid_identifier parameter has been fixed in ikiwiki.
libphp-snoopy: command execution
Package(s): libphp-snoopy    CVE #(s): CVE-2014-5008
Created: May 4, 2015    Updated: December 1, 2015
Description: From the Debian advisory:
It was discovered that missing input sanitizing in Snoopy, a PHP class that simulates a web browser, may result in the execution of arbitrary commands.
openstack-glance: denial of service
Package(s): openstack-glance    CVE #(s): CVE-2014-9684 CVE-2015-1881
Created: May 6, 2015    Updated: May 6, 2015
Description: From the CVE entries:
OpenStack Image Registry and Delivery Service (Glance) 2014.2 through 2014.2.2 does not properly remove images, which allows remote authenticated users to cause a denial of service (disk consumption) by creating a large number of images using the task v2 API and then deleting them before the uploads finish, a different vulnerability than CVE-2015-1881. (CVE-2014-9684)
OpenStack Image Registry and Delivery Service (Glance) 2014.2 through 2014.2.2 does not properly remove images, which allows remote authenticated users to cause a denial of service (disk consumption) by creating a large number of images using the task v2 API and then deleting them, a different vulnerability than CVE-2014-9684. (CVE-2015-1881)
owncloud: multiple vulnerabilities
Package(s): owncloud    CVE #(s): CVE-2015-3011 CVE-2015-3012 CVE-2015-3013
Created: May 4, 2015    Updated: May 6, 2015
Description: From the Debian advisory:
CVE-2015-3011: Hugh Davenport discovered that the "contacts" application shipped with ownCloud is vulnerable to multiple stored cross-site scripting attacks. This vulnerability is effectively exploitable in any browser.
CVE-2015-3012: Roy Jansen discovered that the "documents" application shipped with ownCloud is vulnerable to multiple stored cross-site scripting attacks. This vulnerability is not exploitable in browsers that support the current CSP standard.
CVE-2015-3013: Lukas Reschke discovered a blacklist bypass vulnerability, allowing authenticated remote attackers to bypass the file blacklist and upload files such as the .htaccess files. An attacker could leverage this bypass by uploading a .htaccess and execute arbitrary PHP code if the /data/ directory is stored inside the web root and a web server that interprets .htaccess files is used. On default Debian installations the data directory is outside of the web root and thus this vulnerability is not exploitable by default.
perl-xml-libxml: information disclosure
Package(s): perl-xml-libxml    CVE #(s): CVE-2015-3451
Created: May 1, 2015    Updated: September 8, 2015
Description: From the Arch Linux advisory:
Options that have been unset are not preserved after a _clone() call (e.g. in load_xml()), so the expand_entities setting is not preserved either, resulting in an XML external entity (XXE) vulnerability. This vulnerability may lead to the disclosure of confidential data, denial of service, port scanning from the perspective of the machine where the parser is located, and other system impacts.
quassel: SQL injection
Package(s): quassel    CVE #(s): CVE-2015-3427
Created: May 1, 2015    Updated: May 6, 2015
Description: From the Mageia advisory:
Quassel is vulnerable to SQL injection through its use of Qt's postgres driver. If the PostgreSQL server is restarted or the connection is lost at any point, other IRC users may be able to trick the Quassel core into executing SQL queries upon reconnection.
squid: certificate validation bypass
Package(s): squid    CVE #(s): CVE-2015-3455
Created: May 4, 2015    Updated: December 22, 2015
Description: From the Arch Linux advisory:
The flaw allows remote servers to bypass client certificate validation. Some attackers may also be able to use valid certificates for one domain signed by a global Certificate Authority to abuse an unrelated domain. However, the bug is exploitable only if you have configured Squid to perform SSL Bumping with the "client-first" or "bump" mode of operation. Sites that do not use SSL-Bump are not vulnerable. A remote attacker is able to bypass client certificate validation; as a result, malicious server responses can wrongly be presented through the proxy to clients as secure authenticated HTTPS responses.
xen: information leak
Package(s): xen    CVE #(s): CVE-2015-3340
Created: May 4, 2015    Updated: May 6, 2015
Description: From the CVE entry:
Xen 4.2.x through 4.5.x does not initialize certain fields, which allows certain remote service domains to obtain sensitive information from memory via a (1) XEN_DOMCTL_gettscinfo or (2) XEN_SYSCTL_getdomaininfolist request.
xorg-server: denial of service
Package(s): xorg-server    CVE #(s): CVE-2015-3418
Created: May 4, 2015    Updated: May 6, 2015
Description: From the Debian LTS advisory:
This issue (CVE-2015-3418) is a regression which got introduced by fixing CVE-2014-8092. The above referenced version of xorg-server in Debian squeeze-lts fixes this regression in the following way: The length checking code validates PutImage height and byte width by making sure that byte-width >= INT32_MAX / height. If height is zero, this generates a divide by zero exception. Allow zero height requests explicitly, bypassing the INT32_MAX check (in dix/dispatch.c). (A sketch of this check appears below.)
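The arithmetic in that xorg-server fix is easy to see in a small sketch; this is an illustration in Python of the check described above, not the actual dix/dispatch.c code:

    INT32_MAX = 2**31 - 1

    def validate_put_image(byte_width, height):
        """Reject PutImage requests whose total size could overflow a
        32-bit length, handling height == 0 before dividing."""
        if height == 0:
            # A zero-height request is a harmless no-op; letting it reach
            # the division below was the CVE-2015-3418 regression.
            return True
        if byte_width >= INT32_MAX // height:
            return False
        return True

    print(validate_put_image(1024, 0))     # accepted rather than crashing
    print(validate_put_image(2**30, 4))    # oversized request still rejected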
Page editor: Jake Edge