Security
LSS: Kernel security subsystem reports
The morning of day two of this year's Linux Security Summit was filled with reports from various kernel security subsystem maintainers. Each spoke for 20 minutes or so, generally about progress in the last year, as well as plans for the future.
Crypto
Herbert Xu reviewed some of the changes that have come for the kernel crypto subsystem, starting with the new user-space API. Since cryptography can be done in user space, providing an API to do it in the kernel may seem a bit roundabout, but it is important so that user space can access hardware crypto accelerators. The API is targeted at crypto offload devices that were not accessible to user space before.
The interface is socket-based, so data can be sent to devices using write() or send(). For large amounts of data, splice() can be used for zero-copy I/O. The API is "completely extensible". It doesn't currently handle asymmetric key cryptography, for example, but that could be easily added.
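As a rough illustration (not code from the talk), hashing a buffer with SHA-1 through that socket interface looks something like the sketch below; it assumes a kernel that provides the AF_ALG socket family and the <linux/if_alg.h> header.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <linux/if_alg.h>        /* struct sockaddr_alg */

    #ifndef AF_ALG
    #define AF_ALG 38                /* not defined by older libc headers */
    #endif

    int main(void)
    {
        struct sockaddr_alg sa = {
            .salg_family = AF_ALG,
            .salg_type   = "hash",   /* algorithm class */
            .salg_name   = "sha1",   /* algorithm name */
        };
        unsigned char digest[20];
        const char msg[] = "hello, kernel crypto";
        int tfm, op, i;

        tfm = socket(AF_ALG, SOCK_SEQPACKET, 0);
        if (tfm < 0 || bind(tfm, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
            perror("AF_ALG");
            return 1;
        }
        op = accept(tfm, NULL, 0);          /* one socket per operation */
        send(op, msg, strlen(msg), 0);      /* feed in the data... */
        read(op, digest, sizeof(digest));   /* ...and read back the digest */

        for (i = 0; i < (int)sizeof(digest); i++)
            printf("%02x", digest[i]);
        printf("\n");
        close(op);
        close(tfm);
        return 0;
    }

A cipher works along roughly the same lines, with salg_type set to "skcipher" and the key and IV supplied through setsockopt() and sendmsg() ancillary data.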
There is also a new user-space control interface for configuring the kernel crypto algorithms. For example, there are multiple AES algorithms available that are optimized for different processors. The performance of the optimized versions may be 20-30 times better than the generic C implementation. The system can often figure out the right one to use, Xu said, but some variants are not easily chosen automatically, so there is a need for this interface.
Parallelizing the crypto algorithms using pcrypt is a case in point. In some scenarios, it may make sense to spread the crypto work across different processors, but doing so can sometimes degrade performance. Pcrypt was designed for the IPsec use case, but an administrative interface is needed to choose when to use it. That interface is netlink-based and allows users to select the priority of the algorithms that are used by the kernel.
Optimizations of crypto algorithms for various CPUs have also been added. The SHA-1 algorithm has been enhanced to use the SSE3 instructions for x86 processors, and more AES-NI modes for x86 have been added. There is now SHA support on the VIA Nano processor as well. The arc4 cipher has added "block" cipher support, which means that it can be handed more than a single byte at a time (as was required before).
Support for new hardware has also been added, including picoXcell, CAAM, s5p-sss, and ux500. Those are all non-x86 crypto offload devices.
Finally, Xu noted that asymmetric key ciphers have finally been added to the kernel. He had wanted them for some time, but there were no in-kernel users. Now, "thanks to IMA and module signing", there are such users, so that code, along with hardware acceleration and a user-space interface, has been added.
AppArmor
The AppArmor access control mechanism has seen some incremental improvements over the last year, John Johansen reported. One focus has been on eliminating the out-of-tree patches to complete the AppArmor system. There are some "critical pieces" missing, particularly in the upstream version of AppArmor, he said.
Several things have landed in AppArmor, including some bug fixes and the aafs (AppArmor filesystem) introspection interface. The latter allows programs to examine the rules and policies that have been established in the system.
A larger set of changes has been made on the user-space side. The project has standardized on Python, so some tools were rewritten in that language, while others were ported to support Python 3. In addition, the policy language has been made more consistent, and some simple shortcuts have been added to make it easier to use.
The policy compiler has been improved as well, both in terms of memory usage and performance. There were some test policies that could not be compiled even on 256GB systems, but they can now be compiled on 16GB systems. The compiler runs two to four times faster and produces policies that are 30-50% smaller. Lastly, some basic LXC container integration has been added to AppArmor.
There are a number of things that are "close to landing", he said. The AppArmor mount rules, which govern the allowable devices, filesystem types, mount points, and so on for mounting, are being tested in Ubuntu right now. The implementation seems solid, but it would be nice to have a Linux Security Module (LSM) hook for pivot_root(). There are some "nasty things" that pivot_root() does with namespaces, and the LSM hook could help there.
The reader-writer locks used by AppArmor have been "finally" converted to use read-copy-update (RCU), and that will be pushed upstream. There are also some improvements to policy introspection, including adding a directory for each profile in a given namespace. The original introspection interface was procfs-style, but AppArmor has moved to a sysfs-style interface, which should be more acceptable.
The policy matching engine has been cleaned up and the performance has been improved. Some of that work has been in minimizing the size of the policies. A new policy templating tool has been created that will build a base policy as a starting point for administrators. There has also been work on a sandbox, similar to the SELinux sandbox, that can dynamically generate policies to create a chroot() or container-based sandbox with a nested X server to isolate processes. The last of the near-term changes is a way to mediate D-Bus access with AppArmor rules, which has been prototyped.
The final category of features that Johansen presented was those that are being worked on, but won't be merged soon. Converting the deterministic finite automaton (DFA) used in the matching engine to an extended hybrid finite automaton (eHFA) headed that list. An eHFA provides capabilities that DFAs don't have, including variable matching and back references. The latter is not something AppArmor is likely to use, but eHFAs do provide better compression and performance. Another matching engine enhancement is sharing state machines between profiles and domains, which will improve memory usage and performance.
Beyond that, there are plans to add a "learning mode", similar to SELinux's audit2allow, so that policies can be created from the actions of running programs. Adding more mediation is also being worked on, including handling environment variable filtering, inter-process communication (IPC), and networking. Internally labeling files and other objects, so that the matching engine does not need to run again for objects that have been recently accessed, is also on the horizon.
Key management
In a short presentation, David Howells gave an update on the key management subsystem in the kernel. Over the last year, the subsystem has made better use of RCU, which will improve the scalability when using keys. In addition, the kernel keyrings have been "made more useful" by adding additional keyring operations such as invalidating keys and clearing keyrings. The latter is useful for clearing the kernel DNS resolver cache, for example.
A logon key type has been added to support CIFS multi-user mounts. That key type cannot be read from user space, so that the keys cannot be divulged to attackers (e.g. when the user is away from the system). The lockdep (kernel locking validator) support has been improved, as has the garbage collector. There is now just one garbage collector, rather than two, and a deadlock in garbage collection has been fixed as well.
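For illustration, the sketch below (not from the talk) exercises those pieces using the keyutils library; the "logon" key description shown is a made-up example (real CIFS keys use their own prefix), and KEYCTL_INVALIDATE needs a 3.5 or later kernel. Build with -lkeyutils.

    #include <stdio.h>
    #include <string.h>
    #include <keyutils.h>               /* add_key(), keyctl() */

    #ifndef KEYCTL_INVALIDATE
    #define KEYCTL_INVALIDATE 21        /* added in Linux 3.5 */
    #endif

    int main(void)
    {
        const char secret[] = "s3kr1t";
        key_serial_t key;

        /* "logon" keys can be used by the kernel (CIFS multi-user mounts,
         * for example) but can never be read back from user space; the
         * description used here is a made-up example. */
        key = add_key("logon", "example:demo", secret, strlen(secret),
                      KEY_SPEC_SESSION_KEYRING);
        if (key < 0)
            perror("add_key");

        /* The newer keyring operations: invalidate a single key, or clear
         * out an entire keyring (the DNS resolver cache, say). */
        if (keyctl(KEYCTL_INVALIDATE, key) < 0)
            perror("keyctl invalidate");
        if (keyctl(KEYCTL_CLEAR, KEY_SPEC_SESSION_KEYRING) < 0)
            perror("keyctl clear");
        return 0;
    }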
In the future, a bug where the GNOME display manager (gdm) hangs in certain configurations will be fixed. The problem stems from a limitation in the kernel that does not allow session keyring manipulation from multithreaded programs. Support for a generic "crypto" key type will also be added to support signed kernel modules.
SELinux
Eric Paris prefaced his presentation by explaining that he works on the kernel and user-space pieces of SELinux—he is "not a policy writer"—so he would be focusing on those parts in his talk. There have been some interesting developments in the use of SELinux over the past year, including Red Hat's OpenShift project that allows multiple users to develop web applications on a single box. SELinux is used to isolate those users from each other. In addition, he noted the SELinux-based secure Linux containers work that provides a "super lightweight" sandbox using containers. "Twiddle one bit", he said, and that container-based sandbox can be converted to use KVM instead.
Historically, SELinux has focused on containing system daemons, but that is changing somewhat. There are a couple of user programs that are being contained in Fedora, including running the Nautilus thumbnailing program in a sandbox. In addition, Firefox and its plugins now have SELinux policies to contain them for desktop users.
RHEL 5 and 6 have also received Common Criteria certification for the virtualization profile using QEMU/KVM. SELinux enforcement was an important part of gaining that certification.
Paris said that systemd has become SELinux-aware in a number of ways. He likes the new init system and would like it to have more SELinux integration in the future. The socket activation mechanism makes it easy to launch a container on the first connection to a web port, for example. Systemd handles launching the service automatically, so that you don't need to run the init script directly, nor are "run-init games" needed. It is also much easier to deal with daemons that want to use TTYs, he said. Using SELinux enforcement in systemd means that an Apache server running as root would not be able to start or stop the MySQL server, or that a particular administrator would only be able to start and stop the web server, but not the database server.
The named file transitions feature (filename_trans) was "a little bit contentious" when it got added to SELinux, but it "ended up being brilliant", Paris said. The feature took ideas from AppArmor and TOMOYO and helps avoid mislabeling files. In addition to the standard SELinux labels for objects, policies can now use the file name to make decisions. It is just the name of the file, not the full path that "Al Viro says doesn't exist", but it allows proper labeling decisions to be made.
For example, the SSH daemon will create a .ssh directory when a user sends their keys to the system using something like ssh-copy-id. But, without filename_trans, SELinux would have no way to know what label to put on that directory, because it couldn't tell whether it was creating .ssh or some other directory (e.g. a directory being copied from the remote host). There used to be a daemon that would fix the label, but that was a "hacky" solution. Similarly, SELinux policies can now distinguish between accesses to resolv.conf and shadow. 90% of the bugs reported for SELinux are because the label is wrong, he said, and filename_trans will help alleviate that.
There has also been a split in the SELinux policy world. The upstream maintainers of the core SELinux policies have been slower to adopt changes because they are concerned with "hard security goals". That means that it can take a lot of time to get changes upstream. So, there is now a "contrib" set of policies that affect non-core pieces. That reduces the amount of "messy policy" that Dan Walsh has to fix for Fedora and RHEL.
Shrinking the policies is another area that has been worked on. The RHEL 6 policy is 6.8MB after it is compiled down, but the Fedora 18 policy has shrunk to 4.8MB. The unconfined user policies were removed, as were some duplicate policy entries, which resulted in further space savings. There are "no real drawbacks", he said, as the new policies can do basically the same things as the old in 65% less space.
But there are also efforts to grow the policies. There are "hundreds of daemons and programs" that now have a default policy, which have been incorporated into the Fedora policies. The 65% reduction number includes "all the new stuff we added", he said.
Paris finished his talk by joking that "by far the most interesting" development in the SELinux world recently was the new SELinux stickers that he handed out to interested attendees.
Integrity
The work on the integrity subsystem started long ago, but a lot of it has been merged into the mainline over the years, Mimi Zohar said to begin her report. The integrity measurement architecture (IMA) has been merged in several pieces, starting with IMA-measurement in 2.6.30, and there is still more to come. For example, IMA-appraisal should be merged soon, and the IMA-directories patches have been posted for review. In addition, digital signature support has been added for the IMA file data measurements as well as for the extended verification module (EVM) file metadata measurements. Beyond that, there is a patch to audit-log the file measurements that is currently in linux-next.
The integrity subsystem is going in two directions at once, Zohar said. It is extending Trusted Boot by adding remote attestation, while also extending Secure Boot with local integrity measurement and appraisal.
There is still more work to be done, of course. Support for signing files (including kernel modules) needs to be added to distributions, she said. There is also a need to ensure that anything that gets loaded by the kernel is signed and verified. For example, files that are loaded via the request_firmware() interface may still need to be verified.
The kernel build process also needs some work to handle signing the kernel image and modules. For users who may not be interested in maintaining a key pair but still want to sign their kernel, an ephemeral key pair can be created during the build. The private key can be used to sign the image and modules, then it can be discarded. The public key needs to be built into the kernel for module verification. There is also a need for a safe mechanism to store that public key in the UEFI key database for Secure Boot, she said.
TOMOYO
The TOMOYO LSM was added in the 2.6.30 kernel as an alternative mandatory access control (MAC) mechanism, maintainer Tetsuo Handa said. That was based on version 2.2 of TOMOYO; the in-kernel code was updated to TOMOYO 2.5 for the 3.2 kernel. There have been no major changes to TOMOYO since the January release of 3.2.
Handa mostly wanted to discuss adding hooks to the LSM API to protect against shellcode attacks. Those hooks would also allow TOMOYO to run in parallel with other LSMs, he said. By checking the binfmt handler permissions in those hooks, and possibly sanitizing the arguments to the handler, one could thwart some kinds of shellcode execution. James Morris and others seemed somewhat skeptical about that approach, noting that attackers would just adapt to the restrictions.
Those hooks are also useful for Handa's latest project, the CaitSith [PDF] LSM. He believes that customers are finding it too difficult to configure SELinux, so they are mostly disabling it. CaitSith is one of a number of different approaches he has tried (including TOMOYO) to attack that problem.
Smack
In a talk entitled "Smack veers mobile", Casey Schaufler looked at the improvements to the LSM, while pointing to the mobile device space as one of its main users. The security models in the computing industry are changing, he said. Distributions, users, files, and system administrators are "out", while operating systems, user experience, apps, and resources are "in". That shift is largely caused by the recent emphasis on mobile computing.
For Smack, there have been "a few new things" over the last year. There is now an interface for user space to ask Smack to do an access check, rather than wait for a denial. One can write a query to /smack/access, then read back the access decision. Support for the SO_PEERCRED option to getsockopt() for Unix domain sockets has been added. That allows programs to query the credentials of the remote end of the socket to determine what kind of privileges to give it.
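A query against that interface looks roughly like the sketch below. The labels are invented for the example, smackfs is assumed to be mounted at /smack, and the exact query format has differed between the original fixed-field access file and the newer whitespace-separated one, so treat this as an illustration rather than a recipe.

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        /* "May a process labeled WebApp read and write objects labeled
         * UserData?" (both labels are invented for this example) */
        const char query[] = "WebApp UserData rw";
        char answer = '?';
        int fd = open("/smack/access", O_RDWR);

        if (fd < 0) {
            perror("open /smack/access");
            return 1;
        }
        if (write(fd, query, strlen(query)) < 0)
            perror("write query");
        lseek(fd, 0, SEEK_SET);
        if (read(fd, &answer, 1) < 0)       /* kernel answers '1' or '0' */
            perror("read answer");
        printf("access %s\n", answer == '1' ? "allowed" : "denied");
        close(fd);
        return 0;
    }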
If a parent and child process are running with two different labels, there could be situations where the child can't signal its death to the parent. That can lead to zombie processes. It's only "humane" to allow the child to notify the parent, so that has been added.
There is also a new mechanism to revoke all of the rules for a given subject label. Tizen was trying to do this in a library, but it required reading all of the rules in, then removing each. Now, using /smack/remove-subject, that can all be done in one operation.
The length of Smack labels has increased again. It started out with a seven-character limit, but that was raised earlier to 23 characters in support of labeled networking. It turns out that humans don't generally create the labels, he said, so the limit has now been raised to 255 characters to support generated label names. For example, the label might include information on the version of an app, which app store it came from, and so on. Care must be taken, as there needs to be an explicit mapping from Smack labels to network labels (which are still limited to 23 characters by the CIPSO header).
There is now a "friendlier" rule setting interface for Smack. The original /smack/load interface used a fixed-length buffer with an explicit format, which caused "complaints from time to time". The new /smack/load2 interface uses white space as a separator.
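For illustration, loading a single rule through the new interface might look like the following sketch; the labels are invented, and smackfs is assumed to be mounted at /smack.

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        /* "subject object access", separated by white space; the labels
         * here are invented for the example. */
        const char rule[] = "WebApp SystemLibs rx";
        int fd = open("/smack/load2", O_WRONLY);

        if (fd < 0 || write(fd, rule, strlen(rule)) < 0)
            perror("/smack/load2");
        if (fd >= 0)
            close(fd);
        return 0;
    }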
"Transmuting" directories is now recursive. Directories can get their label either from their parent or from the process that creates them, and when the label changes, those changes now propagate into the children. Schaufler originally objected to the change, but eventually "figured out that is was better" that way, he said.
The /smack/onlycap mechanism has been extended to cover CAP_MAC_ADMIN. That means that privileged daemons can still be forced to follow the Smack rules even if they have the CAP_MAC_ADMIN capability. By writing a Smack label to /smack/onlycap, the system will be configured to only allow processes with that label to circumvent the Smack rules. Previously, only CAP_MAC_OVERRIDE was consulted, which would allow processes to get around this restriction.
The Smack rules have been split into multiple lists based on the subject label. In the past, the Smack rule list could get rather long, so it took a long time to determine that there was no rule governing a particular access. By splitting the list, a 30-95% performance increase was realized on a 40,000 rule set, depending on how evenly the rules split.
Some cleanup has been done to remove unnecessary locking and bounds checks. In addition, Al Viro had "some very interesting things to say" about the Smack fcntl() implementation. After three months, Schaufler finally settled down, reread the message, and agreed with Viro's assessment. Those problems have now been fixed.
Schaufler said that he is excited by the inclusion of Smack as the MAC solution for the Tizen distribution. He is "very much involved" in the Tizen project and looks forward to Smack being deployed in real world situations.
There are some other things coming for Smack, including better rule list searching and true list entry removal. Right now, rules that are removed are just marked, not taken out of the list, because there is a "small matter of locking" to be resolved. Beyond that, there is probably a surprise or two lurking out there for new Smack features. If someone can make the case for a feature, like the often requested multiple labels feature, it may just find its way into Smack in the future.
Yama
Kees Cook's Yama LSM was named after a Buddhist god of the underworld who is the "ruler of the departed". It started as an effort to get some symbolic link restrictions added to the kernel. Patches to implement those restrictions had been floating around since at least 1996, but had never been merged. Those restrictions are now available in the kernel in the form of the Yama LSM, but the path of getting them into the mainline was rather tortuous.
Cook outlined that history, noting that his original submission was rejected for not being an LSM in May 2010. In June of that year, he added some hardlink and ptrace() attach restrictions to the symlink changes and submitted it as the Yama LSM. In July, a process relationship API was added to allow the ptrace() restrictions to be relaxed for things like crash handlers, but Yama was reverted out of the security-next tree because it was an LSM. Meanwhile, the code was released in Ubuntu 10.10 in October and then in ChromeOS in December 2011.
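That relationship API took the form of the PR_SET_PTRACER prctl() operation; a minimal sketch (not from the talk) of how a program might use it to let a hypothetical crash-handler process attach, even with Yama's ptrace() restrictions enabled, looks like this:

    #include <stdio.h>
    #include <sys/prctl.h>
    #include <sys/types.h>

    #ifndef PR_SET_PTRACER
    #define PR_SET_PTRACER 0x59616d61    /* "Yama", for older headers */
    #endif

    int main(void)
    {
        pid_t crash_handler = 1234;      /* hypothetical helper's PID */

        /* With kernel.yama.ptrace_scope set to 1, only an ancestor (or a
         * process explicitly named here) may ptrace() this process. */
        if (prctl(PR_SET_PTRACER, crash_handler, 0, 0, 0) != 0)
            perror("PR_SET_PTRACER");

        /* ... the crash handler can now attach with ptrace() ... */
        return 0;
    }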
Eventually, the LSM was "half merged" for the 3.4 kernel. The link restrictions were not part of that, but they have subsequently been merged into the core kernel for 3.6. Those restrictions are at least 16 years old, Cook said, which means they "can drive in the US". He was able to get the link restrictions into the core by working with Al Viro, but he has not been able to get the ptrace() restrictions into the core kernel, which is where he thinks they belong. James Morris noted that none of the core kernel developers "like security", and "some actively hate it", which makes it hard to get these kinds of changes into the core—or sometimes upstream at all.
In the future, Cook would like to see some changes in the kernel module loading path to support ChromeOS. Everyone is talking about signing modules, but ChromeOS already has a protected root partition, he said. If load_module() (or a new interface) could get information about where in the filesystem a module comes from, that would solve his problem. He also mentioned the perennial LSM stacking topic, noting that Ubuntu and other distributions are hardcoding Yama stacking to get the ptrace() restrictions, so maybe that will provide impetus for a more general stacking solution—or to move the ptrace() restrictions into the core kernel.
[ Slides for many of the subsystem reports, as well as the rest of the presentations, are available on the LSS schedule page. ]
Brief items
Security quotes of the week
New vulnerabilities
atheme-services: denial of service
Package(s): atheme-services
CVE #(s): CVE-2012-1576
Created: September 25, 2012
Updated: September 26, 2012
Description: From the Gentoo advisory:
The myuser_delete() function in account.c does not properly remove CertFP entries when deleting user accounts. A remote authenticated attacker may be able to cause a Denial of Service condition or gain access to an Atheme IRC Services user account.
cloud-init: unspecified vulnerabilities
Package(s): cloud-init
CVE #(s): none
Created: September 26, 2012
Updated: September 26, 2012
Description: From the Red Hat bugzilla [1], [2]:
[1] If the init script takes longer than 90 seconds to finish (e.g. package installation & provisioning on slow network), it gets killed by systemd. Adding `TimeoutSec=0` to cloud-final.service[1] seems to fix the problem.
[2] cloud-final.service needs StandardOutput=syslog+console so that final-message gets printed to the console while booting.
kernel: denial of service
Package(s): kernel
CVE #(s): CVE-2012-3552
Created: September 26, 2012
Updated: September 26, 2012
Description: From the Red Hat advisory:
A race condition was found in the way access to inet->opt ip_options was synchronized in the Linux kernel's TCP/IP protocol suite implementation. Depending on the network facing applications running on the system, a remote attacker could possibly trigger this flaw to cause a denial of service. A local, unprivileged user could use this flaw to cause a denial of service regardless of the applications the system runs.
kernel-rt: denial of service
Package(s): kernel-rt
CVE #(s): CVE-2012-4398
Created: September 20, 2012
Updated: October 16, 2013
Description: From the Red Hat advisory:
It was found that a deadlock could occur in the Out of Memory (OOM) killer. A process could trigger this deadlock by consuming a large amount of memory, and then causing request_module() to be called. A local, unprivileged user could use this flaw to cause a denial of service (excessive memory consumption).
libguac: denial of service
Package(s): libguac
CVE #(s): CVE-2012-4415
Created: September 26, 2012
Updated: September 26, 2012
Description: From the Red Hat bugzilla:
A stack based buffer overflow flaw was found in guac client plug-in protocol handling functionality of libguac, a common library used by all C components of Guacamole. A remote attacker could provide a specially-crafted protocol specification to the guac client plug-in that, when processed would lead to guac client crash (denial of service).
MRG Grid 2.2: multiple vulnerabilities
Package(s): MRG Grid 2.2
CVE #(s): CVE-2012-2680 CVE-2012-2681 CVE-2012-2683 CVE-2012-2684 CVE-2012-2685 CVE-2012-2734 CVE-2012-2735 CVE-2012-3459 CVE-2012-3491 CVE-2012-3492 CVE-2012-3493 CVE-2012-3490
Created: September 20, 2012
Updated: March 14, 2013
Description: From the Red Hat advisory:

A number of unprotected resources (web pages, export functionality, image viewing) were found in Cumin. An unauthenticated user could bypass intended access restrictions, resulting in information disclosure. (CVE-2012-2680)

Cumin could generate weak session keys, potentially allowing remote attackers to predict session keys and obtain unauthorized access to Cumin. (CVE-2012-2681)

Multiple cross-site scripting flaws in Cumin could allow remote attackers to inject arbitrary web script on a web page displayed by Cumin. (CVE-2012-2683)

An SQL injection flaw in Cumin could allow remote attackers to manipulate the contents of the back-end database via a specially-crafted URL. (CVE-2012-2684)

When Cumin handled image requests, clients could request images of arbitrary sizes. This could result in large memory allocations on the Cumin server, leading to an out-of-memory condition. (CVE-2012-2685)

Cumin did not protect against Cross-Site Request Forgery attacks. If an attacker could trick a user, who was logged into the Cumin web interface, into visiting a specially-crafted web page, it could lead to unauthorized command execution in the Cumin web interface with the privileges of the logged-in user. (CVE-2012-2734)

A session fixation flaw was found in Cumin. An authenticated user able to pre-set the Cumin session cookie in a victim's browser could possibly use this flaw to steal the victim's session after they log into Cumin. (CVE-2012-2735)

It was found that authenticated users could send a specially-crafted HTTP POST request to Cumin that would cause it to submit a job attribute change to Condor. This could be used to change internal Condor attributes, including the Owner attribute, which could allow Cumin users to elevate their privileges. (CVE-2012-3459)

It was discovered that Condor's file system authentication challenge accepted directories with weak permissions (for example, world readable, writable and executable permissions). If a user created a directory with such permissions, a local attacker could rename it, allowing them to execute jobs with the privileges of the victim user. (CVE-2012-3492)

It was discovered that Condor exposed private information in the data in the ClassAds format served by condor_startd. An unauthenticated user able to connect to condor_startd's port could request a ClassAd for a running job, provided they could guess or brute-force the PID of the job. This could expose the ClaimId which, if obtained, could be used to control the job as well as start new jobs on the system. (CVE-2012-3493)

It was discovered that the ability to abort a job in Condor only required WRITE authorization, instead of a combination of WRITE authorization and job ownership. This could allow an authenticated attacker to bypass intended restrictions and abort any idle job on the system. (CVE-2012-3491)
MRG Messaging 2.2: authentication bypass
Package(s): MRG Messaging 2.2
CVE #(s): CVE-2012-3467
Created: September 20, 2012
Updated: September 26, 2012
Description: From the Red Hat advisory:
It was discovered that qpidd did not require authentication for "catch-up" shadow connections created when a new broker joins a cluster. A malicious client could use this flaw to bypass client authentication. (CVE-2012-3467)
munin: privilege escalation
Package(s): munin
CVE #(s): CVE-2012-3512
Created: September 26, 2012
Updated: November 5, 2012
Description: From the Red Hat bugzilla:
Currently, plugins which run as root mix their state files in the same directory as non-root plugins. The state directory is owned by munin:munin and is group-writable. Because of these facts, it is possible for an attacker who operates as user munin to cause a root-run plugin to run arbitrary code as root.
qpid: denial of service
Package(s): qpid
CVE #(s): CVE-2012-2145
Created: September 20, 2012
Updated: September 26, 2012
Description: From the Red Hat advisory:
It was discovered that the Qpid daemon (qpidd) did not allow the number of connections from clients to be restricted. A malicious client could use this flaw to open an excessive amount of connections, preventing other legitimate clients from establishing a connection to qpidd. (CVE-2012-2145)
squidclamav: denial of service
Package(s): squidclamav
CVE #(s): CVE-2012-3501
Created: September 25, 2012
Updated: September 26, 2012
Description: From the CVE entry:
The squidclamav_check_preview_handler function in squidclamav.c in SquidClamav 5.x before 5.8 and 6.x before 6.7 passes an unescaped URL to a system command call, which allows remote attackers to cause a denial of service (daemon crash) via a URL with certain characters, as demonstrated using %0D or %0A.
transmission: cross-site scripting
Package(s): transmission
CVE #(s): CVE-2012-4037
Created: September 26, 2012
Updated: October 30, 2012
Description: From the Ubuntu advisory:
Justin C. Klein Keane discovered that the Transmission web client incorrectly escaped certain strings. If a user were tricked into opening a specially crafted torrent file, an attacker could possibly exploit this to conduct cross-site scripting (XSS) attacks.
Page editor: Jake Edge