Wednesday, July 16, 2014

KLEE on Ubuntu 14.04 LTS 64Bit

by Brad Antoniewicz.

It seems like all of the cool kids nowadays are into Symbolic Execution, especially for vulnerability research. It's probably all because of DARPA's Cyber Grand Challenge - a government-sponsored challenge to develop a system that automates vulnerability discovery.

If you start to dive into the topic, you'll undoubtedly come across KLEE, a project out of Stanford University. KLEE is a great tool to get you started with symbolic execution; however, the setup can be slightly daunting for the "app crowd" :) KLEE's home page has a Getting Started page, but it's somewhat out of date. In this blog post we'll walk you through the most up-to-date build process on a fresh install of Ubuntu 14.04 LTS Desktop 64-bit.

Packages

As with all installations, first make sure you're all up to date:
sudo apt-get update
sudo apt-get upgrade



Apt Packages

Now we'll get the easy stuff out of the way, and install all of the required packages:

sudo apt-get install g++ curl python-minimal git bison flex bc libcap-dev build-essential libboost-all-dev ncurses-dev cmake



LLVM-GCC Binaries

Next we'll need to download the LLVM-GCC binaries and extract them to our home directory:

wget http://llvm.org/releases/2.9/llvm-gcc4.2-2.9-x86_64-linux.tar.bz2
tar -jxvf llvm-gcc4.2-2.9-x86_64-linux.tar.bz2



Environment Variables

At this point, we'll need to set up a few environment variables for everything else to run properly. As stated on KLEE's Getting Started page, most issues people have are related to not setting these:

export C_INCLUDE_PATH=/usr/include/x86_64-linux-gnu 
export CPLUS_INCLUDE_PATH=/usr/include/x86_64-linux-gnu
export PATH=$PATH:$HOME/llvm-gcc4.2-2.9-x86_64-linux/bin



It's also recommended to add these to your ~/.bashrc (note the single quotes, so $PATH and $HOME aren't expanded until the shell actually starts):

echo 'export C_INCLUDE_PATH=/usr/include/x86_64-linux-gnu' >> ~/.bashrc
echo 'export CPLUS_INCLUDE_PATH=/usr/include/x86_64-linux-gnu' >> ~/.bashrc
echo 'export PATH=$PATH:$HOME/llvm-gcc4.2-2.9-x86_64-linux/bin' >> ~/.bashrc



Building LLVM 2.9

KLEE specifically requires LLVM 2.9. Now, Ubuntu does have an llvm-2.9 package, and LLVM 2.9 binaries are available from a couple of different locations; however, I decided to stay as true as possible to KLEE's Getting Started instructions. Let's download the source:

wget http://llvm.org/releases/2.9/llvm-2.9.tgz
tar -zxvf llvm-2.9.tgz
cd llvm-2.9



Before we build, we need to apply one patch:

wget http://www.mail-archive.com/klee-dev@imperial.ac.uk/msg01302/unistd-llvm-2.9-jit.patch
patch -p1 < unistd-llvm-2.9-jit.patch 



And now we can build:

./configure --enable-optimized --enable-assertions
make
cd $HOME



The build might produce some warnings but they can all be safely ignored.

Building Simple Theorem Prover

Simple Theorem Prover (STP) was the source of a couple of problems, so rather than following the Getting Started page, take this approach (the ulimit command raises the shell's stack size limit, which STP needs to avoid crashing on complex queries):

git clone https://github.com/stp/stp.git
cd stp
mkdir build && cd build
cmake -G 'Unix Makefiles' $HOME/stp
make
sudo make install
sudo ldconfig
ulimit -s unlimited
cd $HOME



KLEE-uclibc

Our last dependency is klee-uclibc. To get that set up:

git clone --depth 1 --branch klee_0_9_29 https://github.com/klee/klee-uclibc.git
cd klee-uclibc/
./configure --with-llvm-config $HOME/llvm-2.9/Release+Asserts/bin/llvm-config --make-llvm-lib
make -j`nproc`
cd $HOME



Building KLEE

With all of our dependencies out of the way, we can build KLEE:

git clone https://github.com/klee/klee.git
cd klee
./configure --enable-posix-runtime --with-stp=/usr/local --with-llvm=$HOME/llvm-2.9/ --with-uclibc=$HOME/klee-uclibc/
make ENABLE_OPTIMIZED=1
make check
make unittests
sudo make install
cd $HOME



Testing with an example

Just to confirm everything is working, you can run through Tutorial 1:
cd $HOME/klee/examples/get_sign
llvm-gcc -I ../../include --emit-llvm -c -g get_sign.c
klee get_sign.o
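
If everything is working, KLEE should report that it explored all three paths through get_sign() (negative, zero, and positive) and generated a test case for each. The test cases land in a klee-out-0 output directory, with klee-last symlinked to the most recent run. As a quick sanity check, you can inspect a generated input with the ktest-tool utility that builds alongside KLEE:

ls klee-last/
ktest-tool klee-last/test000001.ktest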



You're ready to go! Good luck!

Have a different set up? Let us know in the comments below!

Tuesday, July 8, 2014

Writing Slack Space on Windows

By Diego Urquiza.

I’m a Foundstone intern in the NYC office, and for a project I decided to write a tool that wipes file slack space. In this post I’ll introduce the methods I took in writing the tool, then provide the tool itself. Hope you enjoy it!

About

File slack space is the unused space at the end of a file’s last cluster. File systems organize file data through an allocation unit called a cluster. Clusters are composed of sequential sectors on a disk, each containing a set number of bytes. A disk with 512-byte sectors and 8 sectors per cluster would have 4-kilobyte (4,096-byte) clusters. Since the file system allocates files in whole clusters, there ends up being unused space at the end of a file’s last cluster. From a security standpoint, that unused space can contain data from a previously deleted file, which can hold valuable information.
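
To make that arithmetic concrete, here is a minimal standalone sketch (illustrative only, not part of the tool) of how slack space falls out of the cluster geometry, using the 512-byte-sector, 8-sectors-per-cluster example above:

#include <cstdint>
#include <iostream>

int main() {
    const uint64_t bytesPerSector = 512;     // typically queried via GetDiskFreeSpace()
    const uint64_t sectorsPerCluster = 8;    // typically queried via GetDiskFreeSpace()
    const uint64_t clusterSize = bytesPerSector * sectorsPerCluster;  // 4096 bytes

    const uint64_t fileSize = 5000;          // example: a 5,000-byte file
    const uint64_t tail = fileSize % clusterSize;  // bytes used in the last cluster
    // A file ending exactly on a cluster boundary has no slack space
    const uint64_t slack = (tail == 0) ? 0 : clusterSize - tail;

    std::cout << "Slack space: " << slack << " bytes" << std::endl;  // prints 3192
    return 0;
}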

When a user chooses to delete a file, the file system only deletes the pointer to the location of the file data on disk. The operating system then treats that space as available, so when a new file is created, the file system may write over the old data that the user thought he or she deleted. My tool aims to overwrite file slack space multiple times with meaningless data so that any prior data cannot be retrieved.

Download

Download my tool here:

Approaching The Problem

When I began this project I saw two possible approaches:

  1. Find the location of every file’s last cluster, move the pointer to the end of the file data, and write zeros until the end of the cluster
  2. Resize the file, read/write the data, then trim it back down


Moving the File Pointer

This seemed like a feasible approach since the writes are confined to space no file is using. However, after tackling it for about a week, I found out everything can and will go wrong. Depending on what type of file system is in use, each disk’s data is organized differently. Moreover, if the file data is too small it goes into the Master File Table (in the case of the NTFS file system), and if the data is larger than a cluster, it may be organized non-contiguously on the logical disk. The Windows API functions can become frustrating to use since you need to map the file’s virtual clusters to logical clusters while finding the volume offset. Writing meaningless data to the disk can become tragic if the byte offset from the beginning of the disk is wrong (the next cluster over can be a system file).

Here’s a code snip for this approach:

/********** Find the last cluster ***********************************/
DWORD error = ERROR_MORE_DATA;

// Ask the file system for the file's VCN-to-LCN extent mappings
returns = DeviceIoControl(hfile.hanFile,
                          FSCTL_GET_RETRIEVAL_POINTERS,
                          &startVcn,
                          sizeof(startVcn),
                          &output,
                          sizeof(output),
                          &bytesReturned,
                          NULL);

DWORD lastExtentN = retrievalBuffer->ExtentCount - 1;
LONGLONG lastExtent = retrievalBuffer->Extents[lastExtentN].Lcn.QuadPart;
LONGLONG lengthOfExtent = retrievalBuffer->Extents[lastExtentN].NextVcn.QuadPart -
                          retrievalBuffer->Extents[lastExtentN - 1].NextVcn.QuadPart;

while (error == ERROR_MORE_DATA) {

    error = GetLastError();

    switch (error) {

    case ERROR_HANDLE_EOF:
        // Files of roughly 0-1KB are resident in the MFT and return EOF
        cout << "ERROR_HANDLE_EOF" << endl;
        returns = true;
        break;

    case ERROR_MORE_DATA:
        cout << "ERROR_MORE_DATA" << endl;
        startVcn.StartingVcn = retrievalBuffer->Extents[0].NextVcn;
        // falls through and prints the extent info retrieved so far

    case NO_ERROR:
        cout << "NO_ERROR, here is some info: " << endl;
        cout << "This is the lcnextent : " << retrievalBuffer->Extents[lastExtentN].Lcn.QuadPart << endl;
        cout << "This is the nextvnc: " << retrievalBuffer->Extents[lastExtentN].NextVcn.QuadPart << endl;
        cout << "This is the Extent count: " << retrievalBuffer->ExtentCount << endl;
        cout << "This is the Starting Vnc: " << retrievalBuffer->StartingVcn.QuadPart << endl;
        cout << "The length of the cluster is: " << lengthOfExtent << endl;
        cout << "The last cluster is: " << lastExtent + lengthOfExtent - 1 << endl << endl << endl;

        returns = true;
        break;

    default:
        cout << "Error in the code or input error" << endl;
        break;
    }
}






Resizing Tricks

The second approach was to resize the file and then trim it back down. Resizing the file was the right (safe) direction to go. You can easily iterate through all the files on a volume and obtain a file handle for each one. Then, with each file handle, you can calculate the file’s slack space based on the system information (bytes per sector and sectors per cluster). Finally, move the file pointer to the beginning of the slack space, save the pointer location, write zeros to the end of the cluster, trim the file back down to the saved pointer location, and repeat. It is important to write meaningless data multiple times because even after one overwrite, the old data may still be retrievable. This method unfortunately cannot be used on files that are in use.

Here’s a code snip from this approach:

/*********** Overwrite the slack space with alternating 0x00/0x01 bytes (4 passes) **********/
for (int a = 2; a < 6; a++) {

    // Alternate the fill byte between 0 and 1 on each pass
    char fill = (char)(a % 2);

    char * wfileBuff = new char[info.slackSpace];
    memset(wfileBuff, fill, info.slackSpace);

    // Seek to the end of the file's real data (the start of the slack space),
    // passing the 64-bit offset as low/high parts so large files are handled
    LARGE_INTEGER pos = info.fileSize;
    returnz = SetFilePointer(info.hanFile, pos.LowPart, &pos.HighPart, FILE_BEGIN);
    if (returnz == INVALID_SET_FILE_POINTER && GetLastError() != NO_ERROR) {
        cout << "Error with SetFilePointer" << endl;
        delete[] wfileBuff;
        return 0;
    }

    /**** Lock file, write data, unlock file *******/
    if (LockFile(info.hanFile, returnz, 0, info.slackSpace, 0) == 0) {
        cout << "There is an error with LockFile" << endl;
        delete[] wfileBuff;
        return 0;
    }

    returnz = WriteFile(info.hanFile, wfileBuff, info.slackSpace, &dwBytesWritten, NULL);
    if (returnz == 0) {
        cout << "There is an error with WriteFile" << endl;
        cout << "Error: " << GetLastError() << endl;
        delete[] wfileBuff;
        return 0;
    }
    UnlockFile(info.hanFile, returnz, 0, info.slackSpace, 0);
    /***********************************************/

    // system("pause"); // debug: pause between passes

    // Trim the extra written data back off of the file
    if (!SetEndOfFile(info.tempHan)) {
        cout << "Error setting the end of the file" << endl;
        delete[] wfileBuff;
        return 0;
    }

    delete[] wfileBuff;
}


Even though the second approach has its drawbacks, it is safer and works across different file systems. The focus for the next version will be optimizing speed and adding extra features, such as finding a file’s offset on the disk (useful for locating bad sectors) and displaying volume and file information such as available size. Working on this project was an interesting experience that has helped me grow from a computer forensics perspective, and I can’t wait to see what I can do next.

Tuesday, June 24, 2014

Approaches to Vulnerability Disclosure

By Brad Antoniewicz.



The excitement of finding a vulnerability in a piece of commercial software can quickly shift to fear and regret when you disclose it to the vendor and find yourself in a conversation with a lawyer questioning your intentions. This is an unfortunate reality in our line of work, but you can take actions to protect your butt. In this post, we’ll take a look at how vulnerability disclosure is handled in standards, by bug hunters, and by large organizations, so that you can figure out how to make the best decision for you.

Disclosure Standpoints

While it’s debatable, I think hacking, or more specifically vulnerability discovery, started as a way to better the overall community – e.g. we can make the world a better, more secure place by finding and fixing vulnerabilities within the software we use. Telling software maintainers about vulnerabilities we find in their products falls right in line with this idea. However, there is also something else to consider: recognition and sharing. If you spend weeks finding an awesome vulnerability, you should be publicly recognized for that effort; moreover, others should know about your vulnerability so they can learn from it.

Unfortunately, vendors often lack the same altruistic outlook. From a vendor’s perspective, a publicly disclosed vulnerability highlights a flaw in their product, which may negatively impact its customer base. Some vendors even interpret vulnerability discovery as a direct attack against their product and even their company. I’ve personally had lawyers ask me “Why are you hacking our company?” when I disclosed a vulnerability in their offline desktop application.

As time progressed, vulnerability discovery shifted from a hobby and “betterment” activity to a profitable business. There are plenty of organizations out there selling exploits for undisclosed vulnerabilities, plus a seemingly even greater number of criminal or state-sponsored organizations leveraging undisclosed vulnerabilities for corporate espionage and nation-state attacks. This shift has turned computer hacking from a “hippy” activity into serious business.

The emergence of bug bounty programs has really helped steer bug hunters away from criminal outlets by offering monetary rewards and public recognition. It has also demystified how disclosure is handled. However, not all vendors offer a bug bounty program, and many times lawyers may not even be aware of the bug bounty programs available within their own organization, which could put you in a sticky situation if you take the wrong approach to disclosure.

General Approaches

In general, there are three categories of disclosure:

  • Full disclosure – Full details are released publicly as soon as possible, often without vendor involvement
  • Coordinated disclosure – Researcher and vendor work together so that the bug is fixed before the vulnerability is disclosed
  • Private or non-disclosure – The vulnerability is released to a small group of people (not the vendor) or kept private


These categories broadly classify disclosure approaches, but many actual disclosure policies are unique in that they set time limitations on vendor response and the like.

Established Disclosure Standards

To give better perspective, let's look at some existing standards that help guide you in the right direction.

  • Internet Engineering Task Force (IETF) – Responsible Vulnerability Disclosure Process - The Responsible Vulnerability Disclosure Process established by this IETF draft is one of the first efforts to create a process that establishes roles for all parties involved. This process accurately defines the appropriate roles and steps of a disclosure; however, it fails to address publication by the researcher if the vendor fails to respond or causes unreasonable delays. At most, the process states that the vendor must provide specific reasons for not addressing a vulnerability within 30 days of initial notification.
  • Organization for Internet Safety (OIS) Guidelines for Security Vulnerability Reporting and Response - The OIS guidelines provide further clarification of the disclosure process, offering more detail and establishing terminology for common elements of a disclosure, such as the initial vulnerability report (Vulnerability Summary Report), request for confirmation (Request for Confirmation Receipt), and status request (Request for Status). As with the Responsible Vulnerability Disclosure Process, the OIS Guidelines do not define a hard time frame after which the researcher may publicize details of the vulnerability. If the process fails, the OIS Guidelines define a “Conflict Resolution” step which ultimately results in the ability for parties to exit the process; however, no disclosure option is provided. The OIS also addresses the scenario where an unrelated third party discloses the same vulnerability – at that point the researcher may disclose without the need for a vendor fix.
  • Microsoft Coordinated Vulnerability Disclosure (CVD) - Microsoft’s Coordinated Vulnerability Disclosure is similar to responsible disclosure in that its aim is to have both the vendor and the researcher (finder) work together and disclose information about the vulnerability only after a resolution is reached. However, CVD refrains from defining any specific time frames and only permits public disclosure after a vendor resolution or after evidence of exploitation in the wild is identified.


Coordinator Policies

Coordinators act on behalf of a researcher to disclose vulnerabilities to vendors. They provide a level of protection to the researcher and also take on the role of finding an appropriate vendor contact. While a coordinator’s goal is to notify the vendor, they also satisfy the researcher’s aim of sharing the vulnerability with the community. This section gives an overview of coordinator policies.

  • Computer Emergency Response Team Coordination Center (CERT/CC) Vulnerability Disclosure Policy - The CERT/CC vulnerability disclosure policy sets a firm 45-day time frame from initial report to public disclosure. This occurs regardless of whether a patch or workaround has been released by the vendor. Exceptions to this policy do exist for critical issues in core components of technology that require a large effort to fix, such as vulnerabilities in standards or in core components of an operating system.
  • Zero Day Initiative (ZDI) Disclosure Policy - ZDI is a coordinator that offers monetary rewards for vulnerabilities. It uses the submitted vulnerabilities to generate signatures so that its security products can offer clients early detection and prevention. After making a reasonable effort, ZDI may disclose vulnerabilities within 15 days of initial contact if the vendor does not respond.


Researcher Policies

Security companies commonly support vulnerability research and make their policies publicly available. This section provides an overview of a handful:

  • Rapid7 Vulnerability Disclosure Policy - Rapid7 attempts to contact the vendor via telephone and email, then after 15 days, regardless of response, posts its findings to CERT/CC. This combination gives the vendor a potential 60 days before public disclosure, because it is CERT/CC’s policy to wait 45 days.
  • VUPEN Policy - VUPEN is a security research company that adheres to a “commercial responsible disclosure policy”, meaning any vendor who is under contract with VUPEN will be notified of vulnerabilities, however all other vulnerabilities are mostly kept private to fund the organization’s exploitation and intelligence services.
  • Trustwave/SpiderLabs Disclosure Policy - Trustwave makes a best effort approach to contacting the vendor then ultimately puts the decision of public disclosure in its management’s hands if the vendor is unresponsive.

Summary of Approaches

The following summarizes the approaches mentioned above, policy by policy.

Responsible Vulnerability Disclosure Process
  • Notification emails: security@, security-alert@, support@, secalert@, and other public info such as the domain registrar, etc.
  • Receipt time frame: 7 days
  • Status update time frame: Every 7 days, or as otherwise agreed
  • Verification/resolution time frame: Vendors make a best effort to address within 30 days; they can request up to a 30-day additional grace period, and extensions without defined limits
  • Disclosure time frame: After resolution by the vendor

OIS Guidelines for Security Vulnerability Reporting and Response
  • Notification emails: security@, secure@, security-alert@, secalert@ (alternates: abuse@, postmaster@, sales@, info@, support@), and other public info such as the domain registrar, etc.
  • Receipt time frame: 7 days, then the finder can send a request for receipt; after three more days, go to conflict resolution
  • Status update time frame: Every 7 days, or as otherwise agreed – the finder can send a request for status if the vendor does not comply; after three more days, go to conflict resolution
  • Verification/resolution time frame: 30-day suggestion from vendor receipt, although it should be defined on a case-by-case basis
  • Disclosure time frame: After resolution by the vendor

Microsoft Coordinated Vulnerability Disclosure
  • Notification emails: security@, secure@, security-alert@, secalert@, support@, psirt@, info@, sales@, and search engine results, etc.
  • Receipt time frame: Not defined
  • Status update time frame: Not defined
  • Verification/resolution time frame: Not defined
  • Disclosure time frame: After resolution by the vendor

CERT/CC Vulnerability Disclosure Process
  • Notification emails: Not published
  • Receipt time frame: Not defined
  • Status update time frame: Not defined
  • Verification/resolution time frame: Not defined
  • Disclosure time frame: 45 days from initial notification

ZDI Disclosure Policy
  • Notification emails: security@, support@, info@, secure@
  • Receipt time frame: 5 days, then telephone contact; 5 days for a telephone response, then an intermediary
  • Status update time frame: Not defined
  • Verification/resolution time frame: 6 months
  • Disclosure time frame: 15 days if no response is provided after initial notification; up to 6 months if a notification response is provided

Rapid7
  • Notification emails: Not defined
  • Receipt time frame: 15 days after phone/email
  • Status update time frame: Not defined
  • Verification/resolution time frame: Not defined
  • Disclosure time frame: 15 days, then disclosure to CERT/CC

VUPEN
  • Notification emails: Not defined
  • Receipt time frame: Notification only to customers under contract
  • Status update time frame: Not defined
  • Verification/resolution time frame: Not defined
  • Disclosure time frame: Disclosure only to customers

Trustwave SpiderLabs
  • Notification emails: Not defined
  • Receipt time frame: 5 days
  • Status update time frame: 5 days
  • Verification/resolution time frame: Not defined
  • Disclosure time frame: If the vendor is unresponsive for more than 30 days after initial contact, potential disclosure is decided by Trustwave management

What to do?

Consider all of the above approaches, and let the vendor know your policy when you disclose so they are aware of it. At the end of the day, it's always good to be flexible and as accommodating as possible to the vendor. However, also be sure the effort is equal: they should be responding in a reasonable time and making progress toward addressing the issue.



How do you handle disclosure? Let us know in the comments below!



Tuesday, June 17, 2014

Privilege escalation with AppScan

By Kunal Garg.

Web application vulnerability scanners are a necessary evil when it comes to achieving a rough baseline or some minimum level of security. While they should never be used as the only testament to an application's security, they do provide a level of assurance above no security testing at all. For the security professional, they serve as another tool in the toolbox. All web application scanners are different, and some require finer tuning than others. A common question with IBM's AppScan is, "How do you configure it to test only for privilege escalation issues?" In this post, we'll walk you through the steps!

Privilege escalation testing comes in handy during authorization testing, when you're looking to tell whether one user can access data or perform actions outside of their role.

Prerequisite

Your first step is to run a post-authentication scan with a higher-privileged user. In this example, we'll use "FSTESTADMIN". Ideally you'll use a manual crawl so that the maximum number of URLs is covered.

Configuration

Once the post-authentication scan is complete, configure AppScan as follows:

  1. Open a new scan and go to "Scan Configuration"
  2. Go to "Login Management" and record the login with the lower-privileged user (say, "FSTESTUSER")
  3. Go to "Test", then "Privilege Escalation", and browse to the scan file created previously (the scan file for "FSTESTADMIN")



  4. Go to "Test Policy", select all tests (CTRL+A), and uncheck them
  5. In the find section, type "escalation" and select all of the privilege escalation checks



Once all of the above settings are complete, run the scan; AppScan will only run tests for privilege escalation.

This usually creates lots of false positives, as AppScan requests the URLs found in the higher-privileged scan using the authentication credentials of the lower-privileged user. Any URLs/pages which are common to both users will be reported as an issue (a false positive in this case).

Tuesday, June 10, 2014

Dojo Toolkit and Risks with Third Party Libraries

By Deepak Choudhary.

Third-party libraries can become critical components of in-house developed applications; while the benefits of using them are huge, there are also some risks to consider. In this blog post we'll look at a common third-party component of many web applications, the Dojo Toolkit. After noticing it was included during a recent web application penetration test, it became clear that the version incorporated within the application was vulnerable, and it ultimately exposed the entire application to attack.

Dojo Toolkit

If you haven't encountered Dojo before, just know it is a JavaScript/AJAX library used to design cross-platform web and mobile applications. The framework provides various "widgets" that can be used to support a variety of browsers, everything from Safari to Chrome on iPhone to BlackBerry.

Documented Vulnerabilities

Dojo has reported some serious security issues in the past, such as XSS, DOM-based XSS, and URL redirection, so it's important to stay up to date with the latest version if you leverage it within your application.

Vulnerable versions: Dojo 0.4 through Dojo 1.4
Latest version: Dojo 1.9.3
Reference: http://dojotoolkit.org/, http://dojotoolkit.org/features/mobile
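
If you're not sure which Dojo version an application bundles, standard builds expose it at runtime through the global dojo object; a quick, illustrative check from the browser's JavaScript console (assuming the page has loaded Dojo):

console.log(dojo.version.toString());  // prints something like "1.4.0 (...)" - 0.4 through 1.4 is the vulnerable range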

Files with known vulnerabilities

  • dojo/resources/iframe_history.html
  • dojox/av/FLAudio.js
  • dojox/av/FLVideo.js
  • dojox/av/resources/audio.swf
  • dojox/av/resources/video.swf
  • util/buildscripts/jslib/build.js
  • util/buildscripts/jslib/buildUtil.js
  • util/doh/runner.html
  • /dijit/tests/form/test_Button.html


Prior attack strings

  • http://WebApp/dojo/iframe_history.html?location=http://www.google.com
  • http://WebApp/dojo/iframe_history.html?location=javascript:alert%20%289999%2
  • http://WebApp/util/doh/runner.html?dojoUrl='/>foo</script><'"<script>alert(/xss/)</script>
  • http://WebApp/util/doh/runner.html?testUrL='/>foo</script><'"<script>alert(/xss/)</script>
  • http://WebApp/dijit/tests/form/test_Button.html?theme="/><script>alert(/xss/)</script>
  • dojox/av/FLAudio.js (allowScriptAccess:"always")
  • dojox/av/FLVideo.js (allowScriptAccess:"always")


If you use Dojo, make sure you have an updated version installed or remove these files (if not needed) from the application's directories.

Tuesday, June 3, 2014

Debugging Android Applications

By Naveen Rudrappa.

Using a debugger to manipulate application variables at runtime can be a powerful technique to employ while penetration testing Android applications. Android applications can be unpacked, modified, re-assembled, and converted to gain access to the underlying application code; however, understanding which variables are important and should be modified is a whole other story that can be laborious and time-consuming. In this blog post we'll highlight the benefits of runtime debugging and give you a simple example to get you going!

Debugging is a technique where a hook is attached to a particular piece of application code. Execution pauses once that code is reached, giving us the ability to analyze local variables, dump class values, modify values, and generally interact with the program state. Then, when we're ready, we can resume execution.

Required Tools

If you have done any work with Android applications, you shouldn't need any new tools:

  1. The application's installation package (.apk)
  2. Java SDK
  3. Android SDK
Reverse engineering plays a prominent role when penetration testing Android applications; here it helps us in two ways: making the application debuggable, and identifying the functions to break on.

Debuggable

The AndroidManifest.xml contained within the application's .apk has an android:debuggable attribute which controls whether the application can be debugged. So we'll need to use APK Manager to decompress the installation package and add android:debuggable="true".
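
After decoding the package, the change is a single attribute on the <application> element of AndroidManifest.xml. A minimal sketch of the edited manifest (the package name and surrounding content are hypothetical):

<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.example.target">
    <!-- android:debuggable="true" lets us attach a debugger to the running app -->
    <application android:debuggable="true">
        ...
    </application>
</manifest>

The .apk then needs to be rebuilt and re-signed before it will install on the device.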



Attaching

We'll need to attach the debugger to our application in order to debug it. Using adb jdwp, we can list the process IDs of all running applications that accept a debugger. As long as the target application was the last to be loaded, we can reliably guess that the last process ID on the list is ours.
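
An illustrative run (the PIDs below are made up; a real device prints its own list, one process ID per line):

 adb jdwp
 412
 467
 498

Here 498 is the last entry, so we'll assume it's our target application.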



Next we'll need to forward our debugging session to a port we can connect to with our debugger:

 adb forward tcp:8000 jdwp:498 


Finally we can attach the debugger with:

 jdb -connect com.sun.jdi.SocketAttach:hostname=localhost,port=8000 


With the debugger attached, we can set breakpoints at the required functions and analyze the application's behavior at runtime. To identify function names, you can decompile the application's dex code and use that to guide your debugging session.

Some of the useful JDB commands for debugging:

  1. stop in [function name] - Set a breakpoint
  2. next - Executes one line
  3. step - Step into a function
  4. step up - Step out of a function
  5. print obj - Prints a class name
  6. dump obj - Dumps a class
  7. print [variable name] - Print the value of a variable
  8. set [variable name] = [value] - Change the value of a variable
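
Pulling these together, a hypothetical session against a made-up PIN check might look like this (the class, method, and variable names are invented for illustration):

 > stop in com.example.pinapp.PinActivity.checkPin
 > cont
 (execution pauses when the application calls checkPin)
 main[1] print enteredPin
 main[1] set enteredPin = 1234
 main[1] cont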

An Exercise for You!



This application is a pretty simple one. Upon entering the correct PIN, 1234, the application responds with the message "correct PIN entered". Upon entering any value other than 1234, the application responds with "Invalid PIN". Bypass this logic via debugging so that any invalid PIN also yields "correct PIN entered". For the solution, refer to the image below; it summarizes all the commands needed to complete the challenge.



Tuesday, May 27, 2014

Acquiring Linux Memory from a Server Far Far Away

By Dan Caban.

In the past it was possible to acquire memory from Linux systems by directly imaging (with dd) pseudo-device files such as /dev/mem and /dev/kmem. In later kernels, this access was restricted and/or removed. To provide investigators and system administrators unrestricted access, loadable kernel modules were developed and made available in projects such as fmem and LiME (Linux Memory Extractor).

In this blog post I will introduce you to a scenario where LiME is used to acquire memory from a CentOS 6.5 x64 system that is physically hosted in another continent.

LiME is a Loadable Kernel Module (LKM). LKMs are typically designed to extend kernel functionality and can be inserted by a user with root privileges. This sounds a little scary, and it does introduce tangible risks if done wrong. But on the positive side:

  • the compiled LiME LKM is rather small;
  • the process does not require a restart;
  • the LKM can be added/removed quickly;
  • the resulting memory dump can be transferred over the network without writing to the local disk; and
  • the memory dump is compatible with Volatility


Getting LiME

Since LiME is distributed as source without any binaries, you need to compile it yourself. You will find documentation on the internet suggesting that you jump right in and compile LiME on your target system. I recommend you first see if a pre-compiled LKM exists, or alternatively compile and test in a virtual machine first.

In either case, you first need to determine the kernel running on your target system, as the LKM you use must have been compiled on the exact same operating system, kernel version, and architecture. Here we determine our target is running kernel 2.6.32-431.5.1.el6.x86_64.

[centos@targetsystem ~]$ uname -a
Linux localhost.localdomain 2.6.32-431.5.1.el6.x86_64 #1 SMP Wed Feb 12 00:41:43 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux



One great, reputable resource for pre-compiled LiME LKMs is the Linux Forensics Tools Repository at cert.org. They provide an RPM repository of forensic tools for Red Hat Enterprise Linux, CentOS, and Fedora.

To check whether a module for your specific kernel is available, visit Cert and find the "repoview" for your target operating system.



Browse to “applications/forensics tools” and view the documentation on “lime-kernel-objects”.

As of today's date, the following kernels have pre-compiled LiME LKMs for CentOS 6 / RHEL 6:

2.6.32-71
2.6.32-71.14.1
2.6.32-71.18.1
2.6.32-71.24.1
2.6.32-71.29.1
2.6.32-71.7.1
2.6.32-131.0.15
2.6.32-220
2.6.32-279
2.6.32-358.0.1
2.6.32-358.11.1
2.6.32-358.14.1
2.6.32-358.18.1
2.6.32-358.2.1
2.6.32-358.23.2
2.6.32-358.6.1
2.6.32-431
2.6.32-431.1.2.0.1
2.6.32-431.3.1



Oh no! The kind folks at Cert are not completely up to date, and my target system is running a newer kernel. That means I have to do the heavy lifting myself.

I installed CentOS 6.5 x64 in a virtual machine and updated until I had the latest kernel matching 2.6.32-431.5.1.el6.x86_64.

[root@vmtest ~]$ yum update
[root@vmtest ~]$ yum upgrade



Since this was a kernel upgrade, I gave my virtual machine a reboot.

[root@vmtest ~]$ shutdown -r now



We now have the matching kernel, but we still need the associated kernel headers and source as well as the tools needed for compiling.

[root@vmtest ~]$ yum install gcc gcc-c++ kernel-headers kernel-source




Now we are finally ready to download and compile our LiME LKM!

[root@vmtest ~]# mkdir lime; cd lime
[root@vmtest lime]# wget https://lime-forensics.googlecode.com/files/lime-forensics-1.1-r17.tar.gz
[root@vmtest lime]# tar -xzvf lime-forensics-1.1-r17.tar.gz  
[root@vmtest lime]# cd src 
[root@vmtest src]# make
….
make -C /lib/modules/2.6.32-431.5.1.el6.x86_64/build M=/root/lime/src modules
…
[root@vmtest src]# ls lime*.ko
lime-2.6.32-431.5.1.el6.x86_64.ko



Conducting a test run

We can now test out our newly built LiME LKM on our virtual machine by loading the kernel module and dumping memory to a local file.

We are opting to create our memory image on the local file system, so I provide the argument path=/root/mem.img. LiME supports raw, padded, and "lime" formats; since Volatility supports the lime format, I have provided the argument format=lime.

[root@vmtest ~]# insmod lime-2.6.32-431.5.1.el6.x86_64.ko "path=/root/mem.img format=lime"



I have validated that the memory image matches the amount of RAM I allocated to the virtual machine (1GB) and that it contains valid content.

[root@vmtest ~]# ls -lah /root/mem.img 
-r--r--r--. 1 root root 1.0G Mar  9 08:11 /root/mem.img
[root@vmtest ~]# strings /root/mem.img | head -n 3
EMiL
root (hd0,0)
kernel /vmlinuz-2.6.32-431.5.1.el6.x86_64 ro root=/dev/mapper/vg_livecd-lv_root rd_NO_LUKS 



I can now remove the kernel module with one simple command:

[root@vmtest ~]# rmmod lime



Acquiring memory over the internet

Now we return to our scenario where we are trying to capture memory from a CentOS server on another continent. I opted to upload the LiME LKM to a server I control and then download it to the target via HTTP.

[root@targetserver ~]# wget http://my.server-on-the-internet.com/lime-2.6.32-431.5.1.el6.x86_64.ko




The great thing about LiME is that it is not limited to writing to a local disk or physical file. In our test run we supplied an output path with the argument path=/root/mem.img. This time we will instead create a TCP service using the argument path=tcp:4444.

[root@targetserver ~]# insmod lime-2.6.32-431.5.1.el6.x86_64.ko "path=tcp:4444 format=lime"




If I were situated within our client's network, or port 4444 were open over the internet, I could simply use netcat to connect and transfer the memory image to my investigative laptop.

[dan@investigativelaptop ~]# nc target.server.com 4444 > /home/dan/evidence/evidence.lime.img




Since in this scenario our server is on the internet and a restrictive firewall is inline, we are forced to get creative.

Remember how I downloaded the LiME LKM to the target server via HTTP (port 80)? That means the server can make outbound TCP connections via that port.

I set up a netcat listener on my investigative laptop here in our lab and opened it up to the internet. I did this by configuring my firewall to pass traffic on this port to my local LAN address; you can achieve the same results with most routers designed for home/small office use via port forwarding.

Step 1: setup netcat server at our lab listening on port 80.

[dan@investigativelaptop ~]# nc -l 80 > /home/dan/evidence/evidence.lime.img



Step 2: Run LiME LKM and configure it to wait for TCP connections on port 4444.

[root@targetserver ~]# insmod lime-2.6.32-431.5.1.el6.x86_64.ko "path=tcp:4444 format=lime"



On the target server I can now use a local netcat connection that is piped to a remote connection in our lab via port 80 (where 60.70.80.90 is our imaginary lab IP address.)

Step 3: In another shell initiate the netcat chain to transfer the memory image to my investigative laptop at our lab.

[root@targetserver ~]# nc localhost 4444 | nc 60.70.80.90 80




Voila! I now have a memory image on my investigative laptop and can start my analysis.

Below is a basic visualization of the process:
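
 Step 1 (lab):    nc -l 80 > /home/dan/evidence/evidence.lime.img           (listener on port 80)
 Step 2 (target): insmod lime-2.6.32-431.5.1.el6.x86_64.ko "path=tcp:4444 format=lime"
 Step 3 (target): nc localhost 4444 | nc 60.70.80.90 80   ----> memory streams out over port 80 to the lab listener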



Memory Analysis with Volatility

Volatility ships with many prebuilt profiles for parsing memory dumps, but they focus exclusively on the Windows operating system. To perform memory analysis on a sample collected from Linux, we first need to create a profile that matches the exact operating system, kernel version, and architecture (surprise, surprise!). So let's head back to our virtual machine, where we will collect the information required to create a Linux profile:

  • the debug symbols (System.map*);
    • Requirements: access to the test virtual machine system running on the same operating system, kernel version and architecture
  • and information about the kernel’s data structures (vtypes).
    • Requirements: Volatility source and the necessary tools to compile vtypes running on the same operating system, kernel version and architecture.


First, let’s create a folder to collect the required files.

cd ~
mkdir -p volatility-profile/boot/
mkdir -p volatility-profile/volatility/tools/linux/



Now let’s collect the debug symbols. On a CentOS system they are located in the /boot/ directory. We will need to find the System.map* file that matches the kernel version that was active when we collected the system memory (2.6.32-431.5.1.el6.x86_64).

[root@vmtest ~]# cd /boot/
[root@vmtest boot]# ls -lah System.map*
-rw-r--r--. 1 root root 2.5M Feb 11 20:07 System.map-2.6.32-431.5.1.el6.x86_64
-rw-r--r--. 1 root root 2.5M Nov 21 22:40 System.map-2.6.32-431.el6.x86_64



Copy the appropriate System.map file to the collection folder.

[root@vmtest boot]# cp System.map-2.6.32-431.5.1.el6.x86_64 ~/volatility-profile/boot/



One of the requirements to compile the vtypes is libdwarf. While this may be easily installed on some operating systems using apt-get or yum, CentOS 6.5 requires that we borrow and compile the source from the Fedora Project. The remaining prerequisites for compiling should have been installed when we compiled LiME earlier, in the Getting LiME section.

[root@vmtest boot]# cd ~
[root@vmtest ~]# mkdir libdwarf
[root@vmtest ~]# cd libdwarf/
[root@vmtest libdwarf]# wget http://pkgs.fedoraproject.org/repo/pkgs/libdwarf/libdwarf-20140208.tar.gz/4dc74e08a82fa1d3cab6ca6b9610761e/libdwarf-20140208.tar.gz
[root@vmtest libdwarf]# tar -xzvf libdwarf-20140208.tar.gz
[root@vmtest libdwarf]# cd dwarf-20140208/
[root@vmtest dwarf-20140208]# ./configure
[root@vmtest dwarf-20140208]# make
[root@vmtest dwarf-20140208]# cd dwarfdump
[root@vmtest dwarfdump]# make install




Now we can obtain a copy of the Volatility source code and compile the vtypes.

[root@vmtest dwarfdump]# cd ~
[root@vmtest ~]# mkdir volatility
[root@vmtest ~]# cd volatility
[root@vmtest volatility]# wget https://volatility.googlecode.com/files/volatility-2.3.1.tar.gz
[root@vmtest volatility]# tar -xzvf volatility-2.3.1.tar.gz
[root@vmtest volatility]# cd volatility-2.3.1/tools/linux/
[root@vmtest linux]# make



After successfully compiling the vtypes, we will copy the resulting module.dwarf back out to the collection folder.

[root@vmtest linux]# cp module.dwarf ~/volatility-profile/volatility/tools/linux/



Now that we have collected the two requirements for creating a system profile, let’s package them into a ZIP file, as per the requirements of Volatility.

[root@vmtest linux]# cd ~/volatility-profile/
[root@vmtest volatility-profile]# zip CentOS-6.5-2.6.32-431.5.1.el6.x86_64.zip boot/System.map-2.6.32-431.5.1.el6.x86_64 volatility/tools/linux/module.dwarf 
  adding: boot/System.map-2.6.32-431.5.1.el6.x86_64 (deflated 80%)
  adding: volatility/tools/linux/module.dwarf (deflated 90%)




On my investigative laptop I could drop this ZIP file in the default volatility profile directory, but I would rather avoid losing it in the future due to upgrades/updates. I instead will create a folder to manage my custom profiles and reference it when running volatility.

[dan@investigativelaptop evidence]# mkdir -p ~/.volatility/profiles/
[dan@investigativelaptop evidence]# cp CentOS-6.5-2.6.32-431.5.1.el6.x86_64.zip ~/.volatility/profiles/



Now I can confirm Volatility recognizes the new profile by providing the plugins directory.

[dan@investigativelaptop evidence]# vol.py --plugins=/home/dan/.volatility/profiles/ --info | grep -i profile | grep -i linux
Volatility Foundation Volatility Framework 2.3.1
LinuxCentOS-6_5-2_6_32-431_5_1_el6_x86_64x64 - A Profile for Linux CentOS-6.5-2.6.32-431.5.1.el6.x86_64 x64



Now I can start running the linux_-prefixed plugins that ship with Volatility to conduct memory analysis.

[dan@investigativelaptop evidence]# vol.py --plugins=/home/dan/.volatility/profiles/ --profile=LinuxCentOS-6_5-2_6_32-431_5_1_el6_x86_64x64 linux_cpuinfo -f /home/dan/evidence/evidence.lime.img
Volatility Foundation Volatility Framework 2.3.1
Processor    Vendor           Model
------------ ---------------- -----
0            GenuineIntel     Intel(R) Xeon(R) CPU           X5560  @ 2.80GHz
1            GenuineIntel     Intel(R) Xeon(R) CPU           X5560  @ 2.80GHz
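
Other linux_ plugins follow the same pattern; for example, to list the running processes (linux_pslist ships with Volatility 2.3.1):

[dan@investigativelaptop evidence]# vol.py --plugins=/home/dan/.volatility/profiles/ --profile=LinuxCentOS-6_5-2_6_32-431_5_1_el6_x86_64x64 linux_pslist -f /home/dan/evidence/evidence.lime.img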





About the Author

Dan Caban (EnCE, CCE, ACE) works as a Principal Consultant at McAfee Foundstone Services EMEA based out of Dubai, United Arab Emirates.
