Tuesday, July 31, 2012

UnBup - McAfee BUP Extractor for Linux

By Tony Lee and Travis Rosiek.

These days, antivirus is a must-have due to the ubiquity of adware, malware, viruses, and worms—yes, even if you are running a Mac. ;) Antivirus does a good job catching the low hanging fruit and other annoyances, but have you ever wondered what happens to the files that the A/V catches? Typically, antivirus engines will deactivate the suspected virus and then store an inert (encoded) copy in a quarantine folder to prevent accidental execution. McAfee’s VirusScan will allow you to restore the binary; however, using VirusScan to restore it may not be ideal in all scenarios. This article will take you through the process of recovering the quarantined binary and the metadata surrounding it. But why would you want to do this? Potentially for the following reasons:
  1. You are a corporate A/V administrator and A/V misidentified a user’s file
  2. You are a malware analyst and would like to dissect the detected binary
  3. You are a home user and A/V grabbed a file you wanted (think netcat or a sysinternals tool) and the restore function did not work

As a bonus, at the bottom of the article, we have included a bash shell script and a (faster) Perl script to break apart McAfee BUPs from within a Linux environment. We wrote these scripts because we could not find an existing Linux BUP tool. The tool was prototyped in bash because it was quick to code and it removed as many dependencies as possible. Unfortunately, bash's bitwise exclusive OR (XOR) was too slow, so the tool was rewritten in (well-commented) Perl.

In case you are not familiar with the binary operation XOR, a truth table is provided below:

 A | B | A XOR B
---+---+--------
 0 | 0 |    0
 0 | 1 |    1
 1 | 0 |    1
 1 | 1 |    0
Note that an output of 1 is only produced for an odd number of 1s on the input.
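
A handy property of XOR is that it is its own inverse: applying the same key twice returns the original byte. This is why a single key can both encode and decode a quarantined file, as we will see shortly. A quick sanity check in bash, using the key you will meet below (0x6A) against an arbitrary byte (0x4D, ASCII 'M'):

 $ printf '%02x\n' $(( (0x4D ^ 0x6A) ^ 0x6A ))
4d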

How McAfee deactivation works

McAfee VirusScan, like other antivirus engines, will deactivate the binary and store an inert copy in a pre-defined location. McAfee appears to deactivate the binary by doing the following:
  1. Creates a metadata text file
  2. Performs a bitwise XOR on the metadata file and binary with a well-known key (0x6A)
  3. Combines the original binary and metadata text file into a single file using Microsoft’s compound document format
  4. Stores the file (with .BUP extension) in a quarantine folder defined by the Quarantine Manager Policy (default is C:\QUARANTINE\)
You can check the path of the quarantine folder by right clicking on the McAfee shield -> VirusScan Console -> Quarantine Manager Policy.



Triggering Antivirus to Create a BUP

For this demonstration, we will be using the test file for Global Threat Intelligence (GTI - formerly known as Artemis). This test file is similar to the EICAR antivirus test file, but it triggers a heuristic detection for McAfee VirusScan. You could also use John the Ripper, Cain, netcat, pwdump, or other common hack tools to trigger an A/V event.

You can read more about the test file in the "How to verify that GTI File Reputation is installed correctly and that endpoints can communicate with the GTI server" McAfee KnowledgeBase article.

A direct link to the test file:

https://kc.mcafee.com/resources/sites/MCAFEE/content/live/CORP_KNOWLEDGEBASE/53000/KB53733/en_US/ArtemisTest.zip
The password to unzip the file is: password

If the On-Access scanner does not detect the file right away, right-click it and select scan to launch an On-Demand scan. We disabled On-Access protection in order to run a hash on the binary and provide it for your convenience. SHA-1 would normally be preferred since MD5 has a chance of collisions; however, MD5 hashes are sufficient for our purposes in this demo.

 $ md5sum.exe ArtemisTest.exe
5db32a316f079fe7947100f899d8db86 *ArtemisTest.exe



Now, after re-enabling the On-Access scan, we have a detection:



Now we check the quarantine folder defined in Quarantine Manager Policy shown above and eureka! We have a .BUP file:



If this file did not trigger a detection, you may not have GTI enabled. First try enabling GTI, or use another known-safe, yet detected, binary to generate the BUP file.
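
Once you do have a BUP, you can confirm what it is before extracting anything. Since BUPs use Microsoft's compound document format, the file utility identifies them on sight (the BUP name here is just an example):

 $ file 7dc7ed19123df0.bup
7dc7ed19123df0.bup: Composite Document File V2 Document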

Extracting the BUP in Windows

To extract the BUP in Windows, I followed the helpful "How to restore a quarantined file not listed in the VSE Quarantine Manager" McAfee KnowledgeBase article.

Requirements:
  • 7-zip (Used to decompress the Microsoft compound document format)
  • Bitwise XOR binary such as: xor.exe

If you are extracting the BUP on the same computer that your McAfee antivirus is running on, make sure you disable On-Access scan or exclude the target folder from scans.

Use 7-zip to extract the file by right clicking it, selecting 7-Zip, then Extract Here.
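
If you prefer the command line, the same extraction can be done with 7-Zip's console binary (assuming 7z.exe is in your PATH, and reusing the example BUP name):

 C:\QUARANTINE>7z.exe e 7dc7ed19123df0.bup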



The results should be two files:
  • Details
  • File_0

 $ md5sum.exe Details
c0bb879bdfd5b5277fc661da602f7460 *Details

$ md5sum.exe File_0
02ab0a6723bca2e8b6b70571680210a9 *File_0



Now, use the xor.exe binary to perform a bitwise XOR against the key (0x6A) in order to obtain two new files. Feel free to use the following syntax in a command prompt:

 C:\QUARANTINE>xor.exe Details Details.txt 0X6A

C:\QUARANTINE>xor.exe File_0 Captured.exe 0X6A





If you would like to restore the original file name to the binary, just look at the metadata from Details.txt. The most useful items that we see in the metadata are the following:
  • A/V detection name - Useful for discovering more information about the detection
  • Major and minor versions of A/V engine - Can be used to troubleshoot why some hosts detect and others miss
  • Major and minor versions of A/V DATs - Can be used to troubleshoot why some hosts detect and others miss
  • When the file was captured (Creation fields) - Helps you create a timeline if detected by On-Access scan
  • Time zone of host - Useful for timeline
  • Original file name - Often reveals a good amount of information about the binary
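
A quick way to pull all of these fields out of the decoded Details.txt at once is a simple grep; this is just a convenience one-liner built from the field names shown in the snippet below:

 $ grep -E "^(DetectionName|Engine|DAT|Creation|TimeZone|OriginalName)" Details.txt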

Snippet of Details.txt:

 [Details]
DetectionName=Artemis!5DB32A316F07
DetectionType=0
EngineMajor=5400
EngineMinor=1158
DATMajor=6771
DATMinor=0
DATType=2
ProductID=12106
CreationYear=2012
CreationMonth=7
CreationDay=14
CreationHour=13
CreationMinute=25
CreationSecond=18
TimeZoneName=Eastern Daylight Time
TimeZoneOffset=240
NumberOfFiles=1
NumberOfValues=5

--snip--

[File_0]
ObjectType=5
OriginalName=C:\USERS\REDACTED\DESKTOP\ARTEMISTEST.EXE
WasAdded=0




Now that you have the original file, you can restore it, reverse it, or whatever your heart desires.

But first, let’s make sure the hash of the recovered file matches the hash taken before deactivation. For completeness, we will also provide the hash for Details.txt.

 $ md5sum.exe Captured.exe
5db32a316f079fe7947100f899d8db86 *Captured.exe   <- This matches

$ md5sum.exe Details.txt
46c09e5ba29658a69527ca32c6895c08 *Details.txt




Extracting the BUP in Linux

We just detailed the process to recover the binary from a BUP in Windows. You can perform this same process in Linux if you have 7zip and Wine (used to run the xor.exe binary). However, the goal of this tool was to automate the process, add some features, and remove the Wine dependency.

The UnBup Tool

The first thing you should know about UnBup is the usage menu:
 Usage:  UnBup.sh [option] <bup file>

  -d = details file only (no executable)
  -h = help menu
  -s = safe executable (extension is .ex)

Please report bugs to Tony.Lee@Foundstone.com and Travis_Rosiek@McAfee.com


  1. No options - Yields the Details.txt file and original binary
  2. -d option - Yields the Details.txt file only (no binary)
  3. -s option - Yields the Details.txt file and the binary, with an extension of .ex to prevent accidental execution

Demo: No Options

 UnBup.sh file.bup


Supplying UnBup with no options and just the BUP file produces the Details.txt file and the binary. Note that the MD5 hashes are the same as what was seen in the Windows section.



Demo: The -d Option

 UnBup.sh -d file.bup


The -d option is useful for those who may not want to reverse or dig into the binary—but would like a little more information around the detection.



Demo: The -s Option

 UnBup.sh -s file.bup


The -s option is not a foolproof measure to prevent execution of the binary; however, it can help prevent accidental execution. In this case, since we are extracting Windows malware in a Linux environment, this adds another layer of protection, as it is harder (if not impossible) for the malware to cross-infect a different operating system.



How it works (simplistically)

If you look at the supplied bash code below and think: “This must be a backdoor, there is no way I am going to run it on my box…”, then the screenshot below is directed at you.

The binary math in bash was the most annoying part of the process, but it was also the most important. Here is the breakdown of each step (a one-byte walkthrough follows the list):
  1. xxd to convert the binary to hex
  2. Performing the XOR
  3. Converting the decimal result back to hex
  4. Converting hex to ASCII, shown only to make the output readable (not performed in the script)
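
For example, here is what a single byte goes through, using the same commands the script uses (0x4D is an arbitrary input byte; the XOR result 0x27 happens to be an apostrophe):

 $ byte=4d                               # one hex byte, as emitted by 'xxd -c 1 -p'
 $ decimal=$((0x$byte ^ 0x6A))           # bitwise XOR against the key; result is decimal 39
 $ hex=`echo "obase=16; $decimal" | bc`  # convert decimal back to hex (27)
 $ echo -ne "\x$hex"                     # write the raw byte
'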



The Shell Script – (SLOW – You may want to use the Perl code below)

Download it here: https://raw.github.com/OpenSecurityResearch/unbup/master/UnBup.sh

In case the file download is blocked, feel free to copy and paste it from here:

#!/bin/bash
# UnBup
# Tony Lee and Travis Rosiek
# Tony.Lee-at-Foundstone.com
# Travis_Rosiek-at-McAfee.com
# Bup Extraction tool - Reverse a McAfee Quarantined Bup file with Bash
# Input:  Bup File
# Output: Details.txt file and original binary (optional)
# Note:  This does not put the file back to the original location (output is to current directory)
# Requirements - 7z (7zip), xxd (hexdumper), awk, cut, grep

##### Function Usage #####
# Prints usage statement
##########################
Usage()
{
echo "UnBup v1.0
Usage:  UnBup.sh [option] <bup file>

  -d = details file only (no executable)
  -h = help menu
  -s = safe executable (extension is .ex)

Please report bugs to Tony.Lee-at-Foundstone.com and Travis_Rosiek-at-McAfee.com"
}

# Detect the absence of command line parameters.  If the user did not specify any, print usage statement 
[[ -n "$1" ]] || { Usage; exit 0; }


##### Function XorLoop #####
# Loop through the file to perform a bitwise XOR with the key and write the binary to a file
############################
XorLoop()
{
for byte in `xxd -c 1 -p $INPUT`; do   # For loop converts binary to hex 1 byte per line
        #echo "$byte"
        decimal=`echo $((0x$byte ^ 0x6A))` # xor with 6A and convert to decimal
        #echo "decimal = $decimal"
        hex=`echo "obase=16; $decimal" | bc` # Convert decimal to hex
        #echo "hex = $hex"
        echo -ne "\x$hex" >> $OUTPUT;  # Write raw hex to output file
done
}


##### Function CreateDetails #####
# Create the Details.txt file with metadata on bup'd file
##################################
CreateDetails()
{
# Check to see if the text file exists, if not let the user know
 [[ -e "$BupName" ]] || { echo -e "\nError:  The file $BupName does not exist\n"; Usage; exit 0; }
 echo "Extracting encoded files from Bup";
 7z e $BupName > /dev/null;  # Extract the xor encoded files (Details and File_0)
 INPUT=Details;    # Set INPUT variable to the Details file to get the details and filename
 OUTPUT=Details.txt;   # Set OUTPUT variable to Details.txt filename
 echo "Creating the Details.txt file";
 XorLoop;    # Call XorLoop function with variables set
}


##### Function ExtractBinary #####
# Extracts the original binary from the bup file
##################################
ExtractBinary()
{
 Field=`grep OriginalName Details.txt | awk -F '\' '{ print NF }'`; # Find the binary name field
 OUTNAME=`grep OriginalName Details.txt | cut -d '\' -f $Field`;
 OUTPUT=`echo "${OUTNAME%?}"`;      # Get rid of trailing \r (carriage return)
 INPUT=File_0;
 echo "Extracting the binary";
 XorLoop;        # Call xor function again
}

# Parse the command line options
case $1 in
 -d) BupName=$2; CreateDetails;;      # Details.txt file only
 -h) Usage; exit 0;;       # Print usage statement
 -s) BupName=$2; CreateDetails; ExtractBinary; mv $OUTPUT `echo "${OUTPUT%?}"`;; # Safe binary
 *) BupName=$1; CreateDetails; ExtractBinary;;      # Full process of the bup
esac

rm Details File_0;      # Clean up xor'd files





Our Perl script (MUCH FASTER – You probably want to use this over the shell script)

When processing small files like the Artemis test file, bash shell scripting worked just fine. However, when processing larger executables, the XOR process was too time consuming. We searched for a simple XOR Perl script online, but did not find anything that fit what we were looking for, so we wrote our own.

xor.pl Usage:
 ./xor.pl
Simple xor script
  Usage: ./xor.pl [Input File] [Output File]

Tony.Lee@Foundstone.com
./xor.pl 7dc7ed19123df0.bup 7dc7ed19123df0.xord



Download : https://raw.github.com/OpenSecurityResearch/unbup/master/xor.pl

  #!/usr/bin/perl
# Simple xor decoder
# Written because I could not find one on the Intertubes
# Email me with problems at Tony.Lee-at-Foundstone.com

# Detection to make sure there are two arguments supplied (an input file and output file) 
if (@ARGV < 2) {
 die "Simple xor script\n  Usage: $0 <Input File> <Output File>\n\nTony.Lee-at-Foundstone.com\n";
}

# Open input file as read only to avoid accidentally modifying the file
open INPUT, "<$ARGV[0]" or die "Input file \"$ARGV[0]\" does not exist\n";

# Open the output file to write to it
open OUTPUT, ">$ARGV[1]" or die "Cannot open file \"$ARGV[1]\"";

# Loop until all bytes in the file are read
while (($n = read INPUT, $byte, 1) != 0) 
{ 
 $decode = $byte ^ 'j';  # xor byte against ASCII 'j' = Hex 0x6A = Dec 106 
 print OUTPUT $decode;  # write the decoded output to a file
}

close INPUT;
close OUTPUT;



After writing the XOR Perl script, we converted the Bash script to Perl to speed the process up.

UnBup.pl

Download : https://raw.github.com/OpenSecurityResearch/unbup/master/UnBup.pl

  #!/usr/bin/perl
# UnBup
# Tony Lee and Travis Rosiek
# Tony.Lee-at-Foundstone.com
# Travis_Rosiek-at-McAfee.com
# Bup Extraction tool - Reverse a McAfee Quarantined Bup file with Perl
# Input:  Bup File
# Output: Details.txt file and original binary (optional)
# Note:  This does not put the file back to the original location (output is to current directory)


# Detect the absence of command line parameters.  If the user did not specify any, print usage statement 
if (@ARGV == 0) { Usage(); exit(); }


##### Function Usage #####
# Prints usage statement
##########################
sub Usage
{
print "UnBup v1.0
Usage:  UnBup.pl [option] <bup file>

  -d = details file only (no executable)
  -h = help menu
  -s = safe executable (extension is .ex)

Please report bugs to Tony.Lee-at-Foundstone.com and Travis_Rosiek-at-McAfee.com\n"
}


##### Function XorLoop #####
# Loop through files to perform bitwise xor with key write binary to file
# Input arguments input filename and output filename
# example:  XorLoop(Details, Details.txt)
############################
sub XorLoop
{
 # Open input file as read only to avoid accidentally modifying the file
 open INPUT, "<$_[0]" or die "Input file \"$_[0]\" does not exist\n";

 # Open the output file to write to it
 open OUTPUT, ">$_[1]" or die "Cannot open file \"$_[1]\"";

 # Loop until all bytes in the file are read
 while (($n = read INPUT, $byte, 1) != 0) 
 { 
  $decode = $byte ^ 'j';  # xor byte against ASCII 'j' = Hex 0x6A = Dec 106 
  print OUTPUT $decode;  # write the decoded output to a file
 }

 close INPUT;
 close OUTPUT;
}

##### Function CreateDetails #####
# Create the Details.txt file with metadata on bup'd file
##################################
sub CreateDetails
{
 $BupName=$_[0];
 # Check to see if the text file exists, if not let the user know
 unless(-e "$BupName") { print "\nError:  The file \"$BupName\" does not exist\n"; Usage; exit 0; }
 print "Extracting encoded files from Bup\n";
 `7z e $BupName`;   # Extract the xor encoded files (Details and File_0)
 print "Creating the Details.txt file\n";
 XorLoop("Details", "Details.txt"); # Call XorLoop function with variables set
}

##### Function ExtractBinary #####
# Extracts the original binary from the bup file
##################################
sub ExtractBinary
{
 $Field=`grep OriginalName Details.txt | awk -F '\\' '{ print NF }'`; # Find the binary name field
 $OUTNAME=`grep OriginalName Details.txt | cut -d '\\' -f $Field`; # Find the binary name
 $INPUT="File_0";
 print "Extracting the binary\n";
 XorLoop("$INPUT", "$OUTNAME");      # Call xor function again
}



if ($ARGV[0] eq "-d"){  # Print details file only
 CreateDetails($ARGV[1]);
 `rm Details File_0`; # Clean up original files
}
elsif ($ARGV[0] eq "-h"){ # Print usage statement
  Usage();
}
elsif ($ARGV[0] eq "-s"){ # Create "safe" binary
  CreateDetails($ARGV[1]);
  ExtractBinary();
  chop($OUTNAME);   # Remove the trailing newline
  $OLD=$OUTNAME;   # Store original name in $OLD variable
  chop($OUTNAME);   # Remove the trailing \r
  chop($OUTNAME);   # Remove the final 'E'
  `mv $OLD $OUTNAME`;  # Rename the binary so the extension becomes .EX
  `rm Details File_0`;  # Clean up original files
}
else {
 CreateDetails($ARGV[0]); # Extract details file and binary
 ExtractBinary();
 `rm Details File_0`;  # Clean up original files
}




Final thoughts and coding challenge

We provided two different methods for extracting a McAfee BUP file in Linux. It may not be the most graceful solution—but it works and it did not take much time to hack up. We are also looking for options that would be useful to others; if you have ideas, please feel free to tell us what you would find useful.

Additionally, these scripts (bash and Perl) fit the bill for us—but they may not meet the needs of those without the 7zip extractor (bash and Perl) or the xxd (bash script only) utility. Our challenge to anyone who wants to geek out is the following:
  1. Reduce the dependencies (No need for 7zip or xxd)
  2. Code it in your favorite language (Python, Ruby, C, Lua… whatever you want)
  3. Be as concise and clear as possible

One hint to anyone who gets started on manually parsing the format:

 hexdump -C 7dc7ed19123df0.bup | head -n 1

00000000  d0 cf 11 e0 a1 b1 1a e1  00 00 00 00 00 00 00 00  |................|
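
Those first eight bytes (d0 cf 11 e0 a1 b1 1a e1) are the magic number for Microsoft's compound document (OLE2) format, i.e., the same container 7zip has been unpacking for us. A quick sanity check before you start parsing by hand (the file name is just an example):

 $ [ "`xxd -p -l 8 7dc7ed19123df0.bup`" = "d0cf11e0a1b11ae1" ] && echo "Compound document"
Compound document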





Feel free to post back. :) Happy hacking!

Tuesday, July 24, 2012

Proxying Android 4.0 ICS and FS Cert Installer

By Paul Ambrosini.

The first step to testing Android applications is to inspect the application’s traffic. If the application uses SSL encryption, this requires forcing the app to use an intermediate proxy that allows us to grab, inspect, and possibly modify this traffic. Before Android 4.0 (Ice Cream Sandwich or “ICS”) was released, proxying an application was painful; the emulator was a better solution than a physical phone due to SSL certificate issues. Now that ICS is out and many devices have a working build (either from the manufacturer or third-party), it has become much easier to use an actual phone to test Android applications.

While testing Android applications, it quickly becomes apparent that the OS doesn't proxy traffic easily. Since most developers don't use the emulator, and code must be specifically written for it, proxying on the emulator can pose additional challenges: namely, the application simply might not work at all, or might not work properly. The --http-proxy setting used for the emulator tends to only work for the stock browser application; other applications generally ignore this setting. The second challenge is that a rooted emulator image is needed, which is possible but requires yet more effort. It’s ironically easier to root most physical devices than it is to root the standard emulator images (let alone to produce a new pre-rooted image).

There are multiple solutions to this problem, but the best solution I've come across is using the “ProxyDroid” app directly on a rooted ICS phone. This allows a tester to easily forward all traffic from the real application through a proxy; the only problem becomes SSL certificates, since the proxy will need to use its own SSL certificate, which Android will not recognize as valid.

For reference, here’s my phone setup (today - Kernel and ROM are regularly updated):


The rooting process is out of the scope of this article, but documentation can usually be found online. The process varies wildly from phone to phone. A good place to start would be the XDA Developer Forums; most devices have a forum dedicated to them, with a General section that usually contains a rooting guide. Rooting your device is your choice; I can't help with (or be held responsible for) issues that arise from a rooted phone.

In Android, unlike iOS, there is no setting for proxying traffic. Android 4.0 (ICS) added some tweaks to the wireless settings that are (slightly) hidden behind a long press on the currently-connected Wi-Fi network and then a check box for advanced options as seen below. (The “beware of Leopard” sign was dropped early in the Beta process.)



Unfortunately, this proxy setting is just like the --http-proxy setting of the emulator, which means it is completely useless for the actual proxying of applications. This leaves testers with the best option being a rooted phone with ProxyDroid running, which will force all traffic to use the proxy. Install the application from Google Play or from the developer’s XDA Developers thread. This will not solve all cases, but most applications will happily comply.

You will also need an intercepting proxy running on a computer on the same network (or one accessible via the Internet, though that is probably a bad idea). This article uses the free version of Burp Suite running on a BackTrack 5 VM, but if you have a preferred intercepting proxy, it should work too.

Initial proxy setup

A BackTrack VM has all the needed tools in this case, so Burp was started from the BT5 VM. The VM was set to bridged mode so as to be on the same network as the phone's wireless network. The Burp proxy options were as shown here:



Take note of the IP address of the VM, as it will be needed soon. Also pay attention to Burp’s server SSL certificate options, as these will become important later.

Next, start ProxyDroid on the mobile device and allow it root privileges when asked. ProxyDroid requires root, since it uses iptables (the Linux firewall) to modify packet routing on the device. Set ProxyDroid’s Host to the Burp IP and the configured port (default 8080) and then enable the proxy.
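
Conceptually, what ProxyDroid sets up is a set of NAT rules that redirect outbound web traffic to the proxy. The actual rules it installs are more involved, but the idea is roughly the following (192.168.1.100 is a stand-in for whatever IP your Burp VM has):

 iptables -t nat -A OUTPUT -p tcp --dport 80 -j DNAT --to-destination 192.168.1.100:8080
 iptables -t nat -A OUTPUT -p tcp --dport 443 -j DNAT --to-destination 192.168.1.100:8080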



Finally, test proxying with the basic browser on the phone. Browse to something simple like http://www.google.com and the traffic should show up in Burp.



If there is no traffic showing, ensure the proxy is configured to listen on all interfaces (i.e., that “loopback only” is disabled), and that the IP/port settings in ProxyDroid are correct and the app is enabled. If these settings seem correct, verify that the phone’s Wi-Fi is set to the same network as the machine running Burp. When ProxyDroid is enabled, a cloud icon will show in the top left of the phone to let you know that it’s running.

With the phone's browser now set to proxy through Burp, let's test what happens with an SSL-encrypted connection to Google; navigate to https://www.google.com.



Pressing OK and then Continue will allow the browser to ignore the certificate warnings and load the page. Now we can see the browser’s SSL traffic, but what happens if an application attempts to access an HTTPS site?

Application Proxying

In order to test application proxying, we need an application. I've created a very simple app that makes an HTTPS connection to Foundstone’s website. The app will attempt to connect; if it succeeds, it will change the text in the app to the HTML response source. If not, the application will print a debug message to the log.

The application can be downloaded from here; installing it requires that non-market apps (Settings > Security > Unknown Sources) be enabled on your device:


The source code can be seen here (in case you don’t trust me):


To fully understand how this application works, I would suggest loading the source code in Eclipse (with the ADT Plug-in) and running the code from there. For this test, it’s helpful to be able to view the phone’s system log, either using an attached computer with the Android SDK installed, or a specialized application on the device (for which there are many options).

First, let's start ‘adb logcat’ with a filter for “FS”. A debug log tag is commonly used to find specific log messages that are sent by the application. The test app uses the tag “FSFSFSFSFSFSFSFSFS”, so filtering for FS will do.

Windows command:
adb logcat | findstr FS



Linux command:
adb logcat | grep FS



Here's what the install result looks like when using Linux:
$ adb logcat | grep FS
W/ActivityManager( 3841): No content provider found for permission revoke: file:///data/local/tmp/FS SSL App Test.apk



In another terminal, install the application. This command should be the same in either Windows or Linux, as long as adb is in the path.

$ adb install FS\ SSL\ App\ Test.apk
239 KB/s (10513 bytes in 0.042s)
 pkg:/data/local/tmp/FS SSL App Test.apk
Success



With the application installed and logcat running, let's first turn off ProxyDroid and test the application. The application should produce several log messages in the logcat window. These messages are for debug purposes to help step through testing of the application. The application will also change its text to the response received from the server, as shown below.



Now that we can see the application working, it's time to figure out how to insert our proxy in front of the application. Go back and enable ProxyDroid once again. Attempting to run the application again with ProxyDroid turned on causes an SSL error:
$ adb logcat | grep FSFS
D/FSFSFSFSFSFSFSFSFS(31187): [+] Starting application...
D/FSFSFSFSFSFSFSFSFS(31187): [+] Starting HTTPS request...
D/FSFSFSFSFSFSFSFSFS(31187): [+] Set URL...
D/FSFSFSFSFSFSFSFSFS(31187): [+] Open Connection...
D/FSFSFSFSFSFSFSFSFS(31187): [+] Get the input stream...
D/FSFSFSFSFSFSFSFSFS(31187): [-] EXCEPTION: javax.net.ssl.SSLHandshakeException: java.security.cert.CertPathValidatorException: Trust anchor for certification path not found.



The detailed error is:
EXCEPTION: javax.net.ssl.SSLHandshakeException: java.security.cert.CertPathValidatorException: Trust anchor for certification path not found.



“Trust anchor” in this instance is referring to a pre-accepted CA certificate that can be used to validate the SSL certificate. In other words, the certificate is not signed by a valid CA. This is not unexpected: Burp Suite has generated the certificate and signed it using its internal, randomly-generated CA certificate.

By configuring Firefox to use Burp as its proxy, we can easily see what the certificate chain looks like. Navigate to an SSL-protected page, select Tools -> Page Info, click the ‘Security’ icon in the top row, and then click the ‘View Certificate’ button. You should be presented with a screen like that below.



As the image shows, “PortSwigger CA” is the signing authority for the certificate for “www.foundstone.com”. The phone doesn't have this CA (not least because it’s randomly generated on first run by Burp Suite), so we need to add it, which will allow us to decrypt SSL traffic sent by our Android apps.

Still in Firefox, switch to the Details tab, make sure “PortSwigger CA” is selected in the “Certificate Hierarchy” tree, and then click “Export”. Export the file as an X.509 Certificate (DER) and set the filename to PortSwiggerCA.cer. Android only loads X.509 certificates from the SD card when they have a .CER extension.



Finally, push the .CER file to the phone’s SD card using adb push, just like with any other file:
$ adb push PortSwiggerCA.cer /sdcard/
30 KB/s (712 bytes in 0.23s)



With the certificate file saved on the phone, install it into the certificate pool by navigating to Settings -> Security -> Install from SD card. The install process will prompt you for the device lock code, as this is what Android uses to help secure the certificate store. If there is no lock code or PIN currently configured, you will be asked to create one.

Now that the CA certificate is installed on the phone, attempt to run the test application again, and observe the output in logcat.
$ adb logcat | grep FSFS
D/FSFSFSFSFSFSFSFSFS(31187): [+] Starting application...
D/FSFSFSFSFSFSFSFSFS(31187): [+] Starting HTTPS request...
D/FSFSFSFSFSFSFSFSFS(31187): [+] Set URL...
D/FSFSFSFSFSFSFSFSFS(31187): [+] Open Connection...
D/FSFSFSFSFSFSFSFSFS(31187): [+] Get the input stream...
D/FSFSFSFSFSFSFSFSFS(31187): [-] EXCEPTION: java.io.IOException: Hostname 'www.foundstone.com' was not verified



The detailed error is:
EXCEPTION: java.io.IOException: Hostname 'www.foundstone.com' was not verified



Your first thought might be to go back to Firefox and grab the “www.foundstone.com” certificate from the Details tab, in the same manner that we obtained the PortSwigger CA certificate, but that actually won't work. It appears that the default HttpsURLConnection in Android can throw an exception when using the default HostnameVerifier. Searching for this issue, I found some info here that talks about simply using a different HostnameVerifier. Depending on the application, this could cause an exception or be completely ignored; in my case, the application used the default verifier, so I would have to install a site certificate as well.

The easiest fix from the tester's perspective is to reconfigure Burp to use a fixed certificate. Go back to Burp and edit the settings for the proxy listener. In the “server SSL certificate” section, select the option “generate a CA-signed certificate with a specific hostname.” The specific hostname for this test will be “www.foundstone.com”. Be sure to click “edit” prior to making the change and “update” afterwards. Just prior to clicking “update,” Burp should look similar to the image below.



Return to Firefox and refresh the https://www.foundstone.com page; a new certificate error should appear. Follow the same process as above to export the certificate, except that this time be sure to export the “www.foundstone.com” certificate instead of the “PortSwigger CA” certificate. Remember to change the format to X.509 Certificate (DER) and to save it with a .CER extension (for example, www.foundstone.com.cer).

$ adb push www.foundstone.com.cer /sdcard/
11 KB/s (616 bytes in 0.051s)



As before, navigate to Settings -> Security -> Install from SD card and install the www.foundstone.com certificate.

Finally, double-check that ProxyDroid is still running, then run the test application again.

$ adb logcat | grep FSFS
D/FSFSFSFSFSFSFSFSFS(31187): [+] Starting application...
D/FSFSFSFSFSFSFSFSFS(31187): [+] Starting HTTPS request...
D/FSFSFSFSFSFSFSFSFS(31187): [+] Set URL...
D/FSFSFSFSFSFSFSFSFS(31187): [+] Open Connection...
D/FSFSFSFSFSFSFSFSFS(31187): [+] Get the input stream...
D/FSFSFSFSFSFSFSFSFS(31187): [+] Create a buffered reader to read the response...
D/FSFSFSFSFSFSFSFSFS(31187): [+] Read all of the return....
D/FSFSFSFSFSFSFSFSFS(31187): [+] SUCCESS
D/FSFSFSFSFSFSFSFSFS(31187): [+] SUCCESS
D/FSFSFSFSFSFSFSFSFS(31187): [+] SUCCESS




The Android application should display the HTML loaded from the page after being run successfully.



Burp will show the site being connected to by IP, as shown below.



Success!

Some notes

Where do we go from here? We were able to successfully proxy traffic for this test application, but actual applications may present other difficulties. When testing any application, some key pieces of information will be required, the most important being which URLs the application talks to. In the example case, we used “www.foundstone.com” and thus created a specific host certificate for this site. For each new application and URL, Burp will need to be reconfigured to generate a site-specific certificate for the URL in use.

One other way to deal with this proxying issue is to decompile the application and do code replacement before recompiling the application. Foundstone provides an example application, part of its Hacme series, and also some documentation on performing class replacement. Performing class replacements like this can be tedious and frustrating, however, so it should be considered only in cases where the application cannot be coerced to proxy via more usual means.

A Better Way - FS Cert Installer

After going through this a couple of times, I didn’t want to deal with installing the certificates over and over, so I wrote a small application to handle installing them. The application takes the URL, proxy IP, and proxy port, and then allows the user to install the CA or site certificate. For the “hostname was not verified” issue, Burp will still need to be changed before the certificate is installed.



Download:

Usage instructions:
  1. Install the application using the market or the apk file from github.
  2. Set the URL, proxy IP and proxy port.
  3. Install the CA certificate, which will most likely be the Burp certificate. Name it anything and enter the lock PIN or pattern used on the phone. The PIN or pattern is requested by the KeyChain activity, not the installer app.
  4. Change the certificate on Burp to generate a certificate with a specific hostname. Install the site certificate.
  5. Test your application!


One large issue I ran into making this application deals with testing the certificate chain. The test certificate chain button will run the test with or without a proxy (without, if the IP and port are blank). I set up the application this way because a user might be testing or installing certificates with ProxyDroid running, and the application should handle that just fine. The issue arises when a user tests the certificate chain after installing only the CA certificate: the application will report that the site certificate is installed and the full certificate chain is working, even though it isn't. This is due to the fact that a proxied connection (URL.openConnection(proxy), which returns an HttpsURLConnection) does not trigger the same hostname verifier IOException as it would without a proxy set. Unfortunately, there isn’t a fix for this issue as far as I’m aware; the code is part of the JDK in javax.net.ssl. I am purposely using this class because it was recommended by the Android developer blog here, and the Apache HttpClient doesn’t throw the error anyway.

As a final note, some applications won’t need the site certificate; watching logcat will be the best way to figure out what’s happening. Stack traces are your friend!

Monday, July 16, 2012

Detecting File Hash Collisions

By Pär Österberg Medina.

When investigating a computer that is suspected of being involved in a crime or that might be infected with malware, it is important to try to remove as many known files as possible in order to keep the focus of the analysis on the files you have not seen or analyzed before. This is particularly useful in malware forensics, when you are looking for something out of the ordinary that you might not have seen before. In order to remove files from the analysis, a cryptographic checksum of each file is generated and matched against a hash database. These hash databases are divided into categories of files that are known to be good, used, forbidden, or bad.

Another common use of hash functions in computer forensics is to verify the integrity of an acquired hard drive image or any other piece of data. A cryptographic hash of the data is generated when the evidence is acquired, which can later be used to verify that the content has not changed. Even though newer hash functions exist, in computer forensics we still rely on MD5 and SHA-1. These hash functions were designed a long time ago and have flaws that can be used to generate what are called hash collisions. In this blog post I will show how these collisions can be detected so we can still use our old trustworthy hash functions - even though they are broken.

Hash Collision

You have most likely heard about hash collisions before and how they can be used for malicious intent. Briefly explained, a collision is when two files or messages produce the same hash even though their contents are different. To illustrate how this can look, I have generated what on the surface seem to be two identical programs.

pmedina@forensic:~$ wc -c prg1 prg2
 9054 prg1
 9054 prg2
18108 total
pmedina@forensic:~$ md5sum prg1 prg2
850d1dc3d24f0ea753a7ee1d5d8bbea9  prg1
850d1dc3d24f0ea753a7ee1d5d8bbea9  prg2



The files above have the same size and produce the same MD5 checksum. However, when we execute the programs, they produce completely different results.

pmedina@forensic:~$ ./prg1
Let me see your identification.
pmedina@forensic:~$ ./prg2
These aren't the droids you're looking for



Detecting Hash Collisions

In the example above I showed two files that have the same size and share the same MD5 checksum. Even though there have been documented collisions in MD5 and theoretical collision attacks against SHA-1, it is highly unlikely that a collision will occur in multiple hash functions at the same time. As you can see below, the files do not produce the same SHA-1 checksum and are therefore considered to be unique.
pmedina@forensic:~$ sha1sum prg1 prg2
a246766fc497e4d6ed92c43a22ee558b3415946a  prg1
b9c22ad10b61009193aa8b312c6ec88f44323119  prg2



The danger of whitelisting files and removing them from a forensic investigation is that you have to be absolutely sure that the files you are excluding from analysis are exactly the files you want to remove. Even though there has been no public demonstration of a second-preimage attack - one where a new file is generated to produce the same hash as an existing file - I always like to be extra careful when removing files that generate matches in my hash databases. Verifying that the files are indeed the same can be done by mapping one hash to another hash - a technique I call "hashmap".

Hashmap'ing a hash database

In order for us to be able to hashmap a hash database, the database needs to include at least two hashes created using two different hash functions. Fortunately for us, both the databases we generated in the RDS format and the NSRL databases we downloaded from NIST list the MD5 and SHA-1 hashes in each entry. I have previously shown how the program ‘hfind’ can be used both to create an index from a database and to use that index to search the database. When ‘hfind’ finds a match for a hash we search for, the program will print the filename of the file that matched the hash.

pmedina@forensic:~$ md5sum /files/RedDrive.zip
2e54d3fb64ff68607c38ecb482f5fa25  /files/RedDrive.zip
pmedina@forensic:~$ hfind /tmp/example-rds-unique.txt
2e54d3fb64ff68607c38ecb482f5fa25
2e54d3fb64ff68607c38ecb482f5fa25        RedDrive.zip



This functionality of ‘hfind’ is something we can use when we want to hashmap an MD5 hash to the SHA-1 hash of the same file. In order for this to work, we need to replace the value in the database that holds the filename with the SHA-1 checksum of the file. This can be done in many ways, but to demonstrate I will use the Unix command ‘awk’ on one of the hash databases I have generated before.

pmedina@forensic:~$ head -1 /tmp/example-rds-unique.txt > /tmp/example-rds-unique-hm-sha1.txt
pmedina@forensic:~$ tail -n +2 /tmp/example-rds-unique.txt | awk -F, '{print $1","$2","$3","tolower($1)","$5","$6","$7","$8}' >> /tmp/example-rds-unique-hm-sha1.txt
pmedina@forensic:~$ head -3 /tmp/example-rds-unique-hm-sha1.txt
"SHA-1","MD5","CRC32","FileName","FileSize","ProductCode","OpSystemCode","SpecialCode"
"000e9b6b962bdbcd5b0ff01635a417cce833490e","b0efd5eacfe6f1e251b8870d486326af","c8f43198","000e9b6b962bdbcd5b0ff01635a417cce833490e",96,0,"WIN",""
"001121f9dc35ab520b207908f0f26c48979ed497","6efca4942c73ab0b17875fd729b2d03a","2e929525","001121f9dc35ab520b207908f0f26c48979ed497",72,0,"WIN",""



As you can see, the field that used to hold the filename now holds the SHA-1 checksum of the file instead. Since the offsets of the file entries in our hashmap database are not the same as in the original database, we also need to create a new index.

pmedina@forensic:~$ hfind -i nsrl-md5 /tmp/example-rds-unique-hm-sha1.txt
Index Created



Searching our new database for the MD5 checksum now prints the SHA-1 hash of the file instead of the file name. We can verify this hash by generating a SHA-1 checksum of the file we are investigating.

pmedina@forensic:~$ hfind /tmp/example-rds-unique-hm-sha1.txt 2e54d3fb64ff68607c38ecb482f5fa25
2e54d3fb64ff68607c38ecb482f5fa25        d9c40dd2f1fb08927e773a0dc70d75fedd71549e
pmedina@forensic:~$ sha1sum /files/RedDrive.zip
d9c40dd2f1fb08927e773a0dc70d75fedd71549e  /files/RedDrive.zip 



Now we know for sure that the file we have on disk is exactly the same as the one in our database. We can of course also reverse this process to create a hashmap database that will print the MD5 hash when we query the database using a SHA-1 hash.
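
Building the reverse database uses the same awk trick with the MD5 column dropped into the FileName field instead, plus a SHA-1 index on top. A sketch mirroring the commands above (the output file name is just a suggestion):

pmedina@forensic:~$ head -1 /tmp/example-rds-unique.txt > /tmp/example-rds-unique-hm-md5.txt
pmedina@forensic:~$ tail -n +2 /tmp/example-rds-unique.txt | awk -F, '{print $1","$2","$3","tolower($2)","$5","$6","$7","$8}' >> /tmp/example-rds-unique-hm-md5.txt
pmedina@forensic:~$ hfind -i nsrl-sha1 /tmp/example-rds-unique-hm-md5.txt
Index Created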

Modifying ‘hfind’ to hashmap automatically

Generating a new hash database that holds, in the file name field, the hash value you want to map to is a solution that works. The solution, however, is not that flexible, and it requires a lot of extra disk space, since the hashmap database will be at least the same size as the original database and the index approximately a third of the database size. Instead of trying to work around the issue of ‘hfind’ only printing the field that holds the file name, a much better solution would be to patch the program so it presents us with the corresponding MD5/SHA-1 hash instead.

To do so, the first thing we need to do is download The Sleuth Kit, verify the downloaded file, and extract the contents. At the time of this writing, the latest stable version of TSK is 3.2.3.

pmedina@forensic:~$ tar -zxf sleuthkit-3.2.3.tar.gz
pmedina@forensic:~$ cd sleuthkit-3.2.3/



The next step is to run the ‘configure’ program to verify that all dependencies are installed and that all the programs that are required to compile the code are in place.

pmedina@forensic:~/sleuthkit-3.2.3$ ./configure
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... /bin/mkdir -p
checking for gawk... no
checking for mawk... mawk
..
..
config.status: creating tools/timeline/Makefile
config.status: creating tests/Makefile
config.status: creating samples/Makefile
config.status: creating man/Makefile
config.status: creating tsk3/tsk_config.h
config.status: executing depfiles commands
config.status: executing libtool commands
config.status: executing tsk3/tsk_incs.h commands
pmedina@forensic:~/sleuthkit-3.2.3$



The source code that controls how ‘hfind’ prints results when processing a database in the NSRL2 format is located in the file ‘tsk3/hashdb/nsrl_index.c’. This is the file we need to patch so that ‘hfind’ prints the SHA-1 or MD5 checksum instead of the filename.

pmedina@forensic:~/sleuthkit-3.2.3$ mv tsk3/hashdb/nsrl_index.c tsk3/hashdb/nsrl_index.c.ORIG
pmedina@forensic:~/sleuthkit-3.2.3$ cat ../nsrl_index.c.patch
172,174c172
<                 &str[1 + TSK_HDB_HTYPE_SHA1_LEN + 3 +
<                      TSK_HDB_HTYPE_MD5_LEN + 3 + TSK_HDB_HTYPE_CRC32_LEN +
<                      3];
---
>                 &str[1 + TSK_HDB_HTYPE_SHA1_LEN + 3];
331,333c329
<                 &str[1 + TSK_HDB_HTYPE_SHA1_LEN + 3 +
<                      TSK_HDB_HTYPE_MD5_LEN + 3 + TSK_HDB_HTYPE_CRC32_LEN +
<                      3];
---
>                 &str[1];
pmedina@forensic:~/sleuthkit-3.2.3$ patch tsk3/hashdb/nsrl_index.c.ORIG -i ../nsrl_index.c.patch -o tsk3/hashdb/nsrl_index.c
patching file tsk3/hashdb/nsrl_index.c.ORIG
pmedina@forensic:~/sleuthkit-3.2.3$




We also need to make sure that an error is returned if we try to use our patched ‘hfind’ binary on any database type other than the NSRL2 format. This is done by patching the file ‘tsk3/hashdb/tm_lookup.c’.

pmedina@forensic:~/sleuthkit-3.2.3$ cat ../tm_lookup.c.patch
1022c1022
<             if (dbtype != 0) {
---
>             /* if (dbtype != 0) { */
1024c1024
<                 tsk_errno = TSK_ERR_HDB_UNKTYPE;
---
>                 tsk_errno = TSK_ERR_HDB_UNSUPTYPE;
1026c1026
<                          "hdb_open: Error determining DB type (MD5sum)");
---
>                          "hdb_open: hashmap cannot work on DB type (MD5sum)");
1028c1028
<             }
---
>             /* } */
1032c1032
<             if (dbtype != 0) {
---
>             /* if (dbtype != 0) { */
1034c1034
<                 tsk_errno = TSK_ERR_HDB_UNKTYPE;
---
>                 tsk_errno = TSK_ERR_HDB_UNSUPTYPE;
1036c1036
<                          "hdb_open: Error determining DB type (HK)");
---
>                          "hdb_open: hashmap cannot work on DB type (HK)");
1038c1038
<             }
---
>             /* } */
pmedina@forensic:~/sleuthkit-3.2.3$ mv tsk3/hashdb/tm_lookup.c tsk3/hashdb/tm_lookup.c.ORIG
pmedina@forensic:~/sleuthkit-3.2.3$ patch tsk3/hashdb/tm_lookup.c.ORIG -i ../tm_lookup.c.patch -o tsk3/hashdb/tm_lookup.c
patching file tsk3/hashdb/tm_lookup.c.ORIG
pmedina@forensic:~/sleuthkit-3.2.3$



Everything needed to modify ‘hfind’ is now done, and we can compile and test our binary.

pmedina@forensic:~/sleuthkit-3.2.3$ make
Making all in tsk3
..
..
make[1]: Entering directory `/home/pmedina/sleuthkit-3.2.3/tsk3'
Making all in man
make[1]: Entering directory `/home/pmedina/sleuthkit-3.2.3/man'
make[1]: Nothing to be done for `all'.
make[1]: Leaving directory `/home/pmedina/sleuthkit-3.2.3/man'
make[1]: Entering directory `/home/pmedina/sleuthkit-3.2.3'
make[1]: Nothing to be done for `all-am'.
make[1]: Leaving directory `/home/pmedina/sleuthkit-3.2.3'
pmedina@forensic:~/sleuthkit-3.2.3$ sudo cp tools/hashtools/hfind /usr/local/bin/hashmap
pmedina@forensic:~/sleuthkit-3.2.3$ cd ..
pmedina@forensic:~$ hashmap /tmp/example-rds-unique.txt 2e54d3fb64ff68607c38ecb482f5fa25
2e54d3fb64ff68607c38ecb482f5fa25        d9c40dd2f1fb08927e773a0dc70d75fedd71549e
pmedina@forensic:~$ hashmap /tmp/example-rds-unique.txt d9c40dd2f1fb08927e773a0dc70d75fedd71549e
d9c40dd2f1fb08927e773a0dc70d75fedd71549e        2e54d3fb64ff68607c38ecb482f5fa25
pmedina@forensic:~$



As you can see above, instead of returning the name of the file found in our hash database, the corresponding hash in the MD5-to-SHA-1 or SHA-1-to-MD5 mapping is returned. With this solution, we do not need to create an additional database and can use the existing index we have already created.

A Simple USB Thumb Drive Duplicator on the Cheap

By Tony Lee and Matt Kemelhar.

You may have had to shop for a USB duplicator for some reason or another and noticed that they can be quite expensive and the product reviews are not always very encouraging. At Foundstone, we teach a few classes that require each student to have the same Foundstone customized USB stick—thus we have a need for one of these expensive devices—especially when we need upwards of 100 sticks created in a weekend.

After scouring the web and reading reviews, we resorted to buying a duplicator that was around $300 and could duplicate 7 USB sticks at a time. Our first batch finished without a hitch - until we tried to boot off of them. It turns out this product cannot duplicate a bootable USB stick, which we needed for a LIVE Linux distro. We contacted customer support, and they even went as far as to rewrite the software to try to get it to do what we wanted—unfortunately, without any success.

When all hope was lost, we turned to some Linux dd foo. As it turns out, you don’t need the expensive hardware. All you need is a standard USB hub, dd, and some command line magic.

Overall, the process involves the following steps:
  1. Finding a good USB hub
  2. Getting a copy of dd
  3. Determining the drive mapping
  4. Executing the foo


Finding a good USB hub

Ironically, the very expensive “7-port USB duplicator” that we purchased last year served as our first USB hub; however, we later realized that ANY USB hub would work. If you are going to use an old hub you have lying around, you can skip this section. Otherwise, there are a few things you may want to consider if you are going to purchase a new hub:

Number of ports


This will vary depending on the size of your project. As stated earlier, we have to duplicate 100 or more USB sticks in a weekend, so for us… the more ports the better. The USB hub I purchased this year was simply a hub (and not a “duplicator”), but it has 7 ports.

U-Speed H7928-U3: 7-port USB 3.0 hub
Price: Less than $50 on Amazon - much cheaper (about 1/6th the price) than our USB 2.0 7-port “duplicator”

USB hub speed


The duplicator we originally purchased was a USB 2.0 hub; however, I also used an old Belkin USB 1.1 4-port hub to successfully test duplication. This year, we went with a USB 3.0 hub to determine the performance increase—if any.



Spacing between ports


This is something you hopefully do not learn the hard way. When buying a USB hub, you have to keep in mind the bulkiness of the USB sticks you may have to duplicate. Take the image below for example with USB sticks of varying width. The Flash Voyager and the Verbatim stick (lower left) are wider than the Cruzer (lower right) and DataTraveler sticks (top right).



If the spacing between the ports on the hub is too close and you happen to find a wider stick on sale when you are buying, you will not be able to fit all of the sticks into the hub at the same time—thus involuntarily reducing your 7 usable ports down to 4.

We took this into consideration when we purchased the USB 3.0 USpeed hub mentioned above. As you can see in the image below, there is plenty of space between the ports, which accommodates the wider/bulkier USB sticks that may be on sale.


Source: Amazon product page


An example of a hub with ports that are too close together is the old USB 1.1 4-port Belkin I had lying around the house:


Source: Belkin F5U021 product page


Power requirements


This is not too much of an issue, but it is something to keep in mind. Most of the USB hubs can be powered off of the USB port itself or an external power source. When you are duplicating many sticks at the same time, you may want to plug in to a wall socket—even if you think you can power the hub via USB. The power source can sometimes affect speed and reliability of the copies.

Reviews


One of the deciding factors in our most recent purchase was the positive remarks about the hub (beware of shills!). We recommend choosing a hub that is industry proven and popular for speed, features, and reliability.

Lights on each port


This may seem like a minor and nitpicky feature; however, having lights on the individual ports is often helpful to ensure:
  1. The USB port is functioning
  2. The USB stick is functioning
  3. The USB stick is seated properly
  4. Data is being written to the stick

Getting a copy of dd

dd is an old-school *nix command that does low-level, bit-for-bit copying. It is a very versatile and easy-to-use tool. Common uses are acquiring a forensic image of media, creating backup images (ISOs) of CDs or DVDs, performing drive backups, and now, duplicating USB sticks.
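
For instance, backing up a CD to an ISO, one of the uses mentioned above, is a one-liner (the device name will vary by system):

root@box:~# dd if=/dev/cdrom of=backup.iso bs=2048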

There are ports of dd for Windows, but many of them have some limitation—thus we prefer to use dd natively in *nix. Ironically, since we are duplicating bootable Linux distributions, we use one of the USB sticks that we manually created to boot from while making the others. We create two USB sticks the manual way (one to boot from and one to copy).

Determining the drive mapping

Depending on the operating system, the USB sticks may be auto-mounted. It is important that you unmount the drives before starting the duplication process. There are several ways to determine how the USB sticks were mounted and how to address them.
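
For example, if the logs described below show the stick as /dev/sdb with an auto-mounted partition, unmount it before imaging:

root@box:~# umount /dev/sdb1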

Monitoring /var/log/messages


For real-time detection, just use tail:
root@box:~# tail -f /var/log/messages
Jul  1 16:16:59 DVORAK kernel: [174268.742086] usb 1-1: new high-speed USB device number 5 using ehci_hcd
Jul  1 16:17:00 DVORAK kernel: [174269.149525] scsi6 : usb-storage 1-1:1.0
Jul  1 16:17:01 DVORAK kernel: [174270.252811] scsi 6:0:0:0: Direct-Access     Kingston DataTraveler G3  PMAP PQ: 0 ANSI: 0 CCS
Jul  1 16:17:01 DVORAK kernel: [174270.260034] sd 6:0:0:0: Attached scsi generic sg2 type 0
Jul  1 16:17:01 DVORAK kernel: [174270.779011] sd 6:0:0:0: [sdb] 7826688 512-byte logical blocks: (4.00 GB/3.73 GiB)
Jul  1 16:17:01 DVORAK kernel: [174270.786126] sd 6:0:0:0: [sdb] Write Protect is off
Jul  1 16:17:02 DVORAK kernel: [174270.834954]  sdb: sdb1
Jul  1 16:17:02 DVORAK kernel: [174270.860634] sd 6:0:0:0: [sdb] Attached SCSI removable disk




dmesg - print or control kernel messages


For a running history, you can use “dmesg | less” and then hit ‘G’ to go to the bottom and find your USB sticks. You will find something like the following indicating that the operating system has detected and labeled the device /dev/sdb:
[  981.231497] Initializing USB Mass Storage driver...
[  981.231586] scsi3 : usb-storage 1-1:1.0
[  981.231792] usbcore: registered new interface driver usb-storage
[  981.231794] USB Mass Storage support registered.
[  982.235921] scsi 3:0:0:0: Direct-Access     Kingston DataTraveler G3  PMAP PQ: 0 ANSI: 0 CCS
[  982.238374] sd 3:0:0:0: Attached scsi generic sg2 type 0
[  982.248017] sd 3:0:0:0: [sdb] 7826688 512-byte logical blocks: (4.00 GB/3.73 GiB)
[  982.252239] sd 3:0:0:0: [sdb] Write Protect is off
[  982.252243] sd 3:0:0:0: [sdb] Mode Sense: 23 00 00 00
[  982.256745] sd 3:0:0:0: [sdb] No Caching mode page present
[  982.256749] sd 3:0:0:0: [sdb] Assuming drive cache: write through
[  982.276716] sd 3:0:0:0: [sdb] No Caching mode page present
[  982.276719] sd 3:0:0:0: [sdb] Assuming drive cache: write through
[  982.278560]  sdb: sdb1
[  982.291142] sd 3:0:0:0: [sdb] No Caching mode page present
[  982.291145] sd 3:0:0:0: [sdb] Assuming drive cache: write through
[  982.291148] sd 3:0:0:0: [sdb] Attached SCSI removable disk




fdisk -l - partition table manipulator and viewer


fdisk with the -l option (lowercase L) will list the devices as shown:
 Disk /dev/sdb: 4007 MB, 4007264256 bytes
74 heads, 10 sectors/track, 10576 cylinders
Units = cylinders of 740 * 512 = 378880 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *          11       10577     3909312    b  W95 FAT32




Executing the foo


Once you have determined the number of drives detected and how they are to be referenced, you are ready to execute the foo. Make sure you do not switch around the source (if) and destination (of) parameters defined below. For example, if you have 2 drives you are imaging, it may look something like this:

Originating drive: /dev/sdf
First drive: /dev/sdd
Second drive: /dev/sde
 root@box:~# dd if=/dev/sdf |pv| tee >(dd of=/dev/sdd bs=16M) | dd of=/dev/sde bs=16M





Syntax explanation:
dd = program used to duplicate
if = input file (source drive)
of = output file (destination drive)
pv = pipe viewer (for copying statistics)
tee = program used to redirect input and output
bs = option of dd to control blocksize – could be adjusted for potential speed increase

If you have 4 drives you are imaging, it may look something like this:

Originating drive: /dev/sdd
First drive: /dev/sde
Second drive: /dev/sdf
Third drive: /dev/sdg
Fourth drive: /dev/sdh

 root@box:~# dd if=/dev/sdd |pv| tee >(dd of=/dev/sde bs=16M) >(dd of=/dev/sdf bs=16M) >(dd of=/dev/sdg bs=16M) | dd of=/dev/sdh bs=16M
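
Once a pass finishes, it is worth spot-checking that a clone matches the source before handing the sticks out; hashing both devices end to end does the trick. This assumes the destination stick is the same size as the source, since a larger stick will hash differently even after a good copy (drive letters from the example above):

root@box:~# md5sum /dev/sdd /dev/sde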



Performance

Duplication performance will vary depending on the setup; however, in the next article we will reveal how our different setups performed and hopefully extract some useful information that may save you time and money.

Overall results

The command syntax above worked great for us and duplicated over 100 drives with no issues at all—most of the time duplicating up to 7 drives simultaneously. We realize that there are many ways to get the job done—so if you have other commands, tools, or parameter adjustments that have worked well for you, we would love to hear about them.

Tuesday, July 10, 2012

Sniffing on the 4.9GHz Public Safety Spectrum

By Brad Antoniewicz.

Probably the most important thing to mention about the 4.9GHz spectrum is that you need a license to operate in it! If you don't have a license (I'm pretty sure you don't) - IT MAY BE ILLEGAL TO INTERACT WITH THIS BAND.

You've been warned - That all being said, let's talk about public safety.

What is the Public Safety Spectrum?

The Public Safety Spectrum is the name for a number of reserved ranges in radio spectrum allocated by the FCC and dedicated for the "sole or principal purpose of protecting the safety of life, health, or property". Basically it's used for police, ambulance, fire, and in some cases, utilities to communicate.

The 4.9GHz Public Safety Spectrum (4.940GHz to 4.990GHz) is one of these reserved public safety ranges. It's mainly for short distance, almost line-of-sight, communications. It's used for everything from creating "on the scene" networks, so that police and other responders can share and transfer data, to video camera systems around a fixed location.

The neat thing about the 4.9GHz spectrum is that the de facto standard used in it is IEEE 802.11! It takes some deviations from the standard, such as allowing for 1MHz, 5MHz, 10MHz, or 20MHz channels, but other than that, it is plain old 802.11 on a different spectrum.

Interacting with the Spectrum

To interact with the spectrum, you'll need an FCC LICENSE (!! if you skipped to this part, please see the first paragraph), and an adapter that is capable of transmitting/receiving on the 4.9GHz spectrum. There are some adapters already out there, such as the Ubiquiti SuperRange 4 Cardbus (SR4C), but no one likes spending more money if they don't have to!

A 4.9GHz adapter you might already have in your possession without even knowing it is the Ubiquiti SuperRange Cardbus (SRC or SRC300)! The internal Atheros chipset actually supports from 4.910GHz to 6.1GHz! That's much more than originally advertised :)

The problem, though, is that if you want to use the cards with any of the standard Linux tools, you're more or less screwed! The current ath5k drivers don't officially support 4.9GHz. There are a couple of patches for older versions of the driver, but some don't work or can't be applied to the current stable driver release. Another issue is that some drivers don't support the 5MHz, 10MHz, or 20MHz channel widths.

About the 4.9GHz Driver Patch

I took some time out and wrote up a quick patch for the current version of compat-wireless. I used some of the patches mentioned above as a starting point, and then followed the code comments in the existing drivers to implement the channel widths correctly.

Enabling 4.9GHz


To enable the extended frequency ranges, I just modified the driver to accept frequencies as low as 4.910GHz. There was an "ath_is_49ghz_allowed()" function that defined whether the regulatory domain was allowed to access that range; I modified this function to always return true.

The regulatory domain is often stored within your card's EEPROM and defines what region the card will be operating in. The mac80211 drivers query this value to determine what frequencies you're allowed to use. Based on that value, the driver will either consult its internal (statically defined) regulatory database or, if present, the Central Regulatory Domain Agent (CRDA). The CRDA is a userland agent that defines what frequencies are used within a region. The idea is that if you're in a different regulatory domain than the one your card is registered for, you can dynamically change the allowed frequencies without making any driver changes. You would do this with the "iw" command:
 root@bt:~# iw reg set <VALUE>


One problem I came across is that the driver won't consistently respect a regulatory domain that is defined this way. For example, if my card's EEPROM is set for US but I set the World regulatory domain ("00"), sometimes it won't actually apply it, or it won't allow me to use the channels enabled by the extended regulatory domain. Because of this, I took the somewhat brutish approach of just returning true from "ath_is_49ghz_allowed()".
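
You can watch this behavior yourself: "iw reg get" prints the regulatory domain and frequency rules currently in force, so checking it before and after an "iw reg set" shows whether the change actually stuck. For example:

 root@bt:~# iw reg get
 root@bt:~# iw reg set 00
 root@bt:~# iw reg get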

I really wanted to make this patch work with the smallest number of module changes, because the more complicated the module change, the more likely it will break in future releases. Plus, most of the code to support 4.9GHz was already there!

Rather than setting the 4.9GHz channels, etc., statically within the driver, I also decided to leverage the CRDA, since it can be changed without needing the driver to be rebuilt.

Channel Widths


By default, the compat-wireless drivers support 20MHz channel widths. Because 4.9GHz can have 1MHz, 5MHz, 10MHz, or 20MHz channels, the driver needed to be modified to support this. Luckily, the driver code comments spell out what needs to be done, and much of the support already existed; it just wasn't used. I modified the drivers as per the code comments and took a tip from the RADAR patch by adding the "default_bwmode" module parameter so that people can specify the channel width when they load the module.
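
After you build and install the patched module (covered below), you should be able to confirm the new parameter was picked up, since "modinfo" lists a module's parameters. Something along these lines should show it (the parameter name comes from the patch described above):

 root@bt:~# modinfo ath5k | grep -i bwmode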

Installation

Installing the patch is easy. Since I use non-persistent BackTrack for everything, I'll provide instructions using that as a base. You can either perform the installation manually or the "easy way," which uses a script I created.

Download

You should really read on, but if you're impatient and just want stuff to download, here is the download link: https://github.com/OpenSecurityResearch/public-safety

The Easy Way

If you'd like to use this method, you'll just need internet access. Once you complete the "easy" way, you can also reuse that directory for an offline installation later on. To install:
 root@bt:~# git clone https://github.com/OpenSecurityResearch/public-safety
 root@bt:~# cd public-safety/4.9ghz/
 root@bt:~/public-safety/4.9ghz# chmod +x 49ghz_install.sh
 root@bt:~/public-safety/4.9ghz# ./49ghz_install.sh


That should be it (told you it was easy)! It will automatically create a monitor mode VAP using 10MHz-wide channels.
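
To double-check that the script created the monitor mode VAP, "iw dev" lists all wireless interfaces along with their type; you should see an entry of type monitor (the interface name may differ on your system):

 root@bt:~# iw dev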

Manual Installation

Manual installation is pretty simple too, but some people hate typing :) To install everything from scratch, first install all the prerequisites:
 root@bt:~# apt-get install libnl-dev libssl-dev python-m2crypto build-essential 


Next, we'll need to set up CRDA. The CRDA consults a database for its regulatory information, called the wireless-regdb. You'll also need to sign the database, so let's create some keys to do that:
 root@bt:~# openssl genrsa -out key_for_regdb.priv.pem 2048 
 root@bt:~# openssl rsa -in key_for_regdb.priv.pem -out key_for_regdb.pub.pem -pubout -outform PEM


Now, we'll download wireless-regdb, extract it, and build:
 root@bt:~# wget http://linuxwireless.org/download/wireless-regdb/wireless-regdb-2011.04.28.tar.bz2
 root@bt:~# tar -jxf wireless-regdb-2011.04.28.tar.bz2
 root@bt:~# cd wireless-regdb-2011.04.28
 root@bt:~/wireless-regdb-2011.04.28# make


The regulatory database is just a plain-text file that is then converted to the format CRDA expects. You can modify your database any way you'd like, or you can just use the one I created (db-ReturnTrue.txt):
 root@bt:~/wireless-regdb-2011.04.28# wget https://raw.github.com/OpenSecurityResearch/public-safety/master/4.9ghz/db-ReturnTrue.txt
 root@bt:~/wireless-regdb-2011.04.28# cp db-ReturnTrue.txt db.txt
 root@bt:~/wireless-regdb-2011.04.28# ./db2bin.py regulatory.bin db.txt ../key_for_regdb.priv.pem
 root@bt:~/wireless-regdb-2011.04.28# make install
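
For reference, db.txt is human-readable: each country block holds frequency rules of the form (start-MHz - end-MHz @ max-channel-width-MHz), (max-antenna-gain, max-EIRP). As an illustration only (the actual contents of db-ReturnTrue.txt may differ), a rule permitting the 4.9GHz range could look roughly like:

 country US:
 	(4910 - 4990 @ 20), (3, 20)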


Now that our regulatory database is all set up, let's install CRDA to leverage it. Notice that after we download and extract, we also copy our public key to the locations where CRDA expects it, so that it can validate the authenticity of the regulatory database we created.
 root@bt:~# wget http://linuxwireless.org/download/crda/crda-1.1.2.tar.bz2
 root@bt:~# tar -jxf crda-1.1.2.tar.bz2
 root@bt:~# cd crda-1.1.2
 root@bt:~/crda-1.1.2# cp ../key_for_regdb.pub.pem pubkeys/
 root@bt:~/crda-1.1.2# cp ../key_for_regdb.pub.pem /usr/lib/crda/pubkeys
 root@bt:~/crda-1.1.2# make
 root@bt:~/crda-1.1.2# make install


So now we can get to the compat-wireless installation. First download it, then extract:
 root@bt:~#  wget http://www.orbit-lab.org/kernel/compat-wireless-3-stable/v3.3/compat-wireless-3.3-1.tar.bz2
 root@bt:~# tar -jxf compat-wireless-3.3-1.tar.bz2
 root@bt:~# ln -s /usr/src/linux /lib/modules/`uname -r`/build
 root@bt:~# cd compat-wireless-3.3-1


Next we'll unload any conflicting drivers (I'm being a little redundant with these two commands, I know):
 root@bt:~/compat-wireless-3.3-1# sudo scripts/wlunload.sh
 root@bt:~/compat-wireless-3.3-1# sudo modprobe -r b43 ath5k ath iwlwifi iwlagn mac80211 cfg80211


And then tell compat-wireless to just compile ath5k:
 root@bt:~/compat-wireless-3.3-1# scripts/driver-select ath5k


Now we'll download the default set of patches for BT5R2 and apply them (you may get some errors when applying; you should be able to ignore them):
 root@bt:~/compat-wireless-3.3-1# wget http://www.backtrack-linux.org/2.6.39.patches.tar
 root@bt:~/compat-wireless-3.3-1# tar -xf 2.6.39.patches.tar
 root@bt:~/compat-wireless-3.3-1# patch -p1 < patches/mac80211-2.6.29-fix-tx-ctl-no-ack-retry-count.patch
 root@bt:~/compat-wireless-3.3-1# patch -p1 < patches/mac80211.compat08082009.wl_frag+ack_v1.patch
 root@bt:~/compat-wireless-3.3-1# patch -p1 < patches/zd1211rw-2.6.28.patch
 root@bt:~/compat-wireless-3.3-1# patch -p1 < patches/ipw2200-inject.2.6.36.patch


Then let's apply the 4.9GHz patch:
 root@bt:~/compat-wireless-3.3-1# wget https://raw.github.com/OpenSecurityResearch/public-safety/master/4.9ghz/compat-wireless-3.3-1_ath5k-49GHZ+BWMODE.patch
 root@bt:~/compat-wireless-3.3-1# patch -p1 < compat-wireless-3.3-1_ath5k-49GHZ+BWMODE.patch


OK! Now we're ready to build:
 root@bt:~/compat-wireless-3.3-1# make
 root@bt:~/compat-wireless-3.3-1# make install
 root@bt:~/compat-wireless-3.3-1# cd ..


After this, we're more or less finished. You'll probably also want to upgrade your Kismet to find these networks; it's highly recommended that you use the latest version of Kismet from the git repo.

Updating Kismet

If you followed the easy way above, then you should already be updated. If you're following the manual way, uninstall the installed version of kismet:
 root@bt:~# dpkg -r kismet


Then grab the latest development version of kismet and compile it:
 root@bt:~# git clone https://www.kismetwireless.net/kismet.git
 root@bt:~# cd kismet
 root@bt:~/kismet# ./configure
 root@bt:~/kismet# make dep
 root@bt:~/kismet# make
 root@bt:~/kismet# make install


You'll also need to define a new channel list in your kismet.conf to support this. I've added the following. I chose to use .5MHz channel spacing since many 4.9GHz deployments have varying channel layouts.
 channellist=ps5mhz:4920-4990-5-.5
 channellist=ps10mhz:4920-4990-10-.5
 channellist=ps20mhz:4920-4990-20-.5
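
Depending on your Kismet build, you can then point a capture source at one of these lists. In newcore-style configs this is done with an ncsource line; treat the exact option syntax below as an assumption to verify against your version's documentation:

 ncsource=mon0:type=ath5k,channellist=ps10mhz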


Finally, to change to a different channel width (other than 20MHz), you'll need to define the "default_bwmode" module parameter: for 5MHz channels, define "default_bwmode=1"; for 10MHz, "default_bwmode=2"; and for 40MHz, "default_bwmode=3". Also, for whatever reason, if you're using a channel width other than 20MHz, you'll need to manually create a monitor mode VAP (e.g. "mon0") and use that as your source for kismet. Here's how to set it up:
 root@bt:~# modprobe ath5k default_bwmode=2
 root@bt:~# iw dev wlan1 interface add mon0 type monitor
 root@bt:~# ifconfig mon0 up
 root@bt:~# iwconfig mon0 freq 4.920G
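
To confirm the interface actually tuned where you expect, "iwconfig" will echo the settings back; the output should report Mode:Monitor and a frequency of 4.92GHz:

 root@bt:~# iwconfig mon0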


Is it Working?

Once everything is running, kismet should look normal, just with the previously undiscovered AP available! Note that this band is highly regulated in the US, so you won't see these networks everywhere. And since the transmit distance is so small, you'll likely need to be in near line-of-sight of the 4.9GHz network you're looking at. Here's a screenshot of kismet discovering our test AP (located in a Faraday cage, in a country where it is legal to transmit on 4.9GHz, of course):


Want to learn more?

This article is a precursor to the talk Robert Portvliet and I will be giving this year at Defcon 20. So if this sparks your interest, stop by - we'll be talking about 4.9GHz and the other Public Safety Bands!