Tuesday, November 4, 2014

A Brief Overview of the Google Authenticator

By Deepak Choudhary.

Many application providers are considering adding a more robust login mechanism to their applications, since single-layer authentication is no longer considered a secure approach. The trend is to add an additional authentication layer after the main login: a one-time password (OTP) delivered by text message, secondary authentication questions, voice calls, and so on.

It’s common for application providers to set up the second layer on their end, but there are also a few third-party providers in the market. Google’s two-factor authentication is arguably the most popular and is the subject of this blog post.


Google Authenticator is a smartphone-based application that generates the one-time passwords used in secondary authentication. End users have the option to enable it within their account settings and complete the setup on their smartphone.

In order to leverage Google Authenticator, the user needs to install the mobile application and then sync it with the application server. Once the user logs into the first page of the application, they will be presented with the Google Authenticator verification prompt. The user can scan the on-screen QR code or manually enter a secret as shown below. Once the process is completed, OTPs are generated locally on the phone, so the user can work without any connectivity.
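The secret shared during this enrollment step is typically delivered to the phone as an otpauth:// URI encoded into the QR code. Here's a minimal sketch of building such a URI; the account name, issuer, and helper name are made up for illustration:

```python
import base64
import os
import urllib.parse

def provisioning_uri(account, issuer, secret_bytes):
    """Build the otpauth:// URI that authenticator apps read from a QR code.

    The shared secret is Base32-encoded, per the key URI format.
    """
    secret = base64.b32encode(secret_bytes).decode("ascii").rstrip("=")
    label = urllib.parse.quote(f"{issuer}:{account}")
    query = urllib.parse.urlencode({"secret": secret, "issuer": issuer})
    return f"otpauth://totp/{label}?{query}"

# Example: a freshly generated 10-byte secret for a hypothetical account
uri = provisioning_uri("alice@example.com", "ExampleApp", os.urandom(10))
print(uri)
```

Scanning a QR code that encodes this URI is equivalent to typing the Base32 secret into the app by hand.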

OTP Generation

Google Authenticator supports two algorithms for generating OTPs: HMAC-based OTP (HOTP, RFC 4226) and Time-based OTP (TOTP, RFC 6238).

The main difference is what the algorithms are fed: HOTP uses a counter as its input, while TOTP is seeded with the current time. For TOTP, the client's and server's clocks must be in sync or the process will fail; Network Time Protocol (NTP) is a key component in facilitating TOTP. In contrast, HOTP uses a counter instead of time, and the counter is incremented on both sides each time an OTP is generated.
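The two algorithms differ only in where that moving factor comes from, which a short sketch makes concrete (this follows the published RFC construction, not Google's source):

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over an 8-byte big-endian counter,
    dynamically truncated to a short decimal code."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def totp(secret: bytes, timestep: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HOTP where the counter is the number of
    time steps elapsed since the Unix epoch."""
    return hotp(secret, int(time.time()) // timestep, digits)
```

With the RFC 4226 test key b"12345678901234567890", hotp(..., 0) produces "755224", matching the RFC's published test vectors.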


From a security perspective, an OTP implementation should take into consideration the same key elements as most other multifactor authentication mechanisms. For example:

  • Cross account OTP usage
  • OTP expiration
  • OTP reuse
  • OTP length, etc.
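A toy server-side check illustrates how the first three of these might be enforced. Everything here, including the expected_otp helper, is illustrative only and is not how Google's server side is implemented:

```python
import time

class OtpValidator:
    """Toy validator illustrating expiration, reuse, and
    cross-account checks for one-time passwords."""

    def __init__(self, ttl_seconds=30):
        self.ttl = ttl_seconds
        self.used = set()   # (account, otp) pairs already redeemed

    def expected_otp(self, account, issued_at):
        # Stand-in for the real OTP computation; purely illustrative
        return f"{hash((account, issued_at)) % 10**6:06d}"

    def verify(self, account, otp, issued_at, now=None):
        now = time.time() if now is None else now
        if now - issued_at > self.ttl:                    # OTP expiration
            return False
        if (account, otp) in self.used:                   # OTP reuse
            return False
        if otp != self.expected_otp(account, issued_at):  # cross-account usage
            return False
        self.used.add((account, otp))
        return True
```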


Tuesday, September 16, 2014

hostapd-wpe: Now with More Pwnage!

By Brad Antoniewicz.

A major component of hacking IEEE 802.11 wireless networks is targeting the client's system. This is because of the trusting nature of wireless clients, and because corporate systems can be tricky to configure correctly. But don't forget that the same client-side attacks used against 802.11 wireless networks can be used on wired networks with port security when the attacker has physical access to a workstation or access switch. hostapd-wpe provides a means to execute client-side attacks on wired and wireless networks, and in this blog post we'll cover hostapd-wpe's latest features.


Both IEEE 802.11 and Ethernet can utilize a security standard called IEEE 802.1x that provides the opportunity for "the network" to authenticate the connecting user. In wireless networks, this is part of WPA Enterprise. 802.1x relies heavily on the Extensible Authentication Protocol (EAP) to send messages between the connecting user (supplicant) and the authentication server. To be as flexible as possible, there are different "EAP Types" which offer different authentication options chosen by the network administrator. For instance, PEAP first sets up a TLS tunnel between the client and server, then sends a username and password within that tunnel.

An opportunity to attack networks running 802.1x exists if the attacker can position themselves between the client and the authentication server. If that happens and the client is configured to blindly trust the network it's connecting to, it may naively trust an impostor authentication server set up by the attacker and send its username and password to it.

This attack was first implemented in a tool Josh Wright and I wrote called FreeRADIUS-WPE, and it has recently been implemented in hostapd-wpe.


FreeRADIUS-WPE is a great approach to performing client-side attacks against 802.1x/EAP, but since it's only an authentication server, you still need to set up an authenticator. The authenticator of choice is most commonly hostapd, because it can be run in software, is generally easy to set up, and supports wired and wireless attacks. There's one thing about hostapd that I didn't mention: it can also be an authentication server! So by moving the "WPE functionality" from FreeRADIUS-WPE into hostapd, we eliminate an unneeded layer of complexity!

Impersonation Attacks

hostapd-wpe's core feature is authentication server/authenticator impersonation. It simply logs authentication data from the client to a file and outputs it to the screen. It currently supports the following EAP Types:
  • EAP-FAST/MSCHAPv2 (Phase 0)


Data from the client relevant to the attack is stored in hostapd-wpe.log within the directory from which hostapd-wpe was called. This could be credentials or Heartbleed data.

The log file location can be configured within the hostapd-wpe.conf configuration file via the wpe_logfile option.

Credential Format

For EAP Types that utilize MSCHAPv2, hostapd-wpe outputs the challenge and response in the standard WPE format and john's NETNTLM format.

This feature is enabled by default.

Requests for Less Secure Types

hostapd-wpe is configured by default to request cleartext and other less-secure EAP types (e.g. PAP) from the client. In some cases a client may be configured to support multiple EAP types, so this acts as a sort of "downgrade" attack.

This feature is enabled by default through the hostapd-wpe.eap_user file.

Return EAP-Success

At the end of a successful authentication, the authentication server sends an EAP-Success message. hostapd-wpe will always return an EAP-Success so that the client believes it has successfully authenticated and continues its normal connection procedures. Assuming the attacker provides the appropriate network services to establish a connection (IP, DNS, etc.), they can leverage this to MiTM client traffic or otherwise attack the client.

This feature can be invoked using the -s option via the command line.

Cupid (Heartbleed) Client Attacks

hostapd-wpe implements Cupid or Heartbleed attacks against connecting clients.

This feature can be invoked using the -c option via the command line. The following configuration options exist within the hostapd-wpe.conf configuration file, however default settings are recommended:

wpe_hb_send_before_handshake=0    # Heartbleed True/False (Default: 1)
wpe_hb_send_before_appdata=0      # Heartbleed True/False (Default: 0) 
wpe_hb_send_after_appdata=0       # Heartbleed True/False (Default: 0)
wpe_hb_payload_size=0             # Heartbleed 0-65535 (Default: 50000)
wpe_hb_num_repeats=0              # Heartbleed 0-65535 (Default: 1)
wpe_hb_num_tries=0                # Heartbleed 0-65535 (Default: 1)

Karma-Style Probe Responses

Some 802.11 clients send out probe requests to determine if the wireless network they're configured for is nearby. When Karma-Style Probe Responses are enabled, hostapd-wpe will look for client probe requests and immediately change the SSID it is broadcasting to match the probe request of the client.

This feature can be invoked using the -k option via the command line.


To get hostapd-wpe running on Kali or any other Debian-based system, you first have to install its dependencies:

apt-get update
apt-get install libssl-dev libnl-dev

hostapd-wpe is a patch to hostapd, so you'll next have to download the hostapd source and apply the patch:

git clone https://github.com/OpenSecurityResearch/hostapd-wpe
wget http://hostap.epitest.fi/releases/hostapd-2.2.tar.gz
tar -zxf hostapd-2.2.tar.gz
cd hostapd-2.2
patch -p1 < ../hostapd-wpe/hostapd-wpe.patch 

Now you can build:

cd hostapd
make

You'll also need some certificates set up; you can do this with the bootstrap script:

cd ../../hostapd-wpe/certs
./bootstrap

Look at hostapd-wpe.conf and set the interface and driver values according to your needs (and perhaps the ssid, hw_mode, and channel for 802.11). Then to run:

cd ../../hostapd-2.2/hostapd
sudo ./hostapd-wpe hostapd-wpe.conf


Tuesday, September 9, 2014

Face Smack: A CSAW CTF Challenge

By Brad Antoniewicz.

For the last couple of years, I've had the pleasure of helping out with and judging NYU Poly CSAW's CTF, the largest student-run Capture the Flag competition in the United States (probably the world). It is always an awesome experience, and I encourage you to participate if you have the chance - the Qualification Round is September 19th!

Last year, I wrote a challenge called FaceSmack (a.k.a. FunForEveryBody) and wanted to publish it for you to play with. The goal of the challenge is to obtain the key (in the format key{key here}) by clicking buttons in a specific order. The order can be reversed out of the binary - with enough dedication. On average, it's taken people between 4 and 10 hours to get it.

If you start to play around with the challenge you'll notice each button click causes a change in the way a sentence is formed. You'll know when you've got it right because the status message will read "The key Is key{some sentence}" instead of "The key is NOT key{some sentence}".


Special props to team PPP for being the first to solve it! Those guys are rockstars! Enjoy!

Tuesday, August 26, 2014

My Cousin VIMmy: A Journey Into the Power of VIM

By Melissa Augustine Goldsmith.

I was cleaning up some YARA rules we have in the office. I am, if anything, a bit OCD about tabs and spacing. I came across this rule from the Contagio exploit pack...

 $a41 = {  7d 40 4e 55 05 54 51 4d 46 52 7e 73 3d 7f 7a 74 77 77 63 36 77 71 33 60 64 7e}
 $a42 = {  7e 41 41 54 06 55 56 4c 45 53 41 72 3e 7e 7d 75 74 76 6c 37 74 70 34 61 67 7f}              
 $a43 = {  7f 42 40 5b 07 56 57 4b 44 50 40 4d 3f 7d 7c 72 75 75 6d 38 75 73 35 66 66 7c}
    $a44 = {  78 43 43 5a 08 57 54 4a 43 51 43 4c 00 7c 7f 73 72 74 6e 39 7a 72 36 67 61 7d} 
 $a45 = {  79 44 42 59 09 58 55 49 42 56 42 4f 01 43 7e 70 73 73 6f 3a 7b 7d 37 64 60 7a}   
 $a46 = {  7a 45 45 58 0a 59 5a 48 41 57 45 4e 02 42 41 71 70 72 68 3b 78 7c 38 65 63 7b}

If it were only this happy little bit of code, I would have just grinned and fixed it by hand. However, there were 255 variables with tabs in the wrong place, and gaps in places where there should not be gaps. There was no way I was going to waste time doing this by hand.

So I thought... could vim do the heavy lifting for me??

For those not drinking the vim Kool-Aid: “Vim is a highly configurable text editor built to enable efficient text editing. It is an improved version of the vi editor distributed with most UNIX systems.” It’s a text editor... so what? Oh, but it is so much more! To the Linux machine!

There are a ton of awesome commands with VIM, but I am going to focus on the ones I used to solve my issues.

Varying Levels of Tab-dom/Space-ness

How does one remove the tabs in a document? Well, a search and replace, of course! Most text editors can do this, but vim makes it even easier to search for a tab (\t), whitespace (\s), or newline (\n) and replace it with whatever you like.

So let’s crack on with the regex! I entered the following command and hit Enter:

:%s/\t//g

So let’s go through this line to figure out its meaning:

: -> short for “execute”, this goes back to the history of VI and VIM
%s -> run the substitute command across the entire document, if you omit the ‘%’ it searches only on the current line where your cursor is
/ -> start of the regex
\t -> what you are searching for, this is [TAB]
/ -> the next item will be what VIM will substitute [TAB] with, if there is nothing, then it just replaces it with nothing
/ -> end of the regex
g -> replaces ALL occurrences in the line; if this is omitted, only the first occurrence in the line is substituted

This is a snippet of the output.

Success! Mostly! A quick scroll up shows me that some lines have spaces at the beginning. It’s hard to see here, but the line starting with $a249 has a leading space! Totally unacceptable!

$a248 = {  ac 9f 9f 86 d4 83 80 9e 97 9d 8f 80 cc 88 8b 87 86 88 92 c5 86 86 c2 93 95 b1}  
$a249 = {  ad 90 9e 85 d5 84 81 9d 96 82 8e 83 cd 8f 8a 84 87 87 93 c6 87 81 c3 90 94 8e}  
$a250 = {  ae 91 91 84 d6 85 86 9c 95 83 91 82 ce 8e 8d 85 84 86 9c c7 84 80 c4 91 97 8f}  

(table 1)

I will show it! Back to vim:

:%s/^\s//
This is similar to my first substitution, but I will explain the differences:

^ -> only match hits that occur at the beginning of a line. This means it will ignore all other whitespace on the line
\s -> whitespace

Why did I not add the /g? Well, the ^ says I am only looking at the beginning of the line, so adding the /g does not matter; no other “hit” could match the criteria of being at the beginning.

Let’s try something else, which will also show you another vim command. Let’s say you make a mistake and you want to revert back to the original before you made the substitution. Just hit ‘u’ and your last changes will be undone! Think of it as CTRL+Z.

So what would be the difference in Table 1 if I omitted the ^ from the regex? This is the result:

$a248= {  ac 9f 9f 86 d4 83 80 9e 97 9d 8f 80 cc 88 8b 87 86 88 92 c5 86 86 c2 93 95 b1}
$a249 = {  ad 90 9e 85 d5 84 81 9d 96 82 8e 83 cd 8f 8a 84 87 87 93 c6 87 81 c3 90 94 8e}
$a250= {  ae 91 91 84 d6 85 86 9c 95 83 91 82 ce 8e 8d 85 84 86 9c c7 84 80 c4 91 97 8f}
(table 2)

I told the regex to find the first whitespace character on each line and remove it -- so now there is a spacing discrepancy on some lines between the variable and the equals sign. This just shows you the power of regular expressions!

My Desire for Balance

If you notice, there are two spaces between the opening curly bracket and the first hex character. However, there are no spaces between the last hex character and the closing curly bracket. There must be equilibrium!

:%s/{  /{/

So this is saying: for every line in the document, replace the first instance (hence the lack of ‘g’) of “{  ” (curly brace followed by two spaces) with “{” (just the curly brace), then move on to the next line.

My Hatred of Blank Lines

So it’s hard to see in the screenshots due to the size of the window, but in the copied-and-pasted output there are clearly blank lines between most variable declarations. I do not enjoy this. It is time for them to go!


Oh man, so what did THIS do? Let’s go through the new ones:

\+ -> matches the preceding character (in our case a whitespace character) one or more times
$ -> to the end of the line

$a251 = {af 92 90 8b d7 86 87 9b 94 80 90 9d cf 8d 8c 82 85 85 9d c8 85 83 c5 96 96 8c}       
$a252 = {a8 93 93 8a d8 87 84 9a 93 81 93 9c d0 8c 8f 83 82 84 9e c9 8a 82 c6 97 91 8d}
$a253 = {a9 94 92 89 d9 88 85 99 92 86 92 9f d1 93 8e 80 83 83 9f ca 8b 8d c7 94 90 8a}

Uniform Tabs

So you know how I got rid of the tabs? Well, I actually do want a tab - I just wanted them all to be uniform. Now that all of that is done, I can add a tab back in and be happy with my output! How to do that? Well, vim again!

First off, I need to know how many lines I want to indent. When you open a file in vim it actually tells you, but as we made some changes things may now be a bit different. So let’s see how many lines we’ve got:

:echo line('$')

This also makes sense, as we start counting at ‘1’ and we have what seems to be 255 variables being declared :)

So now to indent all 254 lines, first I made sure I was at the top of the file, then I typed this in:


Success! Uniform tabs! I could of course run the command again if I wanted a double tab.

So now I have a much cleaner (happier) YARA rule. This shows the power of regular expressions paired with vim.

Tuesday, August 19, 2014

Learning Exploitation with FSExploitMe

By Brad Antoniewicz.

I've been an adjunct professor at NYU Poly for almost two years now. It's been a great experience for a number of reasons, one of which is that I'm teaching a hot topic: Vulnerability Analysis and Exploitation. The course is the next iteration of the pentest.cryptocity.net content that evolved into the CTF Field Guide by Dan Guido, Trail of Bits, and a bunch of other industry professionals. It takes a student with some minor programming knowledge and submerges them in exploitation. When students come out, they have successfully exploited IE on Windows 7, bypassing DEP and ASLR. It's an awesome, but sometimes overwhelming, experience for every student who takes it.

Each semester I start the class off with a survey to gauge the students' experience level. No surprise here: most have little to no experience when it comes to real-world exploitation on Windows. This results in a "revamping" period where students have to work extra hard getting used to WinDBG and IDA.

I wanted to create something that would help ease students into the learning environment, and that's what FSExploitMe is: a tutorial that walks you through the basics of WinDBG and general exploitation in a browser environment. FSExploitMe is based on Vulnerable.ocx, developed by the original creators of the class.


FSExploitMe is a self-contained, ActiveX-based tutorial that you download and run locally within your browser. You'll want to run it in a VM, as it makes your browser vulnerable to attack. Ensure you have the Microsoft Visual C++ 2010 Redistributable Package installed, then just double-click FSExploitMe.html to get started. You'll have to allow the extension to run by right-clicking the banner and selecting "Allow Blocked Content...":

Next Internet Explorer will ask you if you'd like to allow the active content to run, click "Yes":

Then finally you'll get a UAC prompt, click "Yes" here as well:

FSExploitMe should be all ready to go now:

Internet Explorer 8 looks a little less pretty than newer versions. IE8 is the recommended version strictly because Lesson 3 of FSExploitMe executes a heap spray that will not work on newer versions of IE. You can easily replace that function with a newer heap spray; I just haven't done that and tested it on all the other IE versions. That being said, future iterations of FSExploitMe will include a more robust heap spray function.

It will help to have Symbols when you start debugging. The easiest way to do that is by copying the FSExploitMe.pdb file to the C:\Windows\Downloaded Program Files directory.

Then once you launch WinDBG, add that path to your Symbol Path:

.sympath+ C:\Windows\Downloaded Program Files

About the Lessons

When you first open FSExploitMe.html in your browser, you'll arrive at the welcome screen which gives you an overview of the Installation plus learning resources to get you off the ground with x86, IDA and WinDBG if you have absolutely no experience with them. You can return back to this page by clicking the "FSExploitMe" heading on the upper left of the page.

Each activity is broken up into Lessons and can be accessed by using the links on the upper right of the screen:

On newer versions, it will look a little prettier. I promise, I'll put in that new heap spray function soon :)

Lesson 1 - Learning WinDBG

Lesson 1 is entirely dedicated to WinDBG since it is so important to the whole exploitation process. The questions will require you to set breakpoints, dig into memory, and execute some common commands to obtain answers.

Lesson 2 - Stack-Based Overflow

Lesson 2 is focused around exploiting a basic stack-based overflow. The questions require you to understand how the stack operates, how to triage a stack-based overflow and finally how to exploit the condition. The first round walks you through the exploitation, the second is a bit harder - there is no walkthrough and it requires the use of IDA.

Lesson 3 - Use-After-Free on the Heap

Lesson 3 walks you through a use-after-free vulnerability on the heap. The questions help you understand how data is stored on the heap, how virtual function tables and pointers are structured, how to triage a use-after-free and finally how to exploit it. This very much mimics a traditional browser use-after-free and should get you on the right track when you have to tackle a real-world vulnerability.

Upcoming Lessons

The next few lessons that will be written will focus on bypassing exploit mitigations! Stay tuned!


FSExploitMe is available for download now! Answers can be provided if you just ask me for them - and you're not one of my students :)

Feedback welcome!

Wednesday, July 16, 2014

KLEE on Ubuntu 14.04 LTS 64Bit

by Brad Antoniewicz.

It seems like all of the cool kids nowadays are into symbolic execution, especially for vulnerability research. It's probably all because of DARPA's Cyber Grand Challenge - a government-sponsored challenge to develop a system that automates vulnerability discovery.

If you start to dive into the topic, you'll undoubtedly come across KLEE, a project out of Stanford University. KLEE is a great tool to get you started with symbolic execution; however, the setup can be slightly daunting for the "app crowd" :) KLEE's home page has a Getting Started page, but it is somewhat out of date. In this blog post we'll walk through the most up-to-date build process on a fresh install of Ubuntu 14.04 LTS Desktop 64-bit.


As with all installations, first make sure you're all up to date:
sudo apt-get update
sudo apt-get upgrade

Apt Packages

Now we'll get the easy stuff out of the way, and install all of the required packages:

sudo apt-get install g++ curl python-minimal git bison flex bc libcap-dev build-essential libboost-all-dev ncurses-dev cmake

LLVM-GCC Binaries

Next we'll need to download the LLVM-GCC binaries and extract them to our home directory:

wget http://llvm.org/releases/2.9/llvm-gcc4.2-2.9-x86_64-linux.tar.bz2
tar -jxvf llvm-gcc4.2-2.9-x86_64-linux.tar.bz2

Environment Variables

At this point, we'll need to set up a few environment variables for everything else to run properly. As stated on KLEE's Getting Started page, most issues people have are related to not setting these:

export C_INCLUDE_PATH=/usr/include/x86_64-linux-gnu 
export CPLUS_INCLUDE_PATH=/usr/include/x86_64-linux-gnu
export PATH=$PATH:$HOME/llvm-gcc4.2-2.9-x86_64-linux/bin

It's also recommended to add these to your .bashrc (note the single quotes, which keep $PATH and $HOME from being expanded when the echo runs):

echo 'export C_INCLUDE_PATH=/usr/include/x86_64-linux-gnu' >> ~/.bashrc
echo 'export CPLUS_INCLUDE_PATH=/usr/include/x86_64-linux-gnu' >> ~/.bashrc
echo 'export PATH=$PATH:$HOME/llvm-gcc4.2-2.9-x86_64-linux/bin' >> ~/.bashrc

Building LLVM 2.9

KLEE specifically requires that you use LLVM 2.9. Now, Ubuntu does have an llvm-2.9 package, and LLVM 2.9 binaries are available from a couple of different locations. However, I decided to stay as true to KLEE's Getting Started instructions as possible. Let's download the source:

wget http://llvm.org/releases/2.9/llvm-2.9.tgz
tar -zxvf llvm-2.9.tgz
cd llvm-2.9

Before we build, we need to apply one patch:

wget http://www.mail-archive.com/klee-dev@imperial.ac.uk/msg01302/unistd-llvm-2.9-jit.patch
patch -p1 < unistd-llvm-2.9-jit.patch 

And now we can build:

./configure --enable-optimized --enable-assertions
make
cd $HOME

The build might produce some warnings but they can all be safely ignored.

Building Simple Theorem Prover

Simple Theorem Prover (STP) was the source of a couple of problems; rather than following the Getting Started page, take this approach:

git clone https://github.com/stp/stp.git
cd stp
mkdir build && cd build
cmake -G 'Unix Makefiles' $HOME/stp
sudo make install
sudo ldconfig
ulimit -s unlimited
cd $HOME


Our last dependency is klee-uclibc. To get that set up:

git clone --depth 1 --branch klee_0_9_29 https://github.com/klee/klee-uclibc.git
cd klee-uclibc/
./configure --with-llvm-config $HOME/llvm-2.9/Release+Asserts/bin/llvm-config --make-llvm-lib
make -j`nproc`
cd $HOME

Building KLEE

With all of our dependencies out of the way, we can build KLEE:

git clone https://github.com/klee/klee.git
cd klee
./configure --enable-posix-runtime --with-stp=/usr/local --with-llvm=$HOME/llvm-2.9/ --with-uclibc=$HOME/klee-uclibc/
make check
make unittests
sudo make install
cd $HOME

Testing with an example

Just to confirm everything is working, you can run through Tutorial 1:
cd $HOME/klee/examples/get_sign
llvm-gcc -I ../../include --emit-llvm -c -g get_sign.c
klee get_sign.o

You're ready to go! Good luck!

Have a different set up? Let us know in the comments below!

Tuesday, July 8, 2014

Writing Slack Space on Windows

By Diego Urquiza.

I’m a Foundstone intern in the NYC office, and for a project I decided to write a tool that wipes file slack space. In this post I’ll introduce the approaches I took in writing the tool and then provide the tool itself. Hope you enjoy it!


File slack space is the unused space at the end of a file’s last cluster. File systems organize file data through an allocation unit called a cluster. A cluster is composed of a sequence of sectors on disk, each containing a set number of bytes. A disk with 512-byte sectors and 8 sectors per cluster would have 4-kilobyte (4096-byte) clusters. Since the file system allocates files in whole clusters, there ends up being unused space at the end of the last cluster. From a security standpoint, this unused space can contain data from a previously deleted file, which may hold valuable information.
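The arithmetic above is simple enough to sketch: for a given file size and the example disk geometry, the slack is just the distance from end-of-file to the next cluster boundary.

```python
def slack_space(file_size: int, bytes_per_sector: int = 512,
                sectors_per_cluster: int = 8) -> int:
    """Unused bytes between end-of-file and the end of its last cluster."""
    cluster = bytes_per_sector * sectors_per_cluster   # 4096 in the example
    remainder = file_size % cluster
    return 0 if remainder == 0 else cluster - remainder

# A 5000-byte file on 4 KB clusters occupies two clusters (8192 bytes),
# leaving 3192 bytes of slack at the end of the second cluster.
print(slack_space(5000))
```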

When a user chooses to delete a file, the file system simply deletes the pointer to the location of the file data on disk. The operating system then treats that space as available, so when a new file is created, the file system may write over the old data that the user thought he or she deleted. My tool aims to write over file slack space multiple times with meaningless data so that any prior data cannot be retrieved.


Download my tool here:

Approaching The Problem

When I began this project I saw two possible approaches:

  1. Find the location of every file’s last cluster, move the pointer to the end of the file data, and write zeros until the end of the cluster
  2. Resize the file, read/write the data, then trim it back down

Moving the File Pointer

This seemed like a feasible approach since the unused bytes were inconsequential. However, after tackling it for about a week, I found that everything can and will go wrong. Depending on the file system in use, each disk’s data is organized differently. Moreover, if the file data is small enough, it is stored inside the Master File Table (in the case of NTFS), and if the data is larger than a cluster, it may be organized non-contiguously on the logical disk. The Windows API functions can become frustrating to use, since you need to map the file’s virtual clusters to logical clusters while accounting for the volume offset. Writing meaningless data to the disk can end in tragedy if the byte offset from the beginning of the disk is wrong (the next cluster over could belong to a system file).

Here’s a code snippet for this approach:

 /********** Find the last cluster ***********************************/

// FSCTL_GET_RETRIEVAL_POINTERS maps the file's virtual clusters (VCNs)
// to logical clusters (LCNs); the argument list here is reconstructed,
// as the original snippet was truncated
returns = DeviceIoControl(hfile.hanFile, FSCTL_GET_RETRIEVAL_POINTERS,
                &startVcn, sizeof(startVcn),
                retrievalBuffer, sizeof(fileInfoBuffer),
                &dwBytesReturned, NULL);

DWORD lastExtentN = retrievalBuffer->ExtentCount - 1;
LONGLONG lastExtent = retrievalBuffer->Extents[lastExtentN].Lcn.QuadPart;
LONGLONG lengthOfExtent = retrievalBuffer->Extents[lastExtentN].NextVcn.QuadPart
                - retrievalBuffer->Extents[lastExtentN - 1].NextVcn.QuadPart;

while (error == ERROR_MORE_DATA){

                error = GetLastError();
                switch (error){

                case ERROR_HANDLE_EOF:
                                // file sizes 0-1kb will return an EOF error
                                cout << "ERROR_HANDLE_EOF" << endl;
                                returns = true;
                                break;

                case ERROR_MORE_DATA:
                                cout << "ERROR_MORE_DATA" << endl;
                                startVcn.StartingVcn = retrievalBuffer->Extents[0].NextVcn;
                                break;

                case NO_ERROR:
                                cout << "NO_ERROR, here is some info: " << endl;
                                cout << "This is the lcn extent: " << retrievalBuffer->Extents[lastExtentN].Lcn.QuadPart << endl;
                                cout << "This is the next vcn: " << retrievalBuffer->Extents[lastExtentN].NextVcn.QuadPart << endl;
                                cout << "This is the extent count: " << retrievalBuffer->ExtentCount << endl;
                                cout << "This is the starting vcn: " << retrievalBuffer->StartingVcn.QuadPart << endl;
                                cout << "The length of the cluster is: " << lengthOfExtent << endl;
                                cout << "The last cluster is: " << lastExtent + lengthOfExtent - 1 << endl << endl << endl;
                                returns = true;
                                break;

                default:
                                cout << "Error in the code or input error" << endl;
                                break;
                }
}

Resizing Tricks

The second approach was to resize the file and then trim it back down. Resizing the file was the right (safe) direction to go. You can easily iterate through all the files on a volume and get a file handle for each one. Then, with each file handle, you can calculate the file's slack space based on the system information (bytes per sector and sectors per cluster). Finally, move the file pointer to the beginning of the slack space, save the pointer location, write zeros to the end of the cluster, trim the file back down to the saved pointer location, and do it again. It is important to write meaningless data multiple times because even after one overwrite, the old data may still be retrievable. This method unfortunately cannot be used on files that are in use.

Here’s a code snippet from this approach:

 /******** Loop to write alternating 0s and 1s over the slack space (4 times) ********/
for (int a = 2; a < 6; a++){

                // Alternate 0s and 1s as the fill byte
                int b, c;
                b = 2;
                c = a % b;

                char *wfileBuff = new char[info.slackSpace];
                memset(wfileBuff, c, info.slackSpace);

                // Move the file pointer to the end of the file data
                // (the start of the slack space) and save its location
                returnz = SetFilePointer(info.hanFile, info.fileSize.QuadPart, NULL, FILE_BEGIN);
                if (returnz == 0){
                                cout << "Error with SetFilePointer" << endl;
                                return 0;
                }

                /**** Lock file, write data, unlock file *******/
                if (LockFile(info.hanFile, returnz, 0, info.slackSpace, 0) == 0){
                                cout << "There is an error with LockFile" << endl;
                                return 0;
                }
                returnz = WriteFile(info.hanFile, wfileBuff, info.slackSpace, &dwBytesWritten, NULL);
                if (returnz == 0){
                                cout << "There is an error with WriteFile" << endl;
                                cout << "Error: " << GetLastError() << endl;
                                return 0;
                }
                UnlockFile(info.hanFile, returnz, 0, info.slackSpace, 0);

                // Cut the extra data back out of the file: restore the saved
                // pointer location and set it as the new end of file
                SetFilePointer(info.hanFile, info.fileSize.QuadPart, NULL, FILE_BEGIN);
                if (SetEndOfFile(info.hanFile) == 0){
                                cout << "Error setting the end of the file" << endl;
                                return 0;
                }

                delete[] wfileBuff;
}

Even though the second approach has its drawbacks, it is safer and works across different file systems. The focus for the next version will be optimizing for speed and adding extra features, such as finding a file's offset on the disk (useful for finding bad sectors) and displaying volume and file information such as available size. Working on this project was an interesting experience that has helped me grow from a computer forensics perspective, and I can’t wait to see what I can do next.

Tuesday, June 24, 2014

Approaches to Vulnerability Disclosure

By Brad Antoniewicz.

The excitement of finding a vulnerability in a piece of commercial software can quickly shift to fear and regret when you disclose it to the vendor and find yourself in a conversation with a lawyer questioning your intentions. This is an unfortunate reality in our line of work, but you can take actions to protect yourself. In this post, we’ll take a look at how vulnerability disclosure is handled in standards, by bug hunters, and by large organizations, so that you can figure out how to make the best decision for you.

Disclosure Standpoints

While it’s debatable, I think hacking, and more specifically vulnerability discovery, started as a way to better the overall community – e.g. we can make the world a better, more secure place by finding and fixing vulnerabilities within the software we use. Telling software maintainers about vulnerabilities we find in their products falls right in line with this idea. However, there is also something else to consider: recognition and sharing. If you spend weeks finding an awesome vulnerability, you should be publicly recognized for that effort, and moreover, others should also know about your vulnerability so they can learn from it.

Unfortunately, vendors often lack the same altruistic outlook. From a vendor’s perspective, a publicly disclosed vulnerability highlights a flaw in their product, which may negatively impact its customer base. Some vendors even interpret vulnerability discovery as a direct attack against their product, or even their company. I’ve personally had lawyers ask me “Why are you hacking our company?” when I disclosed a vulnerability in their offline desktop application.

As time progressed, vulnerability discovery shifted from a hobby and “betterment” activity to a profitable business. There are plenty of organizations out there selling exploits for undisclosed vulnerabilities, plus a seemingly even greater number of criminal or state-sponsored organizations leveraging undisclosed vulnerabilities for corporate espionage and nation-state attacks. This shift has turned computer hacking from a “hippy” activity into serious business.

The emergence of bug bounty programs has really helped steer bug hunters away from criminal outlets by offering monetary rewards and public recognition. It has also demystified how disclosure is handled. However, not all vendors offer a bug bounty program, and many times lawyers may not even be aware of the bug bounty programs available in their own organization, which could put you in a sticky situation if you take the wrong approach to disclosure.

General Approaches

In general, there are three categories of disclosure:

  • Full disclosure – Full details are released publicly as soon as possible, often without vendor involvement
  • Coordinated disclosure – Researcher and vendor work together so that the bug is fixed before the vulnerability is disclosed
  • Private or non-disclosure – The vulnerability is released to a small group of people (not the vendor) or kept private

These categories broadly classify disclosure approaches, but many actual disclosure policies are unique in that they set time limitations on vendor response, etc.

Established Disclosure Standards

To give better perspective, let's look at some existing standards that help guide you in the right direction.

  • Internet Engineering Task Force (IETF) – Responsible Vulnerability Disclosure Process - The Responsible Vulnerability Disclosure Process established by this IETF draft is one of the first efforts made to create a process that establishes roles for all parties involved. This process accurately defines the appropriate roles and steps of a disclosure; however, it fails to address publication by the researcher if the vendor fails to respond or causes unreasonable delays. At most, the process states that the vendor must provide specific reasons for not addressing a vulnerability within 30 days of initial notification.
  • Organization for Internet Safety (OIS) Guidelines for Security Vulnerability Reporting and Response - The OIS guidelines provide further clarification of the disclosure process, offering more detail and establishing terminology for common elements of a disclosure such as the initial vulnerability report (Vulnerability Summary Report), request for confirmation (Request for Confirmation Receipt), status request (Request for Status), etc. As with the Responsible Vulnerability Disclosure Process, the OIS Guidelines also do not define a hard time frame for when the researcher may publicize details of the vulnerability. If the process fails, the OIS Guidelines define a “Conflict Resolution” step which ultimately results in the ability for either party to exit the process; however, no disclosure option is provided. The OIS also addresses the scenario where an unrelated third party discloses the same vulnerability – at that time the researcher may disclose without the need for a vendor fix.
  • Microsoft Coordinated Vulnerability Disclosure (CVD) - Microsoft’s Coordinated Vulnerability Disclosure is similar to responsible disclosure in that its aim is to have both the vendor and the researcher (finder) work together to disclose information about the vulnerability at a time after a resolution is reached. However, CVD refrains from defining any specific time frames and only permits public disclosure after a vendor resolution or evidence of exploitation is identified.

Coordinator Policies

Coordinators act on behalf of a researcher to disclose vulnerabilities to vendors. They provide a level of protection to the researcher and also take on the role of finding an appropriate vendor contact. While a coordinator's goal is to notify the vendor, they also satisfy the researcher’s aim to share the vulnerability with the community. This section gives an overview of coordinator policies.

  • Computer Emergency Response Team Coordination Center (CERT/CC) Vulnerability Disclosure Policy - The CERT/CC vulnerability disclosure policy sets a firm 45 day time frame from initial report to public disclosure. This occurs regardless of whether a patch or workaround is released by the vendor. Exceptions to this policy do exist for critical issues in core components of technology that require a large effort to fix, such as vulnerabilities in standards or core components of an operating system.
  • Zero Day Initiative (ZDI) Disclosure Policy - ZDI is a coordinator that offers monetary rewards for vulnerabilities. It uses the submitted vulnerabilities to generate signatures so that its security products can offer clients early detection and prevention. After making a reasonable effort, ZDI may disclose vulnerabilities within 15 days of initial contact if the vendor does not respond.

Researcher Policies

Security companies commonly support vulnerability research and make their policies publically available. This section provides an overview of a handful:

  • Rapid7 Vulnerability Disclosure Policy - Rapid7 attempts to contact the vendor via telephone and email, then after 15 days, regardless of response, will post its findings to CERT/CC. This combination gives the vendor a potential 60 days before public disclosure, because it is CERT/CC’s policy to wait 45 days.
  • VUPEN Policy - VUPEN is a security research company that adheres to a “commercial responsible disclosure policy”, meaning any vendor who is under contract with VUPEN will be notified of vulnerabilities, however all other vulnerabilities are mostly kept private to fund the organization’s exploitation and intelligence services.
  • Trustwave/SpiderLabs Disclosure Policy - Trustwave makes a best effort approach to contacting the vendor then ultimately puts the decision of public disclosure in its management’s hands if the vendor is unresponsive.

Summary of Approaches

The following summarizes the approaches mentioned above across five dimensions: notification method, receipt time frame, status update time frame, verification/resolution time frame, and disclosure time frame.

  • Responsible Vulnerability Disclosure Process – Notification: emails and other public info such as domain registrar, etc. Receipt: 7 days. Status updates: every 7 days or as otherwise agreed. Verification/resolution: vendors make a best effort to address within 30 days, and can request an additional 30 day grace period and extensions without defined limits. Disclosure: after resolution by the vendor.
  • OIS Guidelines for Security Vulnerability Reporting and Response – Notification: emails and other public info such as domain registrar, etc. Receipt: 7 days, then the finder can send a request for receipt; after three more days, go to conflict resolution. Status updates: every 7 days or as otherwise agreed; the finder can send a request for status if the vendor does not comply, and after three days, go to conflict resolution. Verification/resolution: 30 day suggestion from vendor receipt, although it should be defined on a case by case basis. Disclosure: after resolution by the vendor.
  • Microsoft Coordinated Vulnerability Disclosure – Notification: emails, search engine results, etc. Receipt: not defined. Status updates: not defined. Verification/resolution: not defined. Disclosure: after resolution by the vendor.
  • CERT/CC Vulnerability Disclosure Process – Notification: not published. Receipt: not defined. Status updates: not defined. Verification/resolution: not defined. Disclosure: 45 days from initial notification.
  • ZDI Disclosure Policy – Notification: emails, then telephone contact after 5 days. Receipt: 5 days for a telephone response, then an intermediary. Status updates: not defined. Verification/resolution: 6 months. Disclosure: 15 days if no response is provided after initial notification; up to 6 months if a notification response is provided.
  • Rapid7 Vulnerability Disclosure Policy – Notification: not defined. Receipt: 15 days after phone/email. Status updates: not defined. Verification/resolution: not defined. Disclosure: 15 days, then disclosure to CERT/CC.
  • VUPEN Policy – Notification: only to customers under contract. Receipt: not defined. Status updates: not defined. Verification/resolution: not defined. Disclosure: only to customers.
  • Trustwave/SpiderLabs Disclosure Policy – Notification: not defined. Receipt: 5 days. Status updates: 5 days. Verification/resolution: not defined. Disclosure: if the vendor is unresponsive for more than 30 days after initial contact, potential disclosure is decided by Trustwave management.

What to do?

Consider all of the above approaches, and let the vendor know your policy when you disclose so they are aware of it. At the end of the day, it's always good to be flexible and as accommodating as possible to the vendor. However, also be sure the effort is equal: they should be responding in a reasonable time and making progress toward addressing the issue.

How do you handle disclosure? Let us know in the comments below!

Tuesday, June 17, 2014

Privilege escalation with AppScan

By Kunal Garg.

Web application vulnerability scanners are a necessary evil when it comes to achieving a rough baseline or some minimum level of security. While they should never be used as the only testament to an application's security, they do provide a level of assurance above no security testing at all. For the security professional, they serve as another tool in the toolbox. All web application scanners are different, and some require finer tuning than others. A common question with IBM's AppScan is, "How do you configure it to test only for privilege escalation issues?" In this post, we'll walk you through the steps!

Privilege escalation testing comes in handy during authorization testing, when you're trying to tell whether one user is authorized to access data or perform actions outside of their role.


Your first step is to run a post-authentication scan with a higher privilege user. In this example, we'll use "FSTESTADMIN". Ideally, you'll use a manual crawl so that the maximum number of URLs is covered.


Once the post-authentication scan is complete, configure AppScan as follows:

  1. Open a new scan and go to "Scan Configuration"
  2. Go to "Login Management" and record the login with the lower privilege user (say, "FSTESTUSER")
  3. Go to "Test", then "Privilege Escalation", and browse to the scan file created previously (the scan file for "FSTESTADMIN")
  4. Go to "Test Policy", select all tests (CTRL+A) and uncheck them
  5. In the find section, type "escalation" and select all the privilege escalation checks

Once all the above settings are complete, run the scan; AppScan will only run tests for privilege escalation.

This usually creates lots of false positives, as AppScan checks for the URLs from the higher privilege scan using the authentication credentials of the lower privilege user. Any URLs/pages which are common to both users will be reported as an issue (a false positive in this case).
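One way to triage those results by hand is to replay the URLs that appeared only in the higher privilege crawl using the lower privilege user's session, and flag any that still return content. The sketch below illustrates the idea; the URL lists and the `fetch` callable (assumed to perform a request with "FSTESTUSER"'s cookies) are hypothetical stand-ins, not AppScan's API:

```python
# Sketch: flag potential privilege escalation candidates by replaying
# URLs crawled as the admin user with the low-privilege user's session.
# fetch(url) is assumed to return (status_code, body) for a request
# made with the low-privilege user's cookies.
def find_escalation_candidates(admin_urls, user_urls, fetch):
    # URLs common to both crawls are the usual false positives, so
    # only look at pages the low-privilege crawl never reached.
    admin_only = set(admin_urls) - set(user_urls)
    candidates = []
    for url in sorted(admin_only):
        status, body = fetch(url)
        # A 200 with a non-empty body for a low-privilege session
        # suggests a missing authorization check; review it manually.
        if status == 200 and body:
            candidates.append(url)
    return candidates
```

Anything this flags still needs manual confirmation, since some applications return a 200 with an error page instead of a 403.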

Tuesday, June 10, 2014

Dojo Toolkit and Risks with Third Party Libraries

By Deepak Choudhary.

Third party libraries can become critical components of in-house developed applications; while the benefits of using them are huge, there are also some risks to consider. In this blog post we'll look at a common third party component of many web applications, the Dojo Toolkit. After noticing it was included during a recent web application penetration test, it became clear that the version incorporated within the application was vulnerable, and ultimately exposed the entire application to attack.

Dojo Toolkit

If you haven't encountered Dojo before, just know it is a JavaScript/AJAX library used to design cross-platform web and mobile applications. The framework itself provides various "widgets" that can be used to support a variety of browsers, everything from Safari and Chrome on iPhone to BlackBerry.

Documented Vulnerabilities

Dojo has had some serious security issues reported in the past, such as XSS, DOM-based XSS, and URL redirection, so it's important to stay up to date with the latest version if you leverage it within your application.

Vulnerable versions: Dojo 0.4 through Dojo 1.4
Latest Version: Dojo 1.9.3
Reference: http://dojotoolkit.org/ , http://dojotoolkit.org/features/mobile

Files with known vulnerabilities

  • dojo/resources/iframe_history.html
  • dojox/av/FLAudio.js
  • dojox/av/FLVideo.js
  • dojox/av/resources/audio.swf
  • dojox/av/resources/video.swf
  • util/buildscripts/jslib/build.js
  • util/buildscripts/jslib/buildUtil.js
  • util/doh/runner.html
  • /dijit/tests/form/test_Button.html

Prior attack strings

  • http://WebApp/dojo/iframe_history.html?location=http://www.google.com
  • http://WebApp/dojo/iframe_history.html?location=javascript:alert%20%289999%2
  • http://WebApp/util/doh/runner.html?dojoUrl='/>foo</script><'"<script>alert(/xss/)</script>
  • http://WebApp/util/doh/runner.html?testUrL='/>foo</script><'"<script>alert(/xss/)</script>
  • http://WebApp/dijit/tests/form/test_Button.html?theme="/><script>alert(/xss/)</script>
  • dojox/av/FLAudio.js (allowScriptAccess:"always")
  • dojox/av/FLVideo.js (allowScriptAccess:"always"), etc.

If you use Dojo, make sure you have an updated version installed or remove these files (if not needed) from the application's directories.
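As a quick self-check, you can enumerate where those known-vulnerable files would live on your own deployment and confirm they are gone. A minimal sketch (a subset of the file list above; the base URL is a placeholder for your application's Dojo root):

```python
from urllib.parse import urljoin

# A subset of the known-vulnerable Dojo files from the list above,
# relative to wherever the toolkit is deployed.
VULNERABLE_FILES = [
    "dojo/resources/iframe_history.html",
    "dojox/av/FLAudio.js",
    "dojox/av/FLVideo.js",
    "util/doh/runner.html",
    "dijit/tests/form/test_Button.html",
]

def probe_urls(base_url):
    """Build the URLs to request (with any HTTP client) to confirm
    the vulnerable files have been removed from a deployment."""
    if not base_url.endswith("/"):
        base_url += "/"
    return [urljoin(base_url, path) for path in VULNERABLE_FILES]
```

Each URL should return a 404; anything that resolves deserves a closer look against the attack strings above.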

Tuesday, June 3, 2014

Debugging Android Applications

By Naveen Rudrappa.

Using a debugger to manipulate application variables at runtime can be a powerful technique to employ while penetration testing Android applications. Android applications can be unpacked, modified, re-assembled, and converted to gain access to the underlying application code, however understanding which variables are important and should be modified is a whole other story that can be laborious and time consuming. In this blog post we'll highlight the benefits of runtime debugging and give you a simple example to get you going!

Debugging is a technique where a hook is attached to a particular piece of application code. Execution pauses once that code is reached, giving us the ability to analyze local variables, dump class values, modify values, and generally interact with the program state. Then, when we're ready, we can resume execution.

Required Tools

If you have done any work with Android applications, you shouldn't need any new tools:

  1. The application's installation package
  2. Java SDK
  3. Android SDK
Reverse engineering plays a prominent role when penetration testing Android applications, and we'll use a bit of it here to make the target application debuggable.

The AndroidManifest.xml contained within the application's .apk has an android:debuggable attribute which controls whether the application can be debugged. So we'll need to use APK Manager to decompress the installation package and add android:debuggable="true".
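For reference, the attribute lives on the &lt;application&gt; element of AndroidManifest.xml; after decompressing the .apk, the edit looks roughly like this (the package name and other attributes are placeholders):

```xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.example.target">
    <!-- Adding android:debuggable="true" allows a debugger to attach -->
    <application android:label="Target" android:debuggable="true">
        <!-- activities, services, etc. -->
    </application>
</manifest>
```

After editing, repackage and re-sign the .apk before reinstalling it on the device.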


We'll need to attach the debugger to our application in order to debug it. Using adb jdwp, we can list all of the running debuggable applications, and as long as the target application was the last to be loaded, we can reliably guess that the last process ID on the list is ours.

Next we'll need to forward our debugging session to a port we can connect to with our debugger:

 adb forward tcp:8000 jdwp:498 

Finally we can attach the debugger with:

 jdb -connect com.sun.jdi.SocketAttach:hostname=localhost,port=8000 

With the debugger attached, we can set breakpoints at the required functions and analyze the application behavior at runtime. To identify function names, you can decompile the application to dex and use that code to guide your debugging session.

Some useful JDB commands for debugging:

  1. stop in [function name] - Set a breakpoint
  2. next - Executes one line
  3. step - Step into a function
  4. step up - Step out of a function
  5. print obj - Print an object's value
  6. dump obj - Dump an object's fields
  7. print [variable name] - Print the value of a variable
  8. set [variable name] = [value] - Change the value of a variable

An Exercise for You!

This application is a pretty simple one. Upon entering the correct PIN, 1234, the application responds with the message “Correct PIN entered”. Upon entering any value other than 1234, the application responds with the message “Invalid PIN”. Bypass this logic via debugging so that for any invalid PIN the application responds with the message “Correct PIN entered”. For the solution, refer to the image below; it summarizes all the commands needed to complete the challenge.