Tuesday, June 24, 2014

Approaches to Vulnerability Disclosure

By Brad Antoniewicz.

The excitement of finding a vulnerability in a piece of commercial software can quickly shift to fear and regret when you disclose it to the vendor and find yourself in a conversation with a lawyer questioning your intentions. This is an unfortunate reality in our line of work, but you can take actions to protect your butt. In this post, we'll take a look at how vulnerability disclosure is handled in standards, by bug hunters, and by large organizations, so that you can make the best decision for your situation.

Disclosure Standpoints

While it's debatable, I think hacking, and more specifically vulnerability discovery, started as an effort to better the overall community – i.e. we can make the world a better, more secure place by finding and fixing vulnerabilities within the software we use. Telling software maintainers about vulnerabilities we find in their products falls right in line with this idea. However, there is also something else to consider: recognition and sharing. If you spend weeks finding an awesome vulnerability, you should be publicly recognized for that effort, and moreover, others should know about your vulnerability so they can learn from it.

Unfortunately, vendors often lack the same altruistic outlook. From a vendor's perspective, a publicly disclosed vulnerability highlights a flaw in their product, which may negatively impact its customer base. Some vendors interpret vulnerability discovery as a direct attack against their product, or even their company. I've personally had lawyers ask me "Why are you hacking our company?" when I disclosed a vulnerability in their offline desktop application.

As time progressed, vulnerability discovery shifted from a hobby and "betterment" activity to a profitable business. There are plenty of organizations out there selling exploits for undisclosed vulnerabilities, and a seemingly even greater number of criminal or state-sponsored organizations leveraging undisclosed vulnerabilities for corporate espionage and nation-state attacks. This shift has turned computer hacking from a "hippy" activity into serious business.

The emergence of bug bounty programs has really helped steer bug hunters away from criminal outlets by offering monetary reward and public recognition. It has also demystified how disclosure is handled. However, not all vendors offer a bug bounty program, and many times lawyers may not even be aware of the bug bounty programs available within their own organization, which could put you in a sticky situation if you take the wrong approach to disclosure.

General Approaches

In general, there are three categories of disclosure:

  • Full disclosure – Full details are released publicly as soon as possible, often without vendor involvement
  • Coordinated disclosure – Researcher and vendor work together so that the bug is fixed before the vulnerability is disclosed
  • Private or Non-Disclosure – The vulnerability is released to a small group of people (not the vendor) or kept private

These categories broadly classify disclosure approaches, but many actual disclosure policies are unique in that they set their own conditions, such as time limits on vendor response.

Established Disclosure Standards

To give better perspective, let's look at some existing standards that help guide you in the right direction.

  • Internet Engineering Task Force (IETF) – Responsible Vulnerability Disclosure Process - The Responsible Vulnerability Disclosure Process established by this IETF draft is one of the first efforts to create a process that establishes roles for all parties involved. It accurately defines the appropriate roles and steps of a disclosure; however, it fails to address publication by the researcher if the vendor does not respond or causes unreasonable delays. At most, the process states that the vendor must provide specific reasons for not addressing a vulnerability within 30 days of initial notification.
  • Organization for Internet Safety (OIS) Guidelines for Security Vulnerability Reporting and Response - The OIS guidelines provide further clarification of the disclosure process, offering more detail and establishing terminology for common elements of a disclosure, such as the initial vulnerability report (Vulnerability Summary Report), request for confirmation (Request for Confirmation Receipt), and status request (Request for Status). As with the Responsible Vulnerability Disclosure Process, the OIS Guidelines do not define a hard time frame after which the researcher may publicize details of the vulnerability. If the process fails, the OIS Guidelines define a "Conflict Resolution" step, which ultimately gives the parties the ability to exit the process; however, no disclosure option is provided. The OIS also addresses the scenario where an unrelated third party discloses the same vulnerability – at that point the researcher may disclose without waiting for a vendor fix.
  • Microsoft Coordinated Vulnerability Disclosure (CVD) - Microsoft's Coordinated Vulnerability Disclosure is similar to responsible disclosure in that its aim is to have both the vendor and the researcher (finder) work together and disclose information about the vulnerability after a resolution is reached. However, CVD refrains from defining any specific time frames and only permits public disclosure after a vendor resolution is available or evidence of active exploitation is identified.

Coordinator Policies

Coordinators act on behalf of a researcher to disclose vulnerabilities to vendors. They provide a level of protection to the researcher and also take on the task of finding an appropriate vendor contact. While a coordinator's goal is to notify the vendor, they also satisfy the researcher's aim to share the vulnerability with the community. This section gives an overview of coordinator policies.

  • Computer Emergency Response Team Coordination Center (CERT/CC) Vulnerability Disclosure Policy - The CERT/CC Vulnerability Disclosure Policy sets a firm 45-day time frame from initial report to public disclosure. Disclosure occurs regardless of whether a patch or workaround has been released by the vendor. Exceptions to this policy exist for critical issues in core components of technology that require a large effort to fix, such as vulnerabilities in standards or in core components of an operating system.
  • Zero Day Initiative (ZDI) Disclosure Policy - ZDI is a coordinator that offers monetary rewards for vulnerabilities. It uses the submitted vulnerabilities to generate signatures so that its security products can offer clients early detection and prevention. After making a reasonable effort, ZDI may disclose vulnerabilities within 15 days of initial contact if the vendor does not respond.

Researcher Policies

Security companies commonly support vulnerability research and make their policies publicly available. This section provides an overview of a handful:

  • Rapid7 Vulnerability Disclosure Policy - Rapid7 attempts to contact the vendor via telephone and email, then after 15 days, regardless of response, posts its findings to CERT/CC. This combination gives the vendor a potential 60 days before public disclosure, because it is CERT/CC's policy to wait 45 days.
  • VUPEN Policy - VUPEN is a security research company that adheres to a "commercial responsible disclosure policy": any vendor under contract with VUPEN is notified of vulnerabilities, while all other vulnerabilities are mostly kept private to fund the organization's exploitation and intelligence services.
  • Trustwave/SpiderLabs Disclosure Policy - Trustwave makes a best-effort attempt to contact the vendor, then ultimately puts the decision of public disclosure in its management's hands if the vendor is unresponsive.

Summary of Approaches

The following summarizes the approaches mentioned above. For each policy, we note how the vendor is notified, the time frame for acknowledging receipt, the time frame for status updates, the verification/resolution time frame, and the disclosure time frame.

  • Responsible Vulnerability Disclosure Process – Notification: published addresses and other public info such as the domain registrar; Receipt: 7 days; Status updates: every 7 days or as otherwise agreed; Resolution: vendors make a best effort to address the issue within 30 days, and can request an additional 30-day grace period and extensions without defined limits; Disclosure: after resolution by the vendor.
  • OIS Guidelines for Security Vulnerability Reporting and Response – Notification: published addresses and other public info such as the domain registrar; Receipt: 7 days, after which the finder can send a request for receipt; after three more days, the process moves to conflict resolution; Status updates: every 7 days or as otherwise agreed; the finder can send a request for status if the vendor does not comply, and after three days the process moves to conflict resolution; Resolution: 30 days from vendor receipt is suggested, though it should be defined on a case-by-case basis; Disclosure: after resolution by the vendor.
  • Microsoft Coordinated Vulnerability Disclosure – Notification: public sources such as search engine results; Receipt: not defined; Status updates: not defined; Resolution: not defined; Disclosure: after resolution by the vendor.
  • CERT/CC Vulnerability Disclosure Process – Notification: contacts not published; Receipt: not defined; Status updates: not defined; Resolution: not defined; Disclosure: 45 days from initial notification.
  • ZDI Disclosure Policy – Receipt: 5 days for an email response, then telephone contact; 5 more days for a telephone response, then an intermediary; Status updates: not defined; Resolution: 6 months; Disclosure: 15 days if no response is provided after initial notification; up to 6 months if a response is provided.
  • Rapid7 Vulnerability Disclosure Policy – Notification: not defined; Receipt: 15 days after phone/email contact; Status updates: not defined; Resolution: not defined; Disclosure: after 15 days, findings are disclosed to CERT/CC.
  • VUPEN Policy – Notification: only to customers under contract; Receipt: not defined; Status updates: not defined; Resolution: not defined; Disclosure: only to customers.
  • Trustwave/SpiderLabs Disclosure Policy – Notification: not defined; Receipt: 5 days; Status updates: 5 days; Resolution: not defined; Disclosure: if the vendor is unresponsive for more than 30 days after initial contact, potential disclosure is decided by Trustwave management.
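To make the time-frame arithmetic concrete, here's a small sketch (the day counts come from the policy summaries above; the report date, dictionary, and function name are just for illustration) that computes the earliest possible public disclosure date under a few of these policies:

```python
from datetime import date, timedelta

# Days until public disclosure if the vendor never responds,
# per the policy summaries above (illustrative, not normative).
POLICY_DEADLINES = {
    "CERT/CC": 45,            # fixed 45 days from initial report
    "Rapid7": 15 + 45,        # 15 days, then CERT/CC's 45-day clock starts
    "ZDI (no response)": 15,  # 15 days if the vendor never answers
}

def earliest_disclosure(report_date, policy):
    """Earliest date details may go public under the named policy."""
    return report_date + timedelta(days=POLICY_DEADLINES[policy])

report = date(2014, 6, 24)
for policy in POLICY_DEADLINES:
    print(policy, earliest_disclosure(report, policy))
```

For a report filed June 24, 2014, this works out to August 8 under CERT/CC, August 23 under Rapid7's combined 60 days, and July 9 under ZDI's no-response case.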

What to do?

Consider all of the above approaches, and let the vendor know your policy when you disclose so they are aware of it. At the end of the day, it's always good to be flexible and as accommodating as possible to the vendor. However, also be sure the effort is mutual: they should be responding in a reasonable time and making progress toward addressing the issue.

How do you handle disclosure? Let us know in the comments below!

Tuesday, June 17, 2014

Privilege escalation with AppScan

By Kunal Garg.

Web application vulnerability scanners are a necessary evil when it comes to achieving a rough baseline or some minimum level of security. While they should never be used as the only measure of an application's security, they do provide a level of assurance above no security testing at all. For the security professional, they serve as another tool in the toolbox. All web application scanners are different, and some require finer tuning than others. A common question with IBM's AppScan is, "How do you configure it to test only for privilege escalation issues?" In this post, we'll walk you through the steps!

Privilege escalation testing comes in handy during authorization testing, when you're trying to determine whether one user is authorized to access data or perform actions outside of their role.


Your first step is to run a post-authentication scan with a higher-privileged user. In this example, we'll use "FSTESTADMIN". Ideally, you'll use a manual crawl so that the maximum number of URLs is covered.


Once the post-authentication scan is complete, configure AppScan as follows:

  1. Open a new scan and go to "Scan Configuration"
  2. Go to "Login Management" and record the login with the lower-privileged user (say, "FSTESTUSER")
  3. Go to "Test" then "Privilege Escalation" and browse to the scan file created previously (the scan file for "FSTESTADMIN")
  4. Go to "Test Policy", select all tests (CTRL+A) and uncheck them
  5. In the find section, type "escalation" and select all of the privilege escalation checks

Once all of the above settings are complete, run the scan. AppScan will only run tests for privilege escalation.

This usually creates lots of false positives, as AppScan requests the URLs from the higher-privilege scan using the authentication credentials of the lower-privileged user. Any URLs/pages which are common to both users will be reported as an issue (a false positive in this case).

Tuesday, June 10, 2014

Dojo Toolkit and Risks with Third Party Libraries

By Deepak Choudhary.

Third-party libraries can become critical components of in-house developed applications. While the benefits of using them are huge, there are also risks to consider. In this blog post, we'll look at a common third-party component of many web applications: the Dojo Toolkit. After noticing it was included during a recent web application penetration test, it became clear that the version incorporated within the application was vulnerable, and it ultimately exposed the entire application to attack.

Dojo Toolkit

If you haven't encountered Dojo before, just know that it is a JavaScript/AJAX library used to design cross-platform web and mobile applications. The framework provides various "widgets" that can be used to support a variety of browsers, everything from Safari, to Chrome on the iPhone, to BlackBerry.

Documented Vulnerabilities

Dojo has had some serious security issues reported in the past, such as XSS, DOM-based XSS, and URL redirection, so it's important to stay up to date with the latest version if you leverage it within your application.

Vulnerable versions: Dojo 0.4 through Dojo 1.4
Latest version: Dojo 1.9.3
Reference: http://dojotoolkit.org/, http://dojotoolkit.org/features/mobile
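If you need to check which camp a deployment falls into, a quick version-comparison sketch may help (a hedged illustration: the helper names are ours, the range comes from the advisory above, and the parsing only handles plain dotted version strings):

```python
# Sketch: flag Dojo versions in the known-vulnerable range (0.4 through
# 1.4, per the range above). Deliberately simple, illustrative parsing.
def parse_version(version):
    return tuple(int(part) for part in version.split("."))

def is_vulnerable(version):
    # Compare on (major, minor) only, so 1.4.x falls in the range too.
    return (0, 4) <= parse_version(version)[:2] <= (1, 4)

print(is_vulnerable("1.4"))    # True
print(is_vulnerable("1.9.3"))  # False
```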

Files with known vulnerabilities

  • dojo/resources/iframe_history.html
  • dojox/av/FLAudio.js
  • dojox/av/FLVideo.js
  • dojox/av/resources/audio.swf
  • dojox/av/resources/video.swf
  • util/buildscripts/jslib/build.js
  • util/buildscripts/jslib/buildUtil.js
  • util/doh/runner.html
  • /dijit/tests/form/test_Button.html
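If you have filesystem access to a deployment, one quick audit is to check for these exact files under the web root. A minimal sketch (the function name and example path are illustrative; the file list is reproduced from above):

```python
import os

# Files flagged above as having known issues, relative to the web root.
SUSPECT_FILES = [
    "dojo/resources/iframe_history.html",
    "dojox/av/FLAudio.js",
    "dojox/av/FLVideo.js",
    "dojox/av/resources/audio.swf",
    "dojox/av/resources/video.swf",
    "util/buildscripts/jslib/build.js",
    "util/buildscripts/jslib/buildUtil.js",
    "util/doh/runner.html",
    "dijit/tests/form/test_Button.html",
]

def find_suspect_files(webroot):
    """Return the subset of known-vulnerable files present under webroot."""
    return [f for f in SUSPECT_FILES
            if os.path.isfile(os.path.join(webroot, f))]

# Example against a hypothetical install location:
# print(find_suspect_files("/var/www/html"))
```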

Prior attack strings

  • http://WebApp/dojo/iframe_history.html?location=http://www.google.com
  • http://WebApp/dojo/iframe_history.html?location=javascript:alert%20%289999%2
  • http://WebApp/util/doh/runner.html?dojoUrl='/>foo</script><'"<script>alert(/xss/)</script>
  • http://WebApp/util/doh/runner.html?testUrL='/>foo</script><'"<script>alert(/xss/)</script>
  • http://WebApp/dijit/tests/form/test_Button.html?theme="/><script>alert(/xss/)</script>
  • dojox/av/FLAudio.js (allowScriptAccess:"always")
  • dojox/av/FLVideo.js (allowScriptAccess:"always"), etc.

If you use Dojo, make sure you have an updated version installed or remove these files (if not needed) from the application's directories.

Tuesday, June 3, 2014

Debugging Android Applications

By Naveen Rudrappa.

Using a debugger to manipulate application variables at runtime can be a powerful technique to employ while penetration testing Android applications. Android applications can be unpacked, modified, re-assembled, and converted to gain access to the underlying application code, however understanding which variables are important and should be modified is a whole other story that can be laborious and time consuming. In this blog post we'll highlight the benefits of runtime debugging and give you a simple example to get you going!

Debugging is a technique where a hook is attached to a particular piece of application code. Execution pauses once that piece of code is reached, giving us the ability to analyze local variables, dump class values, modify values, and generally interact with the program state. Then, when we're ready, we can resume execution.

Required Tools

If you have done any work with Android applications, you shouldn't need any new tools:

  1. The application's installation package (.apk)
  2. Java SDK
  3. Android SDK
Reverse engineering plays a prominent role when penetration testing Android applications, and we'll need a bit of it here just to make the target application debuggable.


The AndroidManifest.xml contained within the application's .apk has an android:debuggable attribute which controls whether the application can be debugged. So we'll need to use APK Manager to decompress the installation package and add android:debuggable="true" to the manifest.


We'll need to attach the debugger to our application in order to debug it. Using adb jdwp, we can list the process IDs of all running debuggable applications, and as long as the target application was the last to be loaded, we can reliably guess that the last process ID in the list is ours.
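That "last process ID" heuristic is easy to script. A small sketch (the sample adb jdwp output and the helper name are made up for illustration):

```python
def pick_last_jdwp_pid(jdwp_output):
    """Return the last PID printed by `adb jdwp` -- by the heuristic
    above, the most recently started debuggable application."""
    pids = [line.strip() for line in jdwp_output.splitlines() if line.strip()]
    if not pids:
        raise RuntimeError("no debuggable processes found")
    return pids[-1]

# Hypothetical `adb jdwp` output with our target loaded last:
sample = "312\n345\n498\n"
print(pick_last_jdwp_pid(sample))  # 498

# With a device attached you could feed it real output and then run
# the port-forward step, e.g. (note: adb jdwp keeps streaming process
# IDs until interrupted, so capture its output accordingly):
#   adb forward tcp:8000 jdwp:<picked pid>
```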

Next we'll need to forward our debugging session to a port we can connect to with our debugger (here, 498 is the process ID we identified above):

 adb forward tcp:8000 jdwp:498 

Finally we can attach the debugger with:

 jdb -connect com.sun.jdi.SocketAttach:hostname=localhost,port=8000 

With the debugger attached, we can set breakpoints at the required functions and analyze the application behavior at runtime. To identify function names, you can decompile the application to dex and use that code to guide your debugging session.

Some useful JDB commands for debugging:

  1. stop in [class].[method] - Set a breakpoint
  2. next - Execute one line
  3. step - Step into a function
  4. step up - Step out of a function
  5. print [object] - Print an object's value
  6. dump [object] - Dump all of an object's fields
  7. print [variable name] - Print the value of a variable
  8. set [variable name] = [value] - Change the value of a variable

An Exercise for You!

This application is a pretty simple one: upon entering the correct PIN (1234), the application responds with the message "Correct PIN entered"; upon entering any value other than 1234, it responds with "Invalid PIN". Bypass this logic via debugging so that, for any invalid PIN, the application responds with "Correct PIN entered". For the solution, refer to the image below; it summarizes all of the commands needed to complete the challenge.