Tuesday, April 15, 2014

Heartbleed Recap and Testing

By Mateo Martinez and Melissa Augustine.

CVE-2014-0160, also known as the "Heartbleed Bug", is a serious vulnerability in OpenSSL, one of the most widely used cryptographic libraries. The bug has been present in OpenSSL since March 14, 2012, when version 1.0.1 was released, and specifically affects OpenSSL's implementation of the TLS/DTLS protocols.

To summarize, Heartbleed allows anyone to read the memory of a system running services that use OpenSSL for TLS/DTLS.

Why Heartbleed?

TLS/DTLS use "heartbeat" (keep-alive) messages once a session is established to let hosts know that a connection is still needed and active. Here is an example of a normal heartbeat that occurs after the initial SSL connection has already taken place.



OpenSSL implemented this heartbeat in a way that allows the client to tell the server how much data to echo back, and a client can request up to 64k of memory per heartbeat. That memory can contain anything recently processed by the service, including usernames, passwords, and private keys.
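The shape of the malformed heartbeat can be sketched in a few lines of Python. This is an illustration of the record layout only (content type 0x18, a claimed payload length, and the actual payload), not a working exploit; it uses TLS 1.1 version bytes and omits the padding a real heartbeat message carries.

```python
import struct

def build_heartbeat(claimed_len, payload=b""):
    # Heartbeat message: type 0x01 (request), 2-byte claimed payload length,
    # then the actual payload bytes (deliberately shorter in an attack).
    hb = struct.pack(">BH", 0x01, claimed_len) + payload
    # TLS record header: content type 0x18 (heartbeat), version TLS 1.1
    # (0x0302), and the 2-byte length of the record body.
    return struct.pack(">BHH", 0x18, 0x0302, len(hb)) + hb

# A benign heartbeat echoes back exactly what it sends...
benign = build_heartbeat(4, b"ping")
# ...while a malicious one claims far more payload than it actually carries,
# tricking a vulnerable server into echoing back up to 64k of process memory.
malicious = build_heartbeat(0xFFFF, b"")
```

The entire bug is the gap between the claimed length field and the bytes that follow it: a patched server checks that the two agree, a vulnerable one does not.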



Affected Versions and Recommendations

OpenSSL versions 1.0.1 through 1.0.1f are vulnerable; the fix was implemented in version 1.0.1g. The blanket recommendation is to apply the patch and change passwords. As a user, it is important to ensure that whatever application you are using has already applied the patch before you change your passwords; otherwise your new password may still be susceptible to attack.
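The vulnerable range is narrow enough to check mechanically. Here is a minimal sketch, assuming plain version strings such as "1.0.1f"; real-world version strings can carry suffixes (e.g. "-fips") that this does not attempt to handle:

```python
def is_vulnerable(version):
    """Return True if an OpenSSL version string falls in the vulnerable
    1.0.1 through 1.0.1f range (1.0.1g contains the fix)."""
    base = "1.0.1"
    if not version.startswith(base):
        return False
    suffix = version[len(base):]
    # 1.0.1 itself and letter releases a-f are vulnerable; g and later are fixed.
    return suffix == "" or (len(suffix) == 1 and "a" <= suffix <= "f")
```

Remember that the binary's reported version is only half the story: some distributions backport the fix without bumping the letter, so a scanner-based check (below) is the more reliable test.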

Prior Detection

How do you tell if someone has used this attack against you in the past? That's the tricky part: the bug went unnoticed for a long time, so prior to its disclosure no sensors or products would detect it occurring. If you maintain network captures for your network, you may be able to query that data and look for a signature, but nothing is left on the server side that would indicate it was being exploited.

Now that the attack has been publicly disclosed, a multitude of detection mechanisms exist to alert administrators of a Heartbleed attack (including Snort rules, Tripwire, and honeypot scripts).

Testing

There are a ton of ways to test for Heartbleed - McAfee has released a Heartbleed Checker tool, there is a Metasploit module, and even an Nmap NSE script. We'll cover the Nmap script here.

Using Nmap

Checking for Heartbleed with Nmap is a painless process. First, download the script and place it in the default NSE folder (/usr/share/nmap/scripts).



Next, download the TLS library to the nselib folder:



Now make sure Nmap has updated its script database:

 root@kali:~# nmap --script-updatedb




And you're ready to roll! To test a specific host you can run:

 root@kali:~# nmap -vv -p 443 --script ssl-heartbleed www.somesite.com -oN somesite_outputfile



To test a range of hosts you can use:

 root@kali:~# nmap -vv -p 443 --script ssl-heartbleed 192.168.1.0/24 -oN subnet_outputfile



And to test multiple ports, just run:

 root@kali:~# nmap -sV -p 443,8443,6443 --script=ssl-heartbleed.nse 192.168.1.1



Tuesday, April 8, 2014

Secure Usage of Android WebView

By Naveen Rudrappa

The WebView class is one of Android's most powerful classes: it renders web pages like a normal browser. Applications can interact with a WebView by adding hooks, monitoring changes, injecting JavaScript, and more. As useful as this is, it introduces security holes if not used with caution. Because a WebView can be customized, it creates opportunities to break out of the sandbox and bypass the same-origin policy.

WebView allows sandbox bypass in two different scenarios:

  1. JavaScript can invoke Java code.
  2. Java code can invoke JavaScript.


Sample code to Invoke Java from JavaScript:

 wv.addJavascriptInterface(new FileUtils(), "file");
<script>
filename = '/data/data/com.Foundstone/data.txt';
file.write(filename, data, false);
</script>



Sample code to Invoke JavaScript from Java:

 String javascr = "javascript: var newscript=document.createElement(\"script\");";
javascr += "newscript.src=\"http://www.foundstone.com\";";
javascr += "document.body.appendChild(newscript);";
myWebView.loadUrl(javascr);



The addJavascriptInterface bridge deserves particular caution: any class exposed through it allows commands to be run on the Android device from JavaScript, leading to complete compromise. Hence exposing objects via addJavascriptInterface is not safe by default.

To use WebView securely, follow the solutions below:

  • Compile the application against Android API level 17 or higher. From API level 17 on, developers must add the @JavascriptInterface annotation to any method that is to be exposed to JavaScript, which also prevents access to operating system commands (via java.lang.Runtime).
  • Disable support for JavaScript. If there is no reason to support JavaScript within the WebView, then it should be disabled. The Android WebSettings class can be used to disable support for JavaScript via the public method setJavaScriptEnabled:
     webview = new WebView(this);
     webview.getSettings().setJavaScriptEnabled(false);
    


  • Send all traffic over SSL. Any traffic sent in the clear is easy to sniff and manipulate using a man-in-the-middle (MITM) attack; over SSL, an attacker cannot inject script via MITM to break out of the WebView sandbox.
  • Restrict the WebView to the application's own domain, using code such as the example below. By restricting users to known domains, we prevent JavaScript from being loaded from untrusted websites.
     WebViewClient wvclient = new WebViewClient() {
    // override the "shouldOverrideUrlLoading" hook.
    public boolean shouldOverrideUrlLoading(WebView view, String url) {
        if (!url.startsWith("http://clientlocation.com")) {
            // Hand any off-domain URL to the default browser instead.
            Intent i = new Intent("android.intent.action.VIEW", Uri.parse(url));
            startActivity(i);
            return true;
        }
        return false;
    }
    // override the "onPageFinished" hook.
    public void onPageFinished(WebView view, String url) { ... }
    };
    webView.setWebViewClient(wvclient);
    


Tuesday, April 1, 2014

Application Whitelisting Programs, WinXP EoS, and HIPAA's Security Rule

By The Foundstone Strategic Services Team.

The United States Department of Health and Human Services (HHS) has stated that the “Security Rule does not specify minimum requirements for personal computer operating systems”. Microsoft’s own Windows XP enterprise end-of-support website points readers directly to the HHS Security Rule guidance on operating system requirements for the personal computer systems used by a covered entity. The HHS guidance covers a situation such as Windows XP End of Support (EoS) when it states that:

"any known security vulnerabilities of an operating system should be considered in the covered entity’s risk analysis (e.g., does an operating system include known vulnerabilities for which a security patch is unavailable, e.g., because the operating system is no longer supported by its manufacturer).”

HHS guidance explicitly addresses the security compliance that an operating system provides when it states:

“the security capabilities of the operating system may be used to comply with technical safeguards standards and implementation specifications such as audit controls, unique user identification, integrity, person or entity authentication, or transmission security.”

It is clear that an unsupported operating system will need significant technical safeguards, deployed and configured properly, to reduce the risk of its exploitation. Application whitelisting used to be considered an optional technical security control, but as the nature of networks and applications changed, it moved past being a “best practice” years ago; it is now considered both a basic and standard security control. When configured properly, these programs can arguably be the strongest component of operating system defense in depth. They can protect against the deliberate or inadvertent exploitation of operating system vulnerabilities, regardless of whether the workstation activity is performed by authorized users, unauthorized users, or malware. Application whitelisting has been identified as the first of the five “Quick Wins” in the Top 20 Security Controls – these are the sub-controls that have the most immediate impact on preventing attacks.

These programs offer a range of features that significantly reduce the attack surfaces threats are actively attempting to exploit. Risk is reduced because there is much less opportunity to deliberately or unintentionally exploit potential weak spots or vulnerabilities. The ability of application whitelisting programs to limit, disable, or restrict access makes them a significant part of defense-in-depth best practices for all operating systems, including Windows XP as it becomes unsupported.

We'll focus on the feature set of McAfee's Application Control, since this is most available to us, but most other feature-rich whitelisting applications should contain similar functionality. If you're unsure whether all of these items are addressed by the particular program you're evaluating, reach out to the vendor or conduct your own analysis.

Application Control

Achieving compliance with the Security Rule while continuing to use Windows XP will involve documenting your risk analysis and using reasonable and appropriate technical safeguards, such as application whitelisting, to reduce the likelihood that threats can exploit vulnerabilities.

Human Threats Addressed:

  • Abuse of Information System
  • Abuse of Privileges
  • Abuse of Resources
  • Damage to ePHI or Business Information
  • Destruction of ePHI or Business Information
  • Theft of ePHI or Business Information
  • Theft of Financial Assets


Threat Agents:

  • Reckless Insiders
  • Untrained Insiders
  • Reckless Information Partner
  • Untrained Information Partner
  • Reckless Line of Business
  • Untrained Line of Business
  • Disgruntled Insider
  • Disgruntled Information Partner
  • 3rd Party Threats
  • Organized Crime


Application whitelisting programs also directly support you if you will be the recipient of a HIPAA Audit Protocol assessment pursuant to the HITECH Act audit mandate. They can specifically enforce or support compliance for components in the Audit Protocol assessment of:
  • Information Access Management §164.308(a)(4)
  • Workstation Use (§164.310(b))
  • Access Control requirement “to allow access only to those persons or software programs that have been granted access rights”
  • Audit Control (§164.312(b))


For environments that must comply with the Centers for Medicare & Medicaid Services (CMS) requirements, which involve NIST 800-53 standards, application whitelisting programs support meeting these NIST control family standards:

  • Access Control (AC) - This control family includes mechanisms used to designate who or what is to have access to a specific resource and the type of transactions and functions that are permitted.
  • Configuration Management (CM) - This control family aims to address the activities that present a risk of integration failure due to component change. This includes change control processes and asset management.
  • Maintenance (MA) - This control family addresses the requirement that trusted systems within the environment retain their trustworthiness over time. Key elements include patch management, system builds, and hardening processes.
  • System and Information Integrity (SI) – The controls in this family are used to protect data from accidental or malicious alteration or destruction and to provide assurance to the user the information meets expectations about its quality and integrity. Additionally, this family covers various aspects of flaw remediation.


CMS has also referenced the Top 20 Critical Security Controls (now maintained by The Council on CyberSecurity). The latest version of the Top 20 (Critical Controls Version 5.0) continues identifying application whitelisting as the first of five “Quick Wins”; these are the sub-controls that have the most immediate impact on preventing attacks.

Enjoy!




Wednesday, March 26, 2014

Extending Burp Proxy With Extensions

By Chris Bush.

The world of information security is awash with tools to help security practitioners do their jobs more easily, accurately and productively. Regardless of whether you are responsible for doing PCI audits, network vulnerability assessments, enterprise risk assessments, social engineering, or what have you, there’s a tool for that. Usually there are several. Some are good, some not so much. One of the reasons a tool may or may not ultimately be useful is the ability for its functionality to be customized or extended to meet the needs of the practitioner using it. This is never truer than in application security, where every application the security tester confronts is different from the last. Bespoke applications demand bespoke security testing, and this requires that the tools used by the application security professional be not only robust and feature rich, but customizable in a way that allows them to be rapidly extended to fit to the needs of the job at hand.

Pound for pound (or maybe dollar for dollar), the Burp Suite is one of the best tools an application security professional can have in their tool kit. It has capabilities and features on par with, or exceeding, those of big-name commercial application scanners costing tens of thousands of dollars more, all in a single UI where all of the tools integrate and work together seamlessly. Often overlooked is the fact that Burp includes an extensibility framework that allows you to extend Burp’s functionality in a number of useful ways, through loading 3rd party extensions, or writing your own.

An Overview of Extending Burp

The Burp extensibility framework provides the ability to easily extend Burp’s functionality in many useful ways, including:

  • Analyzing and modifying HTTP requests/responses
  • Customizing the placement of attack insertion points within scanned requests
  • Implementing custom scan checks
  • Implementing custom session handling
  • Creating and issuing HTTP requests
  • Controlling and initiating actions within Burp, such as initiating scans or spidering
  • Customizing the Burp UI with custom tabs and context menus
  • Much, much more


There are a growing number of 3rd party extensions available that you can download and use. The BApp Store was recently created, providing access to a number of useful extensions that you can download and add to Burp. Beginning with Burp Suite version 1.6 beta, released along with the BApp Store on March 4, 2014, access to the BApp Store is also provided directly from within Burp’s UI.

Additionally, there are a number of examples available on the PortSwigger blog that provide an excellent starting point for writing your own extension, depending on what you are trying to accomplish. Go to the Burp Extender page to see an overview of some of these examples, including links to the full blog posts and downloadable code. And of course, you can always turn to the Burp Extension User Forum for help with writing your own extensions, and for more examples contributed by the user community.

In the rest of this article, we’ll provide a quick overview of the Burp Extender tool, which you will use to load extensions and configure Burp to run those extensions. Then we’ll dive right into writing our own custom extension, and create an extension that performs a couple of custom passive scans.

Burp Extender Tool

First, let’s take a look at the Burp Extender Tool. When you select the Extender tab in Burp Suite, there are four sub-tabs that provide access to the functionality and configuration options of the Extender.

Extensions

The Extensions tab (shown below) allows you to load and manage the extensions you are using in Burp. From it you can add and remove extensions, as well as manage the order in which extensions and their resources are invoked. The panel at the bottom provides details on a selected extension, along with tabs displaying any output written by the extension and any error messages it produces.



BApp Store

The BApp Store tab, new with version 1.6 beta of Burp Suite, provides direct access to downloadable extensions from the Portswigger BApp Store.



APIs

The APIs tab essentially just provides a convenient reference to the Burp Extensibility API. From here, you can also download the Java interface files, for inclusion in your Java project, as well as download the Javadocs as a set of HTML files that you can access locally for reference.



Options

Finally, the Options tab is where you will configure things like the location of different environments required to run your extensions, depending on whether the extension is written in Java, Python, or Ruby. To run extensions written in Python requires the use of Jython, a Python interpreter that is written in Java. Similarly, to run extensions written in Ruby requires the use of JRuby, a Ruby interpreter written in Java. The Options tab allows you to specify the locations of the Jython JAR file or JRuby JAR file respectively. Download the most recent versions of these and configure Burp Extender to point to them if you will be writing extensions in Python or Ruby.



Loading and managing extensions and configuring the runtime options needed is very straightforward and simple. Refer to the Burp Extender Help page online for additional information.

Writing Your Own Extensions

You can write your own extensions in Burp using the Burp Extensibility API. The API consists of a number of Java interfaces that you will provide implementations of, depending upon what you are trying to accomplish. It is beyond the scope of this article to cover the entire API in detail. Refer to the Burp Extender Javadoc page online for a complete description. Instead, we’ll cover a few key interfaces that are used by all extensions, some that are of practical use to nearly all extensions, as well as some that will be useful in understanding the example extension that will be presented later in this article. As indicated previously, Burp extensions can be written in Java, Python, or Ruby. Choose the language you are most familiar with. We’ll use Python here, for its familiarity and the ease of development as an interpreted language. While code examples provided in this article will be in Python, they should be easily read and understood by anyone with a programming background in Java or another high-level language.

IBurpExtender

The IBurpExtender class is the foundation of every extension you will write. A Burp extension must provide an implementation of IBurpExtender that is declared public, and implements a single method, called registerExtenderCallbacks. This method is invoked when the extension is loaded into Burp, and is passed an instance of the IBurpExtenderCallbacks class. This class provides a number of useful methods that can be invoked by your extension to perform a variety of actions. At its very simplest, an extension in Burp starts out like this:

 from burp import IBurpExtender
class BurpExtender(IBurpExtender):
    def registerExtenderCallbacks(self, callbacks):
        # put your extension code here
        return



IBurpExtenderCallbacks

As indicated above, an instance of this class is passed to the registerExtenderCallbacks method in your IBurpExtender implementation when the extension is loaded in Burp. Through this callbacks object, you have access to a wide variety of useful methods that will help you create your extension. I’ll point out just a few, as these will be of importance as we build out our example custom scanner extension to follow.

  • getHelpers – Obtain an instance of the IExtensionHelpers class, which provides numerous useful "helper" methods that can be used to add functionality to an extension.
  • setExtensionName – Sets the name of the extension as it will appear in the Extensions tab in Burp Suite.
  • registerScannerCheck – Used to register an extension as a custom Scanner check.
  • applyMarkers – Used to apply markers, or highlights, to areas of a request or response. For instance, this may be used to mark a vulnerable parameter in a request, or an area of the response that indicates a vulnerability, such as a reflected XSS payload.


This is just a small taste. There are many other methods available to you in an instance of IBurpExtenderCallbacks. Consult the Burp Extender Javadoc page online for complete details.

IExtensionHelpers

The IExtensionHelpers class provides access to another large set of methods that you will undoubtedly find useful. It will be the rare extension that doesn’t get an instance of this class, which is obtained using a call to the getHelpers() method of IBurpExtenderCallbacks (see above). Just a few examples of the methods provided by this class are the following:

  • analyzeRequest – Used to analyze an HTTP request, and obtain various details, such as a list of parameters, headers, and more.
  • analyzeResponse – Used to analyze an HTTP response, and obtain various details, such as a list of cookies, headers, the response code, and more.
  • urlEncode – Perform URL encoding on a piece of data.
  • urlDecode – Perform URL decoding on a piece of data.
  • indexOf – Searches a piece of data for a specified pattern. This is very useful for searching a request or a response for a specific value, such as PII, or for examining the response for a parameter value that appeared in the corresponding request.
  • bytesToString – Converts data from an array of bytes to a String object. Many of the methods in the Burp API operate on an array of bytes, so this comes in quite handy.

IScannerCheck

Extensions implement this interface when they are going to be used to perform custom scan checks. Your extension must call the registerScannerCheck method of the IBurpExtenderCallbacks class to tell Burp that it is implementing this interface. Burp will then know to use your extension when performing active or passive scans on a base request/response pair, as well as to report any issues (see IScanIssue below) identified by your custom scan checks. The following three methods may be implemented by an IScannerCheck class:

  • consolidateDuplicateIssues – This method is invoked when the custom Scanner check has reported multiple issues for the same URL. You use this to tell Burp whether to keep the existing issue, or replace with the new issue, based on whatever criteria you decide.
  • doActiveScan – This method is invoked for each insertion point that is actively scanned by Burp. An implementation of this will then construct a new request, based on the base request passed, and insert a test payload into the specified insertion point. It will then issue that new request, and examine the response for an indication that the inserted payload reveals a vulnerability.
  • doPassiveScan – This method is invoked for each base request/response pair that Burp passively scans. An implementation of this will typically examine the base request and/or response for patterns of interest. No new requests should be generated from a passive scan.


Both the doActiveScan and doPassiveScan methods must return a list of IScanIssue objects, which Burp will then automatically include in the Scanner issues report.
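The consolidation decision above can be sketched in plain Python. This is a stand-in for illustration only: the real consolidateDuplicateIssues receives two IScanIssue objects and returns an int, whereas here dictionaries stand in for issues, and identical detail text is (by assumption) treated as a true duplicate:

```python
def consolidate_duplicate_issues(existing, new):
    """Stand-in for IScannerCheck.consolidateDuplicateIssues: return -1 to
    keep only the existing issue, 0 to keep both, or 1 to keep only the
    new issue."""
    if existing.get("detail") == new.get("detail"):
        return -1   # same finding reported twice: keep the existing issue
    return 0        # details differ, so report both
```

This is also why the example extension later embeds the parameter name into the issue detail string: distinct details make Burp treat each reflected parameter as a separate issue rather than consolidating them away.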

IScanIssue

The IScanIssue class provides a representation of a Scanner issue. An extension may retrieve current issues (IScanIssue objects) from the Scanner tool by registering an IScannerListener callback, or by calling the getScanIssues method of the IBurpExtenderCallbacks class. Scanner issues can be added to Burp by implementing the IScanIssue class in your extension, and calling the addScanIssue method of the IBurpExtenderCallbacks class with specific instances. Additionally, Scanner issues can be added via a custom scan check, by creating a list of instances of IScanIssue that is returned by either the doPassiveScan or doActiveScan methods of an IScannerCheck implementation.

Implementing the IScanIssue interface involves implementing a constructor method to set the details of the Scanner issue, as well as a number of getter methods to retrieve those details. We won’t go into details of the various methods here, as they will often be as simple as setting a class variable with a value passed to the constructor, and implementing a getter method that returns this value.
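A minimal sketch of such an implementation in plain Python follows. Only a few getters are shown, and the constructor arguments mirror the ones used by the example extension later in this article; the real IScanIssue interface declares several more getter methods than this:

```python
class ScanIssue:
    """Minimal IScanIssue-style class: the constructor stores the issue
    details and simple getters return them unchanged."""

    def __init__(self, http_service, url, request_responses,
                 name, severity, detail):
        self._http_service = http_service
        self._url = url
        self._request_responses = request_responses
        self._name = name
        self._severity = severity
        self._detail = detail

    def getUrl(self):
        return self._url

    def getIssueName(self):
        return self._name

    def getSeverity(self):
        return self._severity

    def getIssueDetail(self):
        return self._detail
```

Each getter simply echoes back a constructor argument, which is why the article skips over them: there is no logic to explain.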

A Custom Passive Scanner



To conclude our discussion, we will present an example extension that implements a custom scanner, which will perform two different passive scan checks:

  • Reflection Checks – Using the values of the parameters in the base request that is being passively scanned, this check searches the corresponding response for those same values, providing a candidate point for further testing for reflected XSS vulnerabilities.
  • Regular Expression Match – Can be used to examine the base response of a passive scan request, looking for any string that matches a particular regular expression. In the context of this example extension, this check is used to do a customized search of application responses using a regular expression designed to match potentially sensitive personally identifiable information (PII) unique to a specific, non-US, country.


The full source code for this example extension can be downloaded from our GitHub page. This extension is written in Python, so to try it out you will first need to download the latest Jython library from The Jython Project, and configure the Burp Extender to use it. Then add the extension, and try it out.

The source code is extensively documented with comments. With the information provided above, as well as the Burp API Javadocs and the comments in the code, it should be easy to grasp what’s going on in the code. In the remainder of this article, I’ll go into a little detail for a few key sections of the code that may be particularly interesting or require some further context for understanding.

Earlier, we showed the simplest example of a Burp extension that does nothing. Recall that at a minimum an extension must implement the IBurpExtender interface, which has one method – registerExtenderCallbacks. Let’s take a quick look at the implementation of our registerExtenderCallbacks method.

 Line 15-26
def registerExtenderCallbacks(self, callbacks):
      # Put the callbacks parameter into a class variable so we have class-level scope
      self._callbacks = callbacks

      # Set the name of our extension, which will appear in the Extender tool when loaded
      self._callbacks.setExtensionName("Custom Passive Scanner")
        
      # Register our extension as a custom scanner check, so Burp will use this extension
      # to perform active or passive scanning and report on scan issues returned
      self._callbacks.registerScannerCheck(self)
        
      return



The registerExtenderCallbacks method is passed an instance of the IBurpExtenderCallbacks class. On line 17 above, we are simply storing this callbacks object in a class variable, so that it has class-level scope, allowing any other methods within our BurpExtender class to access it. On line 20, we use one of the methods of the callbacks object, setExtensionName, to set the name of our extension. This is how the extension will be identified in the Extender tool when it is loaded. Finally, on line 24, we call the registerScannerCheck method of the callbacks object. This tells Burp that our extension implements a custom scanner check, and Burp will now call the doActiveScan and doPassiveScan methods of our extension whenever it is performing an active or passive scan, respectively. In our extension, we have only implemented doPassiveScan.

Our implementation of doPassiveScan makes use of a custom class that we have created, called CustomScans, which is not an implementation of anything in the Burp API.

 Line 47
self._CustomScans = CustomScans(baseRequestResponse, self._callbacks)



As we see above, within doPassiveScan, an instance of this class is created, passing the base request/response pair, as well as our instance of IBurpExtenderCallbacks that was created as a class variable in the registerExtenderCallbacks earlier. The purpose of the CustomScans class is to implement one or more methods that we can call that perform unique scan checks against the base request/response pair being passively scanned. In this extension, we’ve implemented two methods in CustomScans, called findReflections and findRegEx, whose purpose was described above.

Next, the extension’s implementation of doPassiveScan calls the findReflections method of CustomScans. This method will examine the base request/response pair, passed previously to the constructor for CustomScans, and identify any request parameters whose value appears in the corresponding response.

 Line 51-62
issuename = "Possible Reflected XSS"
issuelevel = "Information"
issuedetail = """The value of the $param$ request parameter appears
        in the corresponding response.  This indicates that there is a
        potential for reflected cross-site scripting (XSS), and this URL
        should be tested for XSS vulnerabilities using active scans and
        thorough manual testing and verification. """

tmp_issues = self._CustomScans.findReflections(issuename, issuelevel, issuedetail)
        
# Add the issues (if any) from findReflections to the list of issues to be returned
scan_issues = scan_issues + tmp_issues



Three arguments passed to findReflections provide information used to construct any new scan issues: the issue name, level (or severity), and issue details. The issue details string may contain HTML tags, which Burp will interpret when rendering the issue details in its UI. Finally, the findReflections method returns a list of scan issues, in tmp_issues, which is then appended to the list of issues, scan_issues, which will ultimately be returned to Burp from doPassiveScan.

Following the above code, lines 69-81 follow a similar pattern, calling CustomScans.findRegEx and appending any resulting issues to the scan_issues list. Lines 85-88 then return scan_issues if it is not empty, or None (think null) otherwise. Burp will then include the returned issues, if any, in the Scanner issues report.

The findReflections and findRegEx methods of CustomScans should be fairly straightforward to understand, and each follows a very similar flow. Lines 127-136 of findReflections, and lines 160-169 of findRegEx in particular follow a very similar pattern, which we’ll explain below.

 Line 127-136 (findReflections)
offset[0] = start
offset[1] = start + len(paramVal)
offsets.append(offset)
                    
# Create a ScanIssue object and append it to our list of issues, marking
# the reflected parameter value in the response.
scan_issues.append(ScanIssue(self._requestResponse.getHttpService(),
        self._helpers.analyzeRequest(self._requestResponse).getUrl(),
        [self._callbacks.applyMarkers(self._requestResponse, None, offsets)],
        issuename, issuelevel, issuedetail.replace("$param$", paramName)))


The first three lines set up an array that is used to store offsets used to apply a marker to a region of the response, in this case to highlight the reflected parameter value. The first array element contains the start position of the identified value, and the second element contains its end position. This array of two values is then appended to a list, called offsets, which will be passed to the applyMarkers method of IBurpExtenderCallbacks when the new scan issue is created. The applyMarkers method expects a List of arrays in its last two arguments, each array containing the start and end values of regions to be marked in the request and response respectively.
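The offset-gathering idea can be sketched in plain Python, outside the Burp API: walk the response, record a [start, end] pair for each occurrence of the parameter value, and collect the pairs in the list form that applyMarkers expects. The function name is ours, not part of the extension:

```python
def reflection_offsets(response, param_val):
    """Return [start, end] pairs for every occurrence of param_val in the
    response, in the two-element-array form that applyMarkers expects."""
    offsets = []
    start = response.find(param_val)
    while start != -1:
        # Mark this occurrence, then continue searching after it.
        offsets.append([start, start + len(param_val)])
        start = response.find(param_val, start + 1)
    return offsets
```

Each pair marks one highlighted region, so a value reflected twice produces two markers in the same issue.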

The last four lines above create an instance of our ScanIssue class, which is our extension’s implementation of the IScanIssue interface, by calling its constructor with a number of arguments. We then append that new instance of ScanIssue to a list, called scan_issues, which will be returned to our caller, doPassiveScan. In the call to the constructor for our ScanIssue class, we call the applyMarkers method of IBurpExtenderCallbacks, passing the base request/response pair, and offsets for applying markers to the request (None in this case) and to the response, using the list, offsets, described above. The last three arguments to the ScanIssue constructor provide the issue name, issue level (severity), and issue detail information that was passed as arguments to findReflections. Here, we are replacing a token in the literal string passed in the issuedetail argument with the name of the parameter whose value was reflected in the response. This adds useful detail to the new scan issue for the tester, and also makes it so Burp will identify the issue as a unique instance when it calls our extension’s consolidateDuplicateIssues method.
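That uniqueness matters because Burp calls the extension’s consolidateDuplicateIssues method when deciding whether two reported issues are the same. As an illustration only (the stub class below is not part of the extension), a typical implementation compares names and details, returning -1 to keep only the existing issue or 0 to keep both:

```python
class ScanIssueStub:
    """Stand-in for an IScanIssue implementation (illustration only)."""
    def __init__(self, name, detail):
        self._name, self._detail = name, detail

    def getIssueName(self):
        return self._name

    def getIssueDetail(self):
        return self._detail


def consolidateDuplicateIssues(existing, new):
    # Burp convention: -1 keeps only the existing issue, 0 keeps both.
    # Because findReflections embeds the parameter name in the detail,
    # reflections of two different parameters produce different details
    # and are therefore reported as separate issues.
    if (existing.getIssueName() == new.getIssueName()
            and existing.getIssueDetail() == new.getIssueDetail()):
        return -1
    return 0
```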

The findRegEx method in CustomScans follows a similar pattern to that described above for findReflections. It makes use of Python’s regular expression operations to search the response, but otherwise uses the same techniques to create new scan issues as in findReflections.

One part of findReflections that is perhaps non-intuitive when examining the code is the following:

 Line 122
if len(paramVal) > 3:



Here we are checking the length of the variable paramVal, which contains the value of the current parameter being checked for reflection. To prevent a lot of noise from coincidental matches, we simply require that the parameter’s value be longer than three characters. This is a fairly simplistic approach, and you are free to try any heuristic you can think of here. Regardless of what you try, since this is a passive scan, eliminating potentially coincidental matches may also eliminate true reflections of parameter values that are in fact vulnerable to XSS. Remember, the point of this passive XSS scan is only to identify candidate points for further examination and testing, not to actually identify XSS vulnerabilities. Caveat emptor.
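For example, a slightly richer (and still simplistic) heuristic might also require some character diversity, so uniform values like "aaaa" or "1111" are skipped. A hypothetical sketch, not part of the example extension:

```python
def worth_checking(param_val, min_len=4, min_unique=3):
    """Heuristic filter for reflection candidates (illustrative only):
    require a minimum length and a minimum number of distinct
    characters to cut down on coincidental matches."""
    return len(param_val) >= min_len and len(set(param_val)) >= min_unique
```

Any such filter trades recall for precision, which is acceptable here because the scan only nominates candidates for manual testing.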

Lastly, there are some additional classes from the Burp Extensibility API that are being used in the example extension that have not been covered here. These classes are not explicitly implemented in the extension, but are used implicitly, typically as return values from other methods in the Burp API. They are mentioned briefly below, but you are encouraged to examine these in more detail in the Burp Extender Javadoc page online.

  • IHttpRequestResponse – Representation of an HTTP message
  • IHttpService – Representation of an HTTP service, to which requests can be sent
  • IRequestInfo – Representation of an HTTP request
  • IParameter – Representation of an HTTP request parameter


Conclusion

As you can see, while it does involve some programming, creating Burp extensions is really quite straightforward, and should be no problem for anyone with a reasonable programming or scripting background. In the above example, we have the basics of a fairly useful custom scan check extension that performs two different passive scan checks, in around 160 lines of code (excluding comments).

Try out the example above, visit the new BApp Store, study the Burp Extensibility APIs, and you’re sure to come up with ideas for your own extensions that will help you do your job more easily, accurately, and productively, and get better results by customizing Burp to meet your particular needs. Best of luck.


Tuesday, March 18, 2014

Combatting AppScan's "Scan out of session"

By Kunal Garg.

Web application scanners may be full of repetition and obvious vulnerabilities, but they do have their place in a web application penetration test. While they should never be used as the sole way to identify vulnerabilities, they can provide a baseline and act as another tool for achieving maximum results. All web application scanners are different, and some require finer tuning than others. One common issue we see with IBM's AppScan is the "Scan out of session" error. This blog post aims to give advice on setting up the scan and working around the issue.

When running post-authentication scans, “In-session” detection is an important concept for maximizing scan coverage. Any time the scan goes out of session, the notification “scan out of session” will be displayed to the user and the scan will be suspended.

With in-session management we select a unique pattern on an in-session page, which AppScan continually polls to determine whether the scan is still in session. This pattern needs to be unique and should be available on post-authentication pages. It can be any text, such as “welcome userabc” displayed after a specific user logs on, or a logout button (if present on all pages).

Recording the login

The first step in configuring the in-session pattern is to record the login using an AppScan macro. Once the login is recorded, tick the checkbox “I want to configure In-session detection” and click Next.



Notice that all the URLs recorded in the login macro appear here; select the post-authentication page that contains our unique identifier. In our case, the test application routes to “main.aspx” after login, so this page is selected as the In-session page (right-click and set as In-session).



Now it’s time to select the In-session pattern; this can be done using the “Select in session pattern” button.

Usually AppScan will select the session identifier on its own, but it is always advisable to inspect the pattern and change it if it’s not unique. In my experience, scans using the automatically selected identifiers tend to run out of session.

The session pattern can be selected either from the page or from its response body in the AppScan browser window, as shown below.



Session pattern is marked as “signoff”.

If the scan goes out of session, there are certain points to consider:

  1. Session cookies are not properly tracked. If session cookies are not being tracked, they can be marked for tracking from “Login Management”.



  2. Check if the application is still accessible.
  3. Check that the user account is not locked out.


Note: While running In-session scans, make sure the login and logout pages are out of scope, and take due care when configuring and running automated scans.

Tuesday, March 11, 2014

Identifying Malware Traffic with Bro and the Collective Intelligence Framework (CIF)

By Ismael Valenzuela.

In this post we will walk through some of the most effective techniques used to filter suspicious connections and investigate network data for traces of malware using Bro, some quick and dirty scripting and other free available tools like CIF.

This post doesn’t pretend to be a comprehensive introduction to Bro (check the references section at the end of the post for that), but is rather a quick reference with tips and hints on how to spot malware traffic using Bro logs and other open source tools and resources.

All the pcap files used throughout this post can be obtained from GitHub. Some of them have been obtained from the large dataset of pcaps available at contagiodump.

Finally, if you are new to Bro, I suggest you start by downloading the latest version of Security Onion, a must-have Linux distribution for packet ninjas. Since version 12.04.4, Security Onion comes with the new Bro 2.2 installed by default, so all you need to do is open the terminal, grab the samples and maybe some coffee… (There is never enough coffee!)

Traffic Analysis with Bro

We will start replaying our first sample through Bro with:
 $ bro -r sample1.pcap local 



This command tells Bro to read and process sample1.pcap, pretty much like tcpdump or any other pcap tool does. By adding the keyword “local” at the end of the command, we ask Bro to load the ‘local’ script file, which in SecurityOnion is located in /opt/bro/share/bro/site/local.bro.

When the command is completed, Bro will generate a number of logs in the current working directory. These logs are highly structured, plain text ASCII and therefore Unix friendly, meaning that you can use your command line kung-fu with awk, grep, sort, uniq, head, tail and all the other usual suspects.

To see the summary of connections for sample1.pcap we can have a quick look at conn.log:

 $ cat conn.log





The figure above shows an excerpt of the output of this command. Notice how the output of Bro logs is structured in columns, each of them representing different fields. These fields are shown in the 7th line of the output header, starting with "ts" (timestamp in seconds since epoch) and "uid" (a unique identifier of the connection that is used to correlate information across Bro logs). Refer to the Bro documentation to learn more about the rest of the fields.

 #separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path conn
#open 2014-03-07-13-51-01
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto service duration orig_bytes resp_bytes conn_state local_orig missed_bytes history orig_pkts orig_ip_bytesresp_pkts resp_ip_bytes tunnel_parents
#types time string addr port addr port enum string interval count count string bool count string count count count count table[string]
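
Because every Bro log declares its columns in the #fields header line, any of them can be parsed generically; bro-cut does this for you, but a quick Python sketch illustrates the idea:

```python
def parse_bro_log(lines):
    """Yield each record of a tab-separated Bro 2.x log as a dict,
    using the #fields header line to name the columns."""
    fields = []
    for line in lines:
        line = line.rstrip("\n")
        if line.startswith("#fields"):
            # Everything after the "#fields" token is a column name.
            fields = line.split("\t")[1:]
        elif line and not line.startswith("#"):
            yield dict(zip(fields, line.split("\t")))
```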




We can observe a number of connections to port 80 (tcp) and port 53 (udp). Conn.log also reports the result of these connections under the conn_state field. Let’s have a closer look using bro-cut, an awk-based field extractor for Bro logs.

 $ cat conn.log | bro-cut id.orig_h id.orig_p id.resp_h id.resp_p proto conn_state
…
172.16.88.10 49508 172.16.88.135 80 tcp REJ
172.16.88.10 49510 172.16.88.135 80 tcp REJ
172.16.88.10 57852 172.16.88.135 53 udp SF
172.16.88.10 49509 172.16.88.135 80 tcp REJ
172.16.88.10 57399 172.16.88.135 53 udp SF
172.16.88.10 49510 172.16.88.135 80 tcp REJ
172.16.88.10 57456 172.16.88.135 53 udp SF
172.16.88.10 49511 172.16.88.135 80 tcp S0
172.16.88.10 62602 172.16.88.135 53 udp SF
172.16.88.10 54957 172.16.88.135 53 udp SF
172.16.88.10 49511 172.16.88.135 80 tcp SH
172.16.88.10 49512 172.16.88.135 80 tcp S0
172.16.88.10 64623 172.16.88.135 53 udp SF
172.16.88.10 53702 172.16.88.135 53 udp SF
172.16.88.10 49512 172.16.88.135 80 tcp SH
172.16.88.10 49513 172.16.88.135 80 tcp S0
172.16.88.10 52164 172.16.88.135 53 udp SF
172.16.88.10 49513 172.16.88.135 80 tcp SH
172.16.88.10 49516 172.16.88.135 80 tcp S0
172.16.88.10 54832 172.16.88.135 53 udp SF
172.16.88.10 49516 172.16.88.135 80 tcp SH
172.16.88.10 49517 172.16.88.135 80 tcp S0
172.16.88.10 64102 172.16.88.135 53 udp SF
172.16.88.10 51110 172.16.88.135 53 udp SF
172.16.88.10 49517 172.16.88.135 80 tcp SH
172.16.88.10 49518 172.16.88.135 80 tcp S0
172.16.88.10 55957 172.16.88.135 53 udp SF
172.16.88.10 49519 172.16.88.135 80 tcp S0
172.16.88.10 58988 172.16.88.135 53 udp SF
172.16.88.10 49518 172.16.88.135 80 tcp SH



In this case, we can observe that some of the connections attempted on port 80 were rejected (REJ), while others never had a reply (S0) or left the connection half-open (SH, which means a SYN-ACK from the responder was never seen). The reason for this behavior is that sample1.pcap was obtained from one of my sandboxes where 172.16.88.135 is a Virtual Machine running Remnux with fakedns and netcat listening on port 80 instead of a full web server.
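With larger captures, tallying conn_state values is a quick way to make patterns like this jump out. A rough Python equivalent of `bro-cut conn_state | sort | uniq -c` (a sketch; the column index assumes the default conn.log field order shown in the header earlier, where conn_state is the 12th column):

```python
from collections import Counter

def tally_conn_states(conn_lines, state_col=11):
    """Count conn_state values in tab-separated conn.log lines.
    Large numbers of REJ/S0/SH states deserve a closer look."""
    counts = Counter()
    for line in conn_lines:
        if line.startswith("#") or not line.strip():
            continue  # skip header and blank lines
        cols = line.rstrip("\n").split("\t")
        if len(cols) > state_col:
            counts[cols[state_col]] += 1
    return counts
```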

Since we know that there is some http traffic going on here, let’s have a look at another log generated by Bro: http.log

 $ cat http.log | bro-cut id.orig_h id.orig_p id.resp_h id.resp_p host uri referrer

172.16.88.10 49493 172.16.88.135 80 f52pwerp32iweqa57k37lwp22erl48g63m39n60ou.net / -
172.16.88.10 49495 172.16.88.135 80 h54jtbqmuj56hwb48e41p42g33h34c29grbqfxm29.ru / -
172.16.88.10 49511 172.16.88.135 80 iqcqmrn30iuoubuo11crfydvkylrbtmtev.info / -
172.16.88.10 49512 172.16.88.135 80 ezdsaqbulsgzh44m59p42eqmrkxa57n40brcq.com / -
172.16.88.10 49513 172.16.88.135 80 o41lwmqnqarmxiyi35iyftpzaye21osjyjq.ru / -
172.16.88.10 49516 172.16.88.135 80 n30arh24frisbslqmqoxgvpvk47o11pritev.biz / -
172.16.88.10 49517 172.16.88.135 80 jsa57n20hyisjxcre11fwl58gta37i65ovf32o51.info / -
172.16.88.10 49518 172.16.88.135 80 j36lxf52hsj56itc49lqayoveymwfzosi15jw.org / -
172.16.88.10 49519 172.16.88.135 80 g53lvo61ayoucrm49kzgvm69irhwl58erjwfu.net / -
...



Anything weird here? Definitely! The host field of the http.log shows entries that don’t seem to correspond with normal browsing.

A closer look at the dns.log produced by Bro will confirm this:

 $ cat dns.log | bro-cut query | sort -u

a37fwf32k17gsgylqb58oylzgvlsi35b58m19bt.com
a47d20ayd10nvkshqn50lrltgqcxb68n20gup62.com
a47dxn60c59pziulsozaxm59dqj26dynvfsnw.com
a67gwktaykulxczeueqf52mvcue61e11jrc59.com
axgql48mql28h34k67fvnylwo51csetj16gzcx.ru
ayp52m49msmwmthxoslwpxg43evg63esmreq.info
azg63j36dyhro61p32brgyo21k37fqh14d10k37fx.com
cvlslworouardudtcxato51hscupunua57.org
cyh44jud50g33iuarlzgqbup22fqisixf62kr.org
d10h34othyp62b18lyfwnzazj26p42fud50gzc49.biz
d20iwe51ftitg53lvl18a27hvlqjyjtd20gue61.com
dqhzhtbto21h14lvp12iqhtlrnxasarcte61.biz
drp42i25ati55m69pvgza57nyh34hwk57i55m19n60.ru
iqcqmrn30iuoubuo11crfydvkylrbtmtev.info
iqo11c69mud20krk57j16fqnrfwgva67oraql48.com
isjqn30a27hwgqbxnxksi65hrnsgyc49mylt.biz
iupqhxfwpylxm29jsexovj16cqfybwb68aw.org
iwpslvesj26i65oynxhtoyc39o41asdvnqc59.com
j36lxf52hsj56itc49lqayoveymwfzosi15jw.org
jshvprc29ntm69p52j36a17m39ozk67g53crfqow.net
jvbtore21fzm39fse51p32auizl28gxaul68px.com
k17g63l58jucvd30brhyovhsptd10lxd60gqfv.biz
k27ori65cve61kvc49hxptdrb48myo61fueves.org
k47isgzkxp62o51etmwazewmvpvgwbvmvfz.com
kqd60lvlsg63bsg33e11i55kvo41nrj36hzbthr.info
kvm49mynrd60l48lynre21hqfun20a47hyn20kq.org
kyoqpxg53nuf42g43oqo21l48a17d40o31k67j16h44.org
l18k17mzpum69jvlyp62c29hzeyi25kta47a37lv.ru
n50owhwguj66evkug33ewntn10n40puhtlxay.org
nrd30j46cxnwmyc69bscrcyiuhvf22otg43mq.com
nub58p52b38ismtg63mwlwm29evd20g13f52otb68.info
nxhyosg43a47exhum19g23f52fro21byayk57fs.info
o21mwm29gzouhvpub68g43dzntgzn30aultd30.net
o31j16n30eyiql58btmxe21euowb38pxf22b68ou.net
psgsgumukxb18b58dxd40e31f22g53a37bzmxcz.com
pxoxgzkqmqp12a47azjzpze11hteri35iti45.info
pyn30h64krm69bwf12azp52fulskvh24m19nrjy.org
(output truncated)





Looking at the length of the domains requested, we can observe a pattern. First we will cut off the TLDs (com, info, net…) and then calculate the length of each string.

 $ cat dns.log | bro-cut query | sort -u | cut -d . -f1 > domains-withoutTLD
 $ for i in `cat domains-withoutTLD`; do echo "${#i}"; done | sort -u

34
35
36
37
38
39
40
41
42
43
 


So all these strings are within a close range of 34 to 43 characters long. Coincidence? Not really: a variant of the ZeuS botnet, the so-called ZeuS Gameover, is known for implementing P2P and Domain Generation Algorithm (DGA) communications to determine the current Command and Control (C&C) domain. When these bots can’t communicate with their botnet via P2P, DGA is used. The domain names generated by ZeuS Gameover consist of a string with a length of 32 to 48 characters and one of the following TLDs: ru, com, biz, net or org. The list contains over 1000 domains and changes every 7 days, based on the current date.

A regular expression like this can be used to search for ZeuS domains:

 [a-z0-9]{32,48}\.(ru|com|biz|info|org|net)
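
Applied from Python, the same expression flags candidate domains straight from the query list (a sketch; fullmatch is used so partial matches don’t slip through):

```python
import re

# Candidate ZeuS Gameover DGA domains: a 32-48 character [a-z0-9]
# label followed by one of the TLDs the malware is known to use.
DGA_RE = re.compile(r"[a-z0-9]{32,48}\.(ru|com|biz|info|org|net)")

def looks_like_gameover(domain):
    """True if the whole domain matches the Gameover DGA pattern."""
    return DGA_RE.fullmatch(domain) is not None
```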


ZeuS Gameover has been reported as one of the most active banking Trojans in 2013, along with Citadel, another well-known piece of malware that has targeted a large number of financial organizations, with a focus on Europe and the Middle East.

Kleissner.org maintains a list of 1000 valid domains for ZeuS Gameover and updates it every week. A simple bash script can compare a list of domains obtained from dns.log to the list published by Kleissner.org:

 $ cat dns.log | bro-cut query | sort -u > domains

$ for i in `cat domains`; do grep $i ZeusGameover_Domains; done



SSL Traffic and Notice.log

Malware authors are making increased use of SSL traffic to mask communications with C&C servers, data exfiltration and other malicious actions. Since decrypting SSL communications is not feasible in most scenarios, malware analysts must employ other techniques to spot badness in encrypted sessions. TLS or SSL handshake failures and suspicious, invalid or weird certificates can all be indicators of badness in your network traffic, and the good news is that Bro, by default, does some of that analysis for you, flagging potentially interesting network activity to investigate.

To demonstrate how Bro can help with finding those indicators, we’ll look at sample2.pcap.

 $ bro -r sample2.pcap local


See that a notice.log file has been created in the working directory, along with http.log, ssl.log and others.

Let’s have a look at the contents of notice.log:

 $ cat notice.log | bro-cut msg sub

SSL certificate validation failed with (unable to get local issuer certificate) CN=www.tl6ou6ap7fjroh2o.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.vklxa6kz.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.5rthkzelyecfpir56.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.dctpbbpif6zy54mspih.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.getvdkk6ibned7k3krkc.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.hstk2emyai4yqa5.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.icab4ctxldy.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.bnbhckfytu.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.e6nbbzucq2zrhzqzf.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.cvapjjtbfd6yohbarw5q.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.zhbohcqeanv5hw.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.v6onqj4tmlmcchw23bl.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.gaqq6ld5gdgib.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.hlixz2cz43jepqwl.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.jn4k5f5wi65edy7emll.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.4geh5kzuywu3u.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.rshopmsscpfbw6p.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.c2rwawybhf.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.3gbl5nlxxs37ycdbhvcr.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.qhpomorewmsgxkg2d.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.wtytpviziqgpxsz.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.f5zhq25qq.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.3ktww4bg.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.c2nhdwaukm.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.iqm3bvunu.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.pts5agysxnvyyvbysfv.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.ygn472gapjnkkbplith.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.jaaok2kcxn.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.ktq2go444i.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.ferqncujta3wvl.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.2u5j3bw2r.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.uopxo7ik3i2nti.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.2ugfspjvd3tjaa.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.vjonqvyku.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.6canpulqbqdbqkxc6is.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.42ixw6g5fu44w7sth.net
SSL certificate validation failed with (unable to get local issuer certificate) CN=www.kqwm2iwsvh4xd2q.net
(output truncated)



Hmmm… that looks really suspicious again!

Let’s have a look at the contents of the ssl.log now:

 $ cat ssl.log | bro-cut server_name subject issuer_subject

www.seu4oxkf6.com CN=www.tl6ou6ap7fjroh2o.net CN=www.tbajutyf.com
www.fjpv.com CN=www.vklxa6kz.net CN=www.ohqnkijzzo5vt.com
www.pdpqsu.com CN=www.5rthkzelyecfpir56.net CN=www.qbboo7mcwzv7.com
www.vkojgy6imcvg.com CN=www.dctpbbpif6zy54mspih.net CN=www.m6hoayo5cga.com
www.dbyryztrr7sui3rskjvikes.com CN=www.getvdkk6ibned7k3krkc.net CN=www.7pz4gaio6uc25dyfor.com
www.xqwf7xs6nycmciil3t5e4fy5v.com CN=www.hstk2emyai4yqa5.net CN=www.wc62pgaaorhccubc.com
www.rix56ao4hxldum4zbyim.com CN=www.icab4ctxldy.net CN=www.wmylm3gln.com
www.uabjbwhkanlomodm5xst.com CN=www.bnbhckfytu.net CN=www.w4rlc25peis46haafa.com
www.dl2eypxu3.com CN=www.e6nbbzucq2zrhzqzf.net CN=www.cbj5ajz4qgeieshx32n.com
www.ebd7caljnsax.com CN=www.cvapjjtbfd6yohbarw5q.net CN=www.brbqn4rqhscp4rdq.com
www.qnqxclmrk2cqskkb732czjma.com CN=www.zhbohcqeanv5hw.net CN=www.w3rfg432.com
www.bxstw.com CN=www.v6onqj4tmlmcchw23bl.net CN=www.yc2xz27yoe76.com
www.b6lwb6v.com CN=www.gaqq6ld5gdgib.net CN=www.nu6u7osxzhmgx64.com
www.xf3225vc7drvcgborjll3.com CN=www.ryfg74xnxjg42ln3.net CN=www.y6bn3trq5cesxk.com
www.7dezfrpxuvmtr.com CN=www.svhbg7k2ed7ijcloj2.net CN=www.tfijljrmlqi.net
www.pcnia4i6e6w.com CN=www.yastvwre5fvpq3av.net CN=www.c6dmymzw.com
www.zvnbxtgu5dwe6lwc.com CN=www.u7c2brldvuk3xil.net CN=www.owgtwdiazfmzmwu6a5.com
www.ofbw37.com CN=www.qyccfgkjb.net CN=www.gs52pdnqyd.com
www.zr7kfc25mofcq.com CN=www.oi6z76t4.net CN=www.oe7gv5kxhix2i7eil.com
www.cmeh4agzyphi.com CN=www.jnqlvjcoou26znx.net CN=www.p4tgeg6dhp.com
www.k2u3bnbhxhpl.com CN=www.llhtnj3yyk.net CN=www.qotouwlbhjt.com
www.bneghg3axzl75sn7k2pdzor.com CN=www.shucgk26k4x5inet.net CN=www.j4n2j3sz57cf.com
www.ytedf3vqd4hxjo7rmhe6.com CN=www.noyxmydlc3ncgwv4t7hc.net CN=www.xem2wczmpqtypvzzpsex.com
www.by4seu7gjht7.com CN=www.wgrv4vpyx.net CN=www.eyvoebmi4ls6o6.com
www.cx7dg5bcn4cy.com CN=www.lipko2t5yqirjrqn2e.net CN=www.l4kvblp6bd.com
www.zn26rblhi.com CN=www.5nmv7zbdqdvgbfem6l.net CN=www.l3zkpiwawmpwjbzf.com
www.ecajni2stg3733w4jgi75.com CN=www.k3dbsxb423am5bwcb.net CN=www.uuwdimryu2gi42.com
www.x3os5xrkcr7a2rpmxre2.com CN=www.km6ptswm7mo.net CN=www.giovpc7o3.com
www.2c27bhbej.com CN=www.pymflkqpqdgghnfj.net CN=www.jocupasu2o6b2af2tn.com
www.4x4fp.com CN=www.icab4ctxldy.net CN=www.wmylm3gln.com
www.busdvimuibiundyob3e74js.com CN=www.xwwc4mvab66dnn.net CN=www.7hhuhzlztld46.com
www.zk2sv4vbwtanvh6x.com CN=www.bjxrmwnhp44enzypv6dc.net CN=www.b2ond2dxj.net
www.nijvbs5nuyn7zkemgi.com CN=www.wgwr7qn7v3j.net CN=www.u57w6yc5rvv.com
www.hamsnp.com CN=www.ge26nt2rx.net CN=www.aewmz33hq6rn7x7nud3.com
www.gsen3cievf3px7anzc6j.com CN=www.3zz5we62e.net CN=www.w7sb5mdv7w.com
www.3lwerxmlqmq2jsjioqgx5kkyc.com CN=www.ohfe52bk6gyfzojwgts.net CN=www.jhzi7jmhledqxg.com
www.2ipe23pugsiii.com CN=www.6hfs2womid.net CN=www.aq3w5zrobmejm.com
www.f3vzvxsedn.com CN=www.eelcaqcncssfzliilic.net CN=www.xshjb4uihtmpxh.com
www.hh62esff4qj5.com CN=www.mqhz74wxch4gj.net CN=www.wcmcdpazt7iw7g.com
www.juipuxm76hu6df6.com CN=www.5nmv7zbdqdvgbfem6l.net CN=www.l3zkpiwawmpwjbzf.com
www.6ll3wnw5dmg.com CN=www.suy5hv542.net CN=www.5mypgv7tgzypyaz63w.com
www.h5hgbrs75gl3c5uh5xnld3i.com CN=www.4x4j6xhtk5qh.net CN=www.rmybfv4mrpzlcicfg.net



Again, parsing these logs with bro-cut and other command line tools to generate a list of suspicious domains is straightforward. That list can be compared to a list of well-known malicious domains, or used with various domain reputation services. We will talk more about how to leverage threat intelligence feeds with Bro later in this post.
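As a sketch of that parsing step, a few lines of Python can pull the CNs out of the failed-validation messages shown above (the regular expression here is illustrative, not exhaustive):

```python
import re

def failed_cert_cns(notice_msgs):
    """Collect the certificate CN from 'SSL certificate validation
    failed' messages, like the msg field of notice.log above."""
    cns = set()
    for msg in notice_msgs:
        if "validation failed" not in msg:
            continue
        m = re.search(r"CN=([\w.\-]+)", msg)
        if m:
            cns.add(m.group(1))
    return sorted(cns)
```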

Let’s carry on with our analysis. A closer look at the http.log reveals some potentially interesting User Agents under the user_agent field:

 $ cat http.log | bro-cut user_agent | sort -u

Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET4.0C; .NET4.0E; .NET CLR 2.0.50727; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022)

cgminer 2.7.5



Can you see that cgminer user agent? It is a well-known fact that malware can use unusual, weird or unique user agents in the headers of its HTTP requests. A good study on this was written by Robert Vandenbrink.

In this case the user agent indicates that we’re looking at a bot whose purpose is to deliver bitcoin-mining traffic. For more information about this particular bot, check Liam Randall’s solutions and scripts on his GitHub.

The new file analysis framework

The file analysis framework is a new feature introduced with Bro 2.2 that provides plenty of new functionalities to network analysts. One of the most powerful features is the ability to extract files from network streams based on multiple criteria: geo spatial (i.e. per country of origin), signature based, destination based, etc.

Files can be extracted from various protocols including FTP, HTTP, SMTP and IRC. Others, like BitTorrent and SMB, will be added in the near future.

Thanks to the powerful Bro language, the new file analysis framework can be combined with actions to do awesome stuff like looking files up in a malware hash registry, uploading them to VirusTotal or a Cuckoo sandbox, or even tweeting the results of your analysis!

To demonstrate some of its capabilities we’ll analyze sample3.pcap. As usual we start replaying the capture with Bro:

 $ bro -r sample3.pcap local


You should have a new log: files.log. Let’s have a look at its contents:

 $ cat files.log | bro-cut fuid, mime_type, filename, total_bytes, md5

FC7cMq18xeqtT9IGD3 application/zip - 31044 0cbc25ade65bcd7a28dd8ac62ea20186
 


We have a single entry. We don’t have a filename, but Bro has recorded the MIME type and even computed the MD5 hash for us!

Can we extract that file? Of course we can! Open your text editor of choice and save these lines as extract-all.bro
 event file_new(f: fa_file)
        {
                Files::add_analyzer(f, Files::ANALYZER_EXTRACT);
        }
 


Congratulations! You’ve written your first Bro script. Next, run the capture against Bro again, this time replacing the ‘local’ script with the new one you just created. You might need to run this as root:

 $ bro -r sample3.pcap extract-all.bro


This command will create a new directory extract_files where all files extracted will be located:

 $ ls extract_files

extract-HTTP-FC7cMq18xeqtT9IGD3
 


Let’s confirm what kind of file we’re looking at:

 $ file extract-HTTP-FC7cMq18xeqtT9IGD3 

extract-HTTP-FC7cMq18xeqtT9IGD3: Zip archive data, at least v2.0 to extract

$ xxd extract-HTTP-FC7cMq18xeqtT9IGD3 | head -10

0000000: 504b 0304 1400 0808 0800 208f 1c41 0000  PK........ ..A..
0000010: 0000 0000 0000 0000 0000 0d00 0000 6234  ..............b4
0000020: 612f 6234 612e 636c 6173 73c5 7979 5c9b  a/b4a.class.yy\.
0000030: 5b76 d8b9 9240 427c 8010 1606 db18 63fb  [v...@B|......c.
0000040: 6110 606c 24b0 0783 0149 0801 daf7 0d09  a.`l$....I......
0000050: edfb 2eb4 22e4 7979 33c9 bc74 3259 babd  ....".yy3..t2Y..
0000060: d725 93ce 6bac a493 f4bd e729 76e3 cc8c  .%..k......)v...
0000070: d32d 69d2 25d3 769a a64d 9ba6 49da a4cd  .-i.%.v..M..I...
0000080: d2c9 d2e9 b44d 9c73 0578 c36f dee4 af9a  .....M.s.x.o....
0000090: 9fbe 7bbe 7bcf 3dfb 39f7 9ecf 3fff a73f  ..{.{.=.9...?..?

$ xxd extract-HTTP-FC7cMq18xeqtT9IGD3 | tail -10

00078b0: db66 0000 6234 612f 6234 642e 636c 6173  .f..b4a/b4d.clas
00078c0: 7350 4b01 0214 0014 0008 0808 0020 8f1c  sPK.......... ..
00078d0: 4167 fdc8 0309 0700 00a7 0f00 000d 0000  Ag..............
00078e0: 0000 0000 0000 0000 0000 0034 7000 0062  ...........4p..b
00078f0: 3461 2f62 3465 2e63 6c61 7373 504b 0102  4a/b4e.classPK..
0007900: 0a00 0a00 0008 0000 208f 1c41 0000 0000  ........ ..A....
0007910: 0000 0000 0000 0000 0400 0000 0000 0000  ................
0007920: 0000 0000 0000 7877 0000 6234 612f 504b  ......xw..b4a/PK
0007930: 0506 0000 0000 0700 0700 9401 0000 9a77  ...............w
0007940: 0000 0000                                ....



While the first bytes in the file header (also known as magic numbers) suggest a ZIP file, the content of the file indicates the presence of Java class files. We can easily confirm that by executing:

 $ jar xf extract-HTTP-FC7cMq18xeqtT9IGD3


Which extracts the Java classes to the b4a directory.

We’ll leave the analysis of the Java classes for now, but can you identify whether this is a malicious file with the information we have at this moment? Well, let’s see what others know about this file. Remember the MD5 hash included in files.log? A quick search on VirusTotal reveals that we’re looking at a Java 0-day that was included in the Blackhole Exploit Kit (CVE-2012-4681).

As you can see, the possibilities of using the new file analysis framework are endless. Add a bit of knowledge of the Bro programming language, some python scripting goodness and a few APIs to malware analysis services and you have an awesome cocktail!

Bro, Threat Intelligence and CIF

Threat Intelligence is the new holy grail of security. Finding relevant and up-to-date information on malicious threats is key for all phases of the security lifecycle, from prevention to detection, incident response, containment and forensic analysis. The most common types of threat intelligence required by analysts are IP addresses, domains, URLs and file hashes that have been observed in relation to malicious activity.

Many organizations provide freely available data feeds that can be used with Bro’s new Intel Framework to log hits seen in network streams, such as those from ZeuS and SpyEye Tracker, Malware Domains, Spamhaus, Shadowserver, Dragon Research Group, and others.

While you could download these data feeds on a regular basis, maintaining an updated repository that is actually usable by your tools can be a daunting task, especially given the number of sources and disparity of formats used. This is where the Collective Intelligence Framework (CIF) comes to the rescue.

CIF is now on version 1 (stable) and allows you to parse, normalize, store, process, query, share and produce data sets of threat intelligence.

Having installed a few CIF servers, I can tell you it’s somewhat complex (well, maybe not complex, but rather tedious), so I will refer you to the official documentation if you want to set up your own instance (see the References below). For the rest of this section I will assume that you have access to a running instance of CIF.

To enable the Bro Intel Framework and allow the integration of CIF feeds, add these three lines to your local.bro file (in Security Onion that’s in /opt/bro/share/bro/site/local.bro):

 @load frameworks/intel/seen
@load frameworks/intel/do_notice
@load policy/integration/collective-intel



CIF is used mainly in two ways: either to query for data stored about an IP address, a domain or a url, or to produce feeds based on the stored data sets. The data feeds available in version 1 can be seen here:


In our example, we’ll generate a list of domains related to malware with a confidence level of 75 or greater. To make sure the output is formatted for Bro, append “-p bro”:

 $ cif -q domain/malware -c 75 –p bro > domain-malware.intel


Note that this command won’t work if you don’t have CIF installed. If you don’t have access to a CIF server you can grab a copy of a file formatted for Bro here (note that this will be outdated by the time you download it so use it for testing purposes only).

The figure below shows the contents of the file generated in CIF’s native format (without using the Bro plugin).



In order to import the new data feed we just generated we need to configure Bro’s Input Framework. To do so, add the following lines to your local.bro file:

 redef Intel::read_files += {
        "/opt/bro/feeds/domain-malware.intel",
};



Where /opt/bro/feeds/domain-malware.intel is where you have placed the file generated by CIF. You can add as many files as you want. For more information about different methods to refer to these .intel files check http://blog.bro.org/2014/01/intelligence-data-and-bro_4980.html.
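For reference, the .intel files read by the Input Framework are plain text with tab-separated columns, led by a #fields header naming each column. A minimal hand-written example (the entry below is illustrative, reusing the domain and source string from this post) might look like:

```
#fields	indicator	indicator_type	meta.source	meta.do_notice
winrar-soft.ru	Intel::DOMAIN	CIF - need-to-know	T
```

Columns must be separated by literal tab characters. The optional meta.do_notice field, when set to T, tells the do_notice script loaded earlier to raise a Bro notice whenever the indicator is seen.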

Now the Input Framework will read the information from our text-based file and will send it to the Intel Framework for processing.

To demonstrate the combined usage of Bro and CIF I have created sample4.pcap, a simple capture that contains a DNS query to a malicious domain (winrar-soft.ru). Let’s replay this capture with Bro after making all the changes described above:

 $ bro -r sample4.pcap local


See how a new file, intel.log, has been created:

 $ cat intel.log 

#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path intel
#open 2014-03-07-21-28-09
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p fuid file_mime_type file_desc seen.indicator seen.indicator_type seen.where sources
#types time string addr port addr port string string string string enum enum table[string]
1394223877.224159 C7J79H2v6YLWMaJEk6 192.168.68.138 54212 192.168.68.1 53 - - - winrar-soft.ru Intel::DOMAIN DNS::IN_REQUEST CIF - need-to-know
#close 2014-03-07-21-28-10



Since winrar-soft.ru was included in the feed generated by CIF and imported into Bro, we can now identify any connection attempt to this malicious domain.
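Because Bro logs are plain tab-separated text, hits like this can be pulled out with standard command-line tools. The sketch below recreates the hit line from the capture above as sample data and extracts the timestamp, indicator and source with awk, assuming the field order from the intel.log header; on a live sensor you would point awk at the real intel.log instead.

```shell
# Recreate the intel.log hit from the run above as sample data
# (a real intel.log also carries the '#'-prefixed header lines).
printf '1394223877.224159\tC7J79H2v6YLWMaJEk6\t192.168.68.138\t54212\t192.168.68.1\t53\t-\t-\t-\twinrar-soft.ru\tIntel::DOMAIN\tDNS::IN_REQUEST\tCIF - need-to-know\n' > intel.log

# Fields 1, 10 and 13 are ts, seen.indicator and sources;
# the '!/^#/' pattern skips Bro's comment/header lines.
awk -F'\t' '!/^#/ {print $1, $10, $13}' intel.log
# -> 1394223877.224159 winrar-soft.ru CIF - need-to-know
```

On systems where Bro is installed, the bundled bro-cut utility does the same more robustly: bro-cut ts seen.indicator sources < intel.log selects columns by name and is immune to field-order changes.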

Conclusions

Security analysts will never have enough tools or resources to fight malware. Bro and CIF are two of those invaluable resources that every malware analyst should be aware of.

As its creators state, Bro is much more than an IDS: it is a full-featured network analysis framework built around a powerful tool, the Bro programming language.

If you want to know more about Bro, CIF, Malware Analysis or Network Forensics check the References section.

About the author

Ismael Valenzuela (GCFA, GREM, GCIA, GCIH, GPEN, GWAPT, GCWN, GCUX, GSNA, CISSP, CISM, 27001 Lead Auditor & ITIL Certified) works as a Principal Architect at McAfee Foundstone Services EMEA. Find him on twitter at @aboutsecurity or at http://blog.ismaelvalenzuela.com

References

Pcap samples used in this post:
Catching “bayas” on the Wire: Practical Kung-Fu to detect Malware Traffic. SANS EU Forensic Summit:
Liam Randall’s samples, exercises and scripts:
Toolsmith: Collective Intelligence Framework:
The Bro Network Security Monitor:
Malware dumps and pcaps:
Collective Intelligence Framework:
Security Onion:
Remnux:

Tuesday, February 25, 2014

An Open Cyber Security Framework

By Mateo Martinez.

In this blog post we’re going to present a brief overview of the Open Cyber Security Framework Project.

There are a number of frameworks already on the market, like the new NIST “Cybersecurity Framework” or ISACA’s “Transforming Cybersecurity Using COBIT 5”, along with other paid or country-specific frameworks. However, there is no single open framework that governments and organizations can adopt as a reference model to start or improve on cybersecurity matters, and this is a real market need: many governments and organisations are building their cybersecurity frameworks entirely from scratch. This open framework is being created with governments and organizations around the globe, with the goal of becoming the de facto reference model both for those just starting out and for those improving or optimizing an existing cybersecurity framework. The main web page of the project is www.ocsfp.org, and version 1 of the core framework is expected by the end of March 2014. The OWASP Open Cyber Security Framework Project’s aim is to create a practical framework on cybersecurity.

Creating, implementing and managing a cybersecurity framework has become a need (or perhaps a must) for many governments and organizations. The Open Cybersecurity Framework Project (OCSFP) is an open project dedicated to enabling organizations to conceive or improve a cybersecurity framework. All of the information in OCSFP is free and open to anyone, and everyone is invited to join and collaborate in order to improve the content made available worldwide. It is licensed under the Creative Commons Attribution-ShareAlike 3.0 license: you can copy, distribute and transmit the work, adapt it, and use it commercially, provided that you attribute the work; if you alter, transform, or build upon it, you may distribute the resulting work only under the same or a similar license. OCSFP has been an OWASP project since February 2014.

The main objective of the project is to provide a practical cybersecurity strategy organized into three practical phases, as shown in the following figure:



There is a team of active contributors working on the core framework, and there is an interesting roadmap of releases for 2014. Below is the list of open documents that are under development and will be released during the year. There is an open mailing list for those interested in collaborating with OCSFP.

The OCSFP contributors are working hard on the first release of the Framework Core, but open frameworks for specific industries, such as Healthcare, Government, Aeronautics, Telcos and Critical Infrastructure, are also under development. The first versions of all of them will be released during 2014.

Open Cybersecurity Frameworks
  • Open Cybersecurity Framework Core
  • Open Cybersecurity Framework Core Implementation Guidelines
  • Open Cybersecurity Framework for IPv6
  • Open Cybersecurity Framework for Governments
  • Open Cybersecurity Framework for Enterprises
  • Open Cybersecurity Framework for Critical Infrastructure
  • Open Cybersecurity Framework for Aeronautics
  • Open Cybersecurity Framework for Oil & Gas
  • Open Cybersecurity Framework for Healthcare
  • Open Cybersecurity Framework for Telcos
  • Open Cybersecurity Assessment
  • Open Cybersecurity Quick Self-Assessment
  • Open Cybersecurity Quick Reference Guide
  • Open Cybersecurity Free Tools
  • Open Cybersecurity Incident Response Management Framework
  • Open Cybersecurity Framework for Small Biz


For those who are just evaluating their current cybersecurity status, there is a quick online assessment with some simple questions about current information security programs and implemented technologies. With the first release of the framework core, a complete assessment will be available online, together with a table of recommendations for the first steps in developing a cybersecurity strategy that takes your current maturity level into account.

Some of the available questions in the current online draft are:
  • Do you have a Data Loss Prevention Process?
  • Do you have an Incident Response Program?
  • Do you have a Vulnerability Management Process?
  • Do you train your Response Teams in Malware Analysis and Forensics?
  • Do you have a NG Firewall installed?
  • Do you have a dedicated IDS or IPS?
  • Do you have a Data Loss Prevention Solution implemented?
  • Do you have a Web Proxy installed?
  • Do you have full disk encryption on your laptops?
  • Do you have a Host Firewall on your organization’s computers?
  • Do you have a Host IPS on your organisation’s computers?
  • Do you have a vulnerability scanner?
  • Do you have any Log Management / SIEM solution?


When you go deeper into the framework, you will notice that beyond the three-phase strategy there is a set of activities to be implemented as part of the cybersecurity strategy:
  • Security Strategy Roadmap
  • Risk Management
  • Vulnerability Management
  • Security Controls
  • Arsenal
  • Incident Response Management
  • Data Loss Prevention
  • Education & Training
  • Business Continuity & Disaster Recovery
  • Application Security
  • Penetration Tests


Last but not least, the project has created a matrix mapping the controls of the SANS Top 20, the NIST Cybersecurity Framework and the Federal Communications Commission guidance to OCSFP; other well-known market frameworks are being mapped to OCSFP activities too:



The first release of the framework core will be published at the end of next month and will be available worldwide, helping organisations and governments improve their security posture faster.