Wednesday, 7 November 2007

What Every Engineer Needs to Know About Web Security and Where to Learn It

Presenter: Neil Daswani



This talk discusses recent trends in security, and what every engineer needs to know to prevent the most significant emerging threats such as cross-site scripting and SQL injection attacks. Just as every engineer might use object-oriented design principles to achieve extensibility and re-usability, every engineer needs to employ principles such as the principle of least privilege, fail-safe stance, and protecting against the weakest link to achieve security. Instead of focusing on "tips" and "tricks" that allow you to "band-aid" the security of your systems, we discuss how to derive defenses based on the application of security principles, such that you can determine how to deal with new threats as they come along or application-specific threats that might be relevant to your domain. Finally, we present some statistics on the current state of software security vulnerabilities, and discuss existing and upcoming challenges in the field of software security.

How to Break Web Software

Presenter: Mike Andrews



Mike Andrews looks at how web applications are attacked, walks through a testing framework for evaluating the security of an application and takes some deep-dives into a few interesting and common vulnerabilities and how they can be exploited.

Monday, 5 November 2007

Analysis of Compromised Linux Server

These slides demonstrate the process used to analyze a compromised (hacked) Linux Server.

How not to get hacked!

The common ways that web applications can be attacked and what you need to do to prevent it.

Tactical Exploitation - Black Hat USA 2007

Thursday, 25 October 2007

Advanced SQL Injection & Protection



Anti-Forensic Rootkits

Incident response and digital forensics are fast-moving fields that have made significant progress over the last couple of years. That means new techniques and tools; one of these is live forensic capture. Live forensic capture means taking an image of a machine while the machine is still running. This is brilliant for investigators and is becoming common practice. Unfortunately, the rootkit premise of "whoever hooks lowest wins" kicks in. So, despite assurances from major forensics software vendors, it is possible to give an investigator seemingly valid but completely spurious data.

To prove this isn't just theoretical (as has been claimed), I created an implementation called "ddefy", which is a kernel-mode anti-forensic rootkit for Windows systems. This talk will be relatively low level, covering NTFS internals, NT storage architecture, Windows kernel rootkit methods, forensic techniques and their corresponding anti-forensic counterparts.


Tuesday, 23 October 2007

sqlninja v0.1.2 released - sqlserver injection and takeover

sqlninja is a specialized tool for exploiting SQL injection bugs in web applications that use Microsoft SQL Server as a backend.

The main goal of this program is to provide shell access on the target database server, even in a very hostile environment. sqlninja can help the penetration tester to automate the process of taking over a database server once an SQL injection vulnerability has been discovered.

v0.1.2 features include:

  • SQL server fingerprinting and enumeration of user privileges
  • sa account bruteforce and privilege escalation
  • custom xp_cmdshell creation
  • custom executable upload using only HTTP requests
  • reverse tcp/udp portscan of the attacking machine to find an open port for reverse tunneling
  • forward and reverse bindshell ability, tcp and udp supported
  • DNS command tunneling / pseudo-shell - covert channel to bypass firewall restrictions

For a quick overview of what sqlninja is all about you can check out this flash demo.

sqlninja is written in Perl and should run on any UNIX platform with a Perl interpreter, as long as all needed modules have been installed. sqlninja is released by the author icesurfer under the GPL v2 license.

http://sqlninja.sourceforge.net

Metasploit framework version 3.0 released

The Metasploit Framework ("Metasploit") is a development platform for creating security tools and exploits. Version 3.0 contains 177 exploits, 104 payloads, 17 encoders, and 3 nop modules. Additionally, 30 auxiliary modules are included that perform a wide range of tasks, including host discovery, protocol fuzzing, and denial of service testing.

Metasploit is used by network security professionals to perform penetration tests, system administrators to verify patch installations, product vendors to perform regression testing, and security researchers world-wide. The framework is written in the Ruby programming language and includes components written in C and assembler.

Metasploit 3 is a from-scratch rewrite of Metasploit 2 using the Ruby scripting language. The development process took nearly two years to complete and resulted in over 100,000 lines of Ruby code.

The latest version of the Metasploit Framework, as well as screen shots, video demonstrations, documentation and installation instructions for many platforms, can be found online at http://framework.metasploit.com/

Prioritizing Security Assessments in 20 Minutes or Less!

Overview
Imagine you’re the CSO of an organization, or work for the CSO of your organization, and you’ve been asked to come up with a prioritized list of systems/applications that need to be assessed this year, and to justify why they need to be assessed. How do you start? Let me show you.


Your first thought might be consulting firms. Couldn’t they help prioritize security assessments? Consulting companies certainly can (they can do anything when the price is right: conduct a full inside-and-out regulatory compliance check of your IT operations, mow the lawn, paint the side of the building …), but there are several reasons why creating the prioritization yourself is better. First, consultants are by definition “outsiders” to your company and won’t have the same understanding and appreciation of your systems/applications/data, or their criticality and interdependencies, as you will. Second, even if consultants could build up that knowledge about your company, it would be on your dime: you would essentially be paying them to learn about your company instead of to help your company. Finally, most vendors will place the assessment projects that will result in the most potential billable hours for them at the front of the “priority listing”, which often helps their bottom line (revenue) and, if you’re lucky, maybe even yours (reducing critical risks to your business).

So now that consultants are out the door, what about using tools and surveys to come up with a prioritization? These can definitely help, but here are some reasons why they may not be for you:


  • Threat Modeling Tools. Threat modeling helps you understand the threats to a system and the prioritization of those threats. You could reasonably base your assessment prioritization on the systems/applications/data that yield the most high-risk threats, but in order to do so you need someone who understands the threat modeling process to create the models (more consulting hours) and you need to invest significant time completing the threat models for every system/application (even more consulting hours) in order to make that prioritization. If you’ve got the budget/time/inclination, by all means go the expensive threat modeling services route, but if you don’t, read on!


  • Surveys. Most surveys work by asking questions along the lines of “does this risk/threat/hazard apply to you?” In other words, you’re basically being asked “is this a bad thing for you?” and you need to answer yes or no. Each “bad thing” is given a weight, and at the end of the survey all the “bad things” are tallied up and the system/application with the greatest weight is given the highest priority. The one with the second highest weight is given second priority, and so on. One problem: you’re trying to quantify the bad, and I talk about the perils of doing this in my blog post about Input Validation.
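The weighted tally that surveys use can be sketched in a few lines. This is a minimal illustration; the questions and weights below are invented for the example, not taken from any real survey:

```python
# Survey-style scoring: each "bad thing" that applies contributes its weight,
# and the system/application with the biggest tally gets the highest priority.
def survey_score(answers, weights):
    """answers maps each 'bad thing' to True/False; weights gives its weight."""
    return sum(weights[bad] for bad, applies in answers.items() if applies)

# Invented questions and weights for illustration only.
weights = {"stores card data": 5, "internet facing": 3, "unpatched": 4}

app_a = survey_score(
    {"stores card data": True, "internet facing": True, "unpatched": False}, weights)
app_b = survey_score(
    {"stores card data": False, "internet facing": True, "unpatched": True}, weights)

# app_a tallies 8 and app_b tallies 7, so app_a gets first priority; note that
# nothing in the tally says app_a's "bad things" are actually worse for you.
```

This makes the objection above concrete: the whole ranking hinges on weights someone assigned to "the bad", which is exactly the quantification problem.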

Now we’re left with what I think is the best prioritization system of them all: you. In this article, I’ll show you my own technique that you can use to quickly and systematically determine the priority of security assessments based simply on what you know about your organization (business objectives, processes, data, etc.), not the unknowns (threats, vulnerabilities), and with little to no in-depth security expertise required.


Prioritizing Security Assessments
Our objective is to come up with a prioritized list of security assessments to conduct (what), show how it aligns closely with the prioritization of overall business objectives (why), and be able to easily explain to the person you’re asking for budget how you came to that list.

This method takes only known data about your organization and utilizes the following to prioritize security assessments:

  • Explicit Dependencies. Aspects of your organization without which it does not function at all – kaput, nada, lights out, etc.

  • Implicit Dependencies. Aspects of your organization without which it still continues to function, but in a degraded way.

While it sounds super complex and one beast of a method to execute, it’s really not. To get started, all we need to do is enumerate the following:


  • Your organization’s business objectives

  • Applications and Systems

  • Data and Information

Step 1 (Minutes 0-5): Enumerate Your Business Objectives
In this step simply write down your organization’s business objectives. Your organization will probably have multiple ones like increase revenue through direct sales, become industry leader, etc. Doesn’t matter, just write them down.

Next, rank those business objectives as high, medium and low. It’s up to you to determine what constitutes high, medium or low in the context of your organization (since you know it the best). Some of my customers like to rank business objectives in terms of the amount of revenue that objective represents to the organization but that might not fit your scenario. Here’s the standard ranking that I like to use:


  • HIGH – Business objectives that are critical to your organization.

  • MEDIUM – Business objectives that are important to your organization.

  • LOW – Business objectives that are minimally important in the context of your organization.

Step 2 (Minutes 5-10): Enumerate Systems and Applications (Explicit Dependencies)
After you’ve written down your business objectives and ranked them, write down the systems and applications that fulfill those business objectives. These are what I call explicit dependencies, because without these systems and applications your organization stops. For instance, if one of your objectives is to increase direct online sales, an application or system that might fulfill that objective is your procurement web application, or the database that tracks sales and collects payments.

As we did in that last step, rank the processes, systems and applications. Again, you can decide what each system and application gets categorized as, but be sure to keep the same high, medium and low categories.

  • HIGH – Explicit dependencies that are critical to the business objective they fulfill.

  • MEDIUM – Explicit dependencies that are important to the business objective they fulfill.

  • LOW – Explicit dependencies that are minimally important to the business objective they fulfill.

Step 3 (Minutes 10-15): Enumerate Data (Implicit Dependencies)
After you’ve finished with systems and applications, now write down the data used by those systems and applications that fulfill the business objectives that you noted in Step 1. These are what I call implicit dependencies because in the absence of them your business still runs, but maybe in a degraded state. It can be data your business generates (accounting records for instance) or data that is given to your business (credit cards for instance). Any data your organization uses, creates or reads in. You get the idea.


Now rank the data. Again, it is up to you how you classify what constitutes HIGH, MEDIUM and LOW. I like to use this ranking system:

  • HIGH – Implicit dependencies that are critical to the explicit dependency that uses them. Sometimes referred to as high business impact (HBI) data.

  • MEDIUM – Implicit dependencies that are important to the explicit dependency that uses them. Sometimes referred to as medium business impact (MBI) data.

  • LOW – Implicit dependencies that are minimally important to the explicit dependency that uses them. Sometimes referred to as low business impact (LBI) data.

Step 4 (Minutes 15-20): Making Sense of it All, Prioritizing Your Security Assessments
The final step now is to tie all that data together and come up with a security assessment prioritization. Start by taking your business objective rankings and ordering them according to rank, so you have a HIGH bucket, a MEDIUM bucket and a LOW bucket.

Now for each bucket, write down the systems/applications (explicit dependencies) that fulfill that business objective. If you have an explicit dependency that fulfills more than one business objective, err on the side of caution and write it down in the bucket with the highest rank. For instance, if you have an application that fulfills both a HIGH business objective and a MEDIUM business objective, then you want to write it down in the HIGH business objective bucket. At this point you have a set of systems/applications that need to be assessed first (the HIGH bucket), second (the MEDIUM bucket) and last (the LOW bucket).
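The bucket assignment just described can be sketched in a few lines; a minimal illustration, with application names and rankings invented for the example:

```python
# Assign each application to the bucket of its highest-ranked business
# objective: err on the side of caution when an app serves several objectives.
RANK_ORDER = {"HIGH": 0, "MEDIUM": 1, "LOW": 2}

def assign_buckets(app_objectives):
    """app_objectives maps an application to the ranks of the objectives it fulfills."""
    buckets = {"HIGH": [], "MEDIUM": [], "LOW": []}
    for app, ranks in app_objectives.items():
        highest = min(ranks, key=RANK_ORDER.get)  # HIGH beats MEDIUM beats LOW
        buckets[highest].append(app)
    return buckets

buckets = assign_buckets({
    # fulfills both a HIGH and a MEDIUM objective, so it lands in HIGH
    "Order entry application": ["HIGH", "MEDIUM"],
    "Intranet wiki": ["LOW"],
})
```

After this step the HIGH bucket holds the systems/applications to assess first, exactly as described above.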

But what about prioritization within buckets? For instance, you know that all the systems/applications in the HIGH bucket need to be reviewed before the MEDIUM bucket, but in what order do you need to review the systems/applications within the HIGH bucket? To do that you can use the following chart:

[Chart: assessment priority determined by the explicit dependency's rank (rows) against its highest implicit dependency's rank (columns)]

To use this chart, look at the ranking of the explicit dependency, then take the highest ranking among its implicit dependencies (your application may use multiple pieces of data, all ranked differently) to determine the assessment priority within the business objective bucket. For instance, say you had a HIGH business objective containing two explicit dependencies: one ranked HIGH (Application X) and another ranked MEDIUM (Application Y). Say the highest implicit dependency that Application X uses is LOW; looking at ROW 1, COLUMN 3, you'll see that it has a PRIORITY 1 rating. Also say that Application Y's highest implicit dependency rates MEDIUM; then by our chart (ROW 2, COLUMN 2) this application has a PRIORITY 2 rating. So our prioritization looks like this:

"For our high business objectives, Application X should be assessed before Application Y ..."

That’s it! You now have a security assessment prioritization that closely aligns with the prioritization of your organization’s business objectives. So now when you go back to your boss, you can say “here’s a prioritized list of security assessments for this fiscal year, here’s how they align with our organization’s business objectives, and here’s why I need the $(some out of this world budget number) budget to pay Impacta LLC to do those assessments for us :P”.

Making This Method Work for You
You’ll notice that my method is biased more towards systems and applications than towards data. Does this mean that the data (say, social security numbers) is less important than systems and applications? Not at all; the chart above just works well for organizations where availability is most important (think of organizations like eBay, Amazon, MSN, UPS, etc.). If confidentiality and integrity are more important and data is king in your line of business (perhaps you’re Experian, TransUnion, or the US Social Security Administration, etc.), just switch data to be the explicit dependencies and systems/applications to be the implicit dependencies.

Conclusion
In this article I showed you an easy way to create a prioritized listing of security assessments based on known and well-understood information about your organization. The resulting prioritization is closely aligned to business objectives and lets you easily justify and explain how you came to it. The advantage of this method is that it lets you (the real expert on your organization) make the call on what is important and what’s not, but now in a systematic, consistent and standard way.


The next step is to actually conduct the security assessments, and another advantage of the method we just discussed is that most of the data we collected here is exactly the data needed to lead into most security assessments. So not only are we not re-inventing the wheel, we’re also streamlining our overall security assessment efforts.

In a later post, I’ll talk about the differences between common security assessment techniques such as penetration tests (sometimes referred to as “ethical hacking”), vulnerability scanning and audits.

Notes From the Author
Feel free to let me know how this method worked for you either by leaving a comment with this article or at
ContactUs@impactalabs.com. Hey, who knows … with enough interest, maybe I’ll publish a tool that automates this process for you. Enjoy!

Building an InfoSec lab, on the cheap

So, you want to experiment with the latest pen-testing tools, or see how new exploits affect a system? Obviously you don’t want to run these sorts of tests on your production network or systems, so a security lab is just the thing you need. This article is my advice on how to build a lab for testing security software and vulnerabilities while keeping it separate from the production network. I’ll be taking a mile-high overview, so you will have to look into much of the subject matter yourself if you want a step-by-step solution. I’ll try to keep my recommendations as inexpensive as possible, mostly sticking to things that could be scrounged from around the office or out of a dumpster. The three InfoSec lab topologies I’ll cover are: Dumpster Diver’s Delight, VM Venture and Hybrid Haven.


Dumpster Diver’s Delight: Old hardware, new use

The key idea here is to use old hardware you have lying around to create a small LAN to test on. Most computer geeks have a graveyard of old boxes friends and family have given them, and businesses have older machines that are otherwise condemned to be hazardous materials waste. While what you will have to gather depends on your needs, I would recommend the following:

1. A NAT box: Any old cable/DSL router will work, or you can dual-home a Windows or Linux box for the job and set up IP masquerading. The reason you want to set up a separate LAN behind a NAT box is so that things you do on the test network don’t spill over onto the production network, but you can still access the Internet easily to download needed applications and updates. Also, since you will likely have un-patched boxes in your InfoSec lab so you can test out vulnerabilities, you don’t want them sitting on a hostile network getting exploited by people other than you. You can punch holes into the test network by using the NAT router’s port forwarding options to map incoming connections to SSH, Remote Desktop or VPN services inside the InfoSec lab. This way you can sit outside the InfoSec LAN at your normal workstation on the production LAN and just tunnel into the InfoSec lab to test things.

2. A bunch of computers/hosts: Whatever you want to test, be it computers, print servers or networking equipment. Boxes for a security lab do not have to be as up to snuff as production workstations. If you are doing mostly network related activities with the hosts, speed becomes less of an issue since you aren’t as annoyed by slow user interfaces.

3. A KVM (Keyboard/Video/Mouse) switch or plenty of monitors: Use what you have, but my recommendation is to get a KVM switch since it will take up less space and consume less power than having a monitor for each computer.

The problem with the “Dumpster Diver’s Delight” approach is it takes up a lot of desk space. Also, if you are conscious of your monthly power bill you may not want to run a whole lot of boxes 24x7.

VM Venture: One big box, one little network

Why not have one powerful box instead of a bunch of old feeble ones? VMs (Virtual Machines) allow you to have one workstation act as many boxes running different Operating Systems. I’ve mostly used products from VMware, but Microsoft Virtual PC, VirtualBox, QEMU or Parallels may be worth looking into depending on the platform you prefer. I personally recommend VMware Player and VMware Server, both of which are free:

http://www.vmware.com/products/player/
http://www.vmware.com/products/server/

VMware Server has more features (VM creation, remote management, revert state, etc.), but I’ve found it to run a little slower than VMware Player. The way VMware works is you have a .VMX file that describes the virtual machine’s hardware, and .VMDK file(s) that act as the VM’s hard drive. Setting up your own VMs is easy, and I have videos on my site about it:

http://irongeek.com/i.php?page=security/hackingillustrated

Also check out some of VMware’s pre-made VMs:

http://www.vmware.com/vmtn/appliances/

Using VMware has some huge advantages:

1. Did a tested exploit totally hose the box? Just revert the changes or restore the VM from a backup copy.

2. The VM is well isolated to the point that malware has a hard time getting out. Yes, there is research into malware detecting and busting out of VMs, but VMs still add an extra level of isolation.

3. It’s a great way to test out Live CDs/DVDs without taking the time to burn them.

4. VMware presents itself as pretty generic hardware, so installing an Operating System is pretty easy since you don’t have to play driver bingo like you would with some older hardware. That said, installing the VMware Tools add-on into your VMs will help make them far more functional.

5. You can configure a virtual network in one of three modes to allow you to have a virtual test network, all on one box:

Bridged: The VM acts as if it’s part of your real network. Useful if you follow the hybrid approach I’ll mention later.
NAT: Your VM is behind a virtual NAT router, protecting it from the outside LAN but still allowing other VMs run on the same machine to contact it.
Host-Only: You would want to choose this option if you don’t want the VM to be able to bridge to the Internet using NAT. It would be a good idea to use this option if you are testing out any worm or viral code.

Now you have an InfoSec test network on just one machine, making a much smaller desktop footprint and most likely consuming less power. The big thing to keep in mind when you plan to use VMs for your lab is memory. You want as much RAM as possible in your test machine so you can split it between the different VMs you will be running simultaneously. Depending on how you pare down the Operating Systems installed in your VMs, you will need different amounts of memory. I recommend dedicating the following amounts of RAM to each VM:

Linux, 128MB: Could be more or less depending on the desktop interface you use and what services you decide to run.

Windows 9x, 64MB: It should feel quite spry.

Windows 2000/2003/XP, 128MB: yes, you would want more if you can get it, but you can get away with 128MB if necessary.

Windows Vista, 256MB: Don’t send me hateful emails, it can be done. You have to set it to at least 512MB to install Vista, but thereafter you can shrink it down to only 256MB. It’s ugly, but it works.

So, let’s say you want to have Ubuntu Linux, Windows 2003 Server, XP and Vista all running at the same time as guest Operating Systems, while Windows XP is used in the background as VMware’s host OS. That would be 128+128+128+256 = 640MB on top of whatever the host OS needs. Plan on getting at least 2GB of memory for your VM box if you can afford it.
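The memory arithmetic is easy to script when sizing a box; a quick sketch using the per-VM figures recommended above:

```python
# RAM budget for the example lab (all figures in MB), using the per-VM
# recommendations above. Host OS overhead comes on top of this total.
guest_ram = {
    "Ubuntu Linux": 128,
    "Windows 2003 Server": 128,
    "Windows XP (guest)": 128,
    "Windows Vista": 256,
}
total = sum(guest_ram.values())
print(f"{total}MB for guests, plus whatever the host OS needs")
```

Swap in your own guest list and figures; if the total plus host overhead approaches your physical RAM, expect heavy swapping.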

Also, as your VMs’ hard drives start to fill up, the .VMDK file will swell, so a large hard drive will be needed. The CPU is not as big an issue as you might think, but faster is always better, so go dual core if you can and look into getting a processor that supports AMD virtualization (AMD-V) or Intel VT (IVT).

http://en.wikipedia.org/wiki/X86_virtualization

Hybrid Haven: Best of both worlds

There’s no reason why you have to take just one of the above approaches. If the VMware host box is put on the same LAN as the rest of the test network, and the VMs are set to use the Bridged networking option, then you can use both approaches at the same time to create a diverse test network.

Conclusion

In this article I’ve covered how to use spare computers and VMs to create an InfoSec testing environment. I hope you have found it useful. Most of my advice only helps if you are testing the security of workstation and server Operating Systems, services and applications. I’d love to hear from anyone with advice on learning about higher-end routers and switches without having access to the real thing.