Man in the Middle Hacking and Transport Layer Protection
What are Request Headers?
When you browse to a website, your browser sends a request to the server, beginning with a request header. This header typically contains information such as the resource you want to view, the method for retrieving it, details about the browser you are using, accepted languages, the referrer and any cookies that have been set by the website. In response, the server sends back a response, starting with a header followed by the actual content. The response headers contain normally hidden information about the server (version and operating system), content types, caching information and cookies.
This information is available to be read, and manipulated, by various means in the long chain of communication between your browser and the server. These links are illustrated in the diagram below.
Browser > OS > NIC > Router > Exchange > ISP > Global ISP > Remote ISP > Local Exchange > Router > NIC > OS > Web Server Application.
At any point along the line, there is a possibility that data on the wire can be intercepted, read or even manipulated.
Between your browser and your network card (Ethernet or Wi-Fi) there is a possibility that malware, viruses or spyware can intercept communications to and from your browser. It is in this layer that Fiddler acts, capturing HTTP traffic. This layer is relatively easy and cheap to attack, as seen by the prevalence of computer viruses and malware.
After the request has left your computer through the NIC, the next step is the router. Is this configured correctly? Have the DNS entries been hijacked? Probably not an issue for your home connection, but what has been configured, or hacked, on a public network? Website addresses are converted to a numerical IP address which computers use to communicate. DNS hijacking means that when you request a website, for example, www.google.com, a different IP address is used instead of the real one. The result of this is that instead of going to the Google server, you are going to another, malicious server pretending to be Google.
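The effect of DNS hijacking can be sketched with a toy resolver: whoever controls the lookup table controls which machine you connect to. This is purely illustrative; the IP addresses below are made up (the "attacker" address is from the reserved TEST-NET range), and real resolvers are far more involved.

```python
# A toy resolver: whoever controls this table controls where you connect.
# IP addresses here are invented for illustration only.
LEGITIMATE_DNS = {"www.google.com": "142.250.70.36"}
HIJACKED_DNS = {"www.google.com": "203.0.113.66"}  # attacker's server

def resolve(hostname, dns_table):
    """Look a hostname up in the given DNS table, as a resolver would."""
    return dns_table.get(hostname)

# The browser asks for exactly the same name in both cases...
real_ip = resolve("www.google.com", LEGITIMATE_DNS)
fake_ip = resolve("www.google.com", HIJACKED_DNS)

# ...but the hijacked resolver silently sends it to a different machine.
print(real_ip)
print(fake_ip)
```

From the browser's point of view nothing has changed, which is exactly what makes this attack effective.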
Once the request has been passed from the router, it is now out in the wild. Normally the data is transmitted over telephone wires or fibre optic lines to an exchange or network of exchanges. Any of these exchanges or the line itself can be attacked and compromised. Attacks on this kind of infrastructure require much more specialised knowledge and access to the hardware itself, so are more costly and generally much less likely.
The third main vulnerable area is the ISP itself. The ISP is able to log, analyse and record every packet of information sent and received by your router. It is also capable of manipulating DNS, and although this is often done for legitimate reasons, there are cases where it is not (see the Turkish hijacking of DNS providers).
The request is then sent through cables to the ISP of the web server, and then to the server itself. The server processes the request and the response data is transmitted back along the same lines.
These types of attack are called Man-in-the-Middle (MitM) attacks.
Using Fiddler to Intercept Request Headers
Let's have a quick look in Fiddler at what kind of information is readable and potentially open for attack by a man in the middle.
Firstly, open up Fiddler if you haven't done so already. The screen is divided into two sections. The left-hand side shows each HTTP request and response event; the right-hand side shows information about the currently selected event. You can use F12 (or File > Capture Traffic) to start or stop capturing traffic, and Ctrl+X (or Edit > Remove > All Sessions) to clear the list - useful, as the list fills up quickly.
Start capturing traffic and browse to a website. You should see an entry for the site request and probably quite a few more for the other resources on the page - images, web fonts, stylesheets, scripts and so on. Stop capturing traffic now and we'll have a look at the request header. Select the event whose host and URL match the address you navigated to. On the right-hand side click the Inspectors tab, then the Headers tab underneath. In this view, you can see all the data that was sent to the server.
A request to this website looked like the content below
GET http://timtrott.co.uk/ HTTP/1.1
Host: timtrott.co.uk
Connection: keep-alive
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/48.0.2564.116 Safari/537.36
Accept-Encoding: gzip, deflate, sdch
Accept-Language: en-GB,en-US;q=0.8,en;q=0.6
From this we can see that the HTTP method was GET, the request was to http://timtrott.co.uk/ using the HTTP/1.1 protocol. We can see information about what data formats my browser accepts, the user agent, and the languages supported.
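To see how little structure there is to hide behind, here is a minimal sketch of how a man in the middle could parse a captured request into its parts. The `parse_request` helper is hypothetical and deliberately simplistic - it ignores folded headers, the message body and other edge cases.

```python
def parse_request(raw):
    """Split a raw HTTP request into its request line and header fields."""
    lines = raw.strip().splitlines()
    method, url, version = lines[0].split(" ", 2)
    headers = {}
    for line in lines[1:]:
        name, _, value = line.partition(": ")
        headers[name] = value
    return method, url, version, headers

raw_request = (
    "GET http://timtrott.co.uk/ HTTP/1.1\r\n"
    "Host: timtrott.co.uk\r\n"
    "Connection: keep-alive\r\n"
    "Accept-Language: en-GB,en-US;q=0.8,en;q=0.6\r\n"
)

method, url, version, headers = parse_request(raw_request)
print(method, url, version)   # GET http://timtrott.co.uk/ HTTP/1.1
print(headers["Host"])        # timtrott.co.uk
```

A few lines of string handling are all it takes to turn an intercepted request back into structured, searchable data.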
Underneath this request header is the response header. From here you can see the information that is sent back to the browser. Below the header is the body content - the actual page content requested.
HTTP/1.1 200 OK
Date: Wed, 09 Mar 2016 15:58:49 GMT
Server: Apache
Last-Modified: Wed, 09 Mar 2016 15:57:25 GMT
Accept-Ranges: bytes
Content-Length: 30188
Cache-Control: max-age=3, must-revalidate
Expires: Wed, 09 Mar 2016 15:58:52 GMT
X-Clacks-Overhead: GNU Terry Pratchett
Vary: Accept-Encoding,Cookie
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Content-Type: text/html; charset=UTF-8
From this we can see information about the response - the first line is the status code (status 200 means everything is ok). We can see the date and time the page was served, the server software used, how much data was sent and a few other bits of header information. Everything seems to be fairly inconspicuous here, but let's see what happens when we try to login to a website.
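Before moving on, it's worth noting how simple the status line itself is to unpack. A minimal sketch (the `parse_status_line` helper is hypothetical):

```python
def parse_status_line(line):
    """Split the first line of an HTTP response into version, code and reason."""
    version, code, reason = line.split(" ", 2)
    return version, int(code), reason

version, code, reason = parse_status_line("HTTP/1.1 200 OK")
print(version)  # HTTP/1.1
print(code)     # 200
print(reason)   # OK
```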
Insecure vs Secure Login Pages
I'm going to hit the login page, and the request I get back from the server has a status code of 401, indicating that authorisation is required. Because my browser detected this status code, it showed me a login box asking for my username and password. I carefully entered these, restarted network capture in Fiddler, and clicked the login button.
Below is the captured request header for the login page. Some data has been omitted to protect my security.
GET http://timtrott.co.uk/login HTTP/1.1
Host: timtrott.co.uk
Connection: keep-alive
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/48.0.2564.116 Safari/537.36
Accept-Encoding: gzip, deflate, sdch
Accept-Language: en-GB,en-US;q=0.8,en;q=0.6

log=myusername&pwd=password&submit=Log+In
Looking at this, we can see that the body of the request contains the login name and password sent in plain text. Some sites will encode this using Base64; however, most tech-savvy people will recognise a string ending with = or == as Base64 encoded. Clearly, this isn't a secure way of sending login details. Imagine what a hacker could do with this information!
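To show just how little protection Base64 offers, here is a quick sketch using Python's standard library. The credentials are the placeholder ones from the captured request above.

```python
import base64

# Placeholder credentials, as they might appear in a captured request body.
credentials = b"myusername:password"

# "Encoding" with Base64 is not encryption - anyone can reverse it.
encoded = base64.b64encode(credentials)
print(encoded)  # note the trailing '=' padding, the telltale sign

# Decoding requires no key, no secret, nothing - just one function call.
decoded = base64.b64decode(encoded)
print(decoded == credentials)  # the original credentials, recovered instantly
```

Base64 is an encoding for transporting binary data, not a security measure, and it should never be treated as one.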
This is an illustration of why login forms should not be loaded insecurely, that is, over HTTP. Any page which deals with sensitive information should be served over secure HTTPS, which uses public key cryptography to establish an encrypted session between the two machines, making the data very difficult to decrypt if it is intercepted.
Developer's Note: Every page that uses SSL should check whether it IS actually running over SSL, and if not, redirect to the SSL version. Too many sites link to their login or registration pages using the HTTPS protocol, but the pages themselves never check whether SSL is active, meaning they can also be loaded insecurely over plain HTTP.
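The check in the note above can be sketched in a framework-agnostic way. The `enforce_https` helper below is hypothetical - in a real application you would use your framework's own request object and issue an HTTP 301 redirect - but the logic is the same: if the scheme isn't HTTPS, compute the secure equivalent and redirect there.

```python
from urllib.parse import urlparse, urlunparse

def enforce_https(requested_url):
    """Return None if the URL is already HTTPS, otherwise the HTTPS
    redirect target the server should send back (e.g. via a 301)."""
    parts = urlparse(requested_url)
    if parts.scheme == "https":
        return None  # already secure, serve the page normally
    return urlunparse(parts._replace(scheme="https"))

print(enforce_https("http://timtrott.co.uk/login"))   # https://timtrott.co.uk/login
print(enforce_https("https://timtrott.co.uk/login"))  # None
```

In production, this should be paired with the Strict-Transport-Security header so that returning browsers never make the insecure request in the first place.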
When we make the same request to the login page, this time secured with an SSL certificate, we can see in Fiddler that no details are readable in either the request body or the request header, so these details are secure.
This is a really simple method to demonstrate the risks of sending data over an insecure connection.