EC Council CEH 312-50 – Advanced Hacking and Exploitation Techniques Part 1
July 8, 2023

1. Introduction to Advanced System Exploitation

In this section on advanced exploit techniques, we'll learn how exploits actually work. We'll cover buffer overflows and buffer overflow prevention, processor evolution, stacks in depth, the different types of buffer overflows, heap memory, the stages of exploit development, and prevention measures such as Data Execution Prevention. We'll also discuss the Metasploit Project and Core Impact in depth.

2. How do Exploits Work?

Actually, this particular section, along with the cryptography section, is one of the ones I find the most interesting. It's basically how the hackers get into our systems, and I'm going to explain that. But first we're going to have to understand a little bit about how the computer actually works. You're probably saying to yourself, gosh, Tim, if I don't know how the computer works, why am I even taking a CEH preparation course? Well, I'm going to just make sure we're all on the same page. So let's define a couple of things.

An exploit, from the English verb to exploit, meaning to use something to one's own advantage, is a piece of software, possibly a chunk of data or a sequence of commands, that takes advantage of a bug, glitch, or vulnerability in order to cause unintended or unexpected behavior to occur on a computer system or its hardware. It could also be something electronic as well, but usually it's computerized. Such behavior frequently includes things like gaining control of a computer system, allowing privilege escalation, or mounting a denial-of-service attack. Now, there are several methods of classifying these particular exploits.

The most common is by how the exploit contacts the vulnerable software. A remote exploit works over a network and exploits a security vulnerability without any prior access to the vulnerable system. A local exploit requires prior access to the vulnerable system and usually increases the privileges of the person running the exploit past those granted by the system administrator. Exploits against client applications also exist, usually consisting of modified servers that send an exploit if accessed with a client application. Exploits against client applications may also require some interaction with the user, and thus are often used in combination with social engineering methods.

So, for example, the users themselves have to open the door for us in some cases, because they're the only ones who have outbound access through the firewall, and we're going to plant some kind of delivery on their machine that "sends" us, and I use air quotes here, access to a connection. Naturally, this is unbeknownst to them. All of these would fall into the classifications of attacks against the vulnerable system: unauthorized data access, arbitrary code execution, and denial of service. Many exploits are designed to provide superuser-level access to a computer system.

However, it's also possible to use several exploits: first to gain low-level access, then to escalate privileges repeatedly until one reaches root. Normally, a single exploit can only take advantage of a specific software vulnerability. Often, when the exploit is published, the vulnerability is fixed through a patch and the exploit becomes obsolete in newer versions of the software. This is the reason why some black hat hackers don't publish their exploits but keep them private to themselves or other hackers. Now, this is actually changing a little bit with the advent of the bug bounties that a lot of the major software manufacturers are offering.

So, for example, the bug bounty might be half a million dollars for a Google bug. So some of those black hats who know of some bug in Google or some bug in Amazon, and I'm just making these names up at random, may be paid more to turn in that exploit than they would make trying to use it on individuals. So I'm kind of in favor of these bug bounty programs; they seem to cut down on some of these zero-day exploits. Now, we should also mention that there are some exploits out in the wild that are the primary tools of unskilled hackers, who are often called script kiddies.

And I'm going to tell you folks, as we learned in our previous sections, that as soon as something shows up in a very popular hacking tool such as Kali or Metasploit, which we're going to talk about in this particular chapter, you'd better get your system patched, because every kid in the world is going to be banging on that thing. So you've got to get your system patched as quickly as possible. We saw this, obviously, with the advent of WannaCry just several months ago from the date of this recording, which locked up millions of computers. This was actually a zero-day that was leaked from the NSA, and WikiLeaks had made it known and distributed it.

And they seem to be doing some of this these days. It's kind of an interesting thing, isn't it? Because it's got to be somebody on the inside of the NSA that's leaking this information, maybe someone with the same mindset as Edward Snowden, who doesn't think it's really a good idea for the NSA to have these tools and to be using them against whomever they choose. So you can see there's a lot of cat-and-mouse-type stuff here, with one side leapfrogging over the other. It's one of the things I find most interesting about this particular business.

3. Buffer Overflows: An Introduction I Do when Speaking at a Conference

Now, we need to understand exactly how the memory and the processor are organized inside a computer before we can understand what the flaws are. So I'm going to go through this very briefly, and then we're going to go back, roll up our sleeves, as they say, and go into a lot more depth. All right? Our basic exploitation techniques are categorized in a structured fashion, like any other technical issue, right? Before going any further, you have to understand the basics of memory organization. A process running in memory has each of the following structures.

First, we have the code segment and the extended segment, which some people refer to together as simply the executing segment or the code segment. In reality, these are two different segments, as we're going to see here in just a bit. The code segment was originally defined to be 64K in size, and the extended segment lengthened that quite a bit, so it's really more of a backward-compatibility mechanism. Then the data segment is where we store our memory variables. These are the writable segments. They contain the static, global, initialized, and uninitialized data and variables that the application, or more specifically the process, has access to.

All right, then we have something called the stack segment, and the stack segment is at the bottom. This is important to understand. A stack segment is very similar to a dish stacker or tray stacker, maybe in your school cafeteria or some place you've been to. We put one dish or tray onto the stack, and then we put another one on top of that, and another one on top of that. Naturally, this is what's called a last-in, first-out ordering mechanism. Items are pushed onto the stack and popped back off of it to get access to them. A stack pointer register in the processor keeps track of where the top of the stack is, in most cases at all times, and I say that somewhat tongue in cheek, as you're going to see.

Then you hear terms like the heap, and the heap is basically the rest of the memory space that can be assigned to the process. We'll generally use C functions like malloc and calloc to allocate memory at that particular level. All right, so let's move on and briefly talk about the biggest security risk, I would venture to say, of all time. The technique of exploitation is straightforward, and it's absolutely lethal. The program's stack stores data in a set order: the parameters passed to the function are stored first, then the return address, then the previous stack pointer, and subsequently the local variables.

If variables like arrays are passed without bounds checks, they can be overflowed by shoving in large amounts of data, which corrupts the stack, leading to the overwrite of the return address and consequently a segmentation fault. If the trick is craftily done, you can redirect execution to any location, leading to arbitrary code execution. Now, was that clear? Okay, come on, Tim, can you simplify this a little bit? Well, basically, it's the computer trying to put something into a space never designed for it. We allocated that memory.

We didn't allocate enough room, but the computer is trying to put more into it than that. That reminds me of, let me think just for a second. Oh, yes, now I remember: my wife's trip to the shoe store. The shoes aren't going on; they're two sizes too small. And even if they did go on, they would cripple her. You don't want that, do you? Well, it took three shoehorns and a cookie spaster, but look, now I get to wear the shoes. Wow, great shoes. As you can see, I've played this out at conferences and done it a number of times in my classes. So that's my explanation of a buffer overflow. If I explained it really fast and you didn't quite understand it, don't worry; now I'll roll up my sleeves and we'll get to it.

4. Processors and Stacks

So in order to understand the flaws, or actually how the architecture was originally designed and how we had to provide backward compatibility to it, which makes things a little bit difficult, you have to understand the processor. For example, when the IBM PC first came out in the early 80s, around 1981, it used what's called an 8088 processor. It consisted of a segment register and an offset register, both of them being 16 bits. Now, if you take 16 and 16 and add them together, what do you get? Of course, any natural person is going to say 32. But in this case, you get 20.

Because what they did is they had the segment register and the offset register combine to point to 20 bits of memory, which is 1024K. Of that, 384K was allocated to the operating system, and 640K was allocated to what we refer to as real memory, the base memory that we could use. This naturally wasn't big enough. This was considered to be a toy, and we were trying to make the PC into something suitable for business use. So about two years or so after the 8088, Intel created multiple modes of operation. But they still had to provide, or at least try to provide, this backward compatibility.

There's an old saying that God created the world in seven days, but he didn't have an installed base to provide backward compatibility to. I guess that's why he could create it in seven days. So anyway, the 80286 increased the address lines from 20 to 24 bits. So now, rather than being able to address one meg of memory, we could address up to two to the 24th power, which is 16 megabytes of memory. It really was kind of clunky. They tried to make it work by adding multiple modes of operation, so we had what was called real mode and we had protected mode, but we still had the clunkiness of segment and offset, and the programmer had to keep track of it.

It was a real hassle, as a matter of fact. What would happen is they gave you one instruction in the hardware to get from real mode to protected mode, but they didn't give you any way to get back. And since, as you heard me say earlier, we have to provide backward compatibility to our legacy applications, this simply didn't work. It was a real hassle. Some people referred to this as the brain-dead processor. All you really got was a fast 8088; it really wasn't used for its protected-mode capabilities. Now, all of that changed in the mid-80s, when Intel came out with the 80386.

Now, the 80386 is where they did a pretty good job of solving all their problems. We had two to the 32nd power of memory, or about 4GB of what I refer to as elbow room and what you'd refer to as memory address space. 2GB was allocated automatically to every process for its use, and 2GB was allocated to the operating system, but every process was presented with its own 2GB of memory. You'll also hear people refer to four gig, or three gig on XP; that's some math the operating system plays with, but at the processor level we actually have more address space than that.

So, bottom line, we refer to this processor as the x86 family of processors. And the x86 family is what we're still using today, with a number of tweaks and a whole bunch of things for caching and all that type of stuff that I'm not really going to get into for this particular conversation. I wanted to give you just a three-to-five-minute overview of processor evolution. I want you to understand that we use something called virtual memory on the 386, so we as programmers don't have to keep track of these memory segments, where they're used and where they're put. It doesn't make any difference at all; all we get is a linear range of memory addresses.

So, unlike the 8088, where if you moved the segment and offset they could point to a different area of memory, so one program could step on another, the 386 always gave you linear addressing. Memory address one was memory address one, and memory address two was memory address two. We didn't care one hoot where it actually put things in real, physical memory. We cared how it was mapped into our program's virtual memory and how we utilized that. I always bring this up when I'm explaining the swap area of our disk and the swap area of memory. We actually had two swap areas: in reality, it swaps to memory, and it also swaps out to disk.

I think of this as a couple that pulls up to a restaurant. They give the parking attendant the valet keys to their car, and he goes and parks it. They don't really care where he parks it. When they come back out of the restaurant, they give him the stub: okay, bring my car back to me. I don't care where you parked it. Now, let's say the same parking attendant needs to park one of those minibuses, and these were all overweight people, let's say. He's going to say, you know what, these guys are going to be in there for a while, so I'll park this thing way out here. That might be considered a way of thinking about swapped memory.

You still had the address space, but it took a little bit longer to get to it, because it had to be moved out to disk and moved back in. So that's our five-to-seven-minute overview of the processor evolution you needed to understand before we go into our stack information. Now let's roll up our sleeves and really talk about what actually happens inside the stack. Stacks, in computer architecture, are regions of memory where data is added and removed in a last-in, first-out manner. In most modern computer systems, each thread, which is defined as a line of executing code, has a reserved region of memory referred to as its stack.

When a function executes, it may add some of its state data to the top of the stack; when the function exits, it is responsible for removing that data from the stack. At a minimum, a thread's stack is used to store the locations of function calls in memory, in order to allow return statements to return to the correct location. But programmers may further choose to explicitly use the stack. If a region of memory lies on the thread's stack, that memory is said to have been allocated on the stack, because data there is added and removed in a last-in, first-out manner.

Stack-based memory allocation is very simple. It's typically faster than what's referred to as heap-based memory allocation, also known as dynamic memory allocation, which I'll explain a little later. Another feature is that memory on the stack is automatically and very efficiently reclaimed when the function exits, and it doesn't fragment itself the way memory does when we allocate it dynamically with malloc and calloc, as we would have to on a heap. Therefore, stack-based allocation is typical for temporary data that is no longer required after the function exits; we just throw it away. A thread's assigned stack can be as small as a few dozen kilobytes.

Allocating more memory on the stack than is available can result in a crash due to a stack overflow. Some of the processor families we talked about, like the x86, have special instructions for manipulating the stack. So let's go ahead and sum this up. So far, we've discussed two ways of accessing our memory: as a stack versus a heap. All right? The stack is a last-in, first-out mechanism. We push all of our variables, function parameters, and registers onto the stack and pop them back off in the reverse order, putting them back into the places we originally wanted them to be. So we have to push them in the reverse order of how we want to pull them back out.

Okay? Another way I've explained this in the past is, while I'm in front of the class, I'll find somebody in the class, let's call him Greg. Now, let's say Greg has a question for me, so he comes up here to ask it. Oh, just a second, Greg. Let me move all this stuff, my pencils, pens, everything, off of my desk and put it in this one place, because he is, quote unquote, interrupting me. Then I place his paper on my desk and answer his question. He goes back to his chair, and I repopulate my desk, i.e., I pop everything back off of the stack and put it in the exact place it was before I was interrupted. You'll notice why I'm using that particular term here in a couple of moments. Okay.
