Palo Alto Networks PCNSE – QoS Part 1
May 10, 2023

1. QoS Introduction

In this lecture we’ll talk about quality of service (QoS). The goal of QoS is to prioritize and adjust the quality aspects of your network traffic: you control the order in which packets are handled and the bandwidth that is allocated to specific traffic, applications, and users. The measurements that QoS is concerned with are bandwidth, throughput, delay, and jitter. Bandwidth is the total amount of capacity you have on your network. Throughput is the actual transfer rate you get for specific sessions, applications, or classes of traffic. Delay is the amount of time it takes a packet to get processed by the firewall, from the time the packet is received to the time the packet is sent; this is the latency. Jitter is the variation in delay, and variation in delay is the enemy of voice and video applications.
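To make the jitter definition concrete, here is a minimal Python sketch (purely illustrative, not part of PAN-OS; the delay values are made up) that computes the average delay and the jitter from a list of hypothetical per-packet one-way delays:

```python
# Illustrative only: latency and jitter from hypothetical per-packet one-way delays (ms).
delays_ms = [20.1, 19.8, 35.0, 21.2, 20.5]

avg_delay = sum(delays_ms) / len(delays_ms)

# Measure jitter as the average variation between consecutive packet delays.
variations = [abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])]
jitter = sum(variations) / len(variations)

print(f"average delay: {avg_delay:.1f} ms, jitter: {jitter:.1f} ms")
# High jitter means packets arrive unevenly spaced, which is what makes voice choppy.
```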

The reason is that if packets are delayed by different amounts, they arrive at the destination out of order, which causes packets to be dropped and the voice to become choppy. What you want to achieve from a QoS perspective is to prioritize your network and application traffic: you can guarantee high priority for important traffic and you can limit nonessential traffic. You can also allocate bandwidth and share it fairly between applications, so that you don’t hurt critical applications on your network and you don’t allow nonessential applications to consume more bandwidth than they should. And since latency, as I said, is the enemy of voice and video applications, you can ensure latency guarantees for critical applications.

One of the nice things about the Palo Alto firewall is that you can also do traffic profiling based on applications, to make a distinction between the bandwidth that certain applications get versus others; you would typically give more bandwidth to the applications that are critical to your business. The configuration components of QoS are the QoS policy, which determines what type of traffic goes into which class (you can classify the traffic based on different criteria), and the QoS profile. Traffic comes into the firewall, gets classified into different classes, and then egresses an interface; based on the classes you have configured, the profile applied to that interface gives each class a different treatment. For example, you can allow application one a specific amount of bandwidth, application two in class two a different amount of bandwidth, and so on.

There is a limitation on how many interfaces QoS profiles can be enabled on, and it depends on the platform: in the case of the PA-5220 it is twelve interfaces, and in the case of the PA-220 it is eight interfaces. The nice thing about the QoS implementation on the Palo Alto firewall is that you can control traffic not only for networks and subnets, like typical legacy equipment does; you can extend QoS to classify and shape traffic based on applications and users by integrating with the next-generation firewall functionality of the Palo Alto firewall, App-ID and User-ID. The QoS policy defines the traffic that is going to receive QoS treatment and assigns it to the different classes. You can classify based on application or application group, source zone,

source address and source user, destination zone, destination address, service or service group (for example, a service for TCP port 80 or 443, or a service group with a list of TCP/UDP ports), URL categories, and also based on received traffic that carries particular DSCP or Type of Service values. The QoS profile defines up to eight classes within a single profile. The profile is used to define the priority, queuing, and bandwidth for each QoS class, and it can also set the combined total bandwidth allowed for the eight classes together. Basically, the traffic comes into the firewall and gets inspected by the QoS policy; the policy looks at different criteria, for example the application, and determines that this traffic goes into class one.
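Before continuing, here is a rough mental model of what a QoS profile holds, as a Python sketch (the profile name, priorities, and numbers are made up for illustration; this is not an actual PAN-OS profile export):

```python
# Sketch of a QoS profile: up to eight classes, each with a priority and
# optional egress guaranteed / egress max bandwidth, plus a profile-wide limit.
example_profile = {
    "name": "branch-office-qos",          # hypothetical profile name
    "profile_egress_max_mbps": 100,       # combined limit for all eight classes
    "classes": {
        1: {"priority": "real-time", "egress_guaranteed_mbps": 20},
        2: {"priority": "high",      "egress_guaranteed_mbps": 10, "egress_max_mbps": 30},
        3: {"priority": "medium"},
        4: {"priority": "medium"},    # class 4 is the default class for unmatched traffic
        5: {"priority": "low"},
        6: {"priority": "low"},
        7: {"priority": "low"},
        8: {"priority": "low"},
    },
}

print(example_profile["classes"][1])
```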

The QoS profile, which is applied to the interface, has class one with a specific bandwidth and a specific priority. So the traffic comes into the firewall, gets classified based on the QoS policy, goes into a QoS class, and that class receives a specific treatment when it exits the firewall. When traffic comes in, the QoS policy looks at it and determines, based on what you set in the policy, which class it goes into; then, as the traffic egresses the firewall, the QoS profile applies a specific bandwidth and queuing strategy for that class. So the QoS profile can have up to eight classes, and the QoS policy is used to decide which class the traffic gets, based on application and application group, source addresses, zones, TCP/UDP ports, and multiple other criteria.
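As a rough illustration of the classification step (a sketch only; the rule names and match fields are made up, and the real firewall matches using App-ID, User-ID, zones, addresses, services, and so on), you can think of the QoS policy as an ordered rule lookup that falls back to the default class, class 4, when nothing matches:

```python
# Sketch of QoS policy classification: first matching rule wins,
# otherwise traffic falls into the default class (class 4).
QOS_RULES = [
    {"name": "voice",       "apps": {"sip", "rtp"},          "src_zone": "trust", "dst_zone": "untrust", "class": 1},
    {"name": "web-and-ssl", "apps": {"web-browsing", "ssl"}, "src_zone": "trust", "dst_zone": "untrust", "class": 2},
]

def classify(app: str, src_zone: str, dst_zone: str, default_class: int = 4) -> int:
    for rule in QOS_RULES:
        if app in rule["apps"] and src_zone == rule["src_zone"] and dst_zone == rule["dst_zone"]:
            return rule["class"]
    return default_class

print(classify("ssl", "trust", "untrust"))   # -> 2
print(classify("smtp", "trust", "untrust"))  # -> 4 (default class)
```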

For the QoS priority queues, real-time takes precedence: it goes to the front and gets processed first. The real-time priority is the one to allocate for applications that are latency- and jitter-sensitive, like voice and video. After that you have high, medium, and low; once the real-time queue is dequeued, the firewall dequeues high, medium, and low, in that order. So packets in an outgoing traffic flow are queued based on their priority. The priority queue is one component; bandwidth is the other. Bandwidth control ensures traffic doesn’t exceed capacity: you can allocate bandwidth for certain types of traffic, applications, and users, and you can enforce bandwidth limits for the different classes as well as a total combined bandwidth for all eight classes.
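Here is a minimal sketch of the dequeue order just described (purely illustrative, not how PAN-OS is implemented internally): the real-time queue is always drained first, then high, medium, and low.

```python
from collections import deque

# Strict-priority dequeue sketch: real-time is always drained first, then high, medium, low.
queues = {
    "real-time": deque(["voice-pkt-1", "voice-pkt-2"]),
    "high":      deque(["erp-pkt-1"]),
    "medium":    deque(["web-pkt-1"]),
    "low":       deque(["backup-pkt-1"]),
}

def dequeue_next():
    for priority in ("real-time", "high", "medium", "low"):
        if queues[priority]:
            return priority, queues[priority].popleft()
    return None, None  # every queue is empty

while True:
    priority, pkt = dequeue_next()
    if pkt is None:
        break
    print(f"sending {pkt} from the {priority} queue")
```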

In the QoS policy you put applications or traffic into different classes based on different criteria, and as the traffic egresses the firewall the profile is enforced. The profile has different classes, class one through class eight; each class gets a priority treatment and a bandwidth treatment, and then there is a combined bandwidth for all the classes together. So you can say, okay, I have 100 Mbps as my combined bandwidth, and 10 Mbps of that is going to be allocated to class one. The QoS profile is attached to the interface to enforce the bandwidth settings, and the individual QoS classes are enforced for traffic matching that class based on the QoS policy rule. So first the traffic comes in and it matches a policy rule.

The policy rule puts it into a class, and when the traffic exits the egress interface, that class gets a specific bandwidth. The overall bandwidth limit for the profile can be applied to all clear-text traffic. In the QoS configuration you have two types of traffic: clear-text traffic, which is pretty much everything, and tunnel traffic, which is traffic handled by IPsec tunnels. The QoS bandwidth settings are egress guaranteed and egress max. Egress guaranteed guarantees an amount of bandwidth for matching traffic; when the traffic exceeds the egress guaranteed value, the firewall passes the excess on a best-effort basis. However, when guaranteed bandwidth is unused, it remains available to all other traffic. So let’s say my total pipe is 100 Mbps and I give a guarantee of 10 Mbps to one application. If that application is only using 6 Mbps, the unused 4 Mbps can be used by the other classes. However, if I have a 10 Mbps guarantee and the application is sending 12 Mbps, the extra 2 Mbps is handled as best effort.

Okay, so the egress max, on the other hand, sets an upper limit, and if that max is exceeded, the firewall drops the traffic. The egress guaranteed is pretty flexible, because you can use more bandwidth, while the egress max is a hard top limit on the traffic. For example, if I specify 10 Mbps as the max for class one and my application has 12 Mbps to send, 2 Mbps will be dropped by the firewall. You enable the QoS profile on the egress interface, and when you apply it there, it enforces the treatment for the different classes that you specified. The egress interface for QoS is the interface where the traffic leaves the firewall, not where it enters.
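To make the difference concrete, here is a small Python sketch (simplified, not PAN-OS code) of the two behaviors just described: traffic above the guarantee is still forwarded as best effort when capacity allows, while traffic above the max is dropped.

```python
# Simplified model of egress guaranteed vs. egress max for one class (values in Mbps).
def send_with_guarantee(offered, guaranteed, spare_capacity):
    """Traffic up to the guarantee is always sent; the excess rides as best effort."""
    assured = min(offered, guaranteed)
    best_effort = min(max(offered - guaranteed, 0), spare_capacity)
    return assured + best_effort

def send_with_max(offered, egress_max):
    """Traffic above the hard limit is dropped."""
    sent = min(offered, egress_max)
    dropped = offered - sent
    return sent, dropped

# 10 Mbps guarantee, 12 Mbps offered, 5 Mbps spare on the link:
# everything goes through, but the extra 2 Mbps are only best effort.
print(send_with_guarantee(offered=12, guaranteed=10, spare_capacity=5))  # 12

# 10 Mbps egress max, 12 Mbps offered: 2 Mbps are dropped.
print(send_with_max(offered=12, egress_max=10))                          # (10, 2)
```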

That egress interface can be either external-facing or internal-facing; it depends on the direction of the traffic you want to control. For example, here is my firewall with an external interface and an internal interface, and I have my employees on the inside. If I want to control the traffic that the employees download, the download flows from the outside toward the inside, so the internal interface will be the egress interface for my policy and my profile. If I want to control the upload, that is the opposite direction, so the external interface will be my egress interface.

So just to summarize and put the full picture together: here is your firewall, with traffic entering on one interface and leaving on the egress interface. Traffic comes in, the firewall looks at it and asks the QoS policy what class it belongs to, and assigns it to a class, say class X. Then, as the traffic exits the firewall, a QoS profile is applied; that profile gives class X an egress guaranteed bandwidth and/or an egress max bandwidth, plus a priority queue of real-time, high, medium, or low, and it applies that treatment to the traffic as it egresses the firewall. So the components are the QoS policy, the classes, and the QoS profile.

2. QoS Download Upload Bandwidth Restriction

In this lecture we see an example of how to configure quality of service. In our example we want to control the download speed for the clients. We need to create the QoS policy, create the QoS profile, and apply the profile to the interface, like we discussed in the previous lecture. Something to understand before we proceed with the example: here I have the client and here is the server, and based on our previous discussions about how a session gets established through the firewall, there are two legs to the session.

There is the client-to-server leg and then there is the server-to-client leg. Let’s say here is my client and this is the internet; this side is the trust zone and this side is the untrust zone, and I want to control the download speed for applications like web browsing and SSL. The policy that you need to create is a client-to-server policy: if my client in the trust zone is trying to talk to the untrust zone on the applications web-browsing and SSL (that is the client-to-server leg), then assign class one or two, whatever you choose. So the policy assigns the class. What actually carries the download is the reverse direction, which is server to client.

The profile itself is applied on the egress interface, and in this case the egress interface is not on the client-to-server leg; it is the egress interface of the server-to-client leg, which is the download direction. In my case, the firewall has ethernet1/1 as the untrust interface and ethernet1/2 as the trust interface. I’m going to create a QoS policy that says: if my clients in the trust zone talk to the untrust zone on the applications web-browsing and SSL, assign class one. Class one will be configured in the QoS profile, which says that anything in class one gets, in my case, 100 Kbps. Then I’m going to apply this QoS profile on the egress interface, which is ethernet1/2.

So there are two legs to the conversation, client to server and server to client. Let’s configure this. In the lab, my client is on ethernet1/2 and the internet is on ethernet1/1. The first thing I need to do is go under Policies > QoS, click Add, and specify my policy rule; in this case it’s going to be called Web and SSL traffic, the source is the trust zone, and the destination is the untrust zone. I’m using an application-based policy, so I specify web-browsing and SSL. We’re not going to touch the DSCP/ToS settings right now; we’ll talk about those in a different lecture. Under Other Settings is where I assign the class. You can also attach a schedule, for example to apply the bandwidth restriction only during the busy hours of the day; in our case we’re not going to set a schedule.

So that is the policy; now we need to create the profile. Under QoS Profile we click Add and create a profile for our trust interface, because that is the egress interface where the profile is going to get applied, ethernet1/2. Then I specify the class: I marked the web-browsing and SSL traffic as class one, so I set the egress max for class one. This value is in megabits per second, so I enter 0.1 Mbps, which is 100 Kbps. This way we can run a bandwidth test before and after and see the difference. Now I have created the QoS profile; next I need to attach it to the interface.

I go under Network > QoS, click Add, and the interface I apply it to is the egress interface, which is the trust interface. This is not tunnel traffic, so I don’t need a tunnel-based QoS profile; I apply the profile to clear text, specify the profile I created, choose the trust interface, and that is all, nothing else needs to be set. All right, so let’s see. This is my Windows machine; I’m going to run a bandwidth test before the change. If we look at the sessions, there are a lot of them; I’ll take session ID 47, where the client-to-server leg is destined to port 80 and there is the matching server-to-client leg. We see here that the default class for traffic is class four.

So this traffic is assigned class four, and when we run the bandwidth test we get 18.44 Mbps download. Now we’re going to restrict the download to 0.1 Mbps, so let’s see that in action: go ahead and commit. If you want to look at the statistics, you can click the Statistics button here; since the profile isn’t applied yet there are no statistics, so wait until the commit finishes. Here are the statistics. Let’s check this session, ID 524; we see that it is my session. Now let’s see what the bandwidth is: run the bandwidth test again and pick the last session. Here we see that on the server-to-client leg a QoS rule is applied, on ethernet1/2, and we should now be able to see this in the statistics under the class 1 applications.

Under bandwidth we see about 0.15 Mbps, pretty close. Looking at the statistics we see the bandwidth is getting cut off at 0.1 Mbps, which is the 100 Kbps we configured. I don’t see the download speed here, so run it again: 0.4 Mbps is the upload speed, and the download is now restricted; from the statistics we see it is getting cut off, and this is for class 1 on ethernet1/2. That is the example for the download speed; now let’s restrict the upload speed. In this test there is both an upload and a download measurement, because the upload traffic comes from the same session, right? But we want to apply the restriction on the egress interface for the upload direction, which in this case is the untrust interface, ethernet1/1.

So I’m going to create a QoS entry for ethernet1/1 and also restrict it to 0.1 Mbps: click Add, set the interface name to ethernet1/1, and under the clear-text QoS profile I could create a new profile, but applying the same profile is fine, so I choose it and click OK. Now I should be restricting the upload speed as well. Wait until the commit finishes, confirm it is applied, and run the test again. It briefly goes up to about 0.12, but that’s fine; we see that the upload is now also restricted, around 0.09 Mbps. Let’s take a look at the sessions: show session id 519, the last session here. In this case we see QoS applied twice, on the client-to-server leg and on the server-to-client leg, and the QoS rule matches class 1, as we created. So that is a good example of restricting both the upload and the download speed, and in our case we used the application as the matching criteria.
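To recap the whole lab as data (a sketch only, not an actual PAN-OS configuration export; the profile name is hypothetical): one QoS policy rule classifies the traffic, one profile limits class 1, and the profile is attached to both egress interfaces.

```python
# Recap of the lab (illustrative data only, not a PAN-OS config export).
qos_policy_rule = {
    "name": "Web and SSL traffic",
    "source_zone": "trust",
    "destination_zone": "untrust",
    "applications": ["web-browsing", "ssl"],
    "assigned_class": 1,                       # set under Other Settings in the rule
}

qos_profile = {
    "name": "restrict-100k",                   # hypothetical profile name
    "classes": {1: {"egress_max_mbps": 0.1}},  # 0.1 Mbps = 100 Kbps hard limit
}

qos_interfaces = [
    {"interface": "ethernet1/2", "clear_text_profile": "restrict-100k"},  # trust: download direction
    {"interface": "ethernet1/1", "clear_text_profile": "restrict-100k"},  # untrust: upload direction
]

print(qos_profile["classes"][1])
```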
