Got a call from a friend (a network engineer) yesterday looking for a quick way to verify the QoS marking on specific traffic.
Now there are multiple ways to get this done. One of the easiest is to use an ACL matching a specific IP Precedence value, apply it on the interface in the inbound direction, and see if you are getting hits. It's a quick method, but it has limitations. For example, say I am expecting incoming traffic to carry IP Precedence 5. If the ACL matches IP Precedence 5 and the traffic arrives marked correctly, I'll see hits. But if the traffic lost its marking somewhere along the path, I won't see any hits, and that is usually exactly what I need to know while troubleshooting. Worse, the ACL still can't tell me which IP Precedence value the traffic was actually carrying at that point; it only tells me it wasn't the one I matched.
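As a rough sketch, that quick check could look like this (the ACL name and interface are just placeholders, and the trailing entry is a catch-all permit so nothing else gets dropped):

ip access-list extended CHECK-PREC5
 permit ip any any precedence critical
 permit ip any any
!
interface FastEthernet0/0
 ip access-group CHECK-PREC5 in

A "show access-lists CHECK-PREC5" afterwards shows hits on the first entry only if the traffic actually arrived marked with IP Precedence 5 (keyword "critical").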
Now you might be thinking, that's easy: just create one ACL entry per IP Precedence value (eight in total) and we should be all good. Which is okay, but it's a lengthy way to do it, especially if you are a little lazy at typing like me :)
There is an alternate way as well. Most of us have used the "IP Accounting" feature at some point in our networking careers, but it has a couple of lesser-known branches that aren't explored all that much. One of them can help us with the situation described above, and it's even easier and faster in terms of configuration and verification.
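For context, the interface-level "ip accounting" command has several variants on classic IOS (exact options vary by platform and release); the precedence flavor is the one we'll use here:

interface FastEthernet0/0
 ip accounting output-packets
 ip accounting access-violations
 ip accounting precedence input
 ip accounting mac-address input

The first keeps classic per source/destination counters, the second counts packets denied by an ACL, and the last two keep per-precedence and per-MAC-address counters respectively.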
Let's review our topology:
Say on R3, I am expecting the Server to send traffic towards R3's Loopback marked with IP Precedence 5, but I'm really not 100% sure that is actually happening.
Now, we could create one ACL with eight entries matching IP Precedence values 0-7, apply it on F0/0 of R3 in the inbound direction, and then look at the "show access-lists" output to see which of the eight entries actually got hit, and hence which IP Precedence value the traffic was carrying. Something like the sketch below.
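Extending the earlier sketch to cover all eight values would look roughly like this (names are again just examples; IOS also accepts the numeric values 0-7 in place of the keywords):

ip access-list extended PREC-CHECK
 permit ip any any precedence routine
 permit ip any any precedence priority
 permit ip any any precedence immediate
 permit ip any any precedence flash
 permit ip any any precedence flash-override
 permit ip any any precedence critical
 permit ip any any precedence internet
 permit ip any any precedence network
!
interface FastEthernet0/0
 ip access-group PREC-CHECK in

Then "show access-lists PREC-CHECK" would show per-entry match counters, one per precedence value.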
Nahhh... let's look at the other way instead.
Let's enable the feature on R3's F0/0 interface for incoming packets (it is supported in the outbound direction as well).
R3#sh run int f0/0
Building configuration...
Current configuration : 125 bytes
!
interface FastEthernet0/0
ip address 23.0.0.3 255.255.255.0
ip accounting precedence input
duplex auto
speed auto
end
Let's send a couple of pings and see which IP Precedence value our traffic is carrying:
SERVER#ping 3.3.3.3
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 3.3.3.3, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 352/428/524 ms
Let's verify back on R3:
R3#sh int f0/0 precedence
FastEthernet0/0
Input
Precedence 0: 5 packets, 570 bytes
As you can see, the traffic arrived on R3 marked with IP Precedence 0, which is the default value.
Hmmm... seems like our QoS policy is not working.
Let's set one up and see if that changes the situation to match our requirement.
R2#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R2(config)#class-map match-all ICMP
R2(config-cmap)#match protocol icmp
R2(config-cmap)#exit
R2(config)#
R2(config)#policy-map PREC5
R2(config-pmap)#class ICMP
R2(config-pmap-c)#set ip precedence 5
R2(config-pmap-c)#exit
R2(config-pmap)#
R2(config-pmap)#int f0/1
R2(config-if)#service-policy output PREC5
R2(config-if)#end
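Before going back to R3, a quick sanity check on R2 itself doesn't hurt; "show policy-map interface" shows per-class classification and marking counters (the exact output format varies by IOS release):

R2#show policy-map interface FastEthernet0/1 output

If the ICMP class shows its packet counter increasing after the pings, classification and marking are at least happening on R2.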
Now that the configuration to get our job done is in place, let's re-send the pings from the Server and see if they are carrying IP Precedence 5.
SERVER#ping 3.3.3.3
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 3.3.3.3, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 232/488/688 ms
Let's re-verify on R3
R3#sh int f0/0 precedence
FastEthernet0/0
Input
Precedence 0: 10 packets, 1140 bytes
Precedence 5: 5 packets, 570 bytes
Hmmm... the policy seems to be working now. But the old Precedence 0 matches are still there and have even increased, which looks confusing.
Okay, let's clear the mess, re-run the test, and verify one last time.
R3#clear counters f0/0
Clear "show interface" counters on this interface [confirm]
R3#sh int f0/0 precedence
FastEthernet0/0
Input
(none)
Now that the counters are cleared, let's re-test.
SERVER#ping 3.3.3.3 r 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 3.3.3.3, timeout is 2 seconds:
!!!!!!!!!!
Success rate is 100 percent (10/10), round-trip min/avg/max = 200/346/508 ms
Final Verification:
R3#sh int f0/0 precedence
FastEthernet0/0
Input
Precedence 5: 10 packets, 1140 bytes
Nice... isn't it? :)
Further Reading
http://www.ciscopress.com/articles/article.asp?p=764234&seqNum=4
HTH...
Deepak Arora
Evil CCIE
1 comment:
That's really cool. There is another way that I usually follow during troubleshooting:
show ip cache verbose flow
In this output, the sixth column shows the ToS value the packet is carrying, in hexadecimal.
We have to configure the following under the interface to see the table:
#ip route-cache flow
e.g.:
R1#sh ip cache verbose flow | i 172.12.1.2
Se0/0 172.12.1.2 Local 172.12.1.1 01 B8 10 108
So 0xB8 is the ToS byte, which is 184 in decimal.
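To unpack that (standard ToS byte math, nothing specific to this example):

0xB8 = 184 decimal = 1011 1000 in binary
First three bits 101 -> IP Precedence 5 (critical)
First six bits 101110 -> DSCP 46 (EF)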