CiscoLive 2012 – Day 3

Though I was a day late on the last post, hopefully I wasn’t a dollar short in what I was
able to share. I cannot stress enough how valuable CiscoLive is for any Cisco partner or
customer. The contacts you make and the information you can glean are incredible. If you
can’t make it, I would recommend signing up for an account on the CiscoLive site. It’s
free, and you can download most of the presentations from CiscoLive.

Yesterday, I only sat in on two sessions, “Cisco TrustSec and Security Group Tagging” and
“Understanding and Deploying the CleanAir Technology to Improve Enterprise WLAN Spectrum
Management” (that’s a mouthful).

Although I haven’t yet deployed Security Group Tags, it’s an interesting idea to me. I
believe it is vital to continue to learn about new technologies and frameworks so that you
can communicate intelligently about them as they become more mainstream. Of course, some
don’t get to that point, but the concepts help us to grow in our overall system thought
process. Again, some of the highlights for me from this session were:

1. Security Group Tags (SGTs) are an enabler for enforcing policies, and specifically
security policies at this point.

2. While VLANs and static or downloadable ACLs are useful, they are not scalable. Changing
subnets, additional VLANs, and changing or new host IP addresses all add to the complexity. SGTs
can abstract that complexity away.

3. A key principle of SGT-based access control is to classify at ingress and filter at
egress. So, a user/device is tagged at ingress, and an SGT ACL is applied at the egress point.

4. Since not all devices support SGTs, the SGT Exchange Protocol (SXP) provides a migration
path by carrying IP-to-SGT mappings to devices that cannot tag natively. Also, there are ways
of mapping VLANs and subnets to SGTs to help in the transition.

Of course, there was more – download the presentation!

I missed a session because I was enjoying the World of Solutions too much. I was able to
talk with a number of different vendors, some of whom we resell (like Tessco), some whose
tools we use (Ekahau – hopefully!), and others that our customers use (LiveAction).

The afternoon session on CleanAir was presented by the masterful Jim Florwick. He’s another “must go see” presenter. While I’ve seen much of what was presented before, it was still valuable, with some great reminders about RF and 802.11.

1. 802.11 is Listen Before Talk (LBT or CSMA/CA). And, it’s very, very polite in doing that. So much so, that it won’t talk unless the sensed power is below a certain threshold.

2. How does it sense the RF power levels in the air? Clear Channel Assessment (CCA), using either Energy Detect (ED) (quick, low power, prone to false positives) or Preamble detection (takes time, more power, less prone to false positives). The required power levels for the air space to be seen as “clear” can vary by band, year, client, etc.

3. Of course, non-Wi-Fi devices don’t participate in 802.11 Collision Avoidance (CA). So they will often stomp on 802.11 devices, which will then wait to transmit. So, the more noise, the longer clients have to wait to send due to congestion. There are two responses to congestion: retransmit the packet, or rate shift if the client retransmits too many times or the SNR is too poor.

4. Since retransmits add to the time that other clients need to wait before sending, busy networks are even less tolerant to interference or noise.

5. Persistent Device Avoidance in 7.2 is a cool feature. It allows CleanAir APs to send information about interfering devices to non-CleanAir APs that are seen as neighbors. Be careful with this, though, as the RSSI or dBm values for neighbors are not adjustable for this feature. And, the bias against using the channel used by that persistent device lasts for 7 days, which is also not configurable. Also, PDA does not mean that an AP won’t use the channel. It just adds a factor against using the channel when that device is there.

The day was capped off with the CCIE Party on the USS Midway aircraft carrier. What an awesome time! Talking with old friends, riding in flight simulators, and decent food made for a terrific night. Much better than last year’s party. Kudos to those who planned it.


CiscoLive 2012 – Day 2

Note to self – don’t forget your badge.  I was walking out the door of the hotel to catch the shuttle to the convention center when I realized that my badge was back in my room.  That would have been bad – no session access (bad) and no breakfast (worse.)  Thankfully, I remembered before leaving.

As I said in my last post, one of the awesome pieces of CiscoLive is meeting new people.  This morning at breakfast was no exception.  I had a great discussion with an engineer from Montreal and another from West Point.  Though it was mostly on wireless, they made an interesting point about being the “expert” for a technology since they had installed that technology once.  It is interesting that in the world of networking, if you’ve done it, you’re the expert (at least in the eyes of some.)

My first session was BRKSEC-2022, “Demystifying TrustSec, Identity, NAC, and ISE” by Aaron Woland.  I highly recommend any sessions that he does, as he is a very engaging and knowledgeable speaker.  And, he busts on Cisco from time to time, which is good to see.  Though most of the session was review, having taken the ISE class, it was good to have some concepts reaffirmed.  A couple of key points from this session were:

1. For TrustSec (the former name of “Secure Group Access” or SGA – thanks, Cisco, for reusing a term that now includes the former use plus more!), identity means the Who, What, Where, When and How of access.  With that, most of the work of ISE is in the Authorization piece.  While Authentication is good, it is not nearly enough.

2. At least for wired 802.1X, deploy in Monitor Mode to begin with.  That way you don’t cause yourself a DoS when you bring it up.  There are too many variables involved that can cause clients to not connect properly at first.  With this, make sure that the network device and the backend server (ISE in this case) are set up properly for logging, so that you can see what is happening.  Also, make sure URL Redirect is not part of the Authorization policy being tested, unless you really want clients to be redirected.

3. Most supplicants don’t have sufficient logging for troubleshooting issues.  Cisco provides the AnyConnect Network Access Manager as a no-cost licensed product for as many clients as needed, for those that have ASA5500s, ACS, ISE, Cisco switches, or anything with which AnyConnect could interconnect, as long as that component is under Cisco SmartNet.  What that “no-cost” license does is allow for TAC access.  AnyConnect also provides DART.

My second session, veering from the mobility/security side, was “Nexus 7000 Hardware Architecture.”  I’ve worked with these a little bit, so I wanted to better understand what was going on under the hood.  I found that you almost need a PhD in Nexusology to understand how things can be grouped or not grouped or whatever.  Also, the way that queueing is performed has forced me to rethink the N7K QoS configs I’ve done.  This is because queueing is mostly done at ingress, before traffic is placed on the fabric.  There is some egress queueing, but the arbiter has already done most of that work prior to placing traffic on the output interface.

The third session brought me back to ISE.  Another terrific session delivered by Aaron Woland.  Note to self: book sessions with him whenever possible!  He brought a lot of terrific tips and hints that you wouldn’t automatically think of when implementing ISE.  A few key ones for me were:

1. When running ISE install wizard, use lower-case for the hostname.  That will alleviate issues later.

2. All ISE nodes must be resolvable by their FQDN.  Also, each DNS A record should have an accompanying Pointer (PTR) record.  Otherwise, you will not get the redirects that you are wanting.

3. Related to #2, there is a way of creating the certificate such that it allows for the use of multiple host names (such as one for administering ISE, another for sponsors, another for guests, etc.) through the use of Subject Alternative Names.  That requires some OpenSSL magic.  They should have something about this in an upcoming guide.

4. Time Zone = UTC is the best practice for a distributed deployment.  Also, remember that if you change the time zone on an ISE node, the database is deleted!  So, set this during initial setup.  BTW, for the Eastern Time Zone in the United States, use EST5EDT in order to allow for Daylight Saving Time.

5. Always use the RADIUS probe, and usually the DHCP probe.  Use as few as required to get the information you need.
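The DNS requirement in point #2 above is easy to check from any shell. A minimal sketch, using localhost as a stand-in for a real ISE FQDN (substitute your node's actual name):

```shell
# Verify forward (A) and reverse (PTR) resolution for a node.
# "localhost" is a stand-in here; substitute your ISE node's FQDN.
fqdn="localhost"
ip=$(getent hosts "$fqdn" | awk '{print $1; exit}')
echo "forward: $fqdn -> $ip"
rev=$(getent hosts "$ip" | awk '{print $2; exit}')
echo "reverse: $ip -> $rev"
```

If the reverse lookup does not return the same FQDN, redirects to that node will throw certificate errors.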

While all these were great, the highlight of the day was going to dinner at Jake’s Del Mar in Del Mar, CA with Pat Goessling, Annese Account Manager, and several customers.  We were right on the ocean and had a terrific time talking and laughing.

CiscoLive 2012 – Day 1

Day 1, for me, was on Saturday, when I arrived.  I’m grateful I was able to register that evening, seeing the lines this morning (Monday)!  Also, I was a little disappointed to see that the CiscoLive bag for this year was another backpack.  Mine from last year is now in the trash, because it ripped in several places just from everyday use.  Oh well.

For most people, Day 1 was yesterday, Sunday.  For me, that was the “SSL VPN, AnyConnect and Secure Mobility” techtorial presented by Hakan Nohre (Consulting SE), John Eppich (Cisco Security and Mobility), Nadhem Al-Fardan (Solutions Architect), Ryan Wager (Cisco Security and Mobility) – all Cisco engineers.  This was an informative session, which will hopefully help me with the VPN exam I have tomorrow.  Though there was a lot to this session, here are a few of the highlights for me.

  1. Technically, SSL VPNs should be called TLS VPNs at this point, since TLS is the technology used for most VPNs.
  2. When configuring tunneling SSL VPNs, use DTLS when possible.  This is NOT the default (why not???)  The reason is that DTLS uses UDP packets for the tunnel, whereas TLS uses TCP.  That means that if an application’s TCP packet is dropped in transit, it will be retransmitted twice – once by the application and another time by the tunnel.  Also, voice traffic would be retransmitted by TLS, which is not desired.
  3. If you are doing VPN high availability with multiple active ASAs, they recommended configuring “redirect-fqdn enable” in order to allow the Active Master to send the FQDN of the ASA to which the client is being redirected.  This way, the client doesn’t get a certificate error.  This solution depends on the DNS server allowing for Pointer records, or all ASAs having host-to-IP-address configurations for the FQDNs of the other ASAs.
  4. For the AnyConnect Server List in the AnyConnect Client Profile, do not put in the Host Address.  That way, the client avoids the certificate error when they connect or get a redirect.  Of course, the certificates used need to be created and installed appropriately.
  5. One thing that was clearly communicated was that client certificates are increasing in importance.  One piece of that is the use of Simple Certificate Enrollment Protocol (SCEP).  Note that anything that has “Simple” in the name probably isn’t.  In fact, the name is not consistently used.  If you are using Microsoft Certificate Services, it is called Network Device Enrollment Service (NDES).  And, of course, because I’m a network engineer, I think of “Network Device” as a switch or wireless LAN controller.  In this case, that’s not entirely true.  While those “network devices” are included, the term includes any device that needs to get a certificate.  So, that means client devices.  A good explanation of NDES and its configuration can be found online.

At the end of the day, I enjoyed meeting up with friends at a CiscoLive tweetup.  That brings up a couple of things.  First, one of the big things of CiscoLive is meeting up with old friends and making new ones.  Some would even say that’s the real value.  The other point is that if you aren’t on Twitter, I recommend it.  Though I can’t always get on Twitter, there are a lot of great people on it with helpful insights on a variety of technologies and situations.

FlexConnect APs – Some Thoughts

I’m working on a project that requires FlexConnect APs. As part of the project, I’ve run into a few pieces that took a bit to figure out, as they weren’t readily apparent to me.

FlexConnect ACLs

I understand your typical WLC ACLs.  Everything for non-CPU ACLs was from the perspective of the WLC to and from the client.  So, inbound was from the client to the WLC.  Outbound was from the WLC to the client.  And, just make sure that if you have a deny all at the end, you have a permit for both directions of the flows that you want to allow.  No problem.

FlexConnect ACLs appear to take a different approach. I made the (apparently erroneous) assumption that “ingress” (note the change in terminology) was from the client to the AP, while “egress” was from the AP to the client.  Au contraire!  “Ingress” means from the wired side/switch to the AP, while “egress” means from the AP to the wired side/switch.  In this case, I wanted to ensure that guests could only get to external IP addresses.  Applying an ACL that basically denied anything to the RFC 1918 private subnets while permitting everything else INGRESS blocked my traffic.  Once I applied it EGRESS, everything worked as expected.  I could also apply the inverse ACL ingress, but that wasn’t necessary in this case.

FlexConnect and Local External RADIUS

Way back in my day, if you wanted to use RADIUS with H-REAP, you had to send it back through the WLC.  Then, Cisco added this new-fangled feature of a Backup RADIUS server as part of H-REAP groups, where the AP could go to authenticate users if the WLC was down.  But, what if I want to use a local RADIUS server (say an ACS or Windows NPS server at a site)?  That is where the checkbox “FlexConnect Local Auth” comes into play.  When checked, RADIUS requests will be sent from the AP to either the default RADIUS authentication server(s) of the WLAN OR the primary/secondary RADIUS server(s) of the FlexConnect group (if defined).  The FlexConnect group configuration takes precedence over the WLAN configuration.  Note that the RADIUS server needs to be configured to allow the AP as a NAS, using the shared key defined by the RADIUS server configuration on the WLC.  Also, if using an external RADIUS server, you can ignore the “Enable AP Local Authentication” checkbox under the FlexConnect group configuration.  That’s used if the AP itself will be the RADIUS server. (Thanks to Mr. @revolutionwifi for his article pointing me in the right direction there!)

FlexConnect and ISE

In looking at the literature, one would assume that FlexConnect APs won’t work with ISE at this point (WLC 7.2 and ISE 1.1).  That is not completely true.  You can configure ISE as an AAA server for RADIUS, similar to how ACS 5.X (not including 5.0) is configured.  The interface is a bit different from ACS, but many of the concepts apply.

I’m sure that there will be more things with FlexConnect in the future.

Wildcard Certs for WLC

I love a challenge. Tell me that something can’t be done, and I’ll try to find a way to do it. The challenge yesterday was installing a wildcard cert from a 2-tier CA on Cisco NCS (see the NCS SSL Administration Certificate post below if you haven’t read it – you will need to go through that to get the certificates and components for this article). Since I was able to get that working, I decided to try it on a WLC. The WLC in question is a NME-AIR-WLC6-K9.  WLC certificates can be used for three purposes:

  1. Web administration of the WLC
  2. The web authentication page
  3. Local EAP (PEAP and EAP-TLS)

In particular, I wanted to get a certificate for both #1 and #2.  And, a benefit of the wildcard cert would be the ability to use the same certificate for both!  In looking at options, I came across a thread on the Cisco Support Community that seemed to be a dead end.  That thread references a Cisco Bug ID that was junked.  So, I figured that this would be an interesting challenge 🙂

Since the combined CA cert worked on NCS, I figured I would start with that.  Technically, this is not required for #1 or #2, but why not try it anyway.  I renamed the cacerts.cer file to cacerts.pem.  No conversion, just renaming the file.  I uploaded the file as a Vendor CA Certificate.  And, lo and behold, it was successful.  Here is the output from debug transfer all enable:

*TransferTask: Mar 06 20:29:38.870: Memory overcommit policy restored from 1 to 0
*TransferTask: Mar 06 20:30:27.733: Memory overcommit policy changed from 0 to 1
*TransferTask: Mar 06 20:30:27.775: RESULT_STRING: FTP EAP CA cert transfer starting.
*TransferTask: Mar 06 20:30:27.775: RESULT_CODE:1
*TransferTask: Mar 06 20:30:32.552: ftp operation returns 0
*TransferTask: Mar 06 20:30:32.552: RESULT_STRING: FTP receive complete... installing Certificate.
*TransferTask: Mar 06 20:30:32.552: RESULT_CODE:13
*TransferTask: Mar 06 20:30:32.552: Adding cert (3680 bytes) with certificate key password.
*TransferTask: Mar 06 20:30:32.564: RESULT_STRING: Certificate installed.
                Reboot the switch to use new certificate.
*TransferTask: Mar 06 20:30:32.564: RESULT_CODE:11
*TransferTask: Mar 06 20:30:32.565: ummounting: <umount /mnt/download/ >/dev/null 2>&1>  cwd  = /mnt/application
*TransferTask: Mar 06 20:30:32.576: finished umounting
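Though the WLC accepted the file, a local sanity check on a combined bundle like cacerts.pem is cheap. A sketch using throwaway self-signed certs to stand in for the real intermediate and root:

```shell
# Demo of sanity-checking a combined CA bundle like cacerts.pem.
# Two throwaway self-signed certs stand in for the intermediate and root.
cd "$(mktemp -d)"
openssl req -x509 -newkey rsa:2048 -nodes -keyout i.key -out inter.pem \
  -subj "/CN=Demo Intermediate CA" -days 1 2>/dev/null
openssl req -x509 -newkey rsa:2048 -nodes -keyout r.key -out root.pem \
  -subj "/CN=Demo Root CA" -days 1 2>/dev/null
cat inter.pem root.pem > cacerts.pem    # intermediate first, root second

# A 2-tier chain should show exactly 2 certificates...
grep -c 'BEGIN CERTIFICATE' cacerts.pem
# ...and printing the subjects confirms the order before uploading.
openssl crl2pkcs7 -nocrl -certfile cacerts.pem | openssl pkcs7 -print_certs -noout
```

If the count isn't 2, or the subjects come out in the wrong order, fix the bundle before bothering the WLC with it.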

So, the next (and more important for this) task was to take the wildcard certificate and key files that were used for NCS and combine them for the WLC.  First, I combined the cert and the key into a single PKCS#12 file using the following command (I don’t believe that the passin is necessary, but I used the password from the original PFX file anyway):

openssl pkcs12 -export -in cert.pem -inkey key-nopw.pem -out cert.p12 -clcerts -passin pass:12345 -passout pass:12345

The output from this was a single file (cert.p12), with the line “Loading ‘screen’ into random state – done” printed after the command.  Next, I converted the PKCS#12 file into a PEM file.

openssl pkcs12 -in cert.p12 -out newcert.pem -passin pass:12345 -passout pass:12345

This produced a PEM file (newcert.pem) containing the certificate and key, and the line “MAC verified OK” after the command.  I then uploaded that file as an HTTPS administration certificate (Management>HTTP-HTTPS>Download SSL Certificate) using the Certificate Password “12345”.  And it worked!!! Here is the debug output.

*TransferTask: Mar 06 20:42:17.633: Memory overcommit policy restored from 1 to 0
*TransferTask: Mar 06 20:43:14.151: Memory overcommit policy changed from 0 to 1
*TransferTask: Mar 06 20:43:14.192: RESULT_STRING: TFTP Webadmin cert transfer starting.
*TransferTask: Mar 06 20:43:14.192: RESULT_CODE:1
*TransferTask: Mar 06 20:43:18.193: Locking tftp semaphore, pHost= pFilename=/newcert.pem
*TransferTask: Mar 06 20:43:18.275: Semaphore locked, now unlocking, pHost= pFilename=/newcert.pem
*TransferTask: Mar 06 20:43:18.276: Semaphore successfully unlocked, pHost= pFilename=/newcert.pem
*TransferTask: Mar 06 20:43:18.276: TFTP: Binding to local= remote=
*TransferTask: Mar 06 20:43:18.891: TFP End: 4561 bytes transferred (0 retransmitted packets)
*TransferTask: Mar 06 20:43:18.891: tftp rc=0, pHost= pFilename=/newcert.pem
*TransferTask: Mar 06 20:43:18.891: RESULT_STRING: TFTP receive complete... installing Certificate.
*TransferTask: Mar 06 20:43:18.891: RESULT_CODE:13
*TransferTask: Mar 06 20:43:18.891: Adding cert (4525 bytes) with certificate key password.
*TransferTask: Mar 06 20:43:19.165: RESULT_STRING: Certificate installed.
               Reboot the switch to use new certificate.
*TransferTask: Mar 06 20:43:19.166: RESULT_CODE:11
*TransferTask: Mar 06 20:43:19.166: ummounting: <umount /mnt/download/ >/dev/null 2>&1>  cwd  = /mnt/application
*TransferTask: Mar 06 20:43:19.178: finished umounting
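As an aside, the key/cert pairing inside a combined PEM like newcert.pem can be verified locally before uploading. A sketch using a throwaway wildcard cert and the same 12345 password convention:

```shell
# Demo of checking that the key and cert inside a combined PEM (like
# newcert.pem) actually match, using a throwaway wildcard cert.
cd "$(mktemp -d)"
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem \
  -subj "/CN=*.example.com" -days 1 2>/dev/null
openssl pkcs12 -export -in cert.pem -inkey key.pem -out cert.p12 -clcerts \
  -passout pass:12345
openssl pkcs12 -in cert.p12 -out newcert.pem -passin pass:12345 -passout pass:12345

# The certificate and key moduli must hash identically.
cert_mod=$(openssl x509 -in newcert.pem -noout -modulus | openssl md5)
key_mod=$(openssl rsa -in newcert.pem -passin pass:12345 -noout -modulus | openssl md5)
[ "$cert_mod" = "$key_mod" ] && echo "key and certificate match"
```

A mismatch here means the WLC would install the cert but clients would fail the TLS handshake, so it's worth the ten seconds.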

I then uploaded the same for Web Authentication (Security>WebAuth>Certificate>Download SSL Certificate), again using the same “12345” certificate password.  And, that was accepted as well.  As required, I rebooted the WLC.  After rebooting, I tried logging in using the DNS hostname of the WLC.  No certificate error.

I haven’t had a chance to validate the web authentication for the SSID, but I will update this post once that is done.  Given the success of the web administration, I’m fairly confident that will succeed as well.  Hope this helps you in working with WLC and certs.

(Side note: I also uploaded the certificate as a Vendor Device Certificate.  Again, we’ll test to verify the Local EAP.)

NCS SSL Administration Certificate

2012/03/05 1 comment

While working on NCS recently, I had to install an SSL certificate in order to get rid of the nasty SSL certificate error that pops up due to the self-signed certificate that NCS includes by default. In addition to the dearth of instructions available for the process, there were a couple of other key factors that made it more challenging:

1. The certificate had already been created using a 3rd-party SSL authority.
2. The chain included a root and an intermediate CA certificate.
3. The certificate was a wildcard cert, meaning that it was not specific to a host. Rather, it used “*” plus the domain name.

After investigating and additional trial and error, I finally figured out a way to do this. I didn’t have to install the “root enable package” mentioned elsewhere.  I did have to install OpenSSL 0.9.8.  Finally, this required having both a P7B and a PFX file.  The P7B file provided both CA certificates, while the PFX file provided the proper server wildcard certificate and key.  In the end, I was able to log in with no certificate error.  Hopefully this helps some others out there trying to do the same thing.

  1. Opened “P7B” file.
  2. Exported both the intermediate CA and the root CA certificates as Base-64 encoded X.509.
  3. Combined the exported CA certificates.  I did this by simply opening both files with Notepad++.  Then, I copied each, with the intermediate first and the root second, into one new file.  Gave the file the suffix “cer”.
  4. Imported into NCS over an SSH connection using the command “ncs key importcacert CA-Certs cacerts.cer repository ncs-ftp-repo“, where CA-Certs was the description I gave to the CA, cacerts.cer was the combined certificates file, and ncs-ftp-repo was the repository where I put the combined certificate file.  That repository had been created earlier.  This should result in output similar to the following:
    • INFO: no staging url defined, using local space.        rval:2
    • The WCS server is running
    • Changes will take affect on the next server restart
    • Importing certificate to trust store
  5. Converted “PFX” file to key and pem files using the following commands:
    • openssl pkcs12 -in cert.pfx -nocerts -out key.pem
    • openssl pkcs12 -in cert.pfx -clcerts -nokeys -out cert.pem
    • openssl rsa -in key.pem -out key-nopw.pem
    • The last command removes the password from the key so that it can be imported.  Many thanks to the article that provided the list of commands for this part of the process.
  6. Import the key file (no password) and the converted pem file into NCS using “ncs key importkey key-nopw.pem cert.pem repository ncs-ftp-repo”.  This should result in output similar to the following:
    • INFO: no staging url defined, using local space.        rval:2
    • INFO: no staging url defined, using local space.        rval:2
    • The WCS server is running
    • Changes will take affect on the next server restart
    • Importing RSA key and matching certificate
  7. Reload the NCS server.
Once NCS comes back up, you should be able to log in to the server using the domain name listed in DNS for NCS without a certificate error.  The nice thing about a wildcard certificate is that you can change the DNS entry at any time and it will still work!
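The conversion in step 5 can be sketched end to end as follows (the throwaway PFX and the 12345 password are stand-ins for the real file and its export password):

```shell
# Demo of step 5: split a PFX into cert and key files and strip the key's
# passphrase. A throwaway wildcard cert/PFX stands in for the real cert.pfx.
cd "$(mktemp -d)"
openssl req -x509 -newkey rsa:2048 -nodes -keyout k.pem -out c.pem \
  -subj "/CN=*.example.com" -days 1 2>/dev/null
openssl pkcs12 -export -in c.pem -inkey k.pem -out cert.pfx -passout pass:12345

# The three conversion steps (12345 is the demo PFX export password):
openssl pkcs12 -in cert.pfx -nocerts -out key.pem -passin pass:12345 -passout pass:12345
openssl pkcs12 -in cert.pfx -clcerts -nokeys -out cert.pem -passin pass:12345
openssl rsa -in key.pem -passin pass:12345 -out key-nopw.pem

# The stripped key should carry no ENCRYPTED header.
grep -q ENCRYPTED key-nopw.pem || echo "key is unencrypted"
```

If key-nopw.pem still shows an ENCRYPTED header, the NCS import will fail, which is exactly why the last openssl rsa step matters.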

MAC Filtering for SSID Access

What?!?!?  MAC filtering?  Can you be serious?  Why would anyone use MAC filtering in this day and age, when it has been shown not to be a real security measure?  Someone can sniff the wireless and get the MAC addresses of clients that are connecting, in the clear.  Then, change the wireless NIC MAC to match, and, voila, they’re on.  Well, it takes a little bit of knowledge and the right tools/devices to do it, but you get the point.  I used to call MAC filtering the 3-foot picket fence around the yard.  It doesn’t keep much out, but it looks nice.  Of course, painting it every couple of years, replacing broken boards, and the like means it takes a lot of maintenance.  MAC filtering takes a lot of care and feeding as well – how often are new devices added and old ones removed, especially now in the age of BYOD?

So why use MAC filtering?  I am seeing more and more situations where a customer will call up and say that their users can’t connect to the wireless.  After some investigation, we see that the DHCP scope for the subnet to which they are connecting has been completely used up.  Many user devices (cell phones, tablets, etc.) will automatically look for SSIDs to join.  Part of the “let’s make connecting easier for the end user” mantra.  So, let’s say that there is an open, broadcast SSID – think guest access.  When that device connects, it requests an IP address.  It doesn’t matter that the end user may not even be trying to get on the network.  The device will use up an address.  Now, picture hundreds or thousands of devices within range doing the same thing.  Bingo!  The DHCP scope is exhausted.

If MAC filtering was in place, then inadvertent connections would not be able to obtain an IP address.  In this case, I’m not using MAC filtering for security (unless you are counting DHCP scope depletion as a type of DoS attack).  Rather, it is being used to ensure that legitimate users can get an IP address.

Are there other ways of doing the same thing?  Certainly.  Using larger DHCP scopes (perhaps a /23 or /22) is one.  Not broadcasting the SSID can help, though I generally recommend broadcasting to better support some clients that can have connectivity issues without it.  Cisco ISE (Identity Services Engine) and other vendor solutions that look at the type of end device and move it to another VLAN/subnet can be used as well, though those tend to be quite a bit more expensive.  I’m sure there are others as well.  It’s just that I’m not looking at MAC filtering with quite the same disdain that I have in the past.
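To put rough numbers on the larger-scope option, the prefix math alone gives the following (network and broadcast addresses excluded; gateways and DHCP exclusions reduce the counts further):

```shell
# Usable host counts for candidate guest scope sizes.
for prefix in 24 23 22; do
  echo "/$prefix: $(( (1 << (32 - prefix)) - 2 )) usable addresses"
done
# /24: 254 usable addresses
# /23: 510 usable addresses
# /22: 1022 usable addresses
```

Even a /22 can be exhausted in a dense area full of idle phones, which is why scope size alone doesn't always solve the problem.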

Categories: Uncategorized

Nexus 7K QoS – Part 1

2011/12/02 2 comments

I’m working on a project for a customer where QoS for the Nexus 7K is a requirement.  Anyone who has attempted to configure QoS on these boxes has probably questioned how different these devices are from, say, the Catalyst 6500s.  Well, they are quite different.  If you are familiar with the Modular QoS CLI (MQC), that is a huge advantage, as all QoS configuration on the N7K is based on MQC.


Let me start by pointing out some key differences between the 6500 and the N7k.

ENABLE QOS – 6500: mls qos.  N7K: enabled by default.
TRUST – 6500: mls qos trust [cos|dscp|ip-precedence].  N7K: DSCP (on M1 modules) and CoS (on F1 modules) trusted by default.
INTERNAL QOS – 6500: a QoS label is used internally.  N7K: CoS and/or DSCP passed through, though QoS-groups can be used.
COS-TO-DSCP MAPPING – 6500: default maps CoS to the 3 most significant bits of DSCP (CoS 1 to DSCP 8).  N7K: same.
DSCP-TO-COS MAPPING – 6500: default maps the 3 most significant bits of DSCP to CoS (DSCP 10 to CoS 1).  N7K: same.
CHANGE COS/DSCP MAPPING – 6500: modify the cos-dscp or dscp-cos maps.  N7K: create and apply qos policy-map(s) ingress and/or egress.

So, it’s a different way of thinking about QoS when it comes to the Nexus 7Ks.  Why should things stay the same (rhetorical question…)?  And, I haven’t even discussed ingress or egress queueing.

In addition to thinking in terms of class-maps and policy-maps, there are some other key pieces that need to be understood.  First, there are three class-map and policy-map object-types that can be created:

  1. Network-qos: This is defined in the default VDC.  It defines CoS properties for the entire switch, including all VDCs.  These can be overridden per interface.
  2. QoS: These can be applied ingress and egress to interfaces.  They can be used to mark and police traffic.
  3. Queuing: These can be applied ingress and egress to interfaces.  They can be used to mark, shape and (not surprisingly) queue traffic.
    • NOTE: “queuing” class-maps are pre-defined and CANNOT be changed.  These are defined per the input and output queuing options of the specific module.

Another aspect that makes the N7K interesting is that different modules (the M1(-XL) and the F1) have different options for QoS.  In particular, the F1 queueing policies should match the network-qos policies.  Also, F1 modules don’t support mapping to QoS groups.  The “Cisco Nexus 7000 Series NX-OS Quality of Service Configuration Guide, Release 5.X” has further information on the F1 and specific items for its configuration.

In working through an N7K QoS configuration, I came to the conclusion that it generally makes sense to do the following:

  1. Develop a QoS policy for inbound traffic.  Trusting is fine, but is module dependent (see above on the M1 and F1 differences.)  Matching and either trusting or changing DSCP values, in particular, was key to the proper development of the config.
  2. Develop a queuing policy for outbound traffic.  What CoS values should be used for which output queues (module dependent)?  Is priority queuing needed? What DWRR weights should be used for each CoS value?

So, again – inbound QoS and outbound queueing seems to make the most sense for building QoS configurations for most situations.  And, having that decided helps in better determining the actual configurations.
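As a flavor of point #1, here is a minimal ingress qos policy sketch (class, policy, and interface names are made up; M1 modules assumed):

```
! Classify inbound traffic by DSCP and place it into qos-groups
! for later queueing treatment.
class-map type qos match-any CM-VOICE
  match dscp 46
class-map type qos match-any CM-SIGNALING
  match dscp 24
policy-map type qos PM-INGRESS-QOS
  class CM-VOICE
    set qos-group 1
  class CM-SIGNALING
    set qos-group 2
interface Ethernet1/1
  service-policy type qos input PM-INGRESS-QOS
```

The qos-group values carry no meaning by themselves; they are simply internal handles that the queueing policies can match on later.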

In part 2, I’ll go through a network-qos policy configuration, an ingress qos policy, and an ingress queueing policy to provide some more concrete examples.


2011/10/29 2 comments

Last night I got the call.  One that I’ve never received before.  One that places me with millions of others.  “You’re being let go, effective immediately.”

It’s a different feeling than any I’ve had.  I’ve been working hard, working long hours.  However, the company wasn’t making money.  So, they had to make cuts.  I can understand it, but it still stings.

So, how do you live between jobs, look for a new job when you don’t have one, care for your family, and handle all the things that are now running through my mind?  I’m reminded of a few things:


As a follower of Christ, I know that he has all things in control.  So, I trust him to take care of my family and me.

Reach out to old friends

I’m reminded of how important it is to not “burn bridges” when leaving a company.  Life changes, and we leave jobs.  But, having a good name and reputation are vital to situations like these.

Look for ways to cut back

We had actually started doing this before this happened.  You never know when this sort of thing will happen.  Plus, learning to live on less is a blessing.

Enjoy time with family and friends

You’ve probably been like me, working long hours to get things done.  During the time with no (job) deadlines, take time to do things with your family and friends.  You may not have it again after you’ve gotten back into work.

Evaluate priorities

Look at what is important to you, both personally and professionally.  Start to make changes in those things.  When looking for a new job, be up front about what you would like to be doing and how your past experience and interests tie into that.

Hopefully this can help some of you when you go through the same situation.  And, I’ll post when I have that new job!

Categories: Life