Writing good code 101


A workmate of mine has recently been dusting off his coding skills and using PowerShell to access REST APIs to pull data and graph it in a dashboard. After falling down the never-ending rabbit hole for a while, he tweeted the following question:

It’s not really a question that is best answered in a series of separate 140-character responses, so I thought I’d write a brief post to try and distil my understanding of what good code is. A full-time developer could probably tear this apart and flesh it out with all sorts of deep and meaningful computer science principles, but I’m going to take the perspective of a coding hobbyist, with my target audience being the very same, looking for a quick answer.

Pictures are nice

Let’s keep this as simple as possible. Code can be functional and code can be readable. You want your code to be both at the same time. Let’s discuss those requirements a bit more.


Code that is functional, by my definition, is code that does what it is supposed to do and does it well. Bear in mind that I am talking about code that works well here; I am not talking about functional programming, which is a separate paradigm (you can melt your mind here). Some KPIs to bear in mind:

  • Works predictably. Expected results occur every time
  • As bug-free as possible. Good run-time error checking and code testing
  • Good validation of all user input. Don’t let humans screw up your hard work
  • Allows additional functionality to be added with relative ease. You don’t want to have to start from scratch every time


Code that is readable, by my definition, is code that another person with a basic understanding of your language of choice, or even better, no understanding at all, can browse through and understand what the code does. It also means you can go back to your code in 6 months’ time and not ask ‘what the hell was I thinking?’. Some KPIs to bear in mind here:

  • Follows the language guidelines e.g. in Python, adheres fairly closely to PEP8, code is ‘Pythonic’
  • Is well documented. Good code is self-documenting i.e. the intention is clear in the code itself. Next best thing is well commented code
  • Not over documented. By this I mean focus on the guidelines and making your code clean. You shouldn’t need a comment for every single line of code. Try to make the intention speak for itself as much as possible
  • Does not have duplicate code, which can make understanding code more difficult as well as being generally inefficient. Learn about functions and classes to help with this
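To make that last point about duplicate code concrete, here’s a quick hypothetical Python sketch (the function names and discount figures are invented for illustration). The first version pastes the same calculation in twice; the second pulls it out into a single, data-driven function:

```python
# Duplicated code: the same discount arithmetic is pasted for each customer type.
def price_with_duplication(customer_type, amount):
    if customer_type == "member":
        total = amount - (amount * 0.10)
        return round(total, 2)
    elif customer_type == "staff":
        total = amount - (amount * 0.25)
        return round(total, 2)
    return round(amount, 2)


# Refactored: one function holds the logic, the data drives it.
DISCOUNTS = {"member": 0.10, "staff": 0.25}

def price(customer_type, amount):
    """Return the amount after any discount for this customer type."""
    discount = DISCOUNTS.get(customer_type, 0.0)  # unknown types get no discount
    return round(amount * (1 - discount), 2)
```

The refactored version gives you one place to fix bugs or change the rounding, and adding a new customer type is a one-line change to the dictionary rather than another pasted branch.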

Bringing it all together

To write code that puts you on the right path to Venn diagram overlapage (pronounced o-ver-la-par-jay), you do need to put some effort in. The key steps are:

  1. Learn some basic computer science skills. Not talking about getting a degree, but if you know some basic algorithms and understand what people are talking about when they reference loops, conditional branching, OOP, etc. you’ll be in a better place to answer the question ‘how can I write code to solve this problem?’
  2. Learn how the language of your choice implements those different features. Read the online documentation, buy a book, write some code!
  3. Collaborate. I’m a bit weak in this area myself as the coding I do is pretty much all specific to my workplace and besides, I have justified imposter syndrome. Work on other people’s code and ask for help on yours. This is a great way to gain experience and also improve productivity
  4. Set out to maintain functionality and readability before you write a single line of code. It’s better to incorporate both these requirements as you go, rather than trying to retrofit them later


In this post, I took a high level look at what makes good code. In answer to Craig’s initial question on Twitter, I would say code that isn’t functional and readable is not great code and could always be improved. Make functionality and readability requirements of all the code you write.

This post is aimed predominantly at beginners and hobbyist coders. Got any other advice? Post in the comments.

Till the next time.

Multipath TCP


In a bid to make networks more redundant, we’ve traditionally thrown more paths into the mix so that, should one of them go down, traffic can still flow. In a basic layer 2 network, this would utilise Spanning Tree Protocol (STP) to ensure a loop-free topology, meaning some links went unused, wasting available bandwidth. Etherchannels using stacked switches, VSS or vPC on pairs of Nexus switches allow all links to be used. Equal Cost Multipath (ECMP) can do a similar thing at layer 3, allowing multiple equal cost paths to be selected for routing.

Multipath TCP is a backwards compatible modification to TCP that allows multiple connections between hosts at layer 4. Because this is at the transport layer, these connections can be sourced from different IP addresses e.g. your wired and wireless NICs simultaneously.


A key benefit of this approach is that you can have multiple links being used for the same TCP connection, increasing overall throughput for the same TCP flow. Links can be added or removed without affecting the overall TCP connection, which makes it ideal for mobile use, combining a Wi-Fi and mobile network.

It has uses elsewhere too. As opposed to an Etherchannel, which will only allow a TCP flow across a single link, Multipath TCP will allow a single flow across multiple interfaces, so this will likely become more popular in the data centre.


Multipath TCP is one of those “why didn’t we always do it that way” technologies and it will be interesting to see whether it achieves wider adoption beyond the use cases outlined above.

See RFC 6824 for the full specification.

Till the next time.

The 512K route issue


I was first made aware of an issue when the hosting provider for this blog was tweeting apologies on 12/08/14 for an interrupted service, and I later received an excellently worded apology and explanation from them. A couple of colleagues also got in touch later that evening with reports from further afield.

The facts in no particular order

  1. Essentially, routers and switches make the decision to forward packets either in hardware, using a special type of very fast memory called TCAM, or in software, using cheaper and somewhat slower RAM. The advantage of TCAM is its speed and its ability to provide an output in a single CPU cycle, but it is costly and also a finite resource. RAM, on the other hand, is slower, but you can usually throw more of it at a problem. Which forwarding method is used depends on the model of router/switch you have
  2. The number of IPv4 routes on the Internet has been growing steadily since its creation, and at an increasing rate. Back in early May 2014, this global routing table hit 500K routes
  3. The devices that use TCAM are not only restricted by the finite size available; that TCAM is also used for other things besides IPv4 routing information, e.g. access lists (ACLs), QoS policy information, IPv6 routes, MPLS information and multicast routes. So in effect, TCAM is partitioned according to the use the device is being put to. Cisco’s 6500 and 7600 switch and router platforms (respectively) have a default setting for each of these partitions. On many of the devices, the limit for IPv4 routes is set to 512K
  4. Verizon have a big block of IP addresses that they advertise as an aggregated prefix
  5. On Tuesday, for some reason, Verizon started advertising a large amount of subnets within their block as /24 networks instead, to the tune of several thousand, causing the global routing table to exceed the 512K limit on those devices configured as such
  6. This had the impact that those affected devices did not have enough TCAM to hold the full Internet routing table and so the prefixes that didn’t make it in to the table would not be reachable. As prefixes come up and down on the Internet all the time, these routes would have been random in nature throughout the issue i.e. it would not have just been the Verizon routes affected

Are you affected?

If you have Cisco 6500 or 7600 devices running full BGP tables, you need to run the following command:
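On those platforms, the check is along these lines (quoted from memory, so verify the exact syntax on your supervisor):

```
show mls cef maximum-routes
```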

If the IPv4 line of output is 512k or lower, you are in a pickle and will need to change the settings by entering the command below:
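Something like the following (again from memory; note that this change requires a reload to take effect):

```
mls cef maximum-routes ip 1000
```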

Where 1000 is the number of 1K entries, i.e. the setting as shown in the first output would be 512. Typing a ‘?’ instead of the number will return the maximum available on your platform, so you could in theory need a hardware refresh to add to your woes.

If you have an ASR9K, follow the instructions here to get to your happy place:


Most other router platforms use RAM and so the more you have, the more routes it can handle. The performance varies widely from platform to platform. You should check the vendor’s documentation for specifics e.g. the Cisco ASR1002-X will do 500,000 IPv4 routes with 4GB of RAM and 1,000,000 with 8GB RAM

Who is to blame?

There is an ongoing debate at the moment about whether Cisco are liable or the service providers. I would argue that it is predominantly the latter, but Cisco could have done a better job of advising their customers. Cisco did post an announcement about this on their website a number of months ago, but I didn’t spot it so I’m assuming many other customers didn’t either.


Having said that, if you buy a bit of kit to do something, you need to take some responsibility for failing to include capacity planning in your operational strategy.

Till the next time. (#768K!)

Running ASDM and WebVPN on the same interface


So you are thinking of running ASDM and WebVPN on the same interface? This is quite a rare configuration for the simple reason that ASDM is a management tool and WebVPN is usually enabled on the outside interface, and best practice would dictate using an internal or even dedicated management interface for ASDM\CLI connections. However, in a lab environment, this isn’t such an issue. In fact, in my labs, the machine I manage the ASA from is also the machine I test VPN connectivity from, so this is a requirement for me.


You basically have two options. You can change the port that ASDM runs on, or change the port that WebVPN runs on. As stated, this is mostly seen in a non-production environment so it probably doesn’t matter too much which way you do it but if for any reason you had to use this configuration in production, you would probably want to change the ASDM port so your remote users don’t have to worry about changing ports.

Both options are very simple to implement. To change the ASDM port, you enter a modified version of the command you enter to enable ASDM:
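For example (4343 here is just an arbitrary choice of port):

```
http server enable 4343
```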

This changes the ASDM port to 4343. Missing out ‘4343’ would still enable ASDM, but on the default port of 443.

To change the WebVPN port only requires an extra line:
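Something like the following, entered under webvpn configuration mode (4434 is another arbitrary port choice):

```
webvpn
 port 4434
```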

Of course, both services can be run on the same port if required, but you need to know the URL to access ASDM. (The WebVPN URL is the default and so will load with just the IP address\hostname). The ASDM URL at time of writing on software version 9.1(2) is:
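From memory, it is along these lines (substitute your own ASA’s address):

```
https://<IP address\hostname>/admin
```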


Once you have downloaded and installed the ASDM launcher, you again don’t need to worry about having different ports as the launcher itself connects to the correct URL automatically.


There may not be many situations in which you would consider running ASDM and WebVPN on the same interface but it’s good to know it can be done from both a port and URL point of view.

Till the next time.

ASDM inaccessible on Cisco ASA 9.1(1)


I’ll keep this short but sweet and hopefully this will save somebody a lot of head scratching out there. I unwrapped a couple of brand new ASA5512X firewalls but found the ASDM inaccessible.


All the standard stuff was in there, entered in global configuration mode:
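A minimal sketch of that config (the address, interface name and credentials here are made-up placeholders):

```
http server enable
http 192.168.0.10 255.255.255.255 management
aaa authentication http console LOCAL
username admin password s3cr3tpass privilege 15
```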

I had enabled the http server and told the ASA which host address to accept connections from. I had enabled local authentication and a user name. When I connected, I got a “This webpage is not available” message.

After some sniffing around, I found a line of config that is critical:
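Namely this one (quoted from memory):

```
ssl encryption des-sha1
```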

This is the default on the two ASA devices I received. A security device. That has DES encryption as the default setting. Not very good, Cisco. Not only is it weak encryption, it also stops my Chrome connecting to ASDM. Funnily enough, IE8, another browser on the jump box I was using, allows the connection, but I missed this until after I fixed it, having been a convert to Chrome for quite some time. So one option is to use an older, less secure browser. Or…


The right fix would be to change the default ssl encryption as below:
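That is, with aes128-sha1 being my choice of cipher:

```
ssl encryption aes128-sha1
```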

This now allows more recent (and secure) browsers to connect to ASDM. The command above is shown in the default config in version 9.1(1). In older versions, you would need to enter it manually.

I also believe the default in older versions would be to enable pretty much all levels of SSL encryption:
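Possibly something along these lines; I haven’t verified the exact list, which varies by version and licensing:

```
ssl encryption rc4-sha1 aes128-sha1 aes256-sha1 3des-sha1 des-sha1
```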


The key point to remember here is, when you use a new version of software that you have become familiar with, try and find out what the differences are! I’ve not checked at which point this change was made and whilst it’s not a show stopper it is annoying.

By the way, you may have noticed I set the encryption at AES128 and some of you may be aware that AES256 is an option. The reason I currently choose AES128 over the 192 or 256 bit versions is I’ve read about vulnerabilities (albeit non-critical) with those key lengths. I’d be interested in anybody else’s take on this.

Till the next time.

Cisco ASA – OSPF passive interface is…inactive


I recently came across a quirk in the ASA’s implementation of OSPF that is widely known but I thought I’d share it here anyway. With the firewall being placed at the security boundary that quite often connects to the Internet on the outside interface, you would most often have a default route redistributed in to OSPF, although of course you could also be talking OSPF to your service provider depending on your situation.

However, in my home lab, the lab itself terminates on an ASA, which has its outside interface on my home LAN. I connect in to my lab from that LAN through the ASA and so wish to advertise that network in to the lab for reachability.
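The relevant config looks something like this (the process ID and subnet are placeholders for my lab values):

```
router ospf 1
 network 192.168.0.0 255.255.255.0 area 0
```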

All well and good. Except that I have no OSPF speakers on my home LAN and the ASA, by virtue of line 2, is now desperately trying to chat something up on that network. In my situation, this is mostly harmless. In another, it poses a security risk as anybody could in theory fire up an OSPF speaker, learn about the internal routes and inject their own. Obviously, there are ways to prevent this from happening, e.g. MD5 authentication, but quickly firing up Wireshark shows me the traffic is there and I hate seeing traffic on the wire that doesn’t have to be (Dropbox LAN Sync, I’m looking right at you!).


So it makes perfect sense to use the OSPF passive interface functionality in this scenario. This allows me to turn OSPF chatter off on an interface. In conjunction with the network statement on line 2 above, I can advertise the network in to OSPF, but OSPF will not try to talk on the interface. Job done.

Except that for some reason, the ASA does not support this. The command exists under the RIP and EIGRP configuration modes but not OSPF. One possible way to resolve this would be:
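A sketch of the workaround, with a placeholder subnet:

```
router ospf 1
 no network 192.168.0.0 255.255.255.0 area 0
 redistribute connected subnets
```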

Let’s remove the network statement. This removes the network from being advertised and also stops OSPF talking on that interface. Line 3 redistributes connected subnets (duh, obviously!) in to the OSPF process.

As a side note, line 3 advertises subnets exactly as they are configured on the interface, e.g. a /24 on the interface is advertised as that /24. If you miss the ‘subnets’ keyword off, it will only advertise classful networks, so that /24 would not be redistributed. Also, if you have the subnets keyword already configured and want to remove it, negate the full line before adding it back in without the subnets keyword. If you simply re-enter the command without the keyword, the ASA warns that only classful networks will be redistributed, but if you check the config, the subnets keyword remains and the subnet would still be redistributed. More quirkiness.


Perhaps somebody reading this has insight as to why this functionality has been missed off the ASA platform. The workaround discussed above isn’t perfect either. Anything redistributed will show as an external route in your routing table and quite often that isn’t what you want.

Till the next time.

Logging options on the Cisco ASA


Logging is a critical function of any device in your network, but perhaps even more so on a firewall. Whether you are troubleshooting an issue, following an audit trail or just wanting to know what is going on at any time, being able to view generated logs is highly valuable. This post looks at logging options on the Cisco ASA and discusses some of the things you need to consider.

Tick tock

It’s all very well looking through your logs as individual events but if you want to tie them together, particularly across multiple devices, then you need to ensure that all of your devices have the correct time configured. Nothing says ‘I love you’ more than a time stamp that you can trust. If you use a centralised logging solution, you can filter logs across multiple devices to help determine root cause of issues. You can configure time on the ASA manually by using the following commands:
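For example (the values here are illustrative; check the exact summer-time syntax for your region):

```
clock set 12:00:00 1 Aug 2014
clock timezone GMT 0
clock summer-time BST recurring last Sun Mar 1:00 last Sun Oct 2:00
```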

The above lines configure the time, the timezone relative to UTC and the daylight savings time settings respectively. There is a battery on the motherboard of the ASA that will keep the time should the device lose power or be rebooted. However, locally configured time can drift over…time, so what we really want is to use a trusted external time source that all devices synchronise against and this is where NTP comes in.
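A sketch of the NTP config (the key number, key string and server address are placeholders):

```
ntp authenticate
ntp authentication-key 1 md5 MySecretKey
ntp trusted-key 1
ntp server 192.168.0.254 key 1 source inside prefer
```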

Lines 1-2 above dictate that we should be using authentication with NTP for added security and give a key to use. Line 3 is required to advise the ASA that this key is trusted. Line 4 tells us which server to use, which interface it can be found on and which authentication key to use. It also tells the ASA to prefer this time source over other NTP servers of the same judged accuracy based on what stratum they are in. You should configure at least two NTP servers for redundancy. In the event that all servers are unavailable for an extended period, the ASA can fall back to using the local clock. NTP is a Jekyll and Hyde protocol. It can be as simple to understand as the last section or you can dive deep in to its bowels and be lost forever.

Log destination

Logs can be sent to several destinations but before I list them, it should be noted that logs come from two key sources: system events and network events. System events include things like CPU errors; network events include packets being denied on a certain interface. Both types of messages are dealt with by the logging subsystem and are then potentially filtered prior to being sent to one of the following destinations:

  • Console – logs sent here can be viewed in real time when you are connected to the serial port. As this causes CPU interrupts for each message, you need to be careful when enabling this
  • ASDM – logs can be viewed in the ASDM GUI. From here, you can quickly build filters, colour code the logs by severity and save the log as a local text file to be dealt with later or simply archived
  • Monitor – logs to a Telnet or SSH session. But you don’t still use Telnet for management do you?!?
  • Buffered – this is the internal memory buffer
  • Host – a remote syslog server
  • SNMP – rather than sending logs remotely using the syslog syntax, you can use SNMP to send a trap
  • Mail – send generated logs via SMTP
  • Flow-export-syslogs – send event messages via NetFlow v9

Log severity levels

Before I show some examples of how to configure different logging, it’s worth looking at the different severity levels available to us. There are eight in total as per Cisco’s definitions below:

Numeric level – Name – Definition
0 – Emergencies – Extremely critical “system unusable” messages
1 – Alerts – Messages that require immediate administrator action
2 – Critical – A critical condition
3 – Errors – An error message (also the level of many access list deny messages)
4 – Warnings – A warning message (also the level of many other access list deny messages)
5 – Notifications – A normal but significant condition (such as an interface coming online)
6 – Informational – An informational message (such as a session being created or torn down)
7 – Debugging – A debug message or detailed accounting message

By selecting a lower severity (with a higher number), you are also opting in to everything with a higher severity, e.g. level 4 will not only log all warnings but also all errors, critical, alert and emergency logs. Be wary of selecting too low a severity level, particularly on the console. You can quickly bring a device to its knees if it’s getting hammered.


Here are some examples to show how to get things up and running.
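The following is a representative set of commands (the buffer size, levels and syslog server address are example values you would tune for your environment):

```
logging enable
logging timestamp
logging buffer-size 16384
logging buffered warnings
logging monitor 4
logging trap informational
logging host inside 192.168.0.20
logging device-id hostname
```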

Line 1 enables logging globally. We then enable timestamps on the log messages, without which it’s difficult to tell when an event occurred. Line 3 configures the size of the local buffer memory. Once this fills up, it is overwritten in a circular fashion. Lines 4 and 5 configure the buffered and monitor destinations previously discussed for the same level, the first using the keyword ‘warnings’ and the second using the equivalent numerical value. Both are interchangeable but will show in various command outputs using the keyword regardless (except in the logs themselves, where the numerical form will display). Lines 6 and 7 are configured together for remote syslog logging. Line 6 enables the logging at the specified level (in this case informational) and line 7 configures the syslog server IP address and the interface it can be found on.

Line 8 allows various other attributes to be included in each log message. In this case, it will include the hostname but can also include the firewall IP address of a particular interface, the context name (where used) or a specific string. The latter could be useful for using regular expressions for refining logs at a more granular level.

Finally, the show logging command will firstly show the different settings for each logging destination and then the current contents of the local log buffer. Below is an example of its output with just the first log entry for brevity (please note the enabled settings below are not by any means ideal for a production environment, you need to consider what is best for yours):

One thing to note about logging to Telnet\SSH sessions using the monitor destination. Whilst you may have this enabled and will be able to see the messages logged count in the above output rising each time, you may find yourself confused as to why, whilst SSH’d on to your ASA, you aren’t seeing the logs on your screen. To view logs for the current session, assuming they are enabled, you need to type this command in whilst connected:
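That command being:

```
terminal monitor
```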

and the logs will start appearing according to the severity level you have set. In a move that I can only attribute to Cisco allowing a drunken intern to come up with the command to negate the above one, somebody settled on:
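The following, word order and all:

```
terminal no monitor
```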

Thanks drunken intern guy. To finish off this post, I’ll bundle some other commands together with a brief description:
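Those commands being:

```
logging standby
logging debug-trace
logging emblem
```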

If you have a pair of firewalls configured in a failover configuration, you can enter the first command to enable logging on the standby unit also. Just be aware of the increase in traffic if logging externally to the ASA. The second line will additionally send debug messages to any configured syslog servers, which is disabled by default. Again, this can cause a severe increase in traffic, especially if you enable lots of debugs. The last command changes messages to a proprietary Cisco format and, to be honest, I don’t think it’s used much at all.


Hopefully you will have learnt a couple of extra things you can do with logging from this post but you can dive even deeper and I suggest you do to get the most out of this critical function. For example, you can archive buffered logs to the local flash or to a remote FTP server, you can disable certain messages completely or just filter them from certain destinations. You can also change the default severity of individual messages to better suit your environment. It would require a certain amount of initial work but would be easily repeatable across your estate.

A great place to learn more is to use the ASDM console which, despite me being a CLI fiend on the ASA, comes in to its own when configuring and reviewing logs. Also pay special attention to what level of logging you have for each destination. I’ve only covered a couple of key points on how best to do this (e.g. disable console logging) as what works best depends on your environment. If possible though, try to use a centralised syslog server and use the ASDM logging due to its immediate nature and filtering capabilities.

Till the next time.

Top IT podcasts


I thought it would be valuable to some readers if I collated a list of the top IT podcasts in one place and gave a brief description of each of them. Who am I kidding? This list will also be helpful to me as I get older and start forgetting more and more things. Some of these shows were new to me so I went on a marathon session before reviewing.


  • Packet Pushers – this was the first networking podcast I started listening to and it’s still my favourite. It is hosted by Greg Ferro and Ethan Banks, both of whom have been in the industry long enough to know a thing or two. Topics cover protocols, hardware, design, security, certification, a little bit of SDN, etc. There have been a few attempts at other podcast streams under the Packet Pushers banner. The Priority Queue is used to deep dive on niche topics and Healthy Paranoia is an excellent security focused podcast hosted by Michele Chubirka. There was a Wrath of the Data Center show that was based around the CCIE Data Center certification hosted by Tony Bourke but it withered away after only a couple of shows. There is a growing blogger community too and also a forum. PP has further presence on IRC, Twitter, Facebook and Google+. Apart from the mostly great content, the thing that really works for me is how Ethan and Greg complement each other so well. Add in Michele’s insane approach to introducing her show and in depth knowledge and it adds up to a fun learning experience. Expect a new show about once a week
  • No Strings Attached – love wireless? Then look no further. Hosts include Sam Clements, Jennifer Huber, Blake Krone and George Stefanick. The show is relatively new, hitting the airwaves in January 2012. This is obviously a more specialised show than some of the others but, already at show 19, they are still producing great content at about a show every 2-3 weeks. Topics include hardware from different vendors, software and the ever evolving 802.11 standards
  • Class C Block – this is the newest of the shows listed here, only producing its first show in September 2012. Since then the hosts, CJ Infantino and Matthew Stone, have produced a show roughly every 2-3 weeks, although it has sadly ground to a halt. Topics cover IPv6, studying, design, and MPLS. Give them your support by getting over there and having a listen and if you like, drop them a comment. There is nothing like positive feedback and high consumption figures to motivate more content. I found this podcast quite refreshing for the most part. You can sense the guys wanting to learn themselves as much as feed back to the community. Just a shame it ran out of steam
  • Risky Business – another more specialised and award winning show, this time focusing on security. This is the longest running show featured here having been born in February 2007 and produces a show every 1-2 weeks. Don’t feel overawed by the 200+ shows, go back up to six months and start from there, dipping in to any older shows that take your fancy
  • Social Engineer.org – a resource rich website, it focuses on what is for me the most intriguing aspect of Information Security. The show itself started in October 2009 and is produced about once a month. Topics have included pretexting, NLP, penetration testing and Kevin Mitnick. The main host, Chris Hadnagy, is excellent and he has a number of supporting panelists, such as Dave Kennedy, who all offer something different to make this one of my favourites. The quality of the guests always impresses
  • Arrested DevOps – This is one of my favourite more recent podcasts with a good line up of industry folk and content. The show notes are always top notch with full transcriptions too. Some of the topics include hiring in IT and dealing with failure


Have I missed your favourite? If so, add it in the comments below with a brief synopsis as I have above. Try at least a couple from each of those listed above and let the hosts know what you think.

Till the next time.

(I always feel like) somebody’s watching me


Rockwell was a true visionary of his time. His mega hit of the 80’s after which this post is titled always takes me back to my childhood and puts a smile on my face. OK, perhaps I’ve given him too much credit in my opening statement but I was recently reminded of this song at an InfoSec event I attended in January. There were some great presentations from vendors, professionals and amateur hackers alike. It was one of these sessions in particular that made me go ‘wow’ so I thought I’d write it up.


I would like, if I may, to take you on a strange journey. Imagine yourself walking through a busy city, perhaps on your daily commute or just sightseeing. How would you feel if at the end of the day, somebody who you had never met before approached you and showed you a picture of your house, told you where you worked and also which park you like to drop your kids off at each Saturday morning? Of course, things would be even more creepy if somebody had that information but decided to not tell you. It might sound like a bad movie plot. It might also sound like a scaremongering tale or at the very least, highly unlikely due to the assumed effort to collect such information.

Step in Snoopy, a distributed tracking and profiling framework. Using a geographical distribution of wireless access points (WAPs) called drones, they can track a person’s movements as they move around the catchment area. They do this using the MAC address of the mobile device you carry with you e.g. phone, iPad, etc. and take advantage of the chatty nature of WiFi enabled devices. Your device will broadcast on a regular basis trying to find every SSID it has connected to. To use some overly obvious examples, a phone could be trying to find the following wireless networks:

  • MyHomeHub3874
  • CompanyA-Guest
  • MoonBucksCoffee4242
  • CityYAirport

The drone devices themselves can be contained in a very small form factor. You could for example use a Raspberry Pi or even make a custom device. They can be battery powered but imagine one in a mains pluggable air freshener form factor. Blends in nicely and is continuously powered for free. They can be made for about £20-£30 so if anybody finds and removes one, the cost to replace is acceptable. By placing 50 or so of these around a city at key ‘people centric’ places such as train stations, shopping centres, sports stadiums etc., you can cover a large area very well.

The drones also have 3G or better to connect, via OpenVPN, to a server so that data collection can be centralised and also to provide Internet connectivity for clients when it wants to take things to the next level. Before I explore that though, you can see how the person mentioned at the start of this post could quickly glean the information he or she was after. As you walk past any of these drone APs, it detects the SSIDs that you have connected to and, with a tip of the hat to Google’s massive war-driving initiative as part of mapping the world, can determine the geographical location of each of these. As long as the SSID is unique and has been mapped to a publicly accessible database, then it’s a breeze to link devices to locations. The next step, which is a bit more naughty, is to use the AP as a classic rogue, i.e. now allow the client to connect to it by spoofing the SSID. This only works on open networks, i.e. those that haven’t been secured with WEP\WPA(2).

At this point, the mobile clients can start accessing the Internet via the rogue AP. This is bad enough yet to make matters worse the connection is proxied via the centralised server meaning that all traffic can be analysed further. That’s a lot of data being collated at the central server but the fun really begins when that data is processed for habits and patterns.


Although Snoopy is lesser known, many of you may already have heard of or even used Maltego. It is touted as an intelligence and forensics application which can mine a source of data (in our case, the data collected from the drones on the central server) and present it in a visual form. It allows the creation of custom transforms which analyse relationships between people, networks, websites, social services etc.

Putting it all together

Now imagine how these two tools can be combined. As our unsuspecting victim, you walk in to MoonBucks located near a drone device which listens to your phone’s broadcasts. The AP easily becomes MoonBucksCoffee4242 and your phone connects. You buy yourself a mocha choca latte with an extra shot and decide to check a couple of things online whilst you fuel up. You first head over to Facebook, then check your Twitter feed. Perhaps you also log on to LinkedIn, Google+ and whatever else takes your fancy.

Between Snoopy and Maltego, there are all sorts of interesting pwnage to be had. As stated in my initial paragraph covering Snoopy, it is easy to see which SSIDs you have connected to and where they are located, but as soon as you connect and start browsing, it can be determined relatively easily who you are based on the sites and profiles you are viewing.

Taking it further and introducing time-based data, i.e. going beyond a single session, it can now be determined not only who you are but where you are. Over a period of time, patterns may show up that suggest you take sick days after a big sports event on a Sunday, on Wednesdays you work in a different office, the first day after pay-day you tend to be running an hour later than usual, etc.

Let’s consider a scenario that doesn’t require as much data crunching. If somebody wanted to track a particular person, one way to determine the MAC address of his mobile device would be to set up a drone at an event that person was known to be attending. Do this a couple more times at other events and then use Maltego to extract the unique MAC addresses. You might get lucky and whittle this down to a single MAC after only two events (i.e. the tracked person is the only one to attend both events), perhaps three or four, but if you can now get that device to connect to the Internet via your AP, you can link all activity to that one person. I’ll leave it up to your imagination as to what devastation could ensue.
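The whittling down described above is just a set intersection. A minimal Python sketch of the idea (all MAC addresses here are invented for illustration):

```python
# Hypothetical capture data: the set of MAC addresses seen by a drone
# at each event the target is known to have attended.
event_1 = {"aa:bb:cc:11:22:33", "de:ad:be:ef:00:01", "12:34:56:78:9a:bc"}
event_2 = {"aa:bb:cc:11:22:33", "fe:ed:fa:ce:00:02", "0f:1e:2d:3c:4b:5a"}
event_3 = {"aa:bb:cc:11:22:33", "de:ad:be:ef:00:01"}

# Only devices present at every event survive the intersection
candidates = event_1 & event_2 & event_3
print(candidates)  # ideally a single MAC: the tracked person's device
```

With real data the sets would be thousands of entries each, but the principle is identical: each additional event shrinks the candidate set, often down to one device surprisingly quickly.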


This is all relatively cheap to get set up and working. As a hacking project, I find this fascinating but it should be obvious how, in the wrong hands, your privacy could easily be torn down. Once you mine the data to a deeper level and begin to correlate the movements of multiple people, both physically and online, then your habits basically become an open book.

We are all accustomed to a seamless mobile experience these days but how do you mitigate this kind of attack? Again, it’s the ongoing balancing act between security and usability. This post is highlighting the possibility of such an attack. There are certain steps you can take to make sure you are not connecting to a rogue AP but it’s difficult to block many mobile devices from advertising the networks they have already connected to. This is all posted as food for thought and I hope it was of as much interest to you as it was to me.

Till the next time.

Overview of Cisco Catalyst 3850 switch


As many of you will be aware, Cisco announced the release of the Catalyst 3850 switch at Cisco Live 2013 in London only last week. As I blogged at that time, this wasn’t the world’s best kept secret. Several people were talking about it online and I’d come across a few pages on different parts of Cisco’s website hinting that it was coming. There was mixed reaction to the news from ‘is this not just a 3750 with an integrated Wireless LAN Controller?’ to more warm and welcoming feedback. I’ll try and leave my own judgement until the end of the post but for now, let me list some of the specs of the 3850 and make the obvious comparison to the 3750X using data from Cisco’s website:

Comparison of Catalyst 3750-X and 3850 Switches

Feature                                  | Cisco Catalyst 3750-X                         | Cisco Catalyst 3850
Stacking bandwidth                       | 64 Gbps                                       | 480 Gbps
Cisco IOS Software wireless controller   | No                                            | Yes
Queues per port                          | 4                                             | 8
Quality-of-service (QoS) model           | MLS                                           | MQC
Uplinks                                  | 4 x 1GE; 2 x 10GE NM; 4 x 1GE or 2 x 10GE SM  | 4 x 1GE; 2 x 1/10GE; 4 x 1/10GE (on 48 port model)
StackPower                               | Yes                                           | Yes
Flexible NetFlow support                 | Yes (C3KX-SM-10G required)                    | Yes
Multicore CPU for hosted services        | No                                            | Yes
Flash size                               | 64 MB                                         | 2 GB
Operating system                         | Cisco IOS Software                            | Cisco IOS-XE Software

The first thing that is immediately obvious is I need to find a better way to format tables on my site!

The second thing is that, putting the integrated wireless functionality of the 3850 to one side for now, it is clear that the 3850 offers improvements in several areas: far greater bandwidth across a switch stack (where more than one of these switches is connected together as a single ‘virtual switch’; the stacking cables themselves are much improved too), more queues per port, a preferable QoS model and a move to IOS-XE, which in itself has a number of improvements over vanilla IOS. Take a visit to various places on the web and you will find many more spec sheets that show improvements of all sorts, e.g. more ACEs for security, QoS and PBR, a bigger TCAM and many more.

Integrated WLC

Whilst we all love having more of everything to play with on our favourite devices, I think that the feature that gives this announcement some punch is the wireless capabilities of the switch and all in a 1U form factor. You could also get this functionality in a 3750X but only on a 2U switch from what I recall. Of course, if you want to stack your switches and want redundancy in the WLC also, then 1U wins over 2U every time, 4U over 8U, etc.

The WLC integrated in to the 3850 has some features that you would want to see in any Cisco controller, e.g. CleanAir, EnergyWise, QoS. One switch will support 50 WAPs and 2000 clients. Although I haven’t looked at purchasing these yet, I was told by a number of Cisco people at Cisco Live that the price is going to be comparable to a 3750X, but you will probably need to add the WLC licensing on to that base price.


If you consider that you are saving yourself the requirement for a standalone WLC on top of all of the increases in capabilities, the move to IOS-XE, the improvement in the stacking technology etc., the 3850 looks like a very capable and tempting upgrade to the 3750X. Cisco are classifying this product under Unified Access, bringing wired and wireless access in at the same point. I just wish I’d had the opportunity to put them in to our office network last year when I opted to use a pair of stacked 3750X switches with a 2504 WLC.

Till the next time.

Catalyst 6800 switch? Maybe, maybe not…


Just a quick piece of conjecture here before last call at the airport. I was hearing rumours of a Cisco Catalyst 6800 series switch over the last few days. Some quick and dirty ‘facts’:

  • It is being touted as the next generation chassis for the 6500. This is to allow the limits of the 6500 backplane to be upgraded in order to accept some increasingly sophisticated line cards e.g. 100G, service-a-riffic <long week, that’s all you’re getting
  • Will accept half width line cards
  • Will be available in 2013

I only got a whiff of this on the last day so wasn’t able to delve any deeper but it definitely gives weight to the statements coming from Cisco that the 6500 platform is being invested in, even though they aren’t calling it the 6500 anymore! We’ll just have to wait and see.

Till the next time.

Upgrading my home network: part 2


If you’ve read part 1 of this article, you will probably feel my frustrations or at least have laughed out loud. I had tried several Internet providers and yet even when I settled on one that was adequate, my internal network was still shabby to say the least. This second part has been a long time coming so without further ado, let’s get on with how I went about upgrading my home network.

The MiFi device that I was using as a router\3G modem is great for mobility but doesn’t have the power to give me the coverage I needed so I bought a TP-Link MR3420. This is effectively a wireless router with a USB port that supports a wide range of USB 3G modems. The one that I had been provided by 3 when I got the PAYG deal was on the compatibility list. The other good thing about this dongle is that it has a CRC9 connector for an external aerial to help boost signal strength. I didn’t have any power in the loft of my house, which is where I wanted to have the modem…the higher the better. Rather than just use a really long aerial lead, I got Mr Electrician to come around and fit some lighting, four power sockets and a couple of switches in a cupboard at the top of the stairs so that I could power cycle any kit up in the loft without climbing up there. It’s been handy, although I’ve only had to do it a couple of times since installation. We also fed a couple of RJ45 leads from the loft down to where the telephone sockets are.

The lowdown

Below are some pictures with descriptions.


The picture above is looking upwards in to the top of the house. On the right is the TP-Link router. You can see the USB lead that goes off to the USB dongle at the bottom. Two of the Cat5e leads are connected to a patch panel in the loft and from there run downstairs; the other connects to a Power Line device which isn’t functional at the moment but, for those who are unaware, will allow me to run networking over the mains power. Finally, the device at the upper left is a 3G aerial. I’m toying with mounting it to the outside of my house for better gain. I’m also not too happy about the temporary feel of this picture, e.g. having a 3G modem right next to a power lead.

Patch Panel

Here is the patch panel that connects the yellow leads from the router to the orange leads that get routed downstairs as you’ll see. The patch panel is Cat6. Yuck, no wonder it was lying around spare at work. I was pleased that the joists were just the right distance apart to let me screw the patch panel in.

Cable run

The data centre grade cables travel in between the joists downwards and come out in the space shown above. As is quite common in the part of Scotland I live in, many rural houses are built as 1.5 storey properties i.e. the upstairs rooms have a slant as they meet the roof. The space above is accessible through a small entrance in the master bedroom. It is quite spacious and gives great access to some of the plumbing and wiring. I used clips at every joist to make it look neat.

Wall port

The most difficult part was getting the leads from the previous picture down to the ground floor. A fair bit of poking around was required but above you can see the dual RJ45 wall port, both sockets of which are connected back to the router. The yellow lead is run around the skirting board and through a small hole in the wall to my study, where my main PC is just on the other side. The white, flat lead, as some of you may recognise, goes in to my Sonos wireless bridge which sits on a small table nearby. Yes, that was me that did the painting around that phone socket when we first moved in.

As I stated before, I also have a Power Line device that I can plug in anywhere to connect to the other one in the loft but I’ve not had any need to use it so far. The wireless signal covers the entire house and I’ve not got any devices that don’t have wireless. Having said that, I do have a Raspberry Pi and I may just use the Ethernet port so I can use the WiFi adapter elsewhere. I’m not sure yet.


So has the networking improved? Most definitely. The biggest improvement is the wireless internally. All devices can now talk to each other at different ends of the house. The 3G still isn’t great but it’s much more stable than it used to be. The packet loss is 0% most of the time but once a couple of devices start trying to download at the same time, the connection evaporates which isn’t great and shouldn’t be acceptable in this day and age. I think I need to apply some science to determining the root cause. There are tweaks I can make to try and improve further which I mentioned above e.g. mounting the aerial externally. If I make any further amendments, I’ll keep you posted.

Till the next time.

Upgrading my home network: part 1


In part 1 of this article, I give a little background of how my home network has previously been set up. In part 2, I go through the plan of upgrading my home network and how that plan was implemented.


I’ve lived in my current house for six years. It’s an old farmhouse out in the country and my wife and I fell in love with it as soon as we saw it. I had asked the previous owner if she could get ADSL and she reported she hadn’t signed up but was under the impression it was available. Upon moving in, I signed up with an ISP who thought it should be possible, but alas upon installation of the router and filters, it wasn’t to be. I had British Telecom engineers out to my house, their Higher Level Complaints team on the phone frequently, but it just boiled down to me being too far from the exchange. This was rubbed in further by houses on each side only a 1/2 mile away that enjoyed ADSL Internet access. BT’s line routing policy meant I was stuck.

So I used dial up for about a year and nearly went insane due to having to take this step back to the (1st world) dark ages. Migrating 750 MS Exchange users to a new domain via PowerShell over a high latency\low bandwidth dial up VPN connection is about as much fun as sticking pins in my eyes. On top of the weekend overtime I was putting in, I was also on the on-call rota for one week in six. More often than not, I’d find myself driving around to my parents who live a couple of miles away to use their broadband if I thought the job would take more than 15 minutes. RDP over dial up is appalling more often than not. When you need to RDP on to your jump box via VPN to then RDP on to the customer’s jump box to then RDP on to the server in question (I love customers’ requirements), a five minute job could literally take well over an hour.


I then heard about the Scottish Government stumping up a paltry £3.5M to fund its Broadband Outreach programme, to bring broadband to those that were currently unable to receive it for whatever reason. Enter me and about 10 neighbours of mine. I jumped at the chance to be our ‘cluster’s’ front man and was soon dismayed to find out that the final solution would be two way satellite, provided by Avanti who at the time didn’t have a great reputation. The fact that I spent every day under contract with them wishing I had an alternative should clarify whether that reputation was well founded or not. Quite frankly, they were appalling. The dish was almost the size of Jodrell Bank. The modem was the size of a DVD player and it had two thick strands of coaxial cable running to it via a hole in the wall. On a good day, the latency was 750ms. On a bad day, it simply didn’t work. It cost about £45 a month for a 2Mb\s connection which thankfully, my company stumped up the lion’s share of. Something had to be done.

3G alternatives

I had tried a number of different 3G mobile dongles from T-Mobile, Orange and Vodafone and all were worse than the satellite. Even walking around the house outside with my laptop made little difference. My house has 12-18″ thick walls in places so I was still willing to have something mounted outside with a lead coming back in. After about 3 years of putting up with the satellite, I decided to look at the problem again, this time reviewing the website of 3, another mobile provider and one of the first to market with 3G in the UK. Their site claimed I could get 3G at my postcode. I decided to give their PAYG package a ‘low risk’ try out. When I got the dongle home to test, I got the same lame connection. But when I walked around the house with this one, I would find some sweet spots where the latency dropped to 70ms and I could get a solid 1Mb\s up and down speed! This was massive progress, despite some of my colleagues from the big smoke advising me their home connection had just been upgraded to 100Mb\s. I did a good selection of testing, including my work VPN, YouTube, general browsing and a long lost art for me…online gaming using Call of Duty: Modern Warfare. All passed with flying colours and I signed up for a contract.

Internal network

It is worth pointing out at this stage that my internal network was crappy too. Those hefty walls are great at stopping WiFi as much as 3G signals. When I got the MiFi device (a 3G modem and WiFi AP\router in one device), it was best placed in the main bedroom, plugged in for a stronger, more consistent signal and draped over the latch on the window. Unfortunately, the TV room is downstairs and at the other side of the house i.e. pretty much as far from the MiFi as possible. I bought a WiFi extender but it simply repeated the signal on the same channel and so, whilst it made the coverage better, it didn’t seem to improve connectivity if there were a couple of devices trying to connect at the same time. Access to the MiFi was affected by packet loss due to collisions\retransmissions. I needed a more resilient solution so I took a more holistic approach to my networking needs. In part 2 of this article, I outline the design I came up with and give some installation pictures. Hopefully, some people reading this might find some nuggets of inspiration. At the very least, I hope it puts a smug smile on some of the 100Mb\s+ brigade!

Need to go now, the electrician has just turned up to power up the loft.

Till the next time.

Cisco ASA Identity Firewall


When the CTO approached me asking how access to a subnet was restricted, I advised him that the people who needed access were given a DHCP reservation and an ACL on a Cisco ASA limited those IP addresses to certain destination hosts on certain ports. It wasn’t scalable and it wasn’t particularly secure. For example, if one of those users wanted to log on to a different machine, they would get a different IP address and access would break. If their computer NIC got sick and needed replacing, same thing. Worse still, anybody could log on to their machine and get access to the same resources or give themselves a static IP when the other user had their computer turned off and nobody would be the wiser.

I had been looking at a number of methods to address this, such as 802.1x authentication\application proxy and during our discussion, we both came to the conclusion that it would be pretty cool if we could restrict access via usernames, especially if this could remain transparent to those users. This would give us the flexibility and transparency we were looking for whilst maintaining the required level of security. Isn’t that the perfect balance that we strive for in IT security?


Enter the Identity Firewall feature on the Cisco ASA platform. This is a new feature available from software version 8.4(2). The Identity Firewall integrates with Active Directory using an external (to the ASA) agent. The Cisco.com website has a host of documentation on the feature which you should follow to get it up and running but below is a summary of how it works, some things to be aware of and my thoughts on the feature.

How it works

First of all, this feature has three main components. The ASA itself, Active Directory and an agent which sits on one of your domain machines and feeds information back to the ASA. There are a number of prerequisites you need to make sure are in place and rules you must follow before rolling this out e.g. AD must be installed on certain OS versions, same for the server the agent is on, you can have multiple agents for redundancy but only one installed on a domain controller, you must configure the correct level of domain wide security auditing, ensure host firewalls are opened accordingly. The configuration guide does a pretty good job of listing all of these and I would heartily recommend you check it out if deploying the feature.

There is also the relatively small matter of creating an account in AD that the ASA will use to communicate with AD. You also need to create some config on the ASA itself made up of a mix of AAA and specific ‘user-identity’ commands. The agent is configured via the command prompt (or PowerShell prompt, as I have tended to use since I fell in love with it deploying Exchange 2007). You set up which AD servers it should be monitoring (you want to point it to all DCs that will be logging account logon\logoff events) and then which clients (ASAs) the agents will be reporting to. You can also configure Syslog servers for better accounting.

In a nutshell, the agent is checking the security logs of the DCs to see who is logging on and with what IP address. It adds this information to a local cache and sends it to the configured ASA clients. This is done using Radius. The ASA also talks directly to AD using LDAP to get AD group membership. The final piece of the puzzle is enabling the Identity Firewall so that all of these components start talking. At this stage, there are various commands you can run via the agent or on the ASA to confirm the communication is working as expected.

It is then a matter of creating ACLs that utilise users and\or groups and giving them access to resources. You can also combine this with source IP which means you can say Fred can access Server X when he is in Subnet Y but if he moves, he loses access. If Janet logs on to Fred’s original machine, she could have the same IP but won’t be given the same access due to her username not being in the ACL.
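To make that concrete, here is a rough sketch of the shape the configuration takes. All of the names, addresses and the port here are invented for illustration (CORP, ADAGENT, ADLDAP, the 10.0.x.x hosts, the SQL port), and you should treat the Cisco configuration guide as the authority on the exact commands:

```
! Hypothetical example only: names/IPs are made up.
! Point the ASA at the AD agent (RADIUS)...
aaa-server ADAGENT protocol radius
 ad-agent-mode
aaa-server ADAGENT (inside) host
 key SharedSecret123
! ...and at AD itself for group membership (ADLDAP is an LDAP
! aaa-server group you would define separately).
user-identity domain CORP aaa-server ADLDAP
user-identity default-domain CORP
user-identity ad-agent aaa-server ADAGENT
user-identity enable
! Fred may reach the server on TCP 1433; tie it to subnet Y if required
access-list INSIDE_IN extended permit tcp user CORP\fred host eq 1433
access-group INSIDE_IN in interface inside
```

The key point is the last ACL entry: it reads just like a normal extended ACL, with the `user` keyword slotted in before the source.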

I found setting it all up very easy. However, circumstance led to it being some time before all interested parties were happy with the outcome e.g. the firewall being used had recently been upgraded from 8.0 code to 8.4, and a load of migrated NAT rules started playing up, muddying the waters. The positive side to this was that the firewall config has been streamlined considerably. There was also the morning spent troubleshooting a single user’s issue that ended up being down to him not being in the correct AD group as assumed. That said, I think it’s a pretty robust feature for what is effectively a first iteration.


I’ve tried to break it a number of ways e.g. pull an authorised user’s network cable out of their machine, give myself their static IP, but without being logged on as them, my username\IP pairing doesn’t match what the firewall thinks. There are a number of timers involved with this feature that might trip you up if you don’t understand them e.g. the firewall can only check group membership every hour at best, meaning if you remove a user from a group, they could still retain access. Worse still, if you disable their account in AD but keep them in the group, they will still have access until their Kerberos ticket expires. You can manually update group membership on the ASA if this would be an issue for you. I would guess that if it is, the person has been marched out of the building at that point anyway.

VPN logon vs AD logon

Another thing to be aware of is remote VPN access. When you remote VPN on, you get authenticated to the firewall. This could be via local ASA accounts, Radius, TACACS, ACS, LDAP etc. If you use AAA LDAP authentication (using Active Directory in this case), you are not logging on to the domain as you VPN in, you are simply saying ‘here are my AD credentials, please authenticate me on the firewall’. At that point, one of two things happens with the Identity Firewall. If you are using a domain computer to remote on, that machine will automatically try to make contact with a DC. When it finds one (over the VPN), it will log on to the domain, create a security log and the AD agent will let the ASA know. Any rules assigned to that user, that don’t filter on source IP, will now come in to effect. However, if the machine is not joined to the domain, there will be no logon event (the username\password given at connection was only for VPN authentication), and so any user-identity ACLs will not apply.

In the latter case, you may want to create a specific VPN connection profile with specific DHCP pool for example and go back to restricting by IP, only allowing users in a certain AD group for example to be able to connect to that profile. Not ideal, but it would work.


In summary, I really like this feature. It answers a number of questions specific to the environment I have implemented it in. Things I particularly like are its ease of setup, including the troubleshooting commands to aid with this. Learning the syntax of the ACLs is a simple task, as they extend upon existing IP-based ACLs. I also like the fact that there are several ‘under the hood’ settings you can tweak to improve security further still, e.g. remove users from the cache if their machine has been idle for a certain amount of time, force LDAP lookups over SSL.

Something that I wasn’t so keen on was the split communication i.e. the ASA getting group membership directly from AD, IP mapping from the agent. I would like to see the agent providing the group membership too. It should also not be too hard to tell the agent which groups to keep a closer eye on (the ones used in ACLs sounds like a good start). This would allow the ACL to be updated much quicker should a group be amended.

I would love to hear what others’ experience of this feature has been. Overall, I am very pleased with it and look forward to upcoming improvements.

Edit: additional points

A few other points I should highlight: if you move to a different subnet, the new IP address will be picked up as long as your device authenticates with the domain. I tested this early on by disconnecting my laptop from the wired network with a persistent ping happening in the background. I then VPN’d back to the office via the WiFi network and within a couple of seconds of the connection being made, the ping re-established.

The second point will hopefully save somebody out there a headache: a user testing this functionality was reporting it working as expected until they would open a spreadsheet that made a data connection to a SQL database in a different subnet. The user had access to this SQL database as allowed by the Identity Firewall ACLs but as soon as the data was refreshed in the spreadsheet, he would get kicked. I could see it happening using the ASA and AD agent troubleshooting commands too. After a little bit of head scratching, we realised that the spreadsheet data connection was using a different set of domain credentials to connect to the database. This triggered a logon event for the user’s IP address but with the user account for the data connection. In turn, the AD Agent updated the cache with the new user and IP combination and denied the traffic based on the ACL. One answer to this would be to add the data connection account to the ACL but in the end, we just connected to the database in the spreadsheet using the user’s own credentials. This highlights the fact that, whilst a user can be affiliated with multiple IP addresses, only one user can be affiliated with any one IP address, which is the key to making this feature secure.

One last thing which is perhaps the most important, considering this is a discussion about a security feature. If your users connect via a shared IP address, you should be aware. For example, if your users log on to a Terminal Server, they will all be coming from the same IP address by default. That means that if a user with elevated permissions logs on to the TS, all other users will effectively gain the same access as permitted via the Identity Firewall. My advice would be to deny access via a shared IP address to any resources that need restricted access.

Till the next time.

The Mask of Sorrow

The purpose of this post is to discuss the differences between a subnet mask and wildcard mask, when they are each used and what tricks you can do with them. This post does not go in to any real depth on subnetting and assumes you know how to subnet already. You probably won’t be surprised from that last sentence to learn that this post covers wildcard masks in a little more detail than their subnet cousins.

I recently saw a tweet asking about wildcard masks and if anybody had a good system for working them out. I keenly replied and, whilst my answer was correct, it turns out that it was quite limited in scope. It sparked an interesting discussion and in the end I learned something that I didn’t know, as did a couple of others, so it seemed like a good topic for a post.

Subnet masks

Let’s start with subnet masks as they are the easiest to understand and are the less ‘funky’ of the two masks being discussed here. They should also be more familiar to non-networking IT types. If you know any sysadmins who don’t know what a subnet mask is, you have my permission to flick them on the forehead very hard. Repeatedly.

Matt’s definition

A subnet mask, in conjunction with an IP address, tells you which subnet that IP address belongs to. Another way to put it is that the subnet mask tells you which part of the IP address refers to the network (or subnet) and which part refers to the host specifically within that subnet.

To give an example, I will lay out my thinking on the page step by step with descriptions of what I am doing.

  1. Let’s take a random example (using slash notation):
  2. I’ll convert this to dotted notation:
  3. Now to convert to binary: 11000000.10101000.00101010.01001111 11111111.11111111.11111111.11000000
  4. Now I’ll put the subnet mask under the IP address. This makes the next step, doing a binary AND operation, easier to visualise:
    11000000.10101000.00101010.01001111 <IP address
    11111111.11111111.11111111.11000000 <subnet mask
    11000000.10101000.00101010.01000000 <binary AND operation
  5. This is still a /26 remember, so we can now convert this AND result back to a decimal number which represents the network or subnet that the original IP address ( belongs to:
  6. As you should know from your subnetting studies, the range of this subnet will be from to

From the steps above, the first three all have the goal of getting the IP address and mask converted to binary. Why? Well, in the example above, it’s to show how the binary AND operation works. When the maths becomes more comprehensible, you should find that working this out in decimal and eventually in your head is second nature. The result of the binary AND gives you the network ID or subnet number that the host belongs to. That is what the subnet mask does: it masks the IP address in such a way as to reveal the subnet. The subnet mask should always be a consecutive run of 1’s, followed by all 0’s if any are required (i.e. anything other than a /32).

So where is a subnet mask used? The table at the end of this post gives examples (not a definitive list by any means) of where both masks are to be found. One thing to note is that the subnet mask isn’t sent out in the IP header with the IP address. There is no need for the destination host to know what subnet the source host belongs to so no need to send it. The destination only needs to know whether the source is on its own subnet or another one so it knows whether to communicate directly or via its own next hop gateway. Again, it calculates this by doing a binary AND to compare the network part of the source and destination. If they match, they must be on the same subnet.

Right, I’ve drifted closer than I would have liked to the thing I said I’d avoid, i.e. a subnetting discussion. The key part of this post is the next topic.

Wildcard masks

I should perhaps make a feeble attempt here to defend my ignorance on Twitter, as stated at the start of this post. I am currently working towards my CCNP and at no point during my studies to date had I seen wildcard masks used as anything other than an inverse subnet mask. In fact, I’d only just heard Jeremy Cioara make a passing reference to them in one of his redistribution videos in the CBTNuggets Route series. Always new things to learn! Before I explain what I mean by inverse subnet mask, let me give my quick definition of a wildcard mask.

Matt’s definition

A wildcard mask, in conjunction with an IP address, lets you specify which bits of the IP address you are interested in and which you aren’t.

First, let’s see what a wildcard mask looks like, paired with an IP address:

139.46.221.40 0.0.0.255

What just happened there? That looks different from a subnet mask. Yes it does…because it is. Before I do some magical conversion to binary again to clarify, keep in mind that with a wildcard mask, the following rules apply:

For a binary 0, match
For a binary 1, ignore

Or put another way:

For a binary 0 in the mask, we care what the corresponding bit in the IP address is
For a binary 1 in the mask, we don’t care what the corresponding bit in the IP address is
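The two rules reduce to a single bitwise test: XOR the candidate address with the base address, then check that no differing bit lands where the mask has a 0 (a ‘match’ bit). A hedged Python sketch, using the 139.46.221.x example from later in this post:

```python
# 0 in the wildcard mask = we care (bits must match the base address);
# 1 in the wildcard mask = we don't care (bits are ignored).
import ipaddress

def wildcard_match(candidate: str, base: str, wildcard: str) -> bool:
    c = int(ipaddress.IPv4Address(candidate))
    b = int(ipaddress.IPv4Address(base))
    w = int(ipaddress.IPv4Address(wildcard))
    # XOR flags every differing bit; ~w keeps only the "care" positions.
    return (c ^ b) & ~w & 0xFFFFFFFF == 0

print(wildcard_match("139.46.221.7", "139.46.221.40", "0.0.0.255"))  # True: last octet ignored
print(wildcard_match("139.46.220.7", "139.46.221.40", "0.0.0.255"))  # False: 3rd octet differs
```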

Now read my definition again to see what the mask above might be trying to achieve. Still a bit unclear? Then let’s break it down.

  1. Let’s convert the IP address\wildcard mask pair above to binary:
    10001011.00101110.11011101.00101000 <IP address (139.46.221.40)
    00000000.00000000.00000000.11111111 <wildcard mask (0.0.0.255)
  2. Put the wildcard mask under the IP address, as above, to see how the masking takes effect
  3. Remember the basic rules above? Applied to this example, they mean that we are only interested in the first three octets of the IP address and we can ignore the last octet (0=match, 1=ignore)
  4. That means that this wildcard mask will apply to any IP addresses that have 139.46.221.x in the address, where x in the last octet could be 0-255 (because the mask doesn’t care). We are ignoring the last octet as dictated by the mask
  5. Remember before I used the term inverse subnet mask? When a wildcard mask contains a contiguous series of 0’s only, or a contiguous series of 0’s followed by a contiguous series of 1’s, it behaves exactly like an inverted subnet mask. In this example, the wildcard mask of 0.0.0.255 would match any IP address in the subnet defined by the following IP address\subnet mask pair: 139.46.221.0 255.255.255.0
  6. Before I get to the groovy part of wildcard masks, an easy to remember calculation for working out the equivalent wildcard mask (inverse mask) from a subnet mask is to subtract the subnet mask from 255.255.255.255, octet by octet, as below:
    255.255.255.255 <all 255’s
    255.255.255.0   <subtract the subnet mask
    0.0.0.255       <the result is the wildcard mask, an inverse of the subnet mask
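The octet-by-octet subtraction in step 6 is simple enough to express as a one-liner; a quick Python sketch:

```python
# Inverse (wildcard) mask = 255.255.255.255 minus the subnet mask,
# computed octet by octet as described in step 6.
def inverse_mask(subnet_mask: str) -> str:
    return ".".join(str(255 - int(octet)) for octet in subnet_mask.split("."))

print(inverse_mask("255.255.255.0"))    # 0.0.0.255
print(inverse_mask("255.255.255.192"))  # 0.0.0.63
```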

I used a mask for this example that not only falls on the octet boundary but is also all 0’s followed by all 1’s, to keep things simple. It gets more interesting when we take things further. Yes, just like a subnet mask, a wildcard mask does not need to fall on an octet boundary, but whereas a subnet mask must be a contiguous series of 1’s followed by a contiguous series of 0’s, a wildcard mask can be pretty much anything you want, and this is where the fun begins.

Time for an example. Let’s say you have multiple physical sites and you assign a subnet to each of those for management IPs i.e. source IPs that can access your networking kit throughout your company. You assign the following /24 network to each site:

10.x.10.0/24

where x represents the site number. You have three sites so you create the following config on every device:

[sourcecode language=”plain”]
ip access-list standard DeviceManagement
 permit 10.1.10.0 0.0.0.255
 permit 10.2.10.0 0.0.0.255
 permit 10.3.10.0 0.0.0.255
line vty 0 4
 access-class DeviceManagement in
[/sourcecode]

To clarify the config, we have an ACL that says the management range of IPs on sites 1-3 can telnet on to the devices configured as above. That’s all well and good, but what if we have 20 sites, 100 sites or even more? What if the number of sites is only three now but will grow by one site a week? These scenarios highlight two key problems. Firstly, with each new site, the ACL gets bigger; an extra line for each site. Secondly, you need a process to update the ACL on every device every single time a new site comes online. Even with a configuration management tool, this isn’t ideal. With the power of a well-crafted wildcard mask and, just as importantly, a carefully designed IP addressing scheme, we can instead use a one-line ACL:

[sourcecode language=”plain”]
ip access-list standard DeviceManagement
 permit 10.0.10.0 0.255.0.255
line vty 0 4
 access-class DeviceManagement in
[/sourcecode]

You should be able to see, without a conversion to binary, that the single permit statement of 10.0.10.0 with wildcard mask 0.255.0.255 is saying that as long as the source IP matches:

10.x.10.x

then permit access i.e. we don’t care about the 2nd or 4th octets, just that the 1st and 3rd octets must match ’10’. This answers both our previous key problems. The single-line ACL matches our three sites’ ranges and, as long as we use the same addressing scheme for each new site, the existing ACL will match any new site, at least up to site 255.
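As a sanity check, the one-line ACL’s match logic can be simulated in Python. The base/wildcard pair below follows from the requirement that only the 1st and 3rd octets must match ’10’:

```python
# Simulating the one-line ACL: 10.0.10.0 with wildcard 0.255.0.255
# ignores the 2nd and 4th octets, so every site's management /24 matches.
import ipaddress

BASE = int(ipaddress.IPv4Address("10.0.10.0"))
WILDCARD = int(ipaddress.IPv4Address("0.255.0.255"))

def permitted(source: str) -> bool:
    return (int(ipaddress.IPv4Address(source)) ^ BASE) & ~WILDCARD & 0xFFFFFFFF == 0

print(permitted("10.3.10.25"))   # True: site 3's management range
print(permitted("10.200.10.1"))  # True: a future site needs no ACL change
print(permitted("10.3.20.25"))   # False: 3rd octet isn't 10
```

The site numbers used above are hypothetical, chosen only to show that new sites match without touching the ACL.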

OK, I hope this isn’t making your ears bleed and, if you’ve made it this far, I have one more example that shows another cool use of wildcard masks. This example is actually the one that Marko Milivojevic (@icemarkom) slapped me with on Twitter when I gave my inverse mask answer and it’s a cracker for showing the power of the wildcard mask. Marko posed the question: how would you use a wildcard mask to select all of the odd-numbered /24 subnets of the following range?

132.41.32.0/21
  1. Let’s convert 132.41.32.0 to binary: 10000100.00101001.00100000.00000000
  2. The last three 0’s of the 3rd octet represent the subnet bits, the three bits I can use from the original /21 to create my /24 subnets. With three bits, I can create 8 subnets: 132.41.32.0/24 through 132.41.39.0/24
  3. The only bit set in the 3rd octet is the 6th (counting from the right), giving a base value of 32. It should be obvious that to create a mask that targets only the odd-numbered /24 subnets, the 1st bit (the least significant) should be fixed at a value of 1
  4. This means that, from the eight subnets in point 2, the ones that match this requirement are: 132.41.33.0, 132.41.35.0, 132.41.37.0 and 132.41.39.0
  5. So for the 3rd octet, the only pattern we care about is 00100xx1. We don’t care what the values of the two ‘x’ bits are, but the other bits must be as listed
  6. So we now know the network address: 132.41.33.0
  7. To calculate the mask, we need to ask ourselves which bits we care about and must match, and which we don’t. For the first two octets, the values must match 132 and 41 and, for a /24, the last octet must match 0. Point 5 above tells us which bits we can ignore in the 3rd octet, so using the wildcard rules I stated at the start of the wildcard mask section (0=match, 1=ignore), I can come up with the following IP address\wildcard mask pair: 132.41.33.0 0.0.6.0
  8. Putting this in binary form, with the mask underneath the IP address, should show this more clearly:
    10000100.00101001.00100001.00000000 <IP address
    00000000.00000000.00000110.00000000 <wildcard mask
  9. The mask is effectively saying ‘I don’t care what subnet bits 2 and 3 of the 3rd octet are, as long as bit 1 is 1’
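One address\mask pair that satisfies the steps above is 132.41.33.0 with wildcard mask 0.0.6.0. A short Python sketch (illustrative only) confirms it matches exactly the odd-numbered /24 subnets of the /21:

```python
# Enumerate the eight /24 subnets of 132.41.32.0/21 and test each against
# base 132.41.33.0 with wildcard 0.0.6.0 (0 = must match, 1 = ignore).
import ipaddress

base = int(ipaddress.IPv4Address("132.41.33.0"))
wild = int(ipaddress.IPv4Address("0.0.6.0"))

matched = []
for third_octet in range(32, 40):  # the eight /24 subnets of the /21
    subnet = int(ipaddress.IPv4Address(f"132.41.{third_octet}.0"))
    if (subnet ^ base) & ~wild & 0xFFFFFFFF == 0:
        matched.append(third_octet)

print(matched)  # [33, 35, 37, 39] -- only the odd-numbered subnets
```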

Sometimes, listing things in a logical order as above helps enormously; other times it just muddies the waters. Read over the post again to determine what the purpose of the wildcard mask is first, then look at the two examples above to get a feel for how they can be applied. Try looking online for further examples of powerful wildcard masks to see if they can perhaps answer a problem you have. Hopefully this post has at least given you a clear definition of a subnet mask and wildcard mask, how to calculate and use them and where you can find them. If you have any questions, feel free to leave a comment below.

The table below contains a few, non-exhaustive, examples of where subnet masks (S) and wildcard masks (W) are used on networking kit (Cisco specifically)

Type Where and description
S On a NIC, physical interface, SVI. Wherever an IP address is assigned
S On an ASA, ACLs use subnet masks rather than wildcard masks
W In IOS, ACLs use wildcard masks
W In RIP, EIGRP, OSPF, as part of the network statement
S In BGP as part of the network statement
S Most summarisation type commands e.g. area range command in OSPF
S Static routes in IOS and on ASAs

Finally, I’d like to thank Marko and Bob McCouch (@bobmccouch) for bringing me up to speed on wildcard masks beyond the inverse mask, especially Bob, who went further and gave this post a quick once-over and also provided one of the examples for me to work with. I find the help of the networking community very motivational and it’s the primary reason why I decided to start blogging myself, to hopefully give something back.

Till the next time…