Upgrading my home network: part 1


In part 1 of this article, I give a little background of how my home network has previously been set up. In part 2, I go through the plan of upgrading my home network and how that plan was implemented.


I’ve lived in my current house for six years. It’s an old farmhouse out in the country and my wife and I fell in love with it as soon as we saw it. I had asked the previous owner if she could get ADSL and she reported that she hadn’t signed up but was under the impression it was available. Upon moving in, I signed up with an ISP who thought it should be possible but, alas, upon installation of the router and filters, it wasn’t to be. I had British Telecom engineers out to my house and their Higher Level Complaints team on the phone frequently, but it just boiled down to me being too far from the exchange. This was rubbed in further by houses on either side, only half a mile away, that enjoyed ADSL Internet access. BT’s line routing policy meant I was stuck.

So I used dial-up for about a year and nearly went insane due to having to take this step back to the (first world) dark ages. Migrating 750 MS Exchange users to a new domain via Powershell over a high latency\low bandwidth dial-up VPN connection is about as much fun as sticking pins in my eyes. On top of the weekend overtime I was putting in, I was also on the on-call rota for one week in six. More often than not, I’d find myself driving round to my parents, who live a couple of miles away, to use their broadband if I thought the job would take more than 15 minutes. RDP over dial-up is appalling more often than not. When you need to RDP on to your jump box via VPN, to then RDP on to the customer’s jump box, to then RDP on to the server in question (I love customers’ requirements), a five-minute job could literally take well over an hour.


I then heard about the Scottish Government stumping up a paltry £3.5M to fund its Broadband Outreach programme, to bring broadband to those who were currently unable to receive it for whatever reason. Enter me and about 10 neighbours of mine. I jumped at the chance to be our ‘cluster’s’ front man and was soon dismayed to find out that the final solution would be two way satellite, provided by Avanti, who at the time didn’t have a great reputation. The fact that I spent every day under contract with them wishing I had an alternative should clarify whether that reputation was well founded. Quite frankly, they were appalling. The dish was almost the size of Jodrell Bank. The modem was the size of a DVD player and had two thick strands of coaxial cable running to it via a hole in the wall. On a good day, the latency was 750ms. On a bad day, it simply didn’t work. It cost about £45 a month for a 2Mb\s connection, of which, thankfully, my company stumped up the lion’s share. Something had to be done.

3G alternatives

I had tried a number of different 3G mobile dongles from T-Mobile, Orange and Vodafone and all were worse than the satellite. Even walking around outside the house with my laptop made little difference. My house has 12-18″ thick walls in places, so I was still willing to have something mounted outside with a lead coming back in. After about three years of putting up with the satellite, I decided to look at the problem again, this time reviewing the website of 3, another mobile provider and one of the first to market with 3G in the UK. Their site claimed I could get 3G at my postcode, so I decided to give their PAYG package a ‘low risk’ try out. When I got the dongle home to test, I got the same lame connection. But when I walked around the house with this one, I found some sweet spots where the latency dropped to 70ms and I could get a solid 1Mb\s up and down! This was massive progress, despite some of my colleagues from the big smoke advising me their home connections had just been upgraded to 100Mb\s. I did a good selection of testing, including my work VPN, YouTube, general browsing and a long lost art for me…online gaming using Call of Duty: Modern Warfare. All passed with flying colours and I signed up for a contract.

Internal network

It is worth pointing out at this stage that my internal network was crappy too. Those hefty walls are as good at stopping WiFi as they are 3G signals. When I got the MiFi device (a 3G modem and WiFi AP\router in one), it was best placed in the main bedroom, plugged in for a stronger, more consistent signal and draped over the latch on the window. Unfortunately, the TV room is downstairs and at the other side of the house, i.e. pretty much as far from the MiFi as possible. I bought a WiFi extender, but it simply repeated the signal on the same channel and so, whilst it made the coverage better, it didn’t seem to improve connectivity if there were a couple of devices trying to connect at the same time. Access to the MiFi was affected by packet loss due to collisions\retransmissions. I needed a more resilient solution, so I took a more holistic approach to my networking needs. In part 2 of this article, I outline the design I came up with and give some installation pictures. Hopefully, some people reading this might find some nuggets of inspiration. At the very least, I hope it puts a smug smile on some of the 100Mb\s+ brigade!

Need to go now, the electrician has just turned up to power up the loft.

Till the next time.

Exam pass: CCNA Security 640-554


In my previous life as a sysadmin, I always found the topic of security a fascinating one. Protecting all those different layers whilst maintaining usability was certainly a challenge. Back then, I earned myself an MCSE 2003 and opted to specialise on the security track. This meant doing an extra exam and I decided to go for the external CompTIA Security+ to give myself a different perspective.

When I began the migration to becoming a network engineer, I was already working on PIX and ASA platforms for basic tasks such as ACLs. I quickly realised that continuing my security based knowledge quest made perfect sense and so always had the CCNP Security certification on my roadmap once I had the routing and switching covered. The fact that about 90% of my day-to-day work involves working on ASAs makes this a no brainer.


The CCNA Security is a prerequisite for the CCNP Security and it made sense to get that one done first. I used the same three methods for learning that I have used for almost all of my IT career exams:

  1. Book
  2. Videos
  3. Labs

The book I opted for was the Cisco Press Official Cert Guide for the 640-554 exam, co-authored by Keith Barker and Scott Morris. I found almost every one of the 22 chapters a breeze to read through thanks to the easy writing style and well laid out topics. At about 600 pages, it was finished much quicker than I had initially anticipated. In addition to the book, I would also visit Cisco’s site to review their documentation on the various topics being covered and download various PDFs for review.

For the videos, I used the CBTNuggets video series by Jeremy Cioara. Unfortunately, the latest exam videos are not available yet and so I had to watch the 640-553 series but this is an otherwise very good series. For those not familiar with Jeremy’s training, I heartily recommend you try him out. He is a proper geek that ‘totally’ digs what he does.

The most important part of learning for me, whether it is for an exam or just learning a new feature or technology, has always been the hands-on labbing. This is where the rubber meets the road and I quite often learn things outside the scope of both the books and the videos, which lends itself to a far more rounded understanding.

Turning up early for the exam

The exam itself was an interesting experience. I initially turned up very early without realising it. I gave the woman in the test centre my name and she advised me that she didn’t have me listed for an exam. I got my phone out to check the confirmation email and immediately spotted that I was exactly one week early for my exam. Plonker! I pleaded with her to find me another slot but she said that all workstations were booked for the day. Funny looking back at it now; not amusing at all on the day. I could not be bothered waiting another seven days. I have a rough schedule for achieving my CCNP Security and I didn’t want to lose a week, so I rescheduled for the Friday, the earliest slot I could get. I had done the practice questions that came with the book, each exam being 60 questions, and I’ll just say I was a little surprised when I loaded up the real exam. In the four days between the Monday and Friday, I had started on the Cisco Press exam guide for the SECURE exam and was thankful, but a little surprised, when a topic covered in that book appeared in this exam.

My overall experience of the CCNA Security has been very positive. It covers a fair amount of material, although perhaps not in too much depth (this is where the CCNP Security comes in). Some of it will be revision for those of you who are CCNA certified, but there are also a lot of new topics covered, e.g. the zone based firewall and IPS. Let’s also not forget that, with the latest version of the exam, the SDM has been banished in favour of Cisco Configuration Professional (CCP). This is an improvement for sure, but I still think it’s way behind where it should be, even for a free management GUI.

I now have four professional-level exams to begin studying for to attain the CCNP Security. My next goal is the SECURE exam (642-637) and I’ll be applying the same three-step process as above, except I’ll be using INE video training in addition to CBTNuggets and doing far more hands-on labbing.


As I stated at the beginning of this post, I’ve always been interested in the topic of security. It’s so much more than just the glorified image of a hacker sat in a darkened room trying to break into a top-secret system, or the endless tales of social engineers using their unique skills to get the information they want. The day-to-day tasks of creating site to site VPNs, amending ACLs, creating class maps and tying them in with policy maps, configuring remote access VPN policies; all of these feel like pieces of a big puzzle and it’s my job to solve them. I find it both challenging and rewarding, beyond the satisfaction of working on networking kit in general.

I’m already looking down the road of my career to decide if I want to specialise in security or keep my skill set a little broader. Time will tell. I am just going to enjoy the CCNP Security journey as it happens for now and soak up as much knowledge as I can.

Till the next time.

Cisco ASA Identity Firewall


When the CTO approached me asking how access to a subnet was restricted, I advised him that the people who needed access were given a DHCP reservation and an ACL on a Cisco ASA limited those IP addresses to certain destination hosts on certain ports. It wasn’t scalable and it wasn’t particularly secure. For example, if one of those users wanted to log on to a different machine, they would get a different IP address and access would break. If their computer NIC got sick and needed replacing, same thing. Worse still, anybody could log on to their machine and get access to the same resources, or give themselves a static IP when the other user had their computer turned off, and nobody would be any the wiser.

I had been looking at a number of methods to address this, such as 802.1x authentication\application proxy and during our discussion, we both came to the conclusion that it would be pretty cool if we could restrict access via usernames, especially if this could remain transparent to those users. This would give us the flexibility and transparency we were looking for whilst maintaining the required level of security. Isn’t that the perfect balance that we strive for in IT security?


Enter the Identity Firewall feature on the Cisco ASA platform. This is a new feature available from software version 8.4(2). The Identity Firewall integrates with Active Directory using an external (to the ASA) agent. The Cisco.com website has a host of documentation on the feature which you should follow to get it up and running but below is a summary of how it works, some things to be aware of and my thoughts on the feature.

How it works

First of all, this feature has three main components: the ASA itself, Active Directory and an agent which sits on one of your domain machines and feeds information back to the ASA. There are a number of prerequisites you need to make sure are in place and rules you must follow before rolling this out, e.g.:

  • AD must be installed on certain OS versions, as must the server the agent sits on
  • you can have multiple agents for redundancy, but only one can be installed on a domain controller
  • you must configure the correct level of domain-wide security auditing
  • host firewalls must be opened accordingly

The configuration guide does a pretty good job of listing all of these and I would heartily recommend you check it out if deploying the feature.

There is also the relatively small matter of creating an account in AD that the ASA will use to communicate with AD. You also need to create some config on the ASA itself made up of a mix of AAA and specific ‘user-identity’ commands. The agent is configured via the command prompt (or Powershell prompt as I tend to always use since I fell in love with it deploying Exchange 2007). You set up which AD servers it should be monitoring (you want to point it to all DCs that will be logging account logon\logoff events) and then which clients (ASAs) the agents will be reporting to. You can also configure Syslog servers for better accounting.
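To make this more concrete, the ASA side of the setup can be sketched roughly as below. The server names, IP addresses and domain name are all hypothetical, so check the syntax in the configuration guide against your own software version:

! LDAP server group the ASA uses to query AD group membership directly
aaa-server AD_LDAP protocol ldap
aaa-server AD_LDAP (inside) host 10.0.0.10
 ldap-base-dn DC=example,DC=local
 ldap-login-dn CN=asa-svc,CN=Users,DC=example,DC=local
 ldap-login-password Secret123
 server-type microsoft
! RADIUS server group pointing at the AD agent
aaa-server ADAGENT protocol radius
 ad-agent-mode
aaa-server ADAGENT (inside) host 10.0.0.11
 key Secret456
! Tie the domain, the LDAP group and the agent together, then switch the feature on
user-identity domain EXAMPLE aaa-server AD_LDAP
user-identity default-domain EXAMPLE
user-identity ad-agent aaa-server ADAGENT
user-identity enable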

In a nutshell, the agent checks the security logs of the DCs to see who is logging on and with what IP address. It adds this information to a local cache and sends it to the configured ASA clients using RADIUS. The ASA also talks directly to AD using LDAP to get AD group membership. The final piece of the puzzle is enabling the Identity Firewall so that all of these components start talking. At this stage, there are various commands you can run via the agent or on the ASA to confirm the communication is working as expected.

It is then a matter of creating ACLs that utilise users and\or groups and giving them access to resources. You can also combine this with source IP which means you can say Fred can access Server X when he is in Subnet Y but if he moves, he loses access. If Janet logs on to Fred’s original machine, she could have the same IP but won’t be given the same access due to her username not being in the ACL.
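To give a flavour of the syntax, identity-based ACL entries look something like the below; the domain, username, group and addresses here are invented for illustration:

! Fred can reach the server, but only from Subnet Y
access-list INSIDE_IN extended permit tcp user EXAMPLE\fred 10.2.2.0 255.255.255.0 host 10.1.1.50 eq 3389
! Anybody in the DB-Admins AD group can reach the SQL server from anywhere
access-list INSIDE_IN extended permit tcp user-group EXAMPLE\\DB-Admins any host 10.1.1.60 eq 1433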

I found setting it all up very easy. However, circumstances led to it being some time before all interested parties were happy with the outcome, e.g. the firewall being used had recently been upgraded from 8.0 code to 8.4, and a load of migrated NAT rules started playing up, muddying the waters. The positive side to this was that the firewall config was streamlined considerably. There was also the morning spent troubleshooting a single user’s issue that ended up being down to him not being in the correct AD group as assumed. That said, I think it’s a pretty robust feature for what is effectively a first iteration.


I’ve tried to break it a number of ways e.g. pull an authorised user’s network cable out of their machine, give myself their static IP, but without being logged on as them, my username\IP pairing doesn’t match what the firewall thinks. There are a number of timers involved with this feature that might trip you up if you don’t understand them e.g. the firewall can only check group membership every hour at best, meaning if you remove a user from a group, they could still retain access. Worse still, if you disable their account in AD but keep them in the group, they will still have access until their Kerberos ticket expires. You can manually update group membership on the ASA if this would be an issue for you. I would guess that if it is, the person has been marched out of the building at that point anyway.
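If the hourly refresh would be an issue, the membership check can be forced by hand, or the poll timer set explicitly; something along these lines should do it (the group name is hypothetical):

! Force an immediate re-import of an AD group used in your ACLs
firewall# user-identity update import-user EXAMPLE\\DB-Admins
! The group membership poll interval can also be set explicitly (in hours)
firewall(config)# user-identity poll-import-user-group-timer hours 1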

VPN logon vs AD logon

Another thing to be aware of is remote VPN access. When you VPN on remotely, you get authenticated to the firewall. This could be via local ASA accounts, RADIUS, TACACS+, ACS, LDAP etc. If you use AAA LDAP authentication (against Active Directory in this case), you are not logging on to the domain as you VPN in; you are simply saying ‘here are my AD credentials, please authenticate me on the firewall’. At that point, one of two things happens with the Identity Firewall. If you are using a domain computer to remote on, that machine will automatically try to make contact with a DC. When it finds one (over the VPN), it will log on to the domain, creating a security log entry, and the AD agent will let the ASA know. Any rules assigned to that user that don’t filter on source IP will now come into effect. However, if the machine is not joined to the domain, there will be no logon event (the username\password given at connection was only for VPN authentication), and so any user-identity ACLs will not apply.

In the latter case, you may want to create a specific VPN connection profile with specific DHCP pool for example and go back to restricting by IP, only allowing users in a certain AD group for example to be able to connect to that profile. Not ideal, but it would work.
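As a rough sketch of that workaround (all names and addresses are invented), you could pin users of a restricted connection profile to a known address pool and then fall back to IP-based rules for that range:

! Dedicated address pool for the restricted profile
ip local pool VPN_RESTRICTED 10.99.0.1-10.99.0.50 mask 255.255.255.0
group-policy RESTRICTED_GP internal
group-policy RESTRICTED_GP attributes
 address-pools value VPN_RESTRICTED
tunnel-group RESTRICTED_VPN type remote-access
tunnel-group RESTRICTED_VPN general-attributes
 default-group-policy RESTRICTED_GP
! Back to plain IP-based filtering for anything in that pool
access-list OUTSIDE_IN extended permit tcp 10.99.0.0 255.255.255.192 host 10.1.1.50 eq 443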


In summary, I really like this feature. It answers a number of questions specific to the environment I have implemented it in. Things I particularly like are its ease of setup, including the troubleshooting commands to aid with this. Learning the syntax of the ACLs is a simple task, as they extend upon existing IP-based ACLs. I also like the fact that there are several ‘under the hood’ settings you can tweak to improve security further still, e.g. remove users from the cache if their machine has been idle for a certain amount of time, or force LDAP lookups over SSL.

Something that I wasn’t so keen on was the split communication i.e. the ASA getting group membership directly from AD, IP mapping from the agent. I would like to see the agent providing the group membership too. It should also not be too hard to tell the agent which groups to keep a closer eye on (the ones used in ACLs sounds like a good start). This would allow the ACL to be updated much quicker should a group be amended.

I would love to hear what others’ experience of this feature has been. Overall, I am very pleased with it and look forward to upcoming improvements.

Edit: additional points

A few other points I should highlight. First, if you move to a different subnet, the new IP address will be picked up as long as your device authenticates with the domain. I tested this early on by disconnecting my laptop from the wired network with a persistent ping happening in the background. I then VPN’d back to the office via the WiFi network and, within a couple of seconds of the connection being made, the ping re-established.

The second point will hopefully save somebody out there a headache: a user testing this functionality was reporting it working as expected until he would open a spreadsheet that made a data connection to a SQL database in a different subnet. The user had access to this SQL database as allowed by the Identity Firewall ACLs but, as soon as the data was refreshed in the spreadsheet, he would get kicked. I could see it happening using the ASA and AD agent troubleshooting commands too. After a little bit of head scratching, we realised that the spreadsheet data connection was using a different set of domain credentials to connect to the database. This triggered a logon event for the user’s IP address but with the user account for the data connection. In turn, the AD Agent updated the cache with the new user and IP combination and the traffic was denied based on the ACL. One answer to this would be to add the data connection account to the ACL but, in the end, we just connected to the database in the spreadsheet using the user’s own credentials. This highlights the fact that, whilst a user can be affiliated with multiple IP addresses, only one user can be affiliated with any one IP address, which is the key to making this feature secure.

One last thing which is perhaps the most important, considering this is a discussion about a security feature. If your users connect via a shared IP address, you should be aware. For example, if your users log on to a Terminal Server, they will all be coming from the same IP address by default. That means that if a user with elevated permissions logs on to the TS, all other users will effectively gain the same access as permitted via the Identity Firewall. My advice would be to deny access via a shared IP address to any resources that need restricted access.
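One way of implementing that advice is an explicit deny for the shared IP address, placed above any identity-based permits in the ACL. The addresses and object group below are invented for illustration:

! The Terminal Server's shared IP never reaches the sensitive hosts,
! regardless of which user-identity rules would otherwise match
object-group network RESTRICTED_SERVERS
 network-object host 10.1.1.50
 network-object host 10.1.1.60
access-list INSIDE_IN extended deny ip host 10.2.2.100 object-group RESTRICTED_SERVERS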

Till the next time.

Trouble Ticket #2: Missing hop in Traceroute output


Welcome to my 2nd trouble ticket post. It has come a bit later than I would have liked, but I have had all sorts of other fun to deal with, and let’s not forget the two weeks of the Olympics when I hung up my study cap completely.

To recap, in this category of post, I cover an issue I have come across that stands out for one reason or another. In Trouble Ticket #1, I covered a problem that put Cisco TAC on the back foot. In this ticket, I discuss an initially strange-looking problem which did get resolved but not before I got led on a wild goose chase.


The problem reared its head when I was trying to determine why, from a management station, the first /26 of a /24 appeared to be reachable yet the second /26 did not. What stirred things up was when I did a traceroute for an IP that was known to be available and a hop that I thought should be in the path was missing. The diagram below is a simplified topology.


How things initially looked


When I did a traceroute from MGM1 to a host in the problem half of the subnet, I only got as far as the default gateway, and it died. A traceroute to a known-reachable host got there, but with the following hops:

  1. The default gateway (expected)
  2. The layer 3 switch that routes to the subnet (again expected)
  3. The destination host itself (expected)

What I didn’t see, and was expecting to see between steps 1 and 2, was the ASA firewall. There was no ‘Request timed out’ message; the hop was missing entirely. For the curious, the first hop is the default gateway for all management hosts, whilst the second hop holds the routes for all the infrastructure it needs access to, hence the two hops within the same subnet.

Troubleshooting steps

I started to check the path being taken and as both traceroutes showed a successful reply from the GW, I ruled out any shenanigans on MGM1, but double checked the hosts file and ARP cache for anything untoward. Next, I checked out the GW and this is where the red herring reared its ugly head. Despite the routing table telling me that the next hop for both destination subnets was the firewall, I could see from the port descriptions (not always to be trusted!) and CDP output (more reliable!) that there was a switch directly connected that led through to the same subnets. I traced the path using CDP and found a 2nd layer 3 path but this one wasn’t firewalled. So here is the updated diagram:

The new layout after investigation

The strange thing was, it was the same number of hops either way so there was still something missing regardless of which path was being taken. I decided to err on the side of caution and went back to the GW, checking things like ARP caches and CEF adjacencies. Everything looked as it should.

So off to the Cisco Support Forums I went and, after several attempts to craft the correct search, I came across the answer I had been looking for. It turns out that there was something the firewall was not doing, and that was what was hiding it from the traceroute output. In brief, a traceroute works by sending a packet to the destination IP with a TTL of 1 (Windows tracert uses ICMP echo requests; Unix traceroute traditionally uses UDP, but the principle is the same). When the next hop (in the first case, the default gateway) receives this packet, it decrements the TTL to 0 and sends a ‘TTL expired in transit’ message back to the source. The source then sends another packet to the destination with a TTL value of 2. The GW forwards this on to its own next hop for the destination, after decrementing the TTL to 1. As this packet hits that next hop, the same ‘TTL expired in transit’ message is sent back to the source. This continues until the destination hopefully responds. The traceroute command can therefore display each hop by using the source IP addresses of all the devices that send back those ICMP messages.

The root cause of my issue was that the ASA was not decrementing the TTL. Therefore, the packet was forwarded on from the firewall to the next hop with a TTL of 1, where it was replied to with the standard ‘TTL expired in transit’ message. In this way, the traceroute would still complete, but the ASA would be hidden from the output. OK, I can see how that might be useful from a security point of view, but it makes troubleshooting a real pain in the backside, so let’s look at how to make the ASA decrement the TTL, from configuration mode:

firewall(config)#class-map ICMP_TTL
firewall(config-cmap)#match any
! This creates a new class map to match any traffic
firewall(config-cmap)#exit
firewall(config)#policy-map global_policy
! This policy-map should already exist
firewall(config-pmap)#class ICMP_TTL
! This adds our new class-map to this policy
firewall(config-pmap-c)#set connection decrement-ttl
! This is the key command. It decrements the TTL on all traffic passing through the firewall
firewall(config-pmap-c)#exit
firewall(config-pmap)#exit
! Exit out to configuration mode
firewall(config)#service-policy global_policy global
! This makes the policy active (if global_policy was already applied globally, this step is already in place)

The traceroute output was now as I was expecting:


Tracing route to over a maximum of 30 hops

1    <1 ms    <1 ms    <1 ms

2    <1 ms    <1 ms    <1 ms

3     2 ms    <1 ms    <1 ms

4    <1 ms    <1 ms    <1 ms


From the contextual help on the ASA’s CLI, it appears that this behaviour is applied to all IP-based traffic, not just ICMP. It should be noted that the config above only applies to ASA versions 8.0(3) and later. It should also be noted that the initial issue I was seeing that got me to this point, i.e. part of a subnet responding and part of it not, was down to the fact that this subnet had previously been addressed differently. When the subnet was enlarged to a /24, all devices were readdressed correctly and the security ACLs on the firewall were updated, but there was a NONAT ACL that was still configured for the previous /26 subnet. I updated that too and return traffic, now matching the NONAT ACL, was not NAT’d and was returned as expected.
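For anyone who hasn’t run into NAT exemption on pre-8.3 code, the offending config would have looked something like the below (subnets invented for illustration); the fix was simply to replace the stale /26 entry in the NONAT ACL with the readdressed /24:

! Traffic matching the NONAT ACL bypasses NAT on the way out
! The ACL still referenced the old /26...
access-list NONAT extended permit ip 10.10.20.0 255.255.255.0 10.10.30.0 255.255.255.192
! ...whereas it needed to cover the whole readdressed /24
access-list NONAT extended permit ip 10.10.20.0 255.255.255.0 10.10.30.0 255.255.255.0
nat (inside) 0 access-list NONAT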

Now to just remove that non-firewalled path…

Till the next time.

Trouble Ticket #1: High fibre diet


This is the first of what I am calling a Trouble Ticket post, a review of an issue I’ve run into which I believe warrants sharing with others. This could be because I came across something I’ve not seen before, that I ended up pulling my hair out over or perhaps it simply involved something that was particularly interesting to me. I aim to lay down as much information as I can within the confines of confidentiality, where applicable, and explain my troubleshooting process. So let’s get started with the first ticket.


A customer’s satellite site had a 1Gb\s fibre link laid to the main data centre by a 3rd party supplier. The diagram below shows all the key details of the issue, but some more background may help. R1 has an RJ45 connection to the LAN behind it, as does SW1, and both connect to the WAN with LC SFPs. The port on SW1 is layer 2, with the layer 3 endpoint being an ASA behind it.


Network diagram


When R1 was rebooted, its fibre port would initialise but would show as down\down.

Troubleshooting steps

It soon became apparent that this wasn’t going to be a slam dunk fix. Doing a shutdown\no shutdown didn’t help. Removing the LC connector from R1’s SFP and reinserting it brought the link back up, and removing one of the ST connectors from the patch panel and reinserting it also brought the link back up. At this point the link would remain up and stable…until the next reboot. It initially looked like a layer 1 issue.

R1 was returned to my head office where I powered it up with a 2nd SFP in a 2nd port, with a loopback fibre. No matter how hard I tried, I could not replicate the issue i.e. both ports came up as expected after every reboot. This led me to believe that the issue was probably with the link itself, somewhere from R1’s SFP to SW1’s SFP. I took R1 back to the remote site myself with a spare patch lead and another SFP and swapped these out but the issue still persisted.

I then got on to the WAN provider and requested that they test the link. They did an end to end test and reported that they could see no issue, but when I presented the results of my loopback test, they agreed to move on to a different fibre pair from the site’s patch panel to their POP site and down to the data centre. This still left the patch panel from their kit in the data centre to SW1, but was otherwise a complete replacement. Again, the issue remained.

I then went back to site and replaced R1 with a temporary Cisco 2960G (which I shall call SW2) for testing. For the test, I simply removed the SFP from R1 with the fibre intact and plugged it in to SW2. Each time I rebooted, the link would come up as expected. This test by itself strongly suggested that the link was actually OK and that the issue lay with R1, a Cisco 3925. At this point, I should point out that I did all testing both with the required config on the relevant ports and with default config on there too. This seemed to leave the following causes for the issue:

  • Hardware issue with 3925
  • IOS bug
  • Incompatibility at layer 1 between hardware and WAN provider

Regardless of which of these was the issue, I decided to get our hardware support provider involved and, when they tried to get me to do all the things I’d already done, I asked them to provide a replacement router, which arrived within hours. The original router had IOS 15.1(2)T2 on it for some reason and, when the replacement came, I made the mistake of slapping on the latest version of the same T train, 15.2(2)T.

My next stop was Cisco themselves, so I asked for the ticket to be escalated to Cisco TAC. After several days of chasing, I was asked to downgrade to 15.1(4)M3, a known stable release. This was done within minutes of being requested, and yet the problem still persisted; Cisco were telling me that this was the only fix they could think of at that time. Reaching out to my online network drew no help either. Knowing that this was stumping everybody who looked at it did not make it any more palatable.

Eventually, I got direct access to TAC, which is another story in itself, and we set up a Webex session so they could see the issue ‘live’. They ran all the standard commands to get a report they could spend some time looking through but were still unable to see what the root cause was several days later. Prior to the testing, they had even sent out another two SFPs that they claimed were compatible with my setup, just to play it safe.

At this point, I was aware that the customer was keen to get the fault resolved ASAP so they could use the circuit with confidence so I presented three options:

  1. Get the provider to terminate their line at the customer site on active kit
  2. Get a different platform entirely, perhaps a 3945 or an ASR
  3. Terminate the fibre on an intermediate switch

Option 1 came in at a cost that seemed deliberately prohibitive i.e. the provider just didn’t want to do it so added a zero to the cost. Option 2 looked the most likely but the hardware support provider were playing hardball regarding who should front the cost of the more expensive model. At this point, I told the customer to try option 3, but only as a short-term fix whilst I would chase option 2 to its conclusion. I also took this opportunity to streamline the config on the link, which had originally been set up as a trunk with allowed VLANs that had no place being there.

It was at this point that I was told of the underlying purpose of the link, and it turned out that a 3925 would never have met the design requirements regardless. The colleague who came up with the kit list had moved on to pastures new, but his replacement pointed out that a 3560X switch would not only be much cheaper but would also exceed the requirements. The switch landed on my desk; I configured it, installed it and confirmed the issue did not replicate on this platform. The customer soak tested the link and was more than happy with the performance. Job done.


In summary, sometimes you can’t make things right from the CLI or by wiggling cables. If you adhere to an IT life-cycle model, whether it’s ITIL, Microsoft Operations Framework or Cisco’s PPDIOO, you avoid many of these kinds of issues in the first place, because you put your kit list together based on business requirements\strategy and not just on what has worked before in another scenario, what tech looks cool this week or a gut feeling. I am a strong proponent of a proper planning and design phase, but all too often these are seen as time-drains when networks could be getting built and used. The loss of productivity that often follows in the operational phase, chasing your tail due to poor planning and design, can inevitably cost much more, not just in money but in staff morale and customer goodwill.

I still believe strongly that this is a hardware issue with the 3925. Has anybody seen anything similar, or do you have other stories of faults you were ultimately unable to resolve to your complete satisfaction?

Till the next time.

Half year review


In one of my earlier posts, I set out my certification goals for 2012:


I gave myself four targets for the year, which I still think are more than achievable, so let’s take a look at my half-year review. I am glad to say that I have ticked numbers 1 and 2 off the list to gain my CCNP at the halfway mark and still have numbers 3 and 4 well within my sights. I think design skills are important to have, even if designing networks is only a small part of your job. If you know how networks should be put together, you are better qualified to point out where improvements can be made in existing networks. This allows you to go beyond simply fixing issues as they occur and approach troubleshooting as a continual improvement process. I am currently in the second week of a two-week holiday but fully intend to start my CCDA studies before month end.

Regarding my other CCNA speciality, I have decided to go for the CCNA Security as the first step in my plan to gain the highly job-relevant CCNP Security at some point in 2013\14. I am also going to go back to basics in terms of routing and switching knowledge and start building up some study tools so that once I feel I am ready for the CCIE R&S, I will already have built up some momentum. By this time next year, I hope to have a fairly comprehensive flash card library and some expansive mind maps.


The reason for this brief post is mainly my own motivation. Keeping track of your goals, changing them where required, adding new ones and ticking them off the list as you achieve them is a good way of staying motivated and keeping the momentum up. Without a review from time to time, you can find yourself drifting from your original goal with no real idea of what it is you want. I tend to review my goals much more frequently than every six months, usually every few weeks or whenever something throws a spanner in the works.

Till the next time.

Exam pass: TSHOOT 642-832


When I passed the ROUTE exam in April, I only had the TSHOOT exam left to get the CCNP I had set my sights on achieving by the end of October 2012. This date had been set in my 2011 appraisal, but I was planning on taking TSHOOT in the first week of June, just before a well-earned two-week holiday. However, when I realised that the 2012 appraisal had to be done by the end of May and my line manager was returning from his honeymoon in the middle of May, an ego switch in my head flicked on and I thought it would be a good idea to walk into the appraisal with the CCNP objective ticked off the list.


With that in mind, I did something which, whilst I don’t regret it now, at the time caused a bruising of my pride. I booked the exam four weeks earlier than originally scheduled, and failed it: the first IT exam I have ever failed, and there have been a few over the years. Looking up and seeing the score, 780 against a pass mark of 790, was a real kick in the stomach. It took a couple of days to start being objective about it, but it helped that I got a lot of support from peers who had gone through the same pain and knew it was just a matter of time before I bounced back.

Time management

After all, the problem had been that I ran out of time rather than not understanding the subject matter. The last trouble ticket went completely unanswered and the two before it were rushed through in the final minutes. I had fallen foul of appalling exam time management, which came down to two factors. Firstly, I had stupidly miscalculated how much time I had for each question, a simple maths failure. Getting this wrong by just five minutes per ticket was enough to misjudge by over an hour! Secondly, and most importantly, I hadn’t learned the topology nearly well enough, which was unforgivable considering Cisco make it freely available on their website. I also made the mistake of drawing the diagram out on the whiteboard, not from memory but from the on-screen topology, as the clock was ticking, which wasted valuable minutes.

I had planned on booking it for the following week but when I got struck down by a bug that any psychotic maniac hell-bent on taking over the world would have killed his grandmother to get a sample of, I was unable to stay more than 20 seconds from the nearest bathroom. The exam would have to wait. At the back of my mind, I questioned whether I should wait until the original June date to resit, but I was 100% confident that my first-time fail was down to nothing more than poor timekeeping, so I booked it for 13 days after the first attempt.


I studied the topology diagram in more detail this time and, as a hint to those thinking of taking the exam, you would do well to notice the following things on the diagram (just to be clear, this is highlighting what Cisco make publicly available and is not giving anything away about the exam that may breach NDA):

  • IP addressing scheme
  • EIGRP coverage
  • OSPF coverage
  • BGP AS numbers and peer addresses
  • GRE tunnel on IPv6 diagram between R3 and R4
  • NAT on R1
  • DHCP on R4
  • DSW switches are layer 3, implying the use of a DHCP helper address for client requests
  • Etherchannel between ASWs and DSWs
  • VLANs for clients and FTP servers

Make sure you brush up on the topics above in particular and remember the topology by heart. Each night in the week before the exam, I would draw the topology from memory and compare it to the original. On the day of the exam, I was able to complete 95% of the diagram before I had even started and filled in the last missing details in seconds. Overall, I think I saved myself at least 10 minutes doing this. Whereas I used the full 2h15m on the first attempt (and it still wasn’t enough), I completed the second exam in under an hour, with 1h15m remaining. Of course, the value of the first attempt was that I was now ‘hands-on’ familiar with the infrastructure and already prepared for a number of its quirks. You should also try out the demo TSHOOT trouble tickets on the Cisco website. Although it’s not exactly the same topology, perhaps the biggest difference being the IP addressing scheme, it will give you an idea of how the trouble ticket questions are presented and help you test out your troubleshooting techniques.
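To expand on the DSW point in the list above: when the distribution switches route for the client VLANs, client DHCP broadcasts have to be relayed to the DHCP server as unicast. A minimal sketch of how that is typically configured on an IOS layer 3 switch follows; the VLAN and addresses are invented for illustration and are not taken from the exam topology:

```
! Hypothetical SVI on a layer 3 distribution switch, values invented
interface Vlan10
 description Client VLAN
 ip address 10.1.10.1 255.255.255.0
 ! Forward client DHCP broadcasts to the DHCP server as unicast
 ip helper-address 10.1.100.5
```

Without the helper address, clients in that VLAN would never reach a DHCP server sitting in another subnet, which is exactly the sort of detail worth having fresh in your mind.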


This time I looked up to see a much more respectable score of 945/1000. More important than the score was the fact that I was now CCNP certified, and it felt great. This is but one step on a journey that probably only ends when I retire, but it feels like a great achievement and will no doubt drive me on further.

Till the next time.

I own this network


Before I go any further, this is my first post using Blogsy on the iPad so here’s hoping it publishes as expected.

I returned to work last week after a lovely beach holiday with the family, chilling out and catching some rays. I went into the office on Monday with the full intention of putting a couple of ongoing issues to bed and taking note of the outstanding tasks that needed addressing so that I could formulate a plan of attack.


Let me give you a little background to put things in perspective. I work for an ISP in the UK. As a network engineer, I see my domain broken down into four main areas: our core network; our access customers such as dial-up, ADSL\SDSL, leased lines, MPLS etc.; our hosted customers that reside in our different data centres; and the management piece that encompasses all of these, which includes logging, performance monitoring, security, configuration management, auditing, documentation and all that good stuff. Some of these things are missing, some are in place, but all need attention to some degree at some location under my responsibility. I started at the company 4.5 years ago as a Microsoft engineer and those who have read my previous posts will know that I got my CCNA three years ago and made the fully fledged move to networking in November 2011, so I am a relative newbie, despite hopefully being only a few weeks away from gaining a hard-earned CCNP.

So back to my original tale of good intentions and their rapid evaporation. Thursday morning came around and I found that I had become increasingly annoyed over the week. Although most likely not true, it seemed that every time I logged on to a device to either make a change or troubleshoot an issue, I was finding trails of legacy config, standard practice being laughed at and very little in the way of an explanation of why certain quirks had been put in place. I’ve seen several network engineers come and go during my time with my current employer, with various skills and capabilities. The problem is that they have all left now. The current team includes myself, who only really ‘looked behind the curtain’ at the end of last year, and two others who have joined even more recently, so I found myself from time to time playing the well-known game of ‘blame the guy who has moved on’. Fair? In this case, absolutely. Helpful? Not one bit, but I do believe that anybody putting a network infrastructure in place should leave enough documentation behind for another capable engineer to pick up and not only understand how that network is supposed to work but also why certain design decisions were made. I don’t think that’s a lot to ask for as a minimum. So as I drove home on Thursday evening, I found myself quietly seething about having to pick up the crap left behind by others. This is nothing new in the world of IT though, or indeed in many other fields.

Lightbulb moment

On Friday morning, I decided that I wanted to walk our core network to get a better grasp of how it ticks. I printed off a Visio diagram (as a side note and final moan, a diagram that I created six months ago as the existing one at that time was a mess and largely incorrect). My plan was to start at the edge where our little part of the Internet joins the big boys, work my way to the core and back up to our other edge (we are multihomed) one device at a time. This could take days or even weeks to do, depending on what depth I wanted to go to.

Only five minutes into my first router and it hit me like a thunderbolt. I wasn’t shrugging my shoulders at the configuration that lay before me. This was now my network and it was mine to improve, tweak, fix and care for. I suddenly saw what was previously a daunting task as an amazing opportunity to improve my own knowledge, understanding and confidence, as well as the network itself.

Slightly giddy, I opened an Excel spreadsheet and created a new tab for each core device and a general one to cover things not specific to any one device. I took a dump of the router’s config and started going through it line by line. Anything that looked wrong or didn’t make sense didn’t get me mad. It just got noted. Anything relevant to the Visio that wasn’t already on it got pencilled in for later. I had other tasks to do that day so only managed the one device but it felt very satisfying.


I now had a much clearer vision and a drive to see it through to completion. It all changed with a flick of a switch in my head labelled ‘attitude’. The same problems exist but now I own them, even embrace them and that means this is all going to actually be fun (my twisted sense of fun anyway). This could be the greatest training programme I ever go on…and I get paid for it!

One final thing to say. The title of this post may be slightly misleading to some. Don’t get caught up in the illusion that you really do own a network that you’ve invested time in, unless of course it’s your home network. What I’m saying is, don’t become that guy that keeps details to himself in order to give himself that false sense of job security. You are a facilitator. Share the knowledge so that others can add value too.

Till the next time.

How to prepare for a Cisco exam


Having just passed my 642-902 ROUTE exam, I thought I would write a post explaining how I set out to walk out with a smile on my face rather than egg on it. I’m not going to discuss the details of the exam itself for obvious reasons, but I thought I would blog about the training path I took and some general points on exam taking. As I often get asked how to prepare for a Cisco exam, this post will hopefully be useful for a wide audience.

For those that haven’t read my first couple of posts (and why is that??), I passed my CCNA via the ICND1 and ICND2 route back in early 2009. At that time I was a Microsoft systems engineer but saw the light, and when I had the chance to become a networking engineer last year, I sat the CCNA exam to renew the certification. I moved into the new role officially in November 2011 but had already begun studying towards the 642-813 Switch exam, which I passed on November 25th. It’s worth noting that, as far as I was concerned, I only scraped through this exam, and I put that down to my preparation, which was not as complete as it should have been.


I used the CBTNuggets video series but, after the CCNA series by Jeremy Cioara which was simply excellent, I found the Switch series to be a disappointment; it included many references to the old BCMSN exam, which told me the content wasn’t bang up to date. OK, fair enough, the topics might not have changed a whole lot, but if you are going to resell something as an upgrade, please don’t just stick a different badge on it! I ended up losing interest and watched the INE video series instead.


I also used the official certification guide from Cisco Press but here lay another issue, this time with myself. As part of the move to networking, I felt a certain pressure to get up to speed as quickly as possible. This wasn’t a real pressure, it was something I imagined, but it meant that instead of reading the book from cover to cover as I should have done, I skimmed some chapters and skipped a couple of topics. This is exactly why my score was not up to my usual self-imposed standards. It was also what made me determined to put time pressures to one side and make sure that I understood all the material before going into the next exam.

For the 642-902 exam, I basically used the materials\methods below, and I’ll briefly go into a little more detail on how I blended these together to give myself the best chance of passing the exam:

  1. Cisco Press exam guide book
  2. CBTNuggets video series
  3. Cisco Live
  4. Labs
  5. INE R&S workbooks
  6. INE video series
  7. Work experience
  8. Boson exams

Firstly, I broke the book down into six sections: EIGRP, OSPF, BGP, Redistribution, IPv6 and WAN\Branch offices. Straight away, it ceased to be a 700-page book and became six individual topics that weren’t so daunting anymore. I gave myself deadlines to read each topic and made sure I hit them by increasing the page count per day if I skipped any days, which I made sure was a rare event. I read them pretty much in the order above, except for BGP, which I left until last.

As I was covering each topic in the book, I would watch the corresponding CBTNuggets videos. The Route series is a vast improvement over the Switch videos. Jeremy uses GNS3 labs to cover the topics and the topology files he uses are available to subscribers on their website so you can ‘play along’ with Jezzer.

Filling in the gaps

I was lucky enough to get along to Cisco Live in London this year and found it to be very inspirational. The technical sessions were top notch and gave me a head start on a number of ROUTE-related topics, such as IPv6, which I had previously not really ‘got’. A four-hour hands-on lab gave me a massive boost, as did some of the related breakout sessions. The fact that I had pencilled in June for sitting the exam but brought it forward by two months speaks volumes about the effect it had on my motivation.

With the book finished and the CBTNuggets videos wrapped up three weeks before the exam date, I knuckled down to some labbing. Again, I broke it down into the six topics and focused on these, concentrating even more on the routing protocols and redistribution, and used the INE CCIE Routing and Switching materials to give me a real sense that I was going beyond the requirements for the Route exam. I should point out that I am lucky in regard to the training materials I have access to. My company have a dedicated training budget and were happy to pay for all the books, subscriptions and the Cisco Live ticket, in addition to the exam cost.

As a form of ‘detail revision’, I also decided to go through the 19 hours or so of INE videos in the Route series and was watching a couple of videos each day whilst labbing. I found that this really helped it all sink in and gel. Whilst I could have rewatched the CBTNuggets videos, I think another trainer’s perspective is quite often useful and so it proved.

On the job training

The day to day tasks that I do as a network engineer really helped. For example, I work for an ISP that runs BGP and OSPF in our core and using this live environment to see how the various topics knit together is priceless. It’s also given me a few tasks to keep me busy over the next few weeks and months as I’ve noticed where improvements and tweaks could be made and let’s not forget the IPv6 implementation plan!

Practice exams

Finally, the Boson exams gave me great insight into which areas I was still weak in. After completing an exam, I would go back to the book and read up on the weak points. The day before the exam, I did 108 questions and scored 907, which made me feel more confident.

The methods used between the Switch and Route exams were worlds apart and I know which one I preferred. Putting the effort in really makes the difference and every hour you use for studying now will save you countless hours of head scratching at a later date. With one more exam to go for the CCNP, I am getting a feeling of anticipation but fully intend to apply the same regime to studying, despite the fact I hear from many sources that if you have been working in IT for any number of years, you should be able to pass the TSHOOT exam with minimal study. That doesn’t tempt me in the slightest. I want to make sure my CCNP is as solid as it can be. After all, this is the foundation for my entire networking career from now on. I have the desire to go on to the CCIE at some point, perhaps with some design certs along the way, maybe the CCIP\CCNP SP and some specialisations such as Wireless and Security.

One thing I have realised is that there is no rush for these career-making skills, and that is why I’ll be going back to the Switch topics and applying the same process to them that got me here with the Route. In fact, INE have a deep dive series specifically on Layer 2 that sounds like just the ticket. On a final note, this was my 5th Cisco exam and, despite me loving the CCNA exams the first time around, it was my favourite so far. Things are really starting to gel now and I have to say I have a strange attraction to BGP that I will be pursuing further…

The real exams

This last section (which I originally missed out due to being giddy about going on holiday the day after my exam!) is about the exam itself. Oh yeah…that bit!! As you progress through your studies, you should start getting a better idea of when you will be ready to sit the exam. My suggestion is to book the exam about 4-6 weeks before the date itself. This will hopefully give you a last burst of energy in the final stage – there is nothing like a target to aim for. I always try to book the exam for about one week (and usually no more than two) after finishing the books, videos and labs, giving me that 1-2 weeks for exams and final reading up.

What are my thoughts on postponing an exam? It all depends on whether you mind about having to sit some exams more than once before you nail it. If you do care (and I’ll admit I have this obsession about NOT failing an IT exam based on a failed university chapter earlier in my life), then feel free to push it back a week or more, but don’t do this more than once. If you are not bothered about a failure here and there, then stick to the original date. Either way, I think you should try to be as ready as possible, although I can see the benefits of sitting an exam when you might not be 100% ready (examples include your 1st exam when you don’t know what to expect, a renewal that has crept up on you and you must take it before a certain date).

For the exam day itself, I can offer some basic tips. Make sure you have your ID with you, book the exam for a time that suits you (e.g. if you usually feel sleepy mid afternoon, book a morning exam), make sure you know where the test centre is, where parking is etc. Leave plenty of time to get there – most centres I’ve been to have let me start early anyway. If yours doesn’t, you will at least have time to settle your nerves and maybe have a cup of tea\water\etc., (or nip to the loo…).

The exam itself should be an exercise in self-control. Make sure you read the pre-exam blurb carefully, especially if you are fairly new to exam taking. Ask for the paper and pen that you are usually allowed to take in so you can make notes. Before the exam starts proper, you should be told how long you have and how many questions are waiting for you. This is important information. Use it to determine roughly how long you have for each question. I say roughly as some questions will take seconds to answer but a simulation could take 20 minutes or more. The point is, if you have two hours to do 50 questions and you find yourself on question 10 with 30 minutes left, you’ve managed your time poorly. Rather than doing the maths on a question-by-question basis, I would check my time every 30 minutes (in the example above) and try to ensure I was 25% further in. With that in mind, don’t be afraid to drop a question if you’ve hit a roadblock. In my last exam (ROUTE), I got stuck on a simulation at around question 40 with 30 minutes left. Eight minutes later, I had done about half of the required work but was going around in circles. What did I do? I set myself a target of dumping the question with no less than 15 minutes left. By that time, I had progressed further but still not nailed it, yet I continued to the next question regardless. As I clicked ‘END’ on my last question, I had exactly 28 seconds left on the clock. That hard decision had given me the chance to answer all the remaining questions.
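The checkpoint arithmetic above is simple enough to sanity test with your own numbers before exam day. Here is a small sketch of the calculation in Python; the function name and figures are just my illustration of the method, not part of any exam software:

```python
def pace_target(total_questions, total_minutes, elapsed_minutes):
    """Questions you should have completed by now to stay on pace."""
    return total_questions * elapsed_minutes / total_minutes

# Two hours (120 minutes) for 50 questions, checking every 30 minutes:
# each check should see roughly 12.5 more questions done.
print(pace_target(50, 120, 30))   # 12.5

# The cautionary case from the text: 30 minutes left (90 elapsed)
# but only on question 10, when you should be around question 37.
print(pace_target(50, 120, 90))   # 37.5
```

If your count at a checkpoint is well below the target, that is the signal to start dumping questions you are stuck on rather than burning the remaining time.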

And finally

My last bit of exam advice would be to make yourself as comfortable as you can. For me, that usually means being in the room alone as I like to talk to myself out loud, stand up and stretch my legs from time to time and even sing\hum to myself to chill out! Find what works for you, that doesn’t upset other exam takers.

Till the next time.

Review: the new iPad (3rd generation)

Let me start by making one thing clear. I am no Apple fanboy. There, I said it…and I mean it.

I have never bought anything from Apple before, although I do have an iPhone 4 courtesy of my current employer. I’ve had an iPhone (since the 3GS) for about three years and it didn’t take long for me to realise that it was the best phone I had ever used, yet I still continued to resist buying an iPad. When the latest generation was due for release in March, it was actually Jo, my wife, who suggested we get one.

I reviewed what the upgrade brought with it, and when a colleague’s arrived at work we compared it side by side with some HD video and a like-for-like comparison of a technical PDF; it was the clarity of the latter test in particular that convinced me I wanted one of these. The fact that both Jo and my five-year-old daughter Mia could get a lot of use out of it too made the decision a no-brainer. Mia is at that age where the educational value of an iPad alone would justify its purchase in my opinion, and the ‘always on’ appeal means that Jo can check her emails, browse the web, check a film\actor on IMDB or find out how her team are doing (Manchester United – don’t ask!) within seconds rather than having to boot her laptop up.

This post isn’t a review of the iPad as such. There are already countless of those available to help sway you in your decision. In fact, swaying your decision is not the purpose of this post at all. I’ve had my iPad (‘ours’ I can hear two voices cry) for just over a week now and thought it time I gave a summary report of my experience to date, from the perspective of a network engineer who is always happy to find ways of maximising his time. With that in mind, I have broken this down into different areas, arranged by application type, that offer me real value and functionality.

Social networking

I have a Facebook account. It’s been disabled on more than one occasion and I only use it now to keep up to date with a handful of people who I unfortunately rarely get a chance to catch up with anymore. I also have a Google+ account and use that even less. The only social media site I regularly use is Twitter (and even that is dwindling recently) and I have found the Hootsuite app on the iPad makes the experience much more efficient with its multiple configurable columns. Adding a separate column or more to keep track of some useful hashtags is a breeze and I like that I can see, in one place, what the people I follow think is important enough to retweet.

Online content consolidation

I am currently playing with a couple of different apps that do similar things. Zite and Flipboard will go to a number of different online sources and, based on what you tell them are your interests, create a digital magazine. Although Flipboard is the slicker looking of the two, I like the fact that with Zite (and perhaps I am just missing the similar functionality in Flipboard), you can tell the app which articles, authors, sources and article tags you like, so that as it learns from your input, you should in theory get even more relevant content every time you use it. All I have to do is open it up, read and give a couple of clicks of feedback.


For me, a tablet platform is ideal for two main areas of productivity and the iPad has a couple of apps which excel at both. Toodledo is, as the name might suggest, a to do list app which allows me to quickly enter tasks, give a breakdown of more information, and set priorities, deadlines and reminders. iThoughtsHD is a mind mapping tool, and it’s a bloody good one too. Within minutes of installing it, I had created a couple of maps outlining my certification path over the next few years and a broad list of networking projects awaiting me at work. Using these two apps together, I can create multiple 10,000ft views of areas that need my attention (be it at work or at home) and then break those areas down into tasks with detail and time targets. A very powerful combination.


Again, a couple of useful apps here. I have an Amazon Kindle device and having an app on the iPad that I can view all my Kindle purchases on is very useful. Even more useful was picking a PDF app that offered more than just reading. In the end, I opted for GoodReader, which has two features that sold me: firstly, the ability to easily sync with iTunes and arrange my PDFs in folders, and secondly, some very nice annotation capabilities – useful for techie publications with diagrams that I always like adding detail to.


I’ll admit that I was a little dismayed at the relatively low number of real techie apps on the iPad, by which I mean ones that are more complete toolboxes. Sure, there are lots of apps that do this or that or let you buy extra functionality ‘in app’, but many of those don’t even manage to carry themselves that well. I bought Prompt, an SSH client, for my iPhone quite a while back and because it is a universal app, designed for both iPhone and iPad, it was free for me to download on to the iPad, which was a relief as I originally paid 69p for it and it has since gone up to £5.99. I considered getting iSSH but some reviews of the latest version turned me off it. I finally managed to find one of those toolbox-type apps with built-in ping, traceroute, whois, etc. functionality which doesn’t have hidden costs and has some nice extras. It is called IT Tools.

Spare time

Of course, it’s not all work and the iPad has plenty of functionality to let me chill out. Firstly, there are a host of games that can easily cost you hours of your time. I went for some standard classics that include card games, chess and draughts, sudoku and of course Angry Birds! I also went for Real Racing 2 HD, which has been updated for the new iPad’s Retina display and looks gorgeous. I have a few cracking sports apps too that let me keep up to date with results and my team’s news (Manchester City of course!) in the most efficient way. Finally, I opted for GarageBand as I recently bought the dongle that allows me to connect my electric guitar so, despite being a failed musician, I can still take a shot at the dream!

Others worth mentioning

Twitter came to my rescue again when I asked about a good flash card app. It was Bob McCouch who suggested Mental Case, and it quickly became evident that it would be a tool I will use throughout my career, not only to help me on my certification path but to have to hand when my memory otherwise fails me. I’ve also downloaded FeeddlerRSS but haven’t had time to set it up yet. I have also bought Blogsy, which as you may have guessed is a blogging app. I should really have posted this particular review using it, but I’m not up to speed with it yet; it looks very capable indeed though. Maybe the next post will put it to the test.


For me, the iPad is all about making the most of my time, even when that’s time wasting! I can have access to all my ideas and projects, my tasks for the next day, week and year. I can vacuum the Internet in seconds for information that is relevant. I can have all my reading materials at my fingertips. I can keep up to date with the things that matter to me, without having to sift through 90% of crap first. Now that I have a decent case to protect it, I can take it to work and use it in ways that my laptop can’t really compete with. That’s really where its value is for me. It’s not a laptop replacement, but the hardware, interface and software filter out a lot of the bloated nonsense that I have just grown accustomed to on the laptop: a minute to boot up, another to log on thanks to the daily group policy gang bang, sluggish applications that offer 90% functionality I’ll likely never use or even learn exists, for crying out loud.

It’s also pretty good at producing content, but perhaps not always on a par with a desktop\laptop equivalent. Photo editing, video editing, music creation and blogging tools are all very capable for the most part. Put all of this power into something you can throw in a much smaller bag than the one you use for your laptop, that you can have sat next to you and accessible at the push of a button, with software that is almost always a fraction of the cost of your main workstation software, and I’m already starting to ask myself how I got by for so long without it.

Let me finish by making one thing clear. I am no Apple fanboy. There, I said it…and I mean it.

Although, that may very well change…

Till the next time…

Make some time for yourself

I recently posted at Packet Pushers about 10 key areas that people who work in IT should focus on to see improvements both in their working and personal lives. This post looks at the first of those areas, time management. To match the theme, I will make this post as short as possible so you can get on with the rest of your day.

There are countless books, websites, guides, courses, etc. that give you advice on how to improve your productivity. Some are very good, others less so. What most of them have in common is a toolbox of techniques to improve your time management. This post offers just three such tools that I use every day. I guarantee that if you condition yourself to use them every day too, you will find yourself getting more done. For those of you who are really busy, here are the three techniques, which I discuss further below:

  1. Lists
  2. 4 Ds method
  3. Distraction avoidance


Lists

Very simple, this one. Every evening before you go to bed, spend up to 10 minutes writing out a list of things you need to get done. How you break down the list is up to you, e.g. one list for work, another for home. Take any of the big tasks and break them down into smaller ones. Then prioritise them in a way that works for you, e.g. tasks that must be done the next day, those that can wait till later in the week, etc.

Once you have your final list, broken down with enough detail to get you started at full speed and in order of priority, take the list to work the next day and start on the number one priority and get it completed before working on the next task on the list. Cross out each task as you complete it.

4 Ds method

This applies to any workflow that comes your way, whether it’s your helpdesk application, paper tray or email inbox. It’s a simple way to deal with anything that is going to use up some of your valuable time. The 4 Ds all do what they say on the tin. The explanations below are given from the point of view of an email that has just landed in your inbox but, as stated above, you can apply them to any incoming request for your time:

  • Deal. If this is a priority, deal with it right now. Do what is required, sign it off and move on.
  • Delegate. Send it onwards to somebody else who can deal with it. Only make a note if you need to chase it up yourself.
  • Defer. This one is critical. If you need to deal with it, but not just yet, move it to a ‘Defer’ folder and only look at this folder when you are going to deal with it. You must get out of the habit of looking at deferred items more than once before doing anything with them. That costs you a lot of time in the long run.
  • Delete. Just delete it and have done with it.

Distraction avoidance

Distractions can easily suck up hours of your working day:

  • Meetings that you should not have been in
  • Meetings that go on for two hours with a five minute ‘useful’ bit
  • Telephone calls that match the two meeting points above
  • Gossip around the coffee machine\photocopier
  • ‘Can you just take a quick look at this for me’….an hour later, you are still looking

Distractions such as those above, and countless others, eat into your working day and indeed life in general. Learn how to deal with them in an assertive yet professional manner.

An example: I’ve said on many occasions that I am unable to make it to a meeting due to being busy on something else. When I read the meeting minutes later, I learn in less than five minutes what it took the attendees 90 minutes to find out. I try to only attend meetings where my input is necessary and even then, I can often give my input after the fact.

When you walk about the office, walk with pace. Not only do you get where you are going quicker but it makes it easier to get past that person who is always grabbing you for advice. When I make myself a brew in the kitchen, I take it straight back to my desk. I eat my lunch at my desk too.

If somebody keeps tapping you on the shoulder for help, rather than doing it for them, show them how to do it themselves, perhaps with a Wiki article or a process guide. Or send them a LMGTFY link. Or be honest and tell them that you are really busy now but if they send you the details, you will get around to it.

Of course sometimes it’s somebody senior to yourself who keeps sapping your time and if that is the case, refer them to your list of priorities for the day and ask them where their request falls on that list. It’s amazing how often they will concede that it’s not as important as first suggested.


Use each of these in conjunction with one another and really put effort into each of them. It has been estimated that learning a new habit requires daily practice and takes about two to three weeks before it starts to feel natural. However, get started today and you will see results almost immediately. Let me know how you get on in the comments below or via email. I also have an upcoming post on how to make the most of your studying time, using an approach that I have found not only lets me learn things quicker, but makes the topics sink in!

Finally, remember that on average, we have 450 minutes at work each day. Try to make every single one count and watch your productivity soar.

Till the next time…

The Mask of Sorrow

The purpose of this post is to discuss the differences between a subnet mask and a wildcard mask, when each is used and what tricks you can do with them. This post does not go into any real depth on subnetting and assumes you know how to subnet already. You probably won’t be surprised, given that last sentence, to learn that this post covers wildcard masks in a little more detail than their subnet cousins.

I recently saw a tweet asking about wildcard masks and if anybody had a good system for working them out. I keenly replied and, whilst my answer was correct, it turns out that it was quite limited in scope. It sparked an interesting discussion and in the end I learned something that I didn’t know, as did a couple of others, so it seemed like a good topic for a post.

Subnet masks

Let’s start with subnet masks as they are the easiest to understand and are the less ‘funky’ of the two masks being discussed here. They should also be more familiar to non-networking IT types. If you know any sysadmins who don’t know what a subnet mask is, you have my permission to flick them on the forehead very hard. Repeatedly.

Matt’s definition

A subnet mask, in conjunction with an IP address, tells you which subnet that IP address belongs to. Another way to put it is that the subnet mask tells you which part of the IP address refers to the network (or subnet) and which part refers to the host specifically within that subnet.

To give an example, I will lay out my thinking on the page step by step with descriptions of what I am doing.

  1. Let’s take a random example, using slash notation:
  2. I’ll convert the mask to dotted decimal notation:
  3. Now to convert both to binary: 11000000.10101000.00101010.01001111 11111111.11111111.11111111.11000000
  4. Now I’ll put the subnet mask under the IP address. This makes the next step, doing a binary AND operation, easier to visualise:
    11000000.10101000.00101010.01001111 <IP address
    11111111.11111111.11111111.11000000 <subnet mask
    11000000.10101000.00101010.01000000 <binary AND result
  5. This is still a /26 remember, so we can now convert this AND result back to a decimal number, which represents the network or subnet that the original IP address ( belongs to:
  6. As you should know from your subnetting studies, the range of this subnet will be from to, with usable host addresses from to

The first three steps above all have the goal of converting the IP address and mask to binary. Why? Well, in the example above, it’s to show how the binary AND operation works. As the maths becomes more familiar, you should find that working this out in decimal, and eventually in your head, becomes second nature. The result of the binary AND gives you the network ID, or subnet number, that the host belongs to. That is what the subnet mask does: it masks the IP address in such a way as to reveal the subnet. A subnet mask should always be a contiguous run of 1’s, followed by a contiguous run of 0’s if any are required (i.e. anything other than this pattern is not a valid subnet mask).

So where is a subnet mask used? The table at the end of this post gives examples (not a definitive list by any means) of where both masks are to be found. One thing to note is that the subnet mask isn’t sent out in the IP header with the IP address. There is no need for the destination host to know what subnet the source host belongs to so no need to send it. The destination only needs to know whether the source is on its own subnet or another one so it knows whether to communicate directly or via its own next hop gateway. Again, it calculates this by doing a binary AND to compare the network part of the source and destination. If they match, they must be on the same subnet.

Right, I’ve drifted closer than I would have liked to the place I said I wouldn’t go, i.e. into a subnetting discussion. The key part of this post is the next topic.

Wildcard masks

I should perhaps make a feeble attempt here to defend my ignorance on Twitter, as mentioned at the start of this post. I am currently working towards my CCNP and, at no point during my studies to date, had I seen wildcard masks used as anything other than an inverse subnet mask. In fact, I’ve since heard Jeremy Cioara make a passing reference to them in one of his redistribution videos in the CBT Nuggets ROUTE series. There are always new things to learn! Before I explain what I mean by inverse subnet mask, let me give my quick definition of a wildcard mask.

Matt’s definition

A wildcard mask, in conjunction with an IP address, lets you specify which bits of the IP address you are interested in and which you aren’t.

First, let’s see what a wildcard mask looks like, paired with an IP address:

What just happened there? That looks different from a subnet mask. Yes it does…because it is. Before I do some magical conversion to binary again to clarify, keep in mind that with a wildcard mask, the following rules apply:

For a binary 0, match
For a binary 1, ignore

Or put another way:

For a binary 0 in the mask, we care what the corresponding bit in the IP address is
For a binary 1 in the mask, we don’t care what the corresponding bit in the IP address is

Now read my definition again to see what the mask above might be trying to achieve. Still a bit unclear? Then let’s break it down.

  1. Let’s convert the IP address\wildcard mask pair above to binary:
    10001011.00101110.11011101.00101000 00000000.00000000.00000000.11111111
  2. Put the wildcard mask under the IP address to see the masking in effect:
    10001011.00101110.11011101.00101000 <IP address
    00000000.00000000.00000000.11111111 <wildcard mask
  3. Remember the basic rules above? Applied to this example, they mean that we are only interested in the first three octets of the IP address and we can ignore the last octet (0=match, 1=ignore)
  4. That means this wildcard mask will apply to any IP address that has 139.46.221.x in the address, where x in the last octet could be 0-255 (because the mask doesn’t care). We are ignoring the last octet, as dictated by the mask
  5. Remember before I used the term inverse subnet mask? When a wildcard mask contains a contiguous series of 0’s only ( or a contiguous series of 0’s followed by a contiguous series of 1’s, it behaves exactly like an inverted subnet mask. In this example, the wildcard mask of would match any IP address in the subnet defined by the following IP address\subnet mask pair:
  6. Before I get to the groovy part of wildcard masks, an easy to remember calculation for working out the equivalent wildcard mask (inverse mask) from a subnet mask is to subtract the subnet mask from, octet by octet, as below: <all 255’s <subtract the subnet mask <the result is the wildcard mask, an inverse of the subnet mask

I used a mask for this example that not only falls on the octet boundary but is also all 0’s followed by all 1’s, to keep things simple, but it gets more interesting when we take things further. Yes, just like a subnet mask, a wildcard mask does not need to fall on an octet boundary; but whereas a subnet mask must be a contiguous series of 1’s followed by a contiguous series of 0’s, a wildcard mask can be pretty much anything you want, and this is where the fun begins.

Time for an example. Let’s say you have multiple physical sites and you assign a subnet at each of them for management IPs, i.e. source IPs that can access your networking kit throughout your company. You assign the following /24 network to each site:


where x represents the site number. You have three sites, so you create the following config on every device:

[sourcecode language="plain"]
ip access-list standard DeviceManagement
line vty 0 4
 access-class DeviceManagement in

To clarify the config, we have an ACL that says the management ranges of IPs on sites 1-3 can telnet on to the devices configured as above. That’s all well and good, but what if we have 20 sites, 100 sites or even more? What if the number of sites is only three now but will grow by one site a week? These scenarios highlight two key problems. Firstly, with each new site, the ACL gets bigger; an extra line for each site. Secondly, you need a process to update the ACL on every device every single time a new site comes online. Even with a configuration management tool, this isn’t ideal. With the power of a well-crafted wildcard mask and, just as importantly, a carefully designed IP addressing scheme, we can instead use a one-line ACL:

[sourcecode language="plain"]
ip access-list standard DeviceManagement
line vty 0 4
 access-class DeviceManagement in

You should be able to see, without a conversion to binary, that the single permit statement is saying that as long as the source IP matches

then permit access, i.e. we don’t care about the 2nd or 4th octets, just that the 1st and 3rd octets must match ’10’. This answers both our previous key problems. The single-line ACL matches all three sites’ ranges and, as long as we use the same addressing scheme for each new site, the existing ACL will match any new site, at least up to site 255.

OK, I hope this isn’t making your ears bleed and if you’ve made it this far, I have one more example that shows another cool use of wildcard masks. This example is actually the one that Marko Milivojevic (@icemarkom) slapped me with on Twitter when I gave my inverse mask answer and it’s a cracker for showing the power of the wildcard mask. Marko posed the question, how would you use a wildcard mask to select all of the odd-numbered /24 subnets of the following range:

  1. Let’s convert to binary: = 10000100.00101001.00100000.00000000
  2. The last three 0’s of the third octet are the subnet bits, the three bits I can use from the original /21 to create my /24 subnets. With three bits, I can create 8 subnets: through
  3. The only bit set in the 3rd octet is the 6th (counting from the right), giving a base value of 32. It should be obvious that to create a mask that targets only the odd-numbered /24 subnets, the 1st (least significant) bit must be fixed at a value of 1.
  4. This means that, of the eight subnets in point 2, the ones that match this requirement are:,, and
  5. So for the 3rd octet, the only bits we care about are shown in 00100xx1. We don’t care what the values of the two ‘x’ bits are, but the other bits must be as listed
  6. So we now know the base network address:
  7. To calculate the mask, we need to ask ourselves which bits we care about and must match, and which we don’t. For the 1st two octets, the values must match 132 and 41 and, for a /24, the last octet must match 0. Point 5 above tells us which bits we can ignore in the 3rd octet, so using the wildcard rules I stated at the start of the wildcard mask section (0=match, 1=ignore), I can come up with the following IP address\wildcard mask pair:
  8. Putting this in binary form, with the mask underneath the IP address, should show this more clearly:
    10000100.00101001.00100001.00000000 <IP address
    00000000.00000000.00000110.00000000 <wildcard mask
  9. The mask is effectively saying ‘I don’t care what the middle two subnet bits of octet 3 are, as long as the 1st bit is 1’

Sometimes, listing things in a logical order like above helps enormously, other times it just muddies the waters. Read over the post again to determine what the purpose of the wildcard mask is first, then look at the two examples above to get a feel of how they can be applied. Try looking online for further examples of powerful wildcard masks to see if they can perhaps answer a problem you have. Hopefully this post will have at least given you a clear definition of a subnet mask and wildcard mask, how to calculate and use them and where you can find them. If you have any questions, feel free to leave a comment below.

The table below contains a few, non-exhaustive, examples of where subnet masks (S) and wildcard masks (W) are used on networking kit (Cisco specifically)

Type Where and description
S On a NIC, physical interface, SVI. Wherever an IP address is assigned
S On an ASA, ACLs use subnet masks rather than wildcard masks
W In IOS, ACLs use wildcard masks
W In RIP, EIGRP, OSPF, as part of the network statement
S In BGP as part of the network statement
S Most summarisation type commands e.g. area range command in OSPF
S Static routes in IOS and on ASAs

Finally, I’d like to thank Marko and Bob McCouch (@bobmccouch) for bringing me up to speed on wildcard masks beyond the inverse mask, especially Bob who went further and gave this post a quick once over and also provided one of the examples for me to work with. I find the help of the networking community very motivational and it’s the primary reason why I decided to start blogging myself to hopefully give something back.

Till the next time…

Cisco Live London 2012 – its value to me

The dust has finally settled on Cisco Live London 2012; the vendors have moved on and the Ethernet and power leads have been ripped out. On the latter point, these were actually being pulled out as I walked out of the final session on the Friday. Well, they say that time is money.

On that very note, before I start to talk about the value of this event as I perceive it, let’s look at what the real costs are (and damn you WordPress image compression!):

CL12 Rates
The various rates for Cisco Live London 2012 (main conference pass)

This covers the event from Tuesday to Friday midday. Monday is a full day for those who wish to attend the technical seminars. I believe there were 25 on offer this year and assume they all cost the same as the one I attended at £475. All of these costs exclude VAT. You get lunch provided Monday through Thursday (with a packed lunch on the Friday) and there are snacks and drinks served at various times throughout the day, but you need to factor evening meals, accommodation and travel costs into the equation, although Cisco put on a number of parties in the evenings with food. It can all add up quickly. I was fortunate enough to get company sponsorship to attend and, as my company has a flat in the Shoreditch area of London, the cost to the company was in the region of £3000, including my expenses.

If you have to factor in a hotel that isn’t a flea pit, then suddenly you are looking at a ballpark figure of £4000 for the week. Not a casual spend by any stretch of the imagination. Yet I spent not a penny of my own, so my attempt to define the value of this event in terms of money might at first seem pointless. Or would it? I could list what I see as the main benefits of attending this event (and I will, as you’ll soon see) and then summarise by asking whether I would pay £4000 of my own money to attend. The problem with that is, I don’t have £4000 lying around spare, so the answer would have to be no.

Let’s leave the financials out of the discussion for the moment and talk about the benefits of attending this event.

  • Meeting the vendors – the World of Solutions conference hall allowed many different vendors to set up their stall and tell me why their products were unlike anything else on the market. OK, so there will always be a biased pitch but I am fairly immune to that kind of thing (or at least know when I’m letting myself be swayed) and am happy to ask probing questions or call BS where I see it. I saw that at only a couple of stalls – the vast majority accepted their weaknesses (where they had them) and were mostly balanced. As a guide to the usefulness of having all these vendors in one place, there is a product I will be definitely looking at more closely as it offers something that I currently have to get from two separate vendors at twice the cost.
  • Technical seminars – the Monday session proved to be very informative. 4 x 2 hour sessions that maximised the useful information and minimised the fluff. It would have taken me days, if not weeks, to have accumulated that level of knowledge. For this seminar as with all the sessions I attended, to have the presentation materials to refer to whenever I choose means the fact I have a memory leak issue is seriously mitigated!
  • Breakout sessions – the wide variety of these was very impressive. They were also numbered so you could quickly determine the depth of knowledge being passed on i.e. 1### was for the introductory level sessions, 2### for intermediate, 3### as expected for the advanced levels. They ranged in length from 30 minutes to over a couple of hours. All of the presenters throughout the week were bang on the money both in terms of knowledge and presentation skills.
  • Lab sessions – these came in two flavours: walk-in labs and instructor-led. With the former, you book your slot (or chance your luck and turn up), sit down and work your way through the chosen lab. There were several to choose from and I opted for the CCIE OSPF lab. The instructor-led labs were a bit more formal, at set times with (in my IPv6 lab at least) three instructors to help with any questions. There was little instructor-led teaching for the group; you just worked your way through the lab and asked questions if you had any. I found this session to be extremely valuable. I have always found hands-on labs the best way to learn and remember topics, and four hours configuring IPv6 helped me understand a good deal about it.
  • Meeting Key Cisco staff – where else would you get the chance to speak to the CTO of Cisco Learning to get key advice on my study path and probe about, for example, what Cisco are doing to protect the CCIE programme? Or speak directly to the IOS product manager about the timelines for features and platform standardisation? Highly valuable discussions.
  • Meeting your peers – I met some great people last week. Friendly, knowledgeable, geeky, willing to share their experiences and willing to listen to mine. I use Twitter quite a lot but it has limitations: the lack of face-to-face feedback, and the 140-character limit that makes anything more than a passing comment a chore. Sure, there are loads of nice people on there who can help you, but there is no captive audience; chances are that most of my followers are still asleep on the other side of the pond if I expect an answer before lunch. Facebook is dead to me. The web as a whole offers all the information I could hope for, but sitting down for lunch, or a pint…or a vindaloo perhaps, and just talking about ‘stuff’ is so much more sociable, which suits my personality far better. And it’s back to the feedback issue…it’s instantaneous.
  • Inspirational – all of the factors above, crammed in to a single week? It was a real eye opener for me and I came back, despite the very long days, feeling energised, driven to get my CCNP done and move on to bigger and better things, get a plan together for both IPv6 and more global WiFi rollouts within the company and to spread the word as to what is happening in the industry.

Perhaps this post will help you decide whether Cisco Live is worth attending if you haven’t already been. Do I think it was a worthwhile event? Surely you know the answer to that from this post alone, let alone the daily updates I posted (you have read them all, haven’t you?!!). I’m already asking whether my company intends to send people there next year.

Would I pay £4000 myself for such an event? If I had that kind of money to spend without it stinging, without a doubt. The fact is though that it would sting but let me make a final comparison to put things in perspective. Being a predominantly self-taught person, I’ve been on only a handful of courses in my IT career. These have usually come in at the £1000-£2000 mark, and that is just for the course i.e. only £0-£1000 cheaper than Cisco Live. If I take the extreme case and say would I pay £1000 more for Cisco Live than the best of those IT courses, then I would say there is no question. I absolutely would and I’ll be gutted if I don’t get to attend again next year, and the next, and the next…

Till the next time…

Cisco Live London 2012 Day 5

I woke up this morning with mixed feelings. On the one hand, I was very excited to get back home to see Jo and Mia, my wife and daughter. Although this week at Cisco Live London 2012 has been a phenomenal experience, I find that I really start to miss them both after a few days away. The flip side of that excitement was the genuine sadness that the Cisco Live week is over. I am very fortunate to have been here, learnt some amazing things and met some quality people. Once the dust settles a bit, I’ll post a summary of the week and explain why somebody in my position found it to be so incredible.

OK, back to the task in hand – what happened on day 5. The last day is a half day and the World of Solutions section closed yesterday afternoon so I was keen to make the most of the morning and had booked in to two sessions. Funnily enough, these were the original two sessions that I had signed up for when I first got my online account. Pretty much every other session had been swapped about before I finally settled on them.

The first session was on OTV. Max Ardica did a great job of covering the topic considering the 90-minute time frame, although it is one of the easier concepts to understand. OTV is effectively a Layer 2 extension feature which, used in conjunction with LISP, for example, has some real potential. It is a relatively new feature that is maturing at a steady rate. Overlay Transport Virtualisation creates one or more tunnels over a Layer 3 IP network and allows Layer 2 communication across them. Assuming you have the bandwidth for it, it means you can VMotion across geographical locations, and using this in conjunction with LISP will allow your external access to find the services in the new location with minimal outage (when I say outage, I am talking about a single packet drop, so outage is not really the right word).

Despite the Cisco Live party last night and it being the last day both this and the last session of the week were full up, which surprised both presenters!

The last session was on the evolution of IOS. This turned out to be more interesting than it might at first sound! First of all, Cisco are committing to making the whole numbering and release fiasco more standardised across all platforms. On that note, there is a strong desire internally to standardise the CLI platforms themselves, but it’s not going to happen in the next 18 months. What will happen before then is a more frequent release of SM (standard maintenance) versions with regular EM (extended maintenance) releases. This harks back to the good old days; since 12.4\12.2 on the routing and switching platforms, the numbering system seemed set to reach infinity and releases were not nearly as common as they used to be. The presenter (whose name was not on the slide and whose face doesn’t match the name on the Cisco Live website for the session) was the first to admit that there are still a lot more improvements to be made.

Mystery man
Do you know this man?

The subject of licensing of course reared its head and, after reviewing customer feedback, the current model is being overhauled to a ‘Right to Use’ system, effectively based on trust. You use, you buy; but you can install an IOS for evaluation purposes, and doing a ‘show license’ will reveal which licences are under evaluation and which have effectively entered the ‘be honest’ phase.

The IOS is moving to a more modular system, where each feature is available in a release and you turn on what you need. In addition, there was talk of feature virtualisation so that, for example, a firewall feature would run in its own computing process separately from OSPF, so that if one caused issues, it would not crash the entire system. Playing in to the modular approach, a role based access method could mean that your firewall guys could log on and only see the firewall process CLI, your routing guys the related processes etc. Perhaps too much granularity for anyone other than the really large shops but I can think of a few good use cases at my current role.

Another feature coming down the line, which I thought was very cool and also long overdue, is the ability to have a Wireshark process running on a switch\router that could packet sniff without having to put a separate device inline. 1984 made easy, 28 years later.

As a late snippet of something I learned yesterday in one of my IPv6 sessions, OSPFv3 will be supporting IPv4, hopefully from next year. Its improved convergence alone makes this good news, but nobody will be running IPv4 by the end of 2013 anyway, right?

Well, I’m at the airport now with five hours to kill thanks to a cancelled flight and intend on catching up with a load of stuff, so…

Can't wait
Till the next time…

Cisco Live London 2012 Day 4

As much as yesterday at Cisco Live London 2012 was about the WAN for me, today was all about IPv6. Well, beer and curry and IPv6 too. At the start of the week, today was going to be about learning more about UCS. Following on from the excellent seminar on Monday, and my colleague’s recommendation of the IPv6 instructor-led lab (which he attended yesterday), I decided UCS should take the back seat, so I turned up 15 minutes early to be first in the waiting line – this session had been fully booked. Thankfully, not everybody booked in had turned up by 08:57, which is when they start letting the people on the waiting list in.

Bam!! Four hours of labbing, with three excellent instructors on hand to answer any questions. There were seven main labs, with four optional ones. I made sure that I fully understood everything I was doing before moving on to the next part and was glad to have made it through five of the seven main labs in the four hours. Missing the last two did not concern me as the lab is available for download and the topology will be easily created in GNS3. As I tweeted later in the day, I will be setting up IPv6 at my home in the coming days and seeing what IPv6 only resources I can access on the Internet. The best way to understand IPv6 is to get stuck in and see what it does. I could feel my trepidation fading away with each successful confirmation that I’d configured it correctly.

The afternoon brought two IPv6 breakout sessions, the first delivered by Cisco IT about how they implemented IPv6 in their own business presented by Khalid Jawaid, the second a session on planning, deploying and things to consider presented by the very capable Yenu Gobena. Although the Cisco IT session was good, the second one was far more informative for me and rounded off my IPv6 day nicely…

…just in time for Net Beers. Yep, last night of Cisco Live is party night but instead of heading straight to the main event, myself with @ghostinthenet and @ccie5851 (Jody Lemoine and Ron Fullar respectively) met up with @xanthein (Jon Still) who unfortunately hadn’t been able to make it to Cisco Live. A good night was had by all and it wasn’t long before Jody was outnerding us all with his knowledge of Sci-Fi & fantasy, history and many other things too. He also won the ‘Matt’s favourite T-shirt of the week’ competition:

Geek T-shirt
You shall not pass!!

At about 21:00, I was feeling rather peckish so Jody and I said our farewells to Jon and headed to the Cisco Live party. The setup was pretty cool, although most of the food had already been taken by that point so when Jody said he felt like a curry, I told him I knew a place! So off to Brick Lane in Shoreditch again for a chilli masala and a vindaloo for Jody (at a different place from Monday, not quite as nice but very pleasant). And so another post midnight day came to an end, I thought I’d keep today’s post a bit briefer.

Two sessions tomorrow to take me up to lunch time, then it’s back up north of the border. Will give an overview of those as soon as I get the chance and a summary of the week as a whole. Also, in light of today’s sessions, I’ve changed the tagline of the blog from “The of networking”. It’s all about progress!!

Till the next time…