MPLS L3VPNs – Part 2

In Part 1 of MPLS L3VPNs we talked about the different components and technologies that form the IP VPN service, the role of the devices in the service provider connection model, and how they interact with each other. In this second part, we will look at four design cases that use MPLS L3VPNs in today’s modern networks.

Case 1: Service provider – VPN Service

The most popular and proven use case of L3VPN is within the service provider network, used as a transit VPN service. Typically, the aim of a service provider offering is to meet customer requirements with a service that can:

  • Support a large number of sites per customer
  • Provide customers with a service that can differentiate and transport traffic with end-to-end QoS
  • Provide a flexible and single infrastructure that can serve various media access methods for all customers
  • Reduce the time to provision new sites or to increase bandwidth throughput

All these goals can be achieved by using the following overlay model, also discussed in Part 1 of this blog:


In this architecture, the provider edge nodes (PEs) receive customer traffic from the customer edge nodes (CEs), carry it across the provider nodes (Ps) to the endpoint PEs using the relevant VPN and transport MPLS labels, and finally inject it into the appropriate CE. In this model, the network is used as a transit service only and does not provide services to the customers themselves.

Case 2: Service Provider – Internet and Application Service

The flexibility provided by an MPLS VPN infrastructure allows the provider to offer customers multiple additional services, such as Internet access or applications as a service. There are multiple design options for extending an application through L3VPNs, but the general model can be seen below:


The PEs at remote sites will exchange routes and data with the PEs inside the service provider data-center. The PEs in the service provider data-center can be considered Autonomous System Boundary Routers (ASBRs), as they sit at the edge of the application or Internet service provided to the customers. This use case is most popular for offering Internet access as an additional service on top of the existing transit VPNs to other sites.

Case 3: Enterprise – WAN and Campus Segmentation/Virtualization

Some enterprises opt to run L3 MPLS over their own WAN, or over a service provider network, to extend L3 segmentation and virtualization into their campus networks. This is often done after a merger when there is a need to keep the routing infrastructure separated end to end (to maintain isolated networks, for example).

When the enterprise has its own managed WAN, the design is the same as the service provider network (Case 1), also called a self-deployed L3 MPLS network. If the enterprise instead uses a service provider as transit with an L2 VPN service such as VPLS or EoMPLS, the design will look like the picture below.

From the service provider’s perspective:


From the enterprise perspective:


As you can see, what the service provider considers CEs are the PEs for the enterprise. This allows the enterprise to run MPLS and MP-BGP between its PEs over the service provider’s WAN and extend it inside the campus network. Using L3VPNs and extending them to the campus network from PE1 allows overlapping addressing over a shared infrastructure:


In some cases, the WAN service offered might itself be an L3 service, or the enterprise might run a pure IP core backbone, so running MPLS natively over the transit network won’t be feasible. In this case, an MPLS over GRE or mGRE tunnel design can be used, as seen below:


One last design option is the MPLS over GRE + IPsec model, or even MPLS over DMVPN (2547oDMVPN) with a public network as transit, like the Internet for example. This offers a low-cost, scalable MPLS overlay.
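To make the MPLS over GRE idea concrete, here is a minimal Cisco IOS-style sketch, with hypothetical interface names and RFC 5737 documentation addresses (none of these values come from the original post). Enabling LDP on the tunnel interface lets labeled packets ride inside GRE across an IP-only transit:

```
! Illustrative only: one end of a point-to-point GRE tunnel between two
! enterprise PEs, running LDP over the tunnel across an IP-only transit.
interface Tunnel0
 ip address
 tunnel source Loopback0
 tunnel destination     ! remote PE tunnel endpoint (illustrative)
 mpls ip                            ! run LDP over the tunnel
```

The same idea extends to mGRE/DMVPN: as long as LDP runs over the tunnel, the label stack is preserved end to end even though the transit network never sees a label.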

In general, L3VPNs are not as popular within the enterprise, largely because enterprise engineers often lack MPLS experience. Another factor is that the number of different VPNs to be maintained in most enterprise networks is fairly low, so scalability is rarely an issue. If scalability is not a design concern and MPLS is not a technology the supporting staff is familiar with, a VRF-lite-only design often becomes the better option in terms of manageability.

Case 4: Data-Center – Multi-Tenancy

In the past few years, the Clos or spine/leaf architecture has been growing in popularity in data-center environments due to its benefits in bandwidth availability and its efficiency for east/west traffic patterns. Although an L2 model using TRILL/FabricPath/SPB could be used, L3 ECMP is another option that offers most of the same benefits. It is also possible to use L3 ECMP with VRF-lite to create hop-by-hop segmentation with an IP data plane for multi-tenancy; however, we will focus on using MPLS with L3VPNs. MPLS L3VPNs offer several benefits that VRF-lite segmentation does not. The first is scalability, which is very important for most hosting/cloud providers due to the sheer number of customers they have to support. The second is convergence time, thanks to MPLS tools like Fast Reroute and path/node protection. Lastly, the VRF-lite model involves more complexity because of the need to stitch customer VRF-lite segments to MPLS at the data-center edge. Let’s take a look at the architecture itself:


Mapping this to the spine/leaf architecture, the PEs are the leaf nodes and the Ps are the spine nodes. As usual, a link-state routing protocol underlay runs between the P devices and between the PE and P devices, with an MP-BGP overlay between the PE devices. If there is a high number of PEs, it is possible to use a hierarchical model by using two of the devices as route reflectors for the VPNv4 peerings.
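The route-reflector idea above can be sketched in IOS-style configuration; the ASN and addresses below are illustrative, not taken from the post. On a spine acting as a VPNv4 RR, each leaf PE peers only with the two RRs instead of a full mesh:

```
! Illustrative only: a spine node reflecting VPNv4 routes to a leaf PE.
router bgp 65000
 neighbor remote-as 65000     ! leaf PE loopback (illustrative)
 neighbor update-source Loopback0
 !
 address-family vpnv4
  neighbor activate
  neighbor send-community extended   ! carry the route targets
  neighbor route-reflector-client    ! reflect between leaf PEs
```

With two spines configured this way, adding a new leaf PE means two BGP sessions rather than one to every existing leaf.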

In conclusion, in this post we covered different use cases for L3VPNs in service provider, enterprise, and data-center environments. In the next post, we will cover several deployment scenarios and issues encountered in different L3VPN topologies.


MPLS L3VPNs – Part 1

MPLS Layer 3 VPN (MPLS L3VPN), also called MPLS IP VPN, is one of the most widely used applications of MPLS in today’s networks. The service is most popular with service providers but can also be found in enterprise WAN and data-center environments. This post will focus on an overview of the different technologies used to create an MPLS IP VPN network and how they interact with each other.

MPLS L3 VPN is, like the name says, a way to create a Layer 3 VPN by harnessing the power of MPLS. The terminology “L3” or Layer 3 refers to the third layer of the OSI model and it is basically just a fancy way of saying that we are segregating at the network layer by creating a separate routing and forwarding table for each VPN. This means that for each VPN we can have an overlapping IP addressing topology running on a shared backbone network.

MPLS VPN Connection Model:

Let’s start by looking at a common topology for an MPLS VPN network. This is what a simplified Service Provider network would look like:

[Figure: simplified service provider MPLS VPN topology]

The purpose of the network in this example is to send each customer’s traffic from one site to another over the routed service provider network while maintaining a separate routing and forwarding instance per customer. In an enterprise environment, the same topology could be used, with the goal instead being to separate traffic between different departments, applications, groups, services, or any other logical domain the business requires. There are different components and technologies used in this network, and we will explore them one by one.

The first part of this network is the Core MPLS Network, made of Provider (P) devices.

[Figure: core MPLS network built from P devices]

P devices sit inside the MPLS network and run a link-state routing protocol (OSPF or IS-IS) along with an MPLS label distribution protocol on all of their interfaces. Several label distribution protocols can be used: Label Distribution Protocol (LDP), Resource Reservation Protocol with Traffic Engineering extensions (RSVP-TE), Constraint-based Routed LDP (CR-LDP), Multiprotocol BGP (MP-BGP), and Segment Routing. We will focus on LDP for simplicity and because it is possibly the most widely used.

A quick overview of LDP:

Label Distribution Protocol (LDP) is a protocol used to form Label-Switched Paths (LSPs) based on the existing routing table built by an IGP. LSPs are sequences of MPLS-enabled devices that forward packets of a certain Forwarding Equivalence Class (FEC). A FEC is a set of packets that a device forwards to the same next hop, out the same interface, and with the same treatment. If this all sounds confusing, just think of an LSP as a unidirectional tunnel between two devices for packets that share common characteristics. Without going into too much detail, LDP creates an LSP across the P devices, and this LSP is used to forward customer traffic across the MPLS network.
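On a P router, getting LDP running is simple. A minimal IOS-style sketch (process IDs, interface names and the documentation addresses are illustrative, not from the post):

```
! Illustrative only: IGP plus LDP on a P router's core-facing link.
mpls label protocol ldp              ! use LDP (the default on modern IOS)
!
router ospf 1
 network area 0   ! advertise the core links and loopback
!
interface GigabitEthernet0/0
 ip address
 mpls ip                             ! enable LDP label exchange on this link
```

Once the IGP converges, LDP binds a label to each IGP prefix hop by hop, which is exactly how the LSPs described above come into existence.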

The second part of this network is the Provider Edge (PE) devices.

[Figure: PE devices at the edge of the MPLS network]

PE devices sit at the edge of the MPLS network, between the Provider (P) and Customer Edge (CE) devices. They run MPLS/LDP toward the P devices and IP toward the CE devices. By running LDP with the P devices, the PEs are able to build an LSP to the other PEs. The labels forming this LSP are used to forward packets over the MPLS core and are commonly called the IGP label or transport label.

PEs also need to establish an MP-BGP session with the other PEs to exchange VPNv4 information, as follows:

[Figure: MP-BGP VPNv4 peering between PE devices]

We will see later exactly what is contained in these BGP updates, but one of the key pieces of information exchanged is the label that will be imposed on VPN traffic, more commonly known as the VPN label.

PE devices also have another very important role: holding a separate routing instance for every CE. This is made possible by using a separate Virtual Routing and Forwarding (VRF) instance for each customer.

A quick overview of VRF’s:

Virtual Routing and Forwarding (VRF) is a technology that allows multiple routing and forwarding tables to exist on a single device. It is basically the Layer 3 equivalent of a VLAN. The use of VRFs alone is called VRF-lite, while within the context of MPLS they are simply called VRFs. The real difference is that VRF-lite does not make use of two critical components of MPLS IP VPNs: Route Distinguishers (RDs) and Route Targets (RTs).

Route Distinguishers (RDs) are defined within a VRF’s configuration, and their only purpose in L3VPNs is to make an IPv4 prefix globally unique for route exchange. This matters because the service provider must be able to distinguish between two identical prefixes coming from different customers.

Let’s say Customer 1 and Customer 2 both advertise the same prefix through the service provider network. How would the service provider know which customer a given route came from? It can’t, unless there is something to differentiate the two routes. That is exactly why the route distinguisher exists. During the route exchange process, the route distinguisher is prepended to the IPv4 prefix as follows:

[Figure: route distinguisher prepended to an IPv4 prefix to form a VPNv4 route]

Route Targets (RTs) are also defined within the VRF configuration and use a similar format to the Route Distinguisher, but they serve a completely different purpose. Their role is to tell PE devices which prefixes should be exported to, or imported from, the VPNv4 table, using a BGP extended community attribute.

Again, we will see how this whole process works later in this post, but for now just remember that the Route Distinguisher makes an IPv4 address unique across the MPLS network, while the Route Target defines which prefixes get imported or exported on the PE devices.
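Putting the RD and RT together, a per-customer VRF on a PE looks roughly like this in classic IOS syntax. The VRF name, ASN, and addresses are hypothetical, chosen purely for illustration:

```
! Illustrative only: defining a customer VRF with its RD and RTs.
ip vrf CUSTOMER_A
 rd 65000:100                    ! makes this customer's prefixes unique as VPNv4 routes
 route-target export 65000:100   ! tag routes exported into MP-BGP
 route-target import 65000:100   ! import VPNv4 routes carrying this tag
!
interface GigabitEthernet0/1
 ip vrf forwarding CUSTOMER_A    ! place the CE-facing interface in the VRF
 ip address
```

Note that while the RD and RT happen to use the same value here, they are independent: the RD only disambiguates prefixes, while the RTs control import/export policy.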

The third and final part of the network that we need to look at is the CE devices:

[Figure: CE devices at the customer edge]

CE devices sit at the edge of the customer network and exchange IP routes with the PE devices using an IP routing protocol (BGP, OSPF, EIGRP, IS-IS, RIP). On the CE devices, routing happens in the global routing table (the normal, non-VRF routing table), while on the PE, routing happens in a separate VRF per customer. VRFs are locally significant, which means the CE does not need to know whether the PE interface is in a VRF. From the customer’s perspective, the routes advertised to the PE device are tunneled transparently across the service provider network and advertised to the CE at the other end without any special configuration.
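On the PE side, this split between per-VRF CE routing and global VPNv4 peering can be sketched as follows; the ASNs, VRF name, and loopback address are again hypothetical:

```
! Illustrative only: PE routing - eBGP to the CE inside the VRF,
! MP-BGP VPNv4 to the remote PE in the global table.
router bgp 65000
 neighbor remote-as 65000      ! remote PE loopback (illustrative)
 neighbor update-source Loopback0
 !
 address-family vpnv4
  neighbor activate
  neighbor send-community extended   ! carry RTs in the updates
 !
 address-family ipv4 vrf CUSTOMER_A
  neighbor remote-as 65100     ! the CE, unaware of any VRF
```

The CE simply runs ordinary BGP; all the VPN machinery (RD, RT, VPN label) is added by the PE when it converts the VRF route into a VPNv4 route.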

To better understand the whole process, let’s follow the life of an IP packet as it moves across the service provider network, from its signaling (control plane) to its forwarding (data plane). For simplicity, we will only look at Customer 1 in a single direction (unidirectional traffic).

  1. The first step is for LDP to do its magic. LDP label binding and signaling is done hop by hop, from the loopback of one PE to the other PE device, forming an LSP.

[Figure: hop-by-hop LDP label signaling forming the LSP]

In this case, the LSP is created for the Loopback0 prefix, and the labels signaled are: Null -> 33 -> 16 -> 40. The first label is always signaled as Null because of the Penultimate Hop Popping (PHP) function. This is simply an optimization that avoids a double label lookup at the last device of the LSP, the Label Edge Router (LER). You can read more on this feature in RFC 3031, section 3.16.

  2. In the second step, the customer sends a route update from the CE device, and it is advertised through the routing protocol configured between the CE and PE devices. When it reaches the PE, it is signaled through MP-BGP to the other PE as a VPNv4 route, with a Route Distinguisher attached to the prefix, a Route Target, and a label. In this case, the route with RD 200:1 is sent with Route Target 200:1 and label 21.

[Figure: MP-BGP VPNv4 update carrying the customer route]

The update packet in Wireshark looks like this:


In the first box you can see the Route Target value of 200:1. In the third box you can see the label value of 21 sent for the IPv4 route. Finally, in the last box you can see the Route Distinguisher value that makes the route globally unique.

  3. Finally, the customer on the opposite side sends packets toward the signaled route. These packets are forwarded from the CE to the PE as plain IPv4. From the PE, the IPv4 packets are encapsulated with two labels to be forwarded over the MPLS backbone: the VPN label and the IGP label. Again, the VPN label was signaled by MP-BGP and tells the opposite PE which VRF to send the traffic into. The IGP label was signaled by LDP and is used to forward packets hop by hop across the MPLS backbone.

[Figure: two-label encapsulation of customer traffic across the MPLS backbone]

In this example, when the customer’s IPv4 packet enters the ingress PE from the CE, the PE determines the correct VRF from the interface the packet arrived on and looks up the destination IP address in that VRF’s forwarding table. It encapsulates the packet with both the IGP and VPN labels (21 and 40) and sends it to the first P device. Hop by hop, the P devices swap the IGP label along the path toward the BGP next-hop address, and the last P removes it before the packet enters the PE, because of the Null label (PHP). The egress PE looks up the VPN label in its forwarding table and forwards the packet without any labels (as a plain IP packet) toward the CE device.
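The resulting label stack on the core links can be pictured as:

```
+-----------+---------------------+---------------------+----------------------+
| L2 header | IGP/transport label | VPN label           | customer IPv4 packet |
+-----------+---------------------+---------------------+----------------------+
              swapped hop by hop     untouched in transit;
              by the P routers,      read by the egress PE
              popped by PHP          to pick the VRF
```

Because PHP strips the outer label one hop early, the egress PE receives the packet with only the VPN label on top, so a single lookup is enough to deliver it into the right VRF.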

Again, this example only showed unidirectional forwarding. If bidirectional traffic were required, Step 1 and Step 2 would have to happen again in the reverse direction so that the route back toward the source is installed in the control plane of the remote CE device.

In conclusion, MPLS L3VPN is a mix of several different protocols forming a highly scalable and flexible VPN solution. In Part 2, I will look at different use cases for MPLS L3VPNs and some of the most common deployment scenarios.


During my CCIE R&S studies I purposely avoided creating any content related to MPLS (Multiprotocol Label Switching). The problem with MPLS is that once you start talking about it, you are opening the doors to a much larger and deeper discussion on various other technologies. At the time, I didn’t feel quite prepared to embark on this path but after spending several months exploring this technology, I think I can bring some valuable input on the matter. This will be the first out of several posts on MPLS.

I still remember first hearing about MPLS from other engineers back when I was new in the networking industry. They explained to me that it could do VPNs, traffic engineering, advanced switching, guarantee bandwidth, and basically solve any problem I’ve ever had or would ever have. I was really impressed and wanted to learn more, so I looked it up on my good old search engine. To my surprise, the only information I could find was that MPLS was some kind of technique to direct packets based on labels. I did find a lot of other documentation talking about MPLS and MP-BGP, VRFs, RSVP, LDP/TDP, etc., but all of that was way over my head and too complicated for me at the time. Fast forward a couple of years, and I wish those network engineers had explained MPLS to me in a clearer manner. Let me attempt to explain MPLS the way I think they should have.

What is MPLS?

Multiprotocol Label Switching (MPLS) is, like I said before, a technique to direct packets based on labels. The key word in this sentence is “technique”, and note that a technique is not a service. What I’m trying to say is that MPLS doesn’t really do anything on its own; it is simply a way to enable services built with other technologies and protocols. In fact, the real value of MPLS is that it creates a new forwarding paradigm that is not available in conventional IP routing. The role of this paradigm is to decouple packet forwarding from the information carried in the IP header itself, thus allowing us to forward packets based on labels.

So if MPLS is just a technique and does not do anything on its own, why should I care to implement it in my network?

Well, the benefit of MPLS, through this new forwarding paradigm, is that it enables you to create new services for your network. Some of these services are:


-VPN services: L3VPN, L2VPN (VPLS, EoMPLS)

-Traffic Engineering and congestion management: MPLS-TE, RSVP-TE

-Failover services: FRR, Link and Node protection

How does it work?

This is what an MPLS label looks like:
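The original figure did not survive, but the 32-bit MPLS label entry, as defined in RFC 3032, looks like this: a 20-bit label value, 3 experimental bits (now the Traffic Class field, used for QoS), a bottom-of-stack (S) bit, and an 8-bit TTL:

```
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|              Label value (20 bits)    | EXP |S|  TTL (8 bits) |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
```

The S bit is set only on the bottom label of a stack, which is how a router knows whether an IP header or another label follows.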


Pretty simple, isn’t it? This label is inserted between the Layer 2 header and the Layer 3 portion of a packet, with labels distributed by protocols like the Label Distribution Protocol (LDP) or the legacy Tag Distribution Protocol (TDP), as below:


Once packets are tagged or labeled, there are different ways to create MPLS services using BGP, RSVP, VRFs, and other protocols, but I will cover those in future posts.

The important point to remember from all of this is that MPLS is a technique that creates a new forwarding paradigm and uses several other protocols and technologies to enable services used in today’s modern networks. In my next post, I will go over one of the more popular use-cases for MPLS right now: MPLS L3VPN’s.

CCDE – My journey to becoming a design guru begins now

It took me a couple of months to decide whether I should pursue another CCIE or to go in another direction. After debating between the different tracks I chose to move in another direction and dive into the design/architecture path. Today, I start my journey to becoming a Cisco Certified Design Expert.

As per wiki: “The CCDE identifies network professionals who have expert network design skills. CCDE focuses on network architecture and does not cover implementation and operations. CCDE supplies a vendor-neutral curriculum, testing business requirements analysis, design, planning, and validation practices.” This certification seems like the best choice for me right now as I try and transition into an architecture/lead engineer role.


One of the things I learned from my CCIE studies is that you need a plan, but you should also take it one step at a time. Following that mentality, I will only plan the upcoming months of study and adapt from there. For the next 2 months, my study schedule for this exam will be as follows:

Reading list:

BGP design and implementation

Optimal routing design

Definitive MPLS design

Video List:

BRKRST-2336 – EIGRP Deployment in Modern Networks

BRKRST-2042 Highly Available Wide Area Network Design

BRKRST-2335 IS-IS Network Design and Deployment

BRKSEC-4054 DMVPN Deployment Model

BRKRST-2310 Deploying OSPF in a Large Scale Network

BRKRST-3051 – Core Network Design: Minimizing Packet Loss with IGPs and MPLS

I will be writing blogs on the different technologies and topics covered in the CCDE. Hopefully this blog will stay informative and relevant throughout my journey.

Hour 9??: The end of my CCIE R&S journey

As you can tell from the blog title, I seem to have lost count of the hours I’ve spent studying for the CCIE R&S. I estimate by now it has been in the range of 900 to 1000 hours… This said, I’m ecstatic to finally say that I passed the certification on my second attempt in Toronto this week. Here is the success story of my second and last CCIE R&S attempt.

There were several mistakes I made in my initial CCIE lab attempt that I had to correct. One of these mistakes was not giving myself enough time to review all the material before the exam. The other big one was underestimating the mental endurance required for this test. To correct this, I took a week off from work and during this time, I did a full review of all material in 10 to 14 hour sessions per day. I also had a very strict sleeping schedule. Even though I was off for a week and could wake up at any time I wanted, I woke up at 6:30 AM and didn’t start troubleshooting labs until 8:30 AM. This was done to get my sleeping pattern in line with the exam date.

This change was important because, in my last attempt, I was getting mentally fatigued by the time the troubleshooting and diagnostic sections were done, and it resulted in me not doing well in the configuration section. The problem when you always train in 4-8 hour sessions is that your brain gets used to pushing only for that duration, and you hit a huge mental crash after it. Add the stress and pressure of the exam itself, and it just crushes your mental sharpness; you tend to make several small mistakes that cost you time and points.

I was originally thinking of booking my lab in RTP this time instead of Toronto, since I had many issues with the mobile lab last time (as you can read in my first CCIE lab attempt post). However, I figured the advantage of knowing the location and not having to worry about the administrative details would give me a better edge for this attempt. I did make one small change, though: I booked a better hotel to avoid the incident I had last time with the fire alarm.

I arrived at the testing center at 8:00 AM on Friday morning, and there were only 3 other candidates with me. The proctor said he was expecting a total of 8. We waited until 8:20, but the rest seemed to be MIA, so the proctor let us into the testing room, explained the rules, and let us start our exams. I saw on the board that there were 3 candidates for R&S, 3 for Service Provider, 1 for Data Center, and 1 for Security. Around 10 minutes after I started the troubleshooting section, the remaining candidates arrived, but I was really too busy to even notice them.

Troubleshooting started great and was much easier than last time, since the topology was fairly similar and I knew what I was getting into. There were 2 questions I got stuck on, but as I had practiced, I spent 2 minutes on them and skipped them. When I had finished all the other questions and came back to these, I had used 1h30 of my time and still had up to 1 hour for them. I was crushing it. I went back and fixed one within 5 minutes. The other took me around 20 minutes, and I realised it was something silly that I had overlooked. I did a full review of all the questions, caught a couple of mistakes, and corrected them quickly. I decided to use the extra 30 minutes to take a bathroom break. I washed my face with cold water, and when I came back I was sharp and ready for the diagnostic section.

Diagnostic was very similar to the first time… or so I thought. I actually thought I had gotten lucky with some of the same questions as last time and was overjoyed. I quickly realised that wasn’t the case when I checked the documents. The first topology and question were the same, but the problem was different. I was digging through the documents a little worried, because last time the problem was fairly obvious to me, but this time I couldn’t find it. Next thing I know, I look at the clock and there are 9 minutes left and I’m still on the first question. I skipped to the next question and was starting to panic. I answered the questions as fast as possible, basically making calculated guesses based on the choices given. There were 3 questions, and I hadn’t even completed the last one when I ran out of time.

At this point the exam took me to the configuration section, and I was really upset with myself. I figured I had gotten at best 60% if my answers were correct, but I was just as likely to get 0%, since I hadn’t had time to go through the configuration outputs to verify them. The seconds after realizing I was likely to fail the exam were probably some of the most important seconds of my life. In that moment, I had the choice to give up and go into the configuration section with a defeated mentality, or to keep pushing forward and forget everything that had just happened… I do not give up. Losers give up. So I started the configuration section as if I had just begun the exam… and I crushed it. I wrote 80% of the lab configuration in a notepad and double-checked everything before pasting. Once it was in the running-config I double-checked again, and even triple-checked once I had finished the section. I finished the configuration in 3h30 and had a lot of time to review. Even after all that checking, I still found some silly mistakes and corrected them quickly.

When I looked around after finishing, one of the R&S candidates was missing. I think he gave up. I was the first one done and handed my drawing sheets back to the proctor. He looked me in the eyes, smiled, and said “Good luck”.

The most painful part of the exam was waiting for the results. I was sure I had failed. I even went on the TechExams forum boards and said I thought I had failed, with maybe a 10% chance of passing given how badly I did in the diagnostic section. I was already preparing my study schedule for the next attempt and trying to book a new lab date. I didn’t get my results until 2:00 PM the next day.

I wasn’t expecting my results until Monday, since it was the weekend. I was on the train heading back from Toronto to Montreal, and the internet was really slow, when I received the email from Cisco to check the OLSM portal for lab results. I clicked on the link and could feel my heart pounding, even though my brain was telling me, “Don’t get excited, you probably failed.” After what seemed like an eternity, the Cisco website appeared with a “PASS” next to my lab date. I was looking at it and couldn’t believe it. Did I read it wrong? I clicked on the “PASS” link, and it brought me to another page where my CCIE number was printed. I did it. I had finally passed.

This exam was not just another exam to me. It was a journey and a life lesson: a lesson of tenacity, endurance, discipline, and perseverance. It has made me a better engineer and a better person. For all of you out there pursuing the CCIE, please read and remember this: knowledge can be gained and lost, but the values you will gain from this certification will be there forever. I believe in you. You can do it. Never give up. I sure didn’t.


CCIE # 48240

Hour 748: CCIE material review after first lab exam

A couple of weeks have passed since my first lab attempt. Have I studied since? A couple of hours here and there, but nothing as intense as before. I have not given up, but my lab failure has made me question whether I really want to pursue this exam.

This said, I have booked my second lab attempt for April 23rd 2015. This gives me a little over 3 months to start practicing and to review all the material once again. Now that I know what to expect from this exam I think I have a better chance of passing it.

In one of my last posts I said I would do a review of the material I used and how it helped me for the CCIE R&Sv5 so here it is.

Reading Material:

Routing TCP/IP Vol1-2:

These two books are really good for the CCIE Written although a lot of the content is old or better explained in the CCNP books. Some people call them the bible of routing and I can see where they are coming from. However, as a CCNP I was already proficient at routing and didn’t benefit from these books as much as the following ones.

Cisco QoS, Exam certification guide:

There was no QoS in my exam but this doesn’t mean you should ignore the topic. This book helped me tremendously at my job and during the practice labs. Who knows, maybe I’ll get some QoS on my next attempt. A must read for the CCIE Written.

CCIE Routing and Switching version 4

This book is useless. Most of the topics covered are already covered in all the other ones. I doubt that anyone would buy the v4 book now that the v5 one is out anyways.

MPLS Fundamentals

I did not read the whole MPLS Fundamentals book, but I should have. Most of the people I talked to recommended going through only the first few chapters, saying that you don’t need any of the advanced topics for the exam. My suggestion would be to read the whole book. For TSHOOT and CONFIG I had L3VPN and some of the more advanced concepts, like load-balancing between RRs on the PEs.

Developing IP Multicast Networks Vol I

In the CCNA-CCNP path you will learn almost nothing about multicast, yet you will need expert-level knowledge of the topic. This book is the bible on multicast. Read it once, read it again, and keep it as a reference.

Internet Routing Architecture

This book is good for best practices and designs. It’s good to read once you finish your CCNP to get a more in-depth understanding of BGP. Not a must have but definitely a good read.


This is the most important resource to read out of all of them. You must read the sections that are listed in the blueprint but not covered in the books above. I recommend you read the Q&A for most core topics and understand the ins and outs of these technologies. You do NOT have the time to go read the documentation and figure out why something is broken on your lab day. It only takes 1 mistake in one of the critical areas and you will FAIL the lab. The exam is auto-corrected by a script, so if you fail one section and it breaks multiple other sections, you will fail the lab.

Lab Practice:

IPexpert CCIE R&S Lab v4 Workbook 1-3 (not all Workbook 3)

These are the first workbooks I went through and the hardest. Marko Milivojevic made them before he left IPexpert. Compared to the real exam, they are much harder and more challenging. I’m thinking of redoing Workbook 3 and replacing the Frame Relay sections with DMVPN. I would say to try these once you have exhausted the INE and IPexpert v5 workbooks.

INE CCIE R&S v5 Technology Workbook

A ton of configuration tricks and caveats for each technology are explained in this workbook. I learned so much from it; it’s amazing. The best technology workbook on the market for sure, and a must-have for future reference.

IPexpert CCIE R&S Lab v5 Workbook 2 Lab 1-2

The full mock lab workbooks from IPexpert are OK but not great. They are easier than the exam and pretty short. The good thing about them is that they simulate the real thing fairly accurately, with a large topology and the TSHOOT/DIAG/CONFIG format.

INE CCIE R&S v5 Lab Workbook 1 – 2

Similar to their technology workbook, INE’s Lab workbook is by far the best on the market. I highly recommend them to anyone doing their CCIE Lab training. These are much harder than IPexpert v5 workbooks and about the same level as the exam. The only downside to the INE material is that there is no Diagnostic section.

Cisco 360 Labs 1-10

These are very good for learning the technologies and getting better at troubleshooting complex problems. However, the Cisco 360 TSHOOT and CONFIG labs have very small topologies (around 9 routers and 4 switches) and don’t reflect the real exam. Although Cisco made these labs, the TSHOOT labs are too easy compared to the real thing. I would stay away from these unless you are running out of material to study.