Category: Labs

LAB IX – RIPv2 -> OSPF Case Study

Building a use case from the CCDP FLG:

Topology:

  • Each site has two links to the HQ (top): a primary link over the WAN and a backup link over the Internet.
  • Internet and WAN connectivity runs over multipoint GRE tunnels to the sites, with static NHRP mappings.
  • The cost of the Internet links is increased so that they are only used as backups.
  • The backbone area is configured over both the WAN and the Internet.

Building the LAB:

OSPF Design

Building the Backbone:

Adding the tunnel interface and NHRP mappings on the WAN Hub Router (R1):
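
The full config isn't reproduced here, but a minimal sketch of the idea looks roughly like this; the tunnel addressing and the spoke NBMA addresses are placeholders, not the lab's actual values:

# mGRE tunnel on the WAN hub (R1) - sketch with placeholder addressing
interface Tunnel0
 ip address 172.16.1.1 255.255.255.0
 no ip redirects
 ip nhrp network-id 1
 # static NHRP mappings towards the spokes (placeholder NBMA addresses)
 ip nhrp map 172.16.1.2 192.0.2.2
 ip nhrp map multicast 192.0.2.2
 ip nhrp map 172.16.1.3 192.0.2.3
 ip nhrp map multicast 192.0.2.3
 tunnel source FastEthernet0/0
 tunnel mode gre multipoint
!
router ospf 1
 # backbone area over the WAN tunnel
 network 172.16.1.0 0.0.0.255 area 0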

And we have some routing on the Hubs:

LAB VIII: MPLS (MP-BGP – EoMPLS)

  • P Routers – Provider routers
    • MPLS Core
  • PE Routers – Provider Edge routers
    • MPLS – IP Edge
  • CE Routers – Customer Edge routers
    • IP Edge
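
For the EoMPLS part specifically, the PE side boils down to running MPLS/LDP in the core and stitching the CE-facing port into a pseudowire with an xconnect. A hedged sketch with placeholder interfaces and addresses (not the lab's actual values):

# PE router sketch - placeholder addressing
mpls label protocol ldp
!
interface FastEthernet1/0
 description Core-facing link towards the P router
 ip address 10.255.0.1 255.255.255.252
 mpls ip
!
interface FastEthernet0/0
 description CE-facing port, switched into the pseudowire
 no ip address
 xconnect 10.255.255.2 100 encapsulation mpls   # remote PE loopback and VC ID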

Traceroute (R6 -> R7)

Layer 3 setup:

 

GNS3 LAB:

 

 

(more…)

LAB VII: BGP communities

Building a case study from the ARCH FLG book: BGP communities.

The idea is to use BGP communities to influence the routing between Autonomous Systems with the following goals in mind:

  • Configure communities to tag the routes per building on each AS.
  • Configure communities as no-export so the routes of AS65001.building2 and AS65002.building2 are not exported through AS65000.
    • The routes will be tagged on R6 and R9 with community 65000:99 and processed on the AS boundary.
    • The routes of AS65001.building1 and AS65002.building1 are allowed to be exported.
  • Configure communities so that R7 and R8 can set their local preference on the AS65000 side.
    • Routes from R7 will be tagged with 65000:200, resulting in a local preference of 200.
    • Routes from R8 will be tagged with 65000:300, resulting in a local preference of 300.

AS      | Building               | Subnet        | Community             | Description
--------|------------------------|---------------|-----------------------|--------------------------------------------------------
AS65000 | Building 1 (Router 1)  | 10.0.1.0/24   | 65000:5001            |
AS65000 | Building 2 (Router 2)  | 10.0.2.0/24   | 65000:5002            | Single uplink to AS65001
AS65000 | Building 3 (Router 3)  | 10.0.3.0/24   | 65000:5003            | Double uplink to AS65002
AS65000 | Building 3 (Router 4)  | 10.0.3.0/24   | 65000:5003            | Double uplink to AS65002
AS65001 | Building 1 (Router 5)  | 10.0.111.0/24 | 65001:5102            |
AS65001 | Building 2 (Router 6)  | 10.0.112.0/24 | 65001:5102, 65000:99  | Community 65000:99 is used for no-export
AS65002 | Building 1 (Router 7)  | 10.0.221.0/24 | 65002:5202, 65000:200 | 65000:200 is used for local preference 200 in AS65000
AS65002 | Building 1 (Router 8)  | 10.0.221.0/24 | 65002:5201, 65000:300 | 65000:300 is used for local preference 300 in AS65000
AS65002 | Building 3 (Router 9)  | 10.0.222.0/24 | 65002:5202, 65000:99  | Community 65000:99 is used for no-export
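
A hedged sketch of how this can be wired up in IOS (neighbor addresses, prefix-list and route-map names are placeholders): routes get tagged with the communities on export, and the AS65000 border router matches those communities on import to set no-export or local preference.

# On R6 (AS65001) - tag the building 2 routes towards AS65000 (placeholder neighbor/prefix-list)
ip prefix-list BUILDING2 seq 5 permit 10.0.112.0/24
route-map TAG-OUT permit 10
 match ip address prefix-list BUILDING2
 set community 65001:5102 65000:99
route-map TAG-OUT permit 20
!
router bgp 65001
 neighbor 10.0.0.1 send-community
 neighbor 10.0.0.1 route-map TAG-OUT out

# On the AS65000 border router - act on the received communities (placeholder names)
ip community-list standard NO-EXPORT-ROUTES permit 65000:99
ip community-list standard LP200-ROUTES permit 65000:200
route-map FROM-PEER permit 10
 match community NO-EXPORT-ROUTES
 set community no-export additive
route-map FROM-PEER permit 20
 match community LP200-ROUTES
 set local-preference 200
route-map FROM-PEER permit 30
!
router bgp 65000
 neighbor 10.0.0.2 route-map FROM-PEER in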

LAB:

LAYER3:

(more…)

LAB VI: Multicast PIM Sparse mode

https://en.wikipedia.org/wiki/Protocol_Independent_Multicast

  • PIM Sparse Mode (PIM-SM) explicitly builds unidirectional shared trees rooted at a rendezvous point (RP) per group, and optionally creates shortest-path trees per source. PIM-SM generally scales fairly well for wide-area usage.

Packet capture when generating traffic from the Video Server (R1) to the multicast group address 224.3.2.1.

Connectivity via OSPF:

On all routers:
router ospf 1
 network 0.0.0.0 255.255.255.255 area 0

R1#sh ip route
Codes: C - connected, S - static, R - RIP, M - mobile, B - BGP
       D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
       N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
       E1 - OSPF external type 1, E2 - OSPF external type 2
       i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2
       ia - IS-IS inter area, * - candidate default, U - per-user static route
       o - ODR, P - periodic downloaded static route

Gateway of last resort is not set

     1.0.0.0/32 is subnetted, 1 subnets
O       1.1.1.1 [110/21] via 10.0.0.2, 00:14:46, FastEthernet0/0
     20.0.0.0/24 is subnetted, 1 subnets
O       20.0.0.0 [110/20] via 10.0.0.2, 00:14:46, FastEthernet0/0
     10.0.0.0/24 is subnetted, 1 subnets
C       10.0.0.0 is directly connected, FastEthernet0/0
     30.0.0.0/24 is subnetted, 1 subnets
O       30.0.0.0 [110/30] via 10.0.0.2, 00:14:46, FastEthernet0/0

Multicast configuration:

On all routers:
# Enable Multicast routing
ip multicast-routing

#Enable PIM Sparse-mode on the interfaces
R1(config)#int fa0/0
R1(config-if)#ip pim sparse-mode
R1(config)#int fa0/1
R1(config-if)#ip pim sparse-mode

#Add RP address
ip pim rp-address 1.1.1.1
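
To reproduce the packet capture, the receiving side can statically join the group and the video server can simply ping it; a hedged sketch (the receiver-facing interface is a placeholder):

# On the last-hop router, join the group on the receiver-facing interface
interface FastEthernet0/1
 ip igmp join-group 224.3.2.1
!
# From the video server (R1), generate multicast traffic
R1#ping 224.3.2.1 repeat 100
!
# Verify the multicast state and the RP mapping
R1#show ip mroute 224.3.2.1
R1#show ip pim rp mapping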

(more…)

LAB V ( Nexus7k, Overlay Transport Virtualization )

OTV: Overlay Transport Virtualization

OTV (Overlay Transport Virtualization) is a technology that provides Layer 2 extension capabilities between different data centers.
In its simplest form, OTV is a new DCI (Data Center Interconnect) technology that routes MAC-based information by encapsulating traffic in normal IP packets for transit.

  • Transparent workload mobility
  • Business resiliency
  • Superior computing resource efficiencies
Component          | Description                                                                                      | Config
-------------------|--------------------------------------------------------------------------------------------------|-----------------------------------
Overlay Interface  | Logical OTV tunnel interface.                                                                    | interface Overlay1
OTV Join Interface | The physical link or port-channel that you use to route upstream towards the datacenter interconnect. | otv join-interface Ethernet2/1
OTV Control Group  | Multicast address used to discover the remote sites in the control plane.                       | otv control-group 224.100.100.100
OTV Data Group     | Used for tunneling multicast traffic over the OTV in the data plane.                            | otv data-group 232.1.2.0/24
Extend VLANs       | VLANs that will be tunneled over OTV.                                                            | otv extend-vlan 100
Site VLAN          | Used to synchronize the Authoritative Edge Device (AED) role within an OTV site.                | otv site-vlan 999
Site Identifier    | Should be unique per datacenter. Used in the AED election.                                       | otv site-identifier 0x1
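
Put together, a minimal edge-device sketch on NX-OS could look like this, using the values from the table above as examples:

feature otv
otv site-vlan 999
otv site-identifier 0x1
!
interface Overlay1
  otv join-interface Ethernet2/1
  otv control-group 224.100.100.100
  otv data-group 232.1.2.0/24
  otv extend-vlan 100
  no shutdown
# In multicast mode the join interface also needs IGMPv3 (ip igmp version 3) so it can join the control group.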

References:

Cisco: OTV Quick Start Guide

Cisco: NX-OS OTV Configuration Guide

Cisco: OTV Best Practices

Cisco: OTV Whitepaper

OTV Encapsulation

OTV adds 42 bytes of overhead to every packet traveling across the overlay network. The OTV edge device removes the CRC and 802.1Q fields from the original Layer 2 frame and then adds an OTV shim header, which carries the 802.1Q information (including the priority P-bit value) and the Overlay ID. It also adds an external IP header for the transport network. All OTV packets have the Don't Fragment (DF) bit set to 1 in the external IP header.

(more…)

LAB IV ( vPC – virtual Port-channels )

Component               | Description
------------------------|----------------------------------------------------------------------------------------------------------------
vPC Domain              | Includes the vPC peers, the keepalive links and the port-channels that use the vPC technology.
vPC Peer Switch         | The other switch within the vPC domain. Each switch is connected via the vPC peer link; one device is elected primary and the other secondary.
vPC Member Port         | Ports included within the vPCs.
vPC Peer Keepalive Link | Connects both vPC peer switches and carries monitoring traffic to/from each peer switch. Monitoring ensures the peer is both operational and running vPC.
vPC Peer Link           | Connects both vPC peer switches and carries BPDUs, HSRP hellos and MAC address information to the vPC peer. In the event of a vPC member port failure it also carries unicast traffic to the peer switch.
Orphan Port             | A port that is configured with a vPC VLAN (i.e. a VLAN that is carried over the vPC peer link) but is not configured as a vPC member port.

A virtual PortChannel (vPC) allows links that are physically connected to two different Cisco Nexus™ 5000 Series devices to appear as a single PortChannel to a third device. The third device can be a Cisco Nexus 2000 Series Fabric Extender or a switch, server, or any other networking device. A vPC can provide Layer 2 multipathing, which allows you to create redundancy by increasing bandwidth, enabling multiple parallel paths between nodes and load-balancing traffic where alternative paths exist.

After you enable the vPC function, you create a peer keepalive link, which sends heartbeat messages between the two vPC peer devices.

The vPC domain includes both vPC peer devices, the vPC peer keepalive link, the vPC peer link, and all the PortChannels in the vPC domain connected to the downstream device. You can have only one vPC domain ID on each device.
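
A minimal NX-OS sketch of the moving parts (domain ID, keepalive addresses and port-channel numbers are example values, not a specific lab config):

feature vpc
feature lacp
!
vpc domain 1
  peer-keepalive destination 192.168.1.2 source 192.168.1.1 vrf management
!
# vPC peer link between the two peer switches
interface port-channel10
  switchport mode trunk
  vpc peer-link
!
# vPC member port-channel towards the downstream device
interface port-channel20
  switchport mode trunk
  vpc 20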

(more…)

LAB III ( DMVPN, MGRE, NHRP, EIGRP)

The Next Hop Resolution Protocol (NHRP) is an Address Resolution Protocol (ARP)-like protocol that dynamically maps a Non-Broadcast Multi-Access (NBMA) network. With NHRP, systems attached to an NBMA network can dynamically learn the NBMA (physical) address of the other systems that are part of that network, allowing these systems to directly communicate.

NHRP is a client and server protocol where the hub is the Next Hop Server (NHS) and the spokes are the Next Hop Clients (NHCs). The hub maintains an NHRP database of the public interface addresses of each spoke. Each spoke registers its real address when it boots and queries the NHRP database for real addresses of the destination spokes to build direct tunnels.

https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/ipaddr_nhrp/configuration/xe-16/nhrp-xe-16-book/config-nhrp.html

HUB (R1):

R1: 
interface FastEthernet0/0
 ip address 192.168.1.100 255.255.255.0
 duplex full

interface Tunnel0
 ip address 10.1.1.1 255.255.255.0  #TUNNEL CONFIG
 no ip redirects
 ip mtu 1416
 ip hold-time eigrp 1 35         #EIGRP CONFIG
 no ip next-hop-self eigrp 1     #EIGRP CONFIG
 no ip split-horizon eigrp 1     #EIGRP CONFIG
 ip nhrp map multicast dynamic   #NHRP CONFIG 
 ip nhrp network-id 1            #NHRP CONFIG 
 tunnel source 192.168.1.100     #TUNNEL CONFIG
 tunnel mode gre multipoint      #TUNNEL CONFIG
!
router eigrp 1
 network 10.0.0.0
 network 172.16.0.0
 network 192.168.0.0
!
ip route 0.0.0.0 0.0.0.0 192.168.1.1  #ROUTING TO R2-INTERNET
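
The spoke side is the mirror image of the hub; a minimal sketch for one spoke, assuming the same addressing scheme (the spoke's tunnel IP and WAN-facing interface are placeholders):

SPOKE (e.g. R3):
interface Tunnel0
 ip address 10.1.1.3 255.255.255.0   #TUNNEL CONFIG
 ip mtu 1416
 ip nhrp map 10.1.1.1 192.168.1.100  #STATIC MAPPING FOR THE HUB (NHS)
 ip nhrp map multicast 192.168.1.100 #NHRP CONFIG
 ip nhrp nhs 10.1.1.1                #NHRP CONFIG
 ip nhrp network-id 1                #NHRP CONFIG
 tunnel source FastEthernet0/0       #TUNNEL CONFIG
 tunnel mode gre multipoint          #TUNNEL CONFIG
!
router eigrp 1
 network 10.0.0.0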

(more…)

LAB II ( Dual-Homed BGP, HSRP, Linkstate tracking )

Setup:

  • Dual-homed BGP between AS100 and AS200
  • AS100
    • HSRP 192.168.0.10 between R1 and R2
    • Router 1 HSRP Master
    • Linkstate tracking on Fa0/0
    • EIGRP as IGP
  • AS200
    • HSRP 10.10.10.10 between R3 and R4
    • Router 3 as HSRP Master
    • Linkstate tracking on Fa0/0
    • OSPF for IGP

Scenario: the link between Router1 and Router3 fails. Link-state tracking decrements the HSRP priority and the HSRP master role fails over.

When the link was restored, with default HSRP timers the HSRP master would switch back before the BGP session between Router1 and Router3 was re-established (at least in GNS3).
Combining link-state tracking with a preempt delay timer allows for a clean recovery.
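
A hedged sketch of that fix on R1: track the uplink towards Router3 and hold back HSRP preemption after recovery (the LAN interface name, priority, decrement and delay values are example numbers):

# R1 - HSRP on the LAN side, tracking the uplink towards Router3
track 1 interface FastEthernet0/0 line-protocol
!
interface FastEthernet0/1
 standby 1 ip 192.168.0.10
 standby 1 priority 110
 standby 1 preempt delay minimum 120   # wait before reclaiming the master role
 standby 1 track 1 decrement 20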

(more…)

LAB I ( OSPF over GRE with and without IPsec )

Setup:

  • R1 functions as the internet.
  • R2 is the first location with Public IP 1.1.1.2/30
  • R3 is the second location with Public IP 1.1.2.2/30

A GRE tunnel must be configured between R2 and R3 so that OSPF can run between them. In this example we will configure the tunnel both without and with IPsec.

Configuration without IPsec:

ROUTER 2:

R2:

# WAN ADDRESS
interface FastEthernet0/0
 ip address 1.1.1.2 255.255.255.0
 duplex auto
 speed auto
!

# TUNNEL ADDRESS
interface Tunnel0
 ip address 10.10.10.1 255.255.255.252
 tunnel source 1.1.1.2
 tunnel destination 1.1.2.2
!

# LAN ADDRESS
interface Loopback0
 ip address 192.168.10.1 255.255.255.0
!

# OSPF CONFIG
router ospf 1
 log-adjacency-changes
 network 10.10.10.0 0.0.0.3 area 0
 network 192.168.10.0 0.0.0.255 area 0
!

# DEFAULT ROUTE (TRAFFIC TOWARDS R3)
ip route 0.0.0.0 0.0.0.0 1.1.1.1
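
For the IPsec variant, the same tunnel is simply wrapped in a protection profile; a minimal sketch on R2, with the pre-shared key, transform set and profile names as example values:

# IKE PHASE 1 POLICY AND PRE-SHARED KEY (EXAMPLE VALUES)
crypto isakmp policy 10
 encr aes
 authentication pre-share
 group 2
crypto isakmp key CISCO123 address 1.1.2.2
!
# IPSEC TRANSFORM SET AND PROFILE
crypto ipsec transform-set GRE-TS esp-aes esp-sha-hmac
 mode transport
crypto ipsec profile GRE-PROTECTION
 set transform-set GRE-TS
!
# APPLY TO THE EXISTING TUNNEL
interface Tunnel0
 tunnel protection ipsec profile GRE-PROTECTION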

(more…)