Lab V ( Nexus7k, Overlay Transport Virtualization )

OTV: Overlay Transport Virtualization

OTV (Overlay Transport Virtualization) is a technology that provides Layer 2 extension capabilities between different data centers. In its simplest form, OTV is a new DCI (Data Center Interconnect) technology that routes MAC-based information by encapsulating traffic in normal IP packets for transit. Key benefits:

  • Transparent workload mobility
  • Business resiliency
  • Superior computing resource efficiencies
Key OTV configuration components ( a combined sketch follows the list ):

  • Overlay Interface: logical OTV tunnel interface.
    Config: interface Overlay1
  • OTV Join Interface: the physical link or port channel used to route upstream towards the data center interconnect.
    Config: otv join-interface Ethernet2/1
  • OTV Control Group: multicast address used to discover the remote sites in the control plane.
    Config: otv control-group 224.100.100.100
  • OTV Data Group: used for tunneling multicast traffic over the overlay in the data plane.
    Config: otv data-group 232.1.2.0/24
  • Extend VLANs: VLANs that will be tunneled over OTV.
    Config: otv extend-vlan 100
  • Site VLAN: used to synchronize the Authoritative Edge Device (AED) role within an OTV site.
    Config: otv site-vlan 999
  • Site Identifier: should be unique per data center; used in the AED election.
    Config: otv site-identifier 0x1
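
Putting those commands together, a minimal single-edge-device configuration might look like the sketch below. It reuses the example values from the list above; treat it as an illustration of the workflow rather than a complete deployment (a real setup needs matching configuration at the remote site).

feature otv

otv site-vlan 999
otv site-identifier 0x1

interface Overlay1
  otv join-interface Ethernet2/1
  otv control-group 224.100.100.100
  otv data-group 232.1.2.0/24
  otv extend-vlan 100
  no shutdown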

References:

  • Cisco: OTV Quick Start Guide
  • Cisco: NX-OS OTV Configuration Guide
  • Cisco: OTV Best Practices
  • Cisco: OTV Whitepaper

OTV Encapsulation

OTV adds 42 bytes of overhead to every packet traveling across the overlay network. The OTV edge device removes the CRC and 802.1Q fields from the original Layer 2 frame, then adds an OTV shim header, which carries the 802.1Q information (including the priority P-bit value) and the Overlay ID, plus an external IP header for the transport network. All OTV packets have the Don’t Fragment (DF) bit set to 1 in the external IP header, so the transport network must carry the larger packets without fragmenting them.
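
Since fragmentation is off the table, the common fix is to raise the MTU along the transport path (or lower it on the end hosts). A rough sketch, assuming the sites send standard 1500-byte frames:

interface Ethernet2/1
  mtu 1542    #1500-BYTE FRAME + 42 BYTES OF OTV OVERHEAD; VALUE IS AN ASSUMPTION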


LAB IV ( vPC – virtual Port-channels )

vPC components:

  • vPC Domain: includes the vPC peers, the keepalive link, and the port channels that use the vPC technology.
  • vPC Peer Switch: the other switch within the vPC domain. Each switch is connected via the vPC peer link; it’s also worth noting that one device is elected primary and the other secondary.
  • vPC Member Port: a port included within a vPC.
  • vPC Peer Keepalive Link: connects both vPC peer switches and carries monitoring traffic to/from each peer. Monitoring ensures that each switch is both operational and running vPC.
  • vPC Peer Link: connects both vPC peer switches and carries BPDUs, HSRP hellos, and MAC address information to the vPC peer. In the event of a vPC member port failure it also carries unicast traffic to the peer switch.
  • Orphan Port: a port that is configured with a vPC VLAN ( i.e. a VLAN carried over the vPC peer link ) but is not configured as a vPC member port.

A virtual PortChannel (vPC) allows links that are physically connected to two different Cisco Nexus™ 5000 Series devices to appear as a single PortChannel to a third device. The third device can be a Cisco Nexus 2000 Series Fabric Extender or a switch, server, or any other networking device. A vPC can provide Layer 2 multipathing, which allows you to create redundancy by increasing bandwidth, enabling multiple parallel paths between nodes and load-balancing traffic where alternative paths exist.

After you enable the vPC function, you create a peer keepalive link, which sends heartbeat messages between the two vPC peer devices.

The vPC domain includes both vPC peer devices, the vPC peer keepalive link, the vPC peer link, and all the PortChannels in the vPC domain connected to the downstream device. You can have only one vPC domain ID on each device.
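
As a minimal sketch of how those pieces map to NX-OS configuration (the domain ID, interface numbers, and keepalive addressing are illustrative assumptions, with the keepalive typically running over mgmt0 in the management VRF):

feature vpc
feature lacp

vpc domain 10
  peer-keepalive destination 10.0.0.2 source 10.0.0.1 vrf management

interface port-channel1
  switchport mode trunk
  vpc peer-link                 #PEER LINK BETWEEN THE vPC PEERS

interface port-channel20
  switchport mode trunk
  vpc 20                        #vPC TOWARDS THE DOWNSTREAM DEVICE

interface Ethernet1/20
  switchport mode trunk
  channel-group 20 mode active  #vPC MEMBER PORT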


LAB III ( DMVPN, MGRE, NHRP, EIGRP)

The Next Hop Resolution Protocol (NHRP) is an Address Resolution Protocol (ARP)-like protocol that dynamically maps a Non-Broadcast Multi-Access (NBMA) network. With NHRP, systems attached to an NBMA network can dynamically learn the NBMA (physical) address of the other systems that are part of that network, allowing these systems to directly communicate.

NHRP is a client and server protocol where the hub is the Next Hop Server (NHS) and the spokes are the Next Hop Clients (NHCs). The hub maintains an NHRP database of the public interface addresses of each spoke. Each spoke registers its real address when it boots and queries the NHRP database for real addresses of the destination spokes to build direct tunnels.

https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/ipaddr_nhrp/configuration/xe-16/nhrp-xe-16-book/config-nhrp.html

HUB (R1):
interface FastEthernet0/0
 ip address 192.168.1.100 255.255.255.0
 duplex full

interface Tunnel0
 ip address 10.1.1.1 255.255.255.0  #TUNNEL CONFIG
 no ip redirects
 ip mtu 1416
 ip hold-time eigrp 1 35         #EIGRP CONFIG
 no ip next-hop-self eigrp 1     #EIGRP CONFIG
 no ip split-horizon eigrp 1     #EIGRP CONFIG
 ip nhrp map multicast dynamic   #NHRP CONFIG 
 ip nhrp network-id 1            #NHRP CONFIG 
 tunnel source 192.168.1.100     #TUNNEL CONFIG
 tunnel mode gre multipoint      #TUNNEL CONFIG
!
router eigrp 1
 network 10.0.0.0
 network 172.16.0.0
 network 192.168.0.0
!
ip route 0.0.0.0 0.0.0.0 192.168.1.1  #ROUTING TO R2-INTERNET
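
The hub is only half the picture; a matching spoke might look like the sketch below. The spoke addressing is an assumption for illustration, while the NHRP mappings and NHS point at the hub's tunnel and public addresses from the config above.

SPOKE (R3, for example):
interface Tunnel0
 ip address 10.1.1.3 255.255.255.0
 no ip redirects
 ip mtu 1416
 ip nhrp map 10.1.1.1 192.168.1.100   #STATIC MAPPING TO THE HUB
 ip nhrp map multicast 192.168.1.100  #SEND MULTICAST (EIGRP) TO THE HUB
 ip nhrp nhs 10.1.1.1                 #HUB IS THE NEXT HOP SERVER
 ip nhrp network-id 1
 tunnel source FastEthernet0/0
 tunnel mode gre multipoint
!
router eigrp 1
 network 10.0.0.0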


LAB II ( Dual-Homed BGP, HSRP, Linkstate tracking )

Setup:

  • Dual-homed BGP between AS100 and AS200
  • AS100
    • HSRP 192.168.0.10 between R1 and R2
    • Router 1 HSRP Master
    • Linkstate tracking on Fa0/0
    • EIGRP as IGP
  • AS200
    • HSRP 10.10.10.10 between R3 and R4
    • Router 3 as HSRP Master
    • Linkstate tracking on Fa0/0
    • OSPF for IGP

Scenario: the link between Router1 and Router3 fails. Linkstate tracking decrements the HSRP priority and the master role fails over to the standby router.

When the link was restored with default HSRP timers, the HSRP master would switch back before the BGP session between Router1 and Router3 was re-established (at least in GNS3). Configuring HSRP preempt delay timers together with linkstate tracking allowed for a clean recovery; a sketch follows.
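
A sketch of the relevant piece on R1 (interface numbers, priority, decrement, and the 60-second preempt delay are illustrative values, not the lab's exact config):

interface FastEthernet0/1
 ip address 192.168.0.1 255.255.255.0
 standby 1 ip 192.168.0.10
 standby 1 priority 110
 standby 1 preempt delay minimum 60   #HOLD OFF PREEMPTION UNTIL BGP CAN RE-ESTABLISH
 standby 1 track FastEthernet0/0 20   #DECREMENT PRIORITY IF THE BGP-FACING LINK FAILS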


Quality of Service II ( Deployment, Design )

Quality of Service Deployment

Choosing the correct WAN Type.

  • WAN providers: you get what you pay for.
    • Stick with tried-and-true providers.
  • Don’t design a sinking ship: provision enough bandwidth up front.
  • Determine bursting capabilities.
  • QoS class / policy support.
  • Multicast support.

Modular QoS CLI ( MQC )

  • Class-map
R1(config)#class-map ccdp
Class-map configuration commands:
  description  Class-Map description
  exit         Exit from class-map configuration mode
  match        classification criteria
  no           Negate or set default values of a command

R1(config-cmap)#mat
R1(config-cmap)#match ?
  access-group         Access group
  any                  Any packets
  atm                  Match on ATM info
  class-map            Class map
  cos                  IEEE 802.1Q/ISL class of service/user priority values
  destination-address  Destination address
  discard-class        Discard behavior identifier
  dscp                 Match DSCP in IPv4 and IPv6 packets
  fr-de                Match on Frame-relay DE bit
  fr-dlci              Match on fr-dlci
  group-object         Match object-group
  input-interface      Select an input interface to match
  ip                   IP specific values
  mpls                 Multi Protocol Label Switching specific values
  not                  Negate this match result
  packet               Layer 3 Packet length
  precedence           Match Precedence in IPv4 and IPv6 packets
  protocol             Protocol
  qos-group            Qos-group
  source-address       Source address
  vlan                 VLANs to match
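
The class-map is only the classification half of MQC; it feeds a policy-map, which is then attached to an interface with a service-policy. A minimal end-to-end sketch (the DSCP value, bandwidth figure, and interface are arbitrary choices for illustration):

class-map match-all ccdp
 match dscp ef
!
policy-map ccdp-policy
 class ccdp
  priority 512        #LOW-LATENCY QUEUE, 512 KBPS
 class class-default
  fair-queue
!
interface Serial0/0
 service-policy output ccdp-policy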


Quality of Service I ( QoS, Models, Methods )

What is QoS?

  • http://docwiki.cisco.com/wiki/Quality_of_Service_Networking
  • Quality of life insurance
  • The ability to dictate traffic treatment

    • Prioritization.
      • Only kicks in during congestion.
    • Shaping / Policing.
      • Shaping: buffer and mold the traffic down to a specific speed.
      • Policing: drop or re-mark ‘evil’ traffic types ( p2p / video ).
    • Advanced strategies ( WRED – Weighted Random Early Detection ).
      • Drop selected TCP streams early so queues never hit their max ( tail drop ).
  • Strategies to fight the enemy:
    • Delay ( how long it takes a packet to get to the other side ).
    • Jitter ( delay variation; how much the gaps between packets A, B, and C vary ).
    • Packet loss.
              Audio       Video
Jitter        < 30 ms     < 30 ms
Delay         < 150 ms    < 150 ms
Loss          < 1%        < 1%
QoS marking   DSCP EF     DSCP AF41
Bandwidth     Low         High
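
Putting those markings into practice, a short sketch that marks audio EF and video AF41 on ingress (the class names and NBAR match criteria are assumptions; deployments often match on ACLs instead):

class-map match-all VOICE
 match protocol rtp audio
class-map match-all VIDEO
 match protocol rtp video
!
policy-map MARK-IN
 class VOICE
  set dscp ef
 class VIDEO
  set dscp af41
!
interface FastEthernet0/0
 service-policy input MARK-IN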


Datacenter Design VI ( SDN )

Software Defined Networking

  • Advantages of SDN
    • Automatic Infrastructure Provisioning
    • Multi-tenant environments
    • Flexible placement of servers ( Mobility )
    • Health monitoring of applications
    • Application-to-network ( Southbound ) and network-to-application ( Northbound ) communication
  • Cisco’s SDN implementation: Application Centric Infrastructure ( ACI )

Three key ingredients for ACI

  • Nexus 9000 series ( 9300 / 9500 ).
  • Application Policy Infrastructure Controller ( APIC ).
    • Cisco recommends a minimum of three APIC servers.
  • Policy Model ( “What talks to what and how” ).


Datacenter Design IV ( VPC , MEC, Fabric Extenders )

What is a vPC (virtual Port Channel)

  • Nexus series network virtualisation technology.
  • “Lightweight” VSS – combines ports, not switches.
  • Allows links on different switches to appear as a single device to the downstream neighbor.
  • The downstream device can be anything supporting 802.3ad ( LACP ) – see the sketch after this list.
  • Commonly called Multi-Chassis EtherChannel ( MEC ).
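
From the downstream side there is nothing vPC-specific to configure; the device simply bundles its uplinks into one LACP port channel, as in this sketch for a Catalyst-style IOS switch (interface numbers assumed):

interface range GigabitEthernet0/1 - 2
 channel-group 1 mode active   #LACP, ONE LINK TO EACH vPC PEER
!
interface Port-channel1
 switchport mode trunk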
