[DC] Storage Networking & FibreChannel

LAN and SAN Separation

  • Security – separation protects the storage network from unauthorized access
  • Bandwidth – SAN needs more bandwidth than LAN
  • Flow Control – SAN is lossless and LAN is lossy
  • Performance – SAN provides more performance than LAN environments

LAN vs SAN flow control

  • Flow control determines how the rate of data transmission is managed between sender and receiver
  • Ethernet Flow control ( LAN )
    • Source transmits packets until the receiver’s buffers overflow; the receiver then sends a “Pause” frame
    • Lost packets are retransmitted
  • Fibre Channel ( SAN )
    • Credit based mechanism – Receiver has control
    • Source does not send a frame until the receiver tells the source it can receive a frame by sending “Ready” signal back.
    • “Lossless Fabric”


  • SAN Topologies
    • Point-to-Point
      • Initiator (server) and Target (storage) directly connected
    • Arbitrated Loop ( FC-AL ) ( Legacy )
      • Logical ring topology, similar to Token Ring
      • Devices must arbitrate for access before transmitting on the ring
    • Switched Fabric ( FC-SW ) ( Standard )
      • Logical equivalent of a switched Ethernet LAN
      • Switches manage the fabric, allowing any-to-any communication
      • Supports more than 16 million device addresses
  • FibreChannel Port types
    • N_port – Node Port
    • NL_port – Node Loop Port
    • F_port – Fabric Port
    • FL_port – Fabric Loop Port
    • E_port – Expansion Port ( ISL )
    • TE_port – Trunking Expansion Port
  • FC Addressing is analogous to IP over Ethernet
    • IP addresses are logical and manually assigned
    • Ethernet MAC Addresses are physical and burned in
    • FC World Wide Names ( WWNs ) – analogous to MAC addresses, used for Zoning
      • 8 byte address burned in by the manufacturer
      • World Wide Node Name ( WWNN )
      • World Wide Port Name ( WWPN )
    • FC Identifier ( FCID ) – analogous to IP addresses, used for Routing

      • 3 byte logical address assigned by fabric
      • FCID is subdivided into three fields:
        • Domain ID
          • Each switch gets a Domain ID
        • Area ID
          • Group of ports on a switch have an Area ID
        • Port ID
          • End station connected to switch gets a Port ID
  • FibreChannel Nameserver ( FCNS)
    • analogous to ARP cache
    • Used to resolve WWN ( physical address ) to FCID ( logical address )
    • Like FSPF, FCNS requires no configuration
  • FibreChannel Logins
    • Ethernet networks are connectionless
    • Fibre Channel networks are connection oriented
      • All end stations must first register with the control plane of the fabric before sending any traffic.
    • Fabric Registration has three parts
      • Fabric Login ( FLOGI)
      • Port Login ( PLOGI)
      • Process Login ( PRLI )
    • sh flogi database
    • sh fcns database
  • VSANs
    • Logical separation of SAN traffic
  • Zoning
    • like an ACL in the IP world
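As a sketch of how VSANs and zoning tie together on an MDS/NX-OS switch (the VSAN number, zone names, and pWWN values below are hypothetical):

```
vsan database
  vsan 10 name SAN_A

! Zone permitting one initiator and one target (pWWNs are placeholders)
zone name SERVER1_TO_ARRAY1 vsan 10
  member pwwn 21:00:00:aa:bb:cc:dd:01
  member pwwn 50:00:00:aa:bb:cc:dd:02

! Zones are grouped into a zoneset, which must be activated
zoneset name ZONESET_A vsan 10
  member SERVER1_TO_ARRAY1
zoneset activate name ZONESET_A vsan 10
```

Like an ACL, only members of the same active zone can communicate; everything else is implicitly denied.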



[DC] NX-OS – Fabricpath


Cisco FabricPath is a Cisco NX-OS software innovation combining the plug-and-play simplicity of Ethernet with the reliability and scalability of Layer 3 routing.

Using FabricPath, you can build highly scalable Layer 2 multipath networks without the Spanning Tree Protocol. Such networks are particularly suitable for large virtualization deployments, private clouds, and high-performance computing (HPC) environments.


Datacenter Design V ( TRILL, Fabric Path )



  • Classic Ethernet ( CE )
    • Regular Ethernet with regular flooding, regular STP, etc.
  • Leaf switch
    • Connects CE domain to FP domain
  • Spine switch
    • FP backbone switch; all of its ports are in the FP domain
  • FP Core Ports
    • Links on leaf up to Spine, or Spine to Spine
    • i.e. the switchport mode fabricpath links
  • CE Edge Ports
    • Links of leaf connecting to regular CE domain (to servers / switches)
    • i.e. NOT the switchport mode fabricpath links

Activating the FabricPath feature set

Activation requires the ENHANCED_LAYER2_PKG license, or the 120-day grace period:
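Before the VLAN and interface configuration below, the feature set itself is installed and enabled in the default VDC; a minimal sketch:

```
install feature-set fabricpath
feature-set fabricpath
```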


vlan 100
  mode fabricpath
  name test

interface Ethernet2/1
  switchport mode fabricpath
  no shutdown

interface Ethernet2/2
  switchport mode fabricpath
  no shutdown

N7K3# sh run int e2/9
interface Ethernet2/9
  switchport access vlan 100
  no shutdown
N7K3# sh fabricpath isis

Fabricpath IS-IS domain : default
  System ID : 0026.c734.4f2f  IS-Type : L1 Fabric-Control SVI: Unknown
  SAP : 432  Queue Handle : 15
  Maximum LSP MTU: 1492
  Graceful Restart enabled. State: Inactive
  Last graceful restart status : none
  Graceful Restart holding time:60
  Metric-style : advertise(wide), accept(wide)
  Start-Mode: Complete [Start-type configuration]
  Area address(es) :
  Process is up and running
  CIB ID: 1
  Interfaces supported by Fabricpath IS-IS :
  Level 1
  Authentication type and keychain not configured
  Authentication check specified
  LSP Lifetime: 1200
  L1 LSP GEN interval- Max:8000 Initial:50      Second:50
  L1 SPF Interval- Max:8000     Initial:50      Second:50
  MT-0 Ref-Bw: 400000
        Max-Path: 16
  Address family Swid unicast :
    Number of interface : 6
    Distance : 115
  L1 Next SPF: Inactive

N7K3# sh fabricpath switch-id
                        FABRICPATH SWITCH-ID TABLE
Legend: '*' - this system
        '[E]' - local Emulated Switch-id
        '[A]' - local Anycast Switch-id
Total Switch-ids: 4
    1           0026.c751.bd2f    Primary     Confirmed Yes     No
    2           0026.c71f.a62f    Primary     Confirmed Yes     No
*   3           0026.c734.4f2f    Primary     Confirmed Yes     No
    4           0026.c7cb.4b2f    Primary     Confirmed Yes     No
N7K3# sh cdp nei
Capability Codes: R - Router, T - Trans-Bridge, B - Source-Route-Bridge
                  S - Switch, H - Host, I - IGMP, r - Repeater,
                  V - VoIP-Phone, D - Remotely-Managed-Device,
                  s - Supports-STP-Dispute

Device-ID          Local Intrfce  Hldtme Capability  Platform      Port ID
N7k1(TBC751BD00B)   Eth2/1         147    R S I s   N7K-C7018     Eth2/5
N7k1(TBC751BD00B)   Eth2/2         148    R S I s   N7K-C7018     Eth2/6
N7K2(TBC71FA600B)   Eth2/5         170    R S I s   N7K-C7018     Eth2/5
N7K2(TBC71FA600B)   Eth2/6         170    R S I s   N7K-C7018     Eth2/6
R1                  Eth2/9         134    R S I     3725          Fas0/0

Total entries displayed: 5
N7K3# sh fabricpath route
FabricPath Unicast Route Table
'a/b/c' denotes ftag/switch-id/subswitch-id
'[x/y]' denotes [admin distance/metric]
ftag 0 is local ftag
subswitch-id 0 is default subswitch-id

FabricPath Unicast Route Table for Topology-Default

0/3/0, number of next-hops: 0
        via ---- , [60/0], 0 day/s 03:03:28, local
1/1/0, number of next-hops: 2
        via Eth2/1, [115/400], 0 day/s 03:01:13, isis_fabricpath-default
        via Eth2/2, [115/400], 0 day/s 03:01:13, isis_fabricpath-default
1/2/0, number of next-hops: 2
        via Eth2/5, [115/400], 0 day/s 03:00:59, isis_fabricpath-default
        via Eth2/6, [115/400], 0 day/s 03:00:59, isis_fabricpath-default
1/4/0, number of next-hops: 4
        via Eth2/1, [115/800], 0 day/s 03:00:59, isis_fabricpath-default
        via Eth2/2, [115/800], 0 day/s 03:00:59, isis_fabricpath-default
        via Eth2/5, [115/800], 0 day/s 03:00:59, isis_fabricpath-default
        via Eth2/6, [115/800], 0 day/s 03:00:59, isis_fabricpath-default


  • VDC
  • VPC
  • Fabricpath
  • Fabric Extenders (FEX)
  • OTV

VDC ( Virtual Device Context )



A VDC ( Virtual Device Context ) virtualizes the device itself, presenting the physical switch as multiple logical devices. Each VDC contains its own unique and independent set of VLANs and VRFs. Physical ports can be assigned to each VDC, so the hardware data plane is virtualized as well. Each VDC also has its own separate management domain, so the management plane is virtualized too.

Create a new VDC:

N7k1(config)# vdc N5K1
N7k1# switchto vdc N5K1

Show allocated interfaces:

switch# show vdc membership

vdc_id: 0 vdc_name: switch interfaces:

        Ethernet2/1           Ethernet2/2           Ethernet2/3
        Ethernet2/4           Ethernet2/5           Ethernet2/6
        Ethernet2/7           Ethernet2/8           Ethernet2/9
        Ethernet2/10          Ethernet2/11          Ethernet2/12
        Ethernet2/13          Ethernet2/14          Ethernet2/15
        Ethernet2/16          Ethernet2/17          Ethernet2/18
        Ethernet2/19          Ethernet2/20          Ethernet2/21
        Ethernet2/22          Ethernet2/23          Ethernet2/24
        Ethernet2/25          Ethernet2/26          Ethernet2/27
        Ethernet2/28          Ethernet2/29          Ethernet2/30
        Ethernet2/31          Ethernet2/32          Ethernet2/33
        Ethernet2/34          Ethernet2/35          Ethernet2/36
        Ethernet2/37          Ethernet2/38          Ethernet2/39
        Ethernet2/40          Ethernet2/41          Ethernet2/42
        Ethernet2/43          Ethernet2/44          Ethernet2/45

vdc_id: 1 vdc_name: N5K1


Allocate interfaces:

N7k1(config)#vdc N5K1
N7k1(config-vdc)#allocate interface e2/1 - 12

VPC ( Virtual Port Channel )


LAB IV ( vPC – virtual Port-channels )


[DC] Nexus Models

Nexus 7000/7700:
  • 1/10/40/100 Gbps
  • Layer 2 and Layer 3 LAN switching
  • FCoE SAN switching
  • No native FC ports
  • Highly redundant

Nexus 5500/5600:
  • 1/10/40 Gbps
  • Layer 2 and Layer 3 LAN switching
  • FCoE SAN switching
  • Native FC ports

Nexus 2000 ( FEX ):
  • 1/10/40 Gbps Fabric Extender
  • No local switching ( traffic is switched by the parent )




LAB VII: BGP communities

Building a case study from the ARCH FLG book; BGP communities.

The idea is to use BGP communities to influence the routing between Autonomous Systems with the following goals in mind:

  • Configure communities to tag the routes per building on each AS.
  • Configure communities as no-export so the routes of AS65001.building2 and AS65002.building2 are not exported through AS65000.
    • The routes will be tagged on R6 and R9 with community 65000:99 and processed on the AS boundary.
    • The routes of AS65001.building1 and AS65002.building1 are allowed to be exported.
  • Configure communities so that R7 and R8 can set their local preference on the AS65000 side.
    • Routes tagged on R7 with 65000:200 will receive a local preference of 200.
    • Routes tagged on R8 with 65000:300 will receive a local preference of 300.
  • AS65000 – Building 1 ( Router 1 )
  • AS65000 – Building 2 ( Router 2 ) – uplink to AS65001
  • AS65000 – Building 3 ( Router 3 ) – uplink to AS65002
  • AS65000 – Building 3 ( Router 4 ) – uplink to AS65002
  • AS65001 – Building 1 ( Router 5 )
  • AS65001 – Building 2 ( Router 6 ) – community 65000:99 is used for no-export
  • AS65002 – Building 1 ( Router 7 ) – 65000:200 is used for local preference 200 in AS65000
  • AS65002 – Building 1 ( Router 8 ) – 65000:300 is used for local preference 300 in AS65000
  • AS65002 – Building 3 ( Router 9 ) – community 65000:99 is used for no-export
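A sketch of what the community handling could look like in IOS (route-map and community-list names are assumptions; the community values come from the lab goals above):

```
! Required so that communities in 65000:NN format are parsed/displayed
ip bgp-community new-format

! On R6 / R9: tag the building-2 routes on the way out
route-map TAG-NOEXPORT permit 10
 set community 65000:99

! On the AS65000 boundary: match 65000:99 and set the no-export community
ip community-list standard NOEXPORT permit 65000:99
route-map FROM-PEER permit 10
 match community NOEXPORT
 set community no-export
route-map FROM-PEER permit 20

! On the AS65000 side facing R7/R8: map communities to local preference
ip community-list standard LP200 permit 65000:200
ip community-list standard LP300 permit 65000:300
route-map SET-LP permit 10
 match community LP200
 set local-preference 200
route-map SET-LP permit 20
 match community LP300
 set local-preference 300
route-map SET-LP permit 30
```

Note that communities are only propagated to a neighbor when `neighbor … send-community` is configured on that session.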




Starting CCNA: Datacenter

Next up, Datacenter!


  • Data Center Physical Infrastructure
  • Basic Data Center Networking Concepts
  • Advanced Data Center Networking Concepts
  • Basic Data Center Storage
  • Advanced Data Center Storage



  • Unified Computing
  • Network Virtualization
  • Cisco Data Center Networking Technologies
  • Automation and Orchestration
  • Application Centric Infrastructure


LAB VI: Multicast PIM Sparse mode


  • PIM Sparse Mode (PIM-SM) explicitly builds unidirectional shared trees rooted at a rendezvous point (RP) per group, and optionally creates shortest-path trees per source. PIM-SM generally scales fairly well for wide-area usage.

Packet capture when generating traffic from the Video Server (R1) to the multicast group address

Connectivity via OSPF:

On all routers:
router ospf 1
 network area 0

R1#sh ip route
Codes: C - connected, S - static, R - RIP, M - mobile, B - BGP
       D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
       N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
       E1 - OSPF external type 1, E2 - OSPF external type 2
       i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2
       ia - IS-IS inter area, * - candidate default, U - per-user static route
       o - ODR, P - periodic downloaded static route

Gateway of last resort is not set is subnetted, 1 subnets
O [110/21] via, 00:14:46, FastEthernet0/0 is subnetted, 1 subnets
O [110/20] via, 00:14:46, FastEthernet0/0 is subnetted, 1 subnets
C is directly connected, FastEthernet0/0 is subnetted, 1 subnets
O [110/30] via, 00:14:46, FastEthernet0/0

Multicast configuration:

On all routers:
# Enable Multicast routing
ip multicast-routing

#Enable PIM Sparse-mode on the interfaces
R1(config)#int fa0/0
R1(config-if)#ip pim sparse-mode
R1(config)#int fa0/1
R1(config-if)#ip pim sparse-mode

#Add RP address
ip pim rp-address
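The interface and RP addresses were stripped from the notes above; with hypothetical values filled in, a complete PIM-SM configuration could look like this sketch:

```
! Enable multicast routing globally
ip multicast-routing

! PIM sparse mode on every transit interface
interface FastEthernet0/0
 ip pim sparse-mode
interface FastEthernet0/1
 ip pim sparse-mode

! Static rendezvous point ( the address is an assumption )
ip pim rp-address 10.0.0.2
```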


300-320 ARCH resource list

Designing for Cisco Network Service Architectures (ARCH) Foundation Learning Guide: CCDP ARCH 300-320, 4th Edition:

CCDP 300-320 videos courses:

Cisco Design Webinars:

Cisco Arch Study Material:

Cisco Design Zone:


Books / PDF


Cisco Guides:

Various Resources:

Cisco Live:

  • Enterprise Campus Design: Multilayer Architectures and Design Principles – BRKCRS-2031
  • WAN Architectures and Design Principles – BRKRST-2041
  • Campus Wired LAN Deployment Using Cisco Validated Designs – BRKCRS-1500
  • Campus QoS Design-Simplified – BRKCRS-2501
  • OSPF Deployment in Modern Networks – BRKRST-2337
  • EIGRP Deployment in Modern Networks – BRKRST-2336
  • Advanced – Scaling BGP – BRKRST-3321
  • Nexus Multicast Design Best Practices – BRKIPM-3062
  • Cisco FabricPath Technology and Design – BRKDCT-2081
  • Advanced Enterprise Campus Design: Converged Access – BRKCRS-2888
  • Cisco Unified Contact Center Enterprise Planning and Design – BRKCCT-2007


Lab V ( Nexus7k, Overlay Transport Virtualization )

OTV: Overlay Transport Virtualization

OTV ( Overlay Transport Virtualization ) is a technology that provides Layer 2 extension capabilities between different data centers.
In its simplest form, OTV is a DCI ( Data Center Interconnect ) technology that routes MAC-based information by encapsulating traffic in normal IP packets for transit.

  • Transparent workload mobility
  • Business resiliency
  • Superior computing resource efficiencies
  • Overlay Interface – logical OTV tunnel interface – interface Overlay1
  • OTV Join Interface – the physical link or port-channel used to route upstream towards the data center interconnect – otv join-interface Ethernet2/1
  • OTV Control Group – multicast address used to discover the remote sites in the control plane – otv control-group
  • OTV Data Group – used for tunneling multicast traffic over OTV in the data plane – otv data-group
  • Extend VLANs – VLANs that will be tunneled over OTV – otv extend-vlan 100
  • Site VLAN – used to synchronize the Authoritative Edge Device ( AED ) role within an OTV site – otv site-vlan 999
  • Site Identifier – should be unique per data center; used in the AED election – otv site-identifier 0x1
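Putting the commands from the table together, an edge-device configuration could look like this sketch (VLAN numbers from the table; the multicast group addresses are assumptions):

```
feature otv

! Per-site settings
otv site-vlan 999
otv site-identifier 0x1

! The overlay tunnel itself
interface Overlay1
  otv join-interface Ethernet2/1
  otv control-group 239.1.1.1
  otv data-group 232.1.1.0/28
  otv extend-vlan 100
  no shutdown
```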


Cisco: OTV Quick Start Guide

Cisco: NX-OS OTV Configuration Guide

Cisco: OTV Best Practices

Cisco: OTV Whitepaper

OTV Encapsulation

OTV adds a further 42 bytes on all packets traveling across the overlay network. The OTV Edge device removes the CRC and 802.1Q fields from the original Layer2 frame. It then adds an OTV Shim Header which includes this 802.1Q field (this includes the priority P-bit value) and the Overlay ID information. It also includes an external IP header for the transport network. All OTV packets have Don’t Fragment (DF) bit set to 1 in the external IP header.


LAB IV ( vPC – virtual Port-channels )

  • vPC Domain – includes the vPC peers, keepalive links, and the port-channels that use the vPC technology.
  • vPC Peer Switch – the other switch within the vPC domain; each switch is connected via the vPC peer link. One device is elected primary and the other secondary.
  • vPC Member Port – ports included within the vPCs.
  • vPC Peer Keepalive Link – connects both vPC peer switches and carries monitoring traffic to/from each peer switch. Monitoring ensures each switch is both operational and running vPC.
  • vPC Peer Link – connects both vPC peer switches and carries BPDUs, HSRP hellos, and MAC addresses to the vPC peer. In the event of a vPC member port failure it also carries unicast traffic to the peer switch.
  • Orphan Port – a port that is configured with a vPC VLAN ( i.e. a VLAN carried over the vPC peer link ) but is not configured as a vPC member port.

A virtual PortChannel (vPC) allows links that are physically connected to two different Cisco Nexus™ 5000 Series devices to appear as a single PortChannel to a third device. The third device can be a Cisco Nexus 2000 Series Fabric Extender or a switch, server, or any other networking device. A vPC can provide Layer 2 multipathing, which allows you to create redundancy by increasing bandwidth, enabling multiple parallel paths between nodes and load-balancing traffic where alternative paths exist.

After you enable the vPC function, you create a peer keepalive link, which sends heartbeat messages between the two vPC peer devices.

The vPC domain includes both vPC peer devices, the vPC peer keepalive link, the vPC peer link, and all the PortChannels in the vPC domain connected to the downstream device. You can have only one vPC domain ID on each device.
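A minimal sketch of the pieces described above on one peer switch (IP addresses, domain number, port-channel numbers, and interfaces are assumptions):

```
feature vpc

! vPC domain and peer keepalive ( addresses are placeholders )
vpc domain 10
  peer-keepalive destination 10.0.0.2 source 10.0.0.1

! vPC peer link between the two peer switches
interface port-channel 10
  switchport mode trunk
  vpc peer-link

! vPC towards the downstream device
interface port-channel 20
  switchport mode trunk
  vpc 20
```

The second peer switch carries a mirror-image configuration with the keepalive source and destination addresses swapped.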