[DC] Cloud Computing

Basic cloud computing

  • Essential Characteristics
    • Broad network access
    • Rapid elasticity
    • Measured Service
    • On-demand Self-service
    • Resource pooling
  • Service Models
    • SaaS – Software as a Service
    • PaaS – Platform as a Service
    • IaaS – Infrastructure as a Service
  • Deployment Models
    • Public
      • Provisioned for open use by the general public
    • Private
      • Cloud for the exclusive use by a single organization
      • Managed by internal IT or a third party
      • On-premises or off-premises
    • Hybrid
      • Two or more cloud infrastructures combined
    • Community
      • Shared by multiple organizations with common concerns

What is an API

  • A precise specification written by the provider of a service
  • You must follow the specification when using that service
  • An API describes what functionality is available, how it must be used, and what formats it will accept as input or return as output

(more…)

[DC] ACI and APIC

ACI > Constructs

ACI Construct      Traditional Equivalent
Tenant             VDC
Context            VRF
Bridge domain      Subnet / SVI
EPG                Broadcast domain / VLAN
Contract           ACL
L2 External EPG    802.1Q trunk
L3 External EPG    L3 routed link

Fundamentals:

  • Open and Secure
  • Apps and Infrastructure
  • Physical and Virtual
  • On-Site and Cloud

Bringing up the Fabric:

  • Physical requirements
    • Power
    • Cabling + mgmt0
    • Rack and Stack
  • Power on/Connect to APICs
    • How many APICs
    • Fabric Name
    • Admin Password
    • Setup Fabric Network ( IP & VLAN)
  • Log into the APIC (HTTP out of band)
    • NTP
    • Route Reflectors
    • MGMT IP Fabric
    • Leaf and Spine Name/#

Fabric Discovery

  • Zero-touch fabric: the controller does everything
  • APIC uses LLDP to get information about the leaf switches it’s connected to
  • First the directly connected leaf is discovered and named (101)
  • Then the spine is connected and named (201)
  • Then the remaining leafs are discovered (103, 104)

(more…)

[DC] Datacenter Interconnects (DCI, OTV)

Distributed Data center Goals

  • Ensure business continuity
  • Distributed applications
  • Seamless workload mobility
  • Maximize compute resources

Challenges in traditional Layer 2 VPN:

  • Flooding Behavior
    • Unknown unicast for mac propagation
    • Unicast Flooding reaches all sites
  • Pseudo-wire Maintenance
    • Full mesh of Pseudo-wire is complex
    • Head-End replication is a common problem
  • Multi-Homing
    • Requires additional protocols and extends STP
    • Malfunctions impact multiple sites

(more…)

[DC] Nexus features config / commands

VDC Configuration

  • show license usage
  • show vdc
  • show vdc membership
  • vdc DCC01
  • allocate interface <range> (assign interfaces to the VDC)
  • limit-resource <resource> (cap resource usage per VDC)
  • show run vdc
  • switchto vdc DCC01
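
Putting the commands above together, a minimal sketch of creating a VDC on a Nexus 7000 (the VDC name DCC01 comes from the notes; the interface range and VLAN limits below are example values):

    ! Verify licensing and existing VDCs
    show license usage
    show vdc

    ! Create the VDC and assign it resources (range and limits are examples)
    configure terminal
    vdc DCC01
      allocate interface ethernet 1/9-12
      limit-resource vlan minimum 16 maximum 512

    ! Verify, then enter the new VDC
    show run vdc
    switchto vdc DCC01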

FEX Configuration

  • feature fex
  • fex 100
  • interface e1/25
    • switchport mode fex-fabric
    • fex associate 100
  • show fex
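
The same steps in order, as a sketch (FEX number 100 and interface e1/25 are from the notes; the description is an example):

    ! Enable the FEX feature and pre-provision FEX 100
    configure terminal
    feature fex
    fex 100
      description Rack1-FEX

    ! Bind the fabric uplink to the FEX
    interface ethernet 1/25
      switchport mode fex-fabric
      fex associate 100

    ! Verify discovery and status
    show fex
    show fex 100 detail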

VPC Configuration

  • feature vpc
  • vpc domain 100
    • peer-keepalive destination 10.10.10.2 source 10.10.10.1 vrf management
  • show vpc
  • int po10
    • vpc peer-link
  • int e1/25
    • channel-group 20 mode active
  • int po20
    • vpc 20
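
A sketch of a full vPC setup, mirrored on both Nexus peers (domain 100 and the keepalive addresses are from the notes; the peer-link on po10 and the member vPC on po20 are example numbers — note that the peer-link port-channel must be different from any member vPC port-channel):

    ! On each vPC peer (mirror on the second switch)
    feature vpc
    feature lacp

    vpc domain 100
      peer-keepalive destination 10.10.10.2 source 10.10.10.1 vrf management

    ! Peer-link between the two Nexus switches
    interface port-channel 10
      switchport mode trunk
      vpc peer-link

    ! Member port toward the downstream device
    interface ethernet 1/25
      channel-group 20 mode active
    interface port-channel 20
      switchport mode trunk
      vpc 20

    show vpc
    show vpc consistency-parameters global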

(more…)

LAB IX – RIPv2 -> OSPF Case Study

Building a use case from the CCDP FLG:

Topology:

  • Each site has two links to HQ (top): one via the WAN (primary) and one via the Internet (backup).
  • Internet and WAN connectivity runs over multipoint GRE tunnels to the sites, with static NHRP mappings.
  • The OSPF cost of the Internet links is increased so they are only used as backup.
  • The backbone area is configured over both the WAN and the Internet.

Building the LAB:

OSPF Design

Building the Backbone:

Adding the tunnel interface and NHRP mappings on the WAN Hub Router (R1):
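
A sketch of what that hub-side configuration could look like (all addresses, interface names and NHRP mappings below are example values, not taken from the lab):

    ! WAN hub (R1) - mGRE tunnel with static NHRP mappings
    interface Tunnel0
     ip address 172.16.0.1 255.255.255.0
     ip nhrp network-id 1
     ip nhrp map 172.16.0.2 192.0.2.2    ! tunnel IP -> site NBMA IP
     ip nhrp map 172.16.0.3 192.0.2.3
     ip ospf network point-to-multipoint
     tunnel source GigabitEthernet0/0
     tunnel mode gre multipoint
    !
    router ospf 1
     network 172.16.0.0 0.0.0.255 area 0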

And we have some routing on the Hubs:

[DC] Unified Computing Systems ( UCS )

UCS Physical Infrastructure

  •  Fabric Interconnect  ( 6248UP )
    • 32x Fixed unified ports: 1/10 GE or 1/2/4/8 FC
    • Expansion Module
    • Run in an Active / Active state for the dataplane
    • Run in a clustered Active/Passive state for the management
    • Connected to the UCS Chassis
    • Managed via UCSM or the CLI (NX-OS)

  •  UCS Chassis
    • 6U chassis, 32″ deep
    • Passive backplane
    • 8x Half width blades
    • 4x Full width blades
    • Everything is managed by the Fabric Interconnects.

Connecting the Fabric Interconnects to the LAN and SAN:

 

(more…)

[DC] FC / FCoE

FCoE is short for Fibre Channel over Ethernet.

Fibre Channel over Ethernet (FCoE) solves the problem of organizations having to run parallel network infrastructures for their local area networks (LANs) and their storage area networks (SANs), which forces them to operate separate switches, host bus adapters (HBAs), network interface cards (NICs) and cables for each network. Even utilizing a virtualization solution like VMware can actually increase the number of network adapters required to carry traffic out of the servers.

https://www.cisco.com/c/en/us/products/collateral/switches/nexus-7000-series-switches/white_paper_c11-560403.html

 

  • FIP – FCoE Initialization Protocol
  • FLOGI – Fabric Login
  • FCF – Fibre Channel Forwarder
  • FSPF – Fibre Channel Shortest Path First
FC Port    Name              Description
N_Port     Node Port         End device
F_Port     Fabric Port       Switch port
L_Port     Loop Port         Loop topology, end device
NL_Port    Node Loop Port    N_Port for an arbitrated loop topology
FL_Port    Fabric Loop Port  Allows loops to connect to the fabric
E_Port     Expansion Port    Switch-to-switch connectivity ( ISL )
G_Port     Generic Port      Auto-configures on the switch
B_Port     Bridge Port       FC WAN gateway port
U_Port     Universal Port    Auto-negotiates to E, F, or FL port

 

(more…)

[DC] Unified Fabric and FCoE

Unified Fabric

  • Traditional DCs
    • LAN and SAN fabric isolation
    • Server has two adapters: an HBA for SAN and a NIC for LAN
    • Kept completely separate end-to-end
  • Unified Fabric DCs
    • Server uses 10G Ethernet Converged Network Adapters ( CNAs ) for both LAN and SAN ( FCoE )
    • LAN and SAN traffic is unified on the same wire, providing I/O consolidation

FCoE

 

  • FCoE is a protocol for transporting native FC frames over a 10G Ethernet transport link.
  • The full FC frame is encapsulated in a jumbo Ethernet frame.
  • FCoE requires lossless delivery.
  • FCoE requires a dedicated FCoE VLAN, separate from normal VLAN traffic.

FCoE Terminology

  • FIP – FCoE Initialization Protocol
  • FCF – FCoE Forwarder ( switch )
    • the access switch connected to the initiator
  • ENode ( server )
    • CNA running FCoE
  • Virtual Fibre Channel ( VFC ) Interface
  • Virtual Port Types
    • VN_Port
      • virtual node port
    • VF_Port
      • virtual fabric port
    • VE_Port
      • virtual expansion port, switch to switch, multihop
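
A minimal NX-OS sketch of bringing up a VFC interface toward a CNA (the VLAN/VSAN numbers and the physical interface below are example values):

    ! Enable FCoE and map a dedicated FCoE VLAN to a VSAN
    feature fcoe
    vsan database
      vsan 100
    vlan 100
      fcoe vsan 100

    ! CNA-facing Ethernet port: trunk carrying the FCoE VLAN
    interface ethernet 1/10
      switchport mode trunk
      switchport trunk allowed vlan 10,100

    ! Virtual Fibre Channel interface bound to the physical port (VF_Port)
    interface vfc10
      bind interface ethernet 1/10
      no shutdown
    vsan database
      vsan 100 interface vfc10

    show interface vfc10
    show flogi database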

Data Center Bridging

  • Data Center Bridging ( DCB ) is a set of IEEE standards for Unified Fabrics
  • Priority Flow Control ( PFC ) ( 802.1Qbb )
    • Lossless delivery for selected CoS values
  • Enhanced Transmission Selection ( ETS ) ( 802.1Qaz )
    • Bandwidth management and priority selection
  • DCBX ( Data Center Bridging Exchange ) negotiates these capabilities between peers
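
On a Nexus 5000, PFC and ETS are applied through QoS policies; a hedged sketch (class-fcoe is the predefined FCoE class, the bandwidth split is an example):

    ! Lossless treatment (PFC) for the FCoE class
    policy-map type network-qos fcoe-nq
      class type network-qos class-fcoe
        pause no-drop
        mtu 2158

    ! Bandwidth allocation (ETS) between FCoE and default traffic
    policy-map type queuing fcoe-ets
      class type queuing class-fcoe
        bandwidth percent 50
      class type queuing class-default
        bandwidth percent 50

    system qos
      service-policy type network-qos fcoe-nq
      service-policy type queuing output fcoe-ets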

 

LAB VIII: MPLS (MP-BGP – EoMPLS)

  • P Routers – Provider routers
    • MPLS Core
  • PE Routers – Provider Edge routers
    • MPLS – IP Edge
  • CE Routers – Customer Edge routers
    • IP Edge
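
A sketch of the PE-side building blocks for an MPLS VPN with MP-BGP (VRF name, ASN, addresses and interfaces are all example values, not taken from the lab):

    ! PE router - VRF toward the CE, MP-BGP vpnv4 toward the other PE
    ip vrf CUSTOMER-A
     rd 65000:1
     route-target export 65000:1
     route-target import 65000:1
    !
    mpls ip
    !
    interface GigabitEthernet0/1
     description Link to CE
     ip vrf forwarding CUSTOMER-A
     ip address 10.1.1.1 255.255.255.252
    !
    router bgp 65000
     neighbor 2.2.2.2 remote-as 65000
     neighbor 2.2.2.2 update-source Loopback0
     address-family vpnv4
      neighbor 2.2.2.2 activate
      neighbor 2.2.2.2 send-community extended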

Traceroute (R6 -> R7)

Layer 3 setup:

 

GNS3 LAB:

 

 

(more…)

[DC] NX-OS – Overlay Transport Virtualization

https://www.quisted.net/arc/datacenterdesign/lab-v-nexus7k-overlay-transport-virtualization/

What is OTV:

  • Layer 2 VPN over IPv4
  • Used over the DCI to extend VLANs between datacenter sites

OTV was designed for Layer 2 DCI

  • Optimizes ARP Flooding over DCI
  • Does not extend STP domain
  • Can overlay multiple VLANs without complicated design
  • Allows multiple edge routers without complicated design

OTV benefits

  • Provides a flexible overlay VPN on top of the IP network, without placing restrictions on that network
  • Transports L2 while leveraging the capabilities of the underlying IP transport
  • Provides a virtual multi-access L2 network that supports efficient transport of unicast, multicast and broadcast traffic

OTV Control Plane

  • Uses IS-IS to advertise MAC addresses between AEDs
    • “Mac in IP” Routing
  • Encapsulated as Control Group Multicast
    • Implies that the DCI must support ASM multicast
    • Can be encapsulated as unicast with an OTV Adjacency Server

OTV Data Plane

  • Uses both Unicast and Multicast Transport
  • Multicast Control Group
    • Multicast or Broadcast Control Plane Protocols
    • eg. ARP, OSPF, EIGRP etc
  • Unicast Data
    • Normal Unicast is encapsulated as Unicast between AEDs
  • Multicast Data Group
    • Multicast data flows are encapsulated as SSM multicast
    • Implies AEDs use IGMPv3 for (S,G) joins
  • An OTV Adjacency Server can remove the multicast requirement completely
    • Results in head-end replication when more than two DCs are connected over the DCI

OTV DCI Optimizations

  • Other DCI options bridge all traffic over DCI
    • eg. STP, ARP, Broadcast storms etc
  • OTV reduces unnecessary flooding by:
    • Proxy ARP/ICMPv6 ND cache on the AED
    • Assumption is that hosts are bi-directional (not silent)
    • Initial ARPs are flooded, then the cache is used
    • Terminating the STP domain on the AED

OTV Configuration:
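
A sketch of a multicast-mode OTV edge device on NX-OS (the site VLAN, site identifier, interfaces, extended VLANs and group addresses below are example values):

    ! Enable OTV
    feature otv

    ! Site VLAN and site identifier (unique per data center)
    otv site-vlan 99
    otv site-identifier 0x1

    ! Overlay interface: join interface faces the DCI, extend the DC VLANs
    interface Overlay1
      otv join-interface ethernet 1/1
      otv control-group 239.1.1.1
      otv data-group 232.1.1.0/28
      otv extend-vlan 10,20,30
      no shutdown

    show otv overlay 1
    show otv adjacency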

 

License needed: