Cisco & Cisco Network Hardware News and Technology

How to Stack Cisco Catalyst 2960-X or 2960-XR Series Switches?

March 20 2018 , Written by Cisco & Cisco Router, Network Switch Published on #Networking, #Cisco & Cisco Network, #Technology, #Cisco Modules - Cisco Cables - Cisco Memory

How to stack Cisco Catalyst 2960-X or 2960-XR Series Switches is a question Cisco users often ask. Stacking these switches relies on Cisco FlexStack-Extended and FlexStack-Plus technology. What are FlexStack-Extended and FlexStack-Plus? What benefits do they provide? And how do you actually stack Catalyst 2960-X or 2960-XR switches? This article walks through a typical example of stacking Cisco Catalyst 2960-X or 2960-XR Series Switches.

  1. Cisco FlexStack-Extended and FlexStack-Plus technology allows stacked installation of Cisco Catalyst 2960-X or 2960-XR Series Switches within the same wiring closet, across wiring closets on different floors of a building, or across different buildings in a campus, with a single point of management that reduces IT management overhead.
  2. The Cisco Catalyst 2960-X FlexStack-Plus Stack Module provides high-bandwidth stacking capability over short distances to simplify management and improve resiliency.
  3. The Cisco Catalyst 2960-X FlexStack-Extended Stack Module–Hybrid provides investment protection for Cisco Catalyst 2960-X and 2960-XR Series Switches that are already stacked and installed with FlexStack-Plus modules.

These modules act as interconnects between FlexStack-Plus and FlexStack-Extended stacked switches.

The FlexStack-Extended and FlexStack-Plus modules enable stacking within and across wiring closets. Up to eight Cisco Catalyst 2960-X or 2960-XR Series Switches can be stacked, with a single management and control plane. All management tasks, such as configuration, Cisco IOS Software upgrades, and troubleshooting, can be performed for all stacked switches from a single point of management through a command line or a simple graphical interface with Cisco Catalyst Configuration Professional.
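
For illustration only (a minimal sketch; the interface, description, and VLAN numbers are arbitrary examples), a port on stack member 2 can be configured from the stack master console using the member/module/port interface numbering:

switch(config)#interface GigabitEthernet2/0/10

switch(config-if)#description Access port on stack member 2

switch(config-if)#switchport access vlan 10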

The FlexStack-Plus and FlexStack-Extended modules are simple-to-install plug-and-play modules, with no preset configuration requirements. They simplify troubleshooting of multiple switches spread over large areas of the campus.

The FlexStack-Extended module uses the same rules for stack master election as FlexStack-Plus switches. These modules can be inserted into the stack module slot at the rear of the Cisco Catalyst 2960-X and 2960-XR Series Switches. Up to eight switches can be stacked in a ring topology using the FlexStack-Plus or FlexStack-Extended modules.
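
Because the election rules are the same, the preferred master can be influenced by giving one member a higher stack priority (1 to 15, highest value wins) before the stack forms. A minimal sketch; the member number and priority value are examples only:

Example: switch(config)#switch 1 priority 15

The assignment can be verified afterwards with the “show switch” command, which lists each member’s priority.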

Learn more: FlexStack vs. FlexStack-Plus

C2960X-STACK vs. C2960X-FIBER-STK vs. C2960X-HYBRID-STK

Stack Module Slot Location

How to Stack Cisco Catalyst 2960-X or 2960-XR Series Switches?

●   Stack modules are plug and play; no configuration is required to bring up the stack.

Command: “show inventory” to see the modules inserted:

switch#show inventory

NAME: "3", DESCR: "WS-C2960XR-48TD-I"

PID: WS-C2960XR-48TD-I , VID: V01  , SN: FOC1720Y3WK

-----Output omitted-----------------------

NAME: "Switch 1 - FlexStackPlus Module", DESCR: "Stacking Module"

PID: C2960X-HYBRID-STK , VID: V01  , SN: FDO211827QG

The ports of the modules are in a stack port configuration by default.

Command:  “show switch hstack-ports” to ensure that the ports are stack ports.

Example: On the FlexStack-Extended fiber module:


Example: On the FlexStack-Extended hybrid module:

Note: The fiber port of the module does not show up with this command.

● When connecting the FlexStack-Extended hybrid module to FlexStack-Plus modules, the stack bandwidth of the switch with the FlexStack-Plus module should be manually configured to 10 Gbps

Command: “switch stack port-speed 10” to set the stacking bandwidth to 10 Gbps:

Example: switch(config)#switch stack port-speed 10

Command: “show switch stack-ring speed” to verify the stack ring speed, configuration, and protocol:

Example: switch#show switch stack-ring speed

Stack Ring Speed        : 10G

Stack Ring Configuration: Half

Stack Ring Protocol     : FlexStack

● Once the stack cables (fiber or FlexStack-Plus cables) are connected and the switches are stacked:

Command: “show switch” to see all switches in the stack. The master is indicated with an asterisk (*).

switch#show switch

Switch/Stack Mac Address : d0c7.896b.9480

                                           H/W     Current
Switch#  Role    Mac Address      Priority Version  State
-----------------------------------------------------------
 2       Member  d0c7.aaaa.xxxx      1       4      Ready
*3       Master  d0c7.bbbb.yyyy      1       4      Ready

Command: “show switch stack-ports” to see the status of the stack ports.

Example: switch#show switch stack-ports

 Switch #    Port 1       Port 2

  --------    ------       ------

    2          Down          Ok

    3          Down          Ok

Ok: Port status up

Down: Port status down

Note: When adding a switch to an existing stack, power off the new switch, connect the stack cables, and then power on the new switch. This will prevent any downtime in the existing stack.
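
Optionally, the new member can also be pre-provisioned from the stack master before it is cabled in, so that it joins with a predictable member number and configuration. This is only a sketch of the IOS stack provisioning command; the member number and model keyword below are assumptions for illustration:

Example: switch(config)#switch 3 provision ws-c2960x-48td-l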

How to Pick a Stack Module

● If the switches in the stack are less than 3 m (10 ft) apart or high stacking bandwidth is a requirement, the C2960X-STACK module would be best suited for stacking

● If the stack switches are spread across wiring closets on different floors of a building or across multiple buildings in a campus (switches are more than 3 m [10 ft] apart), the C2960X-FIBER-STK module would be best suited

● If the stack is a mix of switches in the same wiring closet and switches spread across wiring closets, the stack modules will be a mix of C2960X-STACK, C2960X-FIBER-STK, and C2960X-HYBRID-STK

Points to Remember

● Fast convergence is not supported on stack switches with FlexStack-Extended ports

● The fiber stack ports support 10-Gbps transceivers only; refer to the list of supported 10-Gbps transceivers in the Cisco white paper referenced below

● The FlexStack-Extended modules support up to 40-Gbps stack bandwidth over longer distances

● The FlexStack-Plus module supports up to 80-Gbps stack bandwidth over short distances

● When adding a new switch to an existing stack, power off the new switch and then connect the stack cables. This is to prevent reload of the existing stack and stack master reelection

● To use FlexStack-Extended modules, all switches in the stack must be upgraded to Cisco IOS Software Release 15.2(6)E or later (a quick version check is sketched below)
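
As a quick check (a minimal sketch), the software version running on every stack member can be confirmed from the master; the per-member “SW Version” column at the end of the output should show 15.2(6)E or later on all switches:

Example: switch#show version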

Reference from https://www.cisco.com/c/en/us/products/collateral/switches/catalyst-2960-x-series-switches/white-paper-c11-739615.html

More Related

Why SELECT Cisco 2960-X Series?

Cisco Catalyst 2960-X Switches: Enterprise Ready

Cisco Catalyst 2960-X vs. 2960-XR Series Switches

Cisco 2960S and 2960-X Series’ Problems from Users

How to Install or Replace an AC Power Supply in a Cisco 2960-X Switch?

Cisco Catalyst 2960-X Comparison & Features

The Latest Updated: SFP Modules for Cisco Catalyst 2960-X Series Switches

WS-C2960X-48TD-L & WS-C2960XR-48TD-I Tested, from Miercom


Why 25G Transceiver Choices?

March 14 2018 , Written by Cisco & Cisco Router, Network Switch Published on #Networking, #Cisco Transceiver Modules, #Data Center, #Cisco Modules & Cards, #Cisco & Cisco Network, #IT, #Technology

25G is speeding up data centers and campus backbones now. With the massive increase in demand for data, equipment providers are responding with 25Gbps edge devices that need more bandwidth than a traditional 10Gbps interface can provide.

Whether it’s a server or a campus backbone, high speed data needs to be delivered cost-effectively in a small and low-power package.

In these bandwidth-intensive applications, the choice to go with 25G is clear. To get the same or better bandwidth, the number of 10G interfaces must be 3x (6x for redundancy) or the application needs to move to the larger, more expensive and power-hungry 40G QSFP.

SFP28: For 25G, the dominant form factor is SFP28. The SFP28 standard reuses the mechanical specifications of the 10G SFP+ (Small Form Factor Pluggable) standard, while the electrical specification has been upgraded from one 10Gbps lane operating at 10.3125Gbps to one 28Gbps-capable lane operating at 25Gbps plus error correction. 25G transceivers can be plugged into SFP+ sockets, and 10G transceivers into SFP28 sockets, because the two share the same electrical and mechanical pin-out; however, the host platform needs software support for the inserted device.

Cisco’s 25G transceiver choices include 25G Copper DAC (Direct Attached Cables), 25G AOC (Active Optical Cables) and 25G SR-S (Short Reach) transceivers.

These 25G devices are plugged into Cisco’s data center, campus, and service provider switches and routers to provide high-speed 25Gbps connectivity. See Cisco’s 25G compatibility matrix for currently supported devices.

Why DAC?

25G DACs are generally used in data center applications and provide the lowest-cost, fixed-length interconnect from TOR (Top of Rack) switches to high-performance servers. Depending on the bandwidth and distance, DACs can be either passive or active and are generally based on Twinax cable. For 25G, DACs can generally operate up to 5 meters without active components in the data path. Up to 2 meters, no FEC (Forward Error Correction) is needed; for 3 meters, FC-FEC (Fire Code Forward Error Correction) is needed; and for 5 meters, RS-FEC (Reed-Solomon Forward Error Correction) is needed to correct errors. Beyond 5 meters at 25Gbps, active components are generally needed in the data path to amplify and correct the signal. These components drive up cost, which leads network designers to consider optical interfaces.

Why AOC?

25G AOCs also provide a cost-effective solution for those same data center applications when distances longer than 5m are required. Generally, AOCs are offered in standard lengths of 1m, 2m, 3m, 5m, and 10m. However, they are usually limited to about 25 meters because of inventory stocking and slack-storage issues. Often a data center will be wired with only AOCs for consistency, instead of a combination of AOCs and DACs.

Why SR?

25G-SR is used with standard OM3 or OM4 multimode fiber and is suitable for:

• Data centers that require up to 100 meters over OM4 fiber or 70 meters over OM3 fiber for interconnect between TOR switches and leaf or spine switches.

• Breakout configurations in conjunction with 100G-SR4 transceivers where the distances are less than 100 meters for OM4 fiber or 70 meters for OM3 fiber.

• Campus backbones, where the distances between distribution and aggregation switches are less than 100 meters for OM4 fiber or 70 meters for OM3 fiber.

Learn more about how Cisco’s 25G transceiver products are transforming the industry.

Original article from https://blogs.cisco.com/sp/too-slow-25g-speeds-up-data-centers-and-campus-backbones

 

More Related

Cisco 25G Transceivers for Next Generation Switches

Updated: Cisco Gigabit Ethernet Transceiver Modules for ASR 1000 Series Router

Is It Possible to Interconnect SFP, SFP+ and XENPAK/X2…?

Upgrade Seamlessly From 40Gb or 10Gb-Cisco 40/100Gb QSFP100 BiDi Pluggable Transceiver


Cisco Firepower 2100 Series, as a NGFW or a NGIPS

March 7 2018 , Written by Cisco & Cisco Router, Network Switch Published on #Networking, #NGFW, #Cisco Technology - IT News, #IT, #Technology

The new Cisco Firepower 2100 Series appliances help you achieve better security that doesn’t come at the expense of network performance.

Cisco Firepower 2100 Series can be deployed either as a Next-Generation Firewall (NGFW) or as a Next-Generation IPS (NGIPS). They are perfect for the Internet edge and all the way into the data center.

Four new models are available: 2110, 2120, 2130, and 2140

• The Firepower 2110 and 2120 models offer 2.0 and 3 Gbps of firewall throughput, respectively. They provide increased port density and can provide up to sixteen (16) 1 Gbps ports in a 1 rack unit (RU) form factor.

• The Firepower 2130 and 2140 models provide 5 and 8.5 Gbps of firewall throughput, respectively. These models differ from the others in that they can be customized through the use of network modules, or NetMods. They can provide up to twenty-four (24) 1 Gbps ports in a 1 RU appliance, or up to twelve (12) 10 Gbps ports.

Firepower 2100 NGFWs uniquely provide sustained performance when supporting threat functions, such as IPS. This is done using an innovative dual multi-core architecture. Layer 2 and 3 functionality is processed on one NPU (Network Processing Unit). Threat inspection and other services are processed on a separate multi-core x86 CPU. By splitting the workload, we minimize the performance degradation that you see with competing solutions when turning on threat inspection.

Firepower 2100 Series Appliance Performance Highlights

Cisco Firepower Model: 2110 / 2120 / 2130 / 2140

● Throughput, FW + AVC (Cisco Firepower Threat Defense)1: 2.0 Gbps / 3 Gbps / 4.75 Gbps / 8.5 Gbps

● Throughput, FW + AVC + NGIPS (Cisco Firepower Threat Defense)1: 2.0 Gbps / 3 Gbps / 4.75 Gbps / 8.5 Gbps

1 HTTP sessions with an average packet size of 1024 bytes

2 1024 bytes TCP firewall performance

Learn more: Guide to the New Cisco Firepower 2100 Series

ASA Performance and Capabilities on Firepower 2100 Series Appliances

Cisco Firepower Appliance Model: 2110 / 2120 / 2130 / 2140

● Stateful inspection firewall throughput1: 3 Gbps / 6 Gbps / 10 Gbps / 20 Gbps

● Stateful inspection firewall throughput (multiprotocol)2: 1.5 Gbps / 3 Gbps / 5 Gbps / 10 Gbps

● Concurrent firewall connections: 1 million / 1.5 million / 2 million / 3 million

● Firewall latency (UDP 64B microseconds): - / - / - / -

● New connections per second: 18000 / 28000 / 40000 / 75000

● IPsec VPN throughput (450B UDP L2L test): 500 Mbps / 700 Mbps / 1 Gbps / 2 Gbps

● IPsec/Cisco AnyConnect/Apex site-to-site VPN peers: 1500 / 3500 / 7500 / 10000

● Maximum number of VLANs: 400 / 600 / 750 / 1024

● Security contexts (included; maximum): 2; 25 / 2; 25 / 2; 30 / 2; 40

● High availability: Active/active and active/standby (all models)

● Clustering: - / - / - / -

● Scalability: VPN Load Balancing

● Centralized management: Centralized configuration, logging, monitoring, and reporting are performed by Cisco Security Manager or alternatively in the cloud with Cisco Defense Orchestrator

● Adaptive Security Device Manager: Web-based, local management for small-scale deployments

1 Throughput measured with User Datagram Protocol (UDP) traffic under ideal test conditions.

2 “Multiprotocol” refers to a traffic profile consisting primarily of TCP-based protocols and applications like HTTP, SMTP, FTP, IMAPv4, BitTorrent, and DNS.

3 In unclustered configuration.

More detailed data sheet of Cisco NGFW:

https://www.cisco.com/c/en/us/products/collateral/security/firepower-ngfw/datasheet-c78-736661.html

Firepower 2100 Series PIDs: See the show inventory and show inventory expand commands in the Cisco FXOS Troubleshooting Guide for the Firepower 2100 Series to display a list of the PIDs for your Firepower 2100. See Product IDs for a list of the product IDs (PIDs) associated with the 2100 series.
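
As a pointer only (the prompt shown is an assumption and the output is omitted), both commands are run from the appliance CLI:

firepower# show inventory

firepower# show inventory expand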

More Related

Finding the Sweet Spot–Firepower 2100

The New Cisco Firepower 2100 Series

How to Deploy the Cisco ASA FirePOWER Services in the Internet Edge, VPN Scenarios and Data Center?

The Most Common NGFW Deployment Scenarios


What's in HPE's persistent memory/8GB NVDIMM?

March 1 2018 , Written by Cisco & Cisco Router, Network Switch Published on #HPE Servers, #Networking, #IT, #Technology

HPE’s new persistent memory modules, the NVDIMMs, combine the speed of DRAM with the resilience of flash. The persistent memory module combines 8GB of DRAM and 8GB of flash in a single module that fits in a standard server DIMM slot.

DRAM operates at high speed but it's relatively expensive, and if a server shuts down unexpectedly any data in DRAM is lost. Flash is slower but it's nonvolatile, meaning it retains data when the power source is removed.

It's not intended to replace external storage; SSDs, spinning hard drives and tape are still best for storing large amounts of data. But it provides a portion of storage that sits on the high-speed memory bus and can retain data if a server crashes.

Applications in NVDIMM can run much faster, according to HPE, because data doesn't have to shuttle back and forth between the CPU and storage drives.

HPE isn't first to the game. Component makers including Micron Technology and Viking Technology make NVDIMMs, and other server makers are experimenting with forms of persistent memory.

But Patrick Moorhead, lead analyst at Moor Insights and Strategy, says HPE has a lead over its server rivals, at least for now.

HPE says NVDIMM offers up to six times the bandwidth of SSDs based on the high-speed NVMe (nonvolatile memory express) protocol, and provides up to 24 times more IOPS (input-output operations per second). 

The NVDIMMs are available as an option for two models of ProLiant Gen9 servers, the DL380 and the DL360.

It needs software makers on board as well. Operating systems need to be aware of NVDIMM to take advantage of it, and while standard applications will see performance gains, the biggest benefits will come to apps that are tuned for persistent memory.

HPE has written a driver for Windows Server 2012 R2 that will be available with the new servers. And HPE officials said Microsoft will support NVDIMM "out of the box" with Windows Server 2016, expected later this year. It's also working with Linux vendors and other software makers.

The NVDIMMs have a microcontroller and connect to a 96-watt lithium-ion battery integrated into the server. If a server crashes, the battery provides power to the module until the data in DRAM has been backed up to flash. The battery can support up to 128GB of persistent memory in a server.

HPE believes NVDIMM could benefit applications like databases, where in-memory processing is a fast-growing trend. It says tests have shown up to a 10x boost in database and analytics applications tuned to run on NVDIMMs.

Where applications haven't been tuned, it says users will still see a 2x increase in SQL Server database logging, for example.

It plans to offer future NVDIMMs that emphasize larger capacity over performance. And HPE officials said Intel may offer its high-speed 3D Xpoint technology in a persistent memory form.

HPE sees NVDIMMs as a stepping stone toward future computing architectures. Its Synergy systems, which have a new type of "composable" infrastructure, will all be enabled for persistent memory when they ship.

It hasn't given a commercial release date yet for Synergy, but it plans to ship beta units to some customers in May, HPE officials said.

Further out, HPE's goal is to collapse memory and storage into a single tier using a new technology called memristors. It hasn't given an arrival date for that system, which it calls the Machine.

Info from https://www.computerworld.com/article/3051135/data-storage/whats-in-hpes-persistent-memory.html

 

More Related

HPE Persistent Memory/NVDIMMs for HPE ProLiant Servers

HPE ProLiant Gen10-The World’s Most Secure Industry Standard Servers

How to Choose a Server for Your Data Center’s Needs?

How to Buy a Server for Your Business?

A Guide for Storage Newbies: RAID Levels Explained
