Cisco & Cisco Network Hardware News and Technology

How to Connect the DC Adapter for the ASA 5506H-X?

April 28 2018 , Written by Cisco & Cisco Router, Network Switch Published on #Networking, #Cisco Switches - Cisco Firewall, #IT, #Cisco Technology - IT News, #Cisco & Cisco Network, #NGFW

Cisco ASA 5506-X, ASA 5506W-X, and ASA 5506H-X Hardware Installation Guide

How do you maintain and upgrade ASA firewalls such as the popular ASA 5506-X, ASA 5506W-X, and ASA 5506H-X? In this article we share two guides for the Cisco ASA 5506-X, ASA 5506W-X, and ASA 5506H-X, which contain the following sections:

  • Connect the DC Adapter for the 5506H-X
  • Install the Adjustable Power Supply Retainer

How to Connect the DC Adapter for the ASA 5506H-X?

You can order an optional DC power supply: either a 20W 24 VDC adapter (part number PWR2-20W-24VDC) or a 20W 20-60 VDC adapter (part number PWR2-22W-20-60VDC).


1. This product relies on the building's installation for short-circuit (overcurrent) protection. Ensure that the protective device is rated not greater than 36 VDC, 5A. Statement 1005

2. This product requires short-circuit (overcurrent) protection to be provided as part of the building installation. Install only in accordance with national and local wiring regulations.

3. The device is designed to work with TN power systems.

To connect the DC power on your 5506H-X, follow these steps:

Step1: Connect the black and white lead wires to a 12 VDC source. The black lead is negative or ground and the white lead is positive. The output cable is 1.3 meters and the input cable is 1 meter in length.

Figure1. DC Power Adapter

  1. Black wire (negative)
  2. White wire (positive)




Step2: Plug the adapter cord into the ASA.

Note: The power adapters have 18 AWG wires for the input connection. Tinned bare wires are used for the input connection because there is no standard established for connector type. Screw terminal blocks are most often used.

Step3: Power on the ASA and check that it has power. See LEDs for information on the power LED.

How to Install the Adjustable Power Supply Retainer?

You can install an adjustable power supply retainer for the Delta and LiteOn power supplies in the rack-mount tray. The bracket kit contains the bracket, 2 M3 screws, and washers. The following figure shows the adjustable retainer.

Figure2. ASA Bracket Assembly

  1. Two screws to loosen to change from high to low bracket
  2. Two captive screws to attach to rack-mount tray

Step1: Slide the rack-mount shelf containing the ASA(s) out of the rack.

Step2: At the back of the rack-mount shelf behind the power supplies, install the adjustable retainer.

  1. Loosen the 2 top screws (item 1 in the figure above) slightly to adjust the bracket for each power supply.

The Delta power supply uses the bracket extended to its tallest configuration. This configuration has item 1 shifted to the bottom of the slot on the bracket over the power supply. The LiteOn power supply uses the bracket extended to its shortest configuration. This configuration has item 1 shifted to the top of the slot on the bracket over the power supply.

  2. Install the bracket over the power supply and screw the 2 bottom M3 captive screws (item 2 in the figure above) on each side of the bottom of the bracket into the rack-mount tray.

The following figure shows the installed power supply retainer.

Figure3. Installed Power Supply Retainer



More Related:

Cisco ASA with FirePower Services vs. FTD

How to Deploy the Cisco ASA FirePOWER Services in the Internet Edge, VPN Scenarios and Data Center?

The Most Common NGFW Deployment Scenarios

Migrate from the Cisco ASA5505 to Cisco ASA5506X Series

Migration to Cisco NGFW

Cisco ASA with Firepower Services, Setup Guide-Part1

Cisco ASA with Firepower Services, Setup Guide-Part2

Cisco ASA with Firepower Services, Setup Guide-Part3

Cisco ASA with Firepower Services, Setup Guide-Part4

How to Recover the Password for Your ASA?


What Factors Will Affect Server Purchases for IT Buyers?

April 20 2018 , Written by Cisco & Cisco Router, Network Switch Published on #Networking, #IT, #Technology, #Data Center, #HPE Servers

How many factors do you consider when choosing a server? For example, VM and container consolidation, as well as visualization and scientific computing, each affect the decision. Yes, server selection is a quandary for IT, as security, the use of file servers, and the question of whether multiple servers or multi-CPU systems will meet enterprise demand all complicate the choice.

In the following part, Stephen J. Bigelow (Senior Technology Editor in the Data Center and Virtualization media group at TechTarget Inc.) discusses some important factors in server purchases for your enterprise.

1. Enhanced server security plays a role in server purchases

Although server purchases aren't based solely on security capabilities, there is a proliferation of protection, detection and recovery features to consider for most enterprise tasks. Modern security features now extend well beyond traditional Trusted Platform Modules.

For example, secure servers can offer protection through a hardware-based root of trust, which uses hardware validation of server management platforms, such as an integrated Dell Remote Access Controller, and server firmware as the system boots. Validation typically includes cryptographic signatures to ensure that only valid firmware and drivers are running on the server. Similarly, firmware and driver updates are usually cryptographically signed to verify their authenticity or source. You can execute validations periodically even though the system might not reboot for months. Native data encryption is increasingly available at the server processor level to protect data in flight and at rest.

An increasing number of systems can detect unauthorized or unexpected changes in system firmware images and firmware configurations, enforcing a system lockdown to prevent such changes and alerting administrators when change attempts occur at the firmware level. Servers frequently include persistent event logging, which includes an indelible record of all activity.

And servers benefit from various recovery capabilities. For example, automatic BIOS/firmware recovery can restore firmware to a known good state after the system detects any flaw or compromise in the firmware code base. Some systems can apply similar restoration to the OS by detecting possible malicious activity and restoring the OS to a known good state as well. And system erasure features can be used to wipe all hardware configuration settings of the server, including BIOS data, diagnostic data, management configuration states, nonvolatile cache, and internal SD cards. System erasure can be particularly important before redeploying the server or removing it from service.

When choosing a server, evaluate the importance of certain features based on the use cases.


2. For data servers, focus on network I/O

File servers, or data servers, can take many shapes and sizes depending on the needs of each specific business. The actual compute resources needed in a data server are typically light. For example, file servers rarely process data or make computations that demand extensive processor or memory capacity. Web servers may include more resources if the system will also be running code or back-end applications, such as databases. If the organization plans to employ virtualization to consolidate multiple data servers onto a single physical box, the processor and memory requirements will need a closer look.

However, the emphasis for data servers is more frequently focused on network I/O, which can be critical for accessing shared/centralized storage resources and exchanging files or web content with many simultaneous users -- network bottlenecks are commonplace. If the data server will employ internal storage, the choice of disk types and capacity can have a significant influence on storage access performance and resilience. Data servers can deploy a fast 10 Gigabit Ethernet port or multiple 1 GbE ports, which you can trunk together for more speed and resilience.

As just one example, a modestly configured Dell EMC PowerEdge R430 rack server offers two processor sockets, 16 GB of memory, four 1 GbE ports and a 1 TB 7.2K rpm Serial Advance Technology Attachment (SATA) 6 Gbps disk drive by default. However, you can select the R430 chassis to accept varied disk configurations with up to 10 hot-pluggable Serial-Attached SCSI, SATA, nearline SAS or solid-state drives if the business chooses to place storage in the server itself. You can also enhance network performance through a choice of Peripheral Component Interconnect Express network adapters or storage host bus adapters.

3. Systems versus CPUs

Many data centers are shrinking as virtualization, fast networking and other technologies allow fewer servers to host more workloads. The quandary for server purchases then becomes server count versus CPU count. Is it better to have more servers or more resources within fewer servers? Packing more capability into fewer boxes can reduce overall capital expenses, data center floor space and power and cooling demands. But hosting more workloads on fewer boxes can also increase risk to the business because more workloads are affected if the server fails or requires routine maintenance. Clustering, snapshot restoration and other techniques can help to guard against hardware failures, but a business still needs to establish a comfortable balance between server count and server capability, regardless of how the servers are used.

The original article from http://searchdatacenter.techtarget.com/tip/Security-vendor-choices-affect-server-purchases-for-IT-buyers

Outside of cost, what are the biggest factors in your server selection process?

Read More: HPE Servers Topics

More Related

How to Buy a Server for Your Business?

How to Choose a Server for Your Data Center’s Needs?


5 Benefits You Get When Buying a Top of Rack Switch Nexus 3000

April 12 2018 , Written by Cisco & Cisco Router, Network Switch Published on #Networking, #Cisco Switches - Cisco Firewall, #Cisco & Cisco Network, #Technology, #IT, #Data Center

What makes Nexus 3100-V unique? Here is a summary of the most important highlights:

  1. Support of 100G uplinks
  2. Bigger buffer (16MB)
  3. Double System memory (16GB)
  4. Quadruple Ingress ACL: increased from 4,000 to 16,000
  5. VxLAN routing


Cisco Nexus 3100-V platform switches summary:

  • Cisco Nexus 3132Q-V Switch: 32 x 40-Gbps QSFP+ ports (all ports are capable of 10 or 40 Gbps)
  • Cisco Nexus 31108PC-V Switch: 48 x 10-Gbps SFP+ ports and 6 x QSFP28 ports (all QSFP ports can operate at 40 or 100 Gbps)
  • Cisco Nexus 31108TC-V Switch: 48 x 10GBASE-T ports and 6 x QSFP28 ports (all QSFP ports can operate at 40 or 100 Gbps)
  • Cisco Nexus 31108TCV-32T Switch: 32 x 10GBASE-T ports and 6 x QSFP28 ports (all QSFP ports can operate at 40 or 100 Gbps)


More Info about Nexus 3100-V Models

The Cisco Nexus 3132Q-V is a 40-Gbps Quad Small Form-Factor Pluggable (QSFP) switch with 32 Enhanced QSFP (QSFP+) ports. It also has 4 SFP+ ports that are internally multiplexed with the first QSFP port. Each QSFP+ port can operate in native 40-Gbps mode or 4 x 10-Gbps mode, with up to a maximum of 104 x 10-Gbps ports.

Cisco Nexus 3132Q-V Switch

The Cisco Nexus 31108PC-V is a 10-Gbps SFP+ based ToR switch with 48 SFP+ ports and 6 QSFP28 ports. Each SFP+ port can operate in 100-Mbps, 1-Gbps, or 10-Gbps mode, and each QSFP28 port can operate in native 100-Gbps or 40-Gbps mode or 4 x 10-Gbps mode, offering flexible migration options. This switch is a true PHY-less switch that is optimized for low latency and low power consumption.

Cisco Nexus 31108PC-V Switch

The Cisco Nexus 31108TC-V is a 10GBASE-T switch with 48 10GBASE-T ports and 6 QSFP28 ports. This switch is well suited for customers who want to reuse existing copper cabling while migrating from 1-Gbps to 10-Gbps servers. Each QSFP28 port can operate in native 100-Gbps or 40-Gbps mode or 4 x 10-Gbps mode. The 48 copper ports support 100-Mbps, 1-Gbps, and 10-Gbps operation, and the 6 QSFP ports support 10, 40, and 100 Gbps.

The Cisco Nexus 31108TCV-32T is the Cisco Nexus 31108TC-V with 32 10GBASE-T ports and 6 QSFP+ ports enabled. The ports are enabled through software licensing. This switch provides a cost-effective solution for customers who require up to 32 10GBASE-T ports per rack. This switch comes with a 32-10GBASE-T port license preinstalled. To enable the remaining 16 10GBASE-T ports, the customer installs the 16-port upgrade license.

Cisco Nexus 31108TC-V and 31108TCV-32T Switch

Learn More: Nexus 3000 Model Comparison & Licensing Options

5 Benefits You Get When Buying a Top of Rack Switch Nexus 3100V:

  1. 100G uplinks: Cisco predicts that global data center IP traffic will grow 31% every year in the next 5 years. Given that growth, 100G is becoming the norm for higher-bandwidth, big data, and IP storage workloads.
  2. 16 MB enhanced buffers: Compared to the 12 MB buffer of the previous generation, the Nexus 3100V models offer 16 MB of enhanced buffer to absorb traffic bursts. You won't have to worry when you need to expand your network in the future, because these deep buffers are designed for highly oversubscribed environments.
  3. 16 GB increased system memory: In the previous model, the Cisco Nexus 3100XL, Cisco already increased the system memory from 4 GB to 8 GB in order to introduce the network programmability features developed in NX-OS 7.x. But as networks become more complex, competitive businesses need more memory to store more objects. Hence, Cisco has doubled the capacity again in the Nexus 3100V models, from 8 GB to 16 GB, to improve capacity for object-model programming.
  4. Quadrupled ingress ACL table size, increased from 4,000 to 16,000: for greater traffic control, enhanced security, and policy-management flexibility.
  5. Full VxLAN routing (Layer 3 VxLAN): with this, workloads in different segment IDs can communicate directly, whereas with VxLAN bridging (Layer 2 VxLAN), workloads need to be in the same segment ID to interact.
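As a quick sanity check on that growth figure, 31% annual growth compounded over 5 years is close to a fourfold increase:

```python
# 31% annual growth compounded over 5 years (Cisco's figure quoted above)
growth = 1.31 ** 5
print(f"{growth:.2f}x")  # 3.86x total traffic after 5 years
```

That near-4x multiplier is why a 10G uplink installed today is expected to be saturated well within the lifetime of the switch.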

Cisco continues to bring you true flexibility and scalability through rich architectural options for any size of data center to address increasing business requirements. You can never go wrong with more connectivity options and a diverse set of form factors to meet ever-changing data center needs.




More Related

Nexus 3000 Model Comparison & Licensing Options

New, Cisco Nexus 3600 Models-C36180YC-R and 3636C-R


Cisco Nexus 5500 and Nexus 5600 Licensing Options

April 9 2018 , Written by Cisco & Cisco Router, Network Switch Published on #Networking, #Cisco Switches - Cisco Firewall, #Cisco Technology - IT News, #Cisco & Cisco Network, #Technology, #Cisco License, #Cisco Switches-Software

Different types of licenses are required for the Nexus 5500 and Nexus 5600.

Table 1-15 describes each license and the features it enables.

Table 1-15 Nexus 5500 Product Licensing

Feature License: FabricPath Services Package

Feature License: FCoE NPV Package

Feature License: Layer 3 Base Services Package (N55-BAS1K9)
Enables unlimited static routes and a maximum of 256 dynamic routes:

  • Static routes
  • RIPv2
  • OSPFv2 and OSPFv3
  • HSRP
  • VRRP
  • IGMP v2/v3
  • PIMv2 (sparse mode)
  • Routed ACL
  • NAT
  • MSDP

Feature License: Layer 3 Enterprise Services Package (N55-LAN1K9)
N55-LAN1K9 includes the following features in addition to the ones under the N55-BAS1K9 license:

  • VRF Lite
  • PBR
  • PIMv2 (all modes)

Feature License: Storage Protocols Services Package
Enables native Fibre Channel, plus:

  • FCoE
  • NPV
  • FC Port Security
  • Fabric Binding
  • Fibre Channel Security Protocol (FC-SP) authentication

Feature License: VM-FEX Package



NOTE: To manage the Nexus 5500 and Nexus 5600, two types of licenses are needed: the DCNM LAN and DCNM SAN. Each is a separate license.

More Notes:

Nexus switches have a grace period, which is the amount of time the features in a license package can continue functioning without a license.

Enabling a licensed feature that does not have a license key starts a counter on the grace period. You then have 120 days to install the appropriate license keys, disable the use of that feature, or disable the grace period feature.

If at the end of the 120-day grace period the device does not have a valid license key for the feature, the Cisco NX-OS software automatically disables the feature and removes the configuration from the device. There is also an evaluation license, which is a temporary license. Evaluation licenses are time bound (valid for a specified number of days) and are tied to a host ID (device serial number).
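The 120-day countdown is simple enough to sketch; the dates below are hypothetical examples, not from the source:

```python
from datetime import date, timedelta

GRACE_PERIOD_DAYS = 120  # NX-OS grace period described above

def grace_days_remaining(feature_enabled_on, today):
    """Days left before NX-OS disables an unlicensed feature."""
    expiry = feature_enabled_on + timedelta(days=GRACE_PERIOD_DAYS)
    return max((expiry - today).days, 0)

# Feature enabled Jan 1 and checked Mar 1: 59 days used, 61 remain
print(grace_days_remaining(date(2018, 1, 1), date(2018, 3, 1)))  # 61
```

Once the remaining days reach 0, per the behavior above, NX-OS disables the feature and removes its configuration.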


More info from http://www.ciscopress.com/articles/article.asp?p=2762085&seqNum=2


More Related

Cisco Nexus 5500 and Nexus 5600-Model Features

Cisco Nexus 7000 and Nexus 7700 Modular Switches, the Main Chassis

Cisco’s Data Center Architecture

Cisco Nexus 7000 and Nexus 7700 Series Power Supply Options

Cisco Nexus 7000 and Nexus 7700 Supervisor Module

Cisco Nexus 7000 and Nexus 7700 Licensing

Cisco Nexus 7000 and Nexus 7700 Line Cards


Cisco’s Data Center Architecture

April 3 2018 , Written by Cisco & Cisco Router, Network Switch Published on #Networking, #Cisco & Cisco Network, #IT, #Technology, #Cisco Switches - Cisco Firewall

Originally, most of the traffic data center network architects designed around was client-to-server communication or what we call “north-south.” With client-to-server traffic being the most dominant, network engineers/architects primarily built data centers based on the traditional Core/Aggregation/Access layer design, as seen in Figure1, and the Collapsed Core/Aggregation design, as seen in Figure2.

Figure1. Cisco Three-Tier Network Design

Figure2. Collapsed Core/Aggregation Network Design

In the three-tier and Collapsed Core designs, the architecture allows optimal traffic flow for clients accessing servers in the data center, and the links between the tiers are sized for oversubscription ratios that suit traffic coming in to and out of the data center. As link speeds increased and virtualization became more prevalent, network engineers looked for a way to use all links between tiers and keep spanning tree from blocking some of them, as shown in Figure3. To do this in the data center, the Nexus product line introduced virtual Port Channel (vPC). vPC enables two switches to look like one from a Layer 2 perspective, allowing all links between tiers to be active, as seen in Figure4.

Figure3. Spanning Tree between Tiers

Figure4. Virtual Port Channel (vPC)

In the latest data center trends, traffic patterns have shifted with virtualization and new application architectures. This new traffic pattern is called "east-west," which means the majority of the traffic and bandwidth is actually between nodes within the data center, such as when migrating a virtual machine from one node to another or when clustering applications.

The topology built for these patterns is spine-leaf, as seen in Figure5. Spine-leaf has several desirable characteristics that play into the hands of engineers who need to optimize east-west traffic.

Figure5. Spine-Leaf Network Topology

Just to name a few benefits: a spine-leaf design scales horizontally through the addition of spine switches, which adds availability and bandwidth, something a spanning tree network cannot do. Spine-leaf also uses routing with equal-cost multipathing to keep all links active, with higher availability during link failures. With these characteristics, spine-leaf has become the de facto architecture of network engineers and architects for their next wave of data center designs.
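One figure engineers watch in such designs is the leaf oversubscription ratio: server-facing bandwidth divided by uplink bandwidth. A toy calculation with a hypothetical leaf (the port counts are illustrative, not from the source):

```python
def oversubscription_ratio(down_ports, down_gbps, up_ports, up_gbps):
    """Server-facing bandwidth divided by uplink bandwidth on a leaf."""
    return (down_ports * down_gbps) / (up_ports * up_gbps)

# Hypothetical leaf: 48 x 10G server ports, 6 x 40G uplinks to the spines
print(oversubscription_ratio(48, 10, 6, 40))  # 2.0, i.e. 2:1 oversubscribed
```

Adding spine switches (and therefore uplinks) drives this ratio down, which is exactly the horizontal-scaling property described above.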

Describe the Cisco Nexus Product Family

The Cisco Nexus product family is a key component of the Cisco unified data center architecture, which is the Unified Fabric. The objective of the Unified Fabric is to build highly available, highly secure network fabrics.

Using the Cisco Nexus products, you can build end-to-end data center designs based on a three-tier architecture or on a spine-leaf architecture. The Cisco Nexus product line also offers high-density 10G, 40G, and 100G ports.

Modern data center designs need the following properties:

  • Effective use of available bandwidth in designs where multiple links exist between the source and destination and one path is active and the other is blocked by spanning tree, or the design is limiting you to use Active/Standby NIC teaming. This is addressed today using Layer 2 multipathing technologies such as FabricPath and virtual Port Channels (vPC).
  • Computing resources must be optimized, which happens by building a computing fabric and dealing with CPU and memory as resources that are utilized when needed. Doing capacity planning for all the workloads and identifying candidates to be virtualized help reduce the number of compute nodes in the data center.
  • Using the concept of a service profile and booting from a SAN in the Cisco Unified Computing system will reduce the time to instantiate new servers. This makes it easy to build and tear down test and development environments.
  • Power and cooling are key problems in the data center today. Ways to address them include using Unified Fabric (converged SAN and LAN), using Cisco virtual interface cards, and using technologies such as VM-FEX and Adapter-FEX. Rather than using, for example, eight 10G links, you can use two 40G links, and so on. Reducing cabling creates efficient airflow, which in turn reduces cooling requirements.
  • The concept of hybrid clouds can benefit your organization. Hybrid clouds extend your existing data center to public clouds as needed, with consistent network and security policies. Cisco is helping customers utilize this concept using CliQr/Cisco CloudCenter.
  • Improved reliability during software updates, configuration changes, or adding components to the data center environment, which should happen with minimum disruption.
  • Hosts, especially virtual hosts, must move without the need to change the topology or require an address change.

The following Figure shows the different product types available at the time this chapter was written.

Cisco Nexus Product Family

NOTE: Cisco is always innovating and creating new modules/switches. Therefore, while studying for your exam, it is always a good idea to check Cisco.com/go/nexus to verify new modules/switches and their associated features.

Info from http://www.ciscopress.com/articles/article.asp?p=2762085&seqNum=2

More Related

Make the Cisco Nexus 9000 Series Your Network Switch Today

Cisco Nexus Positioning: 2 and 3 Tier

Why Choose Cisco Nexus 9000 Series Switches? Top Five Reasons…

The Latest Cisco Nexus 9000 Innovations

Cisco Nexus 9000 Family: Nexus 9500 Modular Switches and the Nexus 9300 Fixed Configuration



How to Stack Cisco Catalyst 2960-X or 2960-XR Series Switches?

March 20 2018 , Written by Cisco & Cisco Router, Network Switch Published on #Networking, #Cisco & Cisco Network, #Technology, #Cisco Modules - Cisco Cables - Cisco Memory

Cisco users often ask how to stack Cisco Catalyst 2960-X or 2960-XR Series Switches. When we talk about Catalyst 2960-X or 2960-XR stacking, we need to know Cisco FlexStack-Extended and FlexStack-Plus technology. What are FlexStack-Extended and FlexStack-Plus? What benefits do they offer? And how do you stack Catalyst 2960-X or 2960-XR switches? We will share a typical example of stacking Cisco Catalyst 2960-X or 2960-XR Series Switches in this article.

  1. Cisco FlexStack-Extended and FlexStack-Plus technology allows stacked installation of Cisco Catalyst 2960-X or 2960-XR Series Switches within the same wiring closet, across wiring closets on different floors of a building, or across different buildings in a campus, with a single point of management that reduces IT management overhead.
  2. The Cisco Catalyst 2960-X FlexStack-Plus Stack Module provides high-bandwidth stacking capability over short distances to simplify management and improve resiliency.
  3. The Cisco Catalyst 2960-X FlexStack-Extended Stack Module–Hybrid provides investment protection for Cisco Catalyst 2960-X and 2960-XR Series Switches that are already stacked and installed with FlexStack-Plus modules.

These modules act as interconnects between FlexStack-Plus and FlexStack-Extended stacked switches.

The FlexStack-Extended and FlexStack-Plus modules enable stacking within and across wiring closets. Up to eight Cisco Catalyst 2960-X or 2960-XR Series Switches can be stacked, with a single management and control plane. All management tasks, such as configuration, Cisco IOS Software upgrades, and troubleshooting, can be performed for all stacked switches from a single point of management through a command line or a simple graphical interface with Cisco Catalyst Configuration Professional.

The FlexStack-Plus and FlexStack-Extended modules are simple-to-install plug-and-play modules, with no preset configuration requirements. They simplify troubleshooting of multiple switches spread over large areas of the campus.

The FlexStack-Extended module uses the same rules for stack master election as FlexStack-Plus switches. These modules can be inserted into the stack module slot at the rear of the Cisco Catalyst 2960-X and 2960-XR Series Switches. Up to eight switches can be stacked in a ring topology using the FlexStack-Plus or FlexStack-Extended modules.

Learn more: FlexStack vs. FlexStack-Plus


Stack Module Slot Location

How to Stack Cisco Catalyst 2960-X or 2960-XR Series Switches?

●   Stack modules are plug and play; no configuration is required to bring up the stack.

Command: “show inventory” to see the modules inserted:

switch#show inventory

NAME: "3", DESCR: "WS-C2960XR-48TD-I"

PID: WS-C2960XR-48TD-I , VID: V01  , SN: FOC1720Y3WK

-----Output omitted-----------------------

NAME: "Switch 1 - FlexStackPlus Module", DESCR: "Stacking Module"

PID: C2960X-HYBRID-STK , VID: V01  , SN: FDO211827QG
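If you collect this output from many switches, the interesting fields can be pulled out programmatically. The sketch below is illustrative only: the field layout is assumed from the sample output above, and `stack_modules` is a hypothetical helper, not a Cisco tool:

```python
import re

# Sample "show inventory" lines, as shown above
sample = '''NAME: "3", DESCR: "WS-C2960XR-48TD-I"
PID: WS-C2960XR-48TD-I , VID: V01  , SN: FOC1720Y3WK
NAME: "Switch 1 - FlexStackPlus Module", DESCR: "Stacking Module"
PID: C2960X-HYBRID-STK , VID: V01  , SN: FDO211827QG'''

def stack_modules(show_inventory_output):
    """Return (name, pid, serial) for every stacking module listed."""
    entries = re.findall(
        r'NAME: "([^"]+)", DESCR: "([^"]+)"\s*\n'
        r'PID: (\S+)\s*, VID: \S+\s*, SN: (\S+)',
        show_inventory_output)
    return [(name, pid, sn) for name, descr, pid, sn in entries
            if "Stacking" in descr]

print(stack_modules(sample))
# [('Switch 1 - FlexStackPlus Module', 'C2960X-HYBRID-STK', 'FDO211827QG')]
```

A quick check like this confirms the stack module's PID and serial number were recognized before you move on to the stack-port commands below.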

The ports of the modules are in a stack port configuration by default.

Command:  “show switch hstack-ports” to ensure that the ports are stack ports.

Example: On the FlexStack-Extended fiber module:


Example: On the FlexStack-Extended hybrid module:

Note: The fiber port of the module does not show up with this command.

● When connecting the FlexStack-Extended hybrid module to FlexStack-Plus modules, the stack bandwidth of the switch with the FlexStack-Plus module should be manually configured to 10 Gbps.

Command: “switch stack port-speed 10” to set the stacking bandwidth to 10 Gbps:

Example: switch(config)#switch stack port-speed 10

Command: “show switch stack-ring speed”

Example: switch#show switch stack-ring speed

Stack Ring Speed        : 10G

Stack Ring Configuration: Half

Stack Ring Protocol     : FlexStack

● Once the stack cables (fiber or FlexStack-Plus cables) are connected to the switches to stack them:

Command: “show switch” to see all switches in the stack. The master is indicated with an asterisk (*).

switch#show switch

Switch/Stack Mac Address : d0c7.896b.9480

                                           H/W    Current
Switch#  Role    Mac Address      Priority Version  State
----------------------------------------------------------
 2       Member  d0c7.aaaa.xxxx      1       4      Ready
*3       Master  d0c7.bbbb.yyyy      1       4      Ready

Command: “show switch stack-ports” to see the status of the stack ports.

Example: switch#show switch stack-ports

 Switch #    Port 1       Port 2

  --------    ------       ------

    2          Down          Ok

    3          Down          Ok

Ok: Port status up

Down: Port status down

Note: When adding a switch to an existing stack, power off the new switch, connect the stack cables, and then power on the new switch. This will prevent any downtime in the existing stack.

How to Pick a Stack Module

● If the switches in the stack are less than 3 m (10 ft) apart or high stacking bandwidth is a requirement, the C2960X STACK module would be best suited for stacking

● If the stack switches are spread across wiring closets on different floors of a building or across multiple buildings in a campus (switches are more than 3 m [10 ft] apart), the C2960X-FIBER-STK module would be best suited

● If the stack is a mix of switches in the same wiring closet and switches spread across wiring closets, the stack modules will be a mix of C2960X STACK, C2960X-FIBER-STK, and C2960X-HYBRID-STK

Points to Remember

● Fast convergence is not supported on stack switches with FlexStack-Extended ports

● The fiber stack ports will support 10-Gbps transceivers only. Refer to the list of supported 10-Gbps transceivers mentioned earlier

● The FlexStack-Extended modules support up to 40-Gbps stack bandwidth over longer distances

● The FlexStack-Plus module supports up to 80-Gbps stack bandwidth over short distances

● When adding a new switch to an existing stack, power off the new switch and then connect the stack cables. This is to prevent reload of the existing stack and stack master reelection

● To use FlexStack-Extended modules, all switches in the stack require upgrade to Cisco IOS Software Release 15.2(6)E or later

Reference from https://www.cisco.com/c/en/us/products/collateral/switches/catalyst-2960-x-series-switches/white-paper-c11-739615.html

More Related

Why SELECT Cisco 2960-X Series?

Cisco Catalyst 2960-X Switches: Enterprise Ready

Cisco Catalyst 2960-X vs. 2960-XR Series Switches

Cisco 2960S and 2960-X Series’ Problems from Users

How to Install or Replace an AC Power Supply in a Cisco 2960-X Switch?

Cisco Catalyst 2960-X Comparison & Features

The Latest Updated: SFP Modules for Cisco Catalyst 2960-X Series Switches

WS-C2960X-48TD-L & WS-C2960XR-48TD-I Tested, from Miercom


Why 25G Transceiver Choices?

March 14 2018 , Written by Cisco & Cisco Router, Network Switch Published on #Networking, #Cisco Transceiver Modules, #Data Center, #Cisco Modules & Cards, #Cisco & Cisco Network, #IT, #Technology

25G Speeds Up Data Centers and Campus Backbones NOW. With the massive increase in demand for data, equipment providers are responding with 25Gbps edge devices that require more bandwidth than can be provided on a traditional 10Gbps interface.

Whether it’s a server or a campus backbone, high speed data needs to be delivered cost-effectively in a small and low-power package.

In these bandwidth-intensive applications, the choice to go with 25G is clear. To get the same or better bandwidth, the number of 10G interfaces must be 3x (6x for redundancy) or the application needs to move to the larger, more expensive and power-hungry 40G QSFP.

SFP28: For 25G the dominant form factor is SFP28. The SFP28 standard relies on the 10G SFP+ (Small Form-factor Pluggable) standard for its mechanical specifications, while the electrical specifications have been upgraded from one 10Gbps lane operating at 10.3125Gbps to one 28Gbps-capable lane operating at 25Gbps plus error correction. 25G transceivers can be plugged into SFP+ sockets, and 10G transceivers can be plugged into SFP28 sockets, because they share the same electrical and mechanical pin-out; however, the host needs software support for the inserted device.

Cisco’s 25G transceiver choices include 25G Copper DAC (Direct Attached Cables), 25G AOC (Active Optical Cables) and 25G SR-S (Short Reach) transceivers.

These 25G devices are plugged into Cisco’s data center, campus and service provider switches and routers to provide high-speed 25Gbps connectivity. See Cisco’s 25G compatibility matrix for currently supported devices.

Why DAC?

25G DACs are generally used in data center applications and provide the lowest-cost fixed-length interconnect from TOR (Top of Rack) switches to high-performance servers. Depending on bandwidth and distance, DACs can be either passive or active and are generally based on twinax cable. At 25G, DACs can generally operate up to 5 meters without active components in the data path. Up to 2 meters, no FEC (Forward Error Correction) is needed; at 3 meters, FC-FEC (Fire Code Forward Error Correction) is needed; and at 5 meters, RS-FEC (Reed-Solomon Forward Error Correction) is needed to correct errors. Beyond 5 meters at 25Gbps, active components are generally needed in the data path to amplify and correct the signal. These components drive up cost, which leads network designers to consider optical interfaces.
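The length-to-FEC guidance above can be sketched as a small lookup. This follows only the distances quoted in the text, not a full reading of the IEEE 802.3by cable assembly classes, and the function name is hypothetical:

```python
# Sketch: choose the FEC mode for a 25G passive DAC from its length,
# per the distance guidance in the text above.
def fec_for_25g_dac(length_m: float) -> str:
    if length_m <= 2:
        return "none"        # short reach, no FEC required
    if length_m <= 3:
        return "FC-FEC"      # Fire Code (BASE-R) FEC
    if length_m <= 5:
        return "RS-FEC"      # Reed-Solomon FEC
    return "active cable or optics required"

for m in (1, 3, 5, 7):
    print(m, "m ->", fec_for_25g_dac(m))
```

The 7 m case is where the text's point about cost kicks in: past 5 m you need active electronics or an optical link such as an AOC.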

Why AOC?

25G AOCs provide a cost-effective solution for the same data center applications when distances longer than 5m are required. AOCs are generally offered in standard lengths of 1m, 2m, 3m, 5m and 10m, but they are usually limited to about 25 meters because of inventory stocking and slack-storage issues. For consistency, a data center will often be wired entirely with AOCs rather than a combination of AOCs and DACs.

Why SR?

25G-SR is used with standard OM3 or OM4 multimode fiber and is suitable for:

• Data centers that require up to 100 meters over OM4 fiber or 70 meters over OM3 fiber for interconnect between TOR switches and leaf or spine switches.

• Breakout configurations in conjunction with 100G-SR4 transceivers where the distances are less than 100 meters for OM4 fiber or 70 meters for OM3 fiber.

• Campus backbones, where the distances between distribution and aggregation switches are less than 100 meters for OM4 fiber or 70 meters for OM3 fiber.

Learn more about how Cisco’s 25G transceiver products are transforming the industry here

Original article from https://blogs.cisco.com/sp/too-slow-25g-speeds-up-data-centers-and-campus-backbones


More Related

Cisco 25G Transceivers for Next Generation Switches

Updated: Cisco Gigabit Ethernet Transceiver Modules for ASR 1000 Series Router

Is It Possible to Interconnect SFP, SFP+ and XENPAK/X2…?

Upgrade Seamlessly From 40Gb or 10Gb-Cisco 40/100Gb QSFP100 BiDi Pluggable Transceiver

Read more

Cisco Firepower 2100 Series, as a NGFW or a NGIPS

March 7 2018 , Written by Cisco & Cisco Router, Network Switch Published on #Networking, #NGFW, #Cisco Technology - IT News, #IT, #Technology

The new Cisco Firepower 2100 Series appliances help you achieve better security that doesn’t come at the expense of network performance.

The Cisco Firepower 2100 Series can be deployed either as a Next-Generation Firewall (NGFW) or as a Next-Generation IPS (NGIPS), and suits deployments from the Internet edge all the way into the data center.

Four new models are available: 2110, 2120, 2130, and 2140

• The Firepower 2110 and 2120 models offer 2.0 and 3 Gbps of firewall throughput, respectively. They provide increased port density and can provide up to sixteen (16) 1 Gbps ports in a 1 rack unit (RU) form factor.

• The Firepower 2130 and 2140 models provide 5 and 8.5 Gbps of firewall throughput, respectively. These models differ from the others in that they can be customized with network modules (NetMods). They can provide up to twenty-four (24) 1 Gbps ports in a 1 RU appliance, or up to twelve (12) 10 Gbps ports.

Firepower 2100 NGFWs uniquely provide sustained performance when supporting threat functions, such as IPS. This is done using an innovative dual multi-core architecture. Layer 2 and 3 functionality is processed on one NPU (Network Processing Unit). Threat inspection and other services are processed on a separate multi-core x86 CPU. By splitting the workload, we minimize the performance degradation that you see with competing solutions when turning on threat inspection.

Firepower 2100 Series Appliance Performance Highlights


Cisco Firepower Model                                          | 2110     | 2120   | 2130      | 2140
Throughput: FW + AVC (Cisco Firepower Threat Defense)1         | 2.0 Gbps | 3 Gbps | 4.75 Gbps | 8.5 Gbps
Throughput: FW + AVC + NGIPS (Cisco Firepower Threat Defense)1 | 2.0 Gbps | 3 Gbps | 4.75 Gbps | 8.5 Gbps

1 HTTP sessions with an average packet size of 1024 bytes

2 1024 bytes TCP firewall performance
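Read as a sizing table, the threat-inspection throughput figures above lend themselves to a simple lookup. A minimal sketch (the dict literal is transcribed from the table for illustration, and `smallest_model_for` is a hypothetical helper, not a Cisco tool):

```python
# FW + AVC + NGIPS throughput in Gbps, transcribed from the table above.
THREAT_THROUGHPUT_GBPS = {"2110": 2.0, "2120": 3.0, "2130": 4.75, "2140": 8.5}

def smallest_model_for(required_gbps):
    """Return the smallest Firepower 2100 model meeting the requirement,
    or None if no model in the series is large enough."""
    for model, gbps in sorted(THREAT_THROUGHPUT_GBPS.items(),
                              key=lambda kv: kv[1]):
        if gbps >= required_gbps:
            return model
    return None

print(smallest_model_for(4.0))   # 2130
```

For requirements above 8.5 Gbps the helper returns None, i.e. the sizing exercise moves beyond the 2100 Series.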

Learn more: Guide to the New Cisco Firepower 2100 Series

ASA Performance and Capabilities on Firepower 2100 Series Appliances


Cisco Firepower Appliance Model                        | 2110      | 2120        | 2130      | 2140
Stateful inspection firewall throughput1               | 3 Gbps    | 6 Gbps      | 10 Gbps   | 20 Gbps
Stateful inspection firewall throughput (multiprotocol)2 | 1.5 Gbps | 3 Gbps     | 5 Gbps    | 10 Gbps
Concurrent firewall connections                        | 1 million | 1.5 million | 2 million | 3 million
Firewall latency (UDP 64B microseconds)                | -         | -           | -         | -
New connections per second                             | -         | -           | -         | -
IPsec VPN throughput (450B UDP L2L test)               | 500 Mbps  | 700 Mbps    | 1 Gbps    | 2 Gbps
IPsec/Cisco AnyConnect/Apex site-to-site VPN peers     | -         | -           | -         | -
Maximum number of VLANs                                | -         | -           | -         | -
Security contexts (included; maximum)                  | 2; 25     | 2; 25       | 2; 30     | 2; 40
High availability                                      | Active/active and active/standby (all models)
VPN load balancing                                     | -         | -           | -         | -
Centralized management                                 | Centralized configuration, logging, monitoring, and reporting are performed by Cisco Security Manager or alternatively in the cloud with Cisco Defense Orchestrator (all models)
Adaptive Security Device Manager                       | Web-based, local management for small-scale deployments (all models)

1 Throughput measured with User Datagram Protocol (UDP) traffic under ideal test conditions.

2 “Multiprotocol” refers to a traffic profile consisting primarily of TCP-based protocols and applications like HTTP, SMTP, FTP, IMAPv4, BitTorrent, and DNS.

3 In unclustered configuration.

More detailed data sheet of Cisco NGFW:


Firepower 2100 Series PIDs: use the show inventory and show inventory expand commands (described in the Cisco FXOS Troubleshooting Guide for the Firepower 2100 Series) to display the PIDs on your appliance, or see the Product IDs list for the product IDs (PIDs) associated with the 2100 series.

More Related

Finding the Sweet Spot–Firepower 2100

The New Cisco Firepower 2100 Series

How to Deploy the Cisco ASA FirePOWER Services in the Internet Edge, VPN Scenarios and Data Center?

The Most Common NGFW Deployment Scenarios

Read more

What's in HPE's persistent memory/8GB NVDIMM?

March 1 2018 , Written by Cisco & Cisco Router, Network Switch Published on #HPE Servers, #Networking, #IT, #Technology

HPE’s new persistent memory modules, the NVDIMMs, combine the speed of DRAM with the resilience of flash. Each persistent memory module combines 8GB of DRAM and 8GB of flash in a single module that fits in a standard server DIMM slot.

DRAM operates at high speed but it's relatively expensive, and if a server shuts down unexpectedly any data in DRAM is lost. Flash is slower but it's nonvolatile, meaning it retains data when the power source is removed.

It's not intended to replace external storage; SSDs, spinning hard drives and tape are still best for storing large amounts of data. But it provides a portion of storage that sits on the high-speed memory bus and can retain data if a server crashes.

Applications in NVDIMM can run much faster, according to HPE, because data doesn't have to shuttle back and forth between the CPU and storage drives.

HPE isn't first to the game. Component makers including Micron Technology and Viking Technology make NVDIMMs, and other server makers are experimenting with forms of persistent memory.

But Patrick Moorhead, lead analyst at Moor Insights and Strategy, says HPE has a lead over its server rivals, at least for now.

HPE says NVDIMM offers up to six times the bandwidth of SSDs based on the high-speed NVMe (nonvolatile memory express) protocol, and provides up to 24 times more IOPS (input-output operations per second). 

The NVDIMMs are available as an option for two models of ProLiant Gen9 server, the DL380 and DL360.

It needs software makers on board as well. Operating systems need to be aware of NVDIMM to take advantage of it, and while standard applications will see performance gains, the biggest benefits will come to apps that are tuned for persistent memory.

HPE has written a driver for Windows Server 2012 R2 that will be available with the new servers. And HPE officials said Microsoft will support NVDIMM "out of the box" with Windows Server 2016, expected later this year. It's also working with Linux vendors and other software makers.

The NVDIMMs have a microcontroller and connect to a 96-watt lithium-ion battery integrated into the server. If a server crashes, the battery provides power to the module until the data in DRAM has been backed up to flash. The battery can support up to 128GB of persistent memory in a server.

HPE believes NVDIMM could benefit applications like databases, where in-memory processing is a fast-growing trend. It says tests have shown up to a 10x boost in database and analytics applications tuned to run on NVDIMMs.

Where applications haven't been tuned, it says users will still see a 2x increase in SQL Server database logging, for example.

It plans to offer future NVDIMMs that emphasize larger capacity over performance. And HPE officials said Intel may offer its high-speed 3D Xpoint technology in a persistent memory form.

HPE sees NVDIMMs as a stepping stone toward future computing architectures. Its Synergy systems, which have a new type of "composable" infrastructure, will all be enabled for persistent memory when they ship.

It hasn't given a commercial release date yet for Synergy, but it plans to ship beta units to some customers in May, HPE officials said.

Further out, HPE's goal is to collapse memory and storage into a single tier using a new technology called memristors. It hasn't given an arrival date for that system, which it calls the Machine.




More Related

HPE Persistent Memory/NVDIMMs for HPE ProLiant Servers

HPE ProLiant Gen10-The World’s Most Secure Industry Standard Servers

How to Choose a Server for Your Data Center’s Needs?

How to Buy a Server for Your Business?

A Guide for Storage Newbies: RAID Levels Explained

Read more

The Advantages of Disk-based Backup

February 8 2018 , Written by Cisco & Cisco Router, Network Switch Published on #Networking, #Technology, #IT

Read the two questions first: when should you use specific disk-based backup products and technologies? How can you fit them into your storage architecture?

With disk-based backup, disk replaces tape media as the backup target, usually in the form of a disk-based appliance. It is also possible to dedicate drives in a SAN array to backup data, although a dedicated disk-based appliance is usually more effective.

Disk-based backup offers faster backups and restores than tape, while eliminating many of the headaches that come with the storage and transport of tape media. When you combine the cost per gigabyte of disk and the advantages of data deduplication, disk-based backup becomes a compelling proposition for storage administrators. Furthermore, in a storage architecture, disk-based backups can form a nearline tier which matches data by age and usefulness to the cost of media.

Tape vs. disk as a backup target

Traditionally, backup has been carried out directly to tape media, but this can take a long time and can result in underutilisation if the backup process is not streaming data to tape at optimum speeds. Restoring data from tape can also be time-consuming, with at least a three-minute tape mount and a two-minute first-data-block seek (or far longer if the data has been sent off-site).

Data restores can also be unreliable with tape, because of the fragility of tape and also because data searches are often impossible unless the user knows the tape and file names required. Loss and theft are also problems commonly associated with tape.

Advantages of disk-based backup

Compared to tape, disk-based backup appliances allow administrators to perform backups more quickly and efficiently over the wire, as well as to restore data far more rapidly than from tape. With disk systems, data integrity is catered for by RAID protection.

With the prices of SATA and SAS drives decreasing, the cost per gigabyte of disk has begun to near that of tape. Disk-based backup can also boost utilisation compared to tape, because disk can absorb the backup stream at whatever rate it arrives, whereas a tape drive needs a sustained stream to stay busy. And the media will never be transported off-site partially full, as happens with tape.

Disk-based backup products allow users to stage data to disk before being run off to tape after a set period in a disk-to-disk-to-tape (D2D2T) configuration. This makes restores from backup available almost instantly. Disk-based backup can be a core component in a storage architecture, by matching the value and frequency of use of data to the cost of tape media with gradations of online, nearline and archived information.
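The D2D2T staging idea can be sketched as a retention-window check: backups land on disk and become candidates for the tape run-off once they age past a set period. All names and the 14-day window below are assumptions for illustration:

```python
# Minimal sketch of a disk-to-disk-to-tape (D2D2T) staging policy:
# backups stay on disk for fast restores, then are run off to tape
# once they age past a retention window. Names are hypothetical.
from datetime import datetime, timedelta

DISK_RETENTION = timedelta(days=14)  # assumed staging window

def destage_candidates(backups, now):
    """backups: list of (name, created_at) pairs.
    Return the names of backups due to be run off to tape."""
    return [name for name, created in backups
            if now - created >= DISK_RETENTION]

now = datetime(2018, 2, 8)
backups = [
    ("sales_db_full", datetime(2018, 1, 10)),    # 29 days old -> tape
    ("mail_incremental", datetime(2018, 2, 1)),  # 7 days old -> stays on disk
]
print(destage_candidates(backups, now))  # ['sales_db_full']
```

Restores requested inside the window come straight off disk; only aged data incurs the tape mount and seek delays described earlier.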

Disk-based backup methods and products
Disk-based backup can be carried out with standard or intelligent disk products, using NAS or virtual tape library (VTL) devices as targets.

Using standard disk – either standalone or SAN drives provisioned for the task – entails dedicating disk volumes as backup targets. The standard disk approach has the advantages of disk over tape, but there are drawbacks compared to intelligent NAS or VTL devices. You will need to provision volumes for each backup server, and as your environment changes and grows, so will the management overhead. Plain disk-based products without data deduplication will also work out five to 10 times more expensive than those with it.

Intelligent disk

Intelligent disk-based backup products include NAS and VTL, as well as those featuring data deduplication and thin provisioning.

Virtual tape libraries

VTLs emulate a tape library – your backup software sees disk space represented as virtual tape cartridges in a library. Data is then run off at user-determined intervals to physical tape. A VTL configuration allows businesses to repurpose their tape libraries to form a more cost-effective tier in their storage system as archive repositories to which aged or unnecessary data is relegated. VTLs can also be shared between backup products with the device represented as multiple backup targets to the backup software. Thin provisioning can also give you the jump on capacity issues.

NAS as a backup target

NAS is file-level disk storage hardware that sits on the LAN and interoperates with common file-access protocols like NFS and CIFS, appearing as a giant volume to which the backup software writes.


Whether you opt for a VTL or NAS device as a disk-based backup target will depend on your environment and how much data you are handling. Businesses with smaller volumes of data that are using dedicated Fibre Channel or iSCSI SANs and tape libraries will be more suited to VTLs, because VTLs are optimised to store block-based data in LAN-free backup environments with the explicit intention of off-loading to tape. On the other hand, NAS devices are optimised as file storage devices with finite storage capacity, and while users will eventually run data off to tape as they become full, they are not optimised to carry out that task on a day-to-day basis.

Data deduplication

Data deduplication has revolutionized disk-based backup. By eliminating redundant data in backup streams, data deduplication can often reduce data by ratios as high as 50:1.

Data deduplication works by applying an algorithm to data streams that strips out duplicated blocks and replaces each with a pointer to the single retained copy. For this reason, the longer deduplication is applied across your backups, the more it can reduce the amount of data. Data deduplication works best with data types that contain many repeated blocks; backed-up databases, for example, will tend to achieve far higher reduction ratios than collections of unrelated image files.
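A toy version of block-level deduplication makes the mechanism concrete: hash fixed-size blocks, store each unique block once, and keep an ordered list of pointers that reconstructs the stream. Real products use variable-length chunking and far more engineering; this sketch is purely illustrative:

```python
# Toy block-level deduplication: each unique block is stored once,
# and the stream layout is an ordered list of block hashes (pointers).
import hashlib

BLOCK_SIZE = 4096

def dedupe(data: bytes):
    store = {}    # hash -> unique block contents
    layout = []   # ordered pointers reconstructing the stream
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # keep only the first copy
        layout.append(digest)
    return store, layout

# Highly repetitive input, like repeated full backups of the same database:
data = b"A" * BLOCK_SIZE * 9 + b"B" * BLOCK_SIZE
store, layout = dedupe(data)
print(len(layout), "blocks ->", len(store), "unique")  # 10 blocks -> 2 unique
```

The original stream is recoverable by walking the layout and emitting each referenced block, which is why repetitive data (databases, repeated fulls) compresses so much better than unique image files.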

The review originally appeared on computerweekly.com.


More networking topics you can visit: http://blog.router-switch.com/

Read more