Cisco ASA 5506-X, ASA 5506W-X, and ASA 5506H-X Hardware Installation Guide
How do you maintain and upgrade ASA firewalls such as the popular ASA 5506-X, ASA 5506W-X, and ASA 5506H-X? In this article we share two guides for the Cisco ASA 5506-X, ASA 5506W-X, and ASA 5506H-X, covering the following sections:
- Connect the DC Adapter for the 5506H-X
- Install the Adjustable Power Supply Retainer
How to Connect the DC Adapter for the ASA 5506H-X?
1. This product relies on the building's installation for short-circuit (overcurrent) protection. Ensure that the protective device is rated not greater than 36 VDC, 5A. Statement 1005
2. This product requires short-circuit (overcurrent) protection to be provided as part of the building installation. Install only in accordance with national and local wiring regulations.
3. The device is designed to work with TN power systems.
To connect the DC power on your 5506H-X, follow these steps:
Step1: Connect the black and white lead wires to a 12 VDC source. The black lead is negative or ground and the white lead is positive. The output cable is 1.3 meters and the input cable is 1 meter in length.
Figure1. DC Power Adapter
- Black wire (negative)
- White wire (positive)
Step2: Plug the adapter cord into the ASA.
Note: The power adapters have 18 AWG wires for the input connection. Tinned bare wires are used for the input connection because there is no standard established for connector type. Screw terminal blocks are most often used.
Step3: Power on the ASA and check that it has power. See LEDs for information on the power LED.
How to Install the Adjustable Power Supply Retainer?
You can install an adjustable power supply retainer for the Delta and LiteOn power supplies in the rack-mount tray. The bracket kit contains the bracket, 2 M3 screws, and washers. The following figure shows the adjustable retainer.
Figure2. ASA Bracket Assembly
- Item 1: Two screws to loosen to change the bracket from the high to the low position
- Item 2: Two captive screws that attach the bracket to the rack-mount tray
Step1: Slide the rack-mount shelf containing the ASA(s) out of the rack.
Step2: At the back of the rack-mount shelf behind the power supplies, install the adjustable retainer.
- Loosen the 2 top screws (item 1 in the figure above) slightly to adjust the bracket for each power supply.
The Delta power supply uses the bracket extended to its tallest configuration. This configuration has item 1 shifted to the bottom of the slot on the bracket over the power supply. The LiteOn power supply uses the bracket extended to its shortest configuration. This configuration has item 1 shifted to the top of the slot on the bracket over the power supply.
- Install the bracket over the power supply and screw the 2 bottom M3 captive screws (item 2 in the figure above) on each side of the bottom of the bracket into the rack-mount tray.
The following figure shows the installed power supply retainer.
Figure3. Installed Power Supply Retainer
How many factors do you weigh when choosing a server? Use cases such as VM and container consolidation, virtualization, and scientific computing each affect the decision. Server selection is a genuine quandary for IT: questions of security, the role of file servers, and whether more servers or more CPUs per system will meet enterprise demand all plague enterprises.
In the following part, Stephen J. Bigelow (Senior Technology Editor in the Data Center and Virtualization media group at TechTarget Inc.) discusses some important factors in server purchases for your enterprise.
1. Enhanced server security plays a role in server purchases
Although server purchases aren't based solely on security capabilities, there is a proliferation of protection, detection and recovery features to consider for most enterprise tasks. Modern security features now extend well beyond traditional Trusted Platform Modules.
For example, secure servers can offer protection through a hardware-based root of trust, which uses hardware validation of server management platforms, such as an integrated Dell Remote Access Controller, and server firmware as the system boots. Validation typically includes cryptographic signatures to ensure that only valid firmware and drivers are running on the server. Similarly, firmware and driver updates are usually cryptographically signed to verify their authenticity or source. You can execute validations periodically even though the system might not reboot for months. Native data encryption is increasingly available at the server processor level to protect data in flight and at rest.
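The boot-time validation described above can be sketched in miniature. A real platform uses asymmetric signatures verified against keys fused into hardware; this illustrative Python sketch substitutes an HMAC over a SHA-256 digest, and all names here (`TRUSTED_KEY`, `sign_firmware`, `validate_firmware`) are hypothetical, not any vendor's API.

```python
# Sketch of a hardware root of trust refusing to run unsigned firmware.
# HMAC stands in for the asymmetric signatures real platforms use.
import hashlib
import hmac

TRUSTED_KEY = b"root-of-trust-key"  # hypothetical; burned into hardware in practice

def sign_firmware(image: bytes, key: bytes = TRUSTED_KEY) -> bytes:
    """Vendor side: produce a signature over the firmware image."""
    return hmac.new(key, hashlib.sha256(image).digest(), hashlib.sha256).digest()

def validate_firmware(image: bytes, signature: bytes, key: bytes = TRUSTED_KEY) -> bool:
    """Boot side: recompute the signature and compare in constant time."""
    expected = hmac.new(key, hashlib.sha256(image).digest(), hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

good = b"firmware v2.1"
sig = sign_firmware(good)
print(validate_firmware(good, sig))         # True -- valid image boots
print(validate_firmware(b"tampered", sig))  # False -- tampered image is rejected
```

The same check can run periodically at runtime, which is why validation does not depend on the months-long reboot cycle mentioned above.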
An increasing number of systems can detect unauthorized or unexpected changes in system firmware images and firmware configurations, enforcing a system lockdown to prevent such changes and alerting administrators when change attempts occur at the firmware level. Servers frequently include persistent event logging, which includes an indelible record of all activity.
And servers benefit from various recovery capabilities. For example, automatic BIOS/firmware recovery can restore firmware to a known good state after the system detects any flaw or compromise in the firmware code base. Some systems can apply similar restoration to the OS by detecting possible malicious activity and restoring the OS to a known good state as well. And system erasure features can be used to wipe all hardware configuration settings of the server, including BIOS data, diagnostic data, management configuration states, nonvolatile cache, and internal SD cards. System erasure can be particularly important before redeploying the server or removing it from service.
When choosing a server, evaluate the importance of certain features based on the use cases.
2. For data servers, focus on network I/O
File servers, or data servers, can take many shapes and sizes depending on the needs of each specific business. The actual compute resources needed in a data server are typically light. For example, file servers rarely process data or make computations that demand extensive processor or memory capacity. Web servers may include more resources if the system will also be running code or back-end applications, such as databases. If the organization plans to employ virtualization to consolidate multiple data servers onto a single physical box, the processor and memory requirements will need a closer look.
However, the emphasis for data servers is more frequently focused on network I/O, which can be critical for accessing shared/centralized storage resources and exchanging files or web content with many simultaneous users -- network bottlenecks are commonplace. If the data server will employ internal storage, the choice of disk types and capacity can have a significant influence on storage access performance and resilience. Data servers can deploy a fast 10 Gigabit Ethernet port or multiple 1 GbE ports, which you can trunk together for more speed and resilience.
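Trunking several 1 GbE ports works because the switch pins each flow to one member link, typically by hashing header fields. The sketch below is illustrative only; the actual hash inputs and algorithm vary by vendor and configuration.

```python
# Minimal sketch of flow-to-link pinning in a trunked link group:
# hash the flow's addressing fields and take the result modulo the
# number of member links.
import zlib

def member_link(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
                num_links: int) -> int:
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return zlib.crc32(key) % num_links

# Every packet of a given flow hashes to the same member link,
# so per-flow packet ordering is preserved across the bundle.
flow = ("10.0.0.1", "10.0.0.2", 49152, 80)
links = {member_link(*flow, num_links=4) for _ in range(100)}
print(len(links))  # 1 -- a single flow never sprays across links
```

This also explains the caveat of link aggregation: one elephant flow can only ever use one member link's worth of bandwidth, which is why a single fast 10 GbE port can beat four trunked 1 GbE ports for large transfers.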
As just one example, a modestly configured Dell EMC PowerEdge R430 rack server offers two processor sockets, 16 GB of memory, four 1 GbE ports, and a 1 TB 7.2K rpm Serial Advanced Technology Attachment (SATA) 6 Gbps disk drive by default. However, you can configure the R430 chassis to accept varied disk configurations with up to 10 hot-pluggable Serial-Attached SCSI (SAS), SATA, nearline SAS, or solid-state drives if the business chooses to place storage in the server itself. You can also enhance network performance through a choice of Peripheral Component Interconnect Express (PCIe) network adapters or storage host bus adapters.
Systems versus CPUs
Many data centers are shrinking as virtualization, fast networking and other technologies allow fewer servers to host more workloads. The quandary for server purchases then becomes server count versus CPU count. Is it better to have more servers or more resources within fewer servers? Packing more capability into fewer boxes can reduce overall capital expenses, data center floor space and power and cooling demands. But hosting more workloads on fewer boxes can also increase risk to the business because more workloads are affected if the server fails or requires routine maintenance. Clustering, snapshot restoration and other techniques can help to guard against hardware failures, but a business still needs to establish a comfortable balance between server count and server capability, regardless of how the servers are used.
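The server-count trade-off above comes down to simple arithmetic: spreading a fixed set of workloads over fewer, larger servers cuts the box count but raises the number of workloads hit by a single failure or maintenance window. A back-of-the-envelope sketch (the workload and server counts are illustrative):

```python
# Blast-radius arithmetic for server consolidation decisions.
def workloads_per_failure(total_workloads: int, servers: int) -> float:
    """Average workloads affected when one server fails or is serviced."""
    return total_workloads / servers

print(workloads_per_failure(120, 40))  # 3.0 -> many small boxes, small blast radius
print(workloads_per_failure(120, 6))   # 20.0 -> few big boxes, large blast radius
```

Clustering and live migration shrink the practical impact of that blast radius, but they do not eliminate it, which is why the balance remains a business decision rather than a purely technical one.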
The original article is available at http://searchdatacenter.techtarget.com/tip/Security-vendor-choices-affect-server-purchases-for-IT-buyers
What makes Nexus 3100-V unique? Here is a summary of the most important highlights:
- Support of 100G uplinks
- Bigger buffer (16MB)
- Double System memory (16GB)
- Quadruple Ingress ACL: increased from 4,000 to 16,000
- VxLAN routing
Watch this video if you’d like a brief tour of the Cisco campus and to see Houfar Azgomi present the Nexus 3100-V.
Cisco Nexus 3100-V platform switches summary
Cisco Nexus 3132Q-V Switch
32 x 40-Gbps QSFP+ ports (all ports are capable of 10 or 40 Gbps)
Cisco Nexus 31108PC-V Switch
48 x 10-Gbps SFP+ ports and 6 x QSFP28 ports (all QSFP ports can operate at 40 or 100 Gbps)
Cisco Nexus 31108TC-V Switch
48 x 10GBASE-T ports and 6 x QSFP28 ports (all QSFP ports can operate at 40 or 100 Gbps)
Cisco Nexus 31108TCV-32T Switch
32 x 10GBASE-T ports and 6 x QSFP28 ports (all QSFP ports can operate at 40 or 100 Gbps)
More Info about Nexus 3100-V Models
The Cisco Nexus 3132Q-V is a 40-Gbps Quad Small Form-Factor Pluggable (QSFP) switch with 32 Enhanced QSFP (QSFP+) ports. It also has 4 SFP+ ports that are internally multiplexed with the first QSFP port. Each QSFP+ port can operate in native 40-Gbps mode or 4 x 10-Gbps mode, with up to a maximum of 104 x 10-Gbps ports.
Cisco Nexus 3132Q-V Switch
The Cisco Nexus 31108PC-V is a 10-Gbps SFP+-based ToR switch with 48 SFP+ ports and 6 QSFP28 ports. Each SFP+ port can operate in 100-Mbps, 1-Gbps, or 10-Gbps mode, and each QSFP28 port can operate in native 100-Gbps or 40-Gbps mode or in 4 x 10-Gbps mode, offering flexible migration options. This is a true PHY-less switch optimized for low latency and low power consumption.
Cisco Nexus 31108PC-V Switch
The Cisco Nexus 31108TC-V is a 10GBASE-T switch with 48 10GBASE-T ports and 6 QSFP28 ports. This switch is well suited for customers who want to reuse existing copper cabling while migrating from 1-Gbps to 10-Gbps servers. Each QSFP28 port can operate in native 100-Gbps or 40-Gbps mode or in 4 x 10-Gbps mode. The 48 copper ports support 100-Mbps, 1-Gbps, and 10GBASE-T operation, and the 6 QSFP ports support 10, 40, and 100 Gbps.
The Cisco Nexus 31108TCV-32T is the Cisco Nexus 31108TC-V with 32 10GBASE-T ports and 6 QSFP28 ports enabled. The ports are enabled through software licensing. This switch provides a cost-effective solution for customers who require up to 32 10GBASE-T ports per rack. It comes with a 32-port 10GBASE-T license preinstalled; to enable the remaining 16 10GBASE-T ports, the customer installs the 16-port upgrade license.
Cisco Nexus 31108TC-V and 31108TCV-32T Switch
Learn More: Nexus 3000 Model Comparison & Licensing Options
5 Benefits You Get When Buying a Top of Rack Switch Nexus 3100V:
- 100G uplinks: Cisco predicts that global data center IP traffic will grow 31% per year over the next five years. Given that growth, 100G is becoming the new norm for higher-bandwidth, big data, and IP storage workloads.
- 16 MB enhanced buffers: Compared to the 12 MB buffer of the previous generation, the Nexus 3100-V models offer 16 MB of buffering to absorb bursts of traffic. You won’t have to worry when you need to expand your network in the future, because these deep buffers are designed for highly oversubscribed environments.
- 16 GB Increased system memory: In the previous model – Cisco Nexus 3100XL – Cisco already increased the system memory from 4GB to 8GB in order to introduce network programmability features developed in NXOS 7.x. But as networks are becoming more complex, competitive businesses need more memory to store more objects. Hence, Cisco has doubled the capacity again in the Nexus 3100V models from 8GB to 16GB to improve capacity for object-model programming.
- Quadrupled ingress ACL table size to 16,000: for greater security, finer traffic control, and more flexible policy management
- Support full VxLAN routing (layer 3 VxLAN): With this, workloads in different segment IDs can directly communicate, whereas with VxLAN bridging (layer 2 VxLAN), workloads need to be in the same segment ID to interact.
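The deep-buffer claim above is easy to put into rough numbers: when ingress momentarily exceeds egress (an incast burst), the shared buffer fills at the rate of the mismatch, and its size determines how long the switch can absorb the burst before dropping. The rates and port counts below are illustrative, not datasheet figures.

```python
# How long a shared buffer can absorb a burst when ingress > egress.
def absorb_time_us(buffer_mb: float, ingress_gbps: float, egress_gbps: float) -> float:
    """Microseconds until the buffer fills at the given rate mismatch."""
    excess_bps = (ingress_gbps - egress_gbps) * 1e9  # bits/s of overload
    buffer_bits = buffer_mb * 8e6                    # MB -> bits (decimal MB)
    return buffer_bits / excess_bps * 1e6

# 4-to-1 incast onto a single 10G port: four 10G senders, one 10G receiver
print(round(absorb_time_us(16, 40, 10)))  # 4267 microseconds of headroom
print(round(absorb_time_us(12, 40, 10)))  # 3200 with the previous 12 MB buffer
```

The extra third of headroom matters most exactly in the highly oversubscribed, bursty environments the bullet list describes.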
Cisco continues to bring you true flexibility and scalability through rich architectural options for any size of data center to address increasing business requirements. You can never go wrong with more connectivity options and a diverse set of form factors to meet ever-changing data center needs.
Different types of licenses are required for the Nexus 5500 and Nexus 5600.
Table 1-15 describes each license and the features it enables.
Table 1-15 Nexus 5500 Product Licensing
- FabricPath Services Package
- FCoE NPV Package
- Layer 3 Base Services Package (N55-BAS1K9): unlimited static routes and a maximum of 256 dynamic routes
- Layer 3 Enterprise Services Package (N55-LAN1K9): includes the following features in addition to the ones under the N55-BAS1K9 license
- Storage Protocols Services Package: native Fibre Channel
NOTE: To manage the Nexus 5500 and Nexus 5600, two types of licenses are needed: the DCNM LAN and DCNM SAN. Each is a separate license.
Nexus switches have a grace period, which is the amount of time the features in a license package can continue functioning without a license.
Enabling a licensed feature without installing its license key starts the grace-period counter. You then have 120 days to install the appropriate license keys, disable the use of that feature, or disable the grace period feature.
If at the end of the 120-day grace period the device does not have a valid license key for the feature, the Cisco NX-OS software automatically disables the feature and removes the configuration from the device. There is also an evaluation license, which is a temporary license. Evaluation licenses are time bound (valid for a specified number of days) and are tied to a host ID (device serial number).
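The 120-day bookkeeping above is straightforward to sketch: given the date an unlicensed feature was first enabled, the remaining grace period is the distance to a fixed deadline. The dates below are illustrative, and `grace_days_left` is a hypothetical helper, not an NX-OS command.

```python
# Sketch of the NX-OS license grace-period countdown described above.
from datetime import date, timedelta

GRACE_DAYS = 120  # per the grace-period rule in the text

def grace_days_left(enabled_on: date, today: date) -> int:
    """Days remaining in the grace period (0 once it has expired)."""
    deadline = enabled_on + timedelta(days=GRACE_DAYS)
    return max(0, (deadline - today).days)

print(grace_days_left(date(2017, 1, 1), date(2017, 2, 1)))  # 89 days left
print(grace_days_left(date(2017, 1, 1), date(2017, 6, 1)))  # 0 -- feature would be disabled
```

An evaluation license works differently: its clock is a fixed validity window tied to the device serial number rather than a countdown started by feature use.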
Originally, most of the traffic data center network architects designed around was client-to-server communication or what we call “north-south.” With client-to-server traffic being the most dominant, network engineers/architects primarily built data centers based on the traditional Core/Aggregation/Access layer design, as seen in Figure1, and the Collapsed Core/Aggregation design, as seen in Figure2.
Figure1. Cisco Three-Tier Network Design
Figure2. Collapsed Core/Aggregation Network Design
In the three-tier and Collapsed Core designs, the architecture is set up for allowing optimal traffic flow for clients accessing servers in the data center, and the return traffic and links between the tiers are set for optimal oversubscription ratios to deal with traffic coming in to and out of the data center. As the increase in link speeds and virtualization became more prevalent, network engineers looked for a way to use all links in between any tiers and hide spanning tree from blocking certain links, as shown in Figure3. To do this in the data center, the Nexus product line introduced virtual Port Channel (vPC). vPC enables two switches to look like one, from a Layer 2 perspective, allowing for all links to be active between tiers, as seen in Figure4.
Figure3. Spanning Tree between Tiers
Figure4. Virtual Port Channel (vPC)
In the latest data center trends, traffic patterns have shifted with virtualization and new application architectures. This new traffic trend is called "east-west," meaning the majority of the traffic and bandwidth is actually between nodes within the data center, such as when migrating a virtual machine from one node to another or when clustering applications.
A new topology, spine-leaf, emerged to serve these patterns, as seen in Figure5. Spine-leaf has several desirable characteristics that play into the hands of engineers who need to optimize east-west traffic.
Figure5. Spine-Leaf Network Topology
Just to name a few benefits: a spine-leaf design scales horizontally through the addition of spine switches, which adds availability and bandwidth, something a spanning tree network cannot do. Spine-leaf also uses routing with equal-cost multipathing (ECMP) so that all links remain active, with higher availability during link failures. With these characteristics, spine-leaf has become the de facto architecture for network engineers and architects building their next wave of data centers.
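The horizontal-scaling property is easy to quantify: a leaf's oversubscription ratio is its server-facing bandwidth divided by its spine-facing uplink bandwidth, so adding spines lowers the ratio without re-cabling the access layer. The port counts and speeds below are illustrative.

```python
# Leaf oversubscription in a spine-leaf fabric: downlink bw / uplink bw.
def leaf_oversubscription(server_ports: int, server_gbps: float,
                          spines: int, uplink_gbps: float) -> float:
    """Ratio of a leaf's server-facing to spine-facing bandwidth."""
    return (server_ports * server_gbps) / (spines * uplink_gbps)

# 48 x 10G servers per leaf, one 40G uplink per spine:
print(leaf_oversubscription(48, 10, 4, 40))  # 3.0 -> 3:1 with 4 spines
print(leaf_oversubscription(48, 10, 6, 40))  # 2.0 -> 2:1 after adding 2 spines
```

Because ECMP spreads flows across all spine uplinks, each added spine contributes usable bandwidth immediately, unlike a spanning-tree design where extra links sit blocked.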
Describe the Cisco Nexus Product Family
The Cisco Nexus product family is a key component of Cisco's unified data center architecture, the Unified Fabric. The objective of the Unified Fabric is to build highly available, highly secure network fabrics.
Using Cisco Nexus products, you can build end-to-end data center designs based on either a three-tier or a spine-leaf architecture. The Cisco Nexus product line also offers high-density 10G, 40G, and 100G ports.
Modern data center designs need the following properties:
- Effective use of available bandwidth in designs where multiple links exist between source and destination but one path is active and the other is blocked by spanning tree, or where the design limits you to Active/Standby NIC teaming. This is addressed today using Layer 2 multipathing technologies such as FabricPath and virtual Port Channels (vPC).
- Computing resources must be optimized, which happens by building a computing fabric and dealing with CPU and memory as resources that are utilized when needed. Doing capacity planning for all the workloads and identifying candidates to be virtualized help reduce the number of compute nodes in the data center.
- Using the concept of a service profile and booting from a SAN in the Cisco Unified Computing system will reduce the time to instantiate new servers. This makes it easy to build and tear down test and development environments.
- Power and cooling are key problems in the data center today. Ways to address them include using Unified Fabric (converged SAN and LAN), using Cisco virtual interface cards, and using technologies such as VM-FEX and Adapter-FEX. Rather than using, for example, eight 10G links, you can use two 40G links, and so on. Reducing cabling creates efficient airflow, which in turn reduces cooling requirements.
- The concept of hybrid clouds can benefit your organization. Hybrid clouds extend your existing data center to public clouds as needed, with consistent network and security policies. Cisco is helping customers utilize this concept using CliQr/Cisco CloudCenter.
- Improved reliability during software updates, configuration changes, or adding components to the data center environment, which should happen with minimum disruption.
- Hosts, especially virtual hosts, must move without the need to change the topology or require an address change.
The following Figure shows the different product types available at the time this chapter was written.
Cisco Nexus Product Family
NOTE: Cisco is always innovating and creating new modules/switches. Therefore, while studying for your exam, it is always a good idea to check Cisco.com/go/nexus to verify new modules/switches and their associated features.