By default the Cisco Catalyst 2950 is not configured for remote administration. Enabling remote administration involves configuring an IP address on the switch and enabling telnet access. Once these configurations are completed, the Cisco Catalyst 2950 can be managed by IP address.
Things You'll Need
- Cisco serial console cable
- Windows XP computer connected to the local network
- Privileged exec password for the Cisco Catalyst 2950
- IP address, subnet mask and gateway IP address for the switch
Instructions to Manage Cisco Catalyst 2950 by IP Address
1. Connect the Cisco serial console cable into the console port on the Cisco Catalyst 2950 switch and connect the other end of the cable into the 9-pin serial port, which is usually located on the back or side of the Windows XP computer.
2. Click the "Start" button, select "Run," type "hypertrm" and press the "Enter" key; the HyperTerminal program will appear. Type a name for the session, such as "Cisco 2950," in the "Name:" field and click the "OK" button. In the "Connect using:" drop-down menu, select the COM port used to connect the Windows XP computer to the Cisco 2950 switch and press the "Enter" key. Then set the "Bits per second:" drop-down menu to "9600," set the "Flow control" drop-down menu to "None" and press the "Enter" key.
3. Press the "Enter" key and the Cisco command prompt will appear. Type "enable" and press "Enter." Then enter the password if requested.
4. Type "config term" and press the "Enter" key to enter "Configuration Mode" on the switch.
5. Type "line vty 0 4" and press the "Enter" key. Type "password abcd" (replacing "abcd" with the password you wish to use to secure telnet access) and press the "Enter" key. Then type "login" and press the "Enter" key.
6. Type "interface Vlan1" and press the "Enter" key. Then type "ip address 10.0.0.1 255.0.0.0," replacing the "10.0.0.1 255.0.0.0" with the IP address and subnet mask assigned to the switch. Press the "Enter" key.
7. Type "exit" and press the "Enter" key. Then type "ip default-gateway x.x.x.x," replacing "x.x.x.x" with the gateway IP address for the switch. Press the "Enter" key. Then type "end" and press the "Enter" key. Type "copy run start" and press the "Enter" key to save the configuration. Type "exit" and press the "Enter" key.
8. Click "Start" on the Windows XP computer. Click "Run" and then type "cmd" and press the "Enter" key. Type "telnet x.x.x.x" on the command line, replacing "x.x.x.x" with the IP address just configured on the Cisco Catalyst 2950. Press the "Enter" key. Type the telnet password just programmed into the Cisco Catalyst 2950 when requested. Press the "Enter" key and the Cisco command prompt should display so you can now manage the switch over the network.
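The CLI portion of the steps above can be condensed into a single configuration session. A sketch with placeholder values (the IP address 10.0.0.1 255.0.0.0 and gateway 10.0.0.254 are illustrative; the "no shutdown" line is an addition the numbered steps omit, in case interface Vlan1 is administratively down on your unit):

```
Switch>enable
Switch#config term
Switch(config)#line vty 0 4
Switch(config-line)#password abcd
Switch(config-line)#login
Switch(config-line)#exit
Switch(config)#interface Vlan1
Switch(config-if)#ip address 10.0.0.1 255.0.0.0
Switch(config-if)#no shutdown
Switch(config-if)#exit
Switch(config)#ip default-gateway 10.0.0.254
Switch(config)#end
Switch#copy run start
```

After this, "telnet 10.0.0.1" from any host on the network should prompt for the vty password.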
More Related to Cisco 2950 Series:
When Cisco came out with its Unified Compute System (UCS) blades a couple of years back, there was plenty of skepticism about how the company would fare venturing into the unfamiliar pastures of the server landscape. Last month's announcement that the company passed the 10,000-customer milestone for UCS sales laid many of those doubts to rest.
With IDC rating blades as the fastest growing server segment during the next several years, this bodes well for Cisco's growing presence in the marketplace.
"We're hearing from customers who are reporting all-in savings in the range of 40 percent on the cost of computing," said Todd Brannon, senior manager, Data Center and Virtualization, Cisco. "The savings stem from a variety of sources: lower capex as the platform efficiently scales, reduced administrator time, density/power savings and reduced software licensing costs as more workload lands on fewer servers."
One customer told Brannon that his CTO took a Cisco blade straight out of the box and inserted it into a chassis slot; as the system identified the new resource and integrated it into the available pool, the team congratulated the CTO on his first server deployment.
New Cisco UCS Blades
Since our last snapshot around two years ago, Cisco server blade releases have been largely in lockstep with the roll-out of the Intel Xeon processor roadmap. Two years ago, the company released the Cisco UCS B200 M1 and B250 M1 blades, which are based on the Intel Xeon processor 5500 series. In the past year, it introduced the Cisco UCS B200 M2 and B250 M2, both based on the Intel Xeon processor 5600 series.
The UCS B200 blade server is a half-width, 2-socket blade server with up to 192 GB of memory. It can deliver substantial throughput and scalability.
The Cisco UCS B250 M2 Extended Memory Blade Server is aimed at maximizing performance and capacity for demanding virtualization and large dataset applications. It is a full-width, 2-socket blade server that supports up to 384 GB of memory.
In addition, the Cisco UCS B230 M2 and B440 M2 blade servers are based on the Intel Xeon processor E7 family. These two servers are follow-on models to earlier-released M1 versions that were based on the Intel Xeon Processor 7500 series.
The Cisco UCS B230 M2 Blade Server is a two-socket server supporting up to 20 cores and 512 GB of memory. The B230 M2 extends the capabilities of the Cisco Unified Computing System by delivering higher levels of performance, efficiency and reliability in a more compact, half-width form factor.
The UCS B440 M2 is a 4-socket blade that can support up to 40 cores and 512GB of memory. It is best for enterprise-class applications.
"We will continue to roll out blades targeted at both infrastructure and enterprise-class applications," said Brannon. "Last year, we delivered nine benchmarking world records at the launch of the Intel Xeon processor E7 family."
Cisco UCS Racks
Cisco offers more than just blades. It also provides a range of UCS rack servers. Much like it has done with blades, Cisco has transitioned the rackmount servers from M1 to M2 models to support the newest Intel Xeon Processor 5600 or E7 family.
The Cisco UCS C200 M2 and UCS B210 M2 servers are high-density, 2-socket rackmount servers built for production-level network infrastructure, web services, and mainstream data center, branch and remote-office applications. The Cisco UCS C250 M2 server is a high-performance, memory-intensive, 2-socket, 2-rack unit (RU) rackmount server designed for virtualization and large dataset workloads.
Two rackmount servers use the Intel Xeon processor E7 family. The Cisco UCS C260 M2 Rack-Mount Server is a high-density, 2-socket platform that offers compact performance for enterprise-critical applications. The C260 M2 server's maximum 1TB of memory and 16 drives make it good for memory-bound or disk-intensive applications.
The Cisco UCS C460 M2 Rack-Mount Server has enough processing power, memory and local storage to house mission-critical applications, as well as server consolidation of resource-intense workloads.
"Cisco UCS is a next-generation data center server platform that unites compute, network, storage access and virtualization into a cohesive system designed to outperform previous server architectures, increase operational agility and flexibility while potentially dramatically reducing overall data center costs," said Brannon. "The system is programmable using single point, model-based management to simplify and speed deployment of applications and services running in bare-metal, virtualized, and cloud-computing environments."
---Reading from serverwatch.com
More Related Cisco UCS news:
It is important to understand how to access switch ports. The 3550 switch uses the type slot/port convention, just like a 2621 router. For example, FastEthernet 0/3 is 10/100BaseT port 3.
The 3550 switch type slot/port command can be used with either the interface command or the show command. The interface command allows you to set interface specific configurations. The 3550 switch has only one slot: zero (0), just like the 1900.
Network Layout: Work with the saved network that you used to configure devices in lab 8.27.
1. To configure an interface on a 3550 switch, go to global configuration mode and use the interface command as shown.
3550A#config term
Enter configuration commands, one per line. End with CTRL/Z
3550A(config)#interface ?
Async Async interface
BVI Bridge-Group Virtual Interface
Dialer Dialer interface
FastEthernet FastEthernet IEEE 802.3
Group-Async Async Group interface
Lex Lex interface
Loopback Loopback interface
Multilink Multilink-group interface
Null Null interface
Port-channel Ethernet Channel of interfaces
Transparent Transparent interface
Tunnel Tunnel interface
Virtual-Template Virtual Template interface
Virtual-TokenRing Virtual TokenRing
Vlan Catalyst Vlans
fcpa Fiber Channel
range interface range command
2. The next output asks for the slot. Since the 3550 switch is not modular, there is only one slot, which is 0. Oddly, the help output lists <0-2>, but you can only type 0 as the slot; any other slot number will give you an error. The next output gives us a slash (/) to separate the slot/port configuration.
3550A(config)#interface fastethernet ?
<0-2> FastEthernet interface number
3550A(config)#interface fastethernet 0?
/
3550A(config)#interface fastethernet 0/?
<0-12> FastEthernet interface number
3. After the 0/configuration command, the above output shows the amount of ports you can configure. The output below shows the completed command.
3550A(config)#interface fastethernet 0/4
4. Once you are in interface configuration mode, the prompt changes to (config-if). After you are at the interface prompt, you can use the help commands to see the available commands.
Interface configuration commands:
arp Set arp type (arpa, probe, snap) or timeout
bandwidth Set bandwidth informational parameter
carrier-delay Specify delay for interface transitions
cdp CDP interface subcommands
channel-group Etherchannel/port bundling configuration
default Set a command to its defaults
delay Specify interface throughput delay
description Interface specific description
dot1x IEEE 802.1X subsystem
duplex Configure duplex operation.
exit Exit from interface configuration mode
help Description of the interactive help system
hold-queue Set hold queue depth
ip Interface Internet Protocol config commands
keepalive Enable keepalive
load-interval Specify interval for load calculation for an interface
logging Configure logging for interface
mac-address Manually set interface MAC address
mls mls interface commands
mvr MVR per port configuration
no Negate a command or set its defaults
ntp Configure NTP
You can switch between interface configurations by using the int fa 0/# command at any time from global configuration mode.
5. Let’s look at the duplex and speed configurations for a switch port.
3550A(config-if)#duplex ?
auto Enable AUTO duplex configuration
full Force full duplex operation
half Force half-duplex operation
3550A(config-if)#speed ?
10 Force 10 Mbps operation
100 Force 100 Mbps operation
auto Enable AUTO speed configuration
6. Since the switch port’s duplex and speed settings are already set to auto by default, you do not need to change the switch port settings. It is recommended that you allow the switch port to auto negotiate speed and duplex settings in most situations. In a rare situation, when it is required to manually set the speed and duplex of a switch port, you can use the following configuration.
3550A(config-if)#duplex full
Duplex will not be set until speed is set to non-auto value
Full duplex is the transmission of data in two directions simultaneously; it has a higher throughput than half duplex.
- There are no collision domains with this setting
- Both sides must have the capability of being set to full duplex
- Both sides of the connection must be configured with full duplex
- Each side transmits and receives at full bandwidth in both directions
7. Notice in the above command that to run full duplex, you must set the speed to non-auto value.
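Putting that together, a manual speed/duplex configuration must set the speed to a non-auto value before (or along with) forcing duplex. A sketch (the port number is illustrative):

```
3550A(config)#interface fastethernet 0/4
3550A(config-if)#speed 100
3550A(config-if)#duplex full
```

With speed forced to 100, the duplex command takes effect immediately instead of producing the "Duplex will not be set" warning.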
8. In addition to the duplex and speed commands that can be configured on a switch port, you can also turn on what is called portfast. The portfast command allows a switch port to come up quickly. Typically a switch port waits 50 seconds for spanning tree to go through its "gotta make sure there are no loops!" cycle. However, if you turn portfast on, you had better be sure you do not create a physical loop in the switch network; a spanning-tree loop can severely degrade or bring down your network. Here is how you would enable portfast on a switch port.
3550A(config-if)#spanning-tree ?
bpdufilter Don't send or receive BPDUs on this interface
bpduguard Don't accept BPDUs on this interface
cost Change an interface's spanning tree port path cost
guard Change an interface's spanning tree guard mode
link-type Specify a link type for spanning tree protocol use
port-priority Change an interface's spanning tree port priority
portfast Enable an interface to move directly to forwarding on link up
stack-port Enable stack port
vlan VLAN Switch Spanning Tree
9. The command above shows the available options for the spanning-tree command. We want to use the portfast command.
3550A(config-if)#spanning-tree portfast
%Warning: portfast should only be enabled on ports connected to a single
host. Connecting hubs, concentrators, switches, bridges, etc... to this
interface when portfast is enabled, can cause temporary bridging loops.
Use with CAUTION
%Portfast has been configured on FastEthernet0/4 but will only
have effect when the interface is in a non-trunking mode.
10. Notice the message the switch provides when enabling portfast. Although it seems like the command did not take effect, as long as the port is in access mode (discussed in a minute), the port will now be in portfast mode.
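Since the second message notes that portfast only takes effect in non-trunking mode, a sketch that forces the port into access mode before enabling portfast looks like this (the port number is illustrative):

```
3550A(config)#interface fastethernet 0/4
3550A(config-if)#switchport mode access
3550A(config-if)#spanning-tree portfast
```

With the port statically in access mode, the portfast setting applies as soon as the link comes up.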
11. After you make any changes you want to the interfaces, you can view the different interfaces with the show interface command. The switch output below shows the command used to view a 10/100BaseT interface on the 3550 switch.
3550A#sh int f0/4
FastEthernet0/4 is up, line protocol is up
Hardware is Fast Ethernet, address is 00b0.c5e4.e2cf (bia 00b0.c5e4.e2cf)
MTU 1500 bytes, BW 10000 Kbit, DLY 1000 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full duplex, 100Mb/s
input flow-control is off, output flow-control is off
ARP type: ARPA, ARP Timeout 04:00:00
Last input never, output 1w6d, output hang never
Last clearing of "show interface" counters never
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
Queueing strategy: fifo
Output queue :0/40 (size/max)
5 minute input rate 0 bits/sec, 0 packets/sec
5 minute output rate 0 bits/sec, 0 packets/sec
1 packets input, 64 bytes, 0 no buffer
Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 watchdog, 0 multicast, 0 pause input
0 input packets with dribble condition detected
1 packets output, 64 bytes, 0 underruns
0 output errors, 0 collisions, 3 interface resets
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier, 0 PAUSE output
0 output buffer failures, 0 output buffers swapped out
12. In addition to the show interface command, you can use the show running-config command to see the interface configuration as well.
switchport mode dynamic desirable
switchport mode dynamic desirable
13. You can administratively set a name for each interface on the 3550 switch. Like the hostname, the descriptions are only locally significant. For the 3550 series switch, use the description command. You can use spaces in a description, or underscores if you prefer.
To set the descriptions, you need to be in interface configuration mode. From interface configuration mode, use the description command to describe each interface.
3550A#config term
Enter configuration commands, one per line. End with CTRL/Z
3550A(config)#int fa 0/4
3550A(config-if)#description Marketing VLAN
3550A(config-if)#int fa 0/10
3550A(config-if)#description trunk to Building 3
In the configuration example above, we set the description on both port 4 and 10.
14. Once you have configured the descriptions you want on each interface, you can view them with either the show interface command or the show running-config command. View the configuration of interface FastEthernet 0/4 by using the show interface fastethernet 0/4 command.
3550A#sh int fa 0/4
FastEthernet0/4 is up, line protocol is up
Hardware is Fast Ethernet, address is 00b0.1a09.2097 (bia 00b0.1a09.2097)
Description: Marketing VLAN
15. Use the show running-config command to view the interface configurations as well.
description "Marketing VLAN"
Notice in the above switch output that the sh int fa0/4 command and the show run command both show the description command set on an interface.
---Original reading at content.digiex.net
More Cisco 3560 Tutorials and Tips:
When an enterprise needs more network ports in a conference room or an extra jack for a printer in an office, a network administrator has traditionally had very few good choices. There was the expensive option of pulling more cables from the wiring closet, or the option of plugging in an unmanaged 8-port switch from a low-cost vendor into an existing port, complicating campus network design.
Now that port shortage problem has reached beyond the conference room as enterprises of all kinds are adding a multitude of IP devices and stretching the edge of the LAN beyond the wiring closet. Companies now deploy large numbers of IP phones and video surveillance cameras, schools have more computers and IP-based instructional technology and retail shops have deployed more IP-connected kiosks and point-of-sales stations. While 802.11n wireless LAN technology and cheap unmanaged switches have mitigated the port shortage to some extent, a better answer may lie in enterprise-class compact switches.
Cisco Systems unveiled a new family of compact switches targeting this problem. The switches are part of the Catalyst C-Series and consist of the Catalyst 2960-C and the 3560-C. There are five models and 8 to 12 Fast Ethernet or Gigabit Ethernet (GbE) ports with dual GbE uplinks. These switches do not require their own power source since each device has a new Power-over-Ethernet (PoE+) "pass-through" feature, which allows them to be powered by an upstream closet switch. They are then in turn able to pass the PoE power downstream to IP-connected devices like phones and cameras.
The Catalyst C switches also have many enterprise-class features that low-cost switches lack, such as auto-configuration, IPv6 acceleration and access control lists (ACL). They also have several features central to Cisco's broader Borderless Networks architecture, including Cisco security functions, TrustSec and the IEEE standard MACSec, and Cisco's EnergyWise energy management. The product compares somewhat to a port extender released by Extreme Networks in 2009, the ReachNXT 100-8t port extender, an 8-port device.
For Jordan Martin, technical services manager at a Pennsylvania-based healthcare enterprise, an enterprise-class 8-port switch would simplify his campus network design.
"We have all kinds of little, unmanaged switches lying around places where there just aren't enough jacks to facilitate what we need. Unfortunately a lot of our wiring in our building was done without a ton of forethought,” Martin said.
"We have a campus here with a guard shack and we need to be able to process fiber in and Ethernet out, so we need a decent capability switch out there. But I don't want to spend $3,000 for one guy with a computer and a phone."
Using unmanaged switches from a low-cost vendor has been adequate at times within his network, but such devices don’t scale well, Martin said. Replacing them with enterprise-class 8-port switches could improve operations, management and visibility into the edge of his network.
"In a non-managed switch, if you're having trouble with a device, it could be the switch; it could be the cabling. Being able to take a look at the interface and see if it's a duplex mismatch or whatever the issue may be without having to go out to the location and put some tap on the line… That remote diagnostic capability of an enterprise switch is big for us."
Campus network design: Even with good forethought you'll need the occasional 8-port switch
Eric Steel, network engineer with Georgia-based law firm Constangy, Brooks & Smith, said he usually avoids the need for switches beyond the wiring closet by planning ahead and making sure he has plenty of ports across the network.
"But in those cases where we can't, we end up putting in a cheap mini-switch -- Linksys or Netgear," he said. "
Those switches bring various operational challenges. Steel has to properly configure them for spanning tree protocol so that they don't loop into the LAN, and getting power to the device is also a frequent challenge. "Security is, of course, another headache, because you now have some open ports for people to plug into accidentally or maliciously," Steel said.
Replacing an unmanaged 8-port switch with compact enterprise-class switches allows users to have a network management and security feature set from the core to the edge, said Mike Spanbauer, principal analyst with Current Analysis.
"It offers the ability for the end user to basically standardize on a specific security configuration or software image," he said. "And if they have Catalyst 3560s in the closet and these 3560-Cs remotely deployed in a conference room, which offers the ability to simplify management."
These compact switches also give new campus network design options to enterprises with large numbers of small branches or locations with a light network footprint.
The Catalyst C switches replace a collection of older 8-port Fast Ethernet Catalyst 2960 switches which lacked the Borderless Networks capabilities, memory, PoE pass-through and dual uplinks of these new models.
---Original news from searchnetworking.techtarget.com
More Cisco Catalyst Switch Tips and Cisco Switch Info:
Juniper EX4200 or Cisco Catalyst 3750 Series Switch Compared
What do people think about Juniper's EX switches vs. Cisco Catalyst switches? Someone may answer like this: "Well, the Juniper switches are much cheaper, that's for sure. I don't understand this Cisco-only mentality that's out there - why would I pay 3 or 4 times as much for a switch with fewer features?" Or: "We bought the Blade Network Technologies rack switches. Juniper OEMs them, but they are even cheaper buying them from BNT. And the support is great too."
Both Cisco and Juniper have many users and followers; the question is not which is better in the abstract, but which is right for you. Below are some comparisons between Juniper's EX4200 switch and Cisco's Catalyst 3750 series switches, which may help you learn more about both.
EX4200 vs. Catalyst 3750: Layer 3 Stackable Switch Comparison
With prices starting at under $4,000, Juniper’s EX4200 line is available in 24 and 48 port 10/100/1000 densities, both PoE and non-PoE. They also include either 1Gb or 10Gb modular uplink connectivity. Another cool feature is the standard hot swap power supplies, while most of Cisco 3750 switches come with a single non field serviceable power supply.
The Cisco 3750G, 3750-E and Catalyst 3750-X switches come in over 70 different models, and it can be overwhelming to figure out exactly which model to order without going through a myriad of technical, feature and pricing comparisons. Juniper makes it easy, offering one model with the same or better performance in several categories than all of the Cisco 3750 series switches. Better yet, Juniper's J-Care support can cost as much as 75% less than Cisco's SmartNet.
One of the most important factors in choosing a Layer 3 stackable switch is the actual performance of the stack. An independent study found that the Juniper EX4200's latency is consistently lower when the switches are in a Virtual Chassis configuration. Coincidentally enough, Cisco doesn't publish latency rates for its stackable solution. Virtual Chassis configurations recover from hardware and software failures in milliseconds and operate at 30-Gbit/s rates in each direction between switches.
So in a side by side comparison between the Juniper EX4200 and the Cisco 3750G, E or X, it was no contest.
Price and Specs of Juniper EX 4200, Cisco 3750G, Cisco 3750-E, Cisco 3750-X
(The comparison table's column values were not preserved; the categories compared were:)
- Example list prices
- 1-year 24x7x4 support list prices
- Stacking throughput (Gbps)
- Maximum switches in a virtual stack
- L3 RIP and static routing
- Internal power capabilities (redundant hot-swappable vs. single field-replaceable supplies)
Cisco RV110W Wireless-N VPN Firewall
The Cisco RV110W Wireless-N VPN Firewall offers simple, highly secure wired and wireless connectivity for small offices, home offices, and remote workers at an affordable price. It comes with a high-speed, 802.11n wireless access point, a 4-port 10/100 Mbps Fast Ethernet switch, an intuitive, browser-based device manager, and support for the Cisco Small Business FindIT Network Discovery Utility.
It combines business-class features, simple installation, and a quality user experience to provide basic connectivity for small businesses with five or fewer employees.
The RV110W Wireless-N VPN Firewall also features:
- A proven firewall with support for access rules and advanced wireless security to help keep business assets safe
- IP Security (IPsec) VPN support for highly secure remote-access client connectivity
- Support for separate virtual networks to allow you to set up highly secure wireless guest access
- Native support for IPv6, which allows you to take advantage of future networking applications and operating systems, without an equipment upgrade
- Support for Cisco Small Business QuickVPN software
The good: The Cisco RV110W Wireless-N VPN Firewall router offers a built-in PPTP VPN server and fast performance. The compact, IPv6-ready router is easy to use and comes with a well-organized, responsive Web interface.
The bad: The RV110W lacks support for dual-band and Gigabit Ethernet. Its VPN supports only up to five remote clients at a time.
The bottom line: The Cisco RV110W Wireless-N VPN Firewall would make a very good investment for a small business that needs an easy VPN solution for remote employees.
The Cisco RV110W Wireless-N VPN Firewall router is not for everyone, but those who need it will appreciate its simplicity. The router offers a built-in VPN for up to five clients at a time. Other than the VPN this is a simple single-band Wireless-N router that doesn't support dual-band wireless or Gigabit Ethernet. At an estimated price of less than $120, though, it's still a good choice for a small business.
Design and ease of use
The Cisco RV110W Wireless-N VPN Firewall router is square and compact, about the size of a bathroom tile. It has four little rubber feet on the bottom to keep it grounded, and is also wall-mountable. Unlike other home routers from Cisco, such as the E series, that have internal antennas, the RV110W has two antennas sticking up from the back. Also on the back you'll find the router's one WAN port (to hook up to the Internet) and four LAN ports (for wired clients). None of these ports, unfortunately, is Gigabit Ethernet, meaning the router offers at most 100Mbps on its wired network.
The router doesn't have a USB port, either, which means there's no built-in network storage or print-server capability.
On the front, the router has a Wi-Fi Protected Setup button that helps quickly add Wi-Fi clients to the network. There's also an LED array to show the statuses of the ports on the back and the connection to the Internet.
Unlike other routers, the RV110W doesn't come with the Cisco Connect software. Instead, it has a well-illustrated Quick Start Guide that takes you through the setup process, from hooking up the cables to getting the wireless network up and running. Part of the process involves logging in to the router's well-organized and responsive Web interface, which includes a wizard to make the setup process even easier.
The RV110W's most important feature is the built-in support for hosting a VPN network, which allows clients outside the office to connect to the network as though they were within the local network. This enables remote workers to access local resources such as printers, remote desktops, and databases.
Generally, you'd need a domain server to do this, or you'd need to opt for a much more expensive router. The RV110W is possibly the cheapest simple VPN hosting product that offers an easy-to-use built-in PPTP VPN server on the market. Nonetheless, you'll need to be fairly well-versed in networking to configure a client to connect to the router. On the router side, however, it takes just a few mouse clicks to get the VPN ready.
The router's VPN network-hosting support is limited to up to five concurrent clients at a time, so if your business has more than five employees who work remotely, this router is not for you.
The RV110W is a single-band wireless router, offering Wireless-N (802.11n) on the 2.4GHz band only. Most new home routers offer support for dual-band, meaning they can also broadcast on the higher-bandwidth 5GHz band. For a business router, however, it's still normal not to offer 5GHz. What is disappointing is that the RV110W doesn't offer Gigabit Ethernet.
To make up for that, it's one of the few routers on the market that are IPv6-ready. The new version of Internet protocol promises better security and speed and, most importantly, is future-proofed as the world is now moving on from IPv4, which is running out of addresses.
More Cisco wireless info you can visit: http://blog.router-switch.com/category/technology/wireless/
Virtualization, long a hot topic for servers, has entered the networking realm. With the introduction of a new management blade for its Catalyst 6500 switches, Cisco can make two switches look like one while dramatically reducing failover times in the process.
In an exclusive Clear Choice test of Cisco's new Virtual Switching System (VSS), Network World conducted its largest benchmarks to date, using a mammoth test bed with 130 10G Ethernet interfaces. The results were impressive: VSS not only delivers a 20-fold improvement in failover times but also eliminates Layer 2 and 3 redundancy protocols at the same time.
The performance numbers are even more startling: A VSS-enabled virtual switch moved a record 770 million frames per second in one test, and routed more than 5.6 billion unicast and multicast flows in another. Those numbers are exactly twice what a single physical Catalyst 6509 can do.
All links, all the time
To maximise up-time, network architects typically provision multiple links and devices at every layer of the network, using an alphabet soup of redundancy protocols to protect against downtime. These include rapid spanning tree protocol (RSTP), hot standby routing protocol (HSRP), and virtual router redundancy protocol (VRRP).
This approach works, but has multiple downsides. Chief among them is the "active-passive" model used by most redundancy protocols, where one path carries traffic while the other sits idle until a failure occurs. Active-passive models use only 50 percent of available capacity, adding considerable capital expense.
Further, both HSRP and VRRP require three IP addresses per subnet, even though routers use only one address at a time. And while rapid spanning tree recovers from failures much faster than the original spanning tree, convergence times can still vary by several seconds, leading to erratic application performance. Strictly speaking, spanning tree was intended only to prevent loops, but it's commonly used as a redundancy mechanism.
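To illustrate the three-addresses-per-subnet point, here is a minimal HSRP sketch (all addresses and the group number are hypothetical). Each physical router has its own interface address, and both share a virtual address that hosts use as their default gateway:

```
! Router A
interface Vlan10
 ip address 10.1.10.2 255.255.255.0
 standby 10 ip 10.1.10.1
! Router B
interface Vlan10
 ip address 10.1.10.3 255.255.255.0
 standby 10 ip 10.1.10.1
```

The subnet consumes .1 (virtual), .2 and .3, even though traffic only ever flows through the active router's interface.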
There's one more downside to current redundant network designs: It creates twice as many network elements to manage. Regardless of whether network managers use a command-line interface or an SNMP-based system for configuration management, any policy change needs to be made twice, once on each redundant component.
Introducing Virtual Switching
In contrast, Cisco's VSS uses an "active-active" model that retains the same amount of redundancy, but makes use of all available links and switch ports.
While many vendors support link aggregation (a means of combining multiple physical interfaces to appear as one logical interface), VSS is unique in its ability to virtualise the entire switch -- including the switch fabric and all interfaces. Link aggregation and variations such as Nortel's Split Multi-Link Trunk (SMLT) do not create virtual switches, nor do they eliminate the need for Layer 3 redundancy mechanisms such as HSRP or VRRP.
At the heart of VSS is the Virtual Switching Supervisor 720-10G, a management and switch fabric blade for Cisco Catalyst 6500 switches. VSS requires two new supervisor cards, one in each physical chassis. The management blades create a virtual switch link (VSL), making both devices appear as one to the outside world: There's just one media access control (MAC) address and one IP address in use, and both systems share a common configuration file that covers all ports in both chassis.
On the access side of Cisco's virtual switch, downstream devices still connect to both physical chassis, but a bonding technology called Multichassis EtherChannel (MEC) presents the virtual switch as one logical device. MEC links can use industry-standard 802.3ad link aggregation or Cisco's proprietary port aggregation protocol. Either way, MEC eliminates the need for spanning tree. All links within a MEC are active until a circuit or switch failure occurs, and then traffic continues to flow over the remaining links in the MEC.
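From the downstream switch's point of view, an MEC looks like ordinary link aggregation: two uplinks, one to each physical chassis, bundled into one logical port. A minimal sketch (interface names hypothetical):

```
! Downstream access switch: one uplink to each chassis of the virtual switch
interface TenGigabitEthernet1/1
 channel-group 1 mode active     ! LACP; "mode desirable" would use Cisco PAgP
interface TenGigabitEthernet1/2
 channel-group 1 mode active
!
interface Port-channel1
 switchport
 switchport mode trunk
```

Because the bundle terminates on two physical chassis that share one control plane, the downstream switch sees no loop, and spanning tree has nothing to block.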
Servers also can use MEC's link aggregation support, with no additional software needed. Multiple connections were already possible using "NIC teaming," but that's usually a proprietary, active/passive approach.
On the core side of Cisco's virtual switch, devices also use MEC connections to attach to the virtual switch. This eliminates the need for redundancy protocols such as HSRP or VRRP, and also reduces the number of routes advertised. As on the access side, traffic flows through the MEC in an "active/active" pattern until a failure, after which the MEC continues to operate with fewer elements.
The previous examples focused on distribution-layer switches, but VSL links work between any two Catalyst 6500 chassis. For example, virtual switching can be used at both core and distribution layers, or at the core, distribution and access layers. All attached devices would see one logical device wherever a virtual switch exists.
A VSL works only between two chassis, but it can support up to eight physical links. Multiple VSL links can be established using any combination of interfaces on the new supervisor card or Cisco's WS-6708 10G Ethernet line card. VSS also requires line cards in Cisco's 67xx series, such as the 6724 and 6748 10/100/1000 modules or the 6704 or 6708 10G Ethernet modules. Cisco says VSL control traffic uses less than 5 percent of a 10G Ethernet link, but we did not verify this.
At least for now, VSL traffic is proprietary. It isn't possible to set up a VSL between, say, a Cisco and Foundry switch.
A big swath of fabric
We assessed VSS performance with tests focused on fabric bandwidth and delay, failover times, and unicast/multicast performance across a network backbone.
In the fabric tests we sought to answer two simple questions: How fast does VSS move frames, and how long does it hang on to each frame? The set-up for this test was anything but simple. We attached Spirent TestCenter analyser/generator modules to 130 10G Ethernet ports on two Catalyst 6509 chassis configured as one virtual switch.
These tests produced, by far, the highest throughput we've ever measured from a single (logical) device. When forwarding 64-byte frames, Cisco's virtual switch moved traffic at more than 770 million frames per second. We then ran the same test on a single switch, without virtualisation, and measured throughput of 385 million frames per second -- exactly half the result of the two fabrics combined in the virtual switch. These results prove there's no penalty for combining switch fabrics.
We also measured VSS throughput for 256-byte frames (close to the average Internet frame length) of 287 million frames per second and for 1,518-byte frames (until recently, the maximum in Ethernet, and still the top end on most production networks) of 53 million frames per second. With both frame sizes, throughput was exactly double that of the single-switch case.
The 1,518-byte frames per second number represents throughput of nearly 648Gbps. This is only around half the theoretical maximum rate possible with 130 10G Ethernet ports. The limiting factor is the Supervisor 720 switch fabric, which can't send line-rate traffic to all 66 10G ports in each fully loaded chassis. VSS doubles fabric capacity by combining two switches, but it doesn't extend the capacity of the fabric card in either physical switch.
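The Gbps figure can be sanity-checked from the frame rate; the exact number depends on the precise frame rate and on whether Ethernet preamble and inter-frame gap are counted. A quick sketch:

```python
# Convert the measured 1,518-byte frame rate into bandwidth. The 20 bytes
# per frame of preamble and minimum inter-frame gap are standard Ethernet
# overhead; counting them or not explains small differences in quoted
# Gbps figures.
FPS = 53_000_000       # measured frames per second at 1,518 bytes
FRAME_BYTES = 1518
OVERHEAD_BYTES = 20    # 8-byte preamble + 12-byte minimum inter-frame gap

payload_gbps = FPS * FRAME_BYTES * 8 / 1e9
on_the_wire_gbps = FPS * (FRAME_BYTES + OVERHEAD_BYTES) * 8 / 1e9

print(round(payload_gbps))      # frame data only
print(round(on_the_wire_gbps))  # including preamble and gap
```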
We also measured delay for all three frame sizes. With a 10 percent intended load, Spirent TestCenter reported average delays ranging from 12 to 17 microsec, both with and without virtual switching. These numbers are similar to those for other 10G switches we've tested, and far below the point where they'd affect performance of any application. Even the maximum delays of around 66 microsec with virtual switching again are too low to slow down any application, especially considering Internet round-trip delays often run into the tens of milliseconds.
Our failover tests produced another record: the fastest recovery from a Layer 2/Layer 3 network failure we've ever measured.
We began these tests with a conventional set-up: Rapid spanning tree at layer 2, HSRP at Layer 3, and 16,000 hosts (emulated on Spirent TestCenter) sending traffic across redundant pairs of access, distribution and core switches. During the test, we cut off power to one of the distribution switches, forcing all redundancy mechanisms and routing protocols to reconverge. Recovery took 6.883 seconds in this set-up.
Then we re-ran the same test two more times with VSS enabled. This time convergence occurred much faster. It took the network just 322 millisec to converge with virtual switching on the distribution switches, and 341 millisec to converge with virtual switching on the core and distribution switches. Both numbers represent better than 20-fold improvements over the usual redundancy mechanisms.
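The "better than 20-fold" claim can be checked directly against the reported recovery times:

```python
# Compare the baseline recovery time with the two VSS recovery times
# reported in the failover tests.
baseline_ms = 6883           # conventional RSTP + HSRP set-up
vss_dist_ms = 322            # VSS on distribution switches only
vss_core_dist_ms = 341       # VSS on core and distribution switches

dist_gain = baseline_ms / vss_dist_ms
core_dist_gain = baseline_ms / vss_core_dist_ms

print(round(dist_gain, 1))       # improvement factor, distribution only
print(round(core_dist_gain, 1))  # improvement factor, core + distribution
```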
A bigger backbone
Our final tests measured backbone performance using a complex enterprise traffic pattern involving 176,000 unicast routes, more than 10,000 multicast routes, and more than 5.6 billion flows. We ran these tests with unicast traffic alone and a combination of unicast and multicast flows, and again compared results with and without VSS in place.
Just to keep things interesting, we ran all tests with a 10,000-entry access control list in place, and also configured switches to re-mark all packets' diff-serv code point (DSCP) fields. Re-marking DSCPs prevents users from unauthorised "promotion" of their packets to receive higher-priority treatment. In addition, we enabled NetFlow tracking for all test traffic.
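A DSCP re-marking policy of the general kind described here can be sketched in IOS MQC as follows; the policy name, interface, and chosen DSCP value are hypothetical, since the test configuration itself wasn't published:

```
! Re-mark the DSCP field of every packet arriving on the interface,
! preventing hosts from promoting their own traffic
policy-map REMARK-DSCP
 class class-default
  set dscp default
!
interface TenGigabitEthernet1/1
 service-policy input REMARK-DSCP
```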
Throughput in all the backbone cases was exactly twice as high with virtual switching as without it. This was true for both unicast and mixed-class throughput tests, and also true regardless of whether we enabled virtual switching on distribution switches alone, or on both the core and distribution switches. These results clearly show the advantages of an "active/active" design over an "active/passive" one.
We measured delay as well as throughput in these tests. Ideally, we'd expect to see little difference between test cases with and without virtual switching, and between cases with virtual switching at one or two layers in the network. When it came to average delay, that's pretty much how things looked. Delays across three pairs of physical switches ranged from around 26 to 90 microsec in all test cases, well below the point where applications would notice.
Maximum delays did vary somewhat with virtual switching enabled, but not by a margin that would affect application performance. Curiously, maximum delay increased the most for 256-byte frames, with fourfold increases over results without virtual switching. The actual amounts were always well less than 1 millisec, and also unlikely to affect application performance.
Cisco's VSS is a significant advancement in the state of the switching art. It dramatically improves availability with much faster recovery times, while simultaneously providing a big boost in bandwidth.
How we tested Cisco's VSS
For all tests described here, we configured a 10,000-line access control list (ACL) covering layer-3 and layer-4 criteria and spot-checked that random entries in the ACL blocked traffic as intended. As a safeguard against users making unauthorised changes, Cisco engineers also configured access and core switches to re-mark the diff-serv code point (DSCP) in every packet, and we verified re-marking using counters in the Spirent TestCenter traffic generator/analyser. Cisco also enabled NetFlow traffic monitoring for all test traffic.
To assess the fabric bandwidth and delay, the system under test was one pair of Cisco Catalyst 6509-E switches. Cisco engineers set up a virtual switch link (VSL) between the switches, each equipped with eight WS-6708 10G Ethernet line cards and one Virtual Switching Supervisor 720-10G management/switch fabric card. That left a total of 130 10G Ethernet test ports: eight on each of the line cards, plus one on each of the management cards (we used the management card's other 10G Ethernet port to set up the virtual link between switches).
Using the Spirent TestCenter traffic generator/analyser, we offered 64-, 256- and 1518-byte IPv4 unicast frames on each of the 130 10G test ports to determine throughput and delay. We measured delay at 10 percent of line rate, consistent with our practice in previous 10G Ethernet switch tests. The Spirent TestCenter analyser emulated 100 unique hosts on each port, making for 13,000 total hosts.
In the failover tests, the goal was to compare VSS recovery time upon loss of a switch with recovery using older redundancy mechanisms.
This test involved three pairs of Catalyst 6509 switches, representing the core, distribution and access layers of an enterprise network. We ran the failover tests in three configurations. In the first scenario, we used legacy redundancy mechanisms such as rapid spanning tree and hot standby routing protocol (HSRP). Then we ran two failover scenarios using VSS, first with a virtual link on the distribution switches alone, and again with VSS links on both the distribution and core switches.
For each test, we began by offering traffic to each of 16 interfaces on the core and access sides of the test bed. We began the failover tests with a baseline event to verify no frame loss existed. While Spirent TestCenter offered test traffic for 300 seconds, we cut off power to one of the distribution switches. Because we offered traffic to each interface at a rate of 100,000 frames per second, each dropped frame represented 10 microsec of recovery time. So, for example, if Spirent TestCenter reported 32,000 lost frames, then failover time was 320 millisec.
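The loss-to-time conversion the paragraph describes is straightforward to sketch:

```python
# Derive failover time from frame loss: each interface is offered
# 100,000 frames per second, so each lost frame represents 10 microsec
# of outage on that interface.
OFFERED_FPS = 100_000

def failover_ms(lost_frames: int) -> float:
    """Recovery time in milliseconds implied by a frame-loss count."""
    return lost_frames / OFFERED_FPS * 1000

print(failover_ms(32_000))  # the worked example from the text
```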
The backbone performance tests used a set-up similar to the VSS configurations in the failover tests. Here again, there were three pairs of Catalyst 6509 switches, representing core, distribution and access layers of an enterprise network. Here again, we also conducted separate tests with a virtual link on the distribution switches, and again with virtual links on the distribution and core switches.
To represent enterprise conditions, we set up very large numbers of routes, hosts and flows in these tests. From the core side, we configured OSPF to advertise 176,000 unique routes. On the access side, we set up four virtual LANs (VLAN), each with 250 hosts, on each of 16 ports, for 16,000 hosts total. For the multicast traffic set-up, one host in each access-side VLAN joined each of 40 groups, and each group had 16 transmitters, one on each of the 16 core-side interfaces. In all, this test represented more than 10,000 multicast routes, and more than 5.6 billion unique unicast flows.
In the backbone tests, we used a partially meshed traffic pattern to measure system throughput and delay. As defined in RFC 2285, a partial mesh pattern is one in which ports on both sides of the test bed exchange traffic with one another, but not among themselves. In this case, that meant all access ports exchanged traffic with all core ports, and vice-versa.
We tested all four combinations of unicast, mixed multicast/unicast, and virtual switching enabled and disabled on the core switches (virtual switching was always enabled on the distribution switches and always disabled on the access switches). In all four backbone test set-ups, we measured throughput and delay.
We conducted these tests in an engineering lab at Cisco's campus in San Jose. This is a departure from our normal procedure of testing in our own labs or at a neutral third-party facility. The change was borne of logistical necessity: Cisco's lab was the only one available within the allotted timeframe with sufficient 10G Ethernet test ports and electrical power to conduct this test. Network Test and Spirent engineers conducted all tests and verified configurations of both switches and test instruments, just as we would in any test. The results presented here would be the same regardless of where the test was conducted.
---Original reading from review.techworld.com
The Cisco 3750 range has been around for many years now, and has a vast following. The Cisco 3750-X is the new kid on the Cisco block, and it combines plenty of stuff that will be familiar to users of its predecessors with some funky new features that are clearly a step forward.
The Cisco 3750 comes in a number of flavors – between 24 and 48 ports, with or without Power over Ethernet (the 48P is the PoE variant of the 48-port device). The traditional Cisco 3750 had four 1Gbit/s SFP ports in addition to the 48 10/100/1000 copper ports; the Cisco 3750-X instead has a slot that takes either a four-port 1Gbit/s SFP daughter-board or a two-port 10Gbit/s alternative.
Alongside the port combinations, there are three software installs. The LAN Base software is a layer-2 only software image, and quite frankly I wouldn't ever expect to buy one of these if I only wanted layer-2 functionality. More sensible is the IP Base image which makes the device a proper Layer-3 routing switch, albeit with a limited selection of routing protocols. At the top is the IP Services image, which makes the unit a full-blown router (just like its ancestors – two of my BGP-shouting WAN routers are actually 3750Gs, in fact). The main market will of course be for the IP Base version.
The rear panel is interesting too, of course. As with the older 3750s the rear panel has a pair of “stack” ports. Each stack port provides a 16Gbit/s backplane connection, and by stacking your devices in a loop you end up with a resilient 32Gbit/s backplane. From a management and configuration point of view a stack is a single virtual switch – you manage it rather like a chassis product with a number of blades. So port 1 of switch 1 is Gi1/0/1, port 3 of switch 2 is Gi2/0/3, and so on.
The important rear-panel innovation with the new Cisco 3750-X model is the provision for redundant power supplies. In the old model you had a single, non-removable power supply along with an RPS (Redundant Power Supply) connection; to use the latter and give yourself some resilience you had to buy something like an RPS2300 – an external device that was a stupid shape that didn't fit into a rack very well, had buttons on the front whose only purpose seemed to be to make things break, and – on a brighter note – provided up to six switches with resilient power. The new model has dual slots for removable PSUs, of which one is populated by default; it's a ten-second job to slip a second one in beside it. One of the downsides of the old 3750 was the bloody awful reliability of the internal (fixed) PSU – I've spent rather too many hours swapping out switches with duff power supplies – so the removable units in the -X are most welcome.
Along with the redundant PSU facility is the power stacking capability. Just as you have your data stack cables, you also now have a pair of power-stack cables on each unit, so that the total power available via all the PSUs in the stack is available for negotiated use across the whole stack, for switch power and PoE.
As with the older devices, you can add and remove stack devices on the fly. Adding a switch to a stack is a simple case of setting its ID, telling the stack to expect a new member, and plumbing it in (although in theory the stack will deal with firmware mismatches in the new member, I prefer not to tempt fate, so I always pre-install the right version). If a unit fails the stack will keep on humming while you pull out the duff one and stick in the replacement, and the config will be automatically migrated to the new unit.
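On the 3750 family, the "set its ID, tell the stack to expect it" sequence can be sketched like this; the member number and model string here are hypothetical:

```
! On the new, still-standalone unit: renumber it to the slot it will occupy
Switch# switch 1 renumber 3

! On the running stack: pre-provision the expected member and its model
Stack(config)# switch 3 provision ws-c3750x-48t

! Now cable the new unit's stack ports into the ring and power it up
```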
The only downside I've found so far is in trying to get the new -X model to co-exist in a stack with the old 3750-G – in short, I've not got it to work yet – but I've no doubt I'll persuade it to play before long.
The Cisco 3750-X is a really sensible evolution of an already popular family of Cisco switches. Being an IOS device there's really not a great deal of difference management-wise between the old and the new, so you get new functionality with almost zero additional training requirements. I've recently added seven 48-port non-PoE versions in three of my server installations, and have just received two new pairs of the PoE variant for a couple of offices, and I'm pretty happy thus far.
New power stacking capability is an excellent evolution.
32Gbit/s backplane should be sufficient for most modest installations.
10Gbit/s Ethernet support for uplinking or connecting to blade servers.
More Cisco 3750 Info:
Sample Pricing for Popular Cisco 3750 Models:
Catalyst 3750X 24 Port Data LAN Base: US$2,236.00 (57.00% off list price)
Catalyst 3750X 48 Port Data LAN Base: US$3,827.00 (57.00% off list price)
Catalyst 3750X 24 Port Data IP Base: US$2,795.00 (57.00% off list price)
Catalyst 3750X 48 Port Data IP Base: US$4,945.00 (57.00% off list price)
Catalyst 3750X 24 Port PoE IP Base: US$3,139.00 (57.00% off list price)
WS-C3750X-48P-S: Stackable 48 10/100/1000 Ethernet PoE+ ports, with 715W AC Power Supply: US$5,590.00 (57.00% off list price)
Router-switch.com (Yejian Technologies Co., Ltd), a world-leading Cisco supplier
Restore a corrupt Cisco 3750 IOS (Internetwork Operating System) image by transferring a new image to flash storage using the Xmodem protocol. Power anomalies such as brownouts and surges can corrupt the IOS image beyond repair. You should delete and replace a corrupt IOS image to ensure that the Cisco 3750 remains reliable. Access “switch:” mode through a serial connection and recover the Cisco 3750 firmware to a working state.
Things you’ll need to recover Cisco 3750 switch firmware
Windows 7 computer that has a serial COM port and the Tera Term program installed
Cisco serial console cable
IOS image for the Cisco 3750 switch stored on the Windows 7 computer
How to Recover Cisco 3750 Firmware to a Working State?
1. Connect the Cisco serial console cable 9 pin connector to the Windows 7 computer serial COM port. Plug the other end of the serial cable into the Cisco 3750 “Console” port.
2. Launch the Tera Term terminal console program and click “File” then “New connection.” Click the “Serial” radio button. Click the “Port” box and then the name of the serial COM port connected to the Cisco 3750 switch. Click the “OK” button.
3. Unplug the Cisco 3750 switch power cable. Press and hold down the “Mode” button located on the Cisco 3750 front left panel. Power up the Cisco 3750 switch and release the “Mode” button when the Port 1x light turns off.
4. Click the Tera Term window and press the “Enter” key two times. Type “flash_init” at the command prompt and tap “Enter.” Write “load_helper” at the command prompt and press “Enter.”
5. Type “dir flash:” on the command line and press “Enter.” View the command line output and note any files that end with “.bin” or directories that have “3750” in the name.
6. Type “dir flash:directory-name” at the command line. Replace “directory-name” with the name of a directory that has “3750” in the name and press “Enter.” Inspect the command line output and note any files that end with “.bin.”
7. Type “delete flash:image-file-name” at the command prompt. Replace “image-file-name” with the name of the “.bin” file noted earlier. Press the “Enter” key. Tap the “Y” key when prompted to confirm deletion and press “Enter.”
8. Click the “File” menu in the “Tera Term VT” window and then “Transfer.” Click “Xmodem” and then “Send.” Browse to and click on the new Cisco 3750 IOS image file and press “Enter.” Wait for the file transfer to complete (approximately 20 minutes).
9. Type “boot flash” at the command prompt and press “Enter” to boot the Cisco 3750 with the new image.
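Condensed, the console side of steps 4 to 9 looks roughly like the session below. The image filename is hypothetical, and the `copy xmodem:` line reflects the documented Cisco form of the transfer that the Tera Term Xmodem send feeds into:

```
switch: flash_init
switch: load_helper
switch: dir flash:
switch: delete flash:c3750-ipbasek9-mz.122-55.SE5.bin
switch: copy xmodem: flash:c3750-ipbasek9-mz.122-55.SE5.bin
switch: boot flash:c3750-ipbasek9-mz.122-55.SE5.bin
```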
More Related Cisco 3750 tips:
Supervisor 2T engine for the Catalyst 6500E chassis. The Sup2T is a boost to keep the 6500's legs running a little longer. I think of the 2T as a product enabling customers with a large 6500 investment to put off the inevitable migration to the Nexus platform. The 2T, by all accounts, is the end of the development roadmap for the 6500. My understanding is that the 2T takes the 6500 chassis as far as it can scale in terms of packet forwarding performance.
With the advent of the Nexus 7009, I doubt we’ll see yet another replacement 6500 chassis model (like we saw the “E” some years back). The Nexus uptake has been reasonably good for most Cisco shops, and the Nexus 7009 form factor takes away the physical space challenges faced by those previously considering the 7010 as a forklift upgrade for the widely deployed Cisco 6509. In my mind, it makes sense for Cisco to focus their Catalyst development efforts on the 4500 line for access and campus deployments, with Nexus products running NX-OS for core routing services and data center fabric. Could I be wrong? Sure. If Cisco announced a new 6500E “plus” chassis that can scale higher, then that would reflect a customer demand for the product that I personally don’t see happening. Most of the network engineering community is warming up to the Nexus gear and NX-OS.
That baseline established, Cisco is selling the Sup2T today. What does it bring to the table? Note that anything in italics is lifted directly from the Cisco architecture document referenced below in the “Links” section.
- Two Terabit (2080 Gbps) crossbar switch fabric. That’s where the “2T” comes from. These sups allow for forwarding performance of up to 2 Tbps. Of course, as with previous supervisor engines, the aggregate throughput of the chassis depends on what line cards you deploy in it. That old WS-X6148A you bought several years ago isn’t imbued with magical forwarding powers just because you pop a 2T into the chassis.
- The Supervisor 2T is designed to operate in any E-Series 6500 chassis. The Supervisor 2T will not be supported in any of the earlier non E-Series chassis. You know that non-E 6500 chassis running Sup720s you love so much? Gotta go if you want to upgrade to a 2T (to which I ask the question if you’re considering this…why not Nexus 7009 instead?)
- As far as power requirements, note the following:
- The Cisco 6503-E requires a 1400 W power supply and the 6504-E requires a 2700 W power supply, when a Supervisor 2T is used in each chassis.
- While the 2500 W power supply is the minimum-sized power supply that must be used for a 6, 9, and 13-slot chassis supporting Supervisor 2T, the current supported minimum shipping power supply is 3000 W.
- Line cards are going to bite you; backwards compatibility is not what it once was. There are a lot of requirements here, so take note.
- The Supervisor 2T provides backward compatibility with the existing WS-X6700 Series Linecards, as well as select WS-X6100 Series Linecards only.
- All WS-X67xx Linecards equipped with the Central Forwarding Card (CFC) are supported in a Supervisor 2T system, and will function in centralized CEF720 mode.
- Any existing WS-X67xx Linecards can be upgraded by removing their existing CFC or DFC3x and replacing it with a new DFC4 or DFC4XL. They will then be operationally equivalent to the WS-X68xx linecards but will maintain their WS-X67xx identification.
- There is no support for the WS-X62xx, WS-X63xx, WS-X64xx, or WS-X65xx Linecards.
- Due to compatibility issues, the WS-X6708-10GE-3C/3CXL cannot be inserted in a Supervisor 2T system, and must be upgraded to the new WS-X6908-10GE-2T/2TXL.
- The Supervisor 2T Linecard support also introduces the new WS-X6900 Series Linecards. These support dual 40 Gbps fabric channel connections, and operate in distributed dCEF2T mode.
To summarize thus far, a legacy 6500 chassis will need to be upgraded to a 6500E. Many older series line cards are not supported at all, or will require a DFC upgrade. Power supplies are a consideration, although the base requirements are not egregious. Therefore, moving to a 2T will require a good bit of technical and budgetary planning to get into a Sup2T. I suspect that for the majority of customers, this will not be a simple supervisor engine swap.
This diagram from Cisco shows the hardware layout of the Sup2T, focusing on all the major junction points a packet or frame could cross, depending on ingress point, required processing, and egress point.
There are two main connectors here to what Cisco identifies as two distinct backplanes: the fabric connector, and the shared bus connector. The fabric connector provides the high-speed connectors for the newer line cards, such as the new 6900 series with the dual 40Gbps connections mentioned above. The shared bus connector supports legacy cards (sometimes referred to as “classic” cards), that is linecards with no fabric connection, but rather connections to a bus shared with similarly capable cards.
The crossbar switch fabric is where the throughput scaling comes from. Notice that Cisco states there are “26 x 40” fabric channels in the diagram. That equates to the 2080Gbps Cisco’s talking about. The crossbar switch fabric on the Supervisor 2T provides 2080 Gbps of switching capacity. This capacity is based on the use of 26 fabric channels that are used to provision data paths to each slot in the chassis. Each fabric channel can operate at either 40 Gbps or 20 Gbps, depending on the inserted linecard. The capacity of the switch fabric is calculated as follows: 26 x 40 Gbps = 1040 Gbps; 1040 Gbps x 2 (full duplex) = 2080 Gbps.
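Cisco's arithmetic is easy to reproduce:

```python
# Reproduce Cisco's capacity arithmetic for the Supervisor 2T crossbar.
CHANNELS = 26          # fabric channels provisioned across the chassis slots
GBPS_PER_CHANNEL = 40  # each channel runs at 40 Gbps with capable line cards

one_way_gbps = CHANNELS * GBPS_PER_CHANNEL  # capacity in one direction
marketed_gbps = one_way_gbps * 2            # "full duplex": both directions

print(one_way_gbps, marketed_gbps)
```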
“Full-duplex” means that what we’re really getting is 1Tbps in one direction, and 1Tbps in the other direction. The marketing folks are using weasel words to say that the Sup2T is providing a 2 terabit fabric. This marketing technique is neither new nor uncommon in the industry when describing speeds and feeds, but it is something to keep in mind in whiteboard sessions, especially if you’re planning a large deployment with specific data rate forwarding requirements.
Now here’s a strange bit. While the crossbar fabric throughput is described in the context of full-duplex, the 80Gbps per-slot is not. The 80 Gbps per slot nomenclature represents 2 x 40 Gbps fabric channels that are assigned to each slot providing for 80 Gbps per slot in total. If marketing math were used for this per slot capacity, one could argue that the E-Series chassis provides 160 Gbps per slot.
Moving on to the control-plane functions of the Sup2T, we run into the new MSFC5. The MSFC5 CPU handles Layer 2 and Layer 3 control plane processes, such as the routing protocols, management protocols like SNMP and SYSLOG, Layer 2 protocols (such as Spanning Tree, Cisco Discovery Protocol, and others), the switch console, and more. The MSFC5 is not compatible with any other supervisor. The architecture differs from previous MSFCs in that, while they sported a separate route processor and switch processor, the MSFC5 combines these functions into a single CPU.
The diagram also shows a “CMP”, which is a feature enhancement of merit. The CMP is the “Connectivity Management Processor,” and seems to function like an iLO port. Even if the route processor is down on the Sup2T, you can still access the system remotely via the CMP. The CMP is a stand-alone CPU that the administrator can use to perform a variety of remote management services. Examples of how the CMP can be used include: system recovery of the control plane; system resets and reboots; and the copying of IOS image files should the primary IOS image be corrupted or deleted. Implicitly, you will have deployed an out-of-band network or other remote management solution to be able to reach the CMP, but the CMP enhances our ability to recover a borked 6500 from far away.
The PFC4/DFC4 comprise the next major component of the Sup2T. The PFC4 rides as a daughter card on the supervisor, and is the hardware slingshot that forwards data through the switch. The DFC4 performs the same functions only it rides on a linecard, keeping forwarding functions local to the linecard, as opposed to passing it through the fabric up to the PFC4.
The majority of packets and frames transiting the switch are going to be handled by the PFC, including IPv4 unicast/multicast, IPv6 unicast/multicast, Multi-Protocol Label Switching (MPLS), and Layer 2 packets. The PFC4 also performs in hardware a number of other functions that could impact how a packet is forwarded. This includes, but is not limited to, the processing of security Access Control Lists (ACLs), applying rate limiting policies, quality of service classification and marking, NetFlow flow collection and flow statistics creation, EtherChannel load balancing, packet rewrite lookup, and packet rewrite statistics collection.
The PFC performs a large array of functions in hardware, including the following list I’m lifting from Cisco’s architecture whitepaper.
- Layer 2 functions:
- Increased MAC Address Support – a 128 K MAC address table is standard.
- A bridge domain is a new concept that has been introduced with PFC4. A bridge domain is used to help scale traditional VLANs, as well as to scale internal Layer 2 forwarding within the switch.
- The PFC4 introduces the concept of a Logical Interface (LIF), which is a hardware-independent interface (or port) reference index associated with all frames entering the forwarding engine.
- Improved EtherChannel Hash – EtherChannel groups with odd numbers of members will see a better distribution across links.
- VSS support – it appears you can build a virtual switching system right out of the box with the Sup2T. There does not seem to be a unique “VSS model” like in the Sup720 family.
- Per Port-Per-VLAN – this feature is designed for Metro Ethernet deployments where policies based on both per-port and per- VLAN need to be deployed.
- Layer 3 functions. There’s a lot here, and rather than try to describe them all, I’m just going to hit the feature names here, grouped by category. You can read in more detail in the architecture document I link to below.
- Performance: Increased Layer 3 Forwarding Performance
- IPv6: uRPF for IPv6, Tunnel Source Address Sharing, IPv6 Tunnelling
- MPLS/WAN: VPLS, MPLS over GRE, MPLS Tunnel Modes, Increased Support for Ethernet over MPLS Tunnels, MPLS Aggregate Label Support, Layer 2 Over GRE
- Multicast: PIM Register Encapsulation/De-Encapsulation for IPv4 and IPv6, IGMPv3/MLDv2 Snooping
- Netflow: Increased Support for NetFlow Entries, Improved NetFlow Hash, Egress NetFlow, Sampled NetFlow, MPLS NetFlow, Layer 2 Netflow, Flexible NetFlow
- QoS: Distributed Policing, DSCP Mutation, Aggregate Policers, Microflow Policers
- Security: Cisco TrustSec (CTS), Role-Based ACL, Layer 2 ACL, ACL Dry Run, ACL Hitless Commit, Layer 2 + Layer 3 + Layer 4 ACL, Classification Enhancements, Per Protocol Drop (IPv4, IPv6, MPLS), Increase in ACL Label Support, Increase in ACL TCAM Capacity, Source MAC + IP Binding, Drop on Source MAC Miss, RPF Check Interfaces, RPF Checks for IP Multicast Packets
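To put the improved EtherChannel hash mentioned above in context, here's a minimal IOS sketch of the knob that hash feeds into. The interface numbers are made up for illustration, and the `port-channel load-balance` keywords available depend on platform and release:

```
! Choose the hash inputs the PFC uses to pick a member link (keywords vary)
port-channel load-balance src-dst-ip
!
! A simple two-member bundle; each flow is hashed onto one member link
interface range GigabitEthernet1/1 - 2
 channel-group 10 mode active
!
! Verify the configured hash method
show etherchannel load-balance
```

The hash improvement matters because with older supervisors, bundles with three, five, six, or seven members polarized traffic onto a subset of links; a better hash spreads flows more evenly regardless of member count.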
So, do you upgrade to a Sup2T? It depends. The question comes down to what you need more: speed or features. The Sup2T extends the life of the 6500E chassis with speed and a boatload of features. That said, you can’t scale the 6500 to the sort of 10Gbps port density you can get from a Nexus. Besides, most of the features found on a 6500 aren’t going to be used by most customers. If your 6500 is positioned as a core switch, then what you really need is the core functionality of L2 and L3 forwarding performed as quickly as possible with minimal downtime. To me, if that description of “core” is your greatest need, the place to go next is the Nexus line.
If instead you need a super-rich feature set, then the question is harder to answer. The Nexus has a ways to go before offering all of the features the Catalyst does. That’s not to say that all a Nexus offers is throughput: true, NX-OS lacks the maturity of IOS, but it offers better stability than IOS-SX along with the features most customers need.
In some ways, I’m making an unfair comparison. The Nexus 7K and Cat6500 have different purposes and solve different problems. But for most customers, I think either platform could meet their needs. So if you’re looking for a chassis you can leave in the rack for a very long time, it’s time to look seriously at Nexus, rejecting it only if there’s some specific function it lacks that you require. If the Nexus platform can’t solve all of your problems, then you probably have requirements beyond merely “going faster”, and the 6500/Sup2T may make sense for you.
---Original reading from packetpushers.net
The Supervisor 2T provides 2-terabit system performance, with 80Gbps of switching capacity per slot, on all Catalyst 6500 E-Series chassis. As a result, you can:
- Maintain investment protection through backward compatibility
- Deliver scalability and performance improvements, such as distributed forwarding (dCEF) at 720Mpps with the fourth-generation Policy Feature Card (PFC4)
- Support future 40Gbps interfaces and nonblocking 10Gbps modules
- Enable new applications and services with hardware-accelerated VPLS and Layer 2 over mGRE for network virtualization
- Take advantage of the integrated Connectivity Management Processor (CMP) for improved out-of-band management.
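As a back-of-the-envelope check (my interpretation, assuming the largest 13-slot 6513-E chassis and the usual vendor convention of counting both directions of full-duplex traffic), the headline numbers hang together like this:

```
80 Gbps/slot x 13 slots        = 1,040 Gbps
1,040 Gbps x 2 (full duplex)   = 2,080 Gbps  -> the "2-terabit" figure
```

Smaller E-Series chassis get the same 80Gbps per slot but, with fewer slots, a proportionally lower aggregate number.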