
Wednesday 16 May 2012, 06:04

Virtualization, long a hot topic for servers, has entered the networking realm. With the introduction of a new management blade for its Catalyst 6500 switches, Cisco can make two switches look like one while dramatically reducing failover times in the process.

In an exclusive Clear Choice test of Cisco's new Virtual Switching System (VSS), Network World conducted its largest benchmarks to date, using a mammoth test bed with 130 10G Ethernet interfaces. The results were impressive: VSS not only delivers a 20-fold improvement in failover times, it also eliminates the need for separate Layer 2 and Layer 3 redundancy protocols.

The performance numbers are even more startling: A VSS-enabled virtual switch moved a record 770 million frames per second in one test, and routed more than 5.6 billion unicast and multicast flows in another. Those numbers are exactly twice what a single physical Catalyst 6509 can do.

[Image: Cisco 6500 Virtual Switching Supervisor Engine]

All links, all the time

To maximise up-time, network architects typically provision multiple links and devices at every layer of the network, using an alphabet soup of redundancy protocols to protect against downtime. These include rapid spanning tree protocol (RSTP), hot standby routing protocol (HSRP), and virtual router redundancy protocol (VRRP).

This approach works, but has multiple downsides. Chief among them is the "active-passive" model used by most redundancy protocols, where one path carries traffic while the other sits idle until a failure occurs. Active-passive models use only 50 percent of available capacity, adding considerable capital expense.

Further, both HSRP and VRRP require three IP addresses per subnet, even though routers use only one address at a time. And while rapid spanning tree recovers from failures much faster than the original spanning tree, convergence times can still vary by several seconds, leading to erratic application performance. Strictly speaking, spanning tree was intended only to prevent loops, but it's commonly used as a redundancy mechanism.
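
To make the address overhead concrete, here is a minimal HSRP sketch; the VLAN, group number and addresses are invented for illustration. Each router keeps its own interface address, and both share a third, virtual address that hosts use as their default gateway:

! Router A
interface Vlan10
 ip address 10.1.10.2 255.255.255.0
 standby 10 ip 10.1.10.1
 standby 10 priority 110
 standby 10 preempt
! Router B
interface Vlan10
 ip address 10.1.10.3 255.255.255.0
 standby 10 ip 10.1.10.1

Hosts point only at the virtual 10.1.10.1, yet all three addresses must be set aside in the subnet.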

There's one more downside to current redundant network designs: they create twice as many network elements to manage. Regardless of whether network managers use a command-line interface or an SNMP-based system for configuration management, any policy change needs to be made twice, once on each redundant component.

 

Introducing Virtual Switching

In contrast, Cisco's VSS uses an "active-active" model that retains the same amount of redundancy, but makes use of all available links and switch ports.

While many vendors support link aggregation (a means of combining multiple physical interfaces to appear as one logical interface), VSS is unique in its ability to virtualise the entire switch -- including the switch fabric and all interfaces. Link aggregation and variations such as Nortel's Split Multi-Link Trunk (SMLT) do not create virtual switches, nor do they eliminate the need for Layer 3 redundancy mechanisms such as HSRP or VRRP.

At the heart of VSS is the Virtual Switching Supervisor 720-10G, a management and switch fabric blade for Cisco Catalyst 6500 switches. VSS requires two new supervisor cards, one in each physical chassis. The management blades create a virtual switch link (VSL), making both devices appear as one to the outside world: There's just one media access control and one IP address used, and both systems share a common configuration file that covers all ports in both chassis.
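
For a sense of what this looks like in practice, the general shape of a VSS set-up is sketched below. This is illustrative only: the domain number, port-channel number and interface names are invented, and the exact procedure should be taken from Cisco's VSS configuration guide.

! Define the virtual switch domain and this chassis' switch number
! (the second chassis uses "switch 2" and its own VSL port-channel)
switch virtual domain 100
 switch 1
!
! Dedicate a port-channel to the virtual switch link and bind 10G uplinks to it
interface Port-channel10
 switch virtual link 1
interface range TenGigabitEthernet5/4 - 5
 channel-group 10 mode on
!
! Finally, from privileged EXEC on both chassis, convert to virtual-switch mode (each chassis reloads):
! Switch# switch convert mode virtual

Once both chassis come back up they share one configuration file, one management IP address and one MAC address, exactly as described above.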

On the access side of Cisco's virtual switch, downstream devices still connect to both physical chassis, but a bonding technology called Multichassis EtherChannel (MEC) presents the virtual switch as one logical device. MEC links can use industry-standard 802.3ad link aggregation or Cisco's proprietary Port Aggregation Protocol (PAgP). Either way, MEC eliminates the need for spanning tree. All links within a MEC are active until a circuit or switch failure occurs, and then traffic continues to flow over the remaining links in the MEC.
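
To a downstream switch, the MEC is just an ordinary port-channel; the only unusual part is that its member links land on two different physical chassis of the virtual switch. A minimal sketch with invented interface and channel numbers, using standards-based LACP:

! Access switch: bundle the two uplinks (one to each VSS chassis) into a single LACP port-channel
interface range TenGigabitEthernet1/0/1 - 2
 channel-group 1 mode active
!
! Virtual switch: the matching port-channel spans both chassis
! (note the switch/module/port numbering: 1/... is chassis 1, 2/... is chassis 2)
interface TenGigabitEthernet1/1/1
 channel-group 1 mode active
interface TenGigabitEthernet2/1/1
 channel-group 1 mode active

Substituting "mode desirable" for "mode active" would negotiate the same bundle with PAgP instead of LACP.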

Servers also can use MEC's link aggregation support, with no additional software needed. Multiple connections were already possible using "NIC teaming," but that's usually a proprietary, active/passive approach.

On the core side of Cisco's virtual switch, devices also use MEC connections to attach to the virtual switch. This eliminates the need for redundancy protocols such as HSRP or VRRP, and also reduces the number of routes advertised. As on the access side, traffic flows through the MEC in an "active/active" pattern until a failure, after which the MEC continues to operate with fewer elements.

The previous examples focused on distribution-layer switches, but VSL links work between any two Catalyst 6500 chassis. For example, virtual switching can be used at both core and distribution layers, or at the core, distribution and access layers. All attached devices would see one logical device wherever a virtual switch exists.

A VSL works only between two chassis, but it can support up to eight physical links. Multiple VSL links can be established using any combination of interfaces on the new supervisor card or Cisco's WS-X6708 10G Ethernet line card. VSS also requires line cards in Cisco's 67xx series, such as the 6724 and 6748 10/100/1000 modules or the 6704 or 6708 10G Ethernet modules. Cisco says VSL control traffic uses less than 5 percent of a 10G Ethernet link, but we did not verify this.

At least for now, VSL traffic is proprietary. It isn't possible to set up a VSL between, say, a Cisco and Foundry switch. 

 

A big swath of fabric

We assessed VSS performance with tests focused on fabric bandwidth and delay, failover times, and unicast/multicast performance across a network backbone.

In the fabric tests we sought to answer two simple questions: How fast does VSS move frames, and how long does it hang on to each frame? The set-up for this test was anything but simple. We attached Spirent TestCenter analyser/generator modules to 130 10G Ethernet ports on two Catalyst 6509 chassis configured as one virtual switch.

These tests produced, by far, the highest throughput we've ever measured from a single (logical) device. When forwarding 64-byte frames, Cisco's virtual switch moved traffic at more than 770 million frames per second. We then ran the same test on a single switch, without virtualisation, and measured throughput of 385 million frames per second -- exactly half the result of the two fabrics combined in the virtual switch. These results prove there's no penalty for combining switch fabrics.

We also measured VSS throughput for 256-byte frames (close to the average Internet frame length) of 287 million frames per second and for 1,518-byte frames (until recently, the maximum in Ethernet, and still the top end on most production networks) of 53 million frames per second. With both frame sizes, throughput was exactly double that of the single-switch case.

The 1,518-byte frames per second number represents throughput of nearly 648Gbps. This is only around half the theoretical maximum rate possible with 130 10G Ethernet ports. The limiting factor is the Supervisor 720 switch fabric, which can't send line-rate traffic to all 66 10G ports in each fully loaded chassis. VSS doubles fabric capacity by combining two switches, but it doesn't extend the capacity of the fabric card in either physical switch.

We also measured delay for all three frame sizes. With a 10 percent intended load, Spirent TestCenter reported average delays ranging from 12 to 17 microsec, both with and without virtual switching. These numbers are similar to those for other 10G switches we've tested, and far below the point where they'd affect performance of any application. Even the maximum delays of around 66 microsec with virtual switching again are too low to slow down any application, especially considering Internet round-trip delays often run into the tens of milliseconds.

 

Faster failovers

Our failover tests produced another record: the fastest recovery from a Layer 2/Layer 3 network failure we've ever measured.

We began these tests with a conventional set-up: rapid spanning tree at Layer 2, HSRP at Layer 3, and 16,000 hosts (emulated on Spirent TestCenter) sending traffic across redundant pairs of access, distribution and core switches. During the test, we cut off power to one of the distribution switches, forcing all redundancy mechanisms and routing protocols to reconverge. Recovery took 6.883 seconds in this set-up.

Then we re-ran the same test two more times with VSS enabled. This time convergence occurred much faster. It took the network just 322 millisec to converge with virtual switching on the distribution switches, and 341 millisec to converge with virtual switching on the core and distribution switches. Both numbers represent better than 20-fold improvements over the usual redundancy mechanisms.

 

A bigger backbone

Our final tests measured backbone performance using a complex enterprise traffic pattern involving 176,000 unicast routes, more than 10,000 multicast routes, and more than 5.6 billion flows. We ran these tests with unicast traffic alone and a combination of unicast and multicast flows, and again compared results with and without VSS in place.

Just to keep things interesting, we ran all tests with a 10,000-entry access control list in place, and also configured switches to re-mark all packets' Diffserv code point (DSCP) fields. Re-marking DSCPs prevents users from unauthorised "promotion" of their packets to receive higher-priority treatment. In addition, we enabled NetFlow tracking for all test traffic.

Throughput in all the backbone cases was exactly twice as high with virtual switching as without it. This was true for both unicast and mixed-class throughput tests, and also true regardless of whether we enabled virtual switching on distribution switches alone, or on both the core and distribution switches. These results clearly show the advantages of an "active/active" design over an "active/passive" one.

We measured delay as well as throughput in these tests. Ideally, we'd expect to see little difference between test cases with and without virtual switching, and between cases with virtual switching at one or two layers in the network. When it came to average delay, that's pretty much how things looked. Delays across three pairs of physical switches ranged from around 26 to 90 microsec in all test cases, well below the point where applications would notice.

Maximum delays did vary somewhat with virtual switching enabled, but not by a margin that would affect application performance. Curiously, maximum delay increased the most for 256-byte frames, with fourfold increases over results without virtual switching. The actual amounts were always well less than 1 millisec, and also unlikely to affect application performance.

Cisco's VSS is a significant advancement in the state of the switching art. It dramatically improves availability with much faster recovery times, while simultaneously providing a big boost in bandwidth.

 

How we tested Cisco's VSS

For all tests described here, we configured a 10,000-line access control list (ACL) covering Layer 3 and Layer 4 criteria and spot-checked that random entries in the ACL blocked traffic as intended. As a safeguard against users making unauthorised changes, Cisco engineers also configured access and core switches to re-mark the Diffserv code point (DSCP) in every packet, and we verified re-marking using counters in the Spirent TestCenter traffic generator/analyser. Cisco also enabled NetFlow traffic monitoring for all test traffic.
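
The test configuration itself was not published, but the three safeguards described above map onto familiar IOS constructs. The sketch below is a generic illustration with invented names, addresses and values, not the configuration Cisco's engineers used:

! One entry of a large extended ACL matching on Layer 3/Layer 4 criteria
ip access-list extended TEST-ACL
 permit tcp 10.1.0.0 0.0.255.255 any eq 80
!
! Re-mark the DSCP field of all traffic in a class
policy-map REMARK-DSCP
 class class-default
  set dscp af21
!
! Apply the ACL and the re-marking policy, and enable NetFlow accounting, on an interface
interface TenGigabitEthernet1/1
 ip access-group TEST-ACL in
 service-policy input REMARK-DSCP
 ip flow ingress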

To assess the fabric bandwidth and delay, the system under test was one pair of Cisco Catalyst 6509-E switches. Cisco engineers set up a virtual switch link (VSL) between the switches, each equipped with eight WS-X6708 10G Ethernet line cards and one Virtual Switching Supervisor 720-10G management/switch fabric card. That left a total of 130 10G Ethernet test ports: eight on each of the line cards, plus one on each of the management cards (we used the management card's other 10G Ethernet port to set up the virtual link between switches).

Using the Spirent TestCenter traffic generator/analyser, we offered 64-, 256- and 1518-byte IPv4 unicast frames on each of the 130 10G test ports to determine throughput and delay. We measured delay at 10 percent of line rate, consistent with our practice in previous 10G Ethernet switch tests. The Spirent TestCenter analyser emulated 100 unique hosts on each port, making for 13,000 total hosts.

In the failover tests, the goal was to compare VSS recovery time upon loss of a switch with recovery using older redundancy mechanisms.

This test involved three pairs of Catalyst 6509 switches, representing the core, distribution and access layers of an enterprise network. We ran the failover tests in three configurations. In the first scenario, we used legacy redundancy mechanisms such as rapid spanning tree and hot standby routing protocol (HSRP). Then we ran two failover scenarios using VSS, first with a virtual link on the distribution switches alone, and again with VSS links on both the distribution and core switches.

For each test, we began by offering traffic to each of 16 interfaces on the core and access sides of the test bed. We began the failover tests with a baseline event to verify no frame loss existed. While Spirent TestCenter offered test traffic for 300 seconds, we cut off power to one of the distribution switches. Because we offered traffic to each interface at a rate of 100,000 frames per second, each dropped frame represented 10 microsec of recovery time. So, for example, if Spirent TestCenter reported 32,000 lost frames, then failover time was 320 millisec.

The backbone performance tests used a set-up similar to the VSS configurations in the failover tests. Here again, there were three pairs of Catalyst 6509 switches, representing core, distribution and access layers of an enterprise network. Here again, we also conducted separate tests with a virtual link on the distribution switches, and again with virtual links on the distribution and core switches.

To represent enterprise conditions, we set up very large numbers of routes, hosts and flows in these tests. From the core side, we configured OSPF to advertise 176,000 unique routes. On the access side, we set up four virtual LANs (VLAN), each with 250 hosts, on each of 16 ports, for 16,000 hosts total. For multicast traffic, one host in each access-side VLAN joined each of 40 groups, and each group had 16 transmitters on the 16 core-side interfaces. In all, this test represented more than 10,000 multicast routes, and more than 5.6 billion unique unicast flows.

In the backbone tests, we used a partially meshed traffic pattern to measure system throughput and delay. As defined in RFC 2285, a partial mesh pattern is one in which ports on both sides of the test bed exchange traffic with one another, but not among themselves. In this case, that meant all access ports exchanged traffic with all core ports, and vice-versa.

We tested all four combinations of unicast, mixed multicast/unicast, and virtual switching enabled and disabled on the core switches (virtual switching was always enabled on the distribution switches and always disabled on the access switches). In all four backbone test set-ups, we measured throughput and delay. 

We conducted these tests in an engineering lab at Cisco's campus in San Jose. This is a departure from our normal procedure of testing in our own labs or at a neutral third-party facility. The change was born of logistical necessity: Cisco's lab was the only one available within the allotted timeframe with sufficient 10G Ethernet test ports and electrical power to conduct this test. Network Test and Spirent engineers conducted all tests and verified configurations of both switches and test instruments, just as we would in any test. The results presented here would be the same regardless of where the test was conducted.

---Original reading from review.techworld.com

 

More Related Cisco Topics:

Cisco Catalyst 6500 Switches Vs. Catalyst 4500 Series

Cisco Catalyst 6000/6500, Aim at Enterprise Network & Service Provider Networks

Is Catalyst 6500 Supervisor 2T Your Upgrade Answer?

Why Cisco 6500 Series is Here to Stay?

Tuesday 8 May 2012, 09:03

The Cisco 3750 range has been around for many years now, and has a vast following. The Cisco 3750-X is the new kid on the Cisco block, and it combines plenty of stuff that will be familiar to users of its predecessors with some funky new features that are clearly a step forward.

[Image: Cisco Catalyst 3750X-12S-S]

The Cisco 3750 range comes in a number of flavors – between 24 and 48 ports, with or without Power over Ethernet; the 48P, for instance, is the PoE variant of the 48-port device. The traditional Cisco 3750 had four 1Gbit/s SFP ports in addition to the 48 10/100/1000 copper ports; the Cisco 3750-X instead has a slot into which you can fit either a four-port 1Gbit/s SFP daughter-board or a two-port 10Gbit/s alternative.

Alongside the port combinations, there are three software installs. The LAN Base software is a layer-2 only software image, and quite frankly I wouldn't ever expect to buy one of these if I only wanted layer-2 functionality. More sensible is the IP Base image which makes the device a proper Layer-3 routing switch, albeit with a limited selection of routing protocols. At the top is the IP Services image, which makes the unit a full-blown router (just like its ancestors – two of my BGP-shouting WAN routers are actually 3750Gs, in fact). The main market will of course be for the IP Base version.

The rear panel is interesting too, of course. As with the older 3750s the rear panel has a pair of “stack” ports. Each stack port provides a 16Gbit/s backplane connection, and by stacking your devices in a loop you end up with a resilient 32Gbit/s backplane. From a management and configuration point of view a stack is a single virtual switch – you manage it rather like a chassis product with a number of blades. So port 1 of switch 1 is Gi1/0/1, port 3 of switch 2 is Gi2/0/3, and so on.

The important rear-panel innovation with the new Cisco 3750-X model is the provision for redundant power supplies. In the old model you had a single, non-removable power supply along with an RPS (Redundant Power Supply) connection; to use the latter and give yourself some resilience you had to buy something like an RPS2300 – an external device that was a stupid shape that didn't fit into a rack very well, had buttons on the front whose only purpose seemed to be to make things break, and, on a brighter note, provided up to six switches with resilient power. The new model has dual slots for removable PSUs, of which one is populated by default; it's a ten-second job to slip a second one in beside it. One of the downsides of the old 3750 was the bloody awful reliability of the internal (fixed) PSU, and I've spent rather too many hours swapping out switches with duff power supplies, so the removable units in the -X are most welcome.

Along with the redundant PSU facility is the power stacking capability. Just as you have your data stack cables, you also now have a pair of power-stack cables on each unit, so that the total power available via all the PSUs in the stack is available for negotiated use across the whole stack, for switch power and PoE.
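
In outline, configuring a power stack is a matter of naming the stack, choosing how the pooled power is shared and assigning members to it; the power-stack cables do the rest. The sketch below is only a rough illustration (the stack name is invented, and the exact syntax is worth checking against the 3750-X software configuration guide):

! Define the power stack and its sharing mode
stack-power stack PWR-STACK1
 mode power-shared
!
! Assign each stack member to the power stack
stack-power switch 1
 stack PWR-STACK1
stack-power switch 2
 stack PWR-STACK1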

As with the older devices, you can add and remove stack devices on the fly. Adding a switch to a stack is a simple case of setting its ID, telling the stack to expect a new member, and plumbing it in (although in theory the stack will deal with firmware mismatches in the new member, I prefer not to tempt fate so I always pre-install the right version). If a unit fails the stack will keep on humming while you pull out the duff one and stick in the replacement, and the config will be automatically migrated to the new unit.
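
For reference, the commands involved in that dance are roughly the following; the member numbers and the model keyword are illustrative rather than prescriptive.

! On the new switch, before cabling it in: give it a free member number (takes effect on reload)
Switch(config)# switch 1 renumber 2
!
! On the running stack: optionally pre-provision the new member so its ports can be configured in advance
Switch(config)# switch 2 provision ws-c3750x-48p
!
! After the new member joins, check member numbers, roles and stack state
Switch# show switch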

The only downside I've found so far, in fact, is with trying to get the new -X model to co-exist in a stack with the old 3750-G (in short, I've not persuaded it to actually work yet) but I've no doubt I'll persuade it to play before long.

The Cisco 3750-X is a really sensible evolution in an already popular family of switches in the Cisco family. Being an IOS device there's really not a great deal of difference management-wise between the old and the new, so you get new functionality with almost zero additional training requirements. I've recently added seven 48-port non-PoE versions in three of my server installations, and have just received two new pairs of the PoE variant in a couple of offices, and I'm pretty happy thus far.

 

Pro

New power stacking capability is an excellent evolution.

32Gbit/s backplane should be sufficient for most modest installations.

10Gbit/s Ethernet support for uplinking or connecting to blade servers.

 

More Cisco 3750 Info:

CISCO Catalyst 3750 Family

How to Configure a Cisco 3750

How to Add a DHCP Range to a Cisco 3750 Switch?

 

Sample Pricing for Popular Cisco 3750 Models:

Catalyst 3750X 24 Port Data LAN Base: US$2,236.00 (57.00% off list price)

Catalyst 3750X 48 Port Data LAN Base: US$3,827.00 (57.00% off list price)

Catalyst 3750X 24 Port Data IP Base: US$2,795.00 (57.00% off list price)

Catalyst 3750X 48 Port Data IP Base: US$4,945.00 (57.00% off list price)

Catalyst 3750X 24 Port PoE IP Base: US$3,139.00 (57.00% off list price)

WS-C3750X-48P-S: Stackable 48 10/100/1000 Ethernet PoE+ ports, with 715W AC Power Supply: US$5,590.00 (57.00% off list price)

 

Supplier

Router-switch.com (Yejian Technologies Co., Ltd), a World's Leading Cisco Supplier

Website: http://www.router-switch.com/

Thursday 3 May 2012, 08:20

 

You can restore a corrupt Cisco 3750 IOS (Internetwork Operating System) image by transferring a new image to flash storage using the Xmodem protocol. Power anomalies such as brownouts and surges can cause irreparable IOS image corruption. You should delete and replace a corrupt IOS image to ensure that the Cisco 3750 remains reliable. Access the “switch:” boot loader prompt through a serial connection and recover the Cisco 3750 firmware to a working state.

 

Things you’ll need to recover the firmware on a Cisco 3750 switch

Windows 7 computer that has a serial COM port and the Tera Term program installed

Cisco serial console cable

IOS image for the Cisco 3750 switch stored on the Windows 7 computer


How to Recover Cisco 3750 Firmware to a Working State?

1. Connect the Cisco serial console cable’s 9-pin connector to the Windows 7 computer’s serial COM port. Plug the other end of the serial cable into the Cisco 3750 “Console” port.

2. Launch the Tera Term terminal console program and click “File” then “New connection.” Click the “Serial” radio button. Click the “Port” box and then the name of the serial COM port connected to the Cisco 3750 switch. Click the “OK” button.

3. Unplug the Cisco 3750 switch power cable. Press and hold down the “Mode” button located on the Cisco 3750 front left panel. Power up the Cisco 3750 switch and release the “Mode” button when the Port 1x light turns off.

4. Click the Tera Term window and press the “Enter” key two times. Type “flash_init” at the command prompt and tap “Enter.” Type “load_helper” at the command prompt and press “Enter.”

5. Type “dir flash:” on the command line and press “Enter.” View the command line output and note any files that end with “.bin” or directories that have “3750” in the name.

6. Type “dir flash:directory-name” at the command line. Replace “directory-name” with the name of a directory that has “3750” in the name and press “Enter.” Inspect the command line output and note any files that end with “.bin.”

7. Type “delete flash:image-file-name” at the command prompt. Replace “image-file-name” with the name of the “.bin” file noted earlier. Press the “Enter” key. Tap the “Y” key when prompted to confirm deletion and press “Enter.”

8. Type “copy xmodem: flash:image-file-name” at the command prompt, replacing “image-file-name” with the name of the new Cisco 3750 IOS image file, and press “Enter.” Then click the “File” menu in the “Tera Term VT” window and then “Transfer.” Click “Xmodem” and then “Send.” Browse to and click on the new Cisco 3750 IOS image file and press “Enter.” Wait for the file transfer to complete (approximately 20 minutes).

9. Type “boot flash:image-file-name” at the command prompt, again substituting the new image file name, and press “Enter” to boot the Cisco 3750 with the new image.
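
Pulling the keystrokes together, the whole session at the “switch:” prompt looks roughly like this; the image filename is only a placeholder for whichever .bin file you are deleting and reloading:

switch: flash_init
switch: load_helper
switch: dir flash:
switch: delete flash:c3750-ipservicesk9-mz.122-55.SE5.bin
switch: copy xmodem: flash:c3750-ipservicesk9-mz.122-55.SE5.bin
(start the Xmodem send in Tera Term now: “File”, “Transfer”, “Xmodem”, “Send”, then wait for the transfer to finish)
switch: boot flash:c3750-ipservicesk9-mz.122-55.SE5.bin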


More Related Cisco 3750 tips:

How to Add a DHCP Range to a Cisco 3750 Switch?

How to Configure a Cisco 3750

 

Saturday 21 April 2012, 04:05

 

The Supervisor 2T engine for the Catalyst 6500E chassis is a boost to keep the 6500's legs running a little longer. I think of the 2T as a product enabling customers with a large 6500 investment to put off the inevitable migration to the Nexus platform. The 2T, by all accounts, is the end of the development roadmap for the 6500. My understanding is that the 2T takes the 6500 chassis as far as it can scale in terms of packet forwarding performance.

With the advent of the Nexus 7009, I doubt we’ll see yet another replacement 6500 chassis model (like we saw the “E” some years back). The Nexus uptake has been reasonably good for most Cisco shops, and the Nexus 7009 form factor takes away the physical space challenges faced by those previously considering the 7010 as a forklift upgrade for the widely deployed Cisco 6509. In my mind, it makes sense for Cisco to focus their Catalyst development efforts on the 4500 line for access and campus deployments, with Nexus products running NX-OS for core routing services and data center fabric. Could I be wrong? Sure. If Cisco announced a new 6500E “plus” chassis that could scale higher, then that would reflect customer demand for the product that I personally don’t see happening. Most of the network engineering community is warming up to the Nexus gear and NX-OS.

That baseline established, Cisco is selling the Sup2T today. What does it bring to the table? Note that anything in italics is lifted directly from the Cisco architecture document referenced below in the “Links” section.

  • Two Terabit (2080 Gbps) crossbar switch fabric. That’s where the “2T” comes from. These sups allow for forwarding performance of up to 2 Tbps. Of course, as with previous supervisor engines, the aggregate throughput of the chassis depends on what line cards you deploy in the chassis. That old WS-X6148A you bought several years ago isn’t imbued with magical forwarding powers just because you pop a 2T into the chassis.
  • The Supervisor 2T is designed to operate in any E-Series 6500 chassis. The Supervisor 2T will not be supported in any of the earlier non E-Series chassis. You know that non-E 6500 chassis running Sup720s you love so much? Gotta go if you want to upgrade to a 2T (to which I ask the question if you’re considering this…why not Nexus 7009 instead?)
  • As far as power requirements, note the following:
    • The Cisco 6503-E requires a 1400 W power supply and the 6504-E requires a 2700 W power supply, when a Supervisor 2T is used in each chassis.
    • While the 2500 W power supply is the minimum-sized power supply that must be used for a 6, 9, and 13-slot chassis supporting Supervisor 2T, the current supported minimum shipping power supply is 3000 W.
  • Line cards are going to bite you; backwards compatibility is not what it once was. There’s a lot of requirements here, so take note.
    • The Supervisor 2T provides backward compatibility with the existing WS-X6700 Series Linecards, as well as select WS-X6100 Series Linecards only.
    • All WS-X67xx Linecards equipped with the Central Forwarding Card (CFC) are supported in a Supervisor 2T system, and will function in centralized CEF720 mode.
    • Any existing WS-X67xx Linecards can be upgraded by removing their existing CFC or DFC3x and replacing it with a new DFC4 or DFC4XL. They will then be operationally equivalent to the WS-X68xx linecards but will maintain their WS-X67xx identification.
    • There is no support for the WS-X62xx, WS-X63xx, WS-X64xx, or WS-X65xx Linecards.
    • Due to compatibility issues, the WS-X6708-10GE-3C/3CXL cannot be inserted in a Supervisor 2T system, and must be upgraded to the new WS-X6908-10GE-2T/2TXL.
    • The Supervisor 2T Linecard support also introduces the new WS-X6900 Series Linecards. These support dual 40 Gbps fabric channel connections, and operate in distributed dCEF2T mode.

To summarize thus far, a legacy 6500 chassis will need to be upgraded to a 6500E. Many older series line cards are not supported at all, or will require a DFC upgrade. Power supplies are a consideration, although the base requirements are not egregious. Therefore, moving to a 2T will require a good bit of technical and budgetary planning to get into a Sup2T. I suspect that for the majority of customers, this will not be a simple supervisor engine swap.

This diagram from Cisco shows the hardware layout of the Sup2T, focusing on all the major junction points a packet or frame could pass through depending on ingress point, required processing, and egress point.

[Image: Supervisor 2T hardware architecture diagram]

There are two main connectors here to what Cisco identifies as two distinct backplanes: the fabric connector, and the shared bus connector. The fabric connector provides the high-speed connectors for the newer line cards, such as the new 6900 series with the dual 40Gbps connections mentioned above. The shared bus connector supports legacy cards (sometimes referred to as “classic” cards), that is linecards with no fabric connection, but rather connections to a bus shared with similarly capable cards.

The crossbar switch fabric is where the throughput scaling comes from. Notice that Cisco states there are “26 x 40” fabric channels in the diagram. That equates to the 2080Gbps Cisco’s talking about. The crossbar switch fabric on the Supervisor 2T provides 2080 Gbps of switching capacity. This capacity is based on the use of 26 fabric channels that are used to provision data paths to each slot in the chassis. Each fabric channel can operate at either 40 Gbps or 20 Gbps, depending on the inserted linecard. The capacity of the switch fabric is calculated as follows: 26 x 40 Gbps = 1040 Gbps; 1040 Gbps x 2 (full duplex) = 2080 Gbps.

“Full-duplex” means that what we’re really getting is 1Tbps in one direction, and 1Tbps in the other direction. The marketing folks are using weasel words to say that the Sup2T is providing a 2 terabit fabric. This marketing technique is neither new nor uncommon in the industry when describing speeds and feeds, but it is something to keep in mind in whiteboard sessions, especially if you’re planning a large deployment with specific data rate forwarding requirements.

Now here’s a strange bit. While the crossbar fabric throughput is described in the context of full-duplex, the 80Gbps per-slot is not. The 80 Gbps per slot nomenclature represents 2 x 40 Gbps fabric channels that are assigned to each slot providing for 80 Gbps per slot in total. If marketing math were used for this per slot capacity, one could argue that the E-Series chassis provides 160 Gbps per slot.

Moving on to the control-plane functions of the Sup2T, we run into the new MSFC5. The MSFC5 CPU handles Layer 2 and Layer 3 control plane processes, such as the routing protocols, management protocols like SNMP and SYSLOG, and Layer 2 protocols (such as Spanning Tree, Cisco Discovery Protocol, and others), the switch console, and more. The MSFC5 is not compatible with any other supervisor. The architecture is different from previous MSFCs, in that while previous MSFCs sported a route processor and a switch processor, the MSFC5 combines these functions into a single CPU.

[Image: MSFC5 architecture diagram]

The diagram also shows a “CMP”, which is a feature enhancement of merit. The CMP is the “Connectivity Management Processor,” and seems to function like an iLO port. Even if the route processor is down on the Sup2T, you can still access the system remotely via the CMP. The CMP is a stand-alone CPU that the administrator can use to perform a variety of remote management services. Examples of how the CMP can be used include: system recovery of the control plane; system resets and reboots; and the copying of IOS image files should the primary IOS image be corrupted or deleted. Implicitly, you will have deployed an out-of-band network or other remote management solution to be able to access the CMP, but the CMP enhances our ability to recover a borked 6500 from far away.

The PFC4/DFC4 comprise the next major component of the Sup2T. The PFC4 rides as a daughter card on the supervisor, and is the hardware slingshot that forwards data through the switch. The DFC4 performs the same functions only it rides on a linecard, keeping forwarding functions local to the linecard, as opposed to passing it through the fabric up to the PFC4.

The majority of packets and frames transiting the switch are going to be handled by the PFC, including IPv4 unicast/multicast, IPv6 unicast/multicast, Multi-Protocol Label Switching (MPLS), and Layer 2 packets. The PFC4 also performs in hardware a number of other functions that could impact how a packet is forwarded. This includes, but is not limited to, the processing of security Access Control Lists (ACLs), applying rate limiting policies, quality of service classification and marking, NetFlow flow collection and flow statistics creation, EtherChannel load balancing, packet rewrite lookup, and packet rewrite statistics collection.

[Image: PFC4 architecture diagram]

The PFC performs a large array of functions in hardware, including the following list I’m lifting from Cisco’s architecture whitepaper.

  • Layer 2 functions:
    • Increased MAC Address Support – a 128 K MAC address table is standard.
    • A bridge domain is a new concept that has been introduced with PFC4. A bridge domain is used to help scale traditional VLANs, as well as to scale internal Layer 2 forwarding within the switch.
    • The PFC4 introduces the concept of a Logical Interface (LIF), which is a hardware-independent interface (or port) reference index associated with all frames entering the forwarding engine.
    • Improved EtherChannel Hash – EtherChannel groups with odd numbers of members will see a better distribution across links.
    • VSS support – it appears you can build a virtual switching system right out of the box with the Sup2T. There does not seem to be a unique “VSS model” like in the Sup720 family.
    • Per Port-Per-VLAN – this feature is designed for Metro Ethernet deployments where policies based on both per-port and per-VLAN need to be deployed.
  • Layer 3 functions. There’s a lot here, and rather than try to describe them all, I’m just going to hit the feature names here, grouped by category. You can read in more detail in the architecture document I link to below.
    • Performance: Increased Layer 3 Forwarding Performance
    • IPv6: uRPF for IPv6, Tunnel Source Address Sharing, IPv6 Tunnelling
    • MPLS/WAN: VPLS, MPLS over GRE, MPLS Tunnel Modes, Increased Support for Ethernet over MPLS Tunnels, MPLS Aggregate Label Support, Layer 2 Over GRE
    • Multicast: PIM Register Encapsulation/De-Encapsulation for IPv4 and IPv6, IGMPv3/MLDv2 Snooping
    • Netflow: Increased Support for NetFlow Entries, Improved NetFlow Hash, Egress NetFlow, Sampled NetFlow, MPLS NetFlow, Layer 2 Netflow, Flexible NetFlow
    • QoS: Distributed Policing, DSCP Mutation, Aggregate Policers, Microflow Policers
    • Security: Cisco TrustSec (CTS), Role-Based ACL, Layer 2 ACL, ACL Dry Run, ACL Hitless Commit, Layer 2 + Layer 3 + Layer 4 ACL, Classification Enhancements, Per Protocol Drop (IPv4, IPv6, MPLS), Increase in ACL Label Support, Increase in ACL TCAM Capacity, Source MAC + IP Binding, Drop on Source MAC Miss, RPF Check Interfaces, RPF Checks for IP Multicast Packets

So, do you upgrade to a Sup2T? It depends. The question comes down to what you need more: speed or features. The Sup2T is extending the life of the 6500E chassis with speed and a boatload of features. That said, you can’t scale the 6500 to the sort of 10Gbps port density you can with a Nexus. Besides, most of the features found on a 6500 aren’t going to be used by most customers. If your 6500 is positioned as a core switch, then what you really need is the core functionality of L2 and L3 forwarding to be performed as quickly as possible with minimal downtime. To me, the place to go next is the Nexus line if that description of “core” is your greatest need.

If instead you need a super-rich feature set, then the question is harder to answer. The Nexus has a ways to go before offering all of the features the Catalyst does. That’s not to say that all a Nexus offers is throughput: true, NX-OS lacks the maturity of IOS, but it offers better stability than IOS-SX and the features that most customers need.

In some ways, I’m making an unfair comparison. Nexus7K and Cat6500 have different purposes, and solve different problems. But for most customers, I think either platform could meet the needs. So if you’re looking for a chassis you can leave in the rack for a very long time, it’s time to look seriously at Nexus, rejecting it only if there’s some specific function it lacks that you require. If the Nexus platform can’t solve all of your problems, then you probably have requirements that are different from merely “going faster”. The 6500/Sup2T may make sense for you.

---Original reading from packetpushers.net

More Cisco 6500 Notes:

Is Catalyst 6500 Supervisor 2T Your Upgrade Answer?

The Supervisor 2T provides 2-terabit system performance and 80Gbps switching capacity per slot on all Catalyst 6500 E-Series Chassis. As a result, you can:

  • Maintain investment protection through backward compatibility
  • Deliver scalability and performance improvements such as distributed forwarding (dCEF) 720Mpps with the fourth-generation Policy Feature Card (PFC4)
  • Support future 40Gbps interface and nonblocking 10Gbps modules
  • Enable new applications and services with hardware accelerated VPLS, Layer 2 over mGRE for Network Virtualization
  • Take advantage of integrated Connectivity Management Processor (CMP) for improved out-of-band management.

 

Thursday 12 April 2012, 11:14

 

Cisco Catalyst switches equipped with the Enhanced Multilayer Image (EMI) can work as Layer 3 devices with full routing capabilities. Example switch models that support Layer 3 routing are the 3550, 3560, 3750, etc.

On a Layer 3-capable switch, the port interfaces work as Layer 2 access ports by default, but you can also configure them as “routed ports”, which act as normal router interfaces. That is, you can assign an IP address directly to the routed port. Moreover, you can also configure a Switched Virtual Interface (SVI) with the “interface vlan” command, which acts as a virtual Layer 3 interface on the switch.

In this post I will describe a scenario with a Layer 3 switch acting as the inter-VLAN routing device together with two Layer 2 switches acting as closet access switches. See the diagram below:

[Diagram: Layer 3 switch inter-VLAN routing topology with two Layer 2 access switches]

Interface Fa0/48 of the Layer3 switch is configured as a Routed Port with IP address 10.0.0.1. Two Vlans are configured on the L3 switch, Vlan10 and Vlan20. For Vlan10 we will create an SVI with IP address 10.10.10.10 and for Vlan20 an SVI with IP address 10.20.20.20. These two IP addresses will be the default gateway addresses for hosts belonging to Vlan10 and Vlan20 on the Layer2 switches respectively. That is, hosts connected on Vlan10 on the closet L2 switches will have as default gateway the IP address 10.10.10.10. Similarly, hosts connected on Vlan20 on the closet switches will have address 10.20.20.20 as their default gateway. Traffic between Vlan10 and Vlan20 will be routed by the L3 Switch (InterVlan Routing). Also, all interfaces connecting the three switches must be configured as Trunk Ports in order to allow Vlan10 and Vlan20 tagged frames to pass between switches. Let’s see a configuration snapshot for all switches below:

Cisco L2 Switch (same configuration for both switches)

!  Create VLANs 10 and 20 in the switch database
Layer2-Switch# configure terminal
Layer2-Switch(config)# vlan 10
Layer2-Switch(config-vlan)# exit
Layer2-Switch(config)# vlan 20
Layer2-Switch(config-vlan)# end

 

!  Assign Port Fe0/1 in VLAN 10
Layer2-Switch(config)# interface fastethernet0/1
Layer2-Switch(config-if)# switchport mode access
Layer2-Switch(config-if)# switchport access vlan 10
Layer2-Switch(config-if)# end

 

!  Assign Port Fe0/2 in VLAN 20
Layer2-Switch(config)# interface fastethernet0/2
Layer2-Switch(config-if)# switchport mode access
Layer2-Switch(config-if)# switchport access vlan 20
Layer2-Switch(config-if)# end

 

!  Create Trunk Port Fe0/24
Layer2-Switch(config)# interface fastethernet0/24
Layer2-Switch(config-if)# switchport trunk encapsulation dot1q
Layer2-Switch(config-if)# switchport mode trunk
Layer2-Switch(config-if)# end

 

Cisco Layer 3 Switch

! Enable Layer 3 routing
Layer3-Switch(config)# ip routing

 

!  Create VLANs 10 and 20 in the switch database
Layer3-Switch# configure terminal
Layer3-Switch(config)# vlan 10
Layer3-Switch(config-vlan)# exit
Layer3-Switch(config)# vlan 20
Layer3-Switch(config-vlan)# end

 

!  Configure a Routed Port for connecting to the ASA firewall 
Layer3-Switch(config)# interface FastEthernet0/48
Layer3-Switch(config-if)# description To Internet Firewall
Layer3-Switch(config-if)# no switchport
Layer3-Switch(config-if)# ip address 10.0.0.1 255.255.255.252

 

!  Create Trunk Ports Fe0/47 Fe0/46
Layer3-Switch(config)# interface fastethernet0/47
Layer3-Switch(config-if)# switchport trunk encapsulation dot1q
Layer3-Switch(config-if)# switchport mode trunk
Layer3-Switch(config-if)# end

 

Layer3-Switch(config)# interface fastethernet0/46
Layer3-Switch(config-if)# switchport trunk encapsulation dot1q
Layer3-Switch(config-if)# switchport mode trunk
Layer3-Switch(config-if)# end

 

!  Configure Switched Virtual Interfaces (SVIs)
Layer3-Switch(config)# interface vlan10
Layer3-Switch(config-if)# ip address 10.10.10.10 255.255.255.0
Layer3-Switch(config-if)# no shut

 

Layer3-Switch(config)# interface vlan20
Layer3-Switch(config-if)# ip address 10.20.20.20 255.255.255.0
Layer3-Switch(config-if)# no shut

 

!  Configure default route towards ASA firewall
Layer3-Switch(config)# ip route 0.0.0.0 0.0.0.0 10.0.0.2
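
Once this is in place, a few show commands confirm that the VLANs, trunks, SVIs and routes all look as intended (output omitted here):

Layer3-Switch# show vlan brief
Layer3-Switch# show interfaces trunk
Layer3-Switch# show ip interface brief
Layer3-Switch# show ip route

Hosts in Vlan10 should then be able to ping 10.10.10.10 and hosts in Vlan20 should be able to ping 10.20.20.20, with traffic between the two VLANs routed by the Layer 3 switch.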

 

NOTE: For more discussion of Cisco Layer 3 switch inter-VLAN routing configuration from Cisco users, visit networkstraining.com

Discussion: Router vs. Layer 3 Switches

Layer 2 Switches & Layer 3 switches

 
