Cisco ASA configuration can be a frustrating task for many Cisco users; everyone runs into their own troublesome situation. Here Ethan Banks, a network engineer, shares his experience of helping a VPN client reach a remote office, along with an example of Cisco ASA 8.3/8.4 hairpinning NAT configuration.
Here is the case:
“I ran into an issue over the weekend where a VPN client was unable to access a remote office connected via an L2L tunnel terminated on the same firewall. The symptoms were straightforward enough. The client was unable to either ping or open a URL at a specific server at the remote office, although this connectivity used to work. In this example, VPN client 192.168.100.100 was not able to access server 10.11.12.1, although access to resources in the 10.10.0.0/16 network was fine.
I confirmed the remote office firewall was unlikely to be the issue; the remote firewall had seen no changes. As I knew the headquarters Cisco ASA firewall HAD seen a few changes, that’s where I focused my attention. After reviewing the headquarters firewall rulebase, I knew that the VPN client IP pool had permission to access resources in the remote office.
Monitoring the firewall logs, I spotted several “110003: Routing failed to locate next-hop for protocol from src interface:src IP/src port to dest interface:dest IP/dest port” messages tied directly to the VPN client trying to open a socket to 10.11.12.1. So, I reviewed the firewall routing table with “show route” and “show asp table routing” and found no issues…not that I expected to. If the routing table was having a problem, connectivity issues would have been more widespread.
Of course, NAT sprung to mind as a potential issue, but I couldn’t see an obvious problem. There was a NAT that exempted the entire VPN client pool from being translated to any RFC1918 destinations. As this clearly covered the remote office IP range, I was a little stumped. This confusion was compounded by the fact that the connectivity used to work. A perplexing issue.
Take a read through the “Cisco ASA 5500 Series Configuration Guide Using the CLI, 8.4 and 8.6 - Setting General VPN Parameters.” A couple of highlights caught my eye:
- The “same-security-traffic permit intra-interface” is required. Fair enough, easy to implement, makes sense, and I’d already done that. No problem.
- Now the documentation got confusing because of two conflicting statements:
- “When the ASA sends encrypted VPN traffic back out this same interface NAT is optional. The VPN-to-VPN hairpinning works with or without NAT.” Okay – so I don’t need to write a NAT statement for the hairpinned traffic. NAT is optional, right? But then you next read…
- “To exempt the VPN-to-VPN traffic from NAT, add commands that implement NAT exemption for VPN-to-VPN traffic.” Uh, hang on. So I *do* need a NAT statement?
From my experience, I believe that, yes, you need a NAT exemption statement. I think all Cisco is trying to say is that you don’t have to actually translate the source or destination address into something else to be able to get through the hairpin.
Writing a NAT exemption statement is not an unusual thing to have to do in an ASA, but the magic in the context of hairpinning is in defining the ingress and egress interfaces. In a hairpin path, the traffic flows in and out the same interface. While I did have a NAT statement that matched source and destination addresses in question, the interfaces were only suitable for handling source VPN client to destination headquarters network traffic…not traffic headed from VPN client to the remote office network. Therefore, I needed a NAT statement like this: “nat (outside,outside) source static client_vpn_pool client_vpn_pool destination static remote_office_net remote_office_net“.
The order of the NAT statement also mattered, as NAT statements are processed in order. Once I moved my new NAT statement to the top of the list, the issue was resolved.”
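Putting the pieces of the story together, a minimal ASA 8.3+ configuration sketch of the fix might look like the following. The object names match those quoted in the article, and the subnets are illustrative, taken from the addresses mentioned; adjust both to your environment.

```
! Allow traffic to enter and leave the same interface (the hairpin)
same-security-traffic permit intra-interface

! Objects for the VPN client pool and the remote office network
! (subnets are examples based on the addresses in the story)
object network client_vpn_pool
 subnet 192.168.100.0 255.255.255.0
object network remote_office_net
 subnet 10.11.12.0 255.255.255.0

! Identity ("exemption") NAT with outside as both ingress and egress
! interface; the explicit position 1 puts it at the top of the NAT
! table so it is evaluated before broader translation rules
nat (outside,outside) 1 source static client_vpn_pool client_vpn_pool destination static remote_office_net remote_office_net
```

Note that the source and destination objects are translated to themselves, which is what makes this an exemption rather than an actual translation, as the author concludes.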
More Related Cisco Contents http://blog.router-switch.com/category/cisco-certification/ccie/
Nexus 3500 will feature ultra-low latency, target Infiniband
Cisco has another Nexus Ethernet data center switch in the works, this one with ultra-low latency and a more formidable rival to Infiniband.
Details on the Nexus 3500 are unavailable. Cisco wouldn't discuss it when asked about an online ad seeking a software engineer for the product.
"This is to be announced later," a Cisco spokesperson e-mailed. "No details to share yet."
Sources, however, say the Nexus 3500 will feature 250 nanosecond port-to-port latency and integrated network address translation. They say it will give Infiniband a run for the money, especially when combined with a new NIC from Cisco called usNIC.
"If this product does come out with latency as stated, it will dominate the silicon industry slamming down on Broadcom and more importantly Fulcrum," the source says. "But also compete with Mellanox and close the gap with Infiniband. Cisco will be untouchable in ultra-low latency switching."
Red Hat has apparently run tests of usNIC with its MRG-M high performance computing software. Slides 16-18 of this presentation appear to show performance improvements of MRG-M when running usNIC vs. Infiniband.
Red Hat was not immediately available for comment. But our source said it, along with the Nexus 3500, could be a viable alternative to Infiniband.
"That, combined with this new Nexus 3500 having 250ns latency would be a compelling solution against Infiniband," the source said. "If Cisco launches Nexus 3500 in the next few months and combines usNIC in the launch it will finally be the first Ethernet solution that can compete against (Infiniband)...(and) shows Cisco intent to kill Infiniband with Ethernet."
Mellanox is a leading Infiniband networking vendor. Requests to the company for comment were not answered by posting time.
Cisco currently offers the Nexus 3000 line for low-latency, top-of-rack switching targeted at high-frequency financial trading, as well as the Nexus 7000, 5000 and 2000 lines for data center fabric switching.
---Reading resource from networkworld.com
More Cisco Nexus Switch Tips & News:
By default the Cisco Catalyst 2950 is not configured for remote administration. Basic configuration to enable remote administration on the Cisco Catalyst 2950 includes configuring an IP address on the switch and also enabling telnet access. Once these configurations are completed, the Cisco Catalyst 2950 can be managed by IP address.
Things You'll Need
- Cisco serial console cable
- Windows XP computer connected to the local network
- Privileged exec password for the Cisco Catalyst 2950
- IP address, subnet mask and gateway IP address for the switch
Instructions to Manage Cisco Catalyst 2950 by IP Address
1. Connect the Cisco serial console cable into the console port on the Cisco Catalyst 2950 switch and connect the other end of the cable into the 9-pin serial port, which is usually located on the back or side of the Windows XP computer.
2. Click the "Start" button, select "Run," type "hypertrm" and press the "Enter" key; the HyperTerminal program will appear. Type a name for the session, such as "Cisco 2950," in the "Name:" field and click the "OK" button. In the "Connect using:" drop-down menu, select the COM port used to connect the Windows XP computer to the Cisco 2950 switch and press the "Enter" key. Then set the "Bits per second:" drop-down menu to "9600," set the "Flow control:" drop-down menu to "None" and press the "Enter" key.
3. Press the "Enter" key and the Cisco command prompt will appear. Type "enable" and press "Enter." Then enter the password if requested.
4. Type "config term" and press the "Enter" key to enter "Configuration Mode" on the switch.
5. Type "line vty 0 4" and press the "Enter" key. Type "password abcd," replacing "abcd" with the password you wish to use to secure telnet access. Press the "Enter" key. Then type "login" and then press the "Enter" key.
6. Type "interface Vlan1" and press the "Enter" key. Then type "ip address 10.0.0.1 255.0.0.0," replacing the "10.0.0.1 255.0.0.0" with the IP address and subnet mask assigned to the switch. Press the "Enter" key.
7. Type "exit" and press the "Enter" key. Then type "ip default-gateway x.x.x.x," replacing "x.x.x.x" with the gateway IP address for the switch. Press the "Enter" key. Then type "end" and press the "Enter" key. Type "copy run start" and press the "Enter" key to save the configuration. Type "exit" and press the "Enter" key.
8. Click "Start" on the Windows XP computer. Click "Run" and then type "cmd" and press the "Enter" key. Type "telnet x.x.x.x" on the command line, replacing "x.x.x.x" with the IP address just configured on the Cisco Catalyst 2950. Press the "Enter" key. Type the telnet password just programmed into the Cisco Catalyst 2950 when requested. Press the "Enter" key and the Cisco command prompt should display so you can now manage the switch over the network.
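The configuration entered in steps 3 through 7 can be condensed into a single sketch. The password, IP address, subnet mask and gateway shown here are placeholders; substitute the values for your own network.

```
Switch>enable
Switch#configure terminal
Switch(config)#line vty 0 4
Switch(config-line)#password abcd          ! replace with your telnet password
Switch(config-line)#login
Switch(config-line)#exit
Switch(config)#interface Vlan1
Switch(config-if)#ip address 10.0.0.1 255.0.0.0   ! management IP and mask
Switch(config-if)#exit
Switch(config)#ip default-gateway 10.0.0.254      ! your gateway address
Switch(config)#end
Switch#copy running-config startup-config
```

Once saved, the switch answers telnet on the VLAN 1 address, as verified in step 8.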
More Related to Cisco 2950 Series:
When Cisco came out with its Unified Compute System (UCS) blades a couple of years back, there was plenty of skepticism about how the company would fare by venturing into the new pastures of the server landscape. Last month's announcement that the company passed the 10,000-customer milestone for UCS sales laid many of those doubts to rest.
With IDC rating blades as the fastest growing server segment during the next several years, this bodes well for Cisco's growing presence in the marketplace.
"We're hearing from customers who are reporting all-in savings in the range of 40 percent on the cost of computing," said Todd Brannon, senior manager, Data Center and Virtualization, Cisco. "The savings stem from a variety of sources: lower capex as the platform efficiently scales, reduced administrator time, density/power savings and reduced software licensing costs as more workload lands on fewer servers."
One customer told Brannon that his CTO was able to take a Cisco blade straight out of the box and insert it into a chassis slot; as the system identified and integrated the new resource into the available pool, the team congratulated him on his first server deployment.
New Cisco UCS Blades
Since our last snapshot around two years ago, Cisco server blade releases have been largely in lock step with the roll-out of Intel Xeon processor roadmap. Two years ago, the company released the Cisco UCS B200 M1 and B250 M1 blades, which are based on the Intel Xeon processor 5500 series. In the past year, it introduced the Cisco UCS B200 M2 and B250 M2, both based on the Intel Xeon Processor 5600 series.
The UCS B200 blade server is a half-width, 2-socket blade server with up to 192 GB of memory. It can deliver substantial throughput and scalability.
The Cisco UCS B250 M2 Extended Memory Blade Server is aimed at maximizing performance and capacity for demanding virtualization and large dataset applications. It is a full-width, 2-socket blade server that supports up to 384 GB of memory.
In addition, the Cisco UCS B230 M2 and B440 M2 blade servers are based on the Intel Xeon processor E7 family. These two servers are follow-on models to earlier-released M1 versions that were based on the Intel Xeon Processor 7500 series.
The Cisco UCS B230 M2 Blade Server is a two-socket server supporting up to 20 cores and 512 GB of memory. The B230 M2 extends the capabilities of the Cisco Unified Computing System by delivering higher levels of performance, efficiency and reliability in a more compact, half-width form factor.
The UCS B440 M2 is a 4-socket blade that can support up to 40 cores and 512GB of memory. It is best for enterprise-class applications.
"We will continue to roll out blades targeted at both infrastructure and enterprise-class applications," said Brannon. "Last year, we delivered nine benchmarking world records at the launch of the Intel Xeon processor E7 family."
Cisco UCS Racks
Cisco offers more than just blades. It also provides a range of UCS rack servers. Much like it has done with blades, Cisco has transitioned the rackmount servers from M1 to M2 models to support the newest Intel Xeon Processor 5600 or E7 family.
The Cisco UCS C200 M2 and UCS C210 M2 servers are high-density, 2-socket rackmount servers built for production-level network infrastructure, web services, and mainstream data center, branch and remote-office applications. The Cisco UCS C250 M2 server is a high-performance, memory-intensive, 2-socket, 2-rack-unit (RU) rackmount server designed for virtualization and large-dataset workloads.
Two rackmount servers use the Intel Xeon processor E7 family. The Cisco UCS C260 M2 Rack-Mount Server is a high-density, 2-socket platform that offers compact performance for enterprise-critical applications. The C260 M2 server's maximum 1TB of memory and 16 drives make it good for memory-bound or disk-intensive applications.
The Cisco UCS C460 M2 Rack-Mount Server has enough processing power, memory and local storage to house mission-critical applications, as well as server consolidation of resource-intense workloads.
"Cisco UCS is a next-generation data center server platform that unites compute, network, storage access and virtualization into a cohesive system designed to outperform previous server architectures, increase operational agility and flexibility while potentially dramatically reducing overall data center costs," said Brannon. "The system is programmable using single point, model-based management to simplify and speed deployment of applications and services running in bare-metal, virtualized, and cloud-computing environments."
---Reading from serverwatch.com
More Related Cisco UCS news:
It is important to understand how to access switch ports. The 3550 switch uses the type slot/port syntax, just like a 2621 router. For example, FastEthernet 0/3 is 10/100BaseT port 3.
The 3550 switch type slot/port syntax can be used with either the interface command or the show command. The interface command allows you to set interface-specific configurations. The 3550 switch has only one slot: zero (0), just like the 1900.
Network Layout: Work with the saved network that you used to configure devices in lab 8.27.
1. To configure an interface on a 3550 switch, go to global configuration mode and use the interface command as shown.
3550A#config t
Enter configuration commands, one per line. End with CTRL/Z
3550A(config)#interface ?
Async Async interface
BVI Bridge-Group Virtual Interface
Dialer Dialer interface
FastEthernet FastEthernet IEEE 802.3
Group-Async Async Group interface
Lex Lex interface
Loopback Loopback interface
Multilink Multilink-group interface
Null Null interface
Port-channel Ethernet Channel of interfaces
Transparent Transparent interface
Tunnel Tunnel interface
Virtual-Template Virtual Template interface
Virtual-TokenRing Virtual TokenRing
Vlan Catalyst Vlans
fcpa Fiber Channel
range interface range command
2. The next output asks for the slot. Since the 3550 switch is not modular, there is only one slot, which is 0, although the help output lists 0-2 for some odd reason. You can only type 0 as the slot; any other slot number will give you an error. A slash (/) then separates the slot and port in the configuration.
3550A(config)#interface fastethernet ?
<0-2> FastEthernet interface number
3550A(config)#interface fastethernet 0?
3550A(config)#interface fastethernet 0/?
<0-12> FastEthernet interface number
3. After the 0/ in the command, the above output shows the number of ports you can configure. The output below shows the completed command.
3550A(config)#interface fastethernet 0/4
4. Once you are in interface configuration mode, the prompt changes to (config-if). After you are at the interface prompt, you can use the help commands to see the available commands.
3550A(config-if)#?
Interface configuration commands:
arp Set arp type (arpa, probe, snap) or timeout
bandwidth Set bandwidth informational parameter
carrier-delay Specify delay for interface transitions
cdp CDP interface subcommands
channel-group Etherchannel/port bundling configuration
default Set a command to its defaults
delay Specify interface throughput delay
description Interface specific description
dot1x IEEE 802.1X subsystem
duplex Configure duplex operation.
exit Exit from interface configuration mode
help Description of the interactive help system
hold-queue Set hold queue depth
ip Interface Internet Protocol config commands
keepalive Enable keepalive
load-interval Specify interval for load calculation for an interface
logging Configure logging for interface
mac-address Manually set interface MAC address
mls mls interface commands
mvr MVR per port configuration
no Negate a command or set its defaults
ntp Configure NTP
You can switch between interface configurations by using the int fa 0/# command at any time from global configuration mode.
5. Let’s look at the duplex and speed configurations for a switch port.
3550A(config-if)#duplex ?
auto Enable AUTO duplex configuration
full Force full duplex operation
half Force half-duplex operation
3550A(config-if)#speed ?
10 Force 10 Mbps operation
100 Force 100 Mbps operation
auto Enable AUTO speed configuration
6. Since the switch port's duplex and speed settings are already set to auto by default, you do not need to change them. It is recommended that you allow the switch port to auto-negotiate speed and duplex in most situations. In the rare situation where you must manually set the speed and duplex of a switch port, you can use the following configuration.
3550A(config-if)#duplex full
Duplex will not be set until speed is set to non-auto value
full duplex - transmission of data in two directions simultaneously. It has a higher throughput than half duplex.
There are no collision domains with this setting
Both sides must have the capability of being set to full duplex
Both sides of the connection must be configured with full duplex
Each side transmits and receives at full bandwidth in both directions
7. Notice in the above output that to run full duplex, you must first set the speed to a non-auto value.
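For the rare case where hard-coding is required, the full sequence might look like this on the 3550. The interface number is just the one used throughout this lab; note that speed is set before duplex, since duplex will not take effect while speed is auto.

```
3550A(config)#interface fastethernet 0/4
3550A(config-if)#speed 100        ! speed must be a non-auto value first
3550A(config-if)#duplex full      ! now duplex can be forced
```

Remember that both ends of the link must agree; a hard-coded port facing an auto-negotiating partner is a classic cause of duplex mismatches.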
8. In addition to the duplex and speed commands that can be configured on the switch port, you can also turn on what is called portfast. The portfast command allows a switch port to come up quickly. Typically a switch port waits 50 seconds for spanning-tree to go through its "gotta make sure there are no loops!" cycle. However, if you turn portfast on, you had better be sure you do not create a physical loop in the switched network. A spanning-tree loop can severely hurt or bring your network down. Here is how you would enable portfast on a switch port.
3550A(config-if)#spanning-tree ?
bpdufilter Don't send or receive BPDUs on this interface
bpduguard Don't accept BPDUs on this interface
cost Change an interface's spanning tree port path cost
guard Change an interface's spanning tree guard mode
link-type Specify a link type for spanning tree protocol use
port-priority Change an interface's spanning tree port priority
portfast Enable an interface to move directly to forwarding on link up
stack-port Enable stack port
vlan VLAN Switch Spanning Tree
9. The command above shows the available options for the spanning-tree command. We want to use the portfast command.
3550A(config-if)#spanning-tree portfast
%Warning: portfast should only be enabled on ports connected to a single
host. Connecting hubs, concentrators, switches, bridges, etc... to this
interface when portfast is enabled, can cause temporary bridging loops.
Use with CAUTION
%Portfast has been configured on FastEthernet0/4 but will only
have effect when the interface is in a non-trunking mode.
10. Notice the message the switch provides when enabling portfast. Although it seems like the command did not take effect, as long as the port is in access mode (discussed in a minute), the port will now be in portfast mode.
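Since portfast only takes effect on non-trunking ports, a reasonable sketch is to pin the port to access mode before enabling it. The interface number is just the one used in this lab.

```
3550A(config)#interface fastethernet 0/4
3550A(config-if)#switchport mode access      ! portfast only applies in non-trunking mode
3550A(config-if)#spanning-tree portfast      ! port goes straight to forwarding on link up
```

Only do this on ports facing a single end host; connecting another switch or hub to a portfast-enabled port risks the temporary bridging loops the warning describes.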
11. After you make any changes you want to the interfaces, you can view the different interfaces with the show interface command. The switch output below shows the command used to view a 10/100BaseT interface on the 3550 switch.
3550A#sh int f0/4
FastEthernet0/4 is up, line protocol is up
Hardware is Fast Ethernet, address is 00b0.c5e4.e2cf (bia 00b0.c5e4.e2cf)
MTU 1500 bytes, BW 10000 Kbit, DLY 1000 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full duplex, 100Mb/s
input flow-control is off, output flow-control is off
ARP type: ARPA, ARP Timeout 04:00:00
Last input never, output 1w6d, output hang never
Last clearing of "show interface" counters never
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
Queueing strategy: fifo
Output queue :0/40 (size/max)
5 minute input rate 0 bits/sec, 0 packets/sec
5 minute output rate 0 bits/sec, 0 packets/sec
1 packets input, 64 bytes, 0 no buffer
Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 watchdog, 0 multicast, 0 pause input
0 input packets with dribble condition detected
1 packets output, 64 bytes, 0 underruns
0 output errors, 0 collisions, 3 interface resets
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier, 0 PAUSE output
0 output buffer failures, 0 output buffers swapped out
12. In addition to the show interface command, you can use the show running-config command to see the interface configuration as well.
switchport mode dynamic desirable
13. You can administratively set a name for each interface on the 3550 switch. Like the hostname, descriptions are only locally significant. For the 3550 series switch, use the description command. You can use spaces with the description command, though you can use underscores instead if you prefer.
To set the descriptions, you need to be in interface configuration mode. From interface configuration mode, use the description command to describe each interface.
Enter configuration commands, one per line. End with CTRL/Z
3550A(config)#int fa 0/4
3550A(config-if)#description Marketing VLAN
3550A(config-if)#int fa 0/10
3550A(config-if)#description trunk to Building 3
In the configuration example above, we set the description on both port 4 and 10.
14. Once you have configured the descriptions you want on each interface, you can view them with either the show interface command or the show running-config command. View the configuration of interface FastEthernet 0/4 by using the show interface fastethernet 0/4 command.
3550A#sh int fa 0/4
FastEthernet0/4 is up, line protocol is up
Hardware is Fast Ethernet, address is 00b0.1a09.2097 (bia 00b0.1a09.2097)
Description: Marketing VLAN
15. Use the show running-config command to view the interface configurations as well.
description "Marketing VLAN"
Notice in the above switch output that the sh int fa0/4 command and the show run command both show the description command set on an interface.
---Original reading at content.digiex.net
More Cisco 3560 Tutorials and Tips:
When an enterprise needs more network ports in a conference room or an extra jack for a printer in an office, a network administrator has traditionally had very few good choices. There was the expensive option of pulling more cables from the wiring closet, or the option of plugging in an unmanaged 8-port switch from a low-cost vendor into an existing port, complicating campus network design.
Now that port shortage problem has reached beyond the conference room as enterprises of all kinds are adding a multitude of IP devices and stretching the edge of the LAN beyond the wiring closet. Companies now deploy large numbers of IP phones and video surveillance cameras, schools have more computers and IP-based instructional technology and retail shops have deployed more IP-connected kiosks and point-of-sales stations. While 802.11n wireless LAN technology and cheap unmanaged switches have mitigated the port shortage to some extent, a better answer may lie in enterprise-class compact switches.
Cisco Systems unveiled a new family of compact switches targeting this problem. The switches are part of the Catalyst C-Series and consist of the Catalyst 2960-C and the 3560-C. There are five models, with 8 to 12 Fast Ethernet or Gigabit Ethernet (GbE) ports and dual GbE uplinks. These switches do not require their own power source, since each device has a new Power-over-Ethernet (PoE+) "pass-through" feature that allows them to be powered by an upstream closet switch. They in turn can pass PoE power downstream to IP-connected devices like phones and cameras.
The Catalyst C switches also have many enterprise-class features that low-cost switches lack, such as auto-configuration, IPv6 acceleration and access control lists (ACL). They also have several features central to Cisco's broader Borderless Networks architecture, including Cisco security functions, TrustSec and the IEEE standard MACSec, and Cisco's EnergyWise energy management. The product compares somewhat to a port extender released by Extreme Networks in 2009, the ReachNXT 100-8t port extender, an 8-port device.
For Jordan Martin, technical services manager at a Pennsylvania-based healthcare enterprise, an enterprise-class 8-port switch would simplify his campus network design.
"We have all kinds of little, unmanaged switches lying around places where there just aren't enough jacks to facilitate what we need. Unfortunately a lot of our wiring in our building was done without a ton of forethought,” Martin said.
"We have a campus here with a guard shack and we need to be able to process fiber in and Ethernet out, so we need a decent capability switch out there. But I don't want to spend $3,000 for one guy with a computer and a phone."
Using unmanaged switches from a low-cost vendor has been adequate at times within his network, but such devices don’t scale well, Martin said. Replacing them with enterprise-class 8-port switches could improve operations, management and visibility into the edge of his network.
"In a non-managed switch, if you're having trouble with a device, it could be the switch; it could be the cabling. Being able to take a look at the interface and see if it's a duplex mismatch or whatever the issue may be without having to go out to the location and put some tap on the line… That remote diagnostic capability of an enterprise switch is big for us."
Campus network design: Even with good forethought you'll need the occasional 8-port switch
Eric Steel, network engineer with Georgia-based law firm Constangy, Brooks & Smith, said he usually avoids the need for switches beyond the wiring closet by planning ahead and making sure he has plenty of ports across the network.
"But in those cases where we can't, we end up putting in a cheap mini-switch -- Linksys or Netgear," he said.
Those switches bring various operational challenges. Steel has to properly configure them for spanning tree protocol so that they don't loop into the LAN, and getting power to the device is also a frequent challenge. "Security is, of course, another headache, because you now have some open ports for people to plug into accidentally or maliciously," Steel said.
Replacing an unmanaged 8-port switch with compact enterprise-class switches allows users to have a network management and security feature set from the core to the edge, said Mike Spanbauer, principal analyst with Current Analysis.
"It offers the ability for the end user to basically standardize on a specific security configuration or software image," he said. "And if they have Catalyst 3560s in the closet and these 3560-Cs remotely deployed in a conference room, which offers the ability to simplify management."
These compact switches also give new campus network design options to enterprises with large numbers of small branches or locations with a light network footprint.
The Catalyst C switches replace a collection of older 8-port Fast Ethernet Catalyst 2960 switches, which lacked the Borderless Networks capabilities, memory, PoE pass-through and dual uplinks of these new models.
---Original news from searchnetworking.techtarget.com
More Cisco Catalyst Switch Tips and Cisco Switch Info:
Juniper EX4200 or Cisco Catalyst 3750 Series Switch Compared
What do people think about Juniper's EX switches vs. Cisco Catalyst switches? Some answer like this: "Well, the Juniper switches are much cheaper, that's for sure. I don't understand this Cisco-only mentality that's out there - why would I pay 3 or 4 times as much for a switch with fewer features?" Or: "We bought the Blade Network Technologies rack switches. Juniper OEMs them, but they are even cheaper bought from BNT. And the support is great too."…
Both Cisco and Juniper have many users and followers; the question is not which is better overall, but which is right for you. Here is a comparison between Juniper's EX4200 switch and Cisco's Catalyst 3750 series switches, which may help you learn more about both lines.
EX4200 vs. Catalyst 3750: Layer 3 Stackable Switch Comparison
With prices starting at under $4,000, Juniper's EX4200 line is available in 24- and 48-port 10/100/1000 densities, both PoE and non-PoE. The switches also include either 1Gb or 10Gb modular uplink connectivity. Another nice feature is the standard hot-swap power supplies, while most Cisco 3750 switches come with a single non-field-serviceable power supply.
The Cisco 3750G, 3750-E, and Catalyst 3750-X switches come in over 70 different models, and it can be overwhelming to figure out exactly which model to order without going through a myriad of technical, feature and pricing comparisons. Juniper makes it easy, offering one model with the same or better performance in several categories than all of the Cisco 3750 series switches. Better yet, Juniper's J-Care support can cost as much as 75% less than Cisco's SmartNet.
One of the most important factors in choosing a Layer 3 stackable switch is the actual performance of the stack. An independent study found the Juniper EX4200's latency is consistently lower when the switches are in a Virtual Chassis configuration. Coincidentally, Cisco doesn't publish latency rates for its stackable solution. Virtual Chassis configurations recover from hardware and software failures in milliseconds and operate at 30-Gbit/s rates in each direction between switches.
So in a side-by-side comparison between the Juniper EX4200 and the Cisco 3750G, E or X, it was no contest.
Price and Specs of Juniper EX4200, Cisco 3750G, Cisco 3750-E, Cisco 3750-X
The comparison covered: example list prices; one-year 24x7x4 support list prices; stacking throughput (Gbps); maximum switches in a virtual stack; L3 RIP and static routing; and internal power capabilities (redundant hot-swappable supplies versus a single field-replaceable supply).
The Cisco RV110W Wireless-N VPN Firewall offers simple, highly secure wired and wireless connectivity for small offices, home offices, and remote workers at an affordable price. It comes with a high-speed, 802.11n wireless access point, a 4-port 10/100 Mbps Fast Ethernet switch, an intuitive, browser-based device manager, and support for the Cisco Small Business FindIT Network Discovery Utility.
It combines business-class features, simple installation, and a quality user experience to provide basic connectivity for small businesses with five or fewer employees.
The RV110W Wireless-N VPN Firewall also features:
- A proven firewall with support for access rules and advanced wireless security to help keep business assets safe
- IP Security (IPsec) VPN support for highly secure remote-access client connectivity
- Support for separate virtual networks to allow you to set up highly secure wireless guest access
- Native support for IPv6, which allows you to take advantage of future networking applications and operating systems, without an equipment upgrade
- Support for Cisco Small Business QuickVPN software
The good: The Cisco RV110W Wireless-N VPN Firewall router offers a built-in PPTP VPN server and fast performance. The compact, IPv6-ready router is easy to use and comes with a well-organized, responsive Web interface.
The bad: The RV110W lacks support for dual-band and Gigabit Ethernet. Its VPN supports only up to five remote clients at a time.
The bottom line: The Cisco RV110W Wireless-N VPN Firewall would make a very good investment for a small business that needs an easy VPN solution for remote employees.
The Cisco RV110W Wireless-N VPN Firewall router is not for everyone, but those who need it will appreciate its simplicity. The router offers a built-in VPN for up to five clients at a time. Other than the VPN this is a simple single-band Wireless-N router that doesn't support dual-band wireless or Gigabit Ethernet. At an estimated price of less than $120, though, it's still a good choice for a small business.
Design and ease of use
The Cisco RV110W Wireless-N VPN Firewall router is square and compact, about the size of a bathroom tile. It has four little rubber feet on the bottom to keep it in place, and is also wall-mountable. Unlike other home routers from Cisco, such as the E series, that have internal antennas, the RV110W has two antennas sticking up from the back. Also on the back you'll find the router's one WAN port (to hook up to the Internet) and four LAN ports (for wired clients). None of these ports, unfortunately, is Gigabit Ethernet, meaning the router offers at most 100Mbps on its wired networks.
The router doesn't have a USB port, either, which means there's no built-in network storage or print-server capability.
On the front, the router has a Wi-Fi Protected Setup button that helps quickly add Wi-Fi clients to the network. There's also an LED array to show the statuses of the ports on the back and the connection to the Internet.
Unlike other routers, the RV110W doesn't come with the Cisco Connect software. Instead, it has a well-illustrated Quick Start Guide that takes you through the setup process, from hooking up the cables to getting the wireless network up and running. Part of the process involves logging in to the router's well-organized and responsive Web interface, which includes a wizard to make the setup process even easier.
The RV110W's most important feature is the built-in support for hosting a VPN network, which allows clients outside the office to connect to the network as though they were within the local network. This enables remote workers to access local resources such as printers, remote desktops, and databases.
Generally, you'd need a domain server to do this, or you'd need to opt for a much more expensive router. The RV110W is possibly the cheapest simple VPN hosting product that offers an easy-to-use built-in PPTP VPN server on the market. Nonetheless, you'll need to be fairly well-versed in networking to configure a client to connect to the router. On the router side, however, it takes just a few mouse clicks to get the VPN ready.
The router's VPN network-hosting support is limited to five concurrent clients, so if your business has more than five employees who work remotely, this router is not for you.
The RV110W is a single-band wireless router, offering Wireless-N (802.11n) on the 2.4GHz band only. Most new home routers offer support for dual-band, meaning they can also broadcast on the higher-bandwidth 5GHz band. For a business router, however, it's still normal not to offer 5GHz. What's not normal, and is disappointing, however, is the fact that the RV110W doesn't offer Gigabit Ethernet.
To make up for that, it's one of the few routers on the market that are IPv6-ready. The new version of the Internet protocol promises better security and speed and, most importantly, is future-proof, as the world moves on from IPv4, which is running out of addresses.
For more Cisco wireless info, visit: http://blog.router-switch.com/category/technology/wireless/
Virtualization, long a hot topic for servers, has entered the networking realm. With the introduction of a new management blade for its Catalyst 6500 switches, Cisco can make two switches look like one while dramatically reducing failover times in the process.
In an exclusive Clear Choice test of Cisco's new Virtual Switching System (VSS), Network World conducted its largest benchmarks to date, using a mammoth test bed with 130 10G Ethernet interfaces. The results were impressive: VSS not only delivers a 20-fold improvement in failover times but also eliminates Layer 2 and 3 redundancy protocols at the same time.
The performance numbers are even more startling: A VSS-enabled virtual switch moved a record 770 million frames per second in one test, and routed more than 5.6 billion unicast and multicast flows in another. Those numbers are exactly twice what a single physical Catalyst 6509 can do.
All links, all the time
To maximise up-time, network architects typically provision multiple links and devices at every layer of the network, using an alphabet soup of redundancy protocols to protect against downtime. These include rapid spanning tree protocol (RSTP), hot standby routing protocol (HSRP), and virtual router redundancy protocol (VRRP).
This approach works, but has multiple downsides. Chief among them is the "active-passive" model used by most redundancy protocols, where one path carries traffic while the other sits idle until a failure occurs. Active-passive models use only 50 percent of available capacity, adding considerable capital expense.
Further, both HSRP and VRRP require three IP addresses per subnet, even though routers use only one address at a time. And while rapid spanning tree recovers from failures much faster than the original spanning tree, convergence times can still vary by several seconds, leading to erratic application performance. Strictly speaking, spanning tree was intended only to prevent loops, but it's commonly used as a redundancy mechanism.
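The three-addresses-per-subnet pattern is easy to see in a classic HSRP configuration. A minimal sketch, with illustrative VLAN and address values (not taken from any of the tested networks) — each of the two routers has its own interface address, and both share the virtual gateway address that hosts point at:

```
! Router A of an HSRP pair (Router B would use its own interface
! address, e.g. 10.1.10.3, plus the same shared virtual IP).
interface Vlan10
 ip address 10.1.10.2 255.255.255.0   ! this router's own address
 standby 10 ip 10.1.10.1              ! shared virtual gateway address
 standby 10 priority 110              ! higher priority becomes active
 standby 10 preempt                   ! reclaim active role after recovery
```

That is three IP addresses consumed on the subnet (10.1.10.1, .2, .3) for what is, at any moment, a single active gateway.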
There's one more downside to current redundant network designs: It creates twice as many network elements to manage. Regardless of whether network managers use a command-line interface or an SNMP-based system for configuration management, any policy change needs to be made twice, once on each redundant component.
Introducing Virtual Switching
In contrast, Cisco's VSS uses an "active-active" model that retains the same amount of redundancy, but makes use of all available links and switch ports.
While many vendors support link aggregation (a means of combining multiple physical interfaces to appear as one logical interface), VSS is unique in its ability to virtualise the entire switch -- including the switch fabric and all interfaces. Link aggregation and variations such as Nortel's Split Multi-Link Trunk (SMLT) do not create virtual switches, nor do they eliminate the need for Layer 3 redundancy mechanisms such as HSRP or VRRP.
At the heart of VSS is the Virtual Switching Supervisor 720-10G, a management and switch fabric blade for Cisco Catalyst 6500 switches. VSS requires two new supervisor cards, one in each physical chassis. The management blades create a virtual switch link (VSL), making both devices appear as one to the outside world: There's just one media access control and one IP address used, and both systems share a common configuration file that covers all ports in both chassis.
On the access side of Cisco's virtual switch, downstream devices still connect to both physical chassis, but a bonding technology called Multichassis EtherChannel (MEC) presents the virtual switch as one logical device. MEC links can use industry-standard 802.3ad link aggregation or Cisco's proprietary port aggregation protocol. Either way, MEC eliminates the need for spanning tree. All links within a MEC are active until a circuit or switch failure occurs, and then traffic continues to flow over the remaining links in the MEC.
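From the downstream side, an MEC looks like ordinary link aggregation. A hedged sketch of how an access switch might bundle one uplink to each VSS chassis into a single LACP channel — interface and channel numbers are illustrative, not from the test bed:

```
! Access switch: two physical uplinks, one cabled to each chassis of
! the VSS pair. The VSS pair appears as a single LACP neighbor, so
! both links bundle into one port-channel with no spanning tree
! blocking either path.
interface range GigabitEthernet1/0/49 - 50
 channel-group 1 mode active    ! 802.3ad/LACP negotiation
!
interface Port-channel1
 switchport mode trunk          ! carry all VLANs over the bundle
```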
Servers also can use MEC's link aggregation support, with no additional software needed. Multiple connections were already possible using "NIC teaming," but that's usually a proprietary, active/passive approach.
On the core side of Cisco's virtual switch, devices also use MEC connections to attach to the virtual switch. This eliminates the need for redundancy protocols such as HSRP or VRRP, and also reduces the number of routes advertised. As on the access side, traffic flows through the MEC in an "active/active" pattern until a failure, after which the MEC continues to operate with fewer elements.
The previous examples focused on distribution-layer switches, but VSL links work between any two Catalyst 6500 chassis. For example, virtual switching can be used at both core and distribution layers, or at the core, distribution and access layers. All attached devices would see one logical device wherever a virtual switch exists.
A VSL works only between two chassis, but it can support up to eight physical links. Multiple VSL links can be established using any combination of interfaces on the new supervisor card or Cisco's WS-6708 10G Ethernet line card. VSS also requires line cards in Cisco's 67xx series, such as the 6724 and 6748 10/100/1000 modules or the 6704 or 6708 10G Ethernet modules. Cisco says VSL control traffic uses less than 5 percent of a 10G Ethernet link, but we did not verify this.
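A minimal sketch of the VSS-side setup, following Cisco's standard VSS commands; the domain number, port-channel number, and interface are illustrative rather than taken from the test bed:

```
! On chassis 1 (chassis 2 gets "switch 2" and its own VSL port-channel,
! conventionally a different number, e.g. Port-channel 20).
switch virtual domain 100
 switch 1
!
interface Port-channel10
 switch virtual link 1          ! designate this bundle as the VSL
!
interface TenGigabitEthernet5/4
 channel-group 10 mode on       ! VSL member link (up to 8 per VSL)
!
! Finally, from exec mode on both chassis:
!   switch convert mode virtual
! which reloads them as one logical switch.
```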
At least for now, VSL traffic is proprietary. It isn't possible to set up a VSL between, say, a Cisco and Foundry switch.
A big swath of fabric
We assessed VSS performance with tests focused on fabric bandwidth and delay, failover times, and unicast/multicast performance across a network backbone.
In the fabric tests we sought to answer two simple questions: How fast does VSS move frames, and how long does it hang on to each frame? The set-up for this test was anything but simple. We attached Spirent TestCenter analyser/generator modules to 130 10G Ethernet ports on two Catalyst 6509 chassis configured as one virtual switch.
These tests produced, by far, the highest throughput we've ever measured from a single (logical) device. When forwarding 64-byte frames, Cisco's virtual switch moved traffic at more than 770 million frames per second. We then ran the same test on a single switch, without virtualisation, and measured throughput of 385 million frames per second -- exactly half the result of the two fabrics combined in the virtual switch. These results prove there's no penalty for combining switch fabrics.
We also measured VSS throughput for 256-byte frames (close to the average Internet frame length) of 287 million frames per second and for 1,518-byte frames (until recently, the maximum in Ethernet, and still the top end on most production networks) of 53 million frames per second. With both frame sizes, throughput was exactly double that of the single-switch case.
The 1,518-byte frames per second number represents throughput of nearly 648Gbps. This is only around half the theoretical maximum rate possible with 130 10G Ethernet ports. The limiting factor is the Supervisor 720 switch fabric, which can't send line-rate traffic to all 66 10G ports in each fully loaded chassis. VSS doubles fabric capacity by combining two switches, but it doesn't extend the capacity of the fabric card in either physical switch.
We also measured delay for all three frame sizes. With a 10 percent intended load, Spirent TestCenter reported average delays ranging from 12 to 17 microsec, both with and without virtual switching. These numbers are similar to those for other 10G switches we've tested, and far below the point where they'd affect performance of any application. Even the maximum delays of around 66 microsec with virtual switching again are too low to slow down any application, especially considering Internet round-trip delays often run into the tens of milliseconds.
Our failover tests produced another record: the fastest recovery from a Layer 2/Layer 3 network failure we've ever measured.
We began these tests with a conventional set-up: Rapid spanning tree at layer 2, HSRP at Layer 3, and 16,000 hosts (emulated on Spirent TestCenter) sending traffic across redundant pairs of access, distribution and core switches. During the test, we cut off power to one of the distribution switches, forcing all redundancy mechanisms and routing protocols to reconverge. Recovery took 6.883 seconds in this set-up.
Then we re-ran the same test two more times with VSS enabled. This time convergence occurred much faster. It took the network just 322 millisec to converge with virtual switching on the distribution switches, and 341 millisec to converge with virtual switching on the core and distribution switches. Both numbers represent better than 20-fold improvements over the usual redundancy mechanisms.
A bigger backbone
Our final tests measured backbone performance using a complex enterprise traffic pattern involving 176,000 unicast routes, more than 10,000 multicast routes, and more than 5.6 billion flows. We ran these tests with unicast traffic alone and a combination of unicast and multicast flows, and again compared results with and without VSS in place.
Just to keep things interesting, we ran all tests with a 10,000-entry access control list in place, and also configured switches to re-mark all packets' diff-serv code point (DSCP) fields. Re-marking DSCPs prevents users from unauthorised "promotion" of their packets to receive higher-priority treatment. In addition, we enabled NetFlow tracking for all test traffic.
Throughput in all the backbone cases was exactly double with virtual switching than without it. This was true for both unicast and mixed-class throughput tests, and also true regardless of whether we enabled virtual switching on distribution switches alone, or on both the core and distribution switches. These results clearly show the advantages of an "active/active" design over an "active/passive" one.
We measured delay as well as throughput in these tests. Ideally, we'd expect to see little difference between test cases with and without virtual switching, and between cases with virtual switching at one or two layers in the network. When it came to average delay, that's pretty much how things looked. Delays across three pairs of physical switches ranged from around 26 to 90 microsec in all test cases, well below the point where applications would notice.
Maximum delays did vary somewhat with virtual switching enabled, but not by a margin that would affect application performance. Curiously, maximum delay increased the most for 256-byte frames, with fourfold increases over results without virtual switching. The actual amounts were always well less than 1 millisec, and also unlikely to affect application performance.
Cisco's VSS is a significant advancement in the state of the switching art. It dramatically improves availability with much faster recovery times, while simultaneously providing a big boost in bandwidth.
How we tested Cisco's VSS
For all tests described here, we configured a 10,000-line access control list (ACL) covering layer-3 and layer-4 criteria and spot-checked that random entries in the ACL blocked traffic as intended. As a safeguard against users making unauthorised changes, Cisco engineers also configured access and core switches to re-mark the diff-serve code point (DSCP) in every packet, and we verified re-marking using counters in the Spirent TestCenter traffic generator/analyser. Cisco also enabled NetFlow traffic monitoring for all test traffic.
To assess the fabric bandwidth and delay, the system under test was one pair of Cisco Catalyst 6509-E switches. Cisco engineers set up a virtual switch link (VSL) between the switches, each equipped with eight WS-6708 10G Ethernet line cards and one Virtual Switching Supervisor 720-10G management/switch fabric card. That left a total of 130 10G Ethernet test ports: eight on each of the line cards, plus one on each of the management cards (we used the management card's other 10G Ethernet port to set up the virtual link between switches).
Using the Spirent TestCenter traffic generator/analyser, we offered 64-, 256- and 1518-byte IPv4 unicast frames on each of the 130 10G test ports to determine throughput and delay. We measured delay at 10 percent of line rate, consistent with our practice in previous 10G Ethernet switch tests. The Spirent TestCenter analyser emulated 100 unique hosts on each port, making for 13,000 total hosts.
In the failover tests, the goal was to compare VSS recovery time upon loss of a switch with recovery using older redundancy mechanisms.
This test involved three pairs of Catalyst 6509 switches, representing the core, distribution and access layers of an enterprise network. We ran the failover tests in three configurations. In the first scenario, we used legacy redundancy mechanisms such as rapid spanning tree and hot standby routing protocol (HSRP). Then we ran two failover scenarios using VSS, first with a virtual link on the distribution switches alone, and again with VSS links on both the distribution and core switches.
For each test, we began by offering traffic to each of 16 interfaces on the core and access sides of the test bed. We began the failover tests with a baseline event to verify no frame loss existed. While Spirent TestCenter offered test traffic for 300 seconds, we cut off power to one of the distribution switches. Because we offered traffic to each interface at a rate of 100,000 frames per second, each dropped frame represented 10 microsec of recovery time. So, for example, if Spirent TestCenter reported 32,000 lost frames, then failover time was 320 millisec.
The backbone performance tests used a set-up similar to the VSS configurations in the failover tests. Here again, there were three pairs of Catalyst 6509 switches, representing core, distribution and access layers of an enterprise network. Here again, we also conducted separate tests with a virtual link on the distribution switches, and again with virtual links on the distribution and core switches.
To represent enterprise conditions, we set up very large numbers of routes, hosts and flows in these tests. From the core side, we configured OSPF to advertise 176,000 unique routes. On the access side, we set up four virtual LANs (VLAN), each with 250 hosts, on each of 16 ports, for 16,000 hosts total. For the multicast traffic set-up, one host in each access-side VLAN joined each of 40 groups, and each group had 16 transmitters on the 16 core-side interfaces. In all, this test represented more than 10,000 multicast routes and more than 5.6 billion unique unicast flows.
In the backbone tests, we used a partially meshed traffic pattern to measure system throughput and delay. As defined in RFC 2285, a partial mesh pattern is one in which ports on both sides of the test bed exchange traffic with one another, but not among themselves. In this case, that meant all access ports exchanged traffic with all core ports, and vice-versa.
We tested all four combinations of unicast, mixed multicast/unicast, and virtual switching enabled and disabled on the core switches (virtual switching was always enabled on the distribution switches and always disabled on the access switches). In all four backbone test set-ups, we measured throughput and delay.
We conducted these tests in an engineering lab at Cisco's campus in San Jose. This is a departure from our normal procedure of testing in our own labs or at a neutral third-party facility. The change was borne of logistical necessity: Cisco's lab was the only one available within the allotted timeframe with sufficient 10G Ethernet test ports and electrical power to conduct this test. Network Test and Spirent engineers conducted all tests and verified configurations of both switches and test instruments, just as we would in any test. The results presented here would be the same regardless of where the test was conducted.
---Original reading from review.techworld.com
The Cisco 3750 range has been around for many years now, and has a vast following. The Cisco 3750-X is the new kid on the Cisco block, and it combines plenty of stuff that will be familiar to users of its predecessors with some funky new features that are clearly a step forward.
The Cisco 3750 comes in a number of flavors: between 24 and 48 ports, with or without Power over Ethernet (the 3750 48P is the PoE variant of the 48-port device). The traditional Cisco 3750 had four 1Gbit/s SFP ports in addition to the 48 10/100/1000 copper ports; the Cisco 3750-X instead has a slot that accepts either a four-port 1Gbit/s SFP daughter-board or a two-port 10Gbit/s alternative.
Alongside the port combinations, there are three software installs. The LAN Base software is a layer-2 only software image, and quite frankly I wouldn't ever expect to buy one of these if I only wanted layer-2 functionality. More sensible is the IP Base image which makes the device a proper Layer-3 routing switch, albeit with a limited selection of routing protocols. At the top is the IP Services image, which makes the unit a full-blown router (just like its ancestors – two of my BGP-shouting WAN routers are actually 3750Gs, in fact). The main market will of course be for the IP Base version.
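To give a sense of what the IP Base image buys you over LAN Base, here is a minimal Layer-3 sketch — routing enabled on the switch with RIP and a static default route. All addresses and VLAN numbers are illustrative:

```
! Turn the switch into a router (IP Base image assumed).
ip routing
!
interface Vlan20
 ip address 10.20.0.1 255.255.255.0   ! gateway for the VLAN 20 subnet
!
router rip
 version 2
 network 10.0.0.0                     ! classful network statement
!
ip route 0.0.0.0 0.0.0.0 10.20.0.254  ! static default toward the WAN edge
```

A LAN Base unit can't do any of this; it switches frames only, which is why, as above, I wouldn't buy a 3750-X for that alone.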
The rear panel is interesting too, of course. As with the older 3750s the rear panel has a pair of “stack” ports. Each stack port provides a 16Gbit/s backplane connection, and by stacking your devices in a loop you end up with a resilient 32Gbit/s backplane. From a management and configuration point of view a stack is a single virtual switch – you manage it rather like a chassis product with a number of blades. So port 1 of switch 1 is Gi1/0/1, port 3 of switch 2 is Gi2/0/3, and so on.
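The member/module/port convention looks like this in practice (interface numbers and descriptions are illustrative; `show switch` from exec mode lists the stack members and their roles):

```
! Gi<member>/0/<port>: the first digit is the stack member number.
interface GigabitEthernet1/0/1
 description uplink-to-core          ! port 1 on stack member 1
!
interface GigabitEthernet2/0/3
 description server-A                ! port 3 on stack member 2
```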
The important rear-panel innovation in the new Cisco 3750-X is the provision for redundant power supplies. The old model had a single, non-removable power supply along with an RPS (Redundant Power Supply) connection; to use the latter and gain some resilience you had to buy something like an RPS2300. That external device was a stupid shape that didn't fit into a rack very well, and had buttons on the front whose only purpose seemed to be to make things break; on a brighter note, it provided up to six switches with resilient power. The new model has dual slots for removable PSUs, one of which is populated by default; it's a ten-second job to slip a second one in beside it. One of the downsides of the old 3750 was the bloody awful reliability of the internal (fixed) PSU, and I've spent rather too many hours swapping out units with duff power supplies, so the removable units in the -X are most welcome.
Along with the redundant PSU facility is the power stacking capability. Just as you have your data stack cables, you also now have a pair of power-stack cables on each unit, so that the total power available via all the PSUs in the stack is available for negotiated use across the whole stack, for switch power and PoE.
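A hedged sketch of the StackPower commands as documented for the 3750-X; the stack name and mode are illustrative, and the defaults are usually sensible enough that you may not need this at all:

```
! Define a power stack and pool the PSUs across members.
stack-power stack PWRSTACK1
 mode power-shared              ! one shared budget for switch power and PoE
!
! Assign each member to the power stack.
stack-power switch 1
 stack PWRSTACK1
stack-power switch 2
 stack PWRSTACK1
```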
As with the older devices, you can add and remove stack members on the fly. Adding a switch to a stack is a simple case of setting its ID, telling the stack to expect a new member, and plumbing it in (although in theory the stack will deal with firmware mismatches in the new member, I prefer not to tempt fate, so I always pre-install the right version). If a unit fails, the stack keeps humming while you pull out the duff one and stick in the replacement, and the config is automatically migrated to the new unit.
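The add-a-member workflow can be sketched with the standard 3750 stack commands; the member numbers and model keyword here are illustrative:

```
! Step 1: on the new switch, while still standalone, set its member ID
! so it won't collide with an existing member when cabled in.
Switch(config)# switch 1 renumber 3
!
! Step 2: on the running stack, pre-provision the slot so the port
! configuration exists before the hardware arrives.
Switch(config)# switch 3 provision ws-c3750x-48p
!
! Step 3: power off the new switch, connect the stack cables, power on.
```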
The only downside I've found so far is in trying to get the new -X model to co-exist in a stack with the old 3750G (in short, I've not persuaded it to work yet), but I've no doubt I'll get it playing before long.
The Cisco 3750-X is a really sensible evolution of an already popular family of Cisco switches. Being an IOS device, there's really not a great deal of difference management-wise between the old and the new, so you get new functionality with almost zero additional training requirements. I've recently added seven 48-port non-PoE versions in three of my server installations, and have just received two new pairs of the PoE variant for a couple of offices, and I'm pretty happy thus far.
New power stacking capability is an excellent evolution.
32Gbit/s backplane should be sufficient for most modest installations.
10Gbit/s Ethernet support for uplinking or connecting to blade servers.
More Cisco 3750 Info:
Sample Pricing for Popular Cisco 3750 Models:
Catalyst 3750X 24 Port Data LAN Base: US$2,236.00 (57.00% off list price)
Catalyst 3750X 48 Port Data LAN Base: US$3,827.00 (57.00% off list price)
Catalyst 3750X 24 Port Data IP Base: US$2,795.00 (57.00% off list price)
Catalyst 3750X 48 Port Data IP Base: US$4,945.00 (57.00% off list price)
Catalyst 3750X 24 Port PoE IP Base: US$3,139.00 (57.00% off list price)
WS-C3750X-48P-S: Stackable 48 10/100/1000 Ethernet PoE+ ports, with 715W AC Power Supply: US$5,590.00 (57.00% off list price)
Router-switch.com (Yejian Technologies Co., Ltd), a World-Leading Cisco Supplier