Q. Do I need to use the ip default-network command in order for EIGRP to propagate a default route?
A. No. Although EIGRP can propagate a default route with the default network method, it is not required. EIGRP redistributes default routes directly.
Q. Should I always use the eigrp log-neighbor-changes command when I configure EIGRP?
A. Yes, this command makes it easy to determine why an EIGRP neighbor was reset. This reduces troubleshooting time.
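In classic-mode configuration this is a one-line addition under the routing process. A minimal sketch (the AS number 100 is hypothetical; recent IOS releases enable the command by default):

```
router eigrp 100
 eigrp log-neighbor-changes
```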
Q. Does EIGRP support secondary addresses?
A. EIGRP does support secondary addresses. Since EIGRP always sources data packets from the primary address, Cisco recommends that you configure all routers on a particular subnet with primary addresses that belong to the same subnet. Routers do not form EIGRP neighbors over secondary networks. Therefore, if all of the primary IP addresses of routers do not agree, problems can arise with neighbor adjacencies.
Q. What debugging capabilities does EIGRP have?
A. There are protocol-independent and protocol-dependent debug commands. There is also a suite of show commands that display neighbor table status, topology table status, and EIGRP traffic statistics. Some of these commands are show ip eigrp neighbors, show ip eigrp topology, and show ip eigrp traffic.
Q. What does the word serno mean on the end of an EIGRP topology entry when you issue the show ip eigrp topology command?
A. Serno stands for serial number. When DRDBs are threaded to be sent, they are assigned a serial number. If you display the topology table at the time an entry is threaded, it shows you the serial number associated with the DRDB. For example:
show ip eigrp topology
P 172.22.71.208/29, 2 successors, FD is 46163456
via 172.30.1.42 (46163456/45651456), Serial0.2, serno 7539273
via 172.30.2.49 (46163456/45651456), Serial2.6, serno 7539266
Threading is the technique used inside the router to queue items up for transmission to neighbors. The updates are not created until it is time for them to go out the interface. Before that, a linked list of pointers to the items to send (that is, the thread) is created.
These sernos are local to the router and are not passed with the routing update.
Q. What percent of bandwidth and processor resources does EIGRP use?
A. EIGRP version 1 introduced a feature that prevents any single EIGRP process from using more than fifty percent of the configured bandwidth on any link during periods of network convergence. Each AS or protocol (for instance, IP, IPX, or AppleTalk) serviced by EIGRP is a separate process. You can use the ip bandwidth-percent eigrp interface configuration command in order to properly configure the bandwidth percentage on each WAN interface. Refer to the EIGRP White Paper for more information on how this feature works.
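As a sketch, this caps EIGRP AS 100 at 40 percent of the configured bandwidth on a hypothetical serial WAN link (the interface, AS number, and values are illustrative):

```
interface Serial0/0
 bandwidth 256
 ip bandwidth-percent eigrp 100 40
```

Because the percentage is taken from the configured bandwidth, a value greater than 100 can be used when the bandwidth command has been set artificially low.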
In addition, the implementation of partial and incremental updates means that EIGRP sends routing information only when a topology change occurs. This feature significantly reduces bandwidth use.
The feasible successor feature of EIGRP reduces the amount of processor resources used by an autonomous system (AS). It requires only the routers affected by a topology change to perform route re-computation. The route re-computation only occurs for routes that were affected, which reduces search time in complex data structures.
Q. Does EIGRP support aggregation and variable length subnet masks?
A. Yes, EIGRP supports aggregation and variable length subnet masks (VLSM). Unlike Open Shortest Path First (OSPF), EIGRP allows summarization and aggregation at any point in the network. EIGRP supports aggregation to any bit. This allows properly designed EIGRP networks to scale exceptionally well without the use of areas. EIGRP also supports automatic summarization of network addresses at major network borders.
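For example, a sketch that advertises a single /22 aggregate out one interface in place of four hypothetical /24s (the addresses, AS number, and interface are illustrative):

```
interface Serial0/0
 ip summary-address eigrp 100 172.16.0.0 255.255.252.0
```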
Q. Does EIGRP support areas?
A. No, a single EIGRP process is analogous to an area of a link-state protocol. However, within the process, information can be filtered and aggregated at any interface boundary. In order to bound the propagation of routing information, you can use summarization to create a hierarchy.
Q. Can I configure more than one EIGRP autonomous system on the same router?
A. Yes, you can configure more than one EIGRP autonomous system on the same router. This is typically done at a redistribution point where two EIGRP autonomous systems are interconnected. Individual router interfaces should only be included within a single EIGRP autonomous system.
Cisco does not recommend running multiple EIGRP autonomous systems on the same set of interfaces on the router. If multiple EIGRP autonomous systems are used with multiple points of mutual redistribution, it can cause discrepancies in the EIGRP topology table if correct filtering is not performed at the redistribution points. If possible, Cisco recommends you configure only one EIGRP autonomous system in any single autonomous system. You can also use another protocol, such as Border Gateway Protocol (BGP), in order to connect the two EIGRP autonomous systems.
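A sketch of one router that runs two EIGRP autonomous systems with mutual redistribution (AS numbers and networks are hypothetical); because both processes are EIGRP, the composite metric is carried across, so no default-metric is required:

```
router eigrp 100
 network 10.1.0.0 0.0.255.255
 redistribute eigrp 200
!
router eigrp 200
 network 10.2.0.0 0.0.255.255
 redistribute eigrp 100
```

With two or more such redistribution points, apply route filtering as described above to avoid topology table discrepancies.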
Q. If there are two EIGRP processes that run and two equal paths are learned, one by each EIGRP process, do both routes get installed?
A. No, only one route is installed. The router installs the route that was learned through the EIGRP process with the lower Autonomous System (AS) number. In Cisco IOS Software Releases earlier than 12.2(7)T, the router installed the path with the latest timestamp received from either of the EIGRP processes. The change in behavior is tracked by Cisco bug ID CSCdm47037.
Q. What does the EIGRP stuck in active message mean?
A. When EIGRP returns a stuck in active (SIA) message, it means that it has not received a reply to a query. EIGRP sends a query when a route is lost and another feasible route does not exist in the topology table. The SIA is caused by two sequential events:
- The route reported by the SIA has gone away.
- An EIGRP neighbor (or neighbors) has not replied to the query for that route.
When the SIA occurs, the router clears the neighbor that did not reply to the query. When this happens, determine which neighbor has been cleared. Keep in mind that this router can be many hops away. Refer to What Does the EIGRP DUAL-3-SIA Error Message Mean? for more information.
Q. What does the neighbor statement in the EIGRP configuration section do?
A. The neighbor command is used in EIGRP in order to define a neighboring router with which to exchange routing information. Due to the current behavior of this command, EIGRP exchanges routing information with the neighbors in the form of unicast packets whenever the neighbor command is configured for an interface. EIGRP stops processing all multicast packets that come inbound on that interface. Also, EIGRP stops sending multicast packets on that interface.
The ideal behavior of this command would be for EIGRP to start sending EIGRP packets as unicast packets to the specified neighbor, but not to stop sending and receiving multicast packets on the interface. Since the command does not behave this way, use the neighbor command carefully, with an understanding of its impact on the network.
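A sketch of static neighbor definitions on a hypothetical NBMA interface (addresses and AS number are illustrative); once the first neighbor statement references Serial0/0, EIGRP stops sending and processing multicast packets on that interface:

```
router eigrp 100
 network 192.168.1.0
 neighbor 192.168.1.2 Serial0/0
 neighbor 192.168.1.3 Serial0/0
```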
Q. Why does the EIGRP passive-interface command remove all neighbors for an interface?
A. The passive-interface command disables the transmission and receipt of EIGRP hello packets on an interface. Unlike IGRP or RIP, EIGRP sends hello packets in order to form and sustain neighbor adjacencies. Without a neighbor adjacency, EIGRP cannot exchange routes with a neighbor. Therefore, the passive-interface command prevents the exchange of routes on the interface. Although EIGRP does not send or receive routing updates on an interface configured with the passive-interface command, it still includes the address of the interface in routing updates sent out of other non-passive interfaces. Refer to How Does the Passive Interface Feature Work in EIGRP? for more information.
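A sketch (the AS number, network, and interface are hypothetical) that advertises the subnet of FastEthernet0/1 while preventing any adjacency from forming on it:

```
router eigrp 100
 network 10.0.0.0
 passive-interface FastEthernet0/1
```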
Q. Why are routes received from one neighbor on a point-to-multipoint interface that runs EIGRP not propagated to another neighbor on the same point-to-multipoint interface?
A. The split horizon rule prohibits a router from advertising a route through an interface that the router itself uses to reach the destination. In order to disable the split horizon behavior, use the no ip split-horizon eigrp as-number interface command. Some important points to remember about EIGRP split horizon are:
- Split horizon behavior is turned on by default.
- When you change the EIGRP split horizon setting on an interface, it resets all adjacencies with EIGRP neighbors reachable over that interface.
- Split horizon should only be disabled on a hub site in a hub-and-spoke network.
- Disabling split horizon on the spokes radically increases EIGRP memory consumption on the hub router, as well as the amount of traffic generated on the spoke routers.
- The EIGRP split horizon behavior is not controlled or influenced by the ip split-horizon command.
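A sketch of disabling split horizon for AS 100 on a hypothetical hub multipoint interface; expect the adjacencies on that interface to reset when the command is entered:

```
interface Serial0/0
 ip address 10.1.1.1 255.255.255.0
 no ip split-horizon eigrp 100
```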
Q. When I configure EIGRP, how can I configure a network statement with a mask?
A. The optional network-mask argument was first added to the network statement in Cisco IOS Software Release 12.0(4)T. The mask argument can be configured in any format (such as in a network mask or in wild card bits). For example, you can use network 10.10.10.0 255.255.255.252 or network 10.10.10.0 0.0.0.3.
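Both forms can be sketched under the routing process (the AS number is hypothetical); each statement matches the same four addresses, 10.10.10.0 through 10.10.10.3:

```
router eigrp 100
 network 10.10.10.0 255.255.255.252
! equivalent wildcard form:
! network 10.10.10.0 0.0.0.3
```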
Q. I have two routes: 172.16.1.0/24 and 172.16.1.0/28. How can I deny 172.16.1.0/28 while I allow 172.16.1.0/24 in EIGRP?
A. In order to do this you need to use a prefix-list, as shown here:
router eigrp 100
distribute-list prefix test in
no eigrp log-neighbor-changes
ip prefix-list test seq 5 permit 172.16.1.0/24
This allows only the 172.16.1.0/24 prefix and therefore denies 172.16.1.0/28.
Note: The use of ACL and distribute-list under EIGRP does not work in this case. This is because ACLs do not check the mask, they just check the network portion. Since the network portion is the same, when you allow 172.16.1.0/24, you also allow 172.16.1.0/28.
Q. I have a router that runs Cisco Express Forwarding (CEF) and EIGRP. Who does load-balancing when there are multiple links to a destination?
A. CEF switches packets based on the routing table, which is populated by routing protocols such as EIGRP. In short, CEF performs the load balancing once the routing protocol table has been calculated. Refer to How Does Load Balancing Work? for more information on load balancing.
Q. How do you verify if the EIGRP Non Stop Forwarding (NSF) feature is enabled?
A. In order to check the EIGRP NSF feature, issue the show ip protocols command. Here is the sample output:
show ip protocols
Routing Protocol is "eigrp 101"
Outgoing update filter list for all interfaces is not set
Incoming update filter list for all interfaces is not set
Default networks flagged in outgoing updates
Default networks accepted from incoming updates
EIGRP metric weight K1=1, K2=0, K3=1, K4=0, K5=0
EIGRP maximum hopcount 100
EIGRP maximum metric variance 1
Redistributing: eigrp 101
EIGRP NSF-aware route hold timer is 240s
Automatic network summarization is in effect
Maximum path: 4
Routing for Networks:
Routing Information Sources:
Gateway Distance Last Update
Distance: internal 90 external 170
This output shows that the router is NSF-aware and the route-hold timer is set to 240 seconds, which is the default value.
Q. How can I use only one path when a router has two equal cost paths?
A. Leave the bandwidth value on the interfaces at the default, and increase the delay on the backup interface so that the router does not see two equal-cost paths.
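For example, a sketch that makes a hypothetical backup serial interface less attractive by raising its delay (the value is in tens of microseconds):

```
interface Serial0/1
 delay 2000
```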
Q. What is the difference in metric calculation between EIGRP and IGRP?
A. The EIGRP metric is obtained when you multiply the IGRP metric by 256. IGRP uses only 24 bits in its update packet for the metric field, while EIGRP uses 32 bits. For example, if the IGRP metric to a destination network is 8586, the EIGRP metric is 8586 x 256 = 2,198,016. Also, integer division is used when 10^7 is divided by the minimum bandwidth, so the truncation can cause the router's result to vary slightly from a manual calculation.
Q. What is the EIGRP Stub Routing feature?
A. The Stub routing feature is used to conserve bandwidth by summarizing and filtering routes. Only specified routes are propagated from the remote (Stub) router to the distribution router because of the Stub routing feature. For more information about the Stub routing feature, refer to EIGRP Stub Routing. The EIGRP stub feature can be configured on the switch with the eigrp stub [receive-only] [leak-map name] [connected] [static] [summary] [redistributed] command. This feature can be removed with the no eigrp stub command. When you remove the eigrp stub command from a switch that runs the IP Base image, the switch throws this error:
EIGRP is restricted to stub configurations only
This issue can be resolved if you upgrade to an Advanced Enterprise image. This error is documented in CSCeh58135.
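A sketch of a spoke configured as a stub that advertises only its connected and summary routes (the AS number and network are hypothetical):

```
router eigrp 100
 network 10.2.2.0 0.0.0.255
 eigrp stub connected summary
```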
Q. How can I send a default route to the Stub router from the hub?
A. Do this under the outbound interface on the hub router with the ip summary-address eigrp X 0.0.0.0 0.0.0.0 command. This command suppresses all the more specific routes and only sends the summary route. In the case of 0.0.0.0 0.0.0.0, it suppresses everything, and the only route in the outbound update is 0.0.0.0/0. One drawback to this method is that EIGRP installs a 0.0.0.0/0 route to Null0 in the local routing table with an administrative distance of 5.
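A sketch on a hypothetical hub outbound interface (the AS number 100 is illustrative):

```
interface Serial0/0
 ip summary-address eigrp 100 0.0.0.0 0.0.0.0
```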
Q. What are different route types in EIGRP?
A. There are three different types of routes in EIGRP:
- Internal Route—Routes that are originated within the Autonomous System (AS).
- Summary Route—Routes that are summarized in the router (for example, internal paths that have been summarized).
- External Route—Routes that are redistributed to EIGRP.
Q. How do you redistribute an IPv6 default route in EIGRP?
A. For redistributing an IPv6 default route in EIGRP, a sample configuration is shown here:
ipv6 prefix-list DEFAULT-ONLY-V6 seq 10 permit ::/0
route-map DEFAULT_2EIGRP-V6 permit 10
match ipv6 address prefix-list DEFAULT-ONLY-V6
router eigrp Starz_EIGRP
address-family ipv6 unicast
redistribute static route-map DEFAULT_2EIGRP-V6
Q. How does EIGRP behave over a GRE tunnel compared to a directly connected network?
A. EIGRP uses the same administrative distance and metric calculation for the GRE tunnel. The cost calculation is based on bandwidth and delay, which are taken from the tunnel interface configured on the router, and the tunnel is treated like a directly connected network. If there are two paths to reach a network, one through a VLAN interface and one through a tunnel interface, EIGRP prefers the VLAN interface because it has greater bandwidth than the tunnel interface. In order to influence the routing through the tunnel interface, increase the bandwidth parameter of the tunnel interface, or increase the delay parameter of the VLAN interface.
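A sketch of both options (the interface names and values are hypothetical); bandwidth is configured in kilobits per second and delay in tens of microseconds:

```
interface Tunnel0
 bandwidth 100000
!
interface Vlan10
 delay 500
```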
Q. What is an offset-list, and how is it useful?
A. An offset-list is a feature used to modify the composite metrics in EIGRP. The value configured in the offset-list command is added to the delay value calculated by the router for the route matched by an access list. An offset-list is the preferred method to influence a particular path that is advertised and/or chosen.
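A sketch that adds an offset of 1000 to the metric of a hypothetical prefix learned inbound on Serial0/0 (the access list, AS number, and values are illustrative):

```
access-list 10 permit 192.168.10.0 0.0.0.255
!
router eigrp 100
 offset-list 10 in 1000 Serial0/0
```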
Q. How can I tag external routes in EIGRP?
A. You can tag routes that EIGRP has learned from another routing protocol using a 32 bit tag value. Starting with ddts CSCdw22585, internal routes can also be tagged. However, the tag value cannot exceed 255 due to packet limitations for internal routes.
Q. What are the primary functions of the PDM?
A. EIGRP supports three protocol suites: IP, IPv6, and IPX. Each of them has its own PDM (protocol-dependent module). These are the primary functions of the PDM:
- Maintaining the neighbor and topology tables of EIGRP routers that belong to that protocol suite
- Building and translating protocol specific packets for DUAL
- Interfacing DUAL to the protocol specific routing table
- Computing the metric and passing this information to DUAL; DUAL handles only the picking of the feasible successors (FSs)
- Implementing filtering and access lists
- Performing redistribution functions to/from other routing protocols
Q. What are the various load-balancing options available in EIGRP?
A. An offset-list can be used to modify the metrics of routes that EIGRP learns through a particular interface, or policy-based routing (PBR) can be used.
Q. What does the %DUAL-5-NBRCHANGE: IP-EIGRP(0) 100: Neighbor 10.254.0.3 (Tunnel0) is down: holding time expired error message mean?
A. This message indicates that the router has not heard any EIGRP packets from the neighbor within the hold-time limit. Because this is a packet-loss issue, check for a Layer 2 problem.
Q. Is there an IPv6 deployment guide that includes EIGRPv6?
Q. From the 16:29:14.262 Poison squashed: 10.X.X.X/24 reverse message, what does poison squashed mean?
A. The router threads a topology table entry as a poison in reply to an update received (the router sets up for poison reverse). While the router builds the packet that contains the poison reverse, it can realize that it does not need to send it; for example, the router receives a query from the neighbor for the route that is currently threaded to poison. In that case, it sends the poison squashed message.
Q. Is it normal that EIGRP takes over 30 seconds to converge?
A. EIGRP taking longer to converge under heavy CPU usage is a normal behavior. EIGRP convergence is faster when you lower the hold time. The lowest values for hello and hold time are 1 second and 3 seconds respectively. For example:
Router(Config)# interface Fa0/0
!--- (Under an interface directly connected to EIGRP peers.)
Router(Config-if)#ip hello-interval eigrp 1
Router(Config-if)#ip hold-time eigrp 3
Note: Make sure that the hold time is changed on both ends.
For more information on EIGRP performance related issues, refer to How to resolve EIGRP performance problems.
More Related Topics:
As public cloud SLAs take heat from analysts, some enterprises say virtual private clouds offer the right mix of cloud agility and managed services reliability.
A virtual private cloud (VPC) offers on-demand Infrastructure as a Service (IaaS) external to a customer's data center, but it runs on a dedicated infrastructure, rather than a multi-tenant infrastructure. It is usually connected to each customer using a virtual private network (VPN) or another direct network connection, rather than the public Internet.
As such, a virtual private cloud can offer higher service-level agreements (SLAs) than public clouds, contracting for up to 100% uptime in some cases.
Finding the SLA that's Just Right
Some purists might consider this managed hosting rather than cloud computing, but these distinctions aren't relevant to customers such as Taylor Erickson, vice president of IT at Lanx Inc., a company that specializes in spinal care and surgical products in Bloomfield, Colo.
Lanx moved its SAP application and Active Directory to a virtual private cloud hosted by Virtustream Inc. last fall. Virtustream's xStream virtual private cloud gives the company a five-nines (99.999%) uptime SLA. Penalties start when uptime falls below 99.949%, and the contract was negotiated by Lanx with the help of an analyst firm that reviewed it, Erickson said.
With the choice between Virtustream's xStream VPC and a public cloud provider Erickson declined to name, the virtual private cloud SLA was just one of the reasons the company chose Virtustream.
In fact, enterprise managed hosting providers such as ViaWest and Hosting.com tend to offer 100% uptime SLAs, but Virtustream's demonstrated expertise at hosting SAP appealed to Lanx, as did Virtustream's cost, which can be as low as half that of such services.
And 99.999% uptime was still more than the company might have been able to provide on its own. For example, a week after the company's migration, an air conditioning unit in Lanx's building failed, and the server room temperature soared to 98 degrees.
"But our mission-critical SAP was up and going because we'd migrated to a cloud provider," Erickson said.
Virtual Private Cloud a Happy Medium between Public and Private Cloud
Other users say public cloud, which tends to be the lowest-cost and most elastic of all service types, has undeniable appeal, but that using it requires very careful planning.
"You can never take [public cloud] off the table," said Dave Robbins, senior vice president and CIO of Ellie Mae, maker of an electronic loan origination platform and based in Pleasanton, Calif. "But if you're going to do it, what's your architecture and strategy to do it?"
Just carving out public cloud IaaS space without respect for regional diversity or how to get an ecosystem in place to exploit application delivery can be very low cost, but it's very low value as well, according to Robbins.
"It's a more complicated picture than most people think through," he said. "You have to look at the entire architecture."
In the meantime, Ellie Mae has found a happy medium in a Tier 3 Inc. virtual private cloud, tied in to an on-premises FlexPod environment that uses Cloupia, now owned by Cisco Systems Inc.
Space on Tier 3's infrastructure was used by the company last year as it migrated from an older infrastructure to the new one built on FlexPods, and simultaneously launched new products and services. Some production applications ran in Tier 3 as this process took place, and the company also uses Tier 3's VPC for QA and test systems.
VPCs Bridge a Disconnect between Public Cloud SLAs and Enterprise Expectations
Some SLAs are cryptic, but what's really more of a problem is the typical enterprise customer's disconnect in expectation from what they normally get from hosting providers and managed service providers and what they're going to get from public cloud, said James Staten, analyst with Forrester Research.
"We're all used to pushing a hoster over a barrel to get what we want. We get that, but they custom configure the environment just for us and they sign us up for a three-year commitment," he said.
Customers pursuing public cloud services tend not to want to be locked in to such commitments, and in some cases using a standardized service is going to be preferable to one custom-managed for the user, Staten said. But in these cases, the SLA is going to be lower.
Article written by Beth Pariseau from
More Related Networking News and Tips:
STP is vital for preventing loops within a switched network. Spanning tree works by designating a common reference point (the root bridge) and systematically building a loop-free tree from the root to all other bridges. All redundant paths remain blocked unless a designated link fails. Each spanning tree node uses the following criteria to select a path to the root bridge:
- Lowest root bridge ID - Determines the root bridge
- Lowest cost to the root bridge - Favors the upstream switch with the least cost to root
- Lowest sender bridge ID - Serves as a tie breaker if multiple upstream switches have equal cost to root
- Lowest sender port ID - Serves as a tie breaker if a switch has multiple (non-Etherchannel) links to a single upstream switch
We can manually configure the priority of a switch and its individual interfaces to influence path selection. The values given below are defaults.
Switch(config)# spanning-tree vlan 1 priority 32768
Switch(config)# interface g0/1
Switch(config-if)# spanning-tree vlan 1 port-priority 128
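For example, to make this switch the root bridge for VLAN 1, lower its priority below that of every other switch; 4096 is a common choice (on switches that use the extended system ID, the priority must be a multiple of 4096):

```
Switch(config)# spanning-tree vlan 1 priority 4096
```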
So where do these configured STP priorities come into play? There is no BPDU field for priority; instead, both bridge and port IDs have their administratively configured priorities embedded in them. Note the Bridge Identifier and Port Identifier fields in this Wireshark capture of a PVST+ BPDU:
Although the bridge ID field has been conveniently split into a bridge priority and MAC address for us by Wireshark's protocol dissector, it is actually a single eight-byte value. The following field, which contains the port ID unique to each interface, is similarly composed at one-fourth the size (two bytes).
Because this switch is running PVST+, the VLAN ID (1) is added to the configured bridge priority of 32768 (the default priority) for a sum of 32769. The unique bridge ID, taken from a MAC address, is appended to this value to form the complete bridge ID. Similarly, the port ID is formed by prepending the 4-bit port priority (the default value of 128, encoded as 0x8) to the 12-bit port number, which happens to be 0x001 because we are connected to the first physical switchport. These two values form the complete port ID of 0x8001.
More Networking Tips:
There are many tested IPv6 networks deployed across the world. Before actual deployment, however, companies need to ensure that the vendors who support their networks provide the requisite IPv6 enhancements.
There are two categories of IPv6 enhancements. The first is the set that supports the packet forwarding (more commonly referred to as routing) process and the other set comprises enhancements that support the computing or host infrastructure.
IPv6 enhancements of the first category include larger address formats (the ones that affect the routing table size and structure), updated routing protocols such as Open Shortest Path First (OSPF) and Routing Information Protocol (RIP), and good support for optional extension headers (which streamline the packet forwarding process) such as the Routing Header. The second category comprises enhancements to the Domain Name System (DNS), the Stateless Auto-configuration (plug and play) process, upgraded security, and updates to the Application Programming Interfaces (APIs).
Keeping these requisite enhancements in mind, let us now discuss what kind of support ten of the premier networking vendors are equipped to provide:
The open source, UNIX-based OS X operating system from Apple Computer allows for advanced BSD networking and has a TCP/IP stack and advanced sockets. Versions 10.2 and later of this operating system provide good support for IPv6.
Cisco has been actively involved in the development of IPv6 and provides very good support for it, which can be observed in all its products. Further, the documentation for IOS 12 has extensive details of the IPv6 features supported on each platform, such as Automatic and Configured tunneling, BGP extensions for IPv6, Path MTU Discovery, Neighbor Discovery, updated routing protocols, and Stateless Auto-configuration.
The new HP-UX11i provides support for several IPv6 features such as automatic and configured tunnels, advanced and basic sockets application programming interfaces (APIs), IPv4/IPv6 dual stack protocols, Path Maximum Transmission Unit (PMTU) Discovery, and Stateless Auto-configuration. The new HP-UX11i runs over Infiniband, FDDI, and Ethernet links.
The GR2000 carrier-class gigabit routers from Hitachi provide IPv6 at forwarding rates of a maximum of 26 Mpps and maximum line rates of 2.4 Gbps. The custom Application Specific Integrated Circuits (ASICs) of this system have a dual stack IPv4/IPv6 architecture and support packet filtering, IPv6 over IPv4 and IPv4 over IPv6 tunneling, and Stateless Auto-configuration among other IPv6 features.
Since the release of the IPv6-enabled AIX system in 1997, IBM has shown support for IPv6 and has continually added IPv6 support to its products, such as DB2 9.1 for Linux, UNIX, and Windows.
The IPv6 protocols for Linux are developed by a volunteer-run collaborative effort referred to as the Universal Playground for IPv6 (USAGI). This project was undertaken to remove the bugs in Linux implementations that made it difficult for a Linux-based system to conform to the IPv6 specifications.
Naturally, when all vendors are providing support for IPv6, Microsoft cannot be far behind. Most of the new versions of the Windows operating system, including Windows Vista, Windows Server Core, Windows Server 2003, and Windows CE .NET, have built-in IPv6 enhancements and facilitate an orderly transition from IPv4 to IPv6.
Nortel Networks has been working to provide IPv6 support since the 1990s. The most recent generation of Nortel's Ethernet Routing Switch 8600 offers wire-speed, terabit performance. Nortel products also provide other IPv6 enhancements such as IPv6 Multicast, IPv4-to-IPv6 tunneling, Neighbor Discovery, and Stateless Auto-configuration.
The IP on NetWare that comes with NetWare 6.5 uses IPv6 as the native transport protocol on its server platform. The IPv6 features supported by Novell include Automatic and Configured tunneling, Basic Socket Interface Extensions, Neighbor Discovery, Stateless Address Auto-configuration, and Transmission Mechanisms for hosts and routers. Please note that with Novell, IPv6 works as an add-on component to the existing TCP/IP protocol stack.
The Solaris 10 operating system by Sun Microsystems offers support for important IPv6 programming interfaces and specifications. It offers the advantage of Internet Key Exchange (IKE), which lets systems connect by using authentication and encryption, and integrated IP Security (IPsec). This vendor also facilitates dual stack tunneling, such as IPv6 over IPv4 and vice versa. For more details on the IPv6 support provided by a specific vendor, visit the IPv6 section on the vendor website or refer to system documentation specific to the vendor.
More Networking Tips:
“Router Switch”, Our New Company Landing in the U.S.
---Professional Cisco Supply Service is Around You
As router-switch.com founded its branch office in the USA, it also welcomed its 10th anniversary in 2012. From “a small potato” to “a big apple”, router-switch.com has worked hard to realize its goals one by one. To become a leading Cisco supplier around the world is not an easy task; the first and most important step is to build a strong team (professional salespeople, pre-sales and after-sales service, free CCIE technical support, and creative marketing staff).
The year 2012 is meaningful to all the people in the world (if only because the movie 2012 joked that it would be the end of the world), and to router-switch.com as well. Besides celebrating its 10-year birthday, router-switch.com has prepared many gifts for its regular and new clients, such as an album telling its history and achievements, bigger discounts on popular Cisco equipment (Cisco routers, Cisco switches, Cisco wireless APs, etc.), a new version of its official website, and more collaborations with Cisco technical support units. To serve customers better, the important next step for router-switch.com is to become more local. So “Router Switch” was born, as the times require.
With the foundation of “Router Switch” in the U.S., its localization service will be strengthened. A professional local team will offer sincere service (pre-sales Cisco buying consultation, updates for purchased Cisco hardware, free CCIE technical support, etc.) to regular and new clients.
Main Events over the Past 10 Years
What has router-switch.com achieved in the past 10 years?
Since 2002, router-switch.com has experienced a rapid development with sales volume maintaining 70% growth per year.
In 2004, a CCIE technical support team was built in response to more and more clients' technical requirements.
In 2007, it established its marketing department, which spreads its reputation and gathers the freshest market information for its Cisco business.
In 2008, the most advanced management tools were adopted, which greatly improved efficiency.
In 2012, it is making the great effort to be the worldwide largest Cisco reseller online.
“Router Switch”, a Just New Start
Router-switch.com has accomplished its goals with its customers' trust: not only globalization, but also more localization and more humanization.
More router-switch.com info is available on its official blog.
With a Cisco Self-Defending Network, security is integrated into the network, throughout the infrastructure and protecting each endpoint. This approach is:
- Integrated: Every element in the network acts as a point of defense
- Adaptive: Innovative behavioral methods automatically recognize and adapt to new types of threats as they arise
- Collaborative: Various network components work together to provide new means of protection
Multifunction Security Management
Cisco ASA 5500 Series Adaptive Security Appliances
Cisco ASA 5500 Series Adaptive Security Appliances are easy-to-deploy solutions that integrate world-class firewall, Unified Communications (voice/video) security, SSL and IPSec VPN, intrusion prevention (IPS), and content security services in a flexible, modular product family. Designed as a key component of the Cisco Self-Defending Network, the Cisco ASA 5500 Series provides intelligent threat defense and secure communications services that stop attacks at the perimeter before they impact business continuity.
The CSC-SSM module, which fits in an ASA, provides comprehensive antivirus, anti-spyware, file blocking, anti-spam, anti-phishing, URL filtering, and content filtering.
Intrusion Prevention System (IPS)
An integral part of the Cisco Self-Defending Network and Cisco Threat Control solutions, the Cisco Intrusion Prevention System (IPS) provides end-to-end protection for your network. This inline, network-based defense can identify, classify, and stop known and unknown threats, including worms, network viruses, application threats, system intrusion attempts, and application misuse. The appliances provide a range of performance, from 80 Mbps up to 8 Gbps. IPS works from the latest signature database, and these signatures describe malicious traffic patterns. Signature updates are a yearly subscription service covered by a Cisco contract. IPS can be deployed in two ways:
IPS Module within ASA firewall
IPS features are also available on the ASA by using the AIP-SSM module, which monitors and prevents malicious traffic passing through the ASA to the internal network.
The standalone IPS appliance is suited to handling one or more networks, with its ports configurable as inline pairs. If anti-X (CSC-SSM) is deployed in the ASA, the IPS module cannot also be deployed, and one has to rely on the IPS appliance for intrusion prevention.
Note: Future versions of ASA will support Anti-X & IPS functionality.
IronPort Email Security Appliance
The world’s leading email security appliance in the Cisco security portfolio, it is ideally placed between the firewall and the email server so that it acts as a “shock absorber” for all incoming mail.
IronPort email security appliances use multi-layer filtering technology, which includes reputation-based and context-based filtering.
6500 Chassis-Based FWSM Module
The Cisco Catalyst 6500 Series Firewall Services Module (FWSM) fits in the 6500 chassis, allowing customers to benefit from industry-leading innovations, including:
- Leading scalability and performance
100,000 connections/sec and 2.8 million pps
- Unprecedented security protection at Layers 2–7
Private VLAN integration between the FWSM and the Cisco Catalyst 6500 Series for ease of policy deployment
Advanced firewall capabilities, including application and protocol inspections
- Every port within the chassis becomes a security port
Every FWSM works in tandem with other modules in the chassis to deliver robust security throughout the entire chassis.
- New services can be deployed with minimal operational complexity. The Cisco FWSM’s integrated approach combines virtualization and high availability, and solutions are enhanced through complementary functions.
Endpoint Security
Cisco Security Agent
Cisco Security Agent is the first endpoint security solution that combines zero-update attack protection, data loss prevention, and signature-based antivirus in a single agent. This unique blend of capabilities defends servers and desktops against sophisticated day-zero attacks and enforces acceptable-use and compliance policies within a simple management infrastructure. Cisco Security Agent also ships with the ClamAV antivirus engine to provide protection against viruses.
Network Admission Control
NAC provides complete control over the network. Cisco Network Admission Control (NAC) allows only compliant and trusted endpoints with predefined security postures, such as PCs, servers, and PDAs, onto the network. It restricts the access of noncompliant devices, thereby limiting the potential damage from emerging security threats and risks.
Monitoring, Analysis and Response System (MARS)
An appliance-based solution that correlates data from across the enterprise and uses your existing network and security investments to identify, isolate, and recommend precision removal of offending elements. MARS, when used in conjunction with Cisco IPS Sensor software v5, provides a total collaborative solution, protecting your entire network infrastructure from attacks, viruses, worms, and other malicious traffic.
Cisco Security Manager
Cisco Security Manager is an enterprise-class management application designed to configure firewall, VPN, and intrusion prevention (IPS) security services on Cisco network and security devices. Cisco Security Manager can be used in networks of all sizes—from small networks to large networks consisting of thousands of devices—by using policy-based management techniques. Cisco Security Manager works in conjunction with the Cisco Security Monitoring, Analysis, and Response System (MARS). Used together, they provide a comprehensive security management solution that addresses configuration management, security monitoring, analysis, and mitigation.
More Network Security Info and Tips: http://blog.router-switch.com/category/networking-2/
Mobile Cloud Traffic to Account for 71 Percent, or 7.6 Exabytes per Month, of Total Mobile Data Traffic by 2016, Compared to 45 Percent, or 269 Petabytes per Month, in 2011
According to the Cisco Visual Networking Index (VNI) Global Mobile Data Traffic Forecast for 2011 to 2016, worldwide mobile data traffic will increase 18-fold over the next five years, reaching 10.8 exabytes per month — or an annual run rate of 130 exabytes — by 2016.
The expected sharp increase in mobile traffic is due, in part, to a projected surge in the number of mobile Internet-connected devices, which will exceed the number of people on earth (2016 world population estimate of 7.3 billion; source: United Nations). During 2011−2016, Cisco anticipates that global mobile data traffic will grow three times faster than global fixed data traffic.
The forecast predicts an annual run rate of 130 exabytes of mobile data traffic, equivalent to:
33 billion DVDs.
4.3 quadrillion MP3 files (music/audio).
813 quadrillion short message service (SMS) text messages.
An exabyte is a unit of information or computer storage equal to 1 quintillion bytes.
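As a quick sanity check (illustrative Python, not from the original report), the monthly and annual figures quoted above line up:

```python
# Cross-check the 2016 forecast figures quoted above.
monthly_exabytes = 10.8           # forecast mobile traffic per month in 2016
annual_run_rate = monthly_exabytes * 12

# 10.8 EB/month * 12 months = 129.6 EB, quoted as "130 exabytes".
print(round(annual_run_rate, 1))  # → 129.6
```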
This mobile data traffic increase represents a compound annual growth rate (CAGR) of 78 percent spanning the forecast period. The incremental amount of traffic being added to the mobile Internet between 2015 and 2016 alone is approximately three times the estimated size of the entire mobile Internet in 2012. The following trends are driving these significant increases:
1. More Streamed Content: With consumer expectations increasingly shifting toward on-demand or streamed content rather than simply downloaded content, mobile cloud traffic will increase, growing 28-fold from 2011 to 2016, a CAGR of 95 percent.
2. More Mobile Connections: There will be more than 10 billion mobile Internet-connected devices in 2016, including machine-to-machine (M2M) modules — exceeding the world’s projected population at that time of 7.3 billion. (One M2M application is the use of wireless networks to update digital billboards. This allows advertisers to display different messages based on time of day or day-of-week and allows quick global changes for messages, such as pricing changes for gasoline).
3. Enhanced Computing of Devices: Mobile devices are becoming more powerful and thus able to consume and generate more data traffic. Tablets are a prime example of this trend, generating traffic levels that will grow 62-fold from 2011 to 2016 — the highest growth rate of any device category tracked in the forecast. The amount of mobile data traffic generated by tablets in 2016 (1 exabyte per month) will be four times the total amount of monthly global mobile data traffic in 2010 (237 petabytes per month).
4. Faster Mobile Speeds: Mobile network connection speed is a key enabler for mobile data traffic growth. More speed means more consumption, and Cisco projects mobile speeds (including 2G, 3G and 4G networks) to increase nine-fold from 2011 to 2016.
5. More Mobile Video: Mobile users want the best experiences they can have and that generally means mobile video, which will comprise 71 percent of all mobile data traffic by 2016.
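The 78 percent CAGR quoted above follows directly from the forecast 18-fold growth over five years, since 18^(1/5) ≈ 1.78. A quick check (illustrative Python):

```python
# Derive the compound annual growth rate (CAGR) implied by the
# forecast 18-fold traffic growth over the 5-year period 2011-2016.
growth_factor = 18
years = 5

cagr = growth_factor ** (1 / years) - 1   # ~0.78
print(f"{cagr:.0%}")  # → 78%
```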
The Cisco study also projects that 71 percent of all smartphones and tablets (1.6 billion) could be capable of connecting to an Internet Protocol version 6 (IPv6) mobile network by 2016. From a broader perspective, 39 percent of all global mobile devices (more than 4 billion), could be IPv6-capable by 2016.
Impact of Mobile Devices/Connections
a. The increasing number of wireless devices and nodes accessing mobile networks worldwide is the primary contributor to traffic growth. By 2016, there will be more than 8 billion handheld or personal mobile-ready devices and nearly 2 billion machine-to-machine connections, such as GPS systems in cars, asset-tracking systems in the shipping and manufacturing sectors, and medical applications for making patient records more readily available.
b. Smartphones, laptops and other portable devices will drive about 90 percent of global mobile data traffic by 2016.
c. M2M traffic will represent 5 percent of 2016 global mobile data traffic while residential broadband mobile gateways will account for the remaining 5 percent of global mobile data traffic.
---Original resources from m2mworldnews.com
More Cisco News:
The ISO, International Organization for Standardization is the Emily Post of the network protocol world. Just like Ms. Post, who wrote the book setting the standards or protocols for human social interaction, the ISO developed the OSI model as the precedent and guide for an open network protocol set. Defining the etiquette of communication models, it remains today the most popular means of comparison for protocol suites.
The OSI layers, from top to bottom, are:
- The Application layer
- The Presentation layer
- The Session layer
- The Transport layer
- The Network layer
- The Data Link layer
- The Physical layer
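As a memory aid, the seven layers can be sketched as a simple lookup table (illustrative Python; the example protocols are common textbook associations, not part of the list above):

```python
# The seven OSI layers, numbered top-down, with common example
# protocols and functions for each (examples are illustrative).
OSI_LAYERS = {
    7: ("Application",  "HTTP, SMTP, DNS"),
    6: ("Presentation", "data formats, encryption"),
    5: ("Session",      "session setup and teardown"),
    4: ("Transport",    "TCP, UDP"),
    3: ("Network",      "IP, ICMP, routing"),
    2: ("Data Link",    "Ethernet frames, MAC addresses"),
    1: ("Physical",     "cables, connectors, signaling"),
}

def layer_name(number: int) -> str:
    """Return the OSI layer name for a layer number from 1 to 7."""
    return OSI_LAYERS[number][0]

print(layer_name(4))  # → Transport
```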
Cisco Hierarchical Model
Hierarchy has many of the same benefits in network design that it does in other areas of life. When used properly, it makes networks more predictable. It helps us define at which levels of hierarchy we should perform certain functions. Likewise, you can use tools such as access lists at certain levels in hierarchical networks and avoid them at others.
Large networks can be extremely complicated, with multiple protocols, detailed configurations, and diverse technologies. Hierarchy helps us summarize a complex collection of details into an understandable model. Then, as specific configurations are needed, the model dictates the appropriate manner to apply them.
The Cisco hierarchical model can help you design, implement, and maintain a scalable, reliable, cost-effective hierarchical internetwork.
The following are the three layers:
- The Core layer or Backbone
- The Distribution layer
- The Access layer
Each layer has specific responsibilities. Note, however, that the three layers are logical and are not necessarily physical devices. Consider the OSI model, another logical hierarchy: the seven layers describe functions but not necessarily protocols. Sometimes a protocol maps to more than one layer of the OSI model, and sometimes multiple protocols communicate within a single layer. In the same way, when we build physical implementations of hierarchical networks, we may have many devices in a single layer, or we might have a single device performing functions at two layers. The definition of the layers is logical, not physical.
Now, let's take a closer look at each of the layers.
The Core Layer
The core layer is literally the backbone of the internetwork. At the top of the hierarchy, the core layer is responsible for transporting large amounts of traffic both reliably and quickly. The only purpose of the network's core layer is to switch traffic as fast as possible. The traffic transported across the core is common to a majority of users. However, remember that user data is processed at the distribution layer, which forwards the requests to the core if needed.
If there is a failure in the core, every user can be affected. Therefore, fault tolerance at this layer is an issue. The core is likely to see large volumes of traffic, so speed and latency are driving concerns here. Given the function of the core, we can now consider some design specifics. Let's start with something we don't want to do.
- Don't do anything to slow down traffic. This includes using access lists, routing between virtual local area networks, and packet filtering.
- Don't support workgroup access here.
- Avoid expanding the core when the internetwork grows. If performance becomes an issue in the core, give preference to upgrades over expansion.
Now, there are a few things that we want to do as we design the core. They include the following:
- Design the core for high reliability. Consider data-link technologies that facilitate both speed and redundancy, such as FDDI, Fast Ethernet, or even ATM.
- Design with speed in mind. The core should have very little latency.
- Select routing protocols with lower convergence times. Fast and redundant data-link connectivity is no help if your routing tables are shot.
The Distribution Layer
The distribution layer is sometimes referred to as the workgroup layer and is the major communication point between the access layer and the core. The primary function of the distribution layer is to provide routing, filtering, and WAN access and to determine how packets can access the core, if needed.
The distribution layer must determine the fastest way that network service requests are handled; for example, how a file request is forwarded to a server. After the distribution layer determines the best path, it forwards the request to the core layer. The core layer then quickly transports the request to the correct service.
The distribution layer is the place to implement policies for the network. Here you can exercise considerable flexibility in defining network operation. There are several items that generally should be done at the distribution layer such as:
- Implementation of tools such as access lists, packet filtering, and queuing
- Implementation of security and network policies including firewalls
- Redistribution between routing protocols, including static routing
- Routing between VLANs and other workgroup support functions
- Definitions of broadcast and multicast domains
Things to avoid at this layer are limited to those functions that exclusively belong to one of the other layers.
The Access Layer
The access layer controls user and workgroup access to internetwork resources. The access layer is sometimes referred to as the desktop layer. The network resources most users need will be available locally. The distribution layer handles any traffic for remote services.
The following are some of the functions to be included at the access layer:
- Continued access control and policies
- Creation of separate collision domains
- Workgroup connectivity into the distribution layer through layer 2 switching
Technologies such as DDR and Ethernet switching are frequently seen in the access layer. Static routing is seen here as well. As already noted, three separate levels do not imply three separate routers. There could be fewer, or there could be more. Remember, this is a layered approach.
---Original Resource from tech-faq.com
More Related Cisco Network Readings:
The fact that online shoppers in China are three times more likely to desire a clear return policy than online shoppers in the United States should suggest to e-commerce businesses that a universal payment platform will not necessarily suit shoppers in every country. A recent survey found that while online shopping itself may be a nearly universal behavior, habits differ slightly based on nationality.
Pitney Bowes Inc. found that while shopping online is almost universal – 93 percent of those surveyed had purchased products online and nearly half said they had done so in the previous month – there were slight variations in feelings toward prices, selection of products, the checkout process, the shipping process and shipping costs.
For example, French consumers are seven times more likely to want to actively track an order than Japanese consumers, while Canadian consumers were half as likely to care about an accurate delivery date as either Chinese or South Korean consumers.
" … To be successful, retailers need to ensure they can offer a simple and seamless online shopping experience, and have a clear understanding of consumers’ purchasing, shipping and communications preferences in each market," said Pitney Bowes's Jay Oxton in a press release.
In an increasingly globalized world, the internet transcends traditional boundaries, providing companies that accept credit cards online a tremendous opportunity to bolster international sales. An Internet World Stats survey estimates that nearly one-third of the world's population use the internet, so business owners must thoroughly understand their clientele.
Any payment platform must address the desires of as many customers as possible, so companies should consider customer service when choosing a merchant account manager. Established companies that feel they may be lagging in customer service should conduct a payment processing review, which can help determine the areas in which customer service needs to improve.
---Original reading: patriciaweberconsulting.com
More Related Reading: What’s Your Habit While Shopping or Shopping Online?
Cloud computing is the delivery of computing as a service rather than a product, whereby shared resources, software, and information are provided to computers and other devices as a utility (like the electricity grid) over a network (typically the Internet).
Cloud computing entrusts services, typically centralized, with your data, software, and computation, exposed through a published application programming interface (API) over a network. It has a lot of overlap with software as a service (SaaS).
End users access cloud-based applications through a web browser or a lightweight desktop or mobile app, while the business software and data are stored on servers at a remote location. Cloud application providers strive to give the same or better service and performance than if the software programs were installed locally on end-user computers.
At the foundation of cloud computing is the broader concept of infrastructure convergence (or Converged Infrastructure) and shared services. This type of data centre environment allows enterprises to get their applications up and running faster, with easier manageability and less maintenance, and enables IT to more rapidly adjust IT resources (such as servers, storage, and networking) to meet fluctuating and unpredictable business demand.
Cloud computing shares characteristics with:
Autonomic computing—Computer systems capable of self-management.
Client–server model—Client–server computing refers broadly to any distributed application that distinguishes between service providers (servers) and service requesters (clients).
Grid computing—"A form of distributed and parallel computing, whereby a 'super and virtual computer' is composed of a cluster of networked, loosely coupled computers acting in concert to perform very large tasks."
Mainframe computer—Powerful computers used mainly by large organizations for critical applications, typically bulk data processing such as census, industry and consumer statistics, police and secret intelligence services, enterprise resource planning, and financial transaction processing.
Utility computing—The "packaging of computing resources, such as computation and storage, as a metered service similar to a traditional public utility, such as electricity."
Peer-to-peer—Distributed architecture without the need for central coordination, with participants being at the same time both suppliers and consumers of resources (in contrast to the traditional client–server model).
Cloud computing exhibits the following key characteristics:
Empowerment of end-users of computing resources by putting the provisioning of those resources in their own control, as opposed to the control of a centralized IT service.
Agility improves with users' ability to re-provision technological infrastructure resources.
Application programming interface (API) accessibility to software that enables machines to interact with cloud software in the same way the user interface facilitates interaction between humans and computers. Cloud computing systems typically use REST-based APIs.
Cost is claimed to be reduced and in a public cloud delivery model capital expenditure is converted to operational expenditure. This is purported to lower barriers to entry, as infrastructure is typically provided by a third-party and does not need to be purchased for one-time or infrequent intensive computing tasks. Pricing on a utility computing basis is fine-grained with usage-based options and fewer IT skills are required for implementation (in-house).
Device and location independence enable users to access systems using a web browser regardless of their location or what device they are using (e.g., PC, mobile phone). As infrastructure is off-site (typically provided by a third-party) and accessed via the Internet, users can connect from anywhere.
Virtualization technology allows servers and storage devices to be shared and utilization to be increased. Applications can be easily migrated from one physical server to another.
Multi-tenancy enables sharing of resources and costs across a large pool of users thus allowing for:
Centralization of infrastructure in locations with lower costs (such as real estate, electricity, etc.)
Peak-load capacity increases (users need not engineer for highest possible load-levels)
Utilisation and efficiency improvements for systems that are often only 10–20% utilised.
Reliability is improved if multiple redundant sites are used, which makes well-designed cloud computing suitable for business continuity and disaster recovery.
Scalability and Elasticity via dynamic ("on-demand") provisioning of resources on a fine-grained, self-service basis near real-time, without users having to engineer for peak loads.
Performance is monitored, and consistent and loosely coupled architectures are constructed using web services as the system interface.
Security could improve due to centralization of data, increased security-focused resources, etc., but concerns can persist about loss of control over certain sensitive data, and the lack of security for stored kernels. Security is often as good as or better than other traditional systems, in part because providers are able to devote resources to solving security issues that many customers cannot afford. However, the complexity of security is greatly increased when data is distributed over a wider area or greater number of devices and in multi-tenant systems that are being shared by unrelated users. In addition, user access to security audit logs may be difficult or impossible. Private cloud installations are in part motivated by users' desire to retain control over the infrastructure and avoid losing control of information security.
Maintenance of cloud computing applications is easier, because they do not need to be installed on each user's computer and can be accessed from different places.
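The 10–20% utilization point above can be made concrete with a toy consolidation calculation (illustrative Python; the server count and utilization figure are hypothetical, chosen from the quoted range):

```python
# Toy consolidation math for the utilization point above: several
# lightly used dedicated servers carry far less than one server's
# worth of real work, which multi-tenant infrastructure can pool.
dedicated_servers = 5
utilization_percent = 15           # hypothetical, inside the 10-20% range

total_work_percent = dedicated_servers * utilization_percent
print(total_work_percent)  # → 75, i.e. 0.75 of one fully used server
```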
Cloud computing providers offer their services according to three fundamental models: Infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS) where IaaS is the most basic and each higher model abstracts from the details of the lower models.
Infrastructure as a Service (IaaS)
In this most basic cloud service model, cloud providers offer computers (physical, or more often virtual machines), raw block storage, firewalls, load balancers, and networks. IaaS providers supply these resources on demand from their large pools installed in data centers. Local area networks, including IP addresses, are part of the offer. For wide-area connectivity, the Internet can be used, or, in carrier clouds, dedicated virtual private networks can be configured.
To deploy their applications, cloud users then install operating system images on the machines as well as their application software. In this model, it is the cloud user who is responsible for patching and maintaining the operating systems and application software. Cloud providers typically bill IaaS services on a utility computing basis, that is, cost will reflect the amount of resources allocated and consumed.
Platform as a Service (PaaS)
In the PaaS model, cloud providers deliver a computing platform and/or solution stack typically including operating system, programming language execution environment, database, and web server. Application developers can develop and run their software solutions on a cloud platform without the cost and complexity of buying and managing the underlying hardware and software layers. With some PaaS offers, the underlying compute and storage resources scale automatically to match application demand such that the cloud user does not have to allocate resources manually.
Software as a Service (SaaS)
In this model, cloud providers install and operate application software in the cloud, and cloud users access the software from cloud clients. The cloud users do not manage the cloud infrastructure and platform on which the application is running. This eliminates the need to install and run the application on the cloud user's own computers, simplifying maintenance and support. What makes a cloud application different from other applications is its elasticity. This can be achieved by cloning tasks onto multiple virtual machines at run-time to meet the changing work demand. Load balancers distribute the work over the set of virtual machines. This process is transparent to the cloud user, who sees only a single access point. To accommodate a large number of cloud users, cloud applications can be multitenant, that is, any machine serves more than one cloud user organization. It is common to refer to special types of cloud-based application software with a similar naming convention: desktop as a service, business process as a service, test environment as a service, communication as a service.
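The elasticity described above — one access point, with work spread across cloned virtual machines — can be sketched as a toy round-robin load balancer (illustrative Python; the class and the "vm-N" names are hypothetical, not any real cloud API):

```python
class LoadBalancer:
    """Toy single access point that spreads requests round-robin
    over a pool of cloned "virtual machines"."""

    def __init__(self, workers):
        self._workers = list(workers)
        self._index = 0

    def add_worker(self, worker):
        # Elasticity: clone the task onto another VM when demand grows.
        self._workers.append(worker)

    def dispatch(self, request):
        # The cloud user sees one endpoint; the balancer transparently
        # picks which clone serves the request.
        worker = self._workers[self._index % len(self._workers)]
        self._index += 1
        return f"{worker} handled {request}"

lb = LoadBalancer(["vm-1", "vm-2"])
print(lb.dispatch("req-1"))  # → vm-1 handled req-1
print(lb.dispatch("req-2"))  # → vm-2 handled req-2
```

Adding a worker with `add_worker` mid-stream models cloning a task onto a new VM; subsequent requests are shared across the larger pool.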
Users access cloud computing using networked client devices, such as desktop computers, laptops, tablets, and smartphones. Some of these devices (cloud clients) rely on cloud computing for all or a majority of their applications, so as to be essentially useless without it. Examples are thin clients and the browser-based Chromebook. Many cloud applications do not require specific software on the client and instead use a web browser to interact with the cloud application. With Ajax and HTML5, these web user interfaces can achieve a similar or even better look and feel than native applications. Some cloud applications, however, support specific client software dedicated to these applications (e.g., virtual desktop clients and most email clients). Some legacy applications (line-of-business applications that until now have been prevalent in thin client Windows computing) are delivered via a screen-sharing technology.
Public cloud is the model in which applications, storage, and other resources are made available to the general public by a service provider. Public cloud services may be free or offered on a pay-per-usage model. A limited number of providers, such as Microsoft and Google, own all the infrastructure in their data centers, and access is through the Internet only; no direct connectivity is proposed in the public cloud architecture.
Community cloud shares infrastructure between several organizations from a specific community with common concerns (security, compliance, jurisdiction, etc.), whether managed internally or by a third-party and hosted internally or externally. The costs are spread over fewer users than a public cloud (but more than a private cloud), so only some of the cost savings potential of cloud computing are realized.
Hybrid cloud is a composition of two or more clouds (private, community or public) that remain unique entities but are bound together, offering the benefits of multiple deployment models.
Private cloud is infrastructure operated solely for a single organization, whether managed internally or by a third-party and hosted internally or externally.
Private clouds have attracted criticism because users "still have to buy, build, and manage them" and thus do not benefit from less hands-on management, essentially "[lacking] the economic model that makes cloud computing such an intriguing concept".
NOTES: For more on cloud computing, such as its history, cloud engineering, and issues including privacy, compliance, and security, see the Cloud Computing article on wikipedia.org.
More Related: CloudVerse: Cisco Storms into the Cloud Market