Posts with #networking tag
Businesses have long recognized the benefits of wireless networking: flexible network connectivity, improved productivity, and lower cabling costs. As demand for reliable and predictable wireless support for time-sensitive applications (such as video streaming and real-time collaboration) has risen, many organizations have decided to deploy 802.11n to meet their higher performance requirements. Successful deployment and operation of an 802.11n wireless network depends heavily on the wired
LAN that supports it. To take full advantage of the performance enhancements and scalability offered by 802.11n, follow the four easy suggestions below to simplify deployment of an 802.11n wireless network on your wired LAN and maximize network efficiency.
1. Prepare for growth.
Bandwidth provisioning: The main driver for deploying an 802.11n network is the increased bandwidth it provides for multimedia applications. The lower speeds of an 802.11a/g wireless network resulted in unreliable and undesirable behavior for bandwidth-intensive voice and video applications. Now, with the improved performance and enhanced reliability of 802.11n, it is possible for your wireless LAN to function as predictably as a wired LAN.
To utilize the full potential of 802.11n, sufficient bandwidth must be provisioned in the wired LAN to support the increased traffic demands. A 1:1 ratio of 1Gbps port per 802.11n access point is a safe rule; less obvious is how to properly provision the uplink ports.
10GE uplinks provide the bandwidth necessary to backhaul traffic generated by multiple access points or other bandwidth-intensive devices supported by a single PoE switch. 10GE uplinks provide a reliable and predictable response to the 802.11n wireless LAN demands. 10GE provides the support necessary to decrease latency in time-sensitive applications.
If you are not ready to make the move to 10GE, you can use the existing 1Gbps uplinks.
However, it is important to be aware of traffic demands on the switch to avoid excessive network disruption. As your network needs grow, a 10GE uplink is recommended.
Cisco solution: The Cisco Catalyst 3750-E Series Switches with StackWise Plus are an enterprise-class line of stackable wiring closet switches that facilitates the deployment of secure converged applications while maximizing investment protection for evolving network and application requirements. Combining 10/100/1000 and Power over Ethernet (PoE) configurations with 10 Gigabit Ethernet uplinks, the Cisco Catalyst 3750-E enhances worker productivity by enabling applications such as IP telephony, wireless, and video.
The Cisco TwinGig Converter Module supports the multistep approach to deploying 802.11n by providing a flexible way to deploy 10GE without network disruption. The converter module allows 1 Gigabit uplinks to be used until traffic growth on the network requires a 10GE uplink.
This converter module is supported in the Cisco Catalyst 3750-E PoE switches and allows for easy 10GE deployment as the 802.11n bandwidth demand increases.
2. Eliminate complexity and lower costs.
Power over Ethernet (PoE): The benefits of wireless networking are increased productivity and decreased complexity and cost. An integral part of this equation is the ability to provide power through PoE switches. By delivering power over the existing LAN Ethernet cabling to connected devices, PoE removes the need for costly and complicated electrical upgrades and reduces the labor costs associated with deployment.
For example, electrical outlets are not usually placed in hard-to-reach areas. Wireless access points, in contrast, are typically placed in ceilings to maximize wireless coverage. By simply using the existing LAN conduit, which is typically run inside walls and over ceilings, a wireless access point can be powered where electricity was previously not available. By eliminating the restriction of placing access points only where power outlets are available, a more flexible and reliable wireless network can be realized. The end results are maximum wireless network coverage and availability for the end users.
The benefits of PoE switches do not stop there. Deployment of a PoE switch provides the benefit of being able to control the APs in the plenum (the space between the structural ceiling and a drop-down ceiling) and power off (and on) wireless access points from the switch or WLAN controller.
Power management: After PoE switches are enabled on the LAN, power management of the switches and connected devices can be as simple as setting the automatic thermostat in your home. The Cisco Catalyst PoE switches support Cisco EnergyWise, an advanced green IT technology that allows businesses to measure their power usage and create policies to reduce power consumption when the network is not in use, such as turning off power to “sleeping” devices, such as voice-over-IP (VoIP) phones and printers, during hours when the business is closed.
The Cisco Catalyst PoE switches and Cisco Aironet products are designed to work in concert in providing power-optimized solutions. The Cisco Aironet 1140 Access Point supports Cisco Wireless Control System (WCS) adaptive power management, which allows businesses to schedule when the access point radios are available.
Maintaining the predictability and reliability of an 802.11n network requires device features that perform as configured. Many 802.11n devices have power demands that exceed the 802.3af standard. A frequent, undesirable tradeoff is disabling features when the device approaches the limits of the 802.3af PoE standard. The Cisco Aironet 1140 Access Point works within the 802.3af power specification without compromising performance, feature set, or power usage.
By coupling the power management capabilities of the Cisco Catalyst PoE switches and the Aironet access points, you can set policies to schedule power usage according to your business needs. And, through careful planning, you can deploy a wired and wireless network solution that provides a simple way to significantly save on overall business operating expenses.
3. Automate devices.
Automate switch deployment: Cisco Catalyst switches support many ease-of-use features.
To simplify switch deployment and minimize the chance of error, the Cisco Catalyst 3750-E offers DHCP AutoInstall. This allows the switch to automatically receive its IP address upon initialization and, once the IP address has been obtained, automatically download the appropriate configuration file.
By automating the deployment process, the AutoInstall feature allows multiple switches to be deployed easily and uniformly, without the risk of administrator input error.
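As a rough sketch of the server side, AutoInstall relies on a DHCP server handing the booting switch an address plus the TFTP server and configuration file to fetch. The pool name, addresses, and file name below are placeholders for illustration:

ip dhcp pool AUTOINSTALL
 network 10.1.1.0 255.255.255.0
 default-router 10.1.1.1
 option 150 ip 10.1.1.50
 bootfile switch-confg

With something like this in place, a new switch that boots with no startup configuration obtains an address in 10.1.1.0/24 via DHCP and then pulls switch-confg from the TFTP server at 10.1.1.50.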
Simplify moves, adds, and changes: The dynamic nature of a wireless network demands real-time responsiveness in the wired LAN. The network should be able to adjust to changes in the network with minimal interference. Cisco’s Auto Smartports significantly decrease deployment time and increase accuracy and consistency by automatically detecting devices connected to its ports. Cisco Catalyst switches use Auto Smartports macros to apply precreated, common switch port configuration scripts and, through automation, lower administrative costs and network response time.
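On switches that support Auto Smartports, the feature is typically enabled globally so that built-in macros fire when a known device type (such as a Cisco access point or IP phone) is detected on a port. The command below is a sketch; availability and syntax vary by platform and software release:

Switch(config)# macro auto global processing

Once enabled, connecting a Cisco access point triggers the corresponding built-in macro, which applies the appropriate VLAN, QoS, and security settings to that port automatically.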
Set network policies: To maximize the benefit of an 802.11n converged environment, it is important to balance resources and address the possibility of resource contention throughout the LAN. Automated network services, such as auto-QoS, allow for easy configuration of traffic prioritization in order to reliably deliver data to time-sensitive applications.
Grant secure user access: Security is another area where creating policies is vital to simplifying wireless deployment. Cisco Identity-Based Networking Services (IBNS) is an integrated solution that combines the management of authentication, access control, and user policies to secure network connectivity and resources. It also provides an account of user activities for visibility and to safeguard the network. By providing centralized, policy-based management for network security policies, it removes the need to manually configure user rights on a per-port basis and greatly simplifies overall network administration, thus decreasing cost and the potential for downtime.
4. Protect your investment.
802.11n is a new technology that is experiencing high early adoption rates, and its deployment needs to be planned in a well-thought-out and prudent manner.
Intelligent networks are built with a strategic vision to keep them at maximum efficiency and top performance. To make sure your business has done the best to protect its upgrade to an 802.11n network, here are a few quick questions you should ask:
● Is my wired network ready to support the demands of an 802.11n wireless network?
● Can I easily upgrade the performance of my LAN switches without network disruption?
● Using 802.3af-compliant PoE switches, can my 802.11n wireless access point perform at full performance and security without any feature constraints?
● Can my switch vendor guarantee interoperability between my PoE switch and my 802.11n wireless access point?
● Do my switches and wireless access points collaborate to help me manage my business operational costs and lower my environmental effects?
By creating high-performance network infrastructures and continuing to lead industry innovations, Cisco has created reliable and responsive environments that accelerate the deployment of applications and services over a single network.
Cisco wired and wireless solutions allow businesses to use their networks more efficiently and effectively through reliability and consistency. Cisco's end-to-end solutions reduce complexity, resulting in lower administrative costs.
Additionally, rigorous interoperability and performance testing is done between Cisco devices to guarantee maximum results. There is no interoperability guesswork. The Cisco Catalyst PoE switches and the Cisco Aironet 1140 provide the performance, power, and security needed to support the demands of an 802.11n wireless network.
Cisco understands that as early adopters of the 802.11n wireless network, businesses are making the decision to lead rather than follow. By driving standards forward, helping customers plan for the future, and enabling network excellence, Cisco is committed to its customers’ success.
Glue Networks developing automation tools for managing WAN operations
SDNs aren't just for data center networks, despite the best-use-case-scenario arguments for network virtualization and flow management pervading the industry.
SDNs can automate and manage WAN operations as well. Google is using OpenFlow to interconnect data centers over a WAN.
And startup Glue Networks is targeting Cisco's installed base of WAN routers as a sweet spot for its SDN WAN offerings.
Major IT trends such as SaaS, private clouds, BYOD, mobility and voice/data convergence are stressing the quality of links in an enterprise WAN, as analyst Lee Doyle notes here. WAN links now require improved security, lower latency, higher reliability and support for any device in any location to accommodate these trends.
SDN can help enterprise IT accomplish this without the expense of upgrading individual WAN links, Doyle notes. The technology can allow for prioritization of key applications and traffic types, ease provisioning for new sites, new applications, and changed traffic priorities, enhance security and more tightly link WAN service to specific applications.
That's what Glue Networks is after. Glue's gluware software is a cloud-based service for turning up remote sites and teleworkers worldwide. It is designed to lower the cost of private WAN networking by automating those operations and handling ongoing maintenance, monitoring, life-cycle management and feature extension.
Some of those features might include Cisco's WAAS Express, ScanSafe, ISE, MediaNet and TrustSec services.
The software automates the provisioning of voice, video, wireless, LAN networking, IP addressing, PKI security, firewalls, VLANs and ACLs, and allows users to configure a meshed, spoke-to-spoke, low-latency infrastructure that is QoS-enabled, the company says.
The company's gluware Teleworker software resides in the cloud and acts as a control plane to create a secure data plane for teleworkers to connect to the corporate network. Teleworkers can self-provision their equipment with a single click and no IT support, Glue claims.
Glue's products are essentially a software-defined dynamic multipoint VPN offered as a monthly software-as-a-service subscription. It includes a central policy-based controller, applications with "CCIE intelligence," and an API to configure the OS using the applications.
Glue's gluware also includes tools for alert notification based on thresholds; hardware ordering logistics and router provisioning workflows; end-user and administrator monitoring portals; repository of network configurations, end-user data, and reporting and monitoring data; agents to proactively monitor the health of the network and deploy large-scale configurations; and an orchestrator to generate hardware configurations, check for errors and conduct "self-healing" operations.
Glue says its addressable market is the $12 billion worth of 16 million Cisco WAN routers installed globally. Glue expects Cisco to have 23 million WAN routers installed by 2017.
Glue was founded in 2007. It has about $6.2 million in funding from a $4.5 million Series A round in 2011, and $1.7 million in convertible notes in 2012. The company's investors include Keiretsu Forum, San Joaquin Angels, Sierra Angels, Sacramento Angels, Sand Hill Angels, Harvard Angels, Halo Fund and Angel Forum.
Glue is headquartered in San Francisco, and the company's executive team comprises officials from Yelofin Networks, Cisco, Agilent, Intel, INX and MTV Networks.
---News from http://www.networkworld.com/news/2013/041213-glue-networks-268664.html
More Related Cisco News:
A. Although EIGRP can propagate a default route using the default network method, it is not required. EIGRP redistributes default routes directly.
Q. Should I always use the eigrp log-neighbor-changes command when I configure EIGRP?
A. Yes, this command makes it easy to determine why an EIGRP neighbor was reset. This reduces troubleshooting time.
Q. Does EIGRP support secondary addresses?
A. EIGRP does support secondary addresses. Since EIGRP always sources data packets from the primary address, Cisco recommends that you configure all routers on a particular subnet with primary addresses that belong to the same subnet. Routers do not form EIGRP neighbors over secondary networks. Therefore, if all of the primary IP addresses of routers do not agree, problems can arise with neighbor adjacencies.
Q. What debugging capabilities does EIGRP have?
A. There are protocol-independent and protocol-dependent debug commands. There is also a suite of show commands that display neighbor table status, topology table status, and EIGRP traffic statistics, such as show ip eigrp neighbors, show ip eigrp topology, and show ip eigrp traffic.
Q. What does the word serno mean on the end of an EIGRP topology entry when you issue the show ip eigrp topology command?
A. For example:
show ip eigrp topology
P 172.22.71.208/29, 2 successors, FD is 46163456
via 172.30.1.42 (46163456/45651456), Serial0.2, serno 7539273
via 172.30.2.49 (46163456/45651456), Serial2.6, serno 7539266
Serno stands for serial number. When DRDBs are threaded to be sent, they are assigned a serial number. If you display the topology table at the time an entry is threaded, it shows you the serial number associated with the DRDB.
Threading is the technique used inside the router to queue items up for transmission to neighbors. The updates are not created until it is time for them to go out the interface. Before that, a linked list of pointers to items to send is created (for example, the thread).
These sernos are local to the router and are not passed with the routing update.
Q. What percent of bandwidth and processor resources does EIGRP use?
A. EIGRP version 1 introduced a feature that prevents any single EIGRP process from using more than fifty percent of the configured bandwidth on any link during periods of network convergence. Each AS or protocol (for instance, IP, IPX, or AppleTalk) serviced by EIGRP is a separate process. You can use the ip bandwidth-percent eigrp interface configuration command in order to properly configure the bandwidth percentage on each WAN interface. Refer to the EIGRP White Paper for more information on how this feature works.
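For example, to cap EIGRP AS 1 at 50 percent of a serial link whose configured bandwidth is 256 kbps (the interface, AS number, and values are illustrative):

Router(config)# interface Serial0/0
Router(config-if)# bandwidth 256
Router(config-if)# ip bandwidth-percent eigrp 1 50

With this configuration, the EIGRP process for AS 1 uses at most 128 kbps on the link during convergence.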
In addition, the implementation of partial and incremental updates means that EIGRP sends routing information only when a topology change occurs. This feature significantly reduces bandwidth use.
The feasible successor feature of EIGRP reduces the amount of processor resources used by an autonomous system (AS). It requires only the routers affected by a topology change to perform route re-computation. The route re-computation only occurs for routes that were affected, which reduces search time in complex data structures.
Q. Does EIGRP support aggregation and variable length subnet masks?
A. Yes, EIGRP supports aggregation and variable length subnet masks (VLSM). Unlike Open Shortest Path First (OSPF), EIGRP allows summarization and aggregation at any point in the network. EIGRP supports aggregation to any bit. This allows properly designed EIGRP networks to scale exceptionally well without the use of areas. EIGRP also supports automatic summarization of network addresses at major network borders.
Q. Does EIGRP support areas?
A. No, a single EIGRP process is analogous to an area of a link-state protocol. However, within the process, information can be filtered and aggregated at any interface boundary. In order to bound the propagation of routing information, you can use summarization to create a hierarchy.
Q. Can I configure more than one EIGRP autonomous system on the same router?
A. Yes, you can configure more than one EIGRP autonomous system on the same router. This is typically done at a redistribution point where two EIGRP autonomous systems are interconnected. Individual router interfaces should only be included within a single EIGRP autonomous system.
Cisco does not recommend running multiple EIGRP autonomous systems on the same set of interfaces on the router. If multiple EIGRP autonomous systems are used with multiple points of mutual redistribution, it can cause discrepancies in the EIGRP topology table if correct filtering is not performed at the redistribution points. If possible, Cisco recommends you configure only one EIGRP autonomous system in any single autonomous system. You can also use another protocol, such as Border Gateway Protocol (BGP), in order to connect the two EIGRP autonomous systems.
Q. If there are two EIGRP processes that run and two equal paths are learned, one by each EIGRP process, do both routes get installed?
A. No, only one route is installed. The router installs the route that was learned through the EIGRP process with the lower Autonomous System (AS) number. In Cisco IOS Software Releases earlier than 12.2(7)T, the router installed the path with the latest timestamp received from either of the EIGRP processes. The change in behavior is tracked by Cisco bug ID CSCdm47037.
Q. What does the EIGRP stuck in active message mean?
A. When EIGRP returns a stuck in active (SIA) message, it means that it has not received a reply to a query. EIGRP sends a query when a route is lost and another feasible route does not exist in the topology table. The SIA is caused by two sequential events:
- The route reported by the SIA has gone away.
- An EIGRP neighbor (or neighbors) have not replied to the query for that route.
When the SIA occurs, the router clears the neighbor that did not reply to the query. When this happens, determine which neighbor has been cleared. Keep in mind that this router can be many hops away. Refer to What Does the EIGRP DUAL-3-SIA Error Message Mean? for more information.
Q. What does the neighbor statement in the EIGRP configuration section do?
A. The neighbor command is used in EIGRP in order to define a neighboring router with which to exchange routing information. Due to the current behavior of this command, EIGRP exchanges routing information with the neighbors in the form of unicast packets whenever the neighbor command is configured for an interface. EIGRP stops processing all multicast packets that come inbound on that interface. Also, EIGRP stops sending multicast packets on that interface.
The ideal behavior of this command is for EIGRP to start sending EIGRP packets as unicast packets to the specified neighbor, but not stop sending and receiving multicast packets on that interface. Since the command does not behave as intended, the neighbor command should be used carefully, understanding the impact of the command on the network.
Q. Why does the EIGRP passive-interface command remove all neighbors for an interface?
A. The passive-interface command disables the transmission and receipt of EIGRP hello packets on an interface. Unlike IGRP or RIP, EIGRP sends hello packets in order to form and sustain neighbor adjacencies. Without a neighbor adjacency, EIGRP cannot exchange routes with a neighbor. Therefore, the passive-interface command prevents the exchange of routes on the interface. Although EIGRP does not send or receive routing updates on an interface configured with the passive-interface command, it still includes the address of the interface in routing updates sent out of other non-passive interfaces. Refer to How Does the Passive Interface Feature Work in EIGRP? for more information.
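For example, to stop EIGRP from forming adjacencies on a user-facing interface while still advertising its subnet (the interface and AS number are illustrative):

Router(config)# router eigrp 1
Router(config-router)# network 10.0.0.0
Router(config-router)# passive-interface GigabitEthernet0/1

EIGRP no longer sends or accepts hellos on GigabitEthernet0/1, but that interface's address still appears in updates sent out other interfaces.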
Q. Why are routes received from one neighbor on a point-to-multipoint interface that runs EIGRP not propagated to another neighbor on the same point-to-multipoint interface?
A. The split horizon rule prohibits a router from advertising a route through an interface that the router itself uses to reach the destination. In order to disable the split horizon behavior, use the no ip split-horizon eigrp as-number interface configuration command. Some important points to remember about EIGRP split horizon are:
- Split horizon behavior is turned on by default.
- When you change the EIGRP split horizon setting on an interface, it resets all adjacencies with EIGRP neighbors reachable over that interface.
- Split horizon should only be disabled on a hub site in a hub-and-spoke network.
- Disabling split horizon on the spokes radically increases EIGRP memory consumption on the hub router, as well as the amount of traffic generated on the spoke routers.
- The EIGRP split horizon behavior is not controlled or influenced by the ip split-horizon command.
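For example, to disable split horizon for EIGRP AS 1 on a hub router's multipoint serial interface (the interface and AS number are illustrative):

Router(config)# interface Serial0/0
Router(config-if)# no ip split-horizon eigrp 1

Remember that entering this command resets EIGRP adjacencies on that interface, so apply it during a maintenance window.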
Q. When I configure EIGRP, how can I configure a network statement with a mask?
A. The optional network-mask argument was first added to the network statement in Cisco IOS Software Release 12.0(4)T. The mask argument can be configured in any format (such as in a network mask or in wild card bits). For example, you can use network 10.10.10.0 255.255.255.252 or network 10.10.10.0 0.0.0.3.
Q. I have two routes: 172.16.1.0/24 and 172.16.1.0/28. How can I deny 172.16.1.0/28 while I allow 172.16.1.0/24 in EIGRP?
A. In order to do this you need to use a prefix-list, as shown here:
router eigrp 100
distribute-list prefix test in
no eigrp log-neighbor-changes
ip prefix-list test seq 5 permit 172.16.1.0/24
This allows only the 172.16.1.0/24 prefix and therefore denies 172.16.1.0/28.
Note: The use of ACL and distribute-list under EIGRP does not work in this case. This is because ACLs do not check the mask, they just check the network portion. Since the network portion is the same, when you allow 172.16.1.0/24, you also allow 172.16.1.0/28.
Q. I have a router that runs Cisco Express Forwarding (CEF) and EIGRP. Who does load-balancing when there are multiple links to a destination?
A. CEF switches packets based on the routing table, which is populated by routing protocols such as EIGRP. In short, CEF performs the load balancing once the routing protocol table is calculated. Refer to How Does Load Balancing Work? for more information on load balancing.
Q. How do you verify if the EIGRP Non Stop Forwarding (NSF) feature is enabled?
A. In order to check the EIGRP NSF feature, issue the show ip protocols command. Here is the sample output:
show ip protocols
Routing Protocol is "eigrp 101"
Outgoing update filter list for all interfaces is not set
Incoming update filter list for all interfaces is not set
Default networks flagged in outgoing updates
Default networks accepted from incoming updates
EIGRP metric weight K1=1, K2=0, K3=1, K4=0, K5=0
EIGRP maximum hopcount 100
EIGRP maximum metric variance 1
Redistributing: eigrp 101
EIGRP NSF-aware route hold timer is 240s
Automatic network summarization is in effect
Maximum path: 4
Routing for Networks:
Routing Information Sources:
Gateway Distance Last Update
Distance: internal 90 external 170
This output shows that the router is NSF-aware and the route-hold timer is set to 240 seconds, which is the default value.
Q. How can I use only one path when a router has two equal cost paths?
A. Configure the bandwidth value on the interfaces to default, and increase the delay on the backup interface so that the router does not see two equal cost paths.
Q. What is the difference in metric calculation between EIGRP and IGRP?
A. The EIGRP metric is obtained when you multiply the IGRP metric by 256. IGRP uses only 24 bits in its update packet for the metric field, while EIGRP uses 32 bits. For example, if the IGRP metric to a destination network is 8586, the EIGRP metric is 8586 x 256 = 2,198,016. Because dividing 10^7 by the minimum bandwidth uses integer division, the result can vary slightly from a manual calculation.
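With the default K values (K1=1, K3=1, all others 0), the widely documented formulas reduce to:

IGRP metric  = (10^7 / min-bandwidth-kbps) + (sum-of-delays-usec / 10)
EIGRP metric = 256 x [ (10^7 / min-bandwidth-kbps) + (sum-of-delays-usec / 10) ]

As a worked example, a path whose slowest link is 1544 kbps with 20,000 microseconds of total delay gives 10^7/1544 = 6476 (integer division) plus 20000/10 = 2000, for an IGRP metric of 8476 and an EIGRP metric of 8476 x 256 = 2,169,856.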
Q. What is the EIGRP Stub Routing feature?
A. The Stub routing feature is used to conserve bandwidth by summarizing and filtering routes. Because of the Stub routing feature, only specified routes are propagated from the remote (Stub) router to the distribution router. For more information about the Stub routing feature, refer to EIGRP Stub Routing. The EIGRP stub feature can be configured on the switch with the eigrp stub [receive-only] [leak-map name] [connected] [static] [summary] [redistributed] command and removed with the no eigrp stub command. When you remove the eigrp stub command from a switch that runs the IP Base image, the switch throws this error:
EIGRP is restricted to stub configurations only
This issue can be resolved if you upgrade to an Advanced Enterprise image. This error is documented in CSCeh58135.
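As an illustrative configuration, a remote-site router can be made a stub that advertises only its connected and summary routes (the AS number is a placeholder):

Router(config)# router eigrp 1
Router(config-router)# eigrp stub connected summary

The hub then suppresses queries to this router during convergence, and only the specified route types are advertised upstream.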
Q. How can I send a default route to the Stub router from the hub?
A. Do this under the outbound interface on the hub router with the ip summary-address eigrp X 0.0.0.0 0.0.0.0 command. This command suppresses all more-specific routes and sends only the summary route. In the case of 0.0.0.0 0.0.0.0, it suppresses everything, and the only route in the outbound update is 0.0.0.0/0. One drawback to this method is that EIGRP installs a 0.0.0.0/0 route to Null0 in the local routing table with an administrative distance of 5.
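A sketch of the hub-side configuration, with the interface and AS number as placeholders:

Router(config)# interface Serial0/0
Router(config-if)# ip summary-address eigrp 1 0.0.0.0 0.0.0.0

After this, spokes reachable via Serial0/0 receive only the 0.0.0.0/0 summary in their EIGRP updates, while the hub installs the local Null0 discard route with administrative distance 5.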
Q. What are different route types in EIGRP?
A. There are three different types of routes in EIGRP:
- Internal Route—Routes that are originated within the Autonomous System (AS).
- Summary Route—Routes that are summarized in the router (for example, internal paths that have been summarized).
- External Route—Routes that are redistributed to EIGRP.
Q. How do you redistribute an IPv6 default route in EIGRP?
A. For redistributing an IPv6 default route in EIGRP, a sample configuration is shown here:
ipv6 prefix-list DEFAULT-ONLY-V6 seq 10 permit ::/0
route-map DEFAULT_2EIGRP-V6 permit 10
match ipv6 address prefix-list DEFAULT-ONLY-V6
router eigrp Starz_EIGRP
address-family ipv6 unicast
redistribute static route-map DEFAULT_2EIGRP-V6
Q. How does EIGRP behave over a GRE tunnel compared to a directly connected network?
A. EIGRP will use the same administrative distance and metric calculation for the GRE tunnel. The cost calculation is based on bandwidth and delay. The bandwidth and delay of the GRE tunnel will be taken from the tunnel interface configured on the router. The tunnel will also be treated like a directly connected network. If there are two paths to reach a network, either through a VLAN interface or a tunnel interface, EIGRP prefers the VLAN interface because it has greater bandwidth than the tunnel interface. In order to influence the routing through the tunnel interface, increase the bandwidth parameter of the tunnel interface, or increase the delay parameter of the VLAN interface.
Q. What is an offset-list, and how is it useful?
A. The offset-list is a feature used to modify the composite metrics in EIGRP. The value configured in the offset-list command is added to the delay value calculated by the router for the route matched by an access-list. An offset-list is the preferred method to influence a particular path that is advertised and/or chosen.
Q. How can I tag external routes in EIGRP?
A. You can tag routes that EIGRP has learned from another routing protocol using a 32 bit tag value. Starting with ddts CSCdw22585, internal routes can also be tagged. However, the tag value cannot exceed 255 due to packet limitations for internal routes.
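For example, routes redistributed from OSPF can be tagged so that they can be matched and filtered elsewhere in the EIGRP domain; the process numbers, seed metric values, and tag here are illustrative:

Router(config)# route-map TAG-OSPF permit 10
Router(config-route-map)# set tag 100
Router(config)# router eigrp 1
Router(config-router)# redistribute ospf 1 metric 10000 100 255 1 1500 route-map TAG-OSPF

Other routers can then match tag 100 in a route-map to filter or adjust these external routes.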
Q. What are the primary functions of the PDM?
A. EIGRP supports 3 protocol suites: IP, IPv6, and IPX. Each of them has its own PDM. These are the primary functions of PDM:
- Maintaining the neighbor and topology tables of EIGRP routers that belong to that protocol suite
- Building and translating protocol specific packets for DUAL
- Interfacing DUAL to the protocol specific routing table
- Computing the metric and passing this information to DUAL; DUAL handles only the picking of the feasible successors (FSs)
- Implementing filtering and access lists
- Performing redistribution functions to/from other routing protocols
Q. What are the various load-balancing options available in EIGRP?
A. The offset-list can be used to modify the metrics of routes that EIGRP learns through a particular interface, or PBR can be used.
Q. What does the %DUAL-5-NBRCHANGE: IP-EIGRP(0) 100: Neighbor 10.254.0.3 (Tunnel0) is down: holding time expired error message mean?
A. This message indicates that the router has not heard any EIGRP packets from the neighbor within the hold-time limit. Because this is a packet-loss issue, check for a Layer 2 problem.
Q. Is there an IPv6 deployment guide that includes EIGRPv6?
Q. From the 16:29:14.262 Poison squashed: 10.X.X.X/24 reverse message, what does poison squashed mean?
A. The router threads a topology table entry as a poison in reply to an update received (the router sets up for poison reverse). While the router is building the packet that contains the poison reverse, the router realizes that it does not need to send it. For example, if the router receives a query for the route from the neighbor, it is currently threaded to poison. Thus, it sends the poison squashed message.
Q. Is it normal that EIGRP takes over 30 seconds to converge?
A. EIGRP taking longer to converge under heavy CPU usage is a normal behavior. EIGRP convergence is faster when you lower the hold time. The lowest values for hello and hold time are 1 second and 3 seconds respectively. For example:
Router(config)# interface Fa0/0
!--- Under an interface directly connected to EIGRP peers.
!--- Syntax: ip hello-interval eigrp <as-number> <seconds> (AS 1 assumed here).
Router(config-if)# ip hello-interval eigrp 1 1
Router(config-if)# ip hold-time eigrp 1 3
Note: Make sure that the hold time is changed on both ends.
For more information on EIGRP performance-related issues, refer to How to resolve EIGRP performance problems.
More Related Topics:
As public cloud SLAs take heat from analysts, some enterprises say virtual private clouds offer the right mix of cloud agility and managed services reliability.
A virtual private cloud (VPC) offers on-demand Infrastructure as a Service (IaaS) external to a customer's data center, but it runs on a dedicated infrastructure, rather than a multi-tenant infrastructure. It is usually connected to each customer using a virtual private network (VPN) or another direct network connection, rather than the public Internet.
As such, a virtual private cloud can offer higher service-level agreements (SLAs) than public clouds, contracting for up to 100% uptime in some cases.
Finding the SLA that's Just Right
Some purists might consider this managed hosting rather than cloud computing, but these distinctions aren't relevant to customers such as Taylor Erickson, vice president of IT at Lanx Inc., a company that specializes in spinal care and surgical products in Bloomfield, Colo.
Lanx moved its SAP application and Active Directory to a virtual private cloud hosted by Virtustream Inc. last fall. Virtustream's xStream virtual private cloud gives the company a five-nines (99.999%) uptime SLA. Penalties kick in below 99.949% uptime; Lanx negotiated the terms with the help of an analyst firm that reviewed the contract, Erickson said.
When choosing between Virtustream's xStream VPC and a public cloud provider Erickson declined to name, the virtual private cloud SLA was just one of the reasons the company chose Virtustream.
In fact, enterprise managed hosting providers such as ViaWest and Hosting.com tend to offer 100% uptime SLAs, but Virtustream's demonstrated expertise at hosting SAP appealed to Lanx, as did Virtustream's cost, which can be as low as half that of such services.
And 99.999% uptime was still more than the company might have been able to provide on its own. For example, a week after the company's migration, an air conditioning unit in Lanx's building failed, and the server room temperature soared to 98 degrees.
"But our mission-critical SAP was up and going because we'd migrated to a cloud provider," Erickson said.
Virtual Private Cloud a Happy Medium between Public and Private Cloud
Other users say public cloud, which tends to be the lowest-cost and most elastic of all service types, has undeniable appeal, but that using it requires very careful planning.
"You can never take [public cloud] off the table," said Dave Robbins, senior vice president and CIO of Ellie Mae, maker of an electronic loan origination platform and based in Pleasanton, Calif. "But if you're going to do it, what's your architecture and strategy to do it?"
Just carving out public cloud IaaS space without respect for regional diversity or how to get an ecosystem in place to exploit application delivery can be very low cost, but it's very low value as well, according to Robbins.
"It's a more complicated picture than most people think through," he said. "You have to look at the entire architecture."
In the meantime, Ellie Mae has found a happy medium in a Tier 3 Inc. virtual private cloud, tied in to an on-premises FlexPod environment that uses Cloupia, now owned by Cisco Systems Inc.
The company used space on Tier 3's infrastructure last year as it migrated from an older infrastructure to a new one built on FlexPods while simultaneously launching new products and services. Some production applications ran in Tier 3 as this process took place, and the company also uses Tier 3's VPC for QA and test systems.
VPCs Bridge a Disconnect between Public Cloud SLAs and Enterprise Expectations
Some SLAs are cryptic, but the bigger problem is the typical enterprise customer's disconnect between what they normally get from hosting providers and managed service providers and what they're going to get from public cloud, said James Staten, analyst with Forrester Research.
"We're all used to pushing a hoster over a barrel to get what we want. We get that, but they custom configure the environment just for us and they sign us up for a three-year commitment," he said.
Customers pursuing public cloud services tend not to want to be locked in to such commitments, and in some cases using a standardized service is going to be preferable to one custom-managed for the user, Staten said. But in these cases, the SLA is going to be lower.
Article written by Beth Pariseau from
More Related Networking News and Tips:
STP is vital for preventing loops within a switched network. Spanning tree works by designating a common reference point (the root bridge) and systematically building a loop-free tree from the root to all other bridges. All redundant paths remain blocked unless a designated link fails. The following criteria are used by each spanning tree node to select a path to the root bridge:
- Lowest root bridge ID - Determines the root bridge
- Lowest cost to the root bridge - Favors the upstream switch with the least cost to root
- Lowest sender bridge ID - Serves as a tie breaker if multiple upstream switches have equal cost to root
- Lowest sender port ID - Serves as a tie breaker if a switch has multiple (non-Etherchannel) links to a single upstream switch
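Because each criterion is "lowest wins" and evaluated strictly in order, the decision maps naturally onto tuple comparison. A toy Python model (the ID and cost values here are made up for illustration):

```python
# Each received BPDU summarized as a tuple in STP's tie-break order:
# (root bridge ID, cost to root, sender bridge ID, sender port ID).
# Python compares tuples element by element, so min() applies the
# four criteria in exactly the order listed above.
bpdus = [
    (0x8001AAAAAAAAAAAA, 8, 0x8001BBBBBBBBBBB1, 0x8001),
    (0x8001AAAAAAAAAAAA, 4, 0x8001BBBBBBBBBBB2, 0x8002),
    (0x8001AAAAAAAAAAAA, 4, 0x8001BBBBBBBBBBB2, 0x8001),  # best path
]
best = min(bpdus)
print(hex(best[3]))  # the sender port ID that broke the final tie
```

Here the root bridge IDs match and two candidates tie on cost and sender bridge ID, so the lowest sender port ID decides, just as in the fourth criterion above.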
We can manually configure the priority of a switch and its individual interfaces to influence path selection. The values given below are defaults.
Switch(config)# spanning-tree vlan 1 priority 32768
Switch(config)# interface g0/1
Switch(config-if)# spanning-tree vlan 1 port-priority 128
So where do these configured STP priorities come into play? There is no BPDU field for priority; instead, both bridge and port IDs have their administratively configured priorities embedded in them. Note the Bridge Identifier and Port Identifier fields in this Wireshark capture of a PVST+ BPDU:
Although Wireshark's protocol dissector conveniently splits the bridge ID field into a bridge priority and a MAC address for us, it is actually a single eight-byte value. The following field, which contains the port ID unique to each interface, is similarly composed but at one-fourth the size (two bytes).
Because this switch is running PVST+, the VLAN ID (1) is added to the configured bridge priority of 32768 (the default priority) for a sum of 32769. The unique bridge MAC address is appended to this value to form the complete bridge ID. Similarly, the port ID is formed by encoding the configured port priority (the default of 128) in the upper four bits as 0x8 (port priorities are multiples of 16) and placing the 12-bit port number in the remainder, which happens to be 0x001 because we are connected to the first physical switchport. These two values form the complete port ID of 0x8001.
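This composition can be reproduced in a few lines of Python. Field widths follow the 802.1t extended system ID scheme; the MAC address value is made up:

```python
def pvst_bridge_id(priority=32768, vlan=1, mac=0x001A2B3C4D5E):
    # 16-bit priority field = configured priority + VLAN ID (extended
    # system ID), followed by the 48-bit bridge MAC address.
    return ((priority + vlan) << 48) | mac

def port_id(port_priority=128, port_number=1):
    # Upper 4 bits hold priority // 16 (port priorities are multiples
    # of 16); the lower 12 bits hold the port number.
    return ((port_priority // 16) << 12) | port_number

print(hex(pvst_bridge_id() >> 48))  # 0x8001 -> decimal 32769
print(hex(port_id()))               # 0x8001
```

Note how both the bridge ID's priority field and the default port ID come out to 0x8001, which is why the two values look alike in a capture despite being different fields.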
More Networking Tips:
There are many tested IPv6 networks deployed across the world. Before an actual deployment, however, a company needs to ensure that the vendors who support its network offer the requisite IPv6 enhancements.
There are two categories of IPv6 enhancements. The first is the set that supports the packet forwarding (more commonly referred to as routing) process and the other set comprises enhancements that support the computing or host infrastructure.
IPv6 enhancements in the first category include larger address formats (which affect routing table size and structure), updated routing protocols such as Open Shortest Path First (OSPF) and the Routing Information Protocol (RIP), and good support for optional extension headers (which streamline the packet forwarding process) such as the Routing Header. The second category comprises enhancements to the Domain Name System (DNS), the stateless auto-configuration (plug and play) process, upgraded security, and updates to the Application Programming Interfaces (APIs).
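The API updates in the second category show up directly in application code. For instance, the protocol-agnostic getaddrinfo interface lets the same program resolve IPv4 or IPv6 addresses without hard-coding either family; a minimal Python sketch:

```python
import socket

# getaddrinfo returns candidate (family, address) tuples for whichever
# IP versions the host supports, so applications no longer need to
# hard-code AF_INET the way pre-IPv6 socket code did.
for family, _, _, _, sockaddr in socket.getaddrinfo(
        "localhost", 80, type=socket.SOCK_STREAM):
    label = "IPv6" if family == socket.AF_INET6 else "IPv4"
    print(label, sockaddr)
```

On a dual-stack host this prints both an IPv6 (::1) and an IPv4 (127.0.0.1) candidate for the same name.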
Keeping these requisite enhancements in mind, let us now discuss what kind of support ten of the premier networking vendors are equipped to provide:
The UNIX-based OS X operating system from Apple Computer, built on an open source BSD foundation, provides advanced BSD networking, a TCP/IP stack, and advanced sockets. Versions 10.2 and later of this operating system provide good support for IPv6.
Cisco has been actively involved in the development of IPv6 and provides very good support for it across all its products. The IOS 12 documentation has extensive details of the IPv6 features supported on each platform, such as automatic and configured tunneling, BGP extensions for IPv6, path MTU discovery, neighbor discovery, updated routing protocols, and stateless auto-configuration.
The new HP-UX 11i provides support for several IPv6 features such as automatic and configured tunnels, advanced and basic sockets application programming interfaces (APIs), IPv4/IPv6 dual stack protocols, Path Maximum Transmission Unit (PMTU) Discovery, and Stateless Auto-configuration. HP-UX 11i runs over InfiniBand, FDDI, and Ethernet links.
The GR2000 carrier-class gigabit routers from Hitachi provide IPv6 at forwarding rates of a maximum of 26 Mpps and maximum line rates of 2.4 Gbps. The custom Application Specific Integrated Circuits (ASICs) of this system have a dual stack IPv4/IPv6 architecture and support packet filtering, IPv6 over IPv4 and IPv4 over IPv6 tunneling, and Stateless Auto-configuration among other IPv6 features.
Since the release of the IPv6-enabled AIX system in 1997, IBM has shown support for IPv6 and has continually added IPv6 support to its products, such as DB2 9.1 for Linux, UNIX, and Windows.
The IPv6 protocols for Linux are developed by a volunteer-run collaborative effort referred to as the Universal Playground for IPv6 (USAGI). This project was undertaken to remove the bugs in Linux implementations that made it difficult for a Linux-based system to conform to the IPv6 specifications.
Naturally, when all vendors are providing support for IPv6, Microsoft cannot be far behind. Most of the new versions of the Windows operating system, including Windows Vista, Windows Server Code, Windows Server 2003, and Windows CE .NET have built-in IPv6 enhancements and facilitate an orderly transition from IPv4 to IPv6.
Nortel Networks has been working toward providing IPv6 support since the 1990s. The most recent generation of Nortel's Ethernet Routing Switch 8600 offers wire-speed, terabit performance. Nortel products also provide other IPv6 enhancements such as IPv6 multicast, IPv4-to-IPv6 tunneling, neighbor discovery, and stateless auto-configuration.
The IP on NetWare that comes with NetWare 6.5 uses IPv6 as the native transport protocol on its server platform. The IPv6 features supported by Novell include Automatic and Configured tunneling, Basic Socket Interface Extensions, Neighbor Discovery, Stateless Address Auto-configuration, and Transmission Mechanisms for hosts and routers. Please note that with Novell, IPv6 works as an add-on component to the existing TCP/IP protocol stack.
The Solaris 10 operating system by Sun Microsystems offers support for important IPv6 programming interfaces and specifications. It offers the advantage of Internet Key Exchange (IKE), which lets systems connect by using authentication and encryption, and integrated IP Security (IPsec). This vendor also facilitates dual stack tunneling, such as IPv6 over IPv4 and vice versa. For more details on the IPv6 support provided by a specific vendor, visit the IPv6 section on the vendor website or refer to system documentation specific to the vendor.
More Networking Tips:
“Router Switch”, Our New Company Landing in the U.S.
---Professional Cisco Supply Service is Around You
As router-switch.com founded its branch office in the U.S., it also welcomed its 10th anniversary in 2012. From “a small potato” to “a big apple”, router-switch.com has made a great effort to realize its goals one by one. In fact, becoming a leading Cisco supplier around the world is not an easy task; the first and most important step is to build a strong team (professional salesmen, pre-sales and after-sales service, free CCIE technical support, and creative marketing staff).
The year 2012 is meaningful to people all over the world (haha, because the movie 2012 told us 2012 would be the end of the world), and to router-switch.com as well. Besides celebrating its 10-year birthday, router-switch.com has prepared a lot of gifts for its regular and new clients, such as an album telling its history and achievements, bigger discounts on popular Cisco equipment (Cisco routers, Cisco switches, Cisco wireless APs, etc.), a new version of its official website, and more collaboration with Cisco technical support units. To serve customers better, the important next step for router-switch.com is to become more local. So “Router Switch” was born, as the times require.
With the foundation of “Router Switch” in the U.S., its localization service will be strengthened. A professional local team will offer sincere service (pre-sales Cisco buying consultation, upgrades of purchased Cisco hardware, free CCIE technical support, etc.) to regular and new clients.
Main Events over the Past 10 Years
What has router-switch.com achieved in the past 10 years?
Since 2002, router-switch.com has experienced rapid development, with sales volume maintaining 70% growth per year.
In 2004, a CCIE technical support team was built to meet clients' growing technical requirements.
In 2007, it established a marketing department to spread its reputation and gather the freshest market information for its Cisco business.
In 2008, the most advanced management tools were adopted to greatly improve efficiency.
In 2012, it is making a great effort to become the largest online Cisco reseller worldwide.
“Router Switch”, a Just New Start
Router-switch.com has accomplished its goals by earning customers' trust, pursuing not only globalization but also more localization and a more human touch.
More Router-switch.com Info you can see
With a Cisco Self-Defending Network, security is integrated into the network, throughout the infrastructure and protecting each endpoint. This approach is:
- Integrated: Every element in the network acts as a point of defense
- Adaptive: Innovative behavioral methods automatically recognize and adapt to new types of threats as they arise
- Collaborative: Various network components work together to provide new means of protection
Multifunction Security Management
Cisco ASA 5500 Series Adaptive Security Appliances
Cisco ASA 5500 Series Adaptive Security Appliances are easy-to-deploy solutions that integrate world-class firewall, Unified Communications (voice/video) security, SSL and IPSec VPN, intrusion prevention (IPS), and content security services in a flexible, modular product family. Designed as a key component of the Cisco Self-Defending Network, the Cisco ASA 5500 Series provides intelligent threat defense and secure communications services that stop attacks at the perimeter before they impact business continuity.
The CSC-SSM module, which fits in an ASA, provides comprehensive antivirus, anti-spyware, file blocking, anti-spam, anti-phishing, URL filtering, and content filtering.
Intrusion Prevention System (IPS)
An integral part of the Cisco Self-Defending Network and Cisco Threat Control solutions, the Cisco Intrusion Prevention System (IPS) provides end-to-end protection for your network. This inline, network-based defense can identify, classify, and stop known and unknown threats, including worms, network viruses, application threats, system intrusion attempts, and application misuse. The appliances provide a range of performance, from 80 Mbps up to 8 Gbps. IPS works from the latest signature database, and these signatures describe malicious traffic patterns. Signature updates are a yearly subscription service covered by a Cisco contract. IPS can be deployed in two ways:
IPS Module within ASA firewall
IPS features are also available on the ASA by using the AIP-SSM module. It monitors and prevents malicious traffic passing through the ASA to the internal network.
The standalone IPS appliance is suitable for handling one or more networks, with its ports configurable as inline pairs. If anti-X (CSC-SSM) is deployed in the ASA, then the IPS module cannot be deployed alongside it, and one has to rely on the IPS appliance for intrusion prevention.
Note: Future versions of ASA will support Anti-X & IPS functionality.
The IronPort email security appliance is the world’s leading email security appliance in the Cisco security portfolio. It is ideally placed between the firewall and the email server, where it acts as a “shock absorber” for all incoming mail.
IronPort email security appliances use multi-layer filtering technology, which includes reputation-based and context-based filtering.
6500 chassis based FWSM module
The Cisco Catalyst 6500 Series Firewall Services Module (FWSM) fits in the 6500 chassis, allowing customers to benefit from industry-leading innovations, including:
- Leading scalability and performance
100,000 connections/sec and 2.8 million pps
- Unprecedented security protection at Layers 2–7
Private VLAN integration between the FWSM and the Cisco Catalyst 6500 Series for ease of policy deployment
Advanced firewall capabilities, including application and protocol inspections
- Every port within the chassis becomes a security port
Every FWSM works in tandem with other modules in the chassis to deliver robust security throughout the entire chassis.
- New services can be deployed with minimal operational complexity. The integrated approach of the Cisco FWSM combines virtualization and high availability, and solutions are enhanced through complementary functions.
End point security
Cisco Security agent
Cisco Security Agent is the first endpoint security solution that combines zero-update attack protection, data loss prevention, and signature-based antivirus in a single agent. This unique blend of capabilities defends servers and desktops against sophisticated day-zero attacks and enforces acceptable-use and compliance policies within a simple management infrastructure. Cisco Security Agent also comes with the ClamAV antivirus engine to provide protection against viruses.
Network Admission Control
NAC provides complete admission control over the network. Cisco Network Admission Control (NAC) allows only compliant and trusted endpoints with predefined security postures, such as PCs, servers, and PDAs, onto the network, restricting the access of noncompliant devices and thereby limiting the potential damage from emerging security threats and risks.
Monitoring, Analysis and Response System (MARS)
An appliance-based solution that correlates data from across the enterprise and uses your existing network and security investments to identify, isolate, and recommend precision removal of offending elements. MARS, when used in conjunction with Cisco IPS Sensor software v5, provides a total collaborative solution, protecting your entire network infrastructure from attacks, viruses, worms, and other malicious traffic.
Cisco Security Manager
Cisco Security Manager is an enterprise-class management application designed to configure firewall, VPN, and intrusion prevention (IPS) security services on Cisco network and security devices. Cisco Security Manager can be used in networks of all sizes, from small networks to large networks consisting of thousands of devices, by using policy-based management techniques. Cisco Security Manager works in conjunction with the Cisco Security Monitoring, Analysis, and Response System (MARS). Used together, the two products provide a comprehensive security management solution that addresses configuration management, security monitoring, analysis, and mitigation.
More Network Security Info and Tips: http://blog.router-switch.com/category/networking-2/
Mobile Cloud Traffic to Account for 71 Percent, or 7.6 Exabytes per Month, of Total Mobile Data Traffic by 2016, Compared to 45 Percent, or 269 Petabytes per Month, in 2011
According to the Cisco Visual Networking Index (VNI) Global Mobile Data Traffic Forecast for 2011 to 2016, worldwide mobile data traffic will increase 18-fold over the next five years, reaching 10.8 exabytes per month — or an annual run rate of 130 exabytes — by 2016.
The expected sharp increase in mobile traffic is due, in part, to a projected surge in the number of mobile Internet-connected devices, which will exceed the number of people on earth (2016 world population estimate of 7.3 billion; source: United Nations). During 2011-2016, Cisco anticipates that global mobile data traffic will grow three times faster than global fixed data traffic.
The forecast predicts an annual run rate of 130 exabytes of mobile data traffic, equivalent to:
33 billion DVDs.
4.3 quadrillion MP3 files (music/audio).
813 quadrillion short message service (SMS) text messages.
An exabyte is a unit of information or computer storage equal to 1 quintillion bytes.
This mobile data traffic increase represents a compound annual growth rate (CAGR) of 78 percent spanning the forecast period. The incremental amount of traffic being added to the mobile Internet between 2015 and 2016 alone is approximately three times the estimated size of the entire mobile Internet in 2012. The following trends are driving these significant increases:
1. More Streamed Content: With consumer expectations increasingly favoring on-demand or streamed content over simply downloaded content, mobile cloud traffic will grow 28-fold from 2011 to 2016, a CAGR of 95 percent.
2. More Mobile Connections: There will be more than 10 billion mobile Internet-connected devices in 2016, including machine-to-machine (M2M) modules — exceeding the world’s projected population at that time of 7.3 billion. (One M2M application is the use of wireless networks to update digital billboards. This allows advertisers to display different messages based on time of day or day-of-week and allows quick global changes for messages, such as pricing changes for gasoline).
3. Enhanced Computing of Devices: Mobile devices are becoming more powerful and thus able to consume and generate more data traffic. Tablets are a prime example of this trend, generating traffic levels that will grow 62-fold from 2011 to 2016, the highest growth rate of any device category tracked in the forecast. The amount of mobile data traffic generated by tablets in 2016 (1 exabyte per month) will be four times the total amount of monthly global mobile data traffic in 2010 (237 petabytes per month).
4. Faster Mobile Speeds: Mobile network connection speed is a key enabler for mobile data traffic growth. More speed means more consumption, and Cisco projects mobile speeds (including 2G, 3G and 4G networks) to increase nine-fold from 2011 to 2016.
5. More Mobile Video: Mobile users want the best experiences they can have and that generally means mobile video, which will comprise 71 percent of all mobile data traffic by 2016.
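The forecast's headline numbers are internally consistent, as a quick arithmetic check shows. The 2011 baseline of roughly 0.6 exabytes per month is implied by the stated 18-fold growth to 10.8:

```python
monthly_2016 = 10.8                           # exabytes per month, 2016 forecast
growth_factor = 18                            # 18-fold increase over five years
monthly_2011 = monthly_2016 / growth_factor   # ~0.6 EB/month implied baseline

cagr = growth_factor ** (1 / 5) - 1           # compound annual growth rate
annual_run_rate = monthly_2016 * 12           # ~130 exabytes per year

print(round(cagr * 100))       # 78 (percent), matching the stated CAGR
print(round(annual_run_rate))  # 130 (exabytes), matching the annual run rate
```

In other words, the 78 percent CAGR and the 130-exabyte annual run rate both follow directly from the 18-fold, five-year growth figure.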
The Cisco study also projects that 71 percent of all smartphones and tablets (1.6 billion) could be capable of connecting to an Internet Protocol version 6 (IPv6) mobile network by 2016. From a broader perspective, 39 percent of all global mobile devices (more than 4 billion), could be IPv6-capable by 2016.
Impact of Mobile Devices/Connections
a. The increasing number of wireless devices and nodes accessing mobile networks worldwide is the primary contributor to traffic growth. By 2016, there will be more than 8 billion handheld or personal mobile-ready devices and nearly 2 billion machine-to-machine connections, such as GPS systems in cars, asset tracking systems in shipping and manufacturing sectors and medical applications for making patient records more readily available.
b. Smartphones, laptops and other portable devices will drive about 90 percent of global mobile data traffic by 2016.
c. M2M traffic will represent 5 percent of 2016 global mobile data traffic while residential broadband mobile gateways will account for the remaining 5 percent of global mobile data traffic.
---Original resources from m2mworldnews.com
More Cisco News:
The ISO, International Organization for Standardization is the Emily Post of the network protocol world. Just like Ms. Post, who wrote the book setting the standards or protocols for human social interaction, the ISO developed the OSI model as the precedent and guide for an open network protocol set. Defining the etiquette of communication models, it remains today the most popular means of comparison for protocol suites.
The OSI layers are defined from the top down as follows:
- The Application layer
- The Presentation layer
- The Session layer
- The Transport layer
- The Network layer
- The Data Link layer
- The Physical layer
Cisco Hierarchical Model
Hierarchy has many of the same benefits in network design that it does in other areas of life. When used properly, it makes networks more predictable. It helps us define at which levels of hierarchy we should perform certain functions. Likewise, you can use tools such as access lists at certain levels in hierarchical networks and avoid them at others.
Large networks can be extremely complicated, with multiple protocols, detailed configurations, and diverse technologies. Hierarchy helps us summarize a complex collection of details into an understandable model. Then, as specific configurations are needed, the model dictates the appropriate manner to apply them.
The Cisco hierarchical model can help you design, implement, and maintain a scalable, reliable, cost-effective hierarchical internetwork.
The following are the three layers:
- The Core layer or Backbone
- The Distribution layer
- The Access layer
Each layer has specific responsibilities. Note, however, that the three layers are logical and are not necessarily physical devices. Consider the OSI model, another logical hierarchy. Its seven layers describe functions but not necessarily protocols. Sometimes a protocol maps to more than one layer of the OSI model, and sometimes multiple protocols communicate within a single layer. In the same way, when we build physical implementations of hierarchical networks, we may have many devices in a single layer, or we might have a single device performing functions at two layers. The definition of the layers is logical, not physical.
Now, let's take a closer look at each of the layers.
The Core Layer
The core layer is the backbone of the internetwork. At the top of the hierarchy, the core layer is responsible for transporting large amounts of traffic both reliably and quickly. The only purpose of the network's core layer is to switch traffic as fast as possible. The traffic transported across the core is common to a majority of users. However, remember that user data is processed at the distribution layer, which forwards requests to the core only if needed.
If there is a failure in the core, every user can be affected. Therefore, fault tolerance at this layer is an issue. The core is likely to see large volumes of traffic, so speed and latency are driving concerns here. Given the function of the core, we can now consider some design specifics. Let's start with something we don't want to do.
- Don't do anything to slow down traffic. This includes using access lists, routing between virtual local area networks, and packet filtering.
- Don't support workgroup access here.
- Avoid expanding the core when the internetwork grows. If performance becomes an issue in the core, give preference to upgrades over expansion.
Now, there are a few things that we want to do as we design the core. They include the following:
- Design the core for high reliability. Consider data-link technologies that facilitate both speed and redundancy, such as FDDI, Fast Ethernet, or even ATM.
- Design with speed in mind. The core should have very little latency.
- Select routing protocols with lower convergence times. Fast and redundant data-link connectivity is no help if your routing tables are shot.
The Distribution Layer
The distribution layer is sometimes referred to as the workgroup layer and is the major communication point between the access layer and the core. The primary function of the distribution layer is to provide routing, filtering, and WAN access and to determine how packets can access the core, if needed.
The distribution layer must determine the fastest way that network service requests are handled; for example, how a file request is forwarded to a server. After the distribution layer determines the best path, it forwards the request to the core layer. The core layer then quickly transports the request to the correct service.
The distribution layer is the place to implement policies for the network. Here you can exercise considerable flexibility in defining network operation. There are several items that generally should be done at the distribution layer such as:
- Implementation of tools such as access lists, packet filtering, and queuing
- Implementation of security and network policies including firewalls
- Redistribution between routing protocols, including static routing
- Routing between VLANs and other workgroup support functions
- Definitions of broadcast and multicast domains
Things to avoid at this layer are limited to those functions that exclusively belong to one of the other layers.
The Access Layer
The access layer controls user and workgroup access to internetwork resources. The access layer is sometimes referred to as the desktop layer. The network resources most users need will be available locally. The distribution layer handles any traffic for remote services.
The following are some of the functions to be included at the access layer:
- Continued access control and policies
- Creation of separate collision domains
- Workgroup connectivity into the distribution layer through layer 2 switching
Technologies such as DDR and Ethernet switching are frequently seen in the access layer. Static routing is seen here as well. As already noted, three separate layers do not imply three separate routers. There could be fewer, or there could be more. Remember, this is a layered approach.
---Original Resource from tech-faq.com
More Related Cisco Network Readings: