Cisco & Cisco Network Hardware News and Technology

Cisco CEO John Chambers’ Almost Excellent Adventure

September 28 2012 , Written by Cisco & Cisco Router, Network Switch Published on #Cisco News

Cisco CEO John Chambers had a very active week, making the rounds with the daily business press to address such topics as his retirement, rival HP and suggestions for the next President of the United States. His week ended with the disclosure that his compensation dropped 9%, based on Cisco's stock price and its challenges in facing up to - and down - competition and the macro economy.

 

Early in the week, Chambers told Bloomberg reporters about his possible retirement plans in two to four years, and who might succeed him. The roster has 10 names attached to it, including Robert Lloyd, executive vice president of worldwide operations; Chuck Robbins, senior vice president of the Americas; and Edzard Overbeek, senior vice president of global services. And if he should get hit by a bus before he retires, Chambers says he would be replaced by COO Gary Moore.

 

As the week progressed, Chambers had some insights on the daunting task facing HP CEO Meg Whitman as she attempts to turn that company around after a flurry of CEO turnover and some badly managed multibillion-dollar acquisitions. To Reuters, Chambers said: "There's not been a company ever turned around by the fifth CEO on the job," referring to the revolving door recently installed in HP's corner office. He added that the Silicon Valley pioneer might have a hard time catching up to the rest of the industry in cloud and tablet computing. Huh? Cius anyone?

 

Later that day, Chambers told Reuters what he thinks the next president should do - take a page from the book of former President Bill Clinton. Chambers is a "strong Republican" and supporter of Mitt Romney, but said Clinton worked with businesses to generate jobs and growth in real income for families, while taking the budget from a deficit to a surplus.

 

"And when business got out of line, he smacked them," Chambers told Reuters, adding that Clinton was "the most effective president during my lifetime."

 

A day later, Chambers and Cisco bought ThinkSmart Technologies, an Irish developer of software for Wi-Fi location services. ThinkSmart's products will help enterprises and service providers gauge customer traffic in a variety of venues and then increase customer service representation based on that traffic.

NOTE: Cisco Acquires ThinkSmart Technologies for Location Analytics

Chambers then went back to Bloomberg to discuss Cisco's strategy and the need of the U.S. to overhaul its tax code.

 

For all of this activity and exposure, Chambers' week ended with a 9% cut in pay. He lost $1.2 million in compensation - from $12.9 million in Cisco's fiscal 2011 to $11.7 million in fiscal 2012 - due to growth concerns weighing down Cisco stock. Cisco shares fell 2% in fiscal 2012, which ended July 28.

--Original news from networkworld.com

More Related Reading:

Cisco CEO Plans Retirement, Succession Race Officially Begins


Cisco Celebrates Ongoing Role in Australian National Broadband Network Build

September 25 2012 , Written by Cisco & Cisco Router, Network Switch Published on #Cisco News

September 25, 2012, Cisco announced that it has been selected by NBN Co to provide equipment for its national data connectivity network. Established by the Australian federal government, NBN Co will design, build and operate Australia's wholesale-only, high-speed broadband network, the National Broadband Network (NBN).

After a competitive tender, Cisco was awarded the contract to supply Multiprotocol Label Switching (MPLS) on Cisco routers and firewalls to support NBN Co's National Connectivity Network (NCN), in a deal estimated to be worth up to $38 million over five years. The Cisco intelligent network technology will be deployed in NBN Co's depots, aggregation nodes and other key locations, and will support the Operations Support System (OSS) and Business Support System (BSS) platforms, signalling and timing.

In particular, the NCN will play a key role in activating and assuring services as homes and businesses across Australia connect onto the NBN.  Cisco will commence work on the five-year project immediately.

NBN Co's executive general manager of Network Architecture and Technology, Tony Cross, said: "We are pleased to extend our relationship with Cisco as a supplier of data centre infrastructure to include the supply of a high-performing, highly secure network foundation composed of switches, routers and firewalls to support our NCN."

"This equipment will enable communication between NBN Co's centralised operational staff and IT systems with the fibre, fixed wireless and satellite equipment situated at various locations across Australia. The remote control of this equipment, via the NCN, will allow new services to be activated and faults to be diagnosed and repaired quickly and efficiently across Australia," said Cross.

 

The win formally expands Cisco's role in the development of the NBN. In late 2010, a consortium led by Cisco was awarded the contract to provide NBN Co with a data centre platform. Together with its technology partners EMC and VMware, Cisco was commissioned to build a scalable platform to run essential applications, including network management, inventory management, customer care, billing, service provisioning and fulfilment systems, as well as a web portal to support customer self-service and the corporate website.

 

Sam Gerner, director of Service Provider and Cloud for Cisco Australia and New Zealand, said, "We are very pleased with the success of the data centre deployment at NBN Co to date. This has been an important project for both organisations and has enabled industry-leading data centre capabilities for NBN Co's significant scale and flexibility requirements. We are thrilled to be selected as preferred partner in the building of the NCN and we look forward to continuing our involvement with NBN Co on this exciting journey."

---Original Press Release from Cisco’s newsroom

More Cisco News and Info you can visit: Blog.router-switch.com

News, tutorials, tips, info & thoughts on Developments in the Cisco, Cisco network, IT, Software & Network Hardware Industry

More Detailed Cisco News:

Cisco Intros Cisco Nexus 3548 for High Performance Data Center Environments

Cisco, VMware Doing Further on Next-gen Cloud Infrastructure

Cisco Expo South Africa to be Held at Sun City in 2013


Cisco delivers new wave of security solutions to defend fast-evolving data centers

September 21 2012 , Written by Cisco & Cisco Router, Network Switch Published on #Cisco News

Cisco introduced a set of security solutions designed to protect data centers in the Middle East against the threats they face in moving toward more consolidated and virtualized environments, while also enabling businesses to take advantage of new cloud-based models. 70% of the world’s Internet traffic and 35% of the world’s email traffic flow through Cisco networks. This enables Cisco to gain intelligence from throughout the network and make more informed security decisions, placing it in the best position to see and protect against threats before they affect customers’ networks.

 

Collectively, the new offerings extend data center and security professionals’ power to enforce end-to-end security for high-capacity data centers and mobile workforces. The offerings include new highly scalable software for the world’s most widely deployed firewall, the Cisco Adaptive Security Appliance (ASA) line; virtualized ASA for multi-tenant environments; data center-grade intrusion prevention system (IPS); as well as new improvements to the Cisco AnyConnect Secure Mobility Client to meet the stringent requirements of a more mobile and productive workforce. 

The virtualization and cloud mega trend is forcing profound shifts within data centers, affecting everything from IT services to business models to architectures. According to recent Cisco reports:

  • Nearly a 3,000% increase in application traffic and network connections per second by 2015.
  • More than 50% of data center workloads will be virtualized by 2013.
  • Employees use an average of three mobile devices each on enterprise networks.

 

Business leaders are embracing these trends and using them to take their data center operations to the next level. If addressed properly, these trends offer business benefits such as reduced capital investments, new revenue growth, and the greater efficiency, agility and scalability demanded by globalization.

 

With this announcement, Cisco is helping security keep pace with the demands of changing high-performance virtual and cloud environments, as well as the demands of increased complexity, compliance and employees bringing their own devices to work, among other trends. As they grow to the next level, data centers have the following security requirements to support their changing needs:

Scalable Security:  The amount of data and transactions moving through most data centers requires ever-increasing levels of performance.  Security must have the ability to scale to meet these seemingly insatiable performance requirements, while ensuring the highest levels of security.

Physical & Virtual:  Modern-day data centers are no longer composed solely of physical deployments.  Instead, they are a mixture of physical, virtual, and cloud infrastructures – built to solve the business’ specific needs.  Security policies must be able to work consistently across hybrid environments.

Business Integration:  While security is certainly important to data center administrators, it isn’t their only concern.  They must also focus on maintaining business/IT alignment and avoiding chokepoints that can degrade performance and jeopardize their SLAs.  Security needs to be an integral part of the network architecture, so that it can help maintain business/IT alignment, avoid performance chokepoints, and enable business flexibility. 

 

Operating under the principle that security must be integrated across the network to ensure protection of unified data centers, Cisco believes network policies must be unified across physical and virtual worlds, intra-virtual machine communication should be secured, and access to applications by wired and mobile clients must be protected. This security approach has become imperative as customers look to make the migration to cloud and a more flexible device-agnostic corporate culture. Cisco’s latest product developments support such an approach. 

 

The new security solutions announced today include:

ASA 9.0 Platform: Major update to the operating system.

Cisco ASA 1000V: Mainstream ASA technology optimized for virtual/cloud environments.

IPS 4500 Series: A new intrusion prevention system (IPS) built for data centers.

Cisco Security Manager 4.3:  Cisco Security Manager (CSM) provides scalable, centralized management.

Cisco AnyConnect 3.1: Enables secure remote access to network resources.

Security Services: Professional and support services, from Cisco and its partners. 

 

Mark Hosking, Data Center and Virtualization Lead for the Middle East:

“For enterprises to confidently seize the business benefits offered by data center virtualization and the cloud, security must be seen as the art of the possible, not as a hindrance. As with the rest of your network, we make consistent security a deployment decision that enables policies to work throughout hybrid environments—physical, virtual and cloud—and enables data center professionals to securely deliver IT-as-a-Service without impeding network performance.”

 

---Original reading from http://www.albawaba.com/business/pr/cisco-security-solutions-442453

More Cisco News you can visit: http://blog.router-switch.com/ such as Cisco Intros Cisco Nexus 3548 for High Performance Data Center Environments


Huawei Previews Cisco-killin' E9000 Modular System

September 18 2012 , Written by Cisco & Cisco Router, Network Switch Published on #Cisco News

Chinese telecom giant and increasingly important server player Huawei Technologies is moving from racks and blades into modular designs that use a mix of both approaches – and look very much like modular kit from Cisco Systems, IBM, and Hitachi, as well as the newer bladish iron from HP and Dell.

 

The likeness between the forthcoming Huawei servers and IBM and Hitachi machines announced back in April is enough to make you wonder if Huawei is actually manufacturing those companies' respective Flex System and Compute Blade 500 machines.

 

Huawei isn't – as far as we know – but as El Reg pointed out when Hitachi announced the CB500 machines, it sure does look like IBM and Hitachi are tag-teaming on manufacturing for modular systems. Possibly by using the same ODM to bend the metal and make the server node enclosures, perhaps?

 

The distinction between a blade and a modular system is a subtle one. With modular systems, server nodes are oriented horizontally in the chassis and are taller than a typical vertical blade is wide, allowing for hotter and taller processors as well as taller memory and peripheral cards than you can typically put in a skinny blade server.

 

The modular nodes can be half-width or full-width in the chassis and offer the same or slightly better compute density than a blade server in a similar-sized rack enclosure, and because of the extra room in the node, can accommodate GPU or x86 coprocessors as well. They are made for peripheral expansion and maximizing airflow around the nodes.

 

Modular systems generally have converged Ethernet networks for server and storage traffic, but also support an InfiniBand alternative to Ethernet for server networks and Fibre Channel for storage networks, just as do blade servers. Modular systems also tend to have integrated systems management that spans multiple compute node enclosures and are geared for virtualized server clouds. It's not a huge difference, when you get right down to it.

 

What is most important about modular systems, in this evolving definition, is that they look like – and compete with – the "California" Unified Computing System machines that Cisco put into the field three years ago when it broke into the server racket.

 

Cisco's server business has been nearly doubling for the past two years and is bucking the slowdown big-time in serverland. Cisco is defining the look of the modern blade server and eating market share. Huawei wants to pull the same California maneuver, peddling its own servers to its installed base of networking and telecom gear customers and driving out the server incumbents.

 

Huawei lifted the veil on the Tecal E9000 modular machines at the Huawei Cloud Congress show recently in Shanghai, and says that the boxes won't actually ship until the first quarter of next year – Huawei is clearly not in any kind of a big hurry to get its Cisco-alike boxes out the door.

The Huawei Tecal E9000 servers

The Tecal E9000 is based on a 12U chassis that can support either eight full-width nodes or sixteen half-width nodes. The chassis has 95 per cent efficient power supplies, and a total of six supplies can go into the enclosure with redundant spares, rated at 3,000 watts a pop AC and 2,500 watts a pop DC.

 

The chassis and server nodes have enough airflow that they can operate at 40°C (104°F) without additional water blocks or other cooling mechanisms on the chassis or the rack. This is the big difference with modular designs, and one that was not possible with traditional blades. Blade enclosures ran hot because they were the wrong shape, and the fact that simply reorienting the parts gets you machines with the same computing capacity in the same form factor just goes to show that the world still needs engineers.

 

The Tecal E9000 server nodes are all based on Intel's Xeon E5-2600 or E5-4600 processors, which span two or four processor sockets in a single system image, respectively. There are a couple server node variants to give customers flexibility on memory and peripheral expansion. The nodes and the chassis are NEBS Level 3 certified (which means they can be deployed in telco networks) and also meet the European Telecommunications Standards Institute's acoustic noise standards (which means workers won't go deaf working on switching gear).


The Tecal CH121 server node

The CH121 is a single-width server node with two sockets that can be plugged with any of the Xeon E5-2600 series processors, whether they have four, six, or eight cores per socket. Each socket has a dozen DDR3 memory slots for a maximum capacity of 768GB across the two sockets using fat (and crazy expensive) 32GB memory sticks.

 

The node has two 2.5-inch disk bays, which can be jammed with SATA or SAS disk drives or solid state disks if you want lots of local I/O bandwidth but not as much capacity for storage on the nodes. The on-node disk controller supports RAID 0, 1, and 10 data protection on the pair of drives.

 

The CH121 machine has one full-height-half-length PCI-Express 3.0 x16 expansion card and two PCI-Express 3.0 x16 mezzanine cards that plug the server node into the midplane and then out to either top-of-rack switches through a pass-through module or to integrated switches in the E9000 enclosure.

 

The CH221 takes the same server and makes it a double-wide node, which gives it enough room to add six PCI-Express peripheral slots. That's two x16 slots in full-height, full-length form factors plus four x8 slots with full-height, half-length dimensions.


The double-wide Tecal CH221 server node

A modified version of this node, called the CH222, uses the extra node's worth of space for disk storage instead of PCI-Express peripherals. The node has room for the same two front-plugged 2.5-inch drives plus another thirteen 2.5-inch bays for SAS or SATA disks or solid state drives if you want to get all flashy. These hang off the two E5-2600 processors, and the node is upgraded with a RAID disk controller that has 512MB of cache memory and supports RAID 0, 1, 10, 5, 50, 6, and 60 protection algorithms across the drives. This unit steps back to one PCI-Express x16 slot and two x16 mezz cards into the backplane.

 

If you want more processing to be aggregated together in an SMP node, then Huawei is happy to sell you the CH240 node, a four-socket box based on the Xeon E5-4600. Like other machines in this class from other vendors, the CH240 has 48 memory slots, and that taps out at 1.5TB of memory using those fat 32GB memory sticks. The CH240 supports all of the different SKUs of Intel's Xeon E5-4600 chips, which includes processors with four, six, or eight cores.


The Tecal CH240 four-socketeer

The CH240 does not double-up on the system I/O even as it does double-up the processing and memory capacity compared to the CH221. It has the two PCI-Express x16 mezzanine cards to link into the midplane and then out to switches, but no other peripheral expansion beyond that in the base configuration.

 

This is a compute engine in and of itself, designed predominantly as a database, email, or server virtualization monster. It supports the same RAID disk controller used in the CH221, but because of all that memory crammed into the server node, there's only enough room for eight 2.5-inch bays for disks or SSDs in the front. If you want to sacrifice some local storage, you can put in a PCI-Express riser card, which lets you put one full-height, 3/4ths length x16 peripheral card into the CH240.

 

All of the machines are currently certified to run Windows Server 2008 R2, Red Hat Enterprise Linux 6, and SUSE Linux Enterprise Server 11, and presumably will be ready to run the new Windows Server 2012 when they start shipping early next year.

 

The machines are also certified for VMware's ESXi 5.X hypervisor and Citrix Systems' XenServer 6 hypervisor, and again, presumably Hyper-V 3.0 will get certified on the box at some point and maybe even Red Hat's KVM hypervisor as well. There is no technical reason to believe that the server nodes can't run any modern release of any of the popular x86 hypervisors, but there's always a question of driver testing and certification.


 

The CX series of switch modules for the E9000 enclosure

On the switch front, Huawei is sticking with three different switch modules, which slide into the back of the E9000 chassis and provide networking to the outside world. The CX110, on the right in the above image, has 32 Gigabit Ethernet ports downstream into the server midplane and out to the PCI-Express mezz cards, which is two per node. The CX110 switch module has a dozen Gigabit and four 10GbE uplinks to talk to aggregation switches in the network.

 

The CX311 switch module takes the networking up another notch, with 32 10GbE downstream ports and sixteen 10GbE uplinks. This switch also has an expansion slot that can have an additional eight 10GbE ports or eight 8Gb/sec Fibre Channel switch ports linking out to storage arrays.

 

Huawei also has a QDR/FDR InfiniBand switch model with sixteen downstream ports and eighteen upstream ports, which can run at either 40Gb/sec or 56Gb/sec speeds.

 

The current midplane in the E9000 chassis is rated at 5.6Tbit/sec of aggregate switching bandwidth across its four networking switch slots, which can be used to drive Ethernet or InfiniBand traffic (depending on the switch module you choose).

 

Here's the important thing: the Tecal E9000 midplane will have an upgrade option that will allow it to push that enclosure midplane bandwidth up to 14.4Tb/sec, allowing it to push Ethernet at 40 and 100 Gigabit speeds and next-generation InfiniBand EDR, which will run at 100Gb/sec; 16Gb/sec and 32Gb/sec Fibre Channel will also be supported after the midplane is upgraded. It is not clear when this upgraded midplane will debut.

Pricing on all of this Tecal E9000 gear has not been set yet, according to Huawei.

 

More Cisco and Networking News you can visit: http://blog.router-switch.com/category/news/


QoS Classification and Marking Configuration

September 10 2012 , Written by Cisco & Cisco Router, Network Switch Published on #Cisco & Cisco Network

In this article we will share the details for proper QoS marking and classification configuration. As discussed in the VoIP Quality of Service (QoS) Basics article, the first thing that must be accomplished when configuring QoS is the classification and marking of traffic; this marking is then used by the devices on the network to prioritize traffic marked as high priority over traffic marked as low priority. This article discusses the commonly used Differentiated Services Code Point (DSCP) values and the basic concepts of classification and marking. The article then goes on to show the basic configuration steps required to implement traffic classification and marking.

 

The material in this article can be used as a jumping off point for studying for the CCNP Voice certification as this material is found in the CVOICE (642-437) exam that must be passed to obtain this certification. With the integration of voice and video becoming more and more common on modern networks, a solid understanding of what is possible with QoS is essential.

 

DSCP - Per Hop Behaviors (PHB)

The purpose of DSCP is to differentiate the different classes or types of traffic on the network; the DSCP section takes up the first 6 bits of the Type of Service field in the IP header. This space was previously used for IP precedence, and while some older implementations may still use IP precedence, most modern implementations have moved over to using DSCP. The value contained within the DSCP section is called a Per Hop Behavior (PHB); the PHB is what dictates how the traffic is handled when being routed through a network.

 

There are four PHB classes:

  1. Default
  2. Class Selector (CS)
  3. Assured Forwarding (AF)
  4. Expedited Forwarding (EF)

 

The Default class (000000) is typically used as a catch-all for all traffic that does not require a specific priority over the network; this traffic is handled as best effort going across the network. This means simply that the traffic is routed as the resources of the forwarding devices allow.

The Class Selector type is used in order to remain backward compatible with existing IP precedence implementations. The last three bits of a CS DSCP are always 000, with the first three bits set based on the value of IP precedence; for example, IP Precedence 7 would be 111000. The CS DSCP values that are typically used are DSCP 8 (001000), 16 (010000), 24 (011000), 32 (100000), 40 (101000), 48 (110000), and 56 (111000).

 

The Assured Forwarding type provides a framework of traffic classes; these are detailed in Table 1.

TABLE 1

Drop Probability   Class 1                    Class 2                    Class 3                    Class 4
Low Drop           AF11 (DSCP 10, '001010')   AF21 (DSCP 18, '010010')   AF31 (DSCP 26, '011010')   AF41 (DSCP 34, '100010')
Medium Drop        AF12 (DSCP 12, '001100')   AF22 (DSCP 20, '010100')   AF32 (DSCP 28, '011100')   AF42 (DSCP 36, '100100')
High Drop          AF13 (DSCP 14, '001110')   AF23 (DSCP 22, '010110')   AF33 (DSCP 30, '011110')   AF43 (DSCP 38, '100110')

 

The Expedited Forwarding type is used to signify the highest traffic priority; the EF PHB uses a DSCP value of 46 or 101110. This type is typically used for voice and video traffic when it is being passed over a common data network.

 

Traffic Classification and Marking Configuration

The first thing to note here is that this article is focusing on how traffic classification and traffic marking work together. However, traffic classification can be used for a number of different purposes including use with traffic management. If there is a serious interest in learning all the capabilities of traffic classification, please review the IOS QoS guide available at http://www.cisco.com.

 

To perform traffic classification and marking, the Modular QoS Command Line Interface (MQC) is used. The MQC follows a basic structure regardless of the task being completed; this structure includes:

  • Defining a traffic class with matching criteria
  • Creating a traffic policy that defines the QoS actions
  • Applying the traffic policy to a specific interface or subinterface

 

Defining a Traffic Class

The definition of a traffic class is where traffic classification occurs. It is during this part of the configuration that the specific traffic to be matched is defined. There are a number of different ways to match specific traffic; some of the available options are included in Table 2.

 

TABLE 2

Match Command              Match Criteria
match access-group         Matches traffic against a predefined access list
match cos                  Matches traffic with a specific Class of Service (CoS) value
match dscp                 Matches traffic with a specific Differentiated Services Code Point (DSCP) value
match precedence           Matches traffic with a specific IP precedence value
match protocol protocol    Matches traffic classified by the Network-Based Application Recognition (NBAR) feature

The basic syntax to define a traffic class is:

  • router(config)#class-map class-map-name [match-all | match-any]
  • router(config-cmap)#match (See Table 2)
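
As a quick illustrative sketch (the class name VOICE-RTP and access list 101 below are made-up examples, not required values), a class that catches voice bearer traffic could match an access list covering the usual RTP port range:

router(config)#access-list 101 permit udp any any range 16384 32767
router(config)#class-map match-all VOICE-RTP
router(config-cmap)#match access-group 101
router(config-cmap)#exit

The match-all keyword requires every match statement in the class to be met, while match-any requires only one of them.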

 

Creating a Traffic Policy

A traffic policy defines how to handle the traffic that was matched within the class-map command; this is where traffic marking can occur. There are a number of different supported traffic policy commands. However, as related to traffic marking, the commands in Table 3 are commonly used:

 

TABLE 3

Set Command        Traffic Attribute
set cos            Sets the value of the CoS field
set dscp           Sets the value of the DSCP field
set precedence     Sets the value of the IP precedence field

 

The basic syntax to create a traffic policy is:

  • router(config)#policy-map policy-map-name
  • router(config-pmap)#class {class-name | class-default} (This comes from the class-map command)
  • router(config-pmap-c)#set (see Table 3)
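
Continuing the illustrative example above (the policy name MARK-VOICE and class name VOICE-RTP are again only examples), a policy that marks the matched class with the EF PHB and leaves everything else at the default marking could look like this:

router(config)#policy-map MARK-VOICE
router(config-pmap)#class VOICE-RTP
router(config-pmap-c)#set dscp ef
router(config-pmap-c)#exit
router(config-pmap)#class class-default
router(config-pmap-c)#set dscp default
router(config-pmap-c)#end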

 

Apply the Traffic Policy

Of course, the creation of a traffic class and a traffic policy will do very little if the policy is not applied to a specific interface or subinterface. Traffic policies are applied to an interface in a specific direction, so make sure the configured direction provides the expected results. Typically, when classifying traffic from an external source, the traffic is classified and marked at the perimeter of the network as it comes into the network.

The basic syntax to apply a traffic policy is:

  • router(config)#interface type number
  • router(config-if)#service-policy {input | output} policy-map-name
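
To round out the illustrative example (the interface used here is only an example), the marking policy would typically be applied inbound on the interface facing the traffic source, and the per-class counters can then be checked:

router(config)#interface GigabitEthernet0/1
router(config-if)#service-policy input MARK-VOICE
router(config-if)#end
router#show policy-map interface GigabitEthernet0/1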

 

The concepts used to classify and mark traffic are not hard to understand once the basics are made clear. Hopefully, this article gives a good base for understanding how Quality of Service is implemented on Cisco equipment, specifically QoS Classification and Marking Configuration.

--- Original reference from http://www.petri.co.il/qos-marking-and-classification.htm

More…

Basic Overview of Cisco Voice over IP (VoIP) QoS

How to Prepare for the CCIE Voice Written Exam?

Top 5 VoIP Concepts to Know for CCNA Voice


Basic Overview of Cisco Voice over IP (VoIP) QoS

September 7 2012 , Written by Cisco & Cisco Router, Network Switch Published on #Cisco Technology - IT News

Basic tips for configuring Quality of Service (QoS) with VoIP, including the high-level QoS methods available to achieve quality voice traffic.

One of the most important things that must be configured in concert with available VoIP solutions is Quality of Service (QoS). Without QoS options properly configured, the quality of voice (and video) could, and probably will, be sacrificed to the overall demands of general traffic. These options provide a priority channel used by the voice traffic so that quality can be maintained while also allowing general traffic to flow. This article reviews QoS basics and briefly discusses available QoS options and how they operate to provide quality for voice traffic.

 

Many of these QoS concepts are integral when studying for a Cisco voice certification. QoS concepts are covered on all of the following exams:

640-461 ICOMMv8.0 - CCNA Voice

https://learningnetwork.cisco.com/community/certifications/voice_ccna/icomm

642-437 CVOICE v8.0 - CCNP Voice

https://learningnetwork.cisco.com/community/certifications/ccvp/cvoicev8?tab=overview

350-030 CCIE Voice Written - CCIE Voice

https://learningnetwork.cisco.com/community/certifications/ccie_voice/written_exam?tab=1

QoS Deployment for VoIP Case Study Example


The Basics

There are a number of QoS factors to consider when configuring a modern QoS implementation on Cisco, or any other vendor’s equipment. However, the most basic of these concepts revolves around what QoS is attempting to accomplish. There are four major factors that need to be controlled in order to have a quality VoIP phone call; these include:

Bandwidth – The amount of end-to-end available bandwidth dictates whether a call will work correctly or not. With unlimited constant bandwidth, a voice call can work from end-to-end without much issue; however, bandwidth is rarely unlimited. The codec selected for use over a specific line is dictated by the amount of available bandwidth and the number of active calls required.

Delay – Unlike with data communications, too much delay on a voice call can make the quality of the call unbearable. Of course, all voice communications have some amount of delay, which must be kept as small as possible. Typically, with VoIP, optimum call quality requires an end-to-end delay of less than 150ms.

Jitter – Jitter is the amount of delay variation in call traffic. If traffic over a connection is constantly delayed at 100 ms, no issue occurs. However, if for the first portion of the call there is short delay (e.g., below 5ms), followed by a period of long delay (e.g., over 300ms), and then another short delay, the receiving voice device may have trouble synchronizing all of the incoming traffic as it is received in an inconsistent manner.

Loss – Obviously, the loss of voice packets results in the loss of audio on the connection. Small amounts of loss (< 1%) over the course of a connection will probably not be noticed, but if this loss becomes a large problem then significant loss in voice quality occurs.

 

QoS Methods

There are a number of different methods that can be used to control the QoS of a voice connection; these include:

Classification and Marking

Link Efficiency

Congestion Management

Congestion Avoidance

 

Classification and Marking

The most commonly used method of QoS classification and marking is Differentiated Services (DiffServ). The general concept of DiffServ is to monitor the traffic coming through a device; all traffic is then classified into a specific traffic classification (for example, Voice Traffic or Data Traffic). Once this traffic is classified, it is marked with this classification using one of a number of methods. Commonly with IP traffic, the ToS field is used in the IP header and is classified with a Differentiated Service Codepoint (DSCP). This marking is then used by successive devices in prioritizing which traffic to process first.

 

See related article on QoS Marking and Classification

 

Link Efficiency

There are a number of different link efficiency mechanisms. The most commonly known mechanisms include IP header and payload compression.  Other mechanisms include Link Fragmentation and Interleaving (LFI). These are typically used on slower-speed serial links to improve delay by fragmenting larger packets into smaller ones, thus allowing other smaller packets to be processed. Obviously, the more efficient the link, the less delay a VoIP connection is subject to.
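
As a rough sketch of one such mechanism (the interface numbering is only an example), RTP header compression can be enabled per interface on a slow serial link, shrinking the roughly 40-byte IP/UDP/RTP header to a few bytes per voice packet:

router(config)#interface Serial0/0
router(config-if)#ip rtp header-compression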

 

Congestion Management

The concept of congestion on a connection is rather simple to explain; the more congested a link, the less likely a packet will be able to get through in a timely manner required by VoIP (think, rush hour in NYC or LA). Congestion management mechanisms attempt to control the amount of congestion faced by traffic by processing the traffic in a variety of different ways, some more complex than others. Many of these methods are used in conjunction with markings given to traffic (e.g., DSCP). The most common methods include:

FIFO

Priority Queuing (PQ)

Custom Queuing (CQ)

Weighted Fair Queuing (WFQ)

Class Based – Weighted Fair Queuing (CBWFQ)

Low Latency Queuing (LLQ)

See related article on Queue Configuration and Congestion Management. 
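
To give a flavour of how the last of these fits together, here is a minimal Low Latency Queuing sketch (it assumes a class-map named VOICE-RTP matching the voice traffic already exists; the policy name, bandwidth figure and interface are only examples). Voice gets a strict-priority queue while everything else shares the remaining bandwidth fairly:

router(config)#policy-map WAN-EDGE
router(config-pmap)#class VOICE-RTP
router(config-pmap-c)#priority 128
router(config-pmap-c)#exit
router(config-pmap)#class class-default
router(config-pmap-c)#fair-queue
router(config-pmap-c)#exit
router(config-pmap)#exit
router(config)#interface Serial0/0
router(config-if)#service-policy output WAN-EDGE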

 

Congestion Avoidance

Congestion avoidance is another method of QoS; the most common of the techniques used is called Weighted Random Early Detection (WRED). Basically, WRED attempts to predict that congestion will be forthcoming, and when this happens packets are selectively dropped to avoid congestion.
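
Extending the illustrative WAN-EDGE policy sketched above (and again assuming a class-map named DATA already exists), DSCP-based WRED is typically enabled per class inside the same MQC policy, so packets with lower-priority markings are dropped earlier as the queue fills:

router(config)#policy-map WAN-EDGE
router(config-pmap)#class DATA
router(config-pmap-c)#bandwidth percent 30
router(config-pmap-c)#random-detect dscp-based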


There are a number of different QoS concepts that must be understood in order to properly implement a VoIP network or pass the Cisco voice certification tests. The concepts covered in this article are a simple overview of the high level QoS options available. Hopefully, this article will help the student understand these high level concepts before digging into the depths required for true understanding.

 ---Original reference from http://www.petri.co.il/voip-quality-of-service-basics.htm 

More Related Reference:

Quality of Service for Voice over IP

AutoQoS for Voice Over IP (VoIP)

More Cisco resources you can visit: http://blog.router-switch.com/


Cisco 2960s Switches Can Route

September 5 2012 , Written by Cisco & Cisco Router, Network Switch Published on #Cisco Switches - Cisco Firewall

As Cisco users know, Cisco 2960s can route now. As of 12.2(55)SE, Cisco 2960 switches are Layer 3 switches (with some limitations, mentioned later).

Configuring 2960s to route is pretty simple. The Switch Database Management (SDM) template needs to be changed to “lanbase-routing”. A reboot is (always) needed after changing the SDM template. After the reboot, it’s just like enabling routing on any other L3 switch with the command “ip routing” from global config.

First we’ll change the SDM template:

SwitchA(config)#sdm prefer lanbase-routing

Changes to the running SDM preferences have been stored, but cannot take effect until the next reload.

Use 'show sdm prefer' to see what SDM preference is currently active.

SwitchA(config)#^Z

SwitchA#reload

System configuration has been modified. Save? [yes/no]: y

Proceed with reload? [confirm]

After changing the SDM template, we are reminded that we’ll need to reboot and also given a command to verify the change after the next boot.

 

Now we verify:

SwitchA#show sdm prefer

The current template is "lanbase-routing" template.

 The selected template optimizes the resources in

 the switch to support this level of features for

 8 routed interfaces and 255 VLANs.

  number of unicast mac addresses:                  4K

  number of IPv4 IGMP groups + multicast routes:    0.25K

  number of IPv4 unicast routes:                    4.25K

  number of directly-connected IPv4 hosts:          4K

  number of indirect IPv4 routes:                   0.25K

  number of IPv4 policy based routing aces:         0

  number of IPv4/MAC qos aces:                      0.125k

  number of IPv4/MAC security aces:                 0.375k

The change was successful and we’re given the details about this SDM template.

 

Now seems like a good time to touch on the limitations of the Layer 3 capabilities on Cisco 2960s. As we see in the output above, we’re limited to 8 routed interfaces. These will be SVIs. At this point, the Catalyst 2960s don’t support routed physical interfaces (“no switchport”). Another important note is that we’re only allowed 16 static routes and there is no dynamic routing capability.

 

Now we’ll enable IP routing and configure a couple SVIs:

SwitchA#conf t

SwitchA(config)#ip routing

SwitchA(config)#

SwitchA(config)#int vlan 15

SwitchA(config-if)#ip add 192.168.15.1 255.255.255.0

SwitchA(config-if)#

SwitchA(config-if)#int vlan 25

SwitchA(config-if)#ip add 192.168.25.1 255.255.255.0

SwitchA(config)#^Z

SwitchA#sh ip route

...

C    192.168.15.0/24 is directly connected, Vlan15

C    192.168.25.0/24 is directly connected, Vlan25
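
Because there is no dynamic routing on the 2960, reachability beyond the connected VLANs has to come from static routes (up to the 16 mentioned earlier). A default route pointing at an upstream router is the typical approach; the next-hop address below is just an example:

SwitchA#conf t

SwitchA(config)#ip route 0.0.0.0 0.0.0.0 192.168.15.254

SwitchA(config)#^Z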

 

More Related Cisco 2960 Tips:

Cisco Catalyst 2960 Series Enables Routing


Cisco Catalyst 6500 Series-Understanding the MET Reserved VLAN Range on IOS 15

September 3 2012 , Written by Cisco & Cisco Router, Network Switch Published on #Cisco Switches - Cisco Firewall

We deployed some new Cisco Catalyst 6513 switches with Sup2T supervisor engines as access switches several days ago. During the initial configuration, we realized that Cisco had introduced some new reserved VLANs on IOS 15 for internal usage:

switch(config)#vlan 3990

VLAN id: 3990 is an internal vlan id - cannot use it to create a VTP VLAN.

 

Reserved VLANs can be checked with the command “sh vlan internal usage”:

switch #sh vlan internal usage

VLAN Usage

---- --------------------

1006 online diag vlan0

1007 online diag vlan1

1008 online diag vlan2

1009 online diag vlan3

1010 online diag vlan4

1011 online diag vlan5

1012 PM vlan process (trunk tagging)

3968 MET reserved VLAN

3969 MET reserved VLAN

3970 MET reserved VLAN

3971 MET reserved VLAN

...

4030 MET reserved VLAN

4031 MET reserved VLAN

 

Due to some code being backported from NX-OS to IOS 15, VLANs from 3968 to 4031 are now reserved for MET (Multicast Expansion Table) usage. However, if you are deploying these switches in working environments you may need to use some of these VLANs. If this is the case, you can change the VLAN range with the following command:

switch (config)#vlan internal reserved met vlan 3904

 

The new MET VLAN will take effect after reload.

Please reload, or no change will be made

Configuration MET VLAN value is 3904

Operation MET VLAN value is 3968

 

After you reload the switch the new VLAN range will be applied:

switch #sh vlan internal usage

...

3904 MET reserved VLAN

3905 MET reserved VLAN

3906 MET reserved VLAN

...

3965 MET reserved VLAN

3966 MET reserved VLAN

3967 MET reserved VLAN


---Original resources from packetpushers.net

More Cisco 6500 Tutorials and Tips:

Is Catalyst 6500 Supervisor 2T Your Upgrade Answer?

Cisco Catalyst 6500 Switches Vs. Catalyst 4500 Series
