Posts with #networking tag
Cisco's Aironet 3602i AP and Ubiquiti's UniFi AP are part of the so-called “wave 1” phase of the 802.11ac standard.
These access points (APs) are theoretically capable of data rates of up to 1.3 Gigabits per second, but the actual maximum throughput achieved in a test conducted by the technology publication Computerworld.com only reached the 360 to 380 Megabits per second range.
Here is how the two access points fared against each other in terms of speed, features and performance:
Cisco Aironet 3602i AP – This access point came with an 802.11ac module. A Cisco 2504 Wireless LAN controller was used for the test.
The Aironet 3602i comes with two integrated 2.4 GHz/5 GHz dual-band radios. The 802.11ac module adds a 5 GHz radio supporting three spatial streams.
The AP supports the standard Control and Provisioning of Wireless Access Points Protocol (CAPWAP) and can broadcast up to 16 SSIDs. The maximum transmit power is 23 dBm for both integrated dual-band radios and 22 dBm for the 802.11ac module.
The Aironet 3602i AP lists for $1,495. Its accompanying 802.11ac module costs $500.
Ubiquiti UniFi AC – The $299 Ubiquiti AP came with UniFi Controller software to manage the AP.
This access point has a 2.4 GHz radio and a 5 GHz radio with three spatial streams, supporting up to four BSSIDs per radio. The radios' maximum transmit power is 28 dBm.
This AP has similar physical dimensions to the Cisco unit but weighs about a pound more. Like the Aironet, it is straightforward to set up and configure, and it has a user-friendly interface.
While the unit does not let users configure many settings, it allows general wireless, network and guest settings to be applied across multiple UniFi APs. Users can also place access points on an uploaded map and view statistics on AP and client usage.
In the throughput tests, the Cisco AP performed four to 22 per cent better than the Ubiquiti AP, and the testers recommended it for larger enterprise networks.
They concluded that the Ubiquiti AP lacks advanced enterprise settings but is easier to set up and better suited to small and midsize networks.
---News from http://www.itworldcanada.com
More Related to Cisco Wireless APs:
As service providers continue their IP network convergence, they also need to establish a business strategy that can provide a solid return on their next-generation network investment. Creating a network transformation plan is an essential part of the process that will help service providers increase the efficiency and flexibility of their next-generation networks and services while reducing operations expense (opex).
This Telecom Insights guide looks at what service providers need to know about deploying a converged network architecture that focuses on offering differentiated services that capitalize on their infrastructure and unique customer knowledge and how providers should go about building a solid network transformation plan that will result in the necessary ROI to compete and thrive.
In this series:
- A new vision for telecom network transformation
- Five steps to a next-gen network transformation plan
- Three mega-trends revolutionize telecom
A new vision for telecom network transformation
Network operators worldwide are facing pressure much bigger than the problems created by an economic downturn: a longer-term erosion in the value of their stock-in-trade, transport bits.
Because business planners need to focus first on profit and revenue growth, today's fundamental market shifts mean that shorter-term planning will have to encompass a different vision of transformation and a different model of monetizing network investment.
The telecom services market is increasingly like a supermarket, with supermarket-like principles. Some services, like certain grocery items, will always be in demand but don't have much feature differentiation. These will become commodities in terms of price but will sustain the foundation of revenues and create customer loyalty. Other services, such as premium items in a store, will produce less revenue but command strong margins and boost profits. The transformation of the network marketplace to this model is the most significant goal for the industry.
Turning transformation on its head
Supporting this kind of transformation is still a hazy notion that could be called the Next-Generation Networks Services Architecture, or NGNSA. This architecture harmonizes the key components of next-generation network transformation:
- Service feature orchestration and syndication through developer partners, over-the-top partners, and traditional service provider partners.
- Business and operations management tools that are "service-focused" to align them with new directions in service creation and support a much higher level of automation of service lifecycle processes.
- Network infrastructure that can be quickly adapted to the traffic patterns and service-level agreement (SLA) needs of the widest variety of services, and tight coupling to the service layer of the network so network operators can differentiate their services from over-the-top solutions. This includes service delivery platforms (SDPs) for computing/software service components and network equipment for connection and transport.
The primary reason NGNSA notions are still fuzzy is the fact that activities are spread across a number of standards processes. While there are active liaisons between the bodies, standards are not moving in synchrony or even particularly quickly. As a result, network operators are looking increasingly to vendors for leadership in these areas and expecting those vendors to support the standards as they develop rather than waiting for them.
Nearly all major network operators worldwide report that they expect to buy into some vendor vision for integrated NGN services in the next year. For those operators, the choice of what approach to take is likely to be set by the priority they place on the three major NGNSA elements.
Complete solutions will drive partnerships
Of the three areas, the second (service operations and management) is probably the most developed in a standards sense, and thus network operators probably understand the positions of their vendor partners and have a good sense of convergence on standards approaches. But not every major equipment vendor has a service management strategy, and pressure to provide a complete solution is likely to create partnerships between management and networking vendors.
Service feature orchestration and third-party partner access to service elements for composition of retail services are likely to be the major focus of network operators in the near term. This area has not been active in the standards-setting sense for as long because the requirements of the space are less understood.
A number of announcements or commitments by equipment vendors in 2008 support the componentization, syndication and composition of services. And the architectures are only starting to emerge. The best approach here may be the most important single factor in creating NGNSA partnerships in the next two years or more.
Service-layer technology must create ROI
For the longer term, the last issue cannot be neglected. Service-layer technology that simply sits on top of connection/transport infrastructure ("anything over the Internet") empowers not only network operators but also over-the-top players. What network operators need and want is a way of creating value from their networks in the form of something linked with, but stepping beyond, the movement of bits. Little has been done in an organized industry sense to create specific service-layer partnership with the network layer. This partnership would provide a special benefit to those who build and own the networks. Thus it would justify network infrastructure investment more effectively by sustaining a higher return on investment (ROI).
ROI has been important for network operators for years, but the importance of ROI is magnified by a combination of economic uncertainty and increased pressure to evolve off the older TDM voice platforms in favor of IP-based services, including voice. 4G technology is based on IP voice, and fixed mobile convergence (FMC) is facilitated if voice technology in both wireline and wireless is based on VoIP. Major tier 1 operators are already announcing serious VoIP offerings, and this will put additional pressure on service-layer deployment because the move is almost certain to lower revenue per call-minute over time.
The role of IMS in the next-generation network
The fact that voice may be a driver for near-term change makes the IP multimedia subsystem (IMS) decision particularly important for operators. IMS is the approved and standardized way to manage mobile VoIP, FMC and non-voice mobile services. IMS is at least a candidate for supporting other NGN services such as video. Here again, standards may not keep pace with market requirements, and network operators may have to work with vendors prepared to take leading-edge positions on harmonizing IMS with service models beyond those involving SIP calling.
The ITU has suggested, in its NGN material, that IMS is one of several elements in what we have called here an NGNSA. But the precise role of IMS in that mix is not defined, nor are the other elements that would coexist with IMS. The vision of IMS's role in NGNSA may be the most critical of all in the near term because of the pressure to evolve voice services.
Network operators plan over a very long cycle--typically about seven years. That means that economic disturbances in the field are less a factor than they would be to industries with shorter capital cycles. Long planning cycles also mean that network operators require a very high degree of confidence in every step of their solution to evolving service needs and opportunities. That requirement is likely to generate new relationships and new levels of cooperation with vendors in the coming years.
Five steps to a next-gen network transformation plan
If transformation has a business goal and convergence a technical goal, then surely one of the challenges that face service providers today is how to navigate a commitment to both at the same time.
The problem is only complicated by the fact that transformation, unlike convergence, has no established formula or timetable. It's hard to get management support for something that, except for the goal itself, seems rather hazy.
The goal of transformation is to define a business strategy that creates sustainable revenues and profits from next-generation network (NGN) investments. Meeting that goal may require different specific technologies and services, but it can be accomplished with a general program that has some defined elements and timing recommendations. It is also important to address a few considerations or recommendations of what not to do, because some steps that are often taken are rarely successful.
Five steps to creating an NGN transformation plan
1. Picking a specific NGN service target set: This is the most problematic of all transformation steps. The most significant difference between the service environment of the past and that of the present is the short-term nature of buyer commitments to service paradigms. Basic voice and connectivity services are long-lived, in large part because they are so basic. As operators attempt to monetize NGN services, they must contend with the fact that the most valuable services to an operator are also those most valuable to service consumers, and this value proposition will change over time.
If committing to an inflexible NGN service strategy is exactly the wrong move, the best move is to create a service-layer architecture with the greatest flexibility possible -- both in terms of the way it can compose and combine service features and in the delivery options (wireless, wireline, computer, TV, phone, etc.) available. In fact, the difference between an IP network and an NGN is in the service-layer flexibility. IP alone simply creates a connectivity base that will be exploited by others but may not be profitable. NGNs must ensure the profit by providing services in a flexible way, not just transporting their traffic.
2. Restructure network, operations and business management systems around services, not technologies. In the second transformation step, the NGN service set will differ from the old set in that it will be made up of shorter-contract-period services with much wider markets. This means that inefficiency in service operations cannot be tolerated, or the costs will mount to swamp the budget. There are standards processes under way to guide this resetting of operations priorities, and many vendors already have tools and plans to support the switch. Services are the product of service providers, and management systems must reflect that reality.
3. Classify service opportunities at the high level. There is a taxonomy of service opportunities, starting with the basic classification of the customer (residential, enterprise, small business) and the nature of the value proposition the service will have for the customer (communication, data exchange, collaboration, hosting, software and computer outsourcing, etc.). For each opportunity element in the structure, there will be a total addressable market and a likely market penetration curve, and these can be used to set service opportunity priorities -- but not yet.
4. Identify the infrastructure implications of each of the opportunities. The goal here is not to plan out every piece of equipment or technology direction but rather to group the opportunities according to the type of infrastructure investment required to support them so that co-dependencies can be identified. In terms of an NGN transformation plan, the right answer will probably come by picking the opportunity group that has the best relationship between cost of infrastructure and benefit in terms of opportunity value.
5. Implement and execute a project to create an effective NGN transformation plan. The final step is a project that executes in the direction identified by the previous step. At the same time, the incremental steps involved in addressing other related opportunity groups should be explored to develop a plan for later investment and service deployment.
Projected timeline for an NGN transformation project
Most service providers have the information needed to support this sequence. If that is the case, operator experience seems to suggest that a task to complete the first three steps would require approximately eight months, assuming that work already done could not be leveraged. Service-layer deployments generally require about that same time for initial deployments, and so it may be that the operations processes in step 2 will be the inhibiting factor in preparing a quick response. This suggests that it is highly advisable that operations restructuring be given a high priority.
Every NGN program will be different, and every operator will have completed some of the tasks associated with each of the steps outlined here. An inventory of activities is often very useful in ensuring that nothing that has already been done is wasted, and this will also produce a faster path to NGN success.
Three mega-trends revolutionize telecom
Upcoming telecom changes are nothing short of revolutionary, or at least evolutionary, as trends emerge to create a single business model ecosystem out of telecom and the Web, content players and service providers find a workable balance of power, and cloud computing and social networking features gain in importance. Here's a look at the three main trends that will change telecom for the long haul.
1. An emerging online ecosystem joins telecom and the Web into a single business model
In December 2008, Alcatel-Lucent announced a company strategy based on creating the tools for this new ecosystem. Cisco CEO John Chambers had similar comments about binding the tools of the Web into a single, cohesive development framework.
In addition, articles about how Google was looking for a "fast lane" from access providers to speed its content to users seemed to make it clear that the old face-off between the over-the-top players and the telecoms might be ending. We've had years of "over-the-top" versus the carriers, and now we're heading for a future where the distinction will become very fuzzy indeed -- not through mergers and acquisitions but through cooperation.
For three or four years, telecoms and Web companies alike have been working to gain support from application developers to enrich their services. The iPhone and Android models were compelling because they generated a cottage industry that has driven the core product and service set to much greater utility, as well as greater adoption rates and revenue generation. The problem is that while everybody seems to want to support developers, everyone supports them differently.
No one has solved the question of how all these cooperative players manage to combine their efforts to create something stable, easily supported and capable of generating revenue for all through cooperative settlement. Standards have been marking time in this area, and now it looks as if equipment vendors are stepping in to create the framework for the new ecosystem. Why? Because capex is usually pegged to revenue, so if you can't help your carrier customers raise their top line, their spending will languish and so will vendor profits.
Service providers tried to solve this problem of cooperative ecosystem-building with standards, but they moved too slowly. They then started to pressure their equipment vendors to come up with a solution, and the Alcatel-Lucent and Cisco announcements are the result. There will be others; and it will be all about "service mashups."
2. A CDN/cloud computing model emerges for settlement for online services
This is why the new ecosystem is suddenly developing. For decades, the Internet has suffered from a basic problem of lack of settlement among the providers. Everyone pays for access to their ISP, but nobody pays for transit. Where there's no revenue, there's no investment.
On the other hand, content providers are happy to pay for content delivery network (CDN) caching, and Software as a Service (SaaS) providers are eager to find good cloud computing resources. The access carriers are putting money there, and these new resources link not to the Internet core but to the access networks. Telecoms worldwide have seen the opportunity to create a link between investment and revenue, and that new link threatens the whole legacy model of the Internet. It's bringing the Web guys to the table.
If every piece of content and every application were cached or hosted in metro centers, there would be no core traffic on the Internet at all. That extreme isn't likely, but what's certain is that the valuable stuff is migrating to the metro area. That forces the big players like Google to transport their own content via fiber to each access provider, which further bypasses the old Internet peering model.
You can't create a new ecosystem without having the pressure of the old one breaking, and that's what's happening. In the new ecosystem, content and application players will join with search and portal companies and telecoms to fight out a new balance of power.
The most significant winners will be the content/application giants, because getting commercially valuable content via a network connection is the stock in trade of the future.
3. Integrating social network features and relationship knowledge into communications is a trend in the making.
Yahoo launched an advanced email system that illustrates the value of relationship-managed communications, and this new notion will be incorporated into an expanding notion of presence as the central framework for communications and collaboration.
Presence-centered personal communication is the most "tactical" of the major trends because it will have an immediate impact on a number of emerging technical and product trends. Collaboration and telepresence both work better, and justify more investment, if they're mediated through social-network-like frameworks. This is likely to be one of Cisco's major areas of focus in harmonizing all of the Web 2.0 APIs into a new ecosystem. It's also likely to be a focus for unified communications and even things like IMS, femtocells and fixed-mobile convergence (FMC).
What technologies will benefit?
The technologies that will benefit from these major trends are:
- Fiber access, including FTTH and FTTN, because access providers will continue to fight speed wars with one another as they look to leverage their role in the new ecosystem.
- Metro Ethernet and optics, since all of the recent bandwidth created will be within metro areas. Look for new interest in hybrid Ethernet/optics products as well.
- Femtocells and FMC, which will probably benefit IMS. Mobile service competition and the need to integrate mobile and wireline features will be a big boost to this area.
- Operations software, particularly service management, abstraction, componentization, composition and third-party access via APIs.
The broadest impact of the trend on vendors will be promoting a more integrated product strategy that offers telecoms a link from revenue to investment. For many, this will involve partnerships supplemented by selective development or acquisitions that are intended to make each vendor's offerings unique and thus more likely to be accepted by buyers.
There are different network infrastructures (wired LANs, service provider networks) that allow mobility, but in a business environment, the most important is the wireless LAN (WLAN). Most modern business networks rely on switch-based LANs for day-to-day operation inside the office.
Productivity is no longer restricted to a fixed work location or a defined time period. People now expect to be connected at any time and in any place ("you are in when you are out..."), from the office to the airport or even the home.
Traveling employees used to be restricted to pay phones for checking messages and returning a few phone calls between flights. Now employees can check e-mail, voice mail, and the status of products on personal digital assistants (PDAs) while at many temporary locations.
Wireless LAN and Wired (Ethernet) LAN
Wireless LANs share a similar origin with Ethernet LANs. The IEEE has adopted the 802 LAN/MAN portfolio of computer network architecture standards. The two dominant 802 working groups are 802.3 Ethernet and 802.11 wireless LAN. However, there are important differences between the two.
WLANs use radio frequencies (RF) instead of cables at the Physical layer and MAC sub-layer of the Data Link layer. In comparison to cable, RF has the following characteristics:
i. RF does not have boundaries, such as the limits of a wire in a sheath. The lack of such a boundary allows data frames traveling over the RF media to be received by anyone who can detect the RF signal.
ii. RF is unprotected from outside signals, whereas cable is in an insulating sheath. Radios operating independently in the same geographic area but using the same or a similar RF can interfere with each other.
iii. RF transmission is subject to the same challenges inherent in any wave-based technology, such as consumer radio. For example, as you get further away from the source, you may hear stations playing over each other or hear static in the transmission. Eventually, you may lose the signal altogether. Wired LANs have cables of an appropriate length to maintain signal strength.
iv. RF bands are regulated differently in various countries. The use of WLANs is subject to additional regulations and sets of standards that are not applied to wired LANs.
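The distance effect described in point iii can be quantified with the standard free-space path loss formula, FSPL(dB) = 20·log10(d) + 20·log10(f) + 32.44 (d in km, f in MHz). A small sketch, using the 2.4 GHz and 5 GHz bands discussed above (the distances are arbitrary examples):

```python
import math

def fspl_db(distance_m: float, freq_mhz: float) -> float:
    """Free-space path loss in dB (distance in meters, frequency in MHz)."""
    d_km = distance_m / 1000.0
    return 20 * math.log10(d_km) + 20 * math.log10(freq_mhz) + 32.44

# Loss grows with both distance and frequency: the 5 GHz band
# attenuates faster than 2.4 GHz over the same distance.
for d in (10, 50, 100):
    loss_24 = fspl_db(d, 2400)
    loss_5 = fspl_db(d, 5000)
    print(f"{d:>4} m: 2.4 GHz {loss_24:5.1f} dB, 5 GHz {loss_5:5.1f} dB")
```

This idealized model ignores walls and interference, but it shows why AP placement matters and why 5 GHz coverage cells are typically smaller than 2.4 GHz ones.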
WLANs connect clients to the network through a wireless access point (AP) instead of an Ethernet switch.
WLANs connect mobile devices that are often battery powered, as opposed to plugged-in LAN devices. Wireless network interface cards (NICs) tend to reduce the battery life of a mobile device.
WLANs support hosts that contend for access to the RF media (frequency bands). 802.11 prescribes collision avoidance instead of collision detection for media access, to proactively avoid collisions within the media.
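The collision-avoidance scheme can be sketched in simplified form: before transmitting, a station waits for the medium to go idle and then backs off a random number of slots drawn from a contention window that doubles after each failed attempt. This is a toy model of that binary exponential backoff, not the exact 802.11 state machine (the default window sizes shown are illustrative):

```python
import random

def backoff_slots(attempt: int, cw_min: int = 15, cw_max: int = 1023) -> int:
    """Pick a random backoff count for the given retry attempt.

    Simplified 802.11-style binary exponential backoff: the contention
    window starts at cw_min and doubles after each failed transmission,
    capped at cw_max.
    """
    cw = min(cw_max, (cw_min + 1) * (2 ** attempt) - 1)
    return random.randint(0, cw)

# The expected wait grows with each retry, spreading contending
# stations apart so they are less likely to collide again.
for attempt in range(4):
    cw = min(1023, 16 * 2 ** attempt - 1)
    print(f"retry {attempt}: window 0..{cw}, sample backoff {backoff_slots(attempt)} slots")
```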
WLANs use a different frame format than wired Ethernet LANs. WLANs require additional information in the Layer 2 header of the frame.
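To illustrate that extra Layer 2 information: the fixed portion of a generic 802.11 MAC header carries a frame-control field, a duration/ID field, three 6-byte addresses and a sequence-control field (with a fourth address in some modes), versus Ethernet's 14-byte destination/source/type layout. A hedged sketch packing that fixed portion (the field values and MAC addresses below are made up):

```python
import struct

def pack_80211_header(fc: int, duration: int, addr1: bytes, addr2: bytes,
                      addr3: bytes, seq_ctrl: int) -> bytes:
    """Pack the fixed 24-byte portion of an 802.11 MAC header.

    Layout: frame control (2 B) + duration/ID (2 B) + three 6-byte
    addresses + sequence control (2 B), little-endian, compared with
    Ethernet's 14-byte dst/src/EtherType header.
    """
    return struct.pack("<HH6s6s6sH", fc, duration, addr1, addr2, addr3, seq_ctrl)

hdr = pack_80211_header(
    fc=0x0008,                              # data frame (illustrative value)
    duration=0,
    addr1=bytes.fromhex("ffffffffffff"),    # receiver address
    addr2=bytes.fromhex("00115a0f1234"),    # transmitter address (made up)
    addr3=bytes.fromhex("00115a0f1234"),    # BSSID (made up)
    seq_ctrl=0,
)
print(len(hdr))   # 24 bytes -- already 10 bytes longer than an Ethernet header
```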
WLANs raise more privacy issues because radio frequencies can reach outside the facility.
802.11 wireless LANs extend the 802.3 Ethernet LAN infrastructures to provide additional connectivity options. However, additional components and protocols are used to complete wireless connections.
In an 802.3 Ethernet LAN, each client has a cable that connects the client NIC to a switch. The switch is the point where the client gains access to the network.
In a wireless LAN, each client uses a wireless adapter to gain access to the network through a wireless device such as a wireless router or access point.
Once you have a basic understanding of IPv6, the next logical step on Cisco equipment is to test out the different capabilities that exist within Cisco equipment and IOS. Here we take a look at the configuration of IPv6 addressing on a Cisco IOS device.
Cisco IPv6 Static Address Configuration
IPv6 is a little different from IPv4 in that multiple IPv6 addresses can exist on a single network interface; these can include an Aggregatable Unicast address, a Link-Local Unicast address, and/or an Anycast address. The next few sections review the configuration of these different address types.
Configuring Unicast Addresses
Two common address types are assigned to each IPv6 interface: an Aggregatable Unicast address and a Link-Local address. An Aggregatable Unicast address can be globally routed and operates similarly to a public IPv4 address.
An Aggregatable Unicast address can be configured in a number of ways. This article goes over the ways to statically address an IPv6 interface, which includes either specifying the whole IPv6 address and prefix-length or by using a prefix and using EUI-64. Table 1 shows the steps that are required to configure an Aggregatable Unicast address, using both a completely manual configuration and by using EUI-64.
Table 1 – IPv6 Aggregatable Unicast Address Configuration
Enter global configuration mode:
router#configure terminal
Enter interface configuration mode:
router(config)#interface interface-type interface-number
Configure the interface with a manual Aggregatable Unicast address:
router(config-if)#ipv6 address address/prefix-length
Configure the interface with an Aggregatable Unicast address using EUI-64 (this method combines the prefix with the interface ID to build the complete IPv6 address):
router(config-if)#ipv6 address address-prefix eui-64
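The EUI-64 derivation used by that last command can be reproduced in a few lines: the 48-bit MAC is split in half, 0xFFFE is inserted in the middle, and the universal/local bit (bit 2 of the first octet) is flipped. A sketch (the MAC address and prefixes below are made-up examples):

```python
def eui64_interface_id(mac: str) -> str:
    """Derive the EUI-64 interface ID from a MAC like '00:0c:29:1a:2b:3c'."""
    octets = bytearray(bytes.fromhex(mac.replace(":", "").replace(".", "")))
    octets[0] ^= 0x02                              # flip the universal/local bit
    eui = octets[:3] + b"\xff\xfe" + octets[3:]    # insert FFFE in the middle
    # Format as four 16-bit groups, leading zeros suppressed
    return ":".join(f"{eui[i] << 8 | eui[i + 1]:x}" for i in range(0, 8, 2))

mac = "00:0c:29:1a:2b:3c"              # example MAC, made up
iid = eui64_interface_id(mac)
print(f"2001:db8:1:1:{iid}")           # Aggregatable Unicast address on 2001:db8:1:1::/64
print(f"fe80::{iid}")                  # the auto-configured Link-Local address
```

The same interface ID appears in both the global address (when `eui-64` is used) and the automatic Link-Local address described below.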
A Link-Local address is used to communicate between devices that share the same link; these addresses are only allowed to be used on the local link and are not routed. Link-Local addresses will automatically be configured using the interface identifier (typically the MAC address) when IPv6 is enabled on an interface or the Link-Local address can be manually configured. Table 2 shows the steps that are required to manually configure a Link-Local address.
Table 2 – IPv6 Link-Local Address Configuration
Enter global configuration mode:
router#configure terminal
Enter interface configuration mode:
router(config)#interface interface-type interface-number
Configure the interface with a Link-Local address:
router(config-if)#ipv6 address address link-local
Configuring Anycast Addresses
The concept of an Anycast address did not exist within IPv4; it is intended (along with expanded use of Multicast) as a replacement for some of the capabilities of IPv4 broadcast addresses. An Anycast address is intended to be configured on the interfaces of multiple network devices that provide the same service (e.g. the subnet gateway, a DNS server or another server). When a client uses the address, the network directs the traffic to whichever device assigned the address is closest to the client. Table 3 shows the steps that are required to configure an Anycast address on an interface.
Table 3 – IPv6 Anycast Address Configuration
Enter global configuration mode:
router#configure terminal
Enter interface configuration mode:
router(config)#interface interface-type interface-number
Configure the interface with an Anycast address:
router(config-if)#ipv6 address address/prefix-length anycast
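The "closest device" behavior falls out of ordinary routing: several routers advertise the same anycast prefix, and each client's traffic simply follows its lowest-cost route to that prefix. A toy illustration of that selection (the device names and metrics are invented):

```python
# Several devices advertise the same anycast address; from any given
# client's position, traffic follows the lowest-cost route among them.
anycast_routes = {
    "gateway-east": 10,   # route metric from this client to each instance
    "gateway-west": 25,
    "gateway-core": 40,
}

# The routing system effectively performs this min(): the client is
# served by the instance with the best metric, without knowing there
# are multiple instances at all.
chosen = min(anycast_routes, key=anycast_routes.get)
print(chosen)
```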
While there are certainly a number of differences between IPv4 and IPv6 beyond the obvious address length, keep in mind that most of the fundamentals are very similar, and anyone familiar with IPv4 should be able to make the transition with a little research and practice. Hopefully this article makes the static configuration of IPv6 addresses on a Cisco IOS device a little easier.
Reference from http://www.petri.co.il/ipv6-static-address-configuration.htm
More Info and Tips Related to IPv6:
Routing protocols are used to exchange reachability information between routers. Routing information learned from peers is used to determine the next hop towards the destination. To route traffic correctly, it is necessary to prevent malicious or incorrect routing information from getting introduced into the routing table. This can be done by authenticating the routing updates exchanged between routers. Open Shortest Path First (OSPF) supports plain text authentication and Message Digest 5 (MD5) authentication.
Only three key points need to be remembered when configuring authentication in OSPF:
A) Types of Authentication:
There are three different types of authentication available for OSPF version 2:
1) Null authentication: Null authentication means that there is no authentication; this is the default on Cisco routers.
2) Clear text authentication: In this method, passwords are exchanged in clear text on the network.
3) Cryptographic authentication: The cryptographic method uses the open-standard MD5 (Message Digest 5) hashing algorithm.
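The cryptographic option deserves a closer look. Per RFC 2328's cryptographic authentication, the key is never sent on the wire: the shared secret (padded to 16 bytes) is appended to the OSPF packet and an MD5 digest of the result is transmitted; the receiver repeats the computation with its own copy of the key and compares digests. A simplified sketch of that digest step (the packet bytes below are illustrative, not a real OSPF packet):

```python
import hashlib

def ospf_md5_digest(packet: bytes, key: str) -> bytes:
    """Compute a simplified OSPF MD5 authentication digest.

    Modeled on RFC 2328 Appendix D: the shared key is zero-padded to
    16 bytes, appended to the packet, and the whole thing is hashed
    with MD5. Only the 16-byte digest travels on the wire.
    """
    padded_key = key.encode().ljust(16, b"\x00")[:16]
    return hashlib.md5(packet + padded_key).digest()

packet = b"\x02\x01\x00\x30" + b"\x00" * 44      # illustrative hello-like bytes
digest = ospf_md5_digest(packet, "cisco@123")
print(digest.hex())

# Both routers must hold the same key; a key mismatch yields a
# different digest, so the packet is rejected (an authentication failure,
# as in the debug output shown later in this article).
assert ospf_md5_digest(packet, "cisco@123") != ospf_md5_digest(packet, "wrong")
```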
B) Enabling OSPF Authentication:
OSPF authentication can be enabled in two ways:
1) Per interface: Authentication is enabled per interface using the "ip ospf authentication" command.
2) Area authentication: Authentication for an area can be enabled using the "area authentication" command.
C) Configuring Authentication Key:
In either case, the password must be configured on the interface using the "ip ospf authentication-key" or "ip ospf message-digest-key" command.
Area-based authentication example:
To enable OSPF MD5 authentication:
Router#configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
Router(config-if)#ip ospf message-digest-key 1 md5 cisco@123
Router(config)#router ospf 100
Router(config-router)#area 2 authentication message-digest
To enable clear text authentication:
Router#configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
Router(config-if)#ip ospf authentication-key cisco@123
Router(config)#router ospf 100
Router(config-router)#area 2 authentication
Interface-based authentication example:
To enable OSPF MD5 authentication:
Router(config-if)#ip ospf authentication message-digest
Router(config-if)#ip ospf message-digest-key 1 md5 cisco
To enable clear text authentication:
Router(config-if)#ip ospf authentication
Router(config-if)#ip ospf authentication-key cisco
OSPF commands for each authentication type:
Null authentication:
ip ospf authentication null
Clear text authentication:
area number authentication
ip ospf authentication
ip ospf authentication-key key-value
MD5 authentication:
area number authentication message-digest
ip ospf authentication message-digest
ip ospf message-digest-key key-num md5 key-value
OSPF Virtual Link Authentication:
A virtual link is treated as an interface in area 0. This means that if you enable authentication on area 0, authentication is automatically turned on for the virtual link; but, as discussed above, the password (key) must still be configured. Because a virtual link does not have a physical interface on which you can configure authentication, authentication for a virtual link is configured using the "area virtual-link" command under the OSPF process.
Authentication failures can occur for two reasons:
1) Authentication type mismatch between neighbors
2) Authentication Key mismatch between neighbors
The "debug ip ospf adj" output below indicates a mismatch in authentication type (type 0 is null, type 1 is clear text, and type 2 is MD5):
Router#debug ip ospf adj
OSPF adjacency events debugging is on
*Mar 1 00:02:30.279: OSPF: Rcv pkt from 10.1.1.2, FastEthernet0/0 : Mismatch Authentication type. Input packet specified type 2, we use type 0
*Mar 1 00:02:39.603: OSPF: Rcv pkt from 10.1.1.2, FastEthernet0/0 : Mismatch Authentication type. Input packet specified type 2, we use type 0
Router#sh ip ospf int fa0/0
FastEthernet0/0 is up, line protocol is up
Internet Address 10.1.1.2/24, Area 0
Process ID 100, Router ID 10.1.1.2, Network Type BROADCAST, Cost: 10
Transmit Delay is 1 sec, State DR, Priority 1
Designated Router (ID) 10.1.1.2, Interface address 10.1.1.2
No backup designated router on this network
Timer intervals configured, Hello 10, Dead 40, Wait 40, Retransmit 5
oob-resync timeout 40
Hello due in 00:00:06
Supports Link-local Signaling (LLS)
Cisco NSF helper support enabled
IETF NSF helper support enabled
Index 1/1, flood queue length 0
Last flood scan length is 0, maximum is 0
Last flood scan time is 0 msec, maximum is 0 msec
Neighbor Count is 0, Adjacent neighbor count is 0
Suppress hello for 0 neighbor(s)
Message digest authentication enabled
Youngest key id is 1
---Resources from https://supportforums.cisco.com/docs/DOC-4449
You're probably familiar with 802.11a/b/g/n, all of which are protocols for the 802.11 wireless networking standards. You can safely bet that any device with Wi-Fi connectivity, from your laptop to your smartphone, supports at least wireless B or G, and if it came out within the past few years, it should support wireless N. 802.11n (or the latest draft of it, 802.11n-2009) is the fastest of the ones that are currently widely available. 802.11ac is a new Wi-Fi protocol and is intended to be the natural successor to 802.11n. You may have heard it called "5G Wi-Fi" or "Gigabit Wi-Fi."
Compared with the current 802.11n, what will the new 802.11ac bring us? What should you consider before investing in 802.11ac? There are a few main differences you need to know.
The first thing to get out of the way is - like past Wi-Fi standards - 802.11ac is backwards compatible with 802.11b, g and n. This means you can buy an 802.11ac-equipped device and it will work just fine with your existing router. Similarly you can upgrade to an 802.11ac router and it will work happily with all your existing devices. That said you will need both an 802.11ac router and an 802.11ac device to enjoy the standard’s biggest benefits. And those begin with…
With any new wireless technology speed is always the headline-grabbing feature but, as with every wireless standard to date, the figures tossed around can be highly misleading.
1.3 gigabits per second (Gbps) is the speed most commonly cited for the 802.11ac standard. This translates to 1,300 megabits per second (Mbps), or roughly 162 megabytes per second (MBps). It is vastly quicker than the 450Mbit per second (0.45Gbps) headline speeds quoted on the highest-performing 802.11n routers.
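The unit conversions are easy to check: divide bits by 8 to get bytes.

```python
ac_bps = 1_300_000_000            # 802.11ac theoretical max: 1.3 Gbps
n_bps = 450_000_000               # 802.11n theoretical max: 450 Mbps

print(ac_bps // 1_000_000)        # 1300 Mbps
print(ac_bps / 8 / 1_000_000)     # 162.5 MBps
print(round(ac_bps / n_bps, 2))   # 2.89 -- the theoretical ratio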
So wireless ac is roughly 3x as fast as wireless n? No.
These figures are ‘theoretical maximums’ that are never close to being realised in real world scenarios. In our experience wireless n performance tends to top off around 50-150Mbit and our reviews of draft 802.11ac routers have typically found performance to be closer to 250-300Mbit. So 2.5x faster when close to your router is a good rule of thumb (though far more at distance, which we'll come to shortly).
Happily this gain is likely to increase as 802.11ac devices advance. Wireless 802.11n supports a maximum of four antennas at roughly 100Mbit each, where 802.11ac can support up to eight antennas at over 400Mbit each.
Smaller devices like smartphones tend to fit only a single antenna, but antenna counts rise in tablets (typically two to four) and in laptops and televisions (four to eight). In addition, no 802.11ac router released so far has packed more than six antennas.
A final point: beware routers claiming speeds of 1,750 Megabits. It is a marketing ploy where the manufacturer has added the 1.3Gbit theoretical maximum speed of 802.11ac to the 450Mbit theoretical maximum speed of 802.11n. Sneaky.
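The number on the box is simply the two radios' theoretical maxima added together:

```python
ac_5ghz_mbps = 1300   # 802.11ac theoretical max on the 5GHz radio
n_24ghz_mbps = 450    # 802.11n theoretical max on the 2.4GHz radio

# "AC1750"-style marketing sums both radios, even though no single
# client can use both at the same time.
print(ac_5ghz_mbps + n_24ghz_mbps)  # 1750
```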
While speed is what will likely sell 802.11ac routers, range is equally important. Here wireless ac excels.
The first point to make is the 802.11ac standard lives entirely in the 5GHz spectrum. While some more modern routers broadcast 802.11n in 5GHz as well as 2.4GHz they remain relatively rare.
Consequently, the 5GHz spectrum tends to be 'quiet', meaning much less interference from neighborhood Wi-Fi. This more than counters the fact that, in lab conditions, 5GHz signals do not actually broadcast as far as 2.4GHz signals. 5GHz is also necessary to support the faster speeds of wireless ac.
The second key factor is 802.11ac makes ‘beamforming’ a core part of its spec. Rather than throw out wireless signal equally in all directions, WiFi with beamforming detects where devices are and intensifies the signal in their direction(s).
This technology has been around in proprietary form (it made a huge impact in the D-Link DIR-645), but now it will be inside every 802.11ac router and every 802.11ac device.
The combination of these two technologies is profound. This was most clearly seen with the Linksys EA6500 which hit speeds of 30.2MBps (241.6Mbit) when connecting to a device just two metres away, but still performed at 22.7MBps (181.6Mbit) when 13 metres away with two solid walls in the way. By contrast Linksys’ own EA4500 (identical except being limited to 802.11n) managed 10.6MBps (84.8Mbit) dropping to 2.31MBps (18.48Mbit) under the same conditions.
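The MBps and Mbit figures above are the same measurements in different units (1 MBps = 8 Mbit/s), and the fraction of speed retained at range shows how much better 802.11ac held up:

```python
# (close-range MBps, long-range MBps) from the tests above
ea6500_ac = (30.2, 22.7)   # Linksys EA6500, 802.11ac
ea4500_n = (10.6, 2.31)    # Linksys EA4500, 802.11n

# 802.11ac retains roughly 75% of its speed at range;
# 802.11n retains only about 22% under the same conditions.
for close, far in (ea6500_ac, ea4500_n):
    print(round(close * 8, 2), "->", round(far * 8, 2), "Mbit;",
          round(100 * far / close), "% retained at range")
```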
The real world result is 802.11ac not only enables you to enjoy the fastest 100Mbit (and beyond) fibre optic broadband speeds all over the house, but to enjoy it along with multiple streams of Full HD content, super low latency gaming and blazing fast home networking all at the same time.
Here comes the first caveat. The announcement of the Wi-Fi Alliance’s 802.11ac certification programme means 802.11ac equipped products can now be certified, but that process will take time as thousands of chipsets need to be tested.
Of course some manufacturers have jumped the gun. The 802.11ac routers we have tested are sold as ‘Draft 802.11ac’ products and while many may become certified through a firmware update, it is not guaranteed. Draft 802.11ac products are also not guaranteed to perform optimally with other Draft 802.11ac products - especially between different manufacturers. Certified products are.
The good news is the first certified chipsets are already creeping out and they come from the likes of Intel, Qualcomm, Cisco, Realtek, Marvell, Broadcom and Samsung - manufacturers with extensive networking expertise and who licence their chipsets to others. For example, Intel has only one chipset certified - the ‘Dual band Wireless 7260’ - but it is expected to be at the heart of most Haswell-powered Ultrabooks. The highest profile of these to date is the new 2013 MacBook Air.
Furthermore, adoption should be fast. The first 802.11ac routers carried a hefty premium, but this has dropped quickly to the point where price shouldn’t be a barrier to anyone keen to hop onto the bandwagon. In addition 802.11ac is extremely efficient and it brings power savings compared to 802.11n, meaning it is ideal for mobile devices. The Samsung Galaxy S4 and Samsung Mega phones already pack wireless ac.
As such, while 802.11ac products are only trickling out at present, it will turn into a tidal wave by early 2014.
Wait for 802.11ac?
All of which begs the question: should I now buy any device that isn’t 802.11ac compatible? The short answer is no. If you live alone in a small flat where you have no signal problems 802.11n may serve all your needs, but in larger, multi-user homes and homes with network attached storage the benefits of 802.11ac are simply too good to miss out on. Especially when buying devices you expect to keep for a number of years.
The longer answer is 802.11ac is a revolution that will be hard to actively avoid. Wireless ac will be built into most laptops and phones within the next 12 months and routers will increasingly come with it (though ISPs are typically slow to adopt new standards in the routers they give out, so plug an ac router into theirs and switch off their wireless to get around it).
It will take time and money for your home to be fully 802.11ac compatible, but it will be worth it.
---Original Reference from http://www.trustedreviews.com/opinions/802-11ac-vs-802-11n-what-s-the-difference
More Related Networking Reviews:
Software-defined networks (SDN) aren’t for everybody. Through programmability and automation, they promise to make IT life easier. But depending on your IT shop, the benefit may not be worth the effort… or investment.
There are eight considerations for IT shops evaluating SDNs, according to IT management software company SolarWinds. The checklist was compiled from interactions with customers considering or inquiring about SDNs:
1) The industry in which the organization is operating
SDNs work for cloud providers or for any organization that experiences dramatically scaling workloads, says Sanjay Castelino, vice president and market leader of SolarWinds’ network management business. Financial services companies and retail fall into that category, where “the dynamic nature of the business drives IT to be flexible,” Castelino says.
Some that do not fit this mold are publishing and healthcare, he says, two industries that are relatively stable, and not launching or moving around application workloads every day. “Their environments are not as dynamic,” Castelino says.
2) The size of an organization’s network
While there is not a distinct bare metal server or virtual machine threshold for implementing an SDN or not, the rule of thumb is hundreds of IP addresses.
“For 50 IP addresses, it’s not worth the change,” he says. “For hundreds of IP addresses, you might need the automation.”
Castelino recommends doing capacity planning before considering SDNs.
3) The level of complexity of an organization’s network
If there are requirements for a lot of network slicing or segmentation for security and isolation, you might be a good candidate for an SDN. If there are lots of virtual LANs to configure and manage, or there are VLANs that require more automation than others, SDNs might be a good fit.
But change shouldn’t be made just for the sake of it, Castelino says.
“You don’t want to make changes that break things,” he says. “Policy is not a simple task to go implement. Have to have someone deeply steeped in network engineering.”
And you have to validate and test the environment multiple times, he adds.
4) The dynamic nature of an organization’s applications and workloads
This goes back to consideration No. 1: Are you a cloud operator or a hardback book publisher? How often are you launching new applications and closing others? How often are you moving workloads around? Is your environment static and predictable, or always changing, always moving and unpredictable?
5) The number of virtual machines within an organization’s network
“If you’re not at a few hundred, you’re probably early,” Castelino says. He reiterates that if an organization is running hundreds of workloads, it might be worth taking a look at SDNs. Below that level, and with SDN’s immaturity, it might be “way too early” to look at.
6) The organization’s need for agility, flexibility and scalability within the network
See Nos. 4 and 1: If you have a business or IT environment that scales quickly and changes dynamically, you want SDN. But the eventual ease of operations will come with some initial work. The time it takes to get into SDN is not small today, Castelino notes – it’s still at the bleeding edge of the technology curve.
“Network engineering skills and capital resources are going to be key,” he says. “It could be an expensive proposition so you need to ensure value on the other side.”
7) The organization’s need to simplify security measures and control access to applications
The benefit of SDN is that things get done the same way all the time, through policy, even though the environment is dynamic and always changing. Security and network access control in a dynamic environment can be a nightmare. It’s important to get policy enforcement right in this regard not only to ease operation but to ensure information stays where it should.
8) The organization’s access to personnel and capital resources
If an IT shop doesn’t have network engineering expertise, or its personnel are stretched thin, SDN is not the project to undertake, Castelino says.
“There will be lots of bumps in the road,” he says. “It’s going to be a lot of work and take time.”
SDN deployments are done in parallel with the production environment, and are tested, evaluated, validated and tested again before they are cut over to the production network. It takes time, people and money.
In summary, SDN holds a lot of promise. There are a lot of problems it can solve… but also a lot it can create if the environment is not conducive to the effort of transitioning to an SDN-programmable and automated IT operation.
“The hype cycle can sometimes lead to an ugly bursting of the bubble,” Castelino says. “SDN has its purpose. But if it is marketed as a panacea for everything under the sun, you’ll see a lot of dramatic failures. It’s not ready for everyone but some can get a lot of value out of it. You just need to go in with eyes open.”
Review resources from http://www.networkworld.com/news/2013/070213-sdn-271479.html
Note: The original version of this article indicated that VXLAN was used for tunneling. As per Cisco's remarks in the comments section, Cisco is using a proprietary tagging encapsulation protocol. The article has been updated for accuracy and to express the author's views about proprietary protocols.
Cisco Systems' SDN strategy is taking shape via its announcement of Dynamic Fabric Automation. DFA is a data center fabric that uses an overlay network to provide orchestration, multitenancy and operational visibility. VMware, Juniper and Alcatel's Nuage also offer network overlays, but DFA has one significant difference: hardware integration in the physical network devices to support bare-metal servers or other physical devices.
DFA is orchestration software that uses a software network controller to manage a tunneling overlay network, using a proprietary 24-bit tag in the Ethernet header to signal tunnel membership over the FabricPath-based fabric to an endpoint.
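Cisco has not documented the tag format, but mechanically a 24-bit segment tag works much like VXLAN's 24-bit VNI. A sketch of packing and unpacking such a tag (the field layout here is invented purely for illustration, not Cisco's actual encapsulation):

```python
def pack_tag(segment_id: int) -> bytes:
    """Pack a hypothetical 24-bit tunnel/segment ID into three bytes."""
    if not 0 <= segment_id < 2 ** 24:
        raise ValueError("segment ID must fit in 24 bits")
    return segment_id.to_bytes(3, "big")

def unpack_tag(tag: bytes) -> int:
    return int.from_bytes(tag, "big")

# 24 bits allow ~16 million segments, versus 4094 usable VLAN IDs
# from 802.1Q's 12-bit field -- the usual motivation for such tags.
assert 2 ** 24 == 16_777_216
assert unpack_tag(pack_tag(4200)) == 4200
```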
Cisco recommends using Nexus gear deployed in a Spine-and-Leaf configuration, though it's not required. This appears to be a workaround for the lack of entropy in the Ethernet header, which would cause poor load balancing in MLAG network designs common in today's networks.
Announced at Cisco Live in Orlando, Florida, this is the first demonstration of Cisco's SDN strategy, which Cisco is calling "Application-Centric Infrastructure."
DFA uses Cisco's Data Center Network Manager (DCNM) as a network controller for the tunnel overlay and manages all the physical and software devices in the Unified Fabric as a distributed control plane. Note that Cisco disagrees with the use of the term "controller" to describe the DCNM. It calls it a Centralized Point of Management (CPoM). Cisco's reasoning is described in the comments section.
DFA works at the device level through an existing feature in NX-OS called Configuration Port Profiles. The DFA controller applies port profiles to logical ports in the Nexus 1000V switch on hypervisor platforms and to the physical leaf-node switches. In this way, both physical and virtual devices can connect using an overlay network.
This control of the network edge, plus integration with cloud platforms such as OpenStack, provides the control for multitenant data centers. DFA enables multitenancy through the underlay network by managing all device configurations and by the use of proprietary overlay networking to isolate traffic.
The DCNM knows the location of endpoints and can graphically display the network slice of each tenant in the architecture, which simplifies troubleshooting and improves network visibility.
Cisco uses the misnomer of "Workload Aware Fabric Network" for this feature. The term implies that the network is adaptively handling traffic flows. In reality, the network controller knows the locations of servers and the network devices that are in the path.
The unified fabric is configured to support a distributed gateway where all leaf nodes share the gateway IP and MAC address for a given subnet. This enables transparent layer-2 functions across all the leaf nodes while also providing layer-3 routing at the network edge.
ARP traffic is terminated on each leaf and BUM traffic is significantly suppressed. Internally, the underlay uses /32 routing for each host to support dynamic L2 mobility at the edge of the network.
[Figure: DFA Endpoints. Source: Greg Ferro]
It's not clear which specific Nexus devices support DFA today. As mentioned, Cisco recommends a Leaf/Spine design using an ECMP network core (FabricPath) between the spine and leaf nodes, which is only supported on specific switch models. DFA also uses iBGP to propagate some configuration data between elements of the tunnel fabric (although it's not yet clear what exactly this data is).
Cisco Plays To Its Strengths
It has been clear for some time that Cisco has not been leading Software Defined Networking technology and, to some extent, lost control of the SDN debate. It's trying to get it back. Cisco has started using a marketing term "Application-Centric Infrastructure" instead of "Software Defined Networking" and that message was consistently repeated at Cisco Live.
With DFA, Cisco is the only vendor today with a strategy to orchestrate physical tunnelling functions in network hardware (albeit with a proprietary mechanism with poor interoperability) with software network agents such as the Nexus 1000V.
This allows the deployment of overlay networks that connect both virtualized platforms such as OpenStack or VMware to non-virtualized devices and servers. Instead of supporting virtual workloads in a cloud platform like vCloud or OpenStack, Cisco can support any workload, anywhere.
This embracing of non-cloud systems will be attractive to many customers and attacks a weakness in existing software overlays such as Nicira, Contrail and Nuage that don't provide support for legacy network integration.
DFA looks to be a strong product that certainly meets customer needs, goes beyond competitive products and plays to Cisco's strengths integrating the physical and virtual networks.
Unfortunately, the choice of a non-standard and proprietary encapsulation is a significant drawback. While some customers may not be concerned about the use of proprietary technology, I recommend DFA be avoided because of it.
It's also clear that Cisco is betting a great deal on its Insieme project, which may offer a better solution for similar use cases. Cisco did not clearly explain Insieme at Cisco Live, so customers will have to wait for more information before making concrete plans.
About the author: Greg Ferro is a freelance Network Architect and Engineer.
---News from http://www.networkcomputing.com/
More Cisco News:
Mobile is connecting the world in a dramatic and breath-taking fashion. It bridges generations, builds communities, ignites ideas and tears down the barriers which separate us. Mobile Asia Expo will accelerate this effect by showcasing the mobile trends and solutions that will transform our lives today and tomorrow. Join us in Connecting the Future!
Mobile Asia Expo 2013 will include:
A world-class Expo, showcasing cutting-edge technology, demonstrations, products, devices and apps to mobile professionals and mobile-passionate consumers
A thought-leadership Conference for senior mobile professionals, featuring visionary keynotes and panel discussions and world-class networking opportunities
App Planet, where app developers can learn and expand their knowledge of the popular mobile app marketplace
New for 2013, the Mobile Asia Expo exhibition will feature the Connected City. The Connected City will demonstrate the current reality and future vision of ‘the connected life’ through a real city street in the heart of Mobile Asia Expo, creating an engaging, visionary, and “connected” experience.
New for 2013
Featuring something for everyone who has an interest in the mobile industry, Mobile Asia Expo 2013 will include many new event offerings:
Showcasing ‘Smart City’–Explore the ways that mobile technology is enabling cities to become more efficient through cutting-edge demonstrations from international exhibitors
More networking opportunities–Connect with the C-level leaders in the Asian mobile industry through a range of unique networking opportunities
My MAE online networking platform–Reach out to new contacts and set up meetings using our exclusive, dual language social networking tool
Doing business in China–Learn about buying and selling your products and services as well as finding the right partner within the Chinese market
Training opportunities–Participate in formalized mobile industry business trainings geared toward director & manager level employees
Even more Innovation Lab presenters–Hear from exhibitors, sponsors or partners on emerging technologies and new products or services in the very successful Innovation Lab
Who Will Attend?
Mobile Asia Expo will feature something for everyone who has an interest in the mobile industry. Expected attendees include:
- B2B Mobile Professionals looking for outstanding networking opportunities with senior industry leaders and to discuss emerging industry trends
- Industry professionals looking to further their mobile knowledge and discover new products and technologies
- Mobile Consumers interested in the latest in mobile technology and devices
- Retail Buyers seeking new products and glimpsing the future of mobile
- App Developers interested in learning the newest developments from the largest platforms
The conference and exhibition programmes in the inaugural GSMA Mobile Asia Expo were attended by more than 15,500 visitors from 81 markets, attracting executives from mobile operators, software companies, device makers, equipment providers and internet companies, as well as government delegations.
While there are many benefits of attending and being part of the Mobile Asia Expo 2013, our event has continued to grow and evolve around some fundamental values:
Bringing Together the Mobile Ecosystem: Mobile has become part and parcel of our daily lives today. GSMA brings together mobile operators and different players in the mobile ecosystem in this annual mobile-focused occasion where mobile innovations, ideas and business deals are fostered and accelerated.
Where the World meets Chinese and Asian Audiences: This is an international event where prestigious international exhibitors, partners, speakers and press are invited to bring mobile innovations, products and services in front of the prospective industry audience in this rapidly growing market of China and pan-Asia.
Inheriting the Professional Quality of Mobile World Congress: Backed by the professional team behind the industry-renowned Mobile World Congress, Mobile Asia Expo is going to deliver a first-class conference and exhibition experience in the state-of-the-art facilities of the Shanghai New International Exhibition Centre.
Bringing ‘Consumer Experience’ to the Exhibition Floor: Different from the traditional tradeshow setting, Mobile Asia Expo brings ‘Consumer Experience’ to the exhibition floor design and provides opportunities for both trade and consumers to experience the forefront of new mobile technologies in an unconventional setting.
More Related Info and Topics of MAE you can visit:
http://www.cisco.com/web/CN/solutions/sp/mae/index_en.html and http://www.mobileasiaexpo.com/
More Cisco News and Reviews:
Cisco is introducing a new framework for sharing context-aware information to a variety of third-party security providers. The networking giant said it will use pxGrid to make Cisco ISE the central repository for context-aware security architecture via a new ecosystem of partners.
Cisco already has a broad set of mobile device management (MDM) vendor partners for Identity Services Engine (ISE). This week, however, Cisco added a new collection of Security Information and Event Management (SIEM) and threat detection vendors that are integrating with ISE via pxGrid. The initial set of partners includes HP ArcSight, IBM, Lancope, LogRhythm, Splunk, Symantec and Tibco.
PxGrid is a publish-and-subscribe framework through which security products can collect contextual information from ISE, such as user, device, network connection and location. They can then use that information to improve their own analytics. Since pxGrid is bi-directional, these SIEM and threat detection vendors can also send instructions to ISE to revoke or modify network access.
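The pxGrid API itself isn't shown in the article, but the publish-and-subscribe pattern it describes can be sketched generically; the class and topic names below are invented for illustration and are not Cisco's:

```python
from collections import defaultdict
from typing import Callable, Dict, List

class ContextBroker:
    """Toy publish/subscribe hub: producers publish context on named
    topics, consumers register handlers for the topics they care about."""
    def __init__(self) -> None:
        self._handlers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._handlers[topic].append(handler)

    def publish(self, topic: str, context: dict) -> None:
        for handler in self._handlers[topic]:
            handler(context)

broker = ContextBroker()
seen = []

# A SIEM subscribes to session context; an identity engine publishes it.
broker.subscribe("session", seen.append)
broker.publish("session", {"user": "alice", "device": "laptop", "location": "HQ"})

assert seen[0]["user"] == "alice"
```

Because the framework is bi-directional, the same broker could carry a hypothetical "mitigation" topic flowing the other way, which is how a SIEM would tell ISE to revoke or modify access.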
Cisco's decision to align with the SIEM market struck at least one analyst as an odd choice.
"SIEMs are an old technology," said John Katsaros, principal at Internet Research Group. "Some would call it ancient. If you look at it going forward, SIEMs are going to be phased out. I don't think they're going to be around for more than a couple more years."
Instead, Katsaros thinks Cisco should be aligning its security strategy with big data. RSA predicted this shift late last year and evangelized the notion at the RSA conference this year.
Rather than interconnecting different security platforms, Katsaros thinks vendors should be helping enterprises build data warehouses for security management. "Big data makes it more affordable to capture, keep and mine security information. Why (Cisco isn't) going in that direction is beyond me. They didn't show us anything that shows they have a better way of doing things than with big data techniques."
With pxGrid, Cisco ISE adds context everywhere
Kevin Skahill, director of Cisco's secure access and mobility group, said the vendor's plans for pxGrid go well beyond the SIEM and threat-detection market.
"We see potential to do this integration with many other platforms," he said. "PxGrid is a publish-and-subscribe technique that provides a single framework that partners can develop once (with). It allows partners to customize and secure what contexts get shared, because not every partner wants the 80 different attributes that ISE can provide."
Nor is Cisco ISE necessarily being pitched as the heart of a context-aware security architecture, Skahill said, adding that the pxGrid framework will allow vendor partners to share context directly with each other. Cisco is submitting pxGrid to the IETF and other standardization bodies for consideration, he added.
Carefusion, a global manufacturer of medical devices, is an alpha adopter of the pxGrid integration, using a combination of Cisco ISE and Lancope's StealthWatch NetFlow analyzer.
"We are using the ISE and StealthWatch combination to help secure our wired, VPN and wireless access," said Bart Lauwers, Carefusion's vice president of IT infrastructure. "One problem we were facing was how to correlate all this data (from StealthWatch) and ensure that we're taking the right action. In our alpha deployment, we had the ability to examine historic behavior, determine what the impact (of an incident) was, do a full assessment of what the threat was and when it happened, and install a rule to prevent it from happening again."
Lauwers said the integration will allow his team to identify and remediate threats instantly, rather than the weeks or months it could sometimes take.
PxGrid also integrates Cisco ISE into SDN
Cisco will also integrate its software-defined networking strategy with pxGrid, said Dave Framptom, vice president and general manager of Cisco's secure access and mobility product group.
"The Cisco ONE controller will be one of the consumers of context from ISE with pxGrid," he said. "Then that controller can take that information and help direct an action in the network."
PxGrid is available now to prospective partners and will be generally available for customer use in the first quarter of 2014.
Reviews from http://searchnetworking.techtarget.com
More Cisco Related: