Review: Meru Networks’ E(z)RF Service Assurance Module

Not an Ideal Setup

The setup is not ideal, as it does not fully replicate the traffic characteristics a real wireless client would generate. Any conditions caused by the wired network may have their effect doubled, since the traffic both originates and terminates at the SA appliance, which could sit in a data centre far removed from the AP under test.

The recorded measurements will not accurately reflect those of a real wireless client, either, as the access point acting as the client is likely mounted on a wall or in the ceiling rather than sitting on the office floor or a desk. Access points are typically deployed to offer optimal coverage for real clients on the floor, not artificial ones in the rafters, so the APs may be farther from each other than a client machine would be from its AP, potentially worsening the throughput and latency figures the SAM detects.

In addition, the throughput tests have a best-case-scenario flavor to them. If the ESSID under test supports 802.11n in either band, a client AP will associate as an 802.11n client. Under these circumstances, the client AP will not experience the network as a down-level 802.11a/b/g client would. The effects of 802.11n coexistence with legacy clients may be tested only by happenstance, if a laptop is using the network under test, and even then I had to dig to discover the presence of these legacy clients in the “All Station” log within the test details.

Also, the SAM doesn’t use real application traffic: The iPerf tool the SAM uses to measure throughput sends a large burst of incompressible data in large frames. Real applications, using different ports with smaller packet sizes and potentially more TCP overhead, will produce different results. As with any benchmarking result, take it with a grain of salt.
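The difference is easy to demonstrate. The Python sketch below is my own illustration, not Meru’s or iPerf’s code; the loopback endpoint, byte counts and chunk sizes are arbitrary assumptions. It pushes the same volume of incompressible data over a TCP socket twice, once in large iPerf-style writes and once in small application-style writes, and on most machines the two report noticeably different throughput over the same path.

```python
# Illustrative sketch only (not Meru's test code): contrast an iPerf-style
# bulk transfer (large writes of incompressible data) with chattier,
# application-style traffic (many small writes) over the same link.

import os
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 50007   # hypothetical loopback endpoint
TOTAL_BYTES = 32 * 1024 * 1024    # 32 MB per test run

def sink_server():
    """Accept connections and discard whatever arrives, iPerf-server style."""
    with socket.socket() as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen()
        for _ in range(2):                 # one connection per test below
            conn, _ = srv.accept()
            with conn:
                while conn.recv(65536):    # drain until the sender closes
                    pass

def timed_send(chunk_size):
    """Send TOTAL_BYTES of incompressible (random) data in chunk_size writes."""
    payload = os.urandom(chunk_size)       # random bytes defeat compression
    sent, start = 0, time.perf_counter()
    with socket.create_connection((HOST, PORT)) as sock:
        while sent < TOTAL_BYTES:
            sock.sendall(payload)
            sent += chunk_size
    elapsed = time.perf_counter() - start
    return sent * 8 / elapsed / 1e6        # megabits per second

threading.Thread(target=sink_server, daemon=True).start()
time.sleep(0.2)                            # let the server start listening

print(f"bulk   (64 KB writes): {timed_send(65536):8.1f} Mbps")
print(f"chatty (512 B writes): {timed_send(512):8.1f} Mbps")
```

The gap between the two figures comes purely from per-write overhead, which is exactly why a bulk iPerf run tends to flatter a network relative to what a chatty business application will actually see.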

Identifying Performance Issues

Despite those shortcomings, when used regularly, the SAM can help identify performance issues in the network, although it is not always as good at explaining why performance degraded. For instance, in one test the SAM correctly identified when my DHCP (Dynamic Host Configuration Protocol) server was down because the AP client could not get an address, noting the symptoms and possible cause in the test results and also sending me an email notification. The SAM was likewise able to suss out an AP with antenna problems.

However, in situations that were bad but not dire, the SAM was less helpful. To troubleshoot poor wireless performance, you need to know about both ends of the connection: interference could be having an effect nearer the client or nearer the AP. With the SAM, you know about the client, but you have to dig for information about the AP because of the way Meru’s WLAN technology works.

Meru’s single-channel architecture utilises the same channel across all APs in the network, and I often found that performance varied because the health check tallied its findings against a different access point from the one tested in the baseline. But the health check results don’t clearly spell that out.

This circumstance may be gleaned from the test logs, if you look at the health check and baseline side by side, but I had to dig into the Network Manager to find out which AP was under test with a given client AP.

Given that Meru controls all the information in the wireless network, whether in the SAM and Network Manager or in the wireless controller, I’d like to see Meru do a better job of correlating data from all its own sources. The company could present a comprehensive, definitive take on what is going wrong with the network somewhere in its solution, rather than requiring administrators to chase between Meru applications to sort it all out on their own.

All this hunting around is made more annoying by the SAM’s antiquated web interface. Because it was designed to work with Internet Explorer 7, I had to run IE8 in compatibility mode to get it to render at all. Even so, I found that some dialog boxes would not register any changes I made, leading me to try configuring the same thing time after time.

Even when it did work, the GUI was hard to deal with. The web interface pays no attention to monitor size, instead packing a lot of poorly formatted data into a cramped series of boxes, which forced me to constantly scroll left and right within a box to see all the data. Indeed, the easiest way I found to look at logs produced by the SAM was to output the results to a comma-delimited rendering of the data, which I could then copy and paste into Notepad.

Additional content from Peter Judge.


Andrew Garcia, eWEEK USA 2014. Ziff Davis Enterprise Inc. All Rights Reserved.
