College already is tough enough
About a year ago, Northern Michigan University in Marquette found itself in a predicament that intuitively seems far more likely to occur at its much larger cousins in Ann Arbor and East Lansing. Despite its comparatively small student body — a little more than 8,000 full-time students — the school’s wireless local area network (WLAN) began to experience serious overloading that significantly slowed data transmission rates for some students and prevented others from accessing the network at all.
Unlike larger schools that have many more classrooms and lecture halls separated by wide spaces, all NMU students attend classes and lectures in a single building. Although the heavy concentration of students by itself might be enough to overload the university’s WLAN during peak periods — especially Tuesday and Thursday afternoons, when all classrooms are at capacity — the situation was complicated further by NMU’s policy that every student must be equipped with a laptop computer, many of which are toted to class.
“We’d have 800 computers in this building turn on all at the same time,” said Dave Maki, NMU’s director of technical services. “There was so much RF in this building, and the [access points, or APs] were so busy dealing with the RF traffic, that once you got more than 12 to 13 people on an AP, they would just fail. Some people would be OK, and some wouldn’t get connected.”
Typically, 802.11 access points can handle about 70 clients simultaneously.
In addition, the high level of radio frequency (RF) traffic in such a concentrated area created serious interference problems. Still another problem with the system was that its access points utilized 802.11b radios exclusively, so any student using an 802.11g device automatically got “dragged backward,” resulting in degraded performance, Maki said.
Maki’s first approach to solving these problems was to deploy a system developed by Airespace, which utilized a controller in the background to manage the radios by adjusting channels and power utilization as needed. The system worked well but had some limitations, Maki said.
“It really worked,” he said. “It took care of all the classrooms upstairs and the lecture hall on the bottom. But once you got above 60 or 70 clients [per AP], it started to waffle.”
Then, last spring, Maki attended NetWorld+Interop in Las Vegas and wandered into the Meru Networks exhibit, where the company was hawking a solution it claimed could support 100 clients or more per access point without connection or degradation issues.
In assessing NMU’s situation, Vaduvar Bhargavan, Meru’s chief technical officer, determined that the biggest problem the school had was that the client devices on the system were choosing the access points to which they were connected, a scenario that is typical of 802.11 networks, he said. However, given NMU’s unique circumstances, that resulted in disruption of service and sporadic access.
“If you have sporadic access, the client can get confused between congestion, which is a very real situation, and poor channel characteristics,” Bhargavan said.
Bhargavan described Meru’s approach as being closer to that used by a cellular network, where the infrastructure decides — via a coordination protocol that exists in the backend of the system — which access point the client will use.
“In our system, the client doesn’t see multiple APs, it sees only one AP and is directed to the one that is best designed to serve that client,” he said. “This is very much a cellular view of the world.”
Meru’s system further replicates cellular infrastructure in its ability to execute seamless handoffs between APs and to achieve appropriate load balancing, which helps to distribute the traffic evenly among available access points.
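The infrastructure-decides model Bhargavan describes can be sketched in a few lines. The class, AP names and client names below are invented for illustration; this shows only the general concept of controller-side assignment and handoff, not Meru’s actual coordination protocol:

```python
# Sketch of the "cellular view" of a WLAN: the controller, not the
# client, decides which access point serves each device, and it
# balances clients across the available APs. Illustrative only.

class Controller:
    def __init__(self, ap_ids):
        # clients currently assigned to each access point
        self.assignments = {ap: set() for ap in ap_ids}

    def load(self, ap):
        return len(self.assignments[ap])

    def associate(self, client, reachable_aps):
        """Direct a new client to the least-loaded AP that can hear it."""
        ap = min(reachable_aps, key=self.load)
        self.assignments[ap].add(client)
        return ap

    def handoff(self, client, old_ap, new_ap):
        """Move a client between APs without the client re-associating."""
        self.assignments[old_ap].discard(client)
        self.assignments[new_ap].add(client)


ctrl = Controller(["ap1", "ap2", "ap3"])
for i in range(9):
    ctrl.associate(f"laptop{i}", ["ap1", "ap2", "ap3"])
# The nine clients end up spread evenly, three per AP.
```

Because every assignment decision flows through one point, the controller can also trigger handoffs proactively, which is what makes the load balancing and seamless roaming possible.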
Bandwidth was another problem tackled by Meru. Before NMU deployed Meru’s system, students were having difficulty downloading research documents and in some cases completing exams because of inconsistent bandwidth. “That’s mission-critical for students,” Bhargavan said. “There were a lot of irate students — and professors.”
Meru solved this problem by developing “fairness” algorithms, layered on top of the 802.11 standard protocol, that deliver bandwidth in a predictable manner by allocating available airtime at any given moment on an equal basis to all users on the system. “This was very important for this particular customer,” Bhargavan said.
Where some systems might give priority to users who are downloading documents over those who are uploading, Meru’s system does not, according to Bhargavan. “It doesn’t matter what they’re doing. We don’t think of it as an uplink versus downlink division, but rather as a per-subscriber or per-device division,” he said. “The way we do that is to aggregate the uplink and downlink time for each device.
“That’s very unusual in the industry today because when most people talk about providing fairness, they’re really talking about priority. The problem with transmission priority is that it works well in a switched network but not in a hub network.”
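The per-device accounting Bhargavan describes — charging uplink and downlink airtime to the same ledger and serving the device that has used the least — can be sketched as a simple scheduler. The class and the microsecond figures below are hypothetical illustrations of the idea, not Meru’s proprietary fairness algorithms:

```python
# Sketch of per-device airtime fairness: the next transmission
# opportunity goes to whichever device has consumed the least total
# airtime, uplink and downlink combined, rather than prioritizing
# one traffic direction over the other.

class AirtimeFairScheduler:
    def __init__(self):
        self.airtime = {}  # device -> total microseconds used, up + down

    def register(self, device):
        self.airtime.setdefault(device, 0.0)

    def charge(self, device, microseconds):
        """Account used airtime to the device, regardless of direction."""
        self.airtime[device] += microseconds

    def next_device(self):
        """Grant the channel to the device with the least aggregate airtime."""
        return min(self.airtime, key=self.airtime.get)


sched = AirtimeFairScheduler()
for d in ("alice", "bob"):
    sched.register(d)
sched.charge("alice", 500)  # alice downloads (downlink airtime)
sched.charge("alice", 200)  # alice also uploads (uplink airtime)
sched.charge("bob", 300)    # bob downloads
# bob has used less aggregate airtime (300 vs. 700), so bob goes next.
```

Note that the scheduler is indifferent to whether the 700 microseconds alice consumed were uploads or downloads — exactly the per-subscriber rather than per-direction division Bhargavan describes.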
Meru’s accomplishment is “definitely a step forward,” according to Philipp Muelbert, principal at analyst firm Adventis. But he cautioned against getting too excited because of the NMU deployment’s unique circumstances.
“In most settings, you’re not going to get this type of density; even in enterprises you won’t get this level of density, as most enterprises still are very hesitant to predominantly rely on Wi-Fi for their backbone,” he said. “They’re still wiring most of their buildings and offices for security reasons. So, there are limited applications where [the density improvement] plays out.”
That could change soon should an emerging trend gain momentum, Muelbert said. Municipalities increasingly are looking at the feasibility of building out citywide Wi-Fi networks and will be trying to find the balance point between infrastructure needs and the cost of building and operating the network. For example, Internet search engine Google proposed in November 2005 to provide free wireless Internet access via a wireless mesh network throughout the city of San Francisco, according to numerous media reports. Google reportedly made the offer in response to San Francisco Mayor Gavin Newsom’s request for information on methods to establish an affordable wireless network throughout the city. Other bidders for the San Francisco project include ISP EarthLink, which was chosen to develop a Wi-Fi network throughout the city of Philadelphia.
In addition, the city of Tempe, Ariz., recently entered into the second phase of its citywide Wi-Fi buildout that will cover 95% of the city when the fifth and final phase is completed by the end of next month.
“One of the things [cities will] be looking at is the scalability of the network,” Muelbert said. “To what scale do I have to build it, how many access points do I need? As you’re covering an entire city, going from 75 users per access point to 100 users per access point makes a huge difference in your cost base. For those types of networks, [the jump] makes a significant difference.”
Perhaps more significant than the minimum 43% improvement in the number of clients that Meru says can be connected to an access point without service degradation is that the company’s solution successfully addresses another key issue that had developed between the 802.11b and 802.11g standards, according to Muelbert.
“They weren’t able to extract the full utilization rate that you’d usually get with the 802.11g standard when 802.11b devices also were on the network,” he said. “Meru’s Air Traffic Control solution addressed that, and the significance of that is you don’t have to worry about future-proofing the network. Those who are using 802.11g devices won’t have to worry about being handicapped by others on the network who are using 802.11b devices.”
Maki agreed. “From a student perspective, no one has to re-associate now. You’d be out there working along on your g device, and someone comes along with a b radio, and it would actually break your session and you’d have to reconnect.”