How does a university plan for a campus-wide network, a relatively new higher education resource that grew from zero demand to constant, on-demand use in just two decades? William Deigaard, IT Director for Networking, Telecommunications and Data Centers, responds, “with lots of innovation, forward thinking, peer reviews, industry research, calculations and guesses about future demand, and an eye on the bottom line.”
Rapid and continual IT evolution complicates university planning for networks, Internet access and data centers. Solutions are expensive and, at many institutions, typically last only 5-7 years before requiring either replacement or significant upgrades. Part of that relatively short lifespan is due to significant changes in individual devices. In 2003, when Rice’s second major campus network (RiceNet2) was being planned, wireless mobile devices were considered almost a luxury, in use by only a small percentage of the community. It was nearly impossible at that time to conceive of a 2013 campus where each of the 13,000 community members would attempt to connect 2-5 wireless devices to the network on a daily basis. Even though this demand seems enormous for a relatively small university, Deigaard says, “we don’t have the crazy quantity of students [that state universities have], so we can provide a higher quality experience. After all, that’s why people should come to Rice.” As a result, Rice faculty, students, and staff all have unmetered bandwidth to and from the Internet, and can use it from any number of devices.
Before the Network
Fifty years ago, when alumnus Dr. Henry Rachford left Humble Oil to become the director of Rice University’s first Research Computational Laboratory (today’s IT division), no network was required. There were fewer than five computers at Rice in 1964, and each was a room-sized giant independently crunching through data sets and punch cards. Not until 1986 would Rice consider an internal “backbone” network to link the growing number of departmental computers.
First Network
By 1990, approximately 50 departmental and campus-wide computers were linked with regional and national networks, precursors of today’s Internet and research networks. A campus-wide network did not appear until 1994. Since Rice’s first connection to an external network in 1990, the daily tasks and tools of faculty, staff and students have undergone significant transitions: chalkboards went from black to white to PowerPoint slides; manual typewriters and adding machines became electric and then gave way to desktop computers; slide rules became calculators, laptops, and phones. To put the change in the perspective of the campus’s physical expansion in the 1990s, the first university network was installed before George R. Brown or Alice Pratt Brown Halls were complete (1991), before the founding of the Baker Institute (1993), and before the opening of Duncan Hall (1994) or Dell Butcher Hall (1998). Wiessmen still lived in Old Wiess when the first external network connections began, there were no shared serveries, and Martel College had not yet been proposed. Duncan and McMurtry Colleges would not open until five years after the installation of the second campus-wide network.
Invisible Investment
The academic halls and residential colleges built in the 1990s are much more obvious than the wiring for that first network, but the investments were similar. Unlike the buildings, the network expanded to serve every person at Rice — from an entry-level employee clocking in for the day to the President contacting the Board of Trustees. By 2003, the first network had aged ungracefully and could no longer support the constant (and growing) usage and explosive demand for better IT services; connection interruptions grew more frequent as old equipment failed, was patched, and failed again. In a landmark decision for technology investments at Rice, the Board of Trustees approved a new multi-million dollar project encompassing both a new network and a new data center. When complete, the combination would earn Rice recognition as one of the most technologically advanced universities of the first decade of the twenty-first century. The project also added the convenience of wireless network access in the most popular locations on campus, including the library and the student center.
Ports of Call for Rice’s Next New Network
If a future-ready network requires as much investment planning as project planning, how do we build or buy a network that will function in a misty future only 10 years away? What destinations will faculty, staff, and students be mapping for their network travel itineraries? Internet ports of call will include current cloud services like Google and Box, both under contract to Rice. Servers and virtual machines hosted in the university’s data center will also be guaranteed local destinations. Connections to the Rice network from external locations will expand as more Rice students and faculty travel, teach, research, and study abroad. Undoubtedly, Rice students, faculty, and staff will also connect to servers and services outside of those under Rice’s control or contract. As Rice’s reputation grows, more legitimate and malicious traffic will flow to the university’s network. The influx of international traffic will increase the number of security attacks on Rice’s resources and data, so the new network must include capacity for automated intrusion detection and prevention services.
Flexible Network Grid Connections, Minimal Disruption
Unlike the 2004 network project, the FY15 network will not require re-wiring the entire campus. For RiceNet3, the majority of the upgrade work will focus on routers, switches, backup power systems, and other behind-the-scenes network equipment rather than re-cabling buildings or pulling new fiber. In these closets and equipment rooms, individual office, lab, and wireless router connections merge onto high-speed network fibers acting as entrance ramps, delivering campus traffic out onto the Internet and into private research networks reaching across the U.S. and around the world. Because the changes will be made in switches and closets during off-hours, Deigaard says it will be “the network that was installed with minimal disruption.”