The Global Grid
Distributed computing takes a giant leap forward
Articles by Deb Derrick
Many computers are better than one, especially for tackling computationally intensive scientific problems and maximizing computer resources. That's the premise of grid computing, the hottest thing to hit the computing world since the development of the Internet. Since the 1990s, grid computing visionaries have had the idea of making many distributed computers function like one giant computer. They envision a world of interlinked computer grids, a seamless computational universe where users can buy computing power like electricity. With the development of software tools and protocols, grid computing projects are springing up around the world.
The National Science Foundation is currently installing hardware for the $53 million TeraGrid, a transcontinental supercomputer with clusters of microcomputers linked by high-speed networks. TeraGrid will have a processing speed eight times faster than today's most powerful academic supercomputer. UNL College of Engineering & Technology faculty, funded by NSF EPSCoR, are setting up grids and high-speed networks of their own to enhance connectivity among the computing community. "This idea of a grid is more like an amorphous net," says computer engineer David Swanson, "with computing resources spread out everywhere. The vision is almost like Star Trek. You won't have to be on the bridge to talk to the computer."
Meet Me at the Grid
Someday your conference room may look like Kauffman 114. This virtual environment, with projectors, large screens, digital cameras and other multimedia components, is just one application of grid computing that uses an ensemble of resources called Access Grid. Byrav Ramamurthy, Hamid Sharif and David Swanson talked with Contacts editor Deb Derrick about their work on the Grid.
What exactly is Access Grid?
DS: One way to think about it is that it's like a pub. We'll both set our Grid to this virtual venue and see who happens to be there. We'll chat with fellow experts on a particular problem we're all working on. When we're done, we'll leave.
BR: You can talk with people in Montana, Illinois and Hawaii, and you can all view a PowerPoint presentation, a Web site and a sophisticated 3D simulation at the same time.
DS: You won't know who's in the room at Boston University unless you click them up on the screen and include them in the conversation. But they'll be there. They may be watching your venue if you're in the same virtual room. And you can switch between several venues.
But I can do videoconferencing on my desktop. What's the difference?
HS: Desktop videoconferencing is typically designed for individual communications, not group-to-group environments. Access Grid uses Internet2's high-speed network; the minimum connection is 155 megabits per second. Internet2 has better routing and Quality of Service. You can transmit multiple channels of video and audio with near-broadcast quality.
DS: The biggest difference is scalability. Most other technologies are point-to-point with a limited number of participants and sites that can be connected simultaneously.
BR: Access Grid allows you to participate in meetings with hundreds of other people. You can have 100 or more sites all logged into the same conference.
DS: Let's say you know someone at the meeting from Boston. You both can keep your windows open and chat while this is going on. These kinds of things are difficult, if not impossible, to pull off with other technologies.
BR: Another difference is functionality. You can stand up and walk around; any place in the room picks up sound. Access Grid uses IP multicasting technology. With IP multicasting, you aren't limited by the capacity of your central server. You can send the same video stream to 10 sites around New York City without having to send 10 packets of data. The network keeps the data as one packet. When that packet hits the router near the destination point, the router makes copies right before it's needed. So you save on bandwidth and processing time.
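The multicast mechanics Ramamurthy describes can be sketched with ordinary sockets. In this illustrative Python fragment (the group address, port and TTL are hypothetical), a receiver joins a multicast group and a sender transmits one copy of each packet; the network, not the sender, handles replication:

```python
import socket
import struct

MCAST_GROUP = "239.1.1.1"  # hypothetical administratively scoped group address
MCAST_PORT = 5007          # hypothetical port

def make_receiver(group=MCAST_GROUP, port=MCAST_PORT):
    """Join a multicast group; the kernel signals membership upstream via IGMP."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    mreq = struct.pack("4s4s", socket.inet_aton(group), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock

def make_sender(ttl=2):
    """One sendto() to the group address reaches every joined receiver;
    routers copy the packet only where paths to members diverge."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, ttl)
    return sock
```

A video server would then call `make_sender().sendto(frame, (MCAST_GROUP, MCAST_PORT))` once per frame, regardless of how many sites are watching.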
How long has the technology been around and who's using it?
BR: Some early software was developed at UC Berkeley and the Lawrence Berkeley labs more than five years ago, but Argonne National Lab took it to where it is today.
DS: Here in Nebraska, one Grid node is up and running at the Kauffman Center on the Lincoln campus. Another is installed in the Telecommunications Engineering Laboratory at the Peter Kiewit Institute. We want to have one or two more nodes.
BR: Worldwide there are about 100 users (see www.accessgrid.org). There are some industry participants, but most are educational and research institutions. I recently received a phone call from a vendor who's putting together Access Grid technology in a box. That kind of third-party deployment is picking up, but not in a big way. Our early participation benefits our students and our research.
Is it as easy as buying and installing the necessary hardware and software?
HS: There is a standard set-up with hardware and software specifications. A typical node can cost between $20,000 and $40,000 depending on the amount and quality of your equipment. Our node was set up by Debashis Taludkar, a CEEN graduate research assistant.
BR: We assembled most of the node at Kauffman ourselves. You do need someone like Lai Lim, one of our master's students, to run the equipment, someone with technical expertise in computers and network administration.
DS: Security isn't as high as with other systems. It also requires a lot of maintenance. Some of that will get ironed out as we figure out what can be done to further simplify things.
What security issues are you looking at?
BR: Access control and encryption. You want to be able to restrict access to certain sites or people. There are tools available for intrusion detection and network monitoring. Privacy is a concern because these large data files are going across the Internet, which is less secure than phone lines or DSL lines. We're studying fast techniques to encrypt these streams so the functionality of these applications doesn't get affected.
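The constraint Ramamurthy mentions, encryption fast enough not to disturb the media stream, usually points toward symmetric stream constructions. The following is only a toy sketch of the XOR-keystream idea, not the group's actual technique; a real system would use a vetted cipher such as AES in counter mode rather than a hash-derived keystream:

```python
import hashlib
from itertools import count

def keystream(key: bytes, nonce: bytes):
    """Toy counter-mode keystream built from SHA-256 blocks.
    Illustrative only; not a substitute for a vetted cipher."""
    for ctr in count():
        yield from hashlib.sha256(key + nonce + ctr.to_bytes(8, "big")).digest()

def xor_stream(data: bytes, key: bytes, nonce: bytes) -> bytes:
    """XOR data against the keystream; applying it a second time decrypts."""
    ks = keystream(key, nonce)
    return bytes(b ^ next(ks) for b in data)
```

Because encryption here is a byte-for-byte XOR, each packet can be enciphered on the fly with no padding or buffering, which is why stream constructions suit real-time media.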
How is the Grid being used on campus?
BR: Some of our Chemistry faculty were able to virtually attend a conference on computational chemistry. We hosted the Supercomputing 2001 Conference originating from Denver last fall. I also sat in on a National Science Foundation blue-ribbon panel on cyber-infrastructure where they were taking input from experts all over the country.
HS: Students are using it to monitor transmissions on our SCOLA project and to study routing protocols and Quality of Service issues.
BR: We're working with the J.D. Edwards program to set up virtual meetings and conferences. We have research interests in areas such as multimedia transmission over the Internet. Students are studying network security and IP multicasting.
HS: We've experimented with communications between the Lincoln and Omaha campuses with very positive results. Now we want to expand our distance education capabilities to provide lectures from other sites and transmit from NU to other sites.
BR: We're working on technical aspects, making it more compact. We're also realizing that the room set-up is a bit intimidating to non-technical users. So we're trying to make it friendlier by hiding some of the clutter of the equipment.
DS: It takes time to make it transparent enough so it's not inhibiting. If you have to step over wires on the floor, or you can't sit comfortably because you might pull something loose, it's not going to be an enjoyable experience. But we've come a long way in just two years. UNL is a significant player in multiple Information Technology areas. We're on the map, and we intend to stay there.
For more information, go to http://rcf.unl.edu/sdi/projects/accessgrid or http://unotelecommlab.org
Building Capacity and Efficiency
How do you harness the storage capacity of your computer systems? What's the glue that holds everything together? Hong Jiang, computer science and engineering, talked with Deb Derrick about his work with distributed storage and middleware and UNL's computer clusters.
What's distributed storage all about?
All computers these days come with relatively large amounts of storage. Collectively, they can form vast amounts of storage capacity. But we don't have an effective way to harness this capacity. Distributed storage involves increasing overall capacity using network bandwidth, providing a reliable way of storing and accessing data, and making it transparent to the user.
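Jiang's three goals, more capacity, reliable access and transparency, can be illustrated with a toy striping scheme. In this sketch, local directories stand in for the disks of networked machines, and the names and chunk size are invented for illustration:

```python
import os
import tempfile

CHUNK = 4  # bytes per stripe; real systems use much larger stripes (e.g., 64 KB)

def store(data: bytes, node_dirs, name):
    """Stripe data round-robin across node directories (stand-ins for remote disks)."""
    for i in range(0, len(data), CHUNK):
        node = node_dirs[(i // CHUNK) % len(node_dirs)]
        with open(os.path.join(node, f"{name}.{i // CHUNK}"), "wb") as f:
            f.write(data[i:i + CHUNK])

def fetch(node_dirs, name) -> bytes:
    """Reassemble the stripes transparently; the caller never sees the layout."""
    out, idx = [], 0
    while True:
        node = node_dirs[idx % len(node_dirs)]
        path = os.path.join(node, f"{name}.{idx}")
        if not os.path.exists(path):
            break
        with open(path, "rb") as f:
            out.append(f.read())
        idx += 1
    return b"".join(out)
```

A real distributed store would add replication or parity so data survives a node failure; the point here is only that callers see one file while its bytes live on many nodes.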
So it's more cost-effective, right?
The idea is to allow users to utilize computing and storage resources that otherwise lie idle. On any machine on a typical day, less than 10 percent of its capacity is used. Yet demand has increased dramatically with data-intensive applications such as multimedia, GIS and bioinformatics. Biological sciences research, such as the Human Genome Project, involves processing and storing huge amounts of data. We're talking hundreds of gigabytes, even terabytes.
What objectives do you have for this project?
We're implementing some low-level facilities to give us a prototype system. One objective is to have a poor man's solution: a huge storage capacity at a tiny fraction of the cost you would pay to buy something commercially. Another objective is to use middleware technology.
What is middleware?
A simple explanation is that it's a layer of software between the user and the computer system hardware. Middleware links the user to the system in a friendly way. It also connects to the hardware to optimize resource-sharing and collective performance. On a PC, middleware is the operating system and user interface.
But this is much more complicated.
We're talking about a much wider range, with a cluster of computers and a network of clusters. You have homogeneous systems that have multiple components of the same type: the same type of computer with the same operating system. Increasingly, we're dealing with heterogeneous systems with different platforms, operating systems and so on. Middleware can be integrated into existing applications or files to increase or provide functionality such as security.
What are students working on in this area?
They're designing a parallel computing platform so users can run applications on a heterogeneous system. To do this, the application needs to be executed concurrently from different machines. When this happens, you're actually intruding into other people's machines. Usually that's okay as long as you don't do something inappropriate. Now what's appropriate and what isn't? These are issues that need to be sorted out.
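The cross-machine platform itself is the students' research, but the basic pattern, one application decomposed into tasks that run concurrently, can be sketched on a single machine with Python's standard executor. The task function here is hypothetical, and pooled workers stand in for remote nodes:

```python
from concurrent.futures import ThreadPoolExecutor

def simulate_cell(cell_id: int) -> int:
    """Hypothetical unit of work: one independent piece of a larger computation."""
    return sum(i * i for i in range(cell_id * 100))

def run_concurrently(n_cells: int):
    # On a real heterogeneous grid, each task would be shipped to a different
    # machine; here pooled workers on one host play that role.
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(simulate_cell, range(n_cells)))
```

The scheduling question is the same at both scales: the tasks must be independent enough to run anywhere, and the platform must decide whose resources each one is allowed to borrow.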
What other work led up to this project?
PrairieFire and other clusters grew out of the Research Computing Facility. RCF already has a sizable pool of users on the Lincoln and Omaha campuses. Before PrairieFire, we built two prototype clusters. The Sandhills cluster has 24 nodes (48 processors), about one-fifth the size of PrairieFire. Before that, we built a 16-processor cluster called Bugeater.
So you were building step-by-step.
Yes, from small to large. Because of its sheer size, we had a lot of difficulty making PrairieFire work. This is probably one of only two or three such systems in the entire country. We were working with vendors day and night to get the system up and running. The smaller clusters are still used for classes and student projects.
We want to have a universitywide high-performance computing facility. We're very optimistic, even with all the obstacles ahead of us. We have very talented students who are diligent and work together well. We have support from university administrators, especially Rich Sincovec and Sharad Seth. On a broader scale, it's all part of the vision of grid computing. Eventually you'll be able to tap into the grid and get all the storage and computing power you want.
Spurred by technological advances and adoption of wireless technologies such as Bluetooth and IEEE 802.11, wired connections in the workplace, and on university campuses, may become a thing of the past.
Through the EPSCoR project, Byrav Ramamurthy and others are developing enhanced wireless technologies to more effectively link researchers on NU campuses. It's an ambitious effort that's modeled after the Wireless Andrew project at Carnegie Mellon University, the largest high-speed wireless network in the world.
"Wireless technologies are mobile and flexible," says Ramamurthy, "but there's a wide range of technical issues to work out."
In a wireless environment, you can connect your computer to an access point through a network interface card, which then connects to a wired network. But convenience costs in terms of security and speed.
"A wireless network is vulnerable to security compromises and attacks," Ramamurthy says.
Another limitation is bandwidth. The fastest data transfer speed is about 11 megabits per second, much lower than with a wired network.
Interference also is a problem. "In wireless environments," he says, "the connection you get to the base station depends on various factors: the layout of the building, what's around and what's inside. So you see some differences in performance."
"We've been working with the Bluetooth toolkit to develop new applications for the technology," he says. "We're looking at wireless application protocols to provide a software environment for laptops, PDAs, phones and other devices. The idea is to integrate the wired Web experience with wireless technology."
So if you want to display a Web site on a 4x6 PDA, how do you do this? How do you translate content from wired computers to wireless devices that have less computing power, less bandwidth and smaller displays? We're working on systems to provide a seamless conversion.
What does the future hold? Products and technologies that will change the workplace and the way we live. Sensors that take in data and communicate directly with users traveling in the area. Communication chips small enough to wear as a patch and deployable using any device. The ability to send data on the fly in ad-hoc environments such as combat situations.
"There are many interesting things you can do with wireless technology," Ramamurthy says. "We're just getting started."
In the middle of farmland ten miles east of Council Bluffs, there are some high-tech experiments going on that will help shape the future of video broadcasting and distance education.
From the SCOLA campus in McClelland, Iowa, foreign language programming is broadcast via satellite to more than 15 million viewers in the United States. That same programming is now available on the high-speed Internet2 platform, thanks to a National Science Foundation project directed by Hamid Sharif, professor of computer and electronics engineering.
The three-year grant, funded through the NSF EPSCoR program, aims to set up an advanced multimedia hub in the Telecommunications Engineering Laboratory at the Peter Kiewit Institute in Omaha. Quiming Zhu, professor of computer science at UNOs College of Information Science & Technology, is co-principal investigator on the project. Seed funding was provided by The Peter Kiewit Institute, the Omaha World Herald and the Nebraska Research Initiative.
"Offering SCOLA channels over Internet2 is a natural application of this new technology," Sharif says. "This project has given us a research testbed to study video broadcasting and transmission protocols. But we're not doing just theoretical research. We're providing a service for SCOLA and its wider audience. That's very gratifying."
SCOLA (Satellite Communications for LeArning) is a Nebraska non-profit organization that provides real-time foreign news, cultural programming and video language programs. Its programming originates from 58 countries in 44 languages through three 24-hour channels.
Until recently, SCOLA was available only through satellite channels. Even with its 17 different satellites and other equipment, the signals are unidirectional and reception quality is often not the best.
The Internet2 platform connects SCOLA with more than 190 universities, research institutions and other members of Internet2. The high-speed connection offers a full-duplex link to transmit SCOLA channels to Internet2 institutions.
"This is a wonderful opportunity," says Francis Lajba, SCOLA's director. "Schools that don't have a satellite dish now have the option of receiving SCOLA channels via Internet2. It promotes the delivery of our programming and has some exciting possibilities for the future."
Currently, a satellite dish outside the Peter Kiewit Institute receives the SCOLA transmissions, feeds them through digital encoders and re-broadcasts all three channels over equipment housed in the Telecommunications Engineering Laboratory. Institutions such as Harvard and Arizona State University that are receiving those broadcasts report a high level of satisfaction with the service and quality. "We absolutely love the idea of SCOLA," says Connie Christo of Harvard University's Language Resource Center. "The quality is very impressive," says Peter Lafford from Arizona State University, "and with easy access at the desktop and in the computing lab, faculty and students can have much greater access than with the video/campus broadband feed."
Sharif's research group is studying Quality of Service issues and developing protocols to support near-broadcast-quality transmissions over the high-speed network. They've designed software reflectors "that reflect back to us what we're sending," he says. "We can monitor frame loss, packet loss, degradation in quality of signal and other factors."
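A software reflector of the kind Sharif describes can be approximated in a few lines: the far end echoes each datagram, and the sender tallies which sequence numbers come back. This sketch (the addresses, packet counts and timeout are arbitrary) measures only loss; a fuller tool would also timestamp packets to track jitter and signal degradation:

```python
import socket
import struct
import threading

def reflector(sock, n_packets):
    """Echo each datagram back to its sender (runs at the far end of the link)."""
    for _ in range(n_packets):
        data, addr = sock.recvfrom(2048)
        sock.sendto(data, addr)

def probe(server_addr, n_packets=20, timeout=0.5):
    """Send sequence-numbered probes; return how many never came back."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    received = set()
    for seq in range(n_packets):
        sock.sendto(struct.pack("!I", seq), server_addr)
        try:
            data, _ = sock.recvfrom(2048)
            received.add(struct.unpack("!I", data)[0])
        except socket.timeout:
            pass  # an unanswered probe counts as a lost packet
    sock.close()
    return n_packets - len(received)
```

Running the probe periodically against a reflector at the receiving institution gives a continuous loss estimate for the path carrying the video channels.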
Zhu and his students are studying how the Internet2 video/audio streams can best be coded and archived for storage and accessibility. Their focus is on content-based archiving, in which the archive can be searched based on content rather than date or time.
"Internet2 is a first step," says Lajba, "and it's a fantastic step, but it goes a lot deeper than that. We're opening up new doors for people to learn languages and experience other parts of the world."
For more information on this project, go to http://scola.unotelecommlab.org
Tools for Tough Times
PrairieFire's revved-up computational power is tailor-made to support research under way in bioinformatics and related areas at the University of Nebraska-Lincoln. That's good news for one research group led by Steve Reichenbach, professor of computer science and engineering, that is using the supercomputer to develop drought management tools for the state's agricultural producers and government agencies.
Drought is costly to farmers and those who insure them against crop loss. From 1989 to 1998, the U.S. Department of Agriculture's Risk Management Agency (RMA) paid out more than $85 million in drought-related claims from Nebraska farmers. Much of Nebraska already has disaster status this year.
RMA currently uses historical data to assess crop loss risk under severe weather conditions and produces maps based on these assessments. "It's a time-consuming, labor-intensive process," says UNL agronomist Bill Waltman. "These maps can't easily be regenerated to account for extreme shifts in climate that frequently occur in the Great Plains region."
"Rapid mapping of drought-affected areas, particularly as events unfold, will help farmers and governmental agencies better respond to a disaster and mitigate such devastating crop loss in the future," he says.
The real challenge is putting all this information into a format that can be readily understood and used to make decisions. Over the past year, Steve Goddard, assistant professor of computer science and engineering, and his students have developed a four-layer software architecture for the project that includes:
drought indices that quantify the magnitude and severity of a drought
mapping tools that visualize droughts and their impacts
data mining tools that identify relationships between climate events in the Pacific Ocean and droughts in the United States
exposure analysis tools to help quantify the impact of droughts
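The article doesn't spell out which indices the team implemented, but the simplest of the family, percent of normal precipitation, shows the idea. The rainfall figures below are hypothetical:

```python
def percent_of_normal(observed: float, historical) -> float:
    """Observed precipitation as a percentage of the long-term mean.
    Values well under 100 flag developing drought."""
    normal = sum(historical) / len(historical)
    return 100.0 * observed / normal

# Hypothetical July rainfall (inches) for one county in prior years:
JULY_HISTORY = [3.1, 2.8, 3.5, 2.9, 3.2]
```

With those invented figures, an observation of 1.2 inches would put the county at roughly 39 percent of normal, the kind of value a mapping layer would shade as severe.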
All these tools "provide a convergence of evidence that helps us better understand drought," Goddard says, "and how it affects our society and economy."
Some tools are already online (http://nadss.unl.edu). The exposure analysis, for example, pulls information from a couple of databases and creates maps that show how much crop loss you might expect given a specific drought episode of a certain extent. Maps have been developed for all Nebraska counties.
"New information technologies hold great promise for improving risk management practices," Reichenbach says, "and that's of interest to the USDA." The research team recently hosted a visit by RMA head Ross Davidson, who traveled from Washington, D.C., to learn more about the project.
The $1.1 million project, which runs through June 2004, is funded by the National Science Foundation's Digital Government program. The team also includes Donald Wilhite, Michael Hayes and Mark Svoboda from the National Drought Mitigation Center; Ken Hubbard from the High Plains Climate Center; and Jitender Deogun and Peter Revesz from Computer Science and Engineering.