Technology for Higher Education 2000
A Proposal to the Intel Corporation
From the University of Washington
Submitted by:
Lee L. Huntsman, Provost
Coordinating faculty:
George Lake, Department of Astronomy
Edward D. Lazowska, Department of Computer Science & Engineering
Gregory L. Zick, Department of Electrical Engineering
Principal additional participants:
Leroy Hood, Department of Molecular Biotechnology
David Baker, Departments of Bioengineering, Molecular & Cell Biology
Craig Hogan, Thomas Quinn, Derek Richardson, Scott Anderson, Bruce Balick,
Department of Astronomy
Chris Stubbs, Ken Young, Toby Burnett, John Rehr, Ed Stern, Jeffrey Wilkes,
Department of Physics
Hannes Jonsson, Wes Borden, Department of Chemistry
Randall Leveque, K.K. Tung, Joachim Stadel, Department of Applied Mathematics
Werner Stuetzle, Department of Statistics
Doug Lind, Department of Mathematics
Valerie Daggett, Keith Laidig, Department of Medicinal Chemistry
Ronald A. Johnson, Vice President and Vice Provost for Computing &
Communications, and Letcher Ross and Amy Philipson, Computing &
Communications
Yasuo Kuga, Jenq-Neng Hwang, Ming-Ting Sun, James Ritcey, Andrew Yang, Arun
Somani, Steve Venema, Mani Soma, Blake Hannaford, Leung Tsang, Akira
Ishimaru, Carl Sechen, Department of Electrical Engineering
Brian Bershad, David Salesin, John Zahorjan, Richard Ladner, Department of
Computer Science and Engineering
Raj Bordia, Lucien Brush, Brian Flinn, Gretchen Kalonji, Department of Materials
Science and Engineering
Uri Shumlak, Scott Eberhardt, Department of Aeronautics & Astronautics
Jim Riley, Phil Malte, Mark Tuttle, Colin Daly, Per Reinhall, Duane Storti, Mark
Ganter, Minoru Taya, Department of Mechanical Engineering
Dennis Lettenmaier, George Turkiyyah, Department of Civil Engineering
Geri Bunker, University Libraries
Sherrilynne Fuller, Department of Medical Education
Brent Stewart, Department of Radiology
Jim Brinkley, Department of Biological Structure
Siva Narayanan, Doug Stewart, Department of Medicine
Shawn Brixey, Richard Karpen, School of Music
Table of Contents

1. Introduction/Overview
2. The University of Washington and its Partnership with Intel
3. Advanced Scientific Computing
4. Digital Media
5. Supporting the “Educational Enterprise”
6. Institutional Support and Coordination
Appendix I – Project Details
Appendix II – Summary of Equipment Requests
Appendix III – Curricula Vitae of Participants
1. Introduction
The University of Washington is one of the nation’s preeminent research and educational
institutions. For more than twenty years, UW has ranked among the top five institutions in annual
Federal research obligations. (Currently -- 1994 data is the most recent available -- UW is second
behind Johns Hopkins, with MIT and Stanford in third and fourth positions. UW also ranks third
nationally in industrial research support, and fifth in licensing revenues from inventions.) The UW
faculty includes nearly one hundred members of the National Academies, eight MacArthur
Foundation “genius” award winners, and four Nobel laureates in the past decade. Programs from
across the campus are ranked among the top ten in the nation in their disciplines: programs in
Medicine such as primary care, family medicine, rural medicine, and nursing (each of which is ranked
#1 in the nation); programs in Engineering such as bioengineering and computer science &
engineering; programs in the Sciences such as astronomy, atmospheric sciences, oceanography,
statistics, and zoology; and programs in the Fine Arts such as creative writing and drama. And the
University of Washington’s Office of Computing & Communications also has established a national
leadership position, both for campus infrastructure and for national initiatives such as Internet2 and
Research TV. It is simply a fact that there is no institution comparable to the University of
Washington in the entire quadrant of the nation that lies north of Berkeley and west of the
Mississippi – and there is only a small handful of comparable institutions nationwide. At the
University of Washington, first class research, first class education, and first class outreach are
seamlessly intertwined, across the campus.
This breadth and depth of excellence is one reason for the strength of the existing partnership
between Intel and the University of Washington. In 1995, for example, 85 University of Washington
students took employment with Intel, making the University of Washington the #1 supplier in the
nation to Intel – first among 38 “Strategic” campuses. This is particularly remarkable
since until very recently Intel has had no “geographic advantage” in recruiting UW students. (It is
less surprising that UW is the #1 supplier in the nation to Microsoft.) Intel has invested heavily in
UW’s Departments of Electrical Engineering and Computer Science & Engineering, to the
enormous benefit of those programs, and, we hope, to the enormous benefit of Intel as well.
The entire University of Washington community looks forward to the current Request for Proposals
as an opportunity to dramatically intensify this already-strong relationship. We propose to partner
with Intel in undertaking three campus-wide initiatives that will serve as national models for driving
the demand for Intel Architecture systems in the research and education communities, and for
transforming those communities with Intel as a partner.
Our proposal is predicated upon two key observations:

• Advanced scientific computing applications are of enormous importance in science and
engineering, and in the education of scientists and engineers. These applications are ripe for
moving to Intel Architecture systems -- individual systems, shared-memory multiprocessors, and
cluster-based systems -- not only because of the performance of IA systems, but also because
the move will facilitate the integration of these applications with education and with modern
educational technology. This integration of research and education defines UW’s mission as a
state-funded research institution.

• While the demand for cycles in these advanced scientific computing applications will continue to
grow, in the long term, “traditional” high performance applications will be consuming a
diminishing percentage of the world’s cycles. One does not need to be clairvoyant to see the
emerging importance of:
  - Advanced digital media applications for science, engineering, education, and entertainment.
Visualization, animation, images and video all play an ever-expanding role in enabling
learning on demand, distance learning, collaboratories, etc. This presents an enormous
opportunity and challenge to the University. UW must continue its fundamental research in
the enabling technologies for these changes (computer graphics, digital media,
human/computer interaction, Internet2/NGI, etc.) and integrate them into our “core
business” of education. There is a fundamental risk to our very survival; if we don’t become
educational content providers using all the available technology, we may be reduced to an
“educational theme park.” Conducting these activities on IA systems provides enormous
leverage both for Intel and for the University.
  - Infrastructure support / data management / user management. Today, the “educational
enterprise” -- mail services, web services, file services -- runs on a mixture of mainframes
and UNIX clusters. Moving these activities to IA systems again provides enormous leverage.
Universities such as UW have tremendous experience with the kinds of large-scale,
cost-effective, open systems that the commercial world is increasingly demanding.
Taking our lead from these observations, our proposal for campus-wide partnership with Intel has
three foci:
1. Creating a showcase for advanced scientific computing applications on Intel Architecture
systems -- cutting-edge high-demand scientific and engineering applications such as
Computational Astrophysics and VLSI design, and programs aimed at educating the next
generation of experts in these fields, such as our new Applied and Computational Mathematical
Sciences program.
2. Developing and deploying advanced digital media applications for the educational environment,
based upon Intel Architecture systems. Applications and underlying fundamental research
include computer graphics, computer animation, digital video, scientific visualization, multimedia
libraries, desktop telecollaboration, and digital learning on demand (including digital video
servers and the distribution and desktop infrastructure for streamed and non-real-time digital
video). Much of this work will be conducted within the framework of our multi-institutional
Internet2 / Next Generation Internet (NGI) and Research TV (RTV) consortium initiatives. We
firmly believe that the potential for growth in this arena over the next five years transcends all
others.
3. Demonstrating the use of highest-end Intel Architecture server clusters for supporting the
“educational enterprise” -- administrative computing, electronic mail, large scale shared file
systems, and multimedia web services. As one of a number of examples, the University of
Washington currently supports 60,000 email accounts on two clusters of 50 RISC UNIX
workstations each; this has served as a national and international reference for the (unnamed)
vendor of these workstations and has propagated similar architectures across higher education
and the private sector. We wish to begin the migration of this enterprise support to the Intel
Architecture. We have a commitment from Microsoft to partner in this endeavor.
In addition to leveraging our institutional expertise, our proposal leverages many established
partnerships:

• Between the University of Washington and Intel.
• Between the University of Washington and Microsoft.
• Between the University of Washington and other key national institutions of higher education such as Carnegie Mellon.
• Between the University of Washington and NSF’s Partnerships for Advanced Computing Infrastructure.
• Between the University of Washington and regional partners in medical information technology.
• Between Computer Science & Engineering, Electrical Engineering, Computing & Communications, and other programs in engineering, the sciences, and the arts.
The remainder of this proposal consists of five additional sections plus a number of appendices.
Section 2 elaborates further on the University of Washington and its relationship with Intel.
Sections 3 through 5 provide a high-level view of our proposed initiatives in advanced scientific
computing applications, digital media, and support for the educational enterprise.
Section 6 describes the institution-wide coordination plan, and suggests a national coordination
effort that the University of Washington is eager to undertake on Intel’s behalf.
The appendices provide more detailed information concerning a number of our proposed initiatives.
2. The University of Washington and its Partnership with Intel
Founded in 1861, the University of Washington has 34,000 students (25,000 undergraduate and
9,000 graduate/professional) and 3,500 faculty (2,900 teaching and 600 research) divided into 16
schools and colleges. The University’s annual operating budget is roughly $1.4 billion, 18% of which
comes from the State.
In addition to a host of outstanding programs, as outlined at the start of the previous section, the
University of Washington has long encouraged multidisciplinary programs and regional programs.
Examples of the former include our NSF Science & Technology Center in Molecular Biotechnology,
headed by William H. Gates III Professor Leroy Hood, and our outstanding Bioengineering
Program, with its new NSF Engineering Research Center in engineered biomaterials. Examples of
the latter include our leadership in regional networking (we brought the ARPANET to the Pacific
Northwest, serve as the Network Operations Center for NorthWestNet, and have become the
regional catalyst and focus for Internet2 activities leading to our designation as one of the 8
leading-edge Gigapop sites), and the four-state medical education and treatment effort that has been
headquartered here for more than two decades (including leadership telemedicine efforts, beginning
with remote radiological diagnosis between Alaska and Seattle in the 1970s, and most recently
including NII-award-winning efforts in medical informatics).
The Departments of Electrical Engineering and Computer Science & Engineering are outstanding
programs, each of which has a long and close relationship with Intel. But the fact that the University
of Washington is the nation’s #1 supplier of new graduates to Intel demonstrates the breadth and
depth of the UW/Intel relationship -- for the students that Intel hires come from across our
campus, and these students choose Intel despite a highly attractive geographical and professional
environment in the State of Washington. It also demonstrates UW’s institutional commitment to
Intel as one of the nation’s great companies, for UW and Intel have worked together aggressively
over the past half dozen years to create this wonderful success story.
For all of these reasons, the current Intel Request for Proposals is uniquely well suited to the
University of Washington. Intel’s proactive role has stimulated the beginning of new partnerships
that will grow under our three proposed campus-wide initiatives. These partnerships will provide
outstanding national models for the rapid conversion of academic and scientific computing to Intel
Architecture systems. Without Intel as a partner, this transition might be delayed by as much as a
decade.
The first of our three initiatives -- creating a showcase for advanced scientific computing
applications on Intel Architecture systems – benefits particularly from our long experience and
established excellence in multidisciplinary computational science and engineering, including cluster
computing.
The second of these initiatives -- developing and deploying advanced digital media applications for
the educational environment -- takes advantage of campus-wide experience and expertise in digital
media (including systems, applications, and distance learning), as well as key partnerships with other
educational institutions and our regional and national role in Internet2.
The third of these initiatives -- demonstrating the use of highest-end Intel Architecture server
clusters for supporting the “educational enterprise” -- builds upon the Office of Computing &
Communications’ national leadership in large-scale cluster computing, and upon the close
relationship of Computing & Communications, the Department of Computer Science &
Engineering, and Microsoft.
Finally, our ability to assume a national coordinating role in this initiative builds upon our experience
in similar roles in other partnerships, with companies including IBM and Microsoft.
3. Advanced Scientific Computing
This component addresses several themes that are nearly all motivated by the ubiquity of
experiments that generate 5-40 GB/day, integrating to ~10 TB/yr. This data rate was enabled by two
simple developments: detectors with millions of pixels and inexpensive exabyte tapes that allow 3 TB/yr
to be cheaply stacked on a shelf. The existence of a cheap storage medium emboldened collection,
but did nothing for analysis or visualization. We now have to match the price of the exabyte tapes with
inexpensive Intel workstations that deliver a substantial fraction of a supercomputer at 10^-3 of its
price. Our scientific computing projects have the following themes:

• Data Ingestion: How does one process data to create appropriate elements of an on-line database that services most queries and provides a content key meta-index for the raw data?
• Data Archive: How does one manage the ingested data and handle the execution of queries?
• Data Analysis: How does one process queries that involve reduction of data?
• Knowledge Discovery: How is flexible data mining handled?
• Problem Solving Environment: How are the physical simulations linked to the data?
• High-end computing cycles: How are the high-resolution simulations created?
We carefully vetted campus projects, looking for the following attributes:

• High performance computing
• Commitments to move significant software to IA systems and Windows NT
• Use of Intel equipment to collect, analyze, explore, and visualize important physical data
• The creation of environments used jointly for research and education
The Cosmos: Variability, High Energy, A Neutrino View
UW is pioneering techniques to search for time variability in TB datasets. The Universe is full of
dynamic activity spanning timescales from milliseconds to billions of years. Professor Stubbs’ group
has built several high performance CCD mosaic cameras used in variability studies: MACHO
(search for gravitational lensing by dark matter), LONEOS and TIMESLICE.
Distant supernovae are the most promising way of determining the geometry and ultimate fate of
the Universe: will it expand forever or ultimately recollapse? TIMESLICE will search for these
objects using a new $1.5M mosaic CCD camera (built with support from the NSF, the Seaver and
Packard Foundations and the Murdock Charitable Trust) deployed at the 3.5m telescope at Apache
Point in New Mexico (UW owns 31.6% of the time).
The LONEOS camera uses the Lowell Observatory to search for near-earth bodies. These have
intrinsic scientific interest, but there is also a desire to build a monitoring system that would provide
early warning of events such as the one portrayed in the recent “Asteroid” television mini-series. That
series was not based on idle speculation. Events that can lead to mass extinctions (such as the one
believed to have done in the dinosaurs) can and will occur. Complacency owes only to the known
timescale; such events happen about once every 25 million years.
The Astronomy Department is a key participant in the Sloan Digital Sky Survey (SDSS)—an epochal
study of the structure of the present-day Universe, producing sky positions and multicolor images
for 10^7 galaxies with redshifts for 10^6. The image database will be 12 TB with a spinning Science
Archive of nearly 1 TB. SDSS uses a special purpose, dedicated telescope (also at Apache Point) and
the world’s largest CCD array. UW built much of the equipment, provides the simulations to test
theories of structure formation and is engaged in many other projects including: 1) a search for the
largest and nearest comets that comprise the inner Oort cloud and the Kuiper disk (Professor
Quinn), 2) using a deeper multi-pass southern survey to search for the variable objects (Professor
Hogan), and 3) making sure that the system is designed flexibly enough to detect entirely new and
unexpected classes of objects (Professors Margon and Anderson).
Professor Burnett is the software coordinator for the Gamma-ray Large Area Space Telescope
(GLAST). Computing in Space Physics has been hampered by the lack of powerful equipment aboard
spacecraft, and planning that misses paradigm shifts in computing that occur over the long
development times of missions. Fortunately, mission planning timescales are decreasing,
communication satellites are permitting vastly more data to be transmitted to the ground and the
design process incorporates far more capable on-board hardware and software. But, systems are
launched and thereby “frozen”, so we need extremely sophisticated simulations to predict their
behavior. GLAST will use on-board trigger processing to distinguish gamma rays from the far more
numerous cosmic ray protons. Proper design requires extensive simulations using particle transport
codes and understanding individual events requires 3-D visualization. We intend to move all of this
to Intel machines with WNT for two reasons: cost-effectiveness and development productivity.
Tools such as Microsoft’s Developer Studio and the Visual C++ compiler are the development
environment of choice. The large international collaboration of 50 scientists (7 US universities, 3 US
government labs and groups in Japan, Italy, Germany and France) will grow when the project moves
to its next phase.
While gamma rays extend our spectral range, neutrinos (ν's) provide an entirely different window to
the center of the sun and nearby supernovae. Proton-decay detectors detected supernova 1987A in a Milky
Way satellite galaxy, the Large Magellanic Cloud. Since such events occur only once per hundred
years per galaxy, we need detectors that can see deeper into space. Professors Wilkes and Young
(UW Physics) are part of the Super Kamiokande neutrino observatory. Their group is analyzing the
high energy events and studying the instrument response as a function of the changes in the
triggering algorithms. Each “improvement” to these algorithms requires extensive Monte Carlo
simulation to determine how the sensitivity has changed. These simulations are easily farmed over
workstations using software such as CONDOR, but the group needs to shift to Intel to achieve the
computing power within their budget constraints.
Molecular Biotechnology
The University of Washington is home to the National Science Foundation Science and Technology
Center in Molecular Biotechnology, headed by Leroy Hood, the William H. Gates III Professor. Professor Richard
Karp (Computer Science & Engineering, recently honored with the National Medal of Science) has
been working with the Center on a number of computationally demanding investigations. Two of
these are particularly appropriate to this Intel initiative.
The first is a new approach to genome sequencing that involves the sequence analysis of both ends
of 300,000 randomly cut 150,000 base pair fragments of human DNA. We will be scaling up to
employ 12 DNA sequencers in this process. Each of these 600,000 sequences must be compared
against the entire databases of chromosomal and cDNA sequences (nearly a billion base pairs) using
rigorous similarity analyses that use hashing algorithms. Using this new technology, we will be able
to generate more than a million base pairs/day only if we have the computing power necessary to
keep up with the data analysis. The same technique will also be used to analyze the mouse genome.
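
A minimal sketch of the general hash-based screening idea is shown below (the k-mer length and the toy sequences are illustrative assumptions, not our actual analysis codes): a reference sequence is indexed by short k-mers, and each read's k-mers are looked up to nominate candidate match positions, which a rigorous alignment would then score.

```cpp
// Minimal sketch of k-mer hash screening for sequence similarity.
// Assumptions (not the production pipeline): k = 12, exact k-mer matches
// are used only to nominate candidate regions for a rigorous alignment.
#include <iostream>
#include <string>
#include <unordered_map>
#include <vector>

using KmerIndex = std::unordered_map<std::string, std::vector<size_t>>;

// Build a hash table mapping every k-mer in the reference to its positions.
KmerIndex buildIndex(const std::string& reference, size_t k) {
    KmerIndex index;
    if (reference.size() < k) return index;
    for (size_t i = 0; i + k <= reference.size(); ++i)
        index[reference.substr(i, k)].push_back(i);
    return index;
}

// Report reference positions whose k-mers also occur in the query read.
std::vector<size_t> candidateHits(const KmerIndex& index,
                                  const std::string& read, size_t k) {
    std::vector<size_t> hits;
    for (size_t i = 0; i + k <= read.size(); ++i) {
        auto it = index.find(read.substr(i, k));
        if (it != index.end())
            hits.insert(hits.end(), it->second.begin(), it->second.end());
    }
    return hits;   // each hit would then be scored with a full alignment
}

int main() {
    const size_t k = 12;
    std::string reference = "ACGTACGTTAGCCGATCGATCGGCTAGCTAGGATCCGATCG";
    std::string read      = "GATCGATCGGCTAGCTAG";
    KmerIndex index = buildIndex(reference, k);
    for (size_t pos : candidateHits(index, read, k))
        std::cout << "candidate match at reference position " << pos << "\n";
    return 0;
}
```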
We have also developed a new approach using ink-jet technology to synthesize DNA fragments
(oligonucleotides) on glass chips. We are using this to build an instrument that can put 150,000
20-mer oligonucleotides on a 3” glass chip. If each oligonucleotide represents a different human
gene, the expression patterns (genes expressed) of human and yeast cells can be determined by
virtue of the specific molecular complementarity each oligonucleotide exhibits for its specific gene
product (messenger RNA). Thus, the quantitative expression patterns of hundreds of thousands of
genes can be determined. To decipher the information pathways of human and yeast cells, literally
tens of thousands of DNA chip snapshots must be taken, analyzed and compared against one
another. We are looking toward the Intel initiative for the computational power for both of these
new initiatives in molecular biotechnology.
X-ray Spectroscopy Data Analysis (UWXAFS)
Professors John Rehr and Ed Stern (UW Physics) will develop an integrated system for automated
X-ray spectroscopy data analysis, archive and visualization to determine the local geometrical atomic
and molecular structure of materials. Data are collected at national synchrotron radiation facilities
and analyzed with accurate electronic structure/x-ray spectroscopy codes (i.e., the UW-XAFS codes) to
determine material structure. UNIX has been the primary environment, but an Intel/NT port would provide
several hundred groups currently using this package with a more cost effective option and better
visualization tools. Since the optimal structural model is often achieved with computational steering,
these visualization options are extremely important. We will also add extensions to handle other
spectroscopic data that result from similar inverse problems such as x-ray crystallography,
electron-microscopy, photoelectron diffraction, etc. These extensions may expand our user
community from hundreds to thousands of scientists.
Chemistry (CHEMLAB)
Several years ago, NSF funding enabled Professor Jonsson to build a high-end instructional
computing laboratory with 17 SGI workstations. The resulting course won several awards including
the 1995 DOE Computational Science Teaching Award and the course material is being prepared
for a Springer Verlag monograph. A new sequence of Physical Chemistry courses designed for
honors chemistry students will use the Intel equipment requested here. Students have produced
much of the current course software including: Glman (classical dynamics of metals combined with
3D graphics and systematic structure analysis) and 1DWavepacket (quantum mechanical dynamics
simulations in 1D). We are committed to training students with the skills most relevant to industrial
employers.
This lab will also be a significant resource for the migration of the research computing currently
performed on local UNIX workstations and at remote supercomputing facilities. (Professor
Jonsson’s group is among the larger users of the SDSC Paragon MPP.) The P6 lab will deliver an
order of magnitude more computing power than the group currently uses at SDSC, applying new
algorithms to a wide range of chemical problems, including finding optimal paths for atomic
rearrangements (to calculate transition rates) and computing statistical averages of quantum mechanical
systems using Feynman Path Integrals. In both of these examples, the calculation uses an ensemble of
system replicas that behaves like an elastic band with beads that have nearest-neighbor interactions. While we have successfully
modeled many systems using empirical potential functions, increased computing power will enable
ab initio force calculations by solving for the electron wavefunctions. The lab will be outfitted with 16
user machines (Providence single CPU systems that will be upgraded at a future time), one server
and two single CPU workstations for key faculty to develop software.
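
A minimal sketch of the replica-chain idea follows (the one-dimensional double-well potential, spring constant, and simple steepest-descent relaxation are illustrative assumptions, not our production force fields or optimizers): a chain of bead replicas, coupled to nearest neighbors by harmonic springs, is relaxed on a potential energy surface.

```cpp
// Sketch of an "elastic band of beads": replicas of a system coupled by
// nearest-neighbor springs, relaxed on a potential energy surface.
// The 1-D potential and spring constant are illustrative assumptions.
#include <cmath>
#include <iostream>
#include <vector>

double potential(double x)      { return std::pow(x * x - 1.0, 2); } // double well
double potentialForce(double x) { return -4.0 * x * (x * x - 1.0); } // -dV/dx

int main() {
    const int    nBeads  = 11;    // replicas along the path
    const double kSpring = 5.0;   // nearest-neighbor coupling
    const double step    = 0.01;  // steepest-descent step size

    // Initial straight path between the two minima at x = -1 and x = +1.
    std::vector<double> x(nBeads);
    for (int i = 0; i < nBeads; ++i)
        x[i] = -1.0 + 2.0 * i / (nBeads - 1);

    // Relax interior beads; endpoints stay fixed at the minima.
    for (int iter = 0; iter < 2000; ++iter) {
        std::vector<double> f(nBeads, 0.0);
        for (int i = 1; i + 1 < nBeads; ++i)
            f[i] = potentialForce(x[i])
                 + kSpring * (x[i + 1] - 2.0 * x[i] + x[i - 1]);
        for (int i = 1; i + 1 < nBeads; ++i)
            x[i] += step * f[i];
    }

    for (int i = 0; i < nBeads; ++i)
        std::cout << "bead " << i << "  x = " << x[i]
                  << "  V = " << potential(x[i]) << "\n";
    return 0;
}
```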
New ACMS Program
The Departments of Applied Mathematics, Computer Science & Engineering, Mathematics, and
Statistics are launching a new undergraduate major in “Applied and Computational Mathematical
Sciences” (ACMS) with a target enrollment of 160 students. This program seeks to train students to
understand and use the full range of scientific methods and tools, to make informed judgments
about technical matters, and to communicate and work in teams to solve complex problems.
Scientific computing will be a significant component of the ACMS program. Students will use
C/C++ (with interface to Fortran libraries), as well as interactive languages such as Mathematica,
Matlab, and Splus for prototyping and experimental computing. Our original plan was to build an
instructional lab of X-terminals running off UNIX servers, the de facto standard in the Math
Sciences. However, WNT offers significant advantages:

• More compute power per dollar and per student. This is particularly important for visualization and for using large packages (Mathematica, S+) interactively.
• Software availability.
• Student preparation for the “real world.”
Establishment of a PC instructional lab in showcase space will provide a catalyst for a general
transition of Math Sciences computing to WNT—both at UW and elsewhere.
Computational Materials Science Laboratory
We propose a Computational Materials Science and Engineering laboratory for curricular
development, research and instruction in complex materials simulation. We will develop and use
computational techniques applicable to a broad range of spatial and temporal scales
(electronic-atomistic-microstructural-macrostructural). Examples include:
-- large-scale atomic simulation (up to 5 million atoms) of grain boundaries and interfaces including
structure and dynamics of defect generation (Kalonji and Jonsson)
-- computation of dislocation dynamics in bulk alloys near triple junctions (intersection of three
grain boundaries) with emphasis on the behavior across multiple grains (Brush)
-- development of finite element software to rapidly characterize large scale material properties
(Flinn, Bordia). These methods would enable testing and modeling of thermo-mechanical properties
of bulk materials with non-uniform stresses and strains even in hostile situations where laboratory
testing is impossible
The use of IA running WNT will accelerate the instructional use of these new techniques. The
proposed facility will give students access to a variety of material design techniques and applications.
Specifically, this laboratory will: (1) provide a robust infrastructure for computational materials
science; (2) increase the range and complexity of research problems and detail of analysis; (3) enable
better collaborations by taking advantage of multimedia and Web resources; and (4) use this
high-visibility laboratory to train students and achieve unique scientific results that promote further
laboratory development with other funding sources, including industrial and federal partners. Our
leading role in the NSF-funded ECSEL coalition of eight major universities provides an opportunity
to disseminate the resulting instructional software (a cornerstone of ECSEL's mission).
Computational Fluid Dynamics (CFD)
The vast majority of high performance computing involves the computation of complex convection
or turbulence using CFD. By merely changing the nature of the microphysics, one finds problems
that range over fields as diverse as engineering, biomechanics, geophysics and astrophysics. By
porting our current codes and developing new ones for Intel architectures, we will be able to
continue our scientific and engineering studies of planetary nebulae, plasma physics, computational
aerodynamics, turbulent flow, combustion, and hydrologic modeling. A number of significant
problems will be addressed using the IA equipment, ranging from advanced plasma thrusters for
space propulsion and fusion for power generation, to hypersonic re-entry and shock-induced
mixing, to the prediction and reduction of pollutant emissions in gas turbine engines, industrial
burners, and utility furnaces.
In addition to our scientific and engineering objectives, we will highlight the capabilities of the Intel
equipment for large-scale, computationally-intensive scientific calculations and commercial
engineering codes. To promote these efforts we will build a central, high performance facility
consisting of 6 networked, 4-processor Pentium Pro computers operating in parallel and distribute
single high-performance workstations to faculty participants for code development, preliminary
calculations and visualization. The cluster will be used for the most computationally-intensive
simulations, replacing current work on remote Cray supercomputers.
Computational Structural Mechanics: Design for Manufacturing
The Department of Mechanical Engineering has a long-standing interest in computationally-intensive problems
related to design for manufacturing. In addition to the traditional use of large commercial codes
(e.g., ANSYS for finite element analysis), we have developed large, special-purpose codes. Current
work uses somewhat obsolete UNIX workstations and previous-generation Pentiums. The
department is committed to a shift to Intel machines for education, which creates the opportunity to
similarly modernize its research computing. As part of this initiative, we will port several existing codes used
for a variety of applications: optimization of composite structures, design and analysis of systems,
such as the muon detector for the Large Hadron Collider under construction at CERN in
Switzerland, using the commercial codes ANSYS and DYNA3D; and solid modeling methods for
rapid manufacturing. The porting of commercial codes such as ANSYS and DYNA3D to IA
equipment will provide important benchmarks for the capabilities of the equipment to handle
industrial applications.
Large-Scale Simulations for Wireless Communications and Electronic Packaging
We are engaged in extensive simulations of modern wireless communication systems: electromagnetic field radiation from high-speed printed circuit boards and signal scattering during
propagation through the geophysical environment. For example, Bit-Interleaved Coded Modulation
with Iterative Decoding and Overlayed Spread Spectrum Code Design in wireless communication
systems require extensive integer computation that is ideally suited for Pentium Pros. In another
area, our new Sparse-Matrix Canonical-Grid Method enables us to solve integral equations for
two-dimensional random rough surfaces for systems that are 1,000 times larger than was
previously possible. This numerical method is not limited to remote sensing applications but can be
applied to many other engineering problems including the study of the EM field radiation from
high-speed PCBs. Our current UNIX boxes are not up to the problems that we could tackle
using the proposed network of Pentium Pro computers running PVM.
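
A minimal sketch of how such simulations can be farmed out with PVM's message-passing calls is shown below (the task name, message tags, and per-task workload are placeholders for illustration, not our actual codes).

```cpp
// Minimal PVM master sketch: spawn worker tasks, send each a work index,
// and collect partial results. The task name "mc_worker" and the message
// tags are placeholders for illustration only.
#include <cstdio>
#include <pvm3.h>

int main() {
    const int nWorkers = 8;
    int tids[nWorkers];

    pvm_mytid();  // enroll this process in PVM
    int started = pvm_spawn((char*)"mc_worker", (char**)0, PvmTaskDefault,
                            (char*)"", nWorkers, tids);

    // Hand each worker its share of the work.
    for (int i = 0; i < started; ++i) {
        pvm_initsend(PvmDataDefault);
        pvm_pkint(&i, 1, 1);
        pvm_send(tids[i], 1 /* work tag */);
    }

    // Gather partial results and accumulate them.
    double total = 0.0;
    for (int i = 0; i < started; ++i) {
        pvm_recv(-1, 2 /* result tag */);
        double partial;
        pvm_upkdouble(&partial, 1, 1);
        total += partial;
    }
    std::printf("combined result from %d workers: %g\n", started, total);

    pvm_exit();
    return 0;
}
```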
Haptic Simulation for Virtual Reality
Humans perceive reality through five sensory channels. Professor Blake Hannaford is working to
provide a virtual reality (VR) environment that includes more of these sensory channels than current
systems (http://brl.ee.washington.edu/BRL/haptics.html). This will require spatial manipulation
devices, force feedback peripherals, 3-D Cad models and real-time simulation. The results will be far
more realistic systems for entertainment, training (risk free pilot or surgical), engineering design and
scientific data analysis.
Using Intel hardware, we will develop high-speed simulation systems that include touch as well as
vision. Haptic touch allows users to interact more naturally and to manipulate visually displayed
information. By developing these systems on Intel machines, we will better enable their use in
important large markets, from games to CAD modeling and evaluation. The computational
requirements of complex haptic simulation lead us to propose multi-processor systems in which one
processor handles the visual I/O and one or more other processors handle the more demanding
(roughly 1 kHz update rate) haptic simulation I/O.
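
A simplified sketch of this division of labor follows (written with portable C++ threads purely for illustration; the update rates and placeholder device operations are assumptions, not our haptic device drivers): a fast haptic loop and a slower visual loop run on separate threads, and ideally separate processors, sharing state between them.

```cpp
// Sketch of splitting work between a fast haptic loop and a slower visual
// loop, each on its own thread (and, ideally, its own processor).
// The update rates and the do-nothing "device" operations are illustrative.
#include <atomic>
#include <chrono>
#include <thread>

std::atomic<double> sharedForce{0.0};   // state exchanged between the loops
std::atomic<bool>   running{true};

void hapticLoop() {                      // ~1 kHz force update
    using namespace std::chrono;
    auto next = steady_clock::now();
    while (running) {
        // read device position, evaluate contact model, write force command
        sharedForce = sharedForce.load() * 0.99 + 0.01;
        next += milliseconds(1);
        std::this_thread::sleep_until(next);
    }
}

void visualLoop() {                      // ~30 Hz rendering update
    using namespace std::chrono;
    auto next = steady_clock::now();
    while (running) {
        // redraw the scene using the latest state from the haptic loop
        volatile double f = sharedForce.load(); (void)f;
        next += milliseconds(33);
        std::this_thread::sleep_until(next);
    }
}

int main() {
    std::thread haptics(hapticLoop), visuals(visualLoop);
    std::this_thread::sleep_for(std::chrono::seconds(2));
    running = false;
    haptics.join();
    visuals.join();
    return 0;
}
```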
High-Frequency Digital Systems: Design, Modeling, Layout, and Test
Full cycle VLSI circuit design is a computationally demanding task. We propose to build a
comprehensive VLSI circuit design center from circuit to system using Intel machines with greater
computational power, memory and disk than our current generation piecemeal labs based on UNIX.
To build this comprehensive center for simulation, synthesis, modeling, layout, test and verification,
we will integrate a combination of commercially available tools (cf. Mentor), newly designed tools
(layout software as an example) and a large collection of modeling and simulations software to run
on the Intel machines using WNT.
In addition to this high level of integration, we will address the design, verification, and testing of
high-frequency (> GHz) digital systems. While current system complexity already burdens our
current systems, the design and test tasks associated with high-frequency switching will demand far
greater performance. We are betting that Intel architectures are the choice to keep pace with these
growing demands.
High Performance Computing and the PACI Regional Center
We run our Grand Challenge problems on remote SP2’s, T3D/E’s and Paragons while using local
UNIX systems for data analysis and visualization. When we compute remotely, we have access to
10-20 Gigaflop computers with 8-32 GB of memory and TB data warehouses. Our allocations are
more than 500,000 node hours each year—equivalent to more than 64 CPUs running continuously
through the year. But, the bandwidth from remote sites is a killer. The proposed equipment will
enable us to enhance our analysis and visualization with local simulations. With improvements in our
software, we can do far more work locally in a year on Intel hardware than we’ve done in our entire
history of computing at national centers. Finally, we will show that Intel’s unique floating point
representation has some surprising benefits.
The departments of CSE and Astronomy have a mature collaboration centered on parallel
algorithms for simulation and the visualization of large data sets. This collaboration has grown in size
and prominence over the last several years and has served as a nucleus for high performance
computing efforts at UW. Professor Lake chairs an NSF-funded Graduate Training Program in
Computational Science that supports students in a half dozen departments. Professors Lake and
Lazowska led a successful proposal to build a Regional Center as part of NSF’s Partnerships for
Advanced Computing Infrastructure (PACI). Here, we propose to build that regional center around
Intel machines running WNT, connected directly to the national vBNS network. As part of the NSF
PACI proposal, UW pledged $200K/year for 5 years toward equipment for the center. By
committing to IA machines, we are directing these funds toward peripherals that will be attached to
these machines (VR equipment, 1-2 TB of spinning disk, 10-20 TB tape silos, etc.).
The center will have two key components:

• SIMTHEATRE—a venue for high-end visualizations
• SUPERCLUSTER—64 P6 CPUs connected with Myrinet
SUPERCLUSTER will be a WNT version of Beowulf, the Gigaflop-workstation and Terabyte-fileserver
project at CESDIS (the Center of Excellence in Space Data and Information Sciences) built on Intel
hardware and Linux. We have several close ties to this group. For reasons easily imagined in
Beaverton, one of their key technical people left Greenbelt, MD for Seattle and is available for this
project. This system will be used to develop data mining tools to explore the 10 TB data sets of
LONEOS and SDSS (using the Objectivity OODBMS) and four Grand Challenge projects:

• the formation and evolution of galaxies and large-scale structure
• protein folding
• the stability and fate of our solar system
• the ab initio formation of planetary systems
Our first Grand Challenge is the formation and evolution of galaxies and large scale structure.
New algorithms and parallel implementations have enabled simulations of the Universe with as
many as 47 million particles. Our simulations are among the largest ever done and have gained us a
reputation for simulations of sufficient quality to be placed in the critical path of several projects
costing over $100M such as SDSS and the Dutch Square Kilometer Array (a radio telescope
designed to image the Universe in neutral gas at high redshift).
We are constantly improving the performance of this code by tuning and revising the algorithm in
collaboration with Professors Anderson and Ladner in CSE. All of our parallel codes are designed
for portability. We are interested in an NT port that uses NT's threading capability to hide latency.
This is a step toward using the power of the 100 Gigaflop chips planned by Intel by 2015 (since the
national semiconductor industry roadmap projects that clocks will not be much faster than 1 GHz, we
assume that these chips will support thousands of context-switching threads).
Astronomical data and theory are often compared in a rather ad hoc way that involves assumptions
about both the physical state of the observed system and the theoretical model. We promote the use
of simulation based on physically motivated initial states consistent with observations of the early
Universe. From the evolved non-linear states, we construct artificial observations including
instrumental effects to make all comparisons in the observational plane. This approach is only
possible by combining our advanced simulation codes with our equally sophisticated visualization
and analysis software TIPSY, the Theoretical Image Processing SYstem. Currently, we simulate
observations of emission and absorption lines, optical and X-ray imaging and X-ray spectra. We are
adding gravitational lensing by the intervening material. TIPSY contains over 18,000 lines of code
and is used at over 30 institutions in 11 countries.
The second project addresses the fate of the Solar System using a parallel method developed by a
UW graduate student, Joachim Stadel. Clocks model our belief in the solar system’s regularity, but
this may reflect our ignorance of the planets ejected during the last few Gyrs. The Solar System is
chaotic, but the detailed sources of chaos are uncertain. Laskar (Paris Observatory) estimates that
Mercury’s chance of ejection in the next Gyr approaches 50%. The inner planets, including Earth, all
show 4-5 Myr instability timescales, but the resonances that cause the instabilities are different in
each approximate calculation. The longest accurate integrations are a mere 6 Myr; our full 10 Gyr
integrations will allow us to address fundamental questions:

• Do climate variations owe to oscillations in the Earth's spin and orbit?
• How does weak chaos behave? What do Lyapunov times mean? What does the known instability of the Earth's orbit mean?
• Are giant planets needed to form terrestrial planets? In Greek myth, “chaos is the great abyss out of which Gaia flows.” Jupiter and Saturn create chaos, causing planetesimals to collide and form terrestrial planets. They also cleanse the loose planetesimals, preventing frequent bombardment of the Earth by asteroids. Too little chaos prevents this; too much would eject the Earth. Giant planet detections may be the ideal screen for terrestrial planet search targets.
• Will planets be ejected from the Solar System?
The 5-billion-year integrations require quadruple precision arithmetic, as double precision fails after
10 million years. To investigate the stability of other planetary systems for periods of several
hundred million years, we can use Intel's unique 80-bit extended precision.
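
A small illustration of why the extended format matters (assuming, as on most IA compilers, that long double maps to the 80-bit extended format; this mapping is compiler-dependent): summing many small time steps accumulates far less round-off in extended precision than in double precision.

```cpp
// Sketch: accumulated round-off in double vs. 80-bit extended precision
// (exposed as long double by most IA compilers; that mapping is an
// assumption of this example, not a guarantee of the language).
#include <cstdio>
#include <cfloat>

int main() {
    const long   n  = 100000000L;   // many small steps, as in a long orbit integration
    const double dt = 1.0e-8;

    double      sumDouble   = 0.0;
    long double sumExtended = 0.0L;
    for (long i = 0; i < n; ++i) {
        sumDouble   += dt;
        sumExtended += (long double)dt;
    }

    // The exact answer would be n * dt = 1.0; the differences show accumulated error.
    std::printf("double mantissa bits:      %d\n", DBL_MANT_DIG);
    std::printf("long double mantissa bits: %d\n", LDBL_MANT_DIG);
    std::printf("double sum error:      %.3e\n", sumDouble - 1.0);
    std::printf("long double sum error: %.3Le\n", sumExtended - 1.0L);
    return 0;
}
```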
Finally, we reach our last Astrophysical Grand Challenge, a study of the growth of planetesimals
through gravitational instability and collisions to form protoplanets. Past simulations used
~100 planetesimals of 10 lunar masses each to simulate the early inner planetesimal disk. To
accurately simulate the growth and fragmentation process and predict generic properties of planetary
systems, we need to follow 100 Million planetesimals.
Protein folding is the second genetic code whereby the amino acid sequence manifests itself in
structure and function. The ability to predict the three dimensional structure of a protein from its
sequence would unlock a treasure trove of valuable information for protein engineering, drug
discovery, and biotechnology. Baker, Daggett and Laidig have made recent advances in treating this
as a combined problem in dynamics and datamining. Knowledge of the structure of small runs of
amino acids is incorporated into the calculation of mutations of proteins where the structure is
already known. The resulting stability of the ‘mutants’ allows one to assess the importance of
particular residues or groupings of residues, and of their interactions, to the local and global stability of the
protein. This is valuable data for both determining the structure of proteins and for the problem of
cross phylum matching whereby information from animal models is used for human treatment.
Over the last few months, the astronomy and biotechnology groups have been collaborating on
techniques for particle dynamics. In proposing to construct a joint laboratory for simulation, we are
committing to continuing the process of building a computational science community where
physical, biological and computer scientists collaborate to solve epochal problems in their fields.
Astronomy and CSE operate a joint broadcast video studio. The equipment in this proposal will
enable us to capture our highest quality simulations on video. The HPCC group has been involved in
many outreach projects (with visualizations supplied to the Discovery Channel, CNN, and the new
$100M Hayden Planetarium). This outreach will be expanded with our Regional Center activities, and we
propose a unique venue for the visual presentation of data. We will convert the astronomy
department's 30-seat planetarium into a SIMTHEATRE by installing 4 large projectors driven by a
quad-processor Intel machine with high-end graphics. Several of the projects described here will have
systems that will enable them to develop content for SIMTHEATRE.
Summary Equipment Requirements for Advanced Scientific Computing
Equipment Legend

Code   Description                                                  Unit Cost
SP1    Pentium Pro Desktop Minitower 200MHz – 128MB/4.3GB SCSI      $ 6,754
SP2    Single Processor AP440 FX Minitower – 64MB/4GB SCSI          $10,219
SP3    PD440 FX Pentium II MMX Minitower – 266MHz 128MB/4GB SCSI    $ 6,671
DP1    Dual Processor AP440 FX Minitower – 256MB/8GB SCSI           $16,316
DP2    Dual Processor AP440 FX Minitower – 128MB/4GB SCSI           $12,871
QP1    Quad Pentium Pro AP450 GX Tower – 256MB/9GB SCSI             $37,099
QP2    Quad Pentium Pro AP450 GX Tower – 256MB/4GB SCSI             $25,158
Opt1   V3C38 STB 8MB 3-D Multimedia Accelerator                     $   325
Opt2   G1012 Ultra FX32T (60MB)                                     $ 2,868
Opt3   G1006 True FX Pro 16MB Video Card                            $ 1,510
Opt4   ST1917WC Seagate 9GB SCSI Hard Drive                         $ 2,200
Mon1   306389 20” Sony GDM 20SE2T                                   $ 2,200
Mon2   M1010 21” Hitachi Monitor                                    $ 2,500
Net1   E5101TX Intel Switchable Hub                                 $ 4,995
Net2   EC100MAFX Uplink Switch Module                               $   595
Subtotal of Equipment Requests
Advanced Scientific Computing

Project         Request
Chem lab        $  191,507
ACMS lab        $  142,821
LONEOS          $   78,524
SLOAN           $   37,092
TIMESLICE       $   57,808
SuperK          $   78,524
GLAST           $   57,808
UWXAFS          $   37,092
Hood lab        $  125,361
Simtheatre      $  124,443
Supercenter     $  676,420
CFD             $  270,439
CMS             $  143,518
CSM             $   87,436
E&M             $  198,655
Haptic VR       $  190,548
VLSI Design     $  287,744
ASC Subtotal    $2,785,740
4. Digital Media
While recognizing the potential of IA systems in the traditional high performance computing arena,
we believe that the engines of future demand for IA cycles will be found in the arena of computer
graphics, digital media, multimedia and video serving and distribution, and collaboration technology.
While the concept of real-time and archived video and audio has been discussed for a number of years,
the available technology did not have the performance or cost structure required for wide use.
Recent advances in IA machines and support systems have enabled a wide range of opportunities.
UW’s Department of Computer Science & Engineering, Office of Computing & Communications,
Department of Electrical Engineering, University Libraries, School of Medicine, and School of Arts
are engaged in a number of efforts whose goal is to explore the use of advanced digital technology
for education, collaboration, multimedia libraries, and learning laboratories. We propose a number
of projects whose overall effect will be to move the UW dramatically in the direction of employing
digital media in its educational endeavors.
Digital Media Curriculum Development and Research
Digital media technologies and applications are becoming more and more important owing to the
growth of multimedia technologies, digital wide-area networks, and corporate Intranets. DVD
(Digital Video Disc), Direct Broadcast Satellite (DBS), and Video Conferencing are just some
examples that are already having large societal impacts. With increasing networking capabilities, IA
PCs with commodity video boards have surpassed the previous generation of expensive
workstations with "boutique" video hardware boards that were required for video
compression/decompression. With more and more successful new applications running on IA
systems, we expect this to be a substantial growth area for these systems.
As a research and educational institution, UW must take a leadership position in research and
applications of Distance Learning and Digital Library technology. There are two major parts to this
project: setting up an environment suitable for supporting multimedia instruction to remote
classrooms, and improving research labs for supporting Digital Library and Digital Video Coding
research. Specific projects include:
The Gigabit-to-the-Desktop Initiative: Building a National Distributed Computer Science
Department
The Gigabit-to-the-Desktop (G2D) initiative is a collaborative effort involving a range of
universities and industrial partners, including UW, Carnegie Mellon, MIT, Brown, Microsoft and
DARPA, under the leadership of Raj Reddy, Dean of the School of Computer Science at Carnegie
Mellon. This consortium is engaged in aggressive anticipation of Internet2/NGI with the creation
of a “distributed computer science department”, exploring the software that will be required to
support highly interactive distance learning and collaboratories when this bandwidth is ubiquitous.
We and our collaborators will conduct this experiment on the IA systems.
PC-Based Multimedia Course-on-Demand System for Distance Learning in Electrical
Engineering
To take advantage of the rapidly evolving communication technologies and support creative new
learning environments, such as described above in the G2D project, Professors Sun, Hwang, and
Soma will focus on a new set of tools whose purpose is to enhance the functionality of
course-on-demand support. This system focuses on an asynchronous learning video system for
on-line laboratory support and other educational situations. Following the real-time digitization of
video lectures, laboratory training videos, and other information resources, the video will be
captioned and indexed to allow users to access segments in a hyperlinked and fully indexed
environment that supports a student's own particular learning process and approach. The
technology required to support this effort includes audio-track word identification for automated
indexing, optimized video digitization/compression, and video Web browser software support. Of
particular value are the full database indexing of a complete curriculum of video, which allows dynamic
access to topics of interest, and the significantly reduced data flow requirements.
Many distance learning activities are beginning to use commercial products effectively, but this work
extends the functionality in three ways. First, the work supports standard compression algorithms
such as H.263 and MPEG-4. This has the effect of significantly reducing data stream requirements
(to the order of 20 kb/s) while maintaining open system compatibility. Second, while the current
video on demand products support hyperlinks to other video sequences, they do not support video
database search and retrieval. This type of indexing allows students viewing a lecture to search on a
term or concept that is not understood. The results of the query might be to link back to a
prerequisite course at the point of this term’s definition.
Third, a new algorithm developed in Professor Zick's laboratory that allows real-time speech
recognition of the video audio track will be incorporated. This function will support automation of
indexing of video lectures and laboratory demonstrations. This automated speech recognition will
allow full indexing of video educational material in an efficient and non-tedious manner.
Multimedia Libraries
Digital libraries are a nationwide focus of all major research universities; six have been funded by
the National Science Foundation to form the Digital Library Initiative. The University of
Washington has been among the leaders in this area through efforts led by Professor Sherri Fuller in
Medical Informatics. This National Library of Medicine funded project, called IAIMS – the
Integrated Advanced Information Management System – has received national recognition
(http://healthlinks.washington.edu/iaims). Professor Fuller has been selected to serve on the
President's commission for the direction of the second generation of grand computing challenges.
There are many aspects of the digital library initiative. The original RFP required each consortium to
put a major collection online. Additional issues are copyrights and digital watermarking, knowledge
organization and representation, digital scholarship (citation linking, etc.), workstyles and access
(including agents), and collections and systems issues (performance and terabyte storage).
The proposed activities of the University of Washington Digital Library and Biomedical
Informatics groups present a number of unique opportunities that are complementary to the work already
in progress through the Digital Library Initiative. First, the projects are focused on putting
computers into library use, not in the traditional sense but in the sense of the new, distributed library.
These efforts have the strong support of the director of the University of Washington Libraries,
Betty Bengston. Through the Allen grant, the University has innovation funds each year to support
development projects. Second, the focus on the extensive Northwest History Project will be unique
in content; it will also serve as a prototype for larger on-line collections. Finally, the broad impact of
these projects will be substantial. Library staff will be trained in using the Intel architecture, will help
characterize major needs, and will participate in the design of user interfaces. These projects will also
serve to network people across campus, in the sciences, medicine, engineering, and the arts, bringing
them all up to speed to participate in the national library efforts.
The University of Washington Digital Library Project
UW Libraries and the Department of Electrical Engineering are collaborating to accelerate the
development of the UW Digital Library. More than digital pointers to print resources, more than
on-line textual finding aids, true digital libraries deliver full text, images, data in all formats,
regardless of time and place. Scholars are able to make new connections among varied sources of
information, present results graphically, analyze texts more quickly, search databases more easily and
link to needed information. With enough computing power in the library servers and client
programs, users can manipulate and evaluate data, creating new knowledge in profound and
unforeseen ways. Digital library holdings promise a sea-change for scholarly research.
With multimedia collections, many functions are necessarily compute-intensive. Acquisition of
materials often involves scanning and manipulation of images which must be compressed and
reformatted. Texts must be processed with optical character recognition software. On the delivery
side, users need interfaces to sophisticated search engines which can evaluate complex Boolean
expressions and rank retrievals according to relevance. The price/performance of both mainframe
and RISC platform computing has proven problematic for libraries wishing to launch scalable
digitization efforts. The UW Libraries has chosen to move away from these platforms and is now
focused on replacing UNIX-based X11/Motif clients with personal computers running Windows 95
served by NT. We depend upon this distributed platform for all our staff and public computing
applications.
A prototype multimedia database system developed in the Electrical Engineering department, called
Content, will be used. A paper on Content will be presented at the ACM Digital Library conference
this July. Content is a practical, scalable, and high-performance text-indexed multimedia database
system. The novelty of Content is in its approach of integrating high-volume storage, fast searching
and browsing, easy multimedia acquisition, effective updates, scalability, extendibility, and an API
based on HTTP. Content is also a low-cost solution for a large multimedia database that is available
today. Standard Web-based browsers such as Netscape can query the Content server. The API is
flexible so that different and unique Content clients on multiple platforms can be built to access
multiple Content servers. The Content architecture permits any multimedia type to be stored. Text
descriptions are used as indices for images and videos. Content includes an easy-to-use
Windows-based acquisition station for acquiring images and video. Currently, Content is being used
in a real library setting and contains more than 25,000 multimedia objects that span two different
collections of valuable historical photographs. In terms of performance, Content can access a single
image in a database of over one million images in less than a second. These images will be used in an
expanding Northwest History collection. This collection will provide the resource for a large scale
curriculum support project for the state of Washington. The database also supports indexed video for sports, news broadcasts, and education. The client-server database is designed to be scalable to
millions of images and hours of video. The proposed project will utilize Intel architecture machines
to scale up servers for full library implementation.
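To illustrate the kind of HTTP-based access described above, the following minimal Python sketch shows how a client might issue a keyword query to a Content-style server. The host name, path, and parameter names are hypothetical placeholders, since the actual Content API is not specified here.

    # Minimal sketch of a text-indexed image query over HTTP.
    # The server name, path, and parameter names below are hypothetical;
    # they stand in for whatever query interface the Content server exposes.
    from urllib.parse import urlencode
    from urllib.request import urlopen

    def query_content(keywords, collection="nw-history", max_hits=20):
        """Send a keyword query and return the raw response body."""
        params = urlencode({"q": keywords, "coll": collection, "n": max_hits})
        url = "http://content.example.edu/search?" + params  # hypothetical endpoint
        with urlopen(url) as reply:
            return reply.read().decode("utf-8", errors="replace")

    if __name__ == "__main__":
        # Would return, e.g., a list of matching image records with their text indices.
        print(query_content("Klondike gold rush"))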
Structure-Based Visual Access to Biomedical Library Information
The UW Academic Medical Center work is an extension and integration of three medical
image-related programs that organize information for medical education and clinical care:
- The Digital Anatomist (http://www1.biostr.washington.edu/DigitalAnatomist.html), an award-winning program for the WWW generation and delivery of 3-D anatomy resources. We will incorporate additional images from the visionary Visible Human Project (http://www.nlm.nih.gov/research/visible/visible_human.html).
- The Integrated Advanced Information Management system (IAIMS) heterogeneous image database development, management and delivery (http://healthlinks.washington.edu/iaims/).
- Network monitoring, configuration and scaling methodologies for image-related activities (the widely used Block Oriented Network Simulator BONeS package).
The three software projects will all require ports from UNIX servers to WNT as well as movement
of some client support from Macintosh to IA systems. These three projects are currently being
developed within the context of a regional telemedicine testbed initiative funded by the National
Institutes of Health, National Library of Medicine. (See http://www.hslib.washington.edu/b3) The
testbed will provide an optimum proving ground for research and testing the capabilities of the Intel
hardware. The regional testbed will also support another UW Intel project proposed by Professors
Stewart, Somani and Narayanan (School of Medicine) that will examine Clinical Multimedia Data
Generation, Transmission and Retrieval in a Fault-Tolerant Networked Environment.
Digital Media Laboratories
Highly Reliable Networks for Cardiology
The objective of this project is to ascertain and quantify how well Intel's new, powerful processors handle the rigors of generating, managing, and distributing the video and associated data produced in modern clinical cardiology practice. This involves generating, maintaining, and sharing patient records (chest x-ray images, video angiograms, electrocardiogram data, which is similar in character to audio, and other textual data) reliably over a fault-tolerant networked environment. The composite record should be accessible over local-area (intra-hospital), wide-area (referring physician), and mobile networks. Such a clinical information warehouse would represent a major upgrade from the UNIX-based information system used in cardiology today. The challenge is to bring video and images on demand to a remote workstation while minimizing waiting time (for example, by reducing the amount of transmitted data) without compromising diagnostic quality.
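To make the waiting-time trade-off concrete, the following back-of-envelope sketch estimates transfer times for an angiogram run at several compression ratios and link speeds. All frame sizes, durations, ratios, and link rates are illustrative assumptions, not measurements from the cardiology systems described above.

    # Back-of-envelope estimate of angiogram transfer times (illustrative numbers only).
    def transfer_seconds(frames, width, height, bytes_per_pixel, compression_ratio, link_mbps):
        raw_bytes = frames * width * height * bytes_per_pixel
        sent_bits = (raw_bytes / compression_ratio) * 8
        return sent_bits / (link_mbps * 1e6)

    # Assume a 10-second run at 30 frames/s, 512 x 512, 8-bit grayscale.
    frames = 10 * 30
    for ratio in (1, 10, 30):            # no compression, modest, aggressive
        for link in (1.5, 10, 100):      # T1, shared Ethernet, fast Ethernet (Mbit/s)
            t = transfer_seconds(frames, 512, 512, 1, ratio, link)
            print(f"compression {ratio:>2}:1, {link:>5} Mbit/s -> {t:7.1f} s")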
The Laboratory for Animation Art (LA2)
LA2, the Laboratory for Animation Art, was designed to support our new multi-disciplinary curriculum in animation arts. Comprising an interlinked set of courses in the Schools of Art and Music and the Department of Computer Science & Engineering, the curriculum provides team-taught courses in modeling, animation, and rendering. Running parallel to, and integrated with, the animation-specific sequence, the second phase of the LA2 is designed to further partner with arts and humanities disciplines involved in cinema studies. The LA2 has recently introduced critical and generative courses in digital film/video and multiplex (motion picture) holography, taught from the art and music technology perspectives. Film making, the quintessential interdisciplinary art form of the 20th century, is a perfect candidate for inclusion in our curriculum. Always a technology-based
form of creative work, film is now moving into the digital domain, and is a natural extension of the
LA2’s time-based digital media focus. A key component in the development of this new digital arts
curriculum has been the creation of a set of new integrated “capstone” courses.
Computer Graphics
The University of Washington’s Department of Computer Science & Engineering has one of the
nation’s preeminent research and education programs in computer graphics, headed by Professor
David Salesin. With Intel’s help, we have begun the transition of this program from Silicon Graphics
hardware (SGI has donated $800,000 in equipment outright, and has provided substantial allowances
on a great deal more) to the Intel Architecture. We seek to continue this transition by converting a
second computer graphics laboratory—20 workstations and 3 servers—to IA systems running
WNT.
Summary Equipment Requirements for Digital Media
Equipment Legend

SP1   Pentium Pro Desktop Minitower 200MHz – 128MB/4.3GB SCSI          $ 6,754
SP2   Single Processor AP 440 FX Minitower – 64MB/4GB SCSI             $10,219
SP3   PD440 FX Pentium II MMX Minitower – 266MHz 128MB/4GB SCSI        $ 6,671
DP1   Dual Processor AP440 FX Minitower – 256MB/8GB SCSI               $16,316
DP2   Dual Processor AP440 FX Minitower – 128MB/4GB SCSI               $12,871
QP1   Quad Pentium Pro AP450 GX Tower – 256MB/9GB SCSI                 $37,099
QP2   Quad Pentium Pro AP450 GX Tower – 256MB/4GB SCSI                 $25,158
Opt1  V3C38 STB 8MB 3-D Multimedia Accelerator                         $   325
Opt2  G 1012 Ultra FX32T (60MB)                                        $ 2,868
Opt3  G1006 True FX Pro 16MB Video Card                                $ 1,510
Opt4  ST1917WC Seagate 9GB SCSI Hard Drive                             $ 2,200
Mon1  306389 20” Sony GDM 20SE2T                                       $ 2,200
Mon2  M1010 21” Hitachi Monitor                                        $ 2,500
Net1  E5101TX Intel Switchable Hub                                     $ 4,995
Net2  EC100MAFX Uplink Switch Module                                   $   595
Subtotal of Equipment Requests – Digital Media

Project                          Request
DM-1  G2D                        $  683,760
DM-2  Course-on-Demand           $  388,335
DM-3  Digital Library            $  397,684
DM-4  Medical Library            $  403,484
DM-5  Cardiology                 $  200,910
DM-6  Arts Animation             $  195,654
DM-7  Graphics Lab               $  390,033
DM Subtotal                      $2,659,860

(Unit costs per configuration are as listed in the Equipment Legend above.)
5. Supporting the “Educational Enterprise”
UW is an enormous enterprise consisting of roughly 3,500 faculty, 12,000 staff, and 34,000 students,
distributed among 16 schools and colleges and hundreds of academic and administrative
departments.
UW has been a national and international leader in the cost-effective support of large-scale enterprises -- their mail services, web services, file services, computing services, and basic hosting services -- using clusters of relatively inexpensive servers (see http://www.washington.edu/cac_docs/windows/issue18/clusters.html). While C&C practices and preaches the philosophy “buy, don’t build,” it has written much of the software that operates these clusters, as well as the software that makes them useful to their users (such as the Pine IMAP-compliant email system, used by tens of millions of people around the globe). This is leading, standards-compliant software of the sort that the business world is increasingly demanding.
The C&C clusters are built from UNIX workstations. Over the next few years, we hope to move much of this activity to IA systems running WNT. C&C and CSE are engaged in ongoing discussions with the leadership of Microsoft’s NT Server group toward a collaboration that will provide a national model for the cost-effective support of the educational enterprise on IA systems.
During the first year of this project, our focus will be on proof-of-concept experiments in the areas
of web support, multimedia support, streaming digital video servers, and video compression
“testbenches.” These experiments will be closely tied to the G2D initiative and with the Research
TV initiative.
The Research TV Initiative (RTV)
RTV is a national coalition of more than a dozen major research universities, directed by the
University of Washington’s Office of Computing & Communications. The objectives of RTV
include: to capture, promote, and provide access to research endeavors via network and broadcast
(including DBS) distribution technologies; to collaborate to collectively achieve a self-sustaining
critical mass of materials; to foster better public understanding and appreciation of research; to
experiment with new modes of distribution and access; and to work to ensure research institutions’ access to appropriate bandwidth, channels, and other distribution media.
RTV presents enormous opportunities for exploring applications of digital media using the Intel
Architecture. Within the context of this RFP, our goal is to prototype some of these applications
using a mixture of material from the University of Washington at large and the Department of
Computer Science & Engineering. If successful (as we fully expect to be), we will pursue
further steps with Intel in future years.
Working with the Office of Computing & Communications, the Department of Computer Science
& Engineering has undertaken a number of distance learning and learning-on-demand initiatives.
These include:
- The broadcast of our twice-weekly colloquia, both live on the Internet Mbone and delayed on UWTV cable.
- The availability of 100% of our course materials on the web, in certain cases augmented by RealAudio synchronized to web transparencies.
- A new Professional Masters Program with courses available both interactively and on-demand. As a specific example, a current course is beamed from UW to Microsoft using a combination of videoconferencing over the public switched network and web transparencies driven by the lecturer using Microsoft NetMeeting; and to students in Oregon via Mbone audio/video (again using web transparencies driven by the lecturer using NetMeeting).
Our goal under this RFP is to expand these efforts, and to employ the Intel Architecture exclusively.
We consider it an advantage that we have not founded a company to flog some specific video
compression technology! A key contribution of our effort will be to provide our material in a variety
of formats and a variety of resolutions, providing a testbed for the evaluation of streaming video alternatives, and for the integration of video with the “ASCII” web to provide bandwidth-effective solutions to distance learning needs -- interactive, live, and on-demand.
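As a rough illustration of what “bandwidth-effective” means in practice, the sketch below compares the data volume of a single lecture delivered at a few streaming bit rates. The rates are illustrative assumptions only and are not tied to any of the specific products named above.

    # Illustrative comparison of per-lecture data volume at different streaming bit rates.
    LECTURE_MINUTES = 50

    # Assumed bit rates (kbit/s) spanning audio-plus-slides up to higher-quality video.
    scenarios = {
        "audio + web transparencies": 28.8,
        "low-rate video": 100.0,
        "medium-rate video": 300.0,
        "high-rate video": 1500.0,
    }

    for name, kbps in scenarios.items():
        megabytes = kbps * 1000 * LECTURE_MINUTES * 60 / 8 / 1e6
        print(f"{name:28s} {kbps:7.1f} kbit/s  ~{megabytes:8.1f} MB per lecture")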
Seattle is in fact a terrific place to carry out this effort, given our existing partnerships with
companies such as Microsoft and Progressive Networks. And the University of Washington is in a
unique position to successfully pursue the integration of the entire video and multimedia production,
repository, and distribution food chain. UW Computing & Communications led the nation, long ago, in integrating its professional video, graphics, CATV, networking, computing, telecom, and user
support groups. CATV includes both a campus CATV system and a CATV channel that reaches 1
million households via direct live feed to numerous headends. C&C’s network operations span a
6-state region and serve as the primary regional Internet Service Provider Network Operations
Center (including a customer base of Microsoft, Boeing, Starwave, Nike, etc.) as well as the hub of
the new $43 million statewide K-20 multimedia SONET-based network.
First Year Deliverables
- We will dramatically increase the amount of both UW and Computer Science & Engineering content available in digital video format -- interactively, live, and on-demand, both “stand-alone” and web-integrated.
- We will provide continuously-updated, publicly-available, level-playing-field comparisons of various live and on-demand streaming video technologies, such as RealVideo, VXtreme, NetMeeting/NetShow (MPEG-4), VDOLive, and Precept.
- We will build modest-scale production clusters to support web and multimedia in the educational (that is, historically UNIX-based and NFS-based) environment.
- We will lay the groundwork for migrating our existing email and hosting clusters to the Intel Architecture in future years.
- We will actively share these experiences with the nationwide members of the Research TV consortium, the Gigabit-to-the-Desktop consortium, and others.
Summary Equipment Requirements for Educational Enterprise Support
Equipment Legend

SP1   Pentium Pro Desktop Minitower 200MHz – 128MB/4.3GB SCSI          $ 6,754
SP2   Single Processor AP 440 FX Minitower – 64MB/4GB SCSI             $10,219
SP3   PD440 FX Pentium II MMX Minitower – 266MHz 128MB/4GB SCSI        $ 6,671
DP1   Dual Processor AP440 FX Minitower – 256MB/8GB SCSI               $16,316
DP2   Dual Processor AP440 FX Minitower – 128MB/4GB SCSI               $12,871
QP1   Quad Pentium Pro AP450 GX Tower – 256MB/9GB SCSI                 $37,099
QP2   Quad Pentium Pro AP450 GX Tower – 256MB/4GB SCSI                 $25,158
Opt1  V3C38 STB 8MB 3-D Multimedia Accelerator                         $   325
Opt2  G 1012 Ultra FX32T (60MB)                                        $ 2,868
Opt3  G1006 True FX Pro 16MB Video Card                                $ 1,510
Opt4  ST1917WC Seagate 9GB SCSI Hard Drive                             $ 2,200
Mon1  306389 20” Sony GDM 20SE2T                                       $ 2,200
Mon2  M1010 21” Hitachi Monitor                                        $ 2,500
Net1  E5101TX Intel Switchable Hub                                     $ 4,995
Net2  EC100MAFX Uplink Switch Module                                   $   595
Subtotal of Equipment Requests – High Performance Clusters

Project                          Request
HPC-1  Clusters                  $  454,479
HPC Subtotal                     $  454,479

(Unit costs per configuration are as listed in the Equipment Legend above.)
6. Institutional Support and Coordination
We have described a number of exciting opportunities for UW/Intel partnerships provided by the computationally demanding challenges that face UW and every other major research university. To meet these challenges, we are requesting more than 500 high-end Intel systems. We recognize that the ordering, installation, and successful utilization of this number of machines need to be planned and supported. We also expect our efforts to be amplified through planned support for collaboration between groups on our campus, and through dissemination of results to other campuses to facilitate nationwide collaboration.
Infrastructure Support for UW Projects
In many cases, the Intel Architecture systems acquired under this proposal will be placed with groups that are currently doing forefront research and education on UNIX systems. All of these groups have a support infrastructure for their current UNIX systems, but they will need help to build new communities of users and support staff, including new forums for the exchange of operational ideas, operating guidelines that permit them to draw on central expertise, and the general encouragement that comes from being part of a larger campus enterprise.
In order to implement the projects described in this proposal, the University and its colleges will provide four new full-time staff members.
To facilitate the initial delivery and deployment, the UW Office of Computing & Communications will serve as a single-point delivery address. Orders for specific configurations will pass to Intel through a single C&C contact. All equipment will be shipped to a single UW address, where it will be tagged, inventoried, and distributed to the project groups. The new technical staff's first job will be to ensure an efficient startup for this new equipment. Their next important role will be to serve as a focal point for the exchange of NT systems information, systems optimization, new tools and techniques, and local successes. To ensure rapid integration of the requested state-of-the-art Intel machines, we also request four four-day training courses for our new technical staff. We would also be interested in an on-campus class as a training alternative.
UW/Intel Coordination
We will use our extensive past experience in information-sharing projects between academia and industry to provide a model for the flow of information between universities and Intel.
Using the support staff as a focal point, all projects will provide input to a local Web page established to support this project. Intel project descriptions, participant lists, and equipment configurations will be posted. Initially, forums will be set up to enable discussion. Faculty, students, and staff will share ideas, get questions answered, and post new results. Intel will have direct access to this site to enhance communication between the partners.
National Coordination
If there is desire on the part of Intel to support a broader project to manage the flow of information
across the participating campuses, we have the experience required to identify and manage the
resources that would be needed.
We have extensive past experience in information sharing projects between academia and industry.
For the past 8 years, Professor Zick has been director of a Web-based information-sharing project for IBM called IKE: the IBM Kiosk for Education (http://ike.engr.washington.edu/). This project
provides a focal point on computer innovation for higher education that serves over 400 campuses.
We will utilize experience from this project to ensure a high level of quality content for the Intel
project.
We also propose to work with other campuses and Intel to develop community-building programs.
We will base this on a successful partnership between Microsoft and the CSE department at the
University of Washington.
The University of Washington CSE department, in partnership with Microsoft, has taken the
leadership role in national efforts to move computer science research -- particularly at the premier
departments -- to the Intel/NT platform. A cornerstone of this effort is the USENIX Windows NT
Workshop, a 300-attendee event that will be held in Seattle August 11-13, 1997, co-chaired by Ed
Lazowska from the University of Washington and Mike Jones from Microsoft. (Program Committee
members come from Berkeley, Intel, VenturCom, DEC, Illinois, Harvard, Microsoft, Xerox, UW,
the Technion, Michigan, IBM, and Cornell.)
The NT Workshop is a forum for researchers actively using or planning to use Windows NT to
discuss ideas and share information, experiences, and results. While an increasing amount of
research work is being done on Windows NT, until now there has been no common forum where
researchers could gather and learn from each other’s work. The NT Workshop is intended to
address this need, through a mixture of invited talks, panel sessions, contributed talks, case studies,
and demonstrations. Information on the USENIX Windows NT Workshop can be found on the
web at http://www.usenix.org/usenix-nt/.
The University of Washington has a demonstrated commitment to Intel and NT, and a
demonstrated commitment to helping others to get on board.
Appendix I – Project Details

Advanced Scientific Computing

ASC-1  Projects in the Physical Sciences
ASC-2  Computational Materials Science Laboratory
ASC-3  Computational Fluid Dynamics
ASC-4  Computational Structural Mechanics: Design for Manufacturing
ASC-5  Large-Scale Simulations for Wireless Communications and Electronic Packaging
ASC-6  Haptic Simulation for Virtual Reality
ASC-7  High-Frequency Digital Systems: Design, Modeling, Layout, and Test

Digital Media

DM-1  The Gigabit-to-the-Desktop Initiative: Building a National Distributed Computer Science Department
DM-2  A PC-Based Multimedia Course-On-Demand System for Distance Learning in Electrical Engineering
DM-3  The University of Washington Digital Library Project
DM-4  Structure-Based Visual Access to Biomedical Library Information
DM-5  Highly Reliable Networks for Cardiology
DM-6  The Laboratory for Animation Art
DM-7  Computer Graphics
ASC-1
Projects in the Physical Sciences
Valerie Daggett and Keith E. Laidig, Biomolecular Dynamics Group
George Lake and Thomas Quinn, Astronomy
Derek C. Richardson, Thomas Quinn, George Lake, Joachim Stadel
Randy LeVeque, Applied Mathematics
Toby Burnett, Physics
Hannes Jónsson, Chemistry
John J. Rehr and Edward A. Stern, Physics
Thomas Quinn, Craig Hogan, Astronomy
For details, please see http://hermes.astro.washington.edu/intelslide/
ASC-2
Computational Materials Science Laboratory
Raj Bordia, Professor and Chair, Materials Science and Engineering,
Lucien Brush, Materials Science and Engineering
Brian Flinn, Materials Science and Engineering
Gretchen Kalonji, Materials Science and Engineering
In materials science, computer simulation and modeling allow analysis of materials processing and properties, enable image simulation, and reduce the amount of laboratory testing of materials. Due to the disparate range of length scales (from the nano-scale in microelectronics to the macro-scale of aircraft and automotive structures) and time scales (from femtoseconds in opto-electronics to years in the fatigue of an aircraft body), a hierarchy of methodologies is often required to study a problem thoroughly. However, to date even a given methodology is limited in its ability to handle realistic simulations, due to the inherent complexity of many problems. Efforts to treat these computationally intensive problems have traditionally been the forte of high-end sequential workstations, which have not been broadly accessible to diverse research communities in higher education or to the general user in the materials community. With a Pentium-based platform this can be changed.
Present materials education is largely deficient in the immensely important area of computational materials science. Here we propose an extension of our curricular and research efforts, with the view of expanding our accomplishments and incorporating these advances into the educational curriculum by developing a “Laboratory of Computational Materials Science”. Currently, our department is playing a leadership role in a coalition of top engineering schools as part of a nationally funded NSF program to improve engineering education around a broad theme of design (the ECSEL Coalition). We hope to incorporate advances in computational materials science into this curricular reform effort. Some of the resulting advances in the Materials Science and Engineering curriculum will be made available to other schools in this coalition.
This proposal includes activities in two related broad areas. The first area concerns providing dedicated computational facilities for materials simulation and visualization research projects and for computational materials science courses at the undergraduate and graduate levels. These facilities will also aid in the development of thematic digital libraries, such as one on the failure of microelectronic components, as part of a senior-level undergraduate course on materials design and failure analysis. Such libraries and multimedia-based instructional tools are of immense pedagogical value and are expected to shape the outlook of future electronic classrooms (e.g., real-time movies of material processes, structures, defects, and microscope images). The second area concerns using a network of Pentium workstations to enable parallel computation for solving problems in complex systems. For this we will use the facilities proposed by the Computational Fluid Mechanics Group elsewhere in this proposal.
[Figure: pictorial outline of the goals of the proposed laboratory.]
The laboratory will be used to investigate problems in the processing, microstructure, and properties of a wide range of materials. Its use will also be integrated into the Department's courses. Most of these applications will have strong educational, research, visualization, and multimedia components.
ASC-3
Computational Fluid Dynamics
Bruce Balick, Astronomy
Uri Shumlak, Aeronautics & Astronautics
Scott Eberhardt, Aeronautics & Astronautics
Jim Riley, Mechanical Engineering
Phil Malte, Mechanical Engineering
Dennis Lettenmaier, Civil Engineering
Since the advent of digital computing, the highest-performance computers have been utilized in the field of Computational Fluid Dynamics (CFD). We propose to continue this tradition by porting existing CFD codes to Intel Architecture (IA) based equipment for a range of problems in CFD, and to perform subsequent scientific and engineering simulations. Although the range of topics is
broad, there is considerable commonality in both the physics of the phenomena studied (e.g.,
hydrodynamic instabilities, shock-dominated flows), as well as the methodologies employed (e.g.,
numerical methods, graphical packages). There are two major components to the proposed
equipment: (i) a central, high performance facility consisting of 6 networked, 4-processor Pentium
Pro computers operating in parallel, and (ii) single high-performance workstations dedicated to
individual topics. The high-performance workstations will be utilized especially for code
development, preliminary simulations, and analysis of output, especially graphical analysis. The 6
networked, parallel computers will be employed for the most computationally-intensive simulations,
and the system should be competitive with existing high-performance computers.
The scientific and engineering topics to be addressed using the IA based equipment are the
following.
1. Planetary Nebula (Bruce Balick, Astronomy). Planetary nebulae are stars in their later stages of
decay, as large quantities of mass are ejected from the stars. Balick and his collaborators are
presently focusing on explaining the evolving shapes of planetaries, and the instabilities that
form in them.
2. Plasma Physics (Uri Shumlak, Aeronautics & Astronautics). Shumlak and his collaborators are
currently developing an advanced algorithm to model time-dependent magneto-hydrodynamic
motion in three dimensions, and to apply the code to designing and testing of plasma-related
technologies such as advanced plasma thrusters for space propulsion.
3. Computational Aerodynamics (Scott Eberhardt, Aeronautics & Astronautics). The goal here is
to run aerodynamic simulations of such problems as: icing on airfoils, flapped airfoils, flapped
wings, hypersonic re-entry, and supersonic shock-induced mixing.
4. Turbulent Flows (Jim Riley, Mechanical Engineering). The IA based computers will be employed
to simulate the detailed, three-dimensional, time-dependent structure of complex turbulent
flows. Output from such simulations is being used to better understand the turbulence
dynamics, and to develop improved turbulence models for applications.
5. Combustion (Phil Malte, Mechanical Engineering). Simulations are proposed for the prediction
and reduction of pollutant emissions from turbulent combustion as exhibited in gas turbine
engines, industrial burners, and utility furnaces.
6. Hydrologic Modeling (Dennis Lettenmaier, Civil Engineering). UNIX-based applications for
various hydrologic prediction purposes will be ported to IA based computers, for ultimate
utilization by users of the models in the government and private sectors.
ASC-4
Computational Structural Mechanics: Design for Manufacturing
Mark Tuttle, Mechanical Engineering
Colin Daly and Per Reinhall, Mechanical Engineering
Duane Storti, Mechanical Engineering
Mark Ganter, Mechanical Engineering
George Turkiyyah, Civil Engineering
Computationally-intensive computer applications are an integral part of design for manufacturing in
the Department of Mechanical Engineering. These include both the traditional use of large
commercial codes [e.g., ANSYS for finite element analysis (FEA)], and the development of large,
special purpose codes. This work is currently being done using somewhat obsolete UNIX
workstations and mid-range personal computers (100 to 133 MHz Pentium-based computers). As
part of the general Departmental move to using Intel Architecture (IA) systems for support of our
students, there is a need to update the research computing resources to modern, high performance,
but relatively inexpensive systems based upon IA and using MS NT operating systems. It is
proposed to port existing codes to IA systems for the following applications.
1. Mark Tuttle (Mechanical Engineering) is developing a large code used for optimization of
composite structures. This is written in Fortran and runs on Sun SPARC 10 equipment. Runs
may take days to complete. He proposes to port this code to a multiprocessor IA/NT system
and to a language such as C++. Development of the code would then continue.
2. Colin Daly and Per Reinhall (both in Mechanical Engineering) make extensive use of FEA codes such as ANSYS and DYNA3D in their research activities. Daly is heavily involved in the design and analysis of the muon detector subsystem for the ATLAS particle detector experiment at the Large Hadron Collider under construction at CERN in Geneva, Switzerland. This involves standard structural analysis, but there is also a need for structural optimization, thermal analysis (static and transient), and vibration analysis. The latter analyses require extensive amounts of compute time and will overwhelm the older HP workstations currently in the Department. Reinhall's research on impact and destructive evaluation of structures involves heavy use of the research code DYNA3D.
3. Duane Storti, Mark Ganter (both in Mechanical Engineering), and George Turkiyyah (Civil
Engineering) propose to use the requested IA based equipment to implement threaded versions
of their software for implicit solid modeling, implicit surface polygonization, implicit ray-tracing,
fitting implicit solids to surface data, and skeleton computation. They will optimize and evaluate
the performance of these threaded implementations on advanced multi-processor hardware in
order to establish the feasibility of advanced design systems based on implicit formulations and
Intel Architecture.
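As a minimal sketch of the kind of data-parallel evaluation described in item 3 above, the fragment below samples a simple implicit solid (a sphere, standing in for far richer models) over a grid, with the slabs of the grid divided among worker processes. The function, grid size, and process pool are illustrative assumptions only, not the actual modeling software.

    # Minimal sketch: evaluate an implicit function over a 3-D grid in parallel.
    # The implicit function (a unit sphere) and grid size are illustrative only.
    from concurrent.futures import ProcessPoolExecutor
    import numpy as np

    N = 64  # grid resolution per axis

    def sphere(x, y, z):
        """Implicit function: negative inside the unit sphere, positive outside."""
        return x * x + y * y + z * z - 1.0

    def evaluate_slab(k):
        """Evaluate one z-slab of the grid and return it."""
        axis = np.linspace(-1.5, 1.5, N)
        x, y = np.meshgrid(axis, axis, indexing="ij")
        return sphere(x, y, axis[k])

    if __name__ == "__main__":
        with ProcessPoolExecutor() as pool:
            slabs = list(pool.map(evaluate_slab, range(N)))
        field = np.stack(slabs, axis=-1)
        inside = np.count_nonzero(field < 0)
        print(f"{inside} of {field.size} samples lie inside the solid")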
ASC-5
Large-Scale Simulations for Wireless Communications and Electronic Packaging
Yasuo Kuga, Associate Professor, Electrical Engineering
James Ritcey, Associate Professor, Electrical Engineering
Minoru Taya, Professor, Mechanical Engineering
Leung Tsang, Professor, Electrical Engineering
Jenq-Neng Hwang, Associate Professor, Electrical Engineering
Akira Ishimaru, Professor, Electrical Engineering
The studies of modern wireless communication systems, electromagnetic (EM) field radiation from
high-speed printed circuit boards (PCBs), and EM field scattering from geophysical media require
extensive numerical simulations. For example, Bit-Interleaved Coded Modulation with Iterative
Decoding (BICM-ID) and Overlayed Spread Spectrum Code Design (OSSCD) in wireless
communication systems require extensive computation using integer arithmetic and are ideally
matched to the Intel Pentium Pro architecture. Another example is the numerical solution for the
integral equation for two-dimensional random rough surfaces. Our new method, the Sparse-Matrix
Canonical-Grid Method (SMCG), allows us to solve rough surface problems with 1000 times more
unknowns than what has been traditionally done. This numerical method is not limited to remote
sensing applications but can be applied to many other engineering problems including the study of
the EM field radiation from high-speed PCBs. Our existing lab computing equipment (primarily Sun SPARCstation 10 and 20 systems) is insufficient for the large-scale problems we currently wish to pursue. We propose to achieve the necessary power with a network of dual- and quad-Pentium Pro computers running the Parallel Virtual Machine (PVM) message-passing software.
(1) Numerical Simulations in Electromagnetics and Fast Algorithms
For more than 20 years, the applied electromagnetics group has been conducting extensive
numerical simulations and algorithm development with support from NSF, DoD, NASA, and
industry. We have studied electromagnetic wave radiation from high-speed printed circuit boards,
remote sensing of geophysical media, new composite and smart materials, and wave interaction in
disordered media. For the past few years, we have systematically developed a new methodology for
solving the integral equation for two-dimensional random rough surfaces. The Sparse-Matrix
Canonical-Grid Method (SMCG) allows us to solve rough surface problems with 1000 times more
unknowns than what has been traditionally done. Our new method is not limited to remote sensing
applications but can be applied to many other engineering problems. For example, we are currently
studying the electromagnetic field radiation from high-speed PCBs. Electromagnetic interference
(EMI) is one of the major problems encountered by designers of high-speed digital circuits. Because
the amount of EMI is closely related to the signal line routing and line termination on PCBs, it is
difficult as well as costly to reduce EMI once PCBs are designed and fabricated. The ability to
predict and minimize EMI during the PCB design process, therefore, will be a very important
simulation tool for PCB designers.
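To give a rough sense of why conventional dense-matrix solutions break down at this scale, the sketch below estimates storage and per-iteration work for a dense integral-equation matrix versus a sparse-plus-FFT matrix-vector product. The constants are illustrative order-of-magnitude assumptions and are not a model of the actual SMCG implementation.

    # Order-of-magnitude storage/work estimates for surface-scattering solves
    # (illustrative constants; not a model of the actual SMCG implementation).
    import math

    BYTES_PER_COMPLEX = 16  # double-precision complex entry

    def dense_matrix_gbytes(n):
        return n * n * BYTES_PER_COMPLEX / 1e9

    def dense_matvec_flops(n):
        return 8.0 * n * n                 # roughly one complex multiply-add per entry

    def fft_matvec_flops(n, near_field=50):
        # assumed sparse near-field part plus an FFT-based far-field convolution
        return 8.0 * n * near_field + 40.0 * n * math.log2(n)

    for n in (10_000, 100_000, 1_000_000):
        print(f"N = {n:>9,d}: dense storage ~{dense_matrix_gbytes(n):12,.1f} GB, "
              f"dense matvec ~{dense_matvec_flops(n):.1e} flops, "
              f"FFT-style matvec ~{fft_matvec_flops(n):.1e} flops")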
High performance computers are also essential for our studies on remote sensing of geophysical
media such as snow, forests, terrain, and the atmosphere. We have studied the volume scattering
from snow with dense medium theory by taking into account the snow microstructures in terms of
grain-size distribution, pair-correlation functions, snow density and snow metamorphism. We have
developed a new buried target detection technique using the angular correlation method and
conducted extensive numerical studies. This technique is also applied to 3-D imaging of geophysical
media and height profile retrieval from synthetic aperture radar images. All these applications require
high-performance computers with parallel processing capabilities.
(2) Design of Modulation and Coding for Wireless Communications
The design of any new modulation/coding technique for digital communications requires extensive system simulation. Typically, the codes that are employed are only a select few out of a myriad of options. The codes must be carefully matched to the modulation, which in turn is heavily dependent on the channel. In wireless applications, the mobility of the users significantly affects the channel. A key parameter is fading, the variation in the received signal level. To measure system performance at an error rate of Pe, we require about 10/Pe simulation trials. Users will tolerate only the lowest of error rates: while voice traffic is more tolerant, data files must be transmitted with an end-to-end error rate below Pe = 10^-8. A research group headed by Prof. Ritcey is interested in modulation and coding for wireless channels. With the current interest in mobility, wireless systems are becoming extremely important. In most cases, the design of the modulation and coding portions of the transceiver must be optimized for the wireless channel. Our efforts are focused on joint modulator-code design for Rayleigh fading channels. Our most significant contribution is the use of iterative decoding for bit-interleaved TCM (BICM-ID). The system has been designed through extensive use of Monte Carlo simulation.
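As a minimal illustration of the kind of error-rate simulation described above, the following sketch estimates the bit error rate of uncoded BPSK on a flat Rayleigh fading channel with ideal coherent detection. It is a stand-in for, and far simpler than, the BICM-ID systems under study; the signal model and trial counts are illustrative assumptions.

    # Minimal Monte Carlo bit-error-rate estimate for BPSK over flat Rayleigh fading.
    # A stand-in illustration only; the actual BICM-ID simulations are far more involved.
    import numpy as np

    def rayleigh_bpsk_ber(snr_db, n_bits=1_000_000, seed=0):
        rng = np.random.default_rng(seed)
        snr = 10.0 ** (snr_db / 10.0)
        bits = rng.integers(0, 2, n_bits)
        symbols = 1.0 - 2.0 * bits                          # BPSK: 0 -> +1, 1 -> -1
        fading = rng.rayleigh(scale=1.0 / np.sqrt(2.0), size=n_bits)  # unit mean power
        noise = rng.normal(scale=np.sqrt(1.0 / (2.0 * snr)), size=n_bits)
        received = fading * symbols + noise
        decisions = (received < 0).astype(int)              # coherent detection
        return np.mean(decisions != bits)

    for snr_db in (0, 10, 20, 30):
        print(f"Eb/N0 = {snr_db:2d} dB: estimated BER = {rayleigh_bpsk_ber(snr_db):.2e}")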
ASC-6
Haptic Simulation for Virtual Reality
Blake Hannaford, Associate Professor, Electrical Engineering
Steven Venema, Systems Programmer, Electrical Engineering
Haptic feedback adds the sense of touch to computer simulations such as those used in computer
games and commercial CAD systems (http://brl.ee.washington.edu/BRL/haptics.html). In a
haptic-capable system, simulation forces are fed back to the operator via an active (motorized)
joystick or other mechanized device. The inclusion of haptic feedback capability to existing
computer systems requires large increases in computing resources; this increase is due to the
necessity of maintaining dynamic models of complex environments that must be updated at rates
approaching 1000Hz (as opposed to video-only simulations that typically run at 30Hz-60Hz).
Multiprocessor systems offer the potential of addressing some of these computing requirements. In
particularly demanding simulations, multiple computer systems may be interconnected by
high-speed intercommunication hardware such as 100Mbit Ethernet or 400Mbit FireWire
(IEEE-1394 bus) to distribute dynamic simulation load. Simulation systems with demanding
requirements have historically used multiprocessor embedded systems (VME, etc.) to distribute the computing load. However, multi-processor Pentium and Pentium Pro systems offer a powerful but relatively inexpensive alternative, allowing haptic simulation capabilities to enter the mass market.
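To illustrate the update-rate requirement, the following is a minimal sketch of a fixed-rate haptic servo loop rendering a simple virtual wall as a spring-damper force. The device I/O routines are placeholders, since the real systems drive motorized hardware, and the stiffness and damping values are illustrative.

    # Minimal sketch of a 1000 Hz haptic servo loop rendering a virtual wall.
    # read_position() and write_force() are placeholders for real device I/O.
    import time

    RATE_HZ = 1000.0
    DT = 1.0 / RATE_HZ
    STIFFNESS = 500.0        # N/m, illustrative
    DAMPING = 2.0            # N*s/m, illustrative
    WALL_X = 0.0             # wall at x = 0; contact when x < 0

    def read_position():
        return 0.0            # placeholder: would query the joystick encoder

    def write_force(force):
        pass                  # placeholder: would command the joystick motor

    def servo_loop(duration_s=1.0):
        prev_x = read_position()
        next_tick = time.perf_counter()
        for _ in range(int(duration_s * RATE_HZ)):
            x = read_position()
            velocity = (x - prev_x) / DT
            force = STIFFNESS * (WALL_X - x) - DAMPING * velocity if x < WALL_X else 0.0
            write_force(force)
            prev_x = x
            next_tick += DT
            time.sleep(max(0.0, next_tick - time.perf_counter()))

    if __name__ == "__main__":
        servo_loop()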
Existing haptic simulation systems in the UW Biorobotics Laboratory
(http://brl.ee.washington.edu/BRL/) use single-processor Pentium systems to run
limited-complexity haptic simulations. We propose to use more powerful multiprocessor Pentium
systems to develop the distributed real-time computing capability needed to support the
computational requirements of more complex and feature-rich simulation environments.
We propose the following R&D activities using new Intel hardware from this grant:
1. Our current simulation capability is severely limited by our current single-processor Intel
systems. We will implement distributed multi-processor simulation capabilities using
quad-processor Pentium Pro hardware. This will allow more complex and feature-rich haptic
simulations to be supported.
2. For very high-complexity models, multiple computer platforms must be used. We will explore
the use of 100Mbit Ethernet and/or FireWire (IEEE 1394 bus) to allow the simulation
computations to be distributed across multiple Intel computers. This will require 2-3 quad Pentium Pro systems, associated networking hardware, Adaptec 1394 PCI cards, and a developer's system.
3. There is a fast-growing interest in shared simulation capability (e.g., multi-user games) in the
mass market. We will use 2-3 clusters of 1-3 quad-Pentium Pro systems to explore the
possibilities and limitations of networked haptic-enabled simulations. This will require a small
switched 100Mbit Ethernet environment to interconnect the clusters.
ASC-7
High-frequency Digital Systems: Design, Modeling, Layout, and Test
Carl Sechen, Associate Professor, Electrical Engineering
Andrew Yang, Associate Professor, Electrical Engineering
Mani Soma, Professor, Electrical Engineering
This proposal addresses the design, verification and testing of high-frequency digital systems (those
with clock speed in the gigahertz range). While the system complexity in terms of device and
functional densities already puts a burden on current design workstations and CPUs, the additional
requirements of high-frequency switching and associated design and test tasks demand a much
higher level of performance from the workstations. We seek advanced equipment from Intel in
order to develop architectures and techniques to solve these problems.
Physical design and testing of VLSI systems have been proven, in many cases, to be NP-complete problems; thus the current tools use heuristics to find reasonable solutions in finite time with the current performance of design workstations. While physical design tools are well developed and close to optimal, simulation tools and test techniques still lag behind. The requirements of high-speed digital systems, with clock speeds in the gigahertz (GHz) range, bring together all these issues and add mathematical complexity in circuit design that was ignored in low-frequency systems. Specifically, the following problems need to be addressed for high-frequency digital designs:
1. Circuit modeling to improve simulation speed while preserving accuracy, especially when
high-frequency coupling and noise are included. This problem, while not NP-complete, is
extremely compute-intensive: it requires finite-element simulation for low-level components on
chips (wires, devices, simple circuits), and better models for high-level simulation of subsystems
and the entire system. The simulation development would also be assisted by interactive
visualization to help the designer see noise coupling issues, hot-spots in circuits, and timing
glitches on signal lines. This visualization, preferably interactive for low-level components, helps
the designer to better understand the problem (instead of having to delve into the mathematics)
and develop reasonable solutions.
2. Physical design to increase functional density while reducing coupling and timing errors. This problem, already NP-complete for general circuit layout, is further exacerbated by high clock speeds. The simulation models developed above will be used as a guide during placement and wiring of high-frequency circuits. Current physical design methodologies will probably need to change drastically, since bus-based systems are very susceptible to high-frequency noise, and other system architectures that would lead to better physical layouts need to be studied. The development of new layout algorithms and possibly new system architectures demands a level of CPU performance that we expect to get from the new Intel systems.
3. Testing and design-for-test of digital systems to ensure performance and test cost reduction.
Digital system testing already involves NP-complete algorithms, even though heuristics have
been developed for simple fault models such as stuck-at faults and somewhat more complicated
timing faults. The high clock frequencies make timing faults more dominant and currently there
is no cohesive methodology to address timing faults, how to test them, and how to design to
reduce their occurrences. We have been involved with several companies (Intel, National, Level
One) in working on solutions to this problem using advanced mixed-signal fault modeling
techniques and on-chip measurement techniques. We envision a system in which a subsystem is
tested (e.g. on a tester) while other subsystems are simulated simultaneously so that faults can be
isolated in real time, at least during the prototype development stage. Such a “test-station”
integrating IC testing, on-the-fly design-for-test and simulation demands extremely high CPU
performance that we expect to get from the Intel systems requested.
We have described briefly in this proposal how Intel equipment can be used to solve the
high-frequency design problems that will be extremely important as CMOS scales down to
0.18-micron or smaller technologies. We have the expertise to address these issues, and the Intel
equipment would provide computing resources to explore new efficient solutions.
DM-1
The Gigabit-to-the-Desktop Initiative: Building a National
Distributed Computer Science Department
Ed Lazowska, Professor and Chairman, Computer Science & Engineering
Ron Johnson, Vice President and Vice Provost for Computing & Communications
Brian Bershad, Associate Professor, Computer Science & Engineering
John Zahorjan, Professor, Computer Science & Engineering
The University of Washington’s Department of Computer Science & Engineering and Office of
Computing & Communications are engaged in a number of efforts whose goal is to explore the use
of advanced digital technology for education and collaboration.
Two of these efforts, closely related to one another, are particularly appropriate in the context of
this RFP. In a nutshell:
- The “Gigabit-to-the-Desktop” (G2D) initiative, a collaborative effort involving the University of Washington, Carnegie Mellon University, MIT, Brown University, Microsoft, DARPA, and several others, is attempting to anticipate Internet2 / Next Generation Internet by aggressively deploying bandwidth and creating a “distributed computer science department” among these institutions, exploring the software that will be required to support close collaboration and highly interactive distance learning when this bandwidth is ubiquitous. We and our collaborators propose to conduct this experiment on the Intel Architecture.
- The “Research TV” (RTV) initiative, a collaborative effort spearheaded by the University of Washington and involving more than a dozen major research universities, serves as a testbed for interactive, live, and on-demand education using digital technology via Internet, cable, and satellite. We propose to begin the move of RTV to the Intel Architecture, focusing on joint programs between UW’s Office of Computing & Communications and Department of Computer Science & Engineering.
Here, we describe the G2D initiative. The RTV initiative, the “server side” of the G2D initiative,
will be described in the third major section of this proposal, relating to the migration of the
“educational enterprise” to Intel Architecture systems.
As noted above, the goal of the Gigabit-to-the-Desktop initiative is to prepare for the Internet2
/NGI by aggressively deploying bandwidth and creating a “distributed computer science
department,” exploring the software that will be required to support close collaboration and highly
interactive distance learning when this bandwidth is ubiquitous.
Since the days of ARPANET, the University of Washington has been a leader in regional
networking. NorthWestNet, the NSF regional network serving the 6-state Pacific Northwest region,
is a creature of the University of Washington, and the NWNet Network Operations Center is
operated by UW Computing & Communications under contract to NWNet.
Recently, the University of Washington has been a leader in the Internet2 / Next Generation
Internet initiative. We have established UW as a Regional Partner of the National Computational
Science Alliance, and we have assembled an Internet2 coalition including UW, Boeing, Microsoft,
Alaska, and Hawaii, in an effort to create momentum sufficient to ensure the siting of a gigapop
(gigabit point-of-presence) in Seattle. (The current federal draft of the NGI initiative shows Seattle
as one of six national gigapops, along with SDSC (Los Angeles / San Diego), Texas, Boston, NCAR
(Colorado), NCSA (Chicago), and a location in the southeast.)
Will we -- and will the rest of the nation -- be ready when the bandwidth arrives? The NSF gigabit testbeds were regarded as flops. A reasonable conjecture is that within each of these testbeds -- which were isolated from one another -- there was no community of common interest possessing critical mass. By focusing on a small number of the nation’s leading computer science departments, the Gigabit-to-the-Desktop (G2D) initiative avoids this pitfall.
G2D is an emerging DARPA-sponsored effort that is attempting to quickly build a national-scale
digital collaboratory among leading computer science departments, in order to drive the applications
that will support close collaboration and highly interactive distance learning.
G2D builds upon the DARPA AAI program, for which Sprint is the prime bandwidth contractor.
Under the leadership of Raj Reddy, Dean of the School of Computer Science at Carnegie Mellon
University, we expect that CMU, Washington, Brown, MIT, Microsoft Research, and several other
sites (likely Stanford and Berkeley, plus the members of the NSF Science & Technology Center on
Computer Graphics, plus several companies) will be connected in the very short term with dedicated
OC3. Our objective will be to build a “distributed computer science department,” sharing courses
and lectures both interactively and on-demand, and collaborating interactively from the desktop on
research. The goal is to be (and remain) “one step ahead” of Internet2/NGI, serving as a laboratory
for identifying and solving the problems that must be successfully attacked if we are the realize the
potential of coming generations of processing power and network bandwidth -- the potential of
applications such as voice and video email, digital videoconferencing, interactive distance education,
and on-demand education. Our collaboration with companies will ensure the commercialization of
tools that we develop, and/or the incorporation of techniques that we develop into future
generations of existing commercial tools. The collaborative aspects are the key to G2D, and we are
enthusiastic about having Intel as a key partner.
Many ingredients must be present for the success of G2D. One key ingredient is the widespread
deployment at each site of systems that are capable of full participation in this electronic community.
These desktops must be “Giga-PCs” -- systems with relatively prodigious amounts of memory,
storage, processing, network bandwidth, and graphics performance. We must deploy at least 40 of
these systems in offices in CSE, and back them with video-on-demand production facilities (for
digital editing, authoring, and encoding) and significant repository server clusters. These systems will
be the precursors of systems that we expect to be ubiquitous across campus in only a small number
of years. The production and repository systems will be described under the RTV initiative in the
third major section of this proposal. The desktop system requirement for the G2D initiative is:
DM-2
A PC-Based Multimedia Course-On-Demand System for Distance Learning in Electrical
Engineering
Jenq-Neng Hwang, Associate Professor, Electrical Engineering
Ming-Ting Sun, Professor, Electrical Engineering
Mani Soma, Professor, Electrical Engineering
Gregory Zick, Professor and Chairman, Electrical Engineering
In this project, we will investigate the development of a PC-based course-on-demand system that allows students to access a multimedia database containing recorded video courses over various networks, such as TCP/IP LANs, the public switched telephone network (PSTN), and the integrated services digital network (ISDN), based on a client-server model. The recorded courses are stored on the server, which allows multiple clients to access the multimedia database simultaneously over the networks. Students can, at any time, access these pre-recorded sequences through Web browsers (e.g., Netscape, Mosaic, and Microsoft Internet Explorer) by clicking on specific course numbers (e.g., EE440, EE505) or on specific contents (e.g., Z-transform, FIR filters). We have also developed several new multimedia features that allow students to learn the course materials more effectively from the digital video sequences in distance learning environments. To help course instructors create these multimedia features, we have developed a hyper video editor tool, which allows the instructor to mark various portions of the class video and create the corresponding hyperlinks to dynamically access relevant video courses, to query pre-compiled handbook/encyclopedia knowledge, and to download the lecture captions electronically.
By taking advantage of the powerful Pentium Pro computing capability as well as the effective Windows NT communication capability, we will develop several new multimedia features that allow students to learn the course materials more effectively from the digital video sequences, compared with learning from conventional analog media in distance learning environments.
Some examples of these features include:
1. A Web browser interface that allows students to browse the multimedia database to select a desired course or topic, browse the course, and get to specific points in the course quickly.
2. A click-and-reset user interface that lets students choose any starting point within a specific video clip. This feature allows students either to fast-forward to skip known material or to rewind to review content from a past time-stamp.
3. A click-and-query user interface that lets students query the contents of pre-compiled course handbooks directly from the video sequences. This feature gives students a quick reference for technical terminology, mathematical formulas, and citation information.
4. A click-and-go cross reference (hyperlinks) between the video contents of relevant courses, which allows students to jump to related video course material.
5. Synchronized lecture video and viewgraphs (e.g., Microsoft PowerPoint). The viewgraphs are progressively displayed in step with the video clips, creating a virtual classroom environment. The viewgraphs are also equipped with hyperlinking interfaces similar to the one provided for the video sequences.
6. A click-and-action user interface for downloading the instructor's lecture captions electronically in ASCII format.
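A minimal sketch of the kind of time-indexed link table such a hyper video editor might produce is shown below. The course number, timestamps, slide names, and link targets are hypothetical examples rather than actual course data.

    # Minimal sketch of a time-indexed hyperlink table for a recorded lecture.
    # Course number, timestamps, slide names, and targets are hypothetical examples.
    lecture_links = {
        "course": "EE505",
        "segments": [
            {"start_s": 0,   "end_s": 310, "slide": "slide01.gif",
             "topic": "Z-transform", "handbook": "/handbook/z-transform"},
            {"start_s": 310, "end_s": 742, "slide": "slide02.gif",
             "topic": "FIR filters", "cross_ref": ("EE440", 1180)},
        ],
    }

    def links_at(record, t_seconds):
        """Return the segment (slide, handbook entry, cross references) active at a time."""
        for seg in record["segments"]:
            if seg["start_s"] <= t_seconds < seg["end_s"]:
                return seg
        return None

    print(links_at(lecture_links, 400)["topic"])   # -> "FIR filters"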
DM-3
The University of Washington Digital Library Project
Gregory Zick, Professor and Chairman, Electrical Engineering
Geri Bunker, Associate Director of Technical Services, University Library
The University Libraries is collaborating with the Department of Electrical Engineering to accelerate
the development of the UW Digital Library. More than digital pointers to print resources, more than
online textual finding aids, true digital libraries deliver full text, images, video, data in all formats,
regardless of time and place. Scholars are able to make new connections among varied sources of
information, present results graphically, analyze texts more quickly, search databases more easily and
link to needed information. With enough computing power in the library servers and client
programs, users can manipulate and evaluate data, creating new knowledge in profound and
unforeseen ways. Digital library holdings promise a major change for scholarly research in a number
of areas. These types of efforts are among the great equipment challenges and have national support from NSF, NIH, and the Library of Congress.
With multimedia collections, many functions are necessarily compute-intensive. Acquisition of
materials often involves scanning and manipulation of images which must be compressed and
reformatted. Texts must be processed with optical character recognition software. On the delivery
side, users need interfaces to sophisticated search engines which can perform complex Boolean
expressions and rank retrievals according to relevance. The price/performance of both mainframe and RISC platform computing has proven problematic for libraries wishing to launch scalable
digitization efforts. The UW Libraries has chosen to move away from these platforms and is now
focused on replacing UNIX-based X11/Motif clients with personal computers running Windows 95
served by NT. We depend upon this distributed platform for all our staff and public computing
applications.
CONTENT for the UW Digital Library
In the early phases of development we are focusing on materials of regional and historical interest,
building the “Digital Northwest” component of our library. Multimedia data comprise the heart of
the Digital Northwest, which will provide network access to rich, and often unique, textual and
graphical resources in the Libraries’ collection. Our goals include offering to the public, and in
particular to Washington State’s K-12 teachers, digital access to rare materials which are otherwise
very difficult to explore.
The CONTENT multimedia database management system, developed at the CISO lab in the
Department of Electrical Engineering, has been instrumental in launching the Digital Northwest.
Content is a practical, scalable, and high-performance text-indexed multimedia database system. The
novelty of Content is in its approach of integrating high-volume storage, fast searching and
browsing, easy multimedia acquisition, effective updates, scalability, extendibility, and an API based
on HTTP. Content is also a low-cost solution for a large multimedia database that is available today.
Standard Web-based browsers such as Netscape can query the Content server. The API is flexible so
that different and unique Content clients on multiple platforms can be built to access multiple
Content servers. The Content architecture permits any multimedia type to be stored. Text
descriptions are used as indices for images and videos. Content includes an easy-to-use
Windows-based acquisition station for acquiring images and video. Currently, Content is being used
in a real library setting and contains more than 25,000 multimedia objects that span two different
collections of valuable historical photographs. In terms of performance, Content can access a single
image in a database of over one million images in less than a second. The description and design of
Content will be presented at the ACM Digital Library Conference this July.
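To make the HTTP-based API concrete, the short Python sketch below shows how a client program might issue a keyword query to a Content-style server. The host name, endpoint path, query parameters, and line-oriented response format are illustrative assumptions for this sketch, not the published Content interface.

    # Hypothetical sketch of a keyword query against a Content-style server.
    # The host, path, parameters, and response format are assumptions.
    import urllib.parse
    import urllib.request

    def search_content(server, keywords, max_hits=20):
        params = urllib.parse.urlencode({"query": " ".join(keywords), "max": max_hits})
        url = "http://%s/search?%s" % (server, params)
        # Any HTTP client, including an ordinary Web browser, can issue the same request.
        with urllib.request.urlopen(url) as response:
            # Assume one "object-id <TAB> text description" record per line.
            return [line.decode("utf-8").rstrip("\n").split("\t", 1) for line in response]

    # Example (hypothetical host): find Klondike gold rush photographs.
    # for object_id, caption in search_content("content.lib.example.edu", ["Klondike", "gold", "rush"]):
    #     print(object_id, caption)

Because the interface is plain HTTP, the same query can be issued from a custom acquisition or display client on any platform, which is the flexibility the API is intended to provide.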
Employed successfully in the UW Libraries’ Special Collections Division, CONTENT has made
significant progress in the area of imagebank management, including acquisition, search and display
of very dense graphical material. We have digitized, indexed and put on display photographs from
the Asahel Curtis and Sayre Theatrical collections. Other materials which we are scanning for search
and display include letters and sketches by Japanese Americans in the incarceration camps of World
War II. Additionally, maps, photographs and guides produced for the Klondike gold rush of the
mid-1890s are currently receiving heavy use due to the centennial of the Yukon gold rush to be
celebrated next year; we have begun to add these to our Digital Northwest repository.
The Pacific Northwest Regional Newspaper and Periodical Index, currently in traditional card
format, is another significant and unique resource in the region. Searchable electronic access to it
and other finding aids, such as those that describe important manuscript collections (e.g., the
Congressional papers of the late Senator Henry M. Jackson), would be a boon to the whole region as
well as the UW community.
Another buried treasure is our wealth of Pacific Northwest ethnic and minority newspapers from
the late 19th century to the present, currently available only on microfilm. This rich
repository could be opened up through a “digital on demand” service, in which a user at a
microform display device would download a digital copy of a specific article to a powerful
desktop PC. Another method might involve using Content’s
distributed acquisition module on PCs, making a digital copy of the whole repository for network
display and downloading. This project is important because it prototypes methods for accessing and
digitizing the more than 5 million microform items owned by the Libraries.
DM-4
Structure-Based Visual Access to Biomedical Library Information
Jim Brinkley, Ph.D., Associate Professor, Biological Structure;
Brent Stewart, Ph.D., Associate Professor, Radiology;
Sherrilynne Fuller, Ph.D., Associate Professor, Department of Medical Education
We believe that a structural information framework is a useful means for organizing not only
anatomical information, but most other forms of biomedical information as well. Because much
biomedical information can be thought of as attributes of structure, novel 3-D interfaces which let
the user roam through a virtual body and click on organs, cells or molecules to access remote Web
sites containing relevant data or knowledge can be developed. These interfaces are likely to be of great
interest to many healthcare workers, including clinicians, researchers and students. In addition, the
structural informatics strategies developed will likely find applications in other research areas
requiring access to information about complex structures. There are a number of interesting
challenges to building such a structural information framework: image understanding, computer
graphics and visualization, multi-media databases (sound, real-time interactive communications),
user interface design (including virtual reality), and network-based distributed system integration.
Particularly significant for this proposal is the fact that all of these areas have very high
computational requirements, in particular for 3-D visualization and delivery of high-bandwidth data
over the Internet. For that reason, our proposed applications are especially well-suited for the Intel
high-performance computers and will be a visually compelling demonstration of the power of Intel
high-end computers.
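As a purely illustrative sketch of the idea that biomedical information can be organized as attributes of structure, the Python fragment below attaches network resources to named anatomical structures and looks them up when a structure is selected. The structure names, fields, and URLs are hypothetical placeholders, not the Digital Anatomist data model.

    # Hypothetical sketch: anatomical structures as an index to networked resources.
    # Names, fields, and URLs are placeholders, not actual project data.
    structure_resources = {
        "heart": {
            "part_of": "thorax",
            "links": ["http://example.edu/atlas/heart",
                      "http://example.edu/radiology/cardiac-imaging"],
        },
        "left ventricle": {
            "part_of": "heart",
            "links": ["http://example.edu/atlas/left-ventricle"],
        },
    }

    def resources_for(structure, index=structure_resources):
        # Selecting a structure in a 3-D interface would trigger a lookup like this,
        # optionally adding material attached to the enclosing structure.
        entry = index.get(structure)
        links = list(entry["links"]) if entry else []
        if entry and entry.get("part_of") in index:
            links += index[entry["part_of"]]["links"]
        return links

    # Example: clicking the left ventricle also surfaces heart-level resources.
    # print(resources_for("left ventricle"))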
The proposed work is an extension and integration of three medical image-related programs in the
UW Academic Medical Center to test and extend the structural information framework as a means
for organizing information in support of medical education and clinical care:
1) The Digital Anatomist on-line information system in anatomy
(http://www9biostr.washington.edu/da.html), an award-winning program currently in use for
the on-line generation and delivery of 3-D Web-based anatomy resources. Ports of current
software from DEC alpha to the NT platform will be completed. Intel user workstations will
replace Macintosh machines. Additional images from the visionary Visible Human Project
(http://www.nlm.nih.gov/research/visible/visible_human.html) will be incorporated.
2) The Integrated Advanced Information Management system (IAIMS) heterogeneous image
database development, management and delivery (http://healthlinks.washington.edu/iaims/).
Ports of current software from DEC alpha and Sun workstation to the NT platform will be
completed.
3) Network monitoring, configuration and scaling methodologies for image-related activities. The
current UNIX-based Block Oriented Network Simulator (BONeS) will be ported to an NT
platform with high-resolution graphics capability. These three projects are currently being
developed within the context of a regional telemedicine testbed initiative funded by the National
Institutes of Health, National Library of Medicine. (See http://hslib.washington.edu/b3) The
testbed will provide an optimum proving ground for research and testing the capabilities of the
Intel hardware. The regional testbed will support another UW Intel project proposed by Dr.
Doug Stewart, Dr. Arun Somani and Dr. Siva Narayanan, School of Medicine - Clinical
Multimedia Data Generation, Transmission and Retrieval in a Fault-Tolerant Networked
Environment.
DM-5
Highly Reliable Networks for Cardiology
Siva Narayanan, Acting Assistant Professor, Department of Medicine, Division of Cardiology
Doug Stewart Professor, Department of Medicine, Division of Cardiology
Arun Somani, Professor, Electrical Engineering
The project proposes to use Intel equipment for the following activities:

Design and Development of a Cine Angiographic Image Archival System: The system
performs real-time capture and storage of high-resolution angiographic and echocardiographic
data in the cath and echo laboratories. During the capture process, the system also performs
real-time compression of images. Once the procedure is completed, the video images will be
transferred to a server cluster for archiving. Various hardware and software compression
schemes will be considered for these capture stations. The resulting system will be connected to
the Regional Telemedicine Initiative, spearheaded by Dr. Sherrilynne Fuller, Director,
Informatics, School of Medicine.

Mass Archival and Fault-tolerant Retrieval of digitized angiograms: The cardiac
multimedia data would be stored in a client-server environment, where the server cluster consists
of three super PCs. The servers would store data compressed with a low
compression factor, but could perform on-the-fly compression to serve a client requesting
lower-bandwidth video on demand. Distributed computing issues will be considered. If
a single server is busy or down, any of the other servers will take over, thereby
ensuring a continuous stream of video (a minimal failover sketch appears at the end of this
project description).

Development of optimal angiographic image compression schemes: The servers
mentioned above would initially perform standard M-JPEG compression on demand. (M-JPEG
has distinct advantages over MPEG for our application.) However, we are also working on a
new compression scheme that is optimized for angiographic data. The scheme will be tested on
Intel developmental machines and will eventually replace the M-JPEG compression module on
the server cluster. Other software and hardware compression schemes will also be evaluated.

Real-time distance viewing of a cardiac catheterization/echo procedure: An exciting new
development in the area of telecardiology is the ability to view angiograms and echocardiograms
from a remote location in real-time. This would help hospitals in rural areas obtain real-time
assistance from an expert in performing simple intervention techniques on critically-ill patients
requiring emergency care (e.g., opening a coronary artery that is acutely closed). The expert at the
other end needs live access to the angiograms that are being generated in the rural hospital.

Migration from UNIX-based Cardiology Information System: The existing outdated
Unix-based cardiology dictation and information systems will be migrated to the PC platform. In
combination with the server cluster developments described above, this project will ensure that
the complete set of reliable multimedia patient data will be brought to the physician’s PC on
demand with minimal delay.

Hands-on training curriculum for Cardiology residents and fellows: We will also be
developing a multimedia training curriculum for cardiology residents and fellows. This involves
development of a multi-user networked multimedia training database system that provides
hands-on training on all aspects of clinical cardiology.
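To illustrate the fault-tolerant retrieval behavior described above, the following Python sketch shows a client that tries each server in the cluster until one responds. The host names and the /angio/<study-id> URL layout are hypothetical placeholders, not the actual system design.

    # Hypothetical sketch of client-side failover across a three-server cluster.
    # Host names and URL layout are placeholders.
    import urllib.error
    import urllib.request

    SERVERS = ["cardio1.example.edu", "cardio2.example.edu", "cardio3.example.edu"]

    def fetch_study(study_id, servers=SERVERS, timeout=5):
        # Return the archived study from the first server that answers.
        last_error = None
        for host in servers:
            url = "http://%s/angio/%s" % (host, study_id)
            try:
                with urllib.request.urlopen(url, timeout=timeout) as response:
                    return response.read()   # first healthy server wins
            except (urllib.error.URLError, OSError) as err:
                last_error = err             # busy or down: try the next server
        raise RuntimeError("all servers unavailable: %s" % last_error)

    # Example (hypothetical study identifier):
    # video = fetch_study("1997-0042")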
DM-6
The Laboratory for Animation Art
Professor Shawn Brixey, Head/Cross-Disciplinary Arts Program
Professor Richard Karpen, Computer Music and Director, CARTAH
This proposal from the Laboratory for Animation Art represents a unique and powerful partnership
between faculty from several departments and administrative units from within the College of Arts
and Sciences. The Studio for Media Arts Research and Technology Lab, and the School of Music
Computer Center each have their own missions within the Schools of Art and Music respectively,
but at the same time they are both highly interdisciplinary environments. The Center for Advanced
Research Technology in the Arts and Humanities, an independent research center in the College,
administers the Laboratory for Animation Arts (LA2), supporting a partnership between Art, Music, and
Computer Science and Engineering. Together these three areas represent a powerhouse
collaboration that combines the creation of new and innovative digital art forms with significant new
research in image and audio processing, putting the University of Washington in a leadership
position in the international arena of “digital arts technology”.
The Arts are experiencing profound changes due to the possibilities that high-end technologies now
provide, and the sciences are experiencing the same revolution in their collaboration with the arts.
CARTAH has brought together the creative work of the Arts with the expert knowledge and
innovation from the Sciences and Engineering to create the LA2. The Lab serves a significant
number of faculty and students fostering both discipline-specific work and a broad array of
interdisciplinary digital media projects. Since faculty and students move from one computational
environment to another to accomplish various tasks associated with their digital media projects,
significant improvements in our technology resources for the LA2 through Intel’s program would
have far reaching effects at the University of Washington and beyond. The faculty, students, and
staff who work in the LA2 are well prepared to take full advantage of Intel’s Advanced systems
should this proposal be successful.
DM-7
Computer Graphics
David Salesin, Associate Professor, Computer Science & Engineering
The University of Washington’s Department of Computer Science & Engineering has one of the
nation’s preeminent research and education programs in computer graphics, headed by Professor
David Salesin. One measure of the program’s success: the previous all-time record for papers by a
single author at a SIGGRAPH conference was 4, held by Pat Hanrahan of Stanford, while at the
most recent SIGGRAPH, Salesin had 8 papers (and 4 of these were co-authored by UW
undergraduates!). The computer graphics program has recently been complemented by a
multidisciplinary education program in computer animation, taught by Salesin in conjunction with
industry professionals from companies such as Pixar, Pacific Data Images, and Rainsound,
in a lab co-established with Professors Shawn Brixey from the School of Art and Richard Karpen
from the School of Music.
With Intel’s help, we have begun the transition of this program from Silicon Graphics hardware
(SGI has donated $800,000 in equipment outright, and has provided substantial allowances on a
great deal more) to the Intel Architecture. This migration is being aided and abetted by our close
colleagues in the outstanding computer graphics group at Microsoft Research. (Of Salesin’s
record-smashing 8 SIGGRAPH papers, 3 were co-authored with colleagues at Microsoft Research!)
We seek to continue this transition by converting a second computer graphics laboratory -- 20
workstations and 5 servers -- to the Intel Architecture and Windows NT. (Conversion of the
computer animation laboratory -- the Laboratory for Animation Arts -- is included elsewhere in this
section of the proposal, under the leadership of Salesin’s colleagues Shawn Brixey from the School
of Art and Richard Karpen from the School of Music.)
Our initial computer graphics laboratory conversion was carried out using dual-processor
Providence systems. We specify that system here, although we actually expect to wait for successor
systems.
Appendix II – Summary of Equipment Requests
This appendix contains the complete equipment request. Below is the legend of equipment configurations and
unit costs, followed by the requested totals by project; the full spreadsheet is in the file summary.xls on the
enclosed floppy disk.
Equipment Legend (with unit costs)
SP1   Pentium Pro Desktop Minitower, 200 MHz – 128MB/4.3GB SCSI       $ 6,754
SP2   Single Processor AP440 FX Minitower – 64MB/4GB SCSI             $10,219
SP3   PD440 FX Pentium II MMX Minitower, 266 MHz – 128MB/4GB SCSI     $ 6,671
DP1   Dual Processor AP440 FX Minitower – 256MB/8GB SCSI              $16,316
DP2   Dual Processor AP440 FX Minitower – 128MB/4GB SCSI              $12,871
QP1   Quad Pentium Pro AP450 GX Tower – 256MB/9GB SCSI                $37,099
QP2   Quad Pentium Pro AP450 GX Tower – 256MB/4GB SCSI                $25,158
Opt1  V3C38 STB 8MB 3-D Multimedia Accelerator                        $   325
Opt2  G1012 Ultra FX32T (60MB)                                        $ 2,868
Opt3  G1006 True FX Pro 16MB Video Card                               $ 1,510
Opt4  ST1917WC Seagate 9GB SCSI Hard Drive                            $ 2,200
Mon1  306389 20” Sony GDM 20SE2T                                      $ 2,200
Mon2  M1010 21” Hitachi Monitor                                       $ 2,500
Net1  E5101TX Intel Switchable Hub                                    $ 4,995
Net2  EC100MAFX Uplink Switch Module                                  $   595
Requested Totals by Project
(The project-by-project unit quantities for each equipment type are detailed in summary.xls.)

Advanced Scientific Computing (ASC)
ASC-1  Chem lab          $  191,507
ASC-1  ACMS lab          $  142,821
ASC-1  LONEOS            $   78,524
ASC-1  SLOAN             $   37,092
ASC-1  TIMESLICE         $   57,808
ASC-1  SuperK            $   78,524
ASC-1  GLAST             $   57,808
ASC-1  UWAXFS            $   37,092
ASC-1  Hood lab          $  125,361
ASC-1  Smitheatre        $  124,443
ASC-1  Supercenter       $  676,420
ASC-3  CFD               $  270,439
ASC-2  CMS               $  143,518
ASC-4  CSM               $   87,436
ASC-5  E&M               $  198,655
ASC-6  Haptic VR         $  190,548
ASC-7  VLSI Design       $  287,744
ASC Subtotal             $2,785,740

Digital Media (DM)
DM-1   G2D               $  683,760
DM-2   Course-on-Demand  $  388,335
DM-3   Digital Library   $  397,684
DM-4   Medical Library   $  403,484
DM-5   Cardiology        $  200,910
DM-6   Arts Animation    $  195,654
DM-7   Graphics Lab      $  390,033
DM Subtotal              $2,659,860

High Performance Computing (HPC)
HPC-1  Clusters          $  454,479
HPC Subtotal             $  454,479

Total Request            $5,900,079
Appendix III – Participant CVs (alphabetical)
Leroy Hood, M.D., Ph.D.
Biographical Sketch
Dr. Leroy Hood is the William Gates III Professor of Biomedical Sciences, Director of a National
Science Foundation Science and Technology Center and Chairman of the Department of Molecular
Biotechnology at the University of Washington School of Medicine. He has an M.D. from the Johns
Hopkins Medical School and a Ph.D. in biochemistry from the California Institute of Technology.
His research interests focus on the study of molecular immunology and biotechnology. His
laboratory has played a major role in developing automated microchemical instrumentation for the
sequence analysis of proteins and DNA and the synthesis of peptides and gene fragments. More
recently, he has applied his laboratory's expertise in large-scale DNA mapping and sequencing to the
analysis of the human and mouse T-cell receptor loci -- an important effort of the Human Genome
Project. His laboratory is also interested in the study of autoimmune diseases and new approaches to
cancer biology.
Dr. Hood is a member of the National Academy of Sciences and the American Academy of Arts
and Sciences. In 1987, he was given the Louis Pasteur Award for Medical Innovation and the Albert
Lasker Basic Medical Research Award for studies of immune diversity. In 1989, Dr. Hood was
awarded the Commonwealth Award of Distinguished Service for work in developing instruments
used to study modern biology and medicine and the Cetus Award for Biotechnology. Dr. Hood
received the American College of Physicians Award in 1990 for distinguished contributions in
science as related to medicine. More recently, he received the 1993 Ciba-Geigy/Drew Award in
Biomedical Research from Drew University and the 1994 Lynen Medal of the Miami Biotechnology
Symposium. In May of 1994, Dr. Hood was presented with the University Distinguished Alumnus
Award from the Johns Hopkins University School of Medicine for changing how diagnoses are
made and opening the doors for miracles in treatments and cures.
Dr. Hood also holds honorary Doctor of Science degrees from Montana State University (1986), Mt.
Sinai School of Medicine of the City University of New York (1987), the University of British
Columbia (1988), the University of Southern California (1989), Wesleyan University (1992), and
Whitman College (1995), as well as a Doctor of Humane Letters honorary degree from Johns
Hopkins University (1990).
Dr. Hood also has had a life-long commitment to bringing science to society. He has spoken widely
on the ethical challenges science presents society. He has a commitment to bringing hands-on,
inquiry-based science to K-12 classrooms. His department manages four major science programs
spanning elementary, middle, and high school.
Jenq-Neng Hwang (Associate Professor, Electrical Engineering)
Jenq-Neng Hwang received the B.S. and M.S. degrees (1981 and 1983) from National Taiwan University, and
the Ph.D. from the University of Southern California in 1988. He has published more than 100 journal
papers, conference papers, and book chapters in the areas of signal/image processing, statistical data
analysis, and computational neural networks. Dr. Hwang received the 1995 IEEE Signal Proc.
Society's Annual Best Paper Award in the area of Neural Networks for Signal Processing. He serves
as the Secretary of the Neural Networks Sig. Proc. Tech. Committee in the IEEE Sig. Proc. Soc.,
and is an Associate Editor for the IEEE Trans. on Sig. Proc. and Trans. on Neural Networks.
Yasuo Kuga (Associate Professor, Electrical Engineering)
Dr. Kuga received his B.S., M.S., and Ph.D. degrees from the University of Washington, Seattle in
1977, 1979, and 1983, respectively. From 1983 to 1988, he was a Research Assistant Professor of
Electrical Engineering at the University of Washington. From 1988 to 1991, he was an Assistant
Professor of Electrical Engineering and Computer Science at The University of Michigan and since
1991 he has been with the University of Washington. Dr. Kuga was selected as a 1989 Presidential
Young Investigator. He is an Associate Editor of Radio Science (1993-1996) and IEEE Trans.
Geoscience and Remote Sensing (1996-1999).
GEORGE LAKE
Homepages:
Group: http://www-hpcc.astro.washington.edu/
Personal: http://www-hpcc.astro.washington.edu/lake
Education:
Ph.D., Physics, 1980; M.A. 1977, Princeton University.
B.A. with High Honors, 1975, Physics and Astronomy, Haverford College
Employment History:
1985—: Univ. of Washington, currently Professor of Astronomy and Physics
1991—93: Staff Assoc., Carnegie Observatories, Pasadena
1981—87: Member of the Technical Staff, AT&T Bell Laboratories.
1983—86: Visiting Member, Institute for Advanced Study, Princeton
1980—81: NATO Postdoctoral Fellow, Institute of Astronomy
1980—present: Fellow, Churchill College, Cambridge University.
1979—81: Research Astronomer, University of California, Berkeley.
Recent Professional Service:
NASA HPCC/ESS Project Scientist and Science Working Group (SWG) Chair, 1996- ; Grand Challenge PI and SWG member, 1993-96
NASA HQ Info. Sys. and Oper. Management Working Group, 1997-
Organizer, Petaflop Initiative Session, 1997 AAAS Annual Meeting
Organizer, Petaflops BOF Workshop, Supercomputing '95
Science Advisory Council, Sloan Digital Sky Survey, 1995-
Plenary Speaker, 8th ACM-SIAM Symposium on Discrete Algorithms
Chair, GSFC MPP Testbed TAC, 1997-
Chair, JPL Supercomputing Project Review, 1995
Keynote Speaker, Frontiers Petaflop Workshop
Chair, NASA Astrophysical Theory Program Review, 1994
UW Astronomy Graduate Program Advisor, 1990-95
Astronomy Book Editor, Physics Today, 1991-
NASA HRMS/SETI Investigators Working Group, 1990-93
Honors and Fellowships:
NASA High Resolution Microwave Survey Group Achievement Award 1993
Fullham/Dudley Award 1989-90
NATO Postdoctoral Fellowship, 1980-81
NSF Predoctoral Fellowship, 1975-78
Phi Beta Kappa, 1975
Primary Research Interests:
Formation and Evolution of Large Scale Structure and Galaxies
Stability of the Solar System, Formation of Other Planetary Systems
The Formation and Enrichment of Globular Clusters
Spatially and Temporally Adaptive Computational Methods
Dark Matter
Representative Publications:
“The Formation of Galaxies II. A Control Parameter for the Hubble Sequence, III. The Formation
of the Hubble Sequence”, with R. G. Carlberg, Astronomical J., 96, 1581--6, (reported in the
NEWS AND VIEWS column of Nature, 337, 600).
“Can the Dark Matter be Million Solar Mass Objects?”, with H.-W. Rix,
Ap. J. Lett., 417, L1--4.
“The Formation of Extreme Dwarf Galaxies", Ap. J. Lett., 356, L43--6.
“The Dissolution of Dark Halos in Clusters of Galaxies", with B. Moore and N. Katz, Ap. J., 457,
455.
“Cosmological N-body Simulation”, G. Lake, N. Katz, T. Quinn and J. Stadel, Proceedings of the
Seventh SIAM Conference on Parallel Processing for Scientific Computing, p. 307.
“Galaxy Harassment and the Evolution of Clusters of Galaxies”, with B. Moore, N. Katz, A.
Dressler and A. Oemler, Jr, Nature, 379, 613 (reported in News and Views, Science News, CNN,
AP News wire, etc.)
“From Sir Isaac to the Sloan Survey---Calculating the Structure and Chaos Owing to Gravity in the
Universe”, G. Lake, T. Quinn and D. Richardson, Proceedings of the Eighth Annual ACM-SIAM
Symposium on Discrete Algorithms, p. 1.
Selected Popular Scientific Articles:
“Cosmology of the Local Group”, Sky & Telescope, 84, 613, Dec. 1992.
“Understanding the Hubble Sequence", Sky & Telescope, 83, 515, May 1992.
“Galaxies, Formation”, in Encyclopedia of Astronomy and Astrophysics, ed. S. Maran, (Cambridge:
Cambridge University Press), pp. 246--8.
“Dwarfs as Massive as Giants”, OMNI, 7, 168, 1984.
Edward D. Lazowska
Education
Ph.D. in Computer Science, University of Toronto, 1977.
M.S. in Computer Science, University of Toronto, 1974.
A.B. in Computer Science (independent concentration), Brown University, 1972.
Recent Employment
University of Washington, Department of Computer Science & Engineering
Chair, 1993-
Professor, 1986-
Associate Professor, 1982-86
Assistant Professor, 1977-82
Research Interests
Computer systems: modeling and analysis, design and implementation, distributed and parallel systems.
Representative Recent Awards
University of Washington Annual Faculty Lecturer, 1996.
Fellow of the Institute of Electrical and Electronics Engineers, 1996.
Fellow of the Association for Computing Machinery, 1995.
Award paper, ACM SIGCOMM '93 Symposium.
Award paper, 1993 Machnix Workshop.
Award paper, 13th ACM Symposium on Operating Systems Principles (1991).
Award paper, 12th ACM Symposium on Operating Systems Principles (1989).
Award paper, 1989 ACM SIGMETRICS Conference on Measurement and Modeling of Computer Systems.
Award paper, 1985 ACM SIGMETRICS Conference on Measurement and Modeling of Computer Systems.
Representative Recent Grants
National Science Foundation, 1996-98. Co-Principal Investigator (with S. Corbato, T. Gray, and C. Stubbs), Grant No.
NCR-9617039: vBNS Connectivity for the University of Washington, $904,000 (including 60% UW contribution).
Intel Corporation, 1996. Co-Principal Investigator (with B. Bershad, M. Soma, and G. Zick), Enhancing the Engineering
Curriculum on the PC Platform, $925,000 (100% allowance on equipment).
Intel Corporation, 1995. Co-Principal Investigator (with B. Bershad), Research and Educational Computing Infrastructure,
$200,000 (100% allowance on equipment).
IBM Corporation, 1994-95. Co-Principal Investigator (with B. Bershad), System Structure for Advanced Processors,
$750,000 (100% allowance on equipment).
Intel Corporation, 1993-94. Co-Principal Investigator (with G. Zick), Collaborative Research in High Performance
Computing, $761,000 (100% allowance on equipment).
National Science Foundation, 1992-97. Co-Principal Investigator (with H. Levy and J. Zahorjan), Grant No. CCR-9200832:
System Support for High Performance Computing, $1,984,000 (including 20% UW contribution).
National Science Foundation, 1992-96. Co-Principal Investigator (with R. Anderson, A. Borning, T. DeRose, H. Levy, D.
Notkin, L. Snyder, S. Tanimoto, and J. Zahorjan), Grant No. CDA-9123308: High Performance Parallel/Distributed
Computing (II Program), $1,765,000 (including 25% UW contribution).
National Aeronautics and Space Administration, 1992-95. Investigator (with L. Adams, R. Anderson, J. Bardeen, R.
Carlberg, C. Hogan, G. Lake (PI), W. Petersen, and L. Snyder), Large Scale Structure and Galaxy Formation,
$1,549,000.
Digital Equipment Corporation, 1990-93. Co-Principal Investigator (with H. Levy), Research Agreement #1076: Operating
System Support for Contemporary Multi-Computers, $960,000 (80% allowance on $1,200,000 in equipment).
National Science Foundation, 1989-92. Co-Principal Investigator (with H. Levy), Grant No. CCR-8907666: Amber:
Programming Support for Networks of Multiprocessors, $343,000.
National Science Foundation, 1987-92. Project Director and Co-Principal Investigator (with J.-L. Baer, H. Levy, L. Snyder,
W. Ruzzo, S Tanimoto, and J. Zahorjan), Grant No. CCR-8619663: Effective Use of Parallel Computing (CER
Program), $4,808,000 (including 25% UW contribution).
Representative Recent Professional Activities
Computer Science and Telecommunications Board of the National Research Council, 1996-.
National Science Foundation Advisory Committee for Computer and Information Science and Engineering, 1995-.
Technical Advisory Board, Microsoft Research, 1991-.
Board of Directors, Computing Research Association, 1992-; Chair, Government Affairs Committee.
National Research Council Panel to Review the Multi-Agency HPCC Program ("Brooks/Sutherland Committee"), 1994-95.
Board of Directors, Data I/O Corporation, 1996-.
State of Washington Information Services Board, 1995-.
Board of Directors, Washington Software and Digital Media Alliance, 1996-.
Standing advisory committees for UC Berkeley Dept. of EECS, Stanford Univ. CS Dept., Univ. of Virginia Dept. of CS,
Hong Kong Univ. of Science & Technology Dept. of CS, DoE Pacific Northwest National Laboratory Molecular
Science Computing Facility.
ACM A.M. Turing Award Committee, 1996-2000.
Representative Recent Departmental and University Activities
Chair, Department of Computer Science & Engineering, 1993-.
Chair, University Advisory Committee for Academic Technology, 1990-.
Committee on the Future of the Graduate School of Library and Information Science, 1996.
Committee on the Deanship of the College of Arts and Sciences, 1994.
Chair, Committee to Review the Proposed Ph.D. Program in Molecular Biotechnology, 1994.
Committee to Review the Performance of the Dean of Engineering, 1992-93.
Student Supervision
18 Ph.D. completed, 21 M.S. completed. Recent Ph.D. students:
John K. Bennett, 1988 (Rice University)
Haim E. Mizrahi, 1988 (co-supervised with J.-L. Baer) (Raphael (Israel))
David B. Wagner, 1989 (University of Colorado -> Principia Consulting)
Brian N. Bershad, 1990 (co-supervised with H. Levy) (Carnegie Mellon University -> University of Washington) (NSF
PYI Award, NSF PFF Award, and ONR YI Award recipient)
Yi-Bing Lin, 1990 (Bell Communications Research -> National Chiao-Tung University (Taiwan))
Mark S. Squillante, 1990 (IBM T.J. Watson Research Center)
Sung K. Chung, 1990 (co-supervised with D. Notkin) (IBM T.J. Watson Research Center (postdoc))
Thomas E. Anderson, 1991 (co-supervised with H. Levy) (University of California, Berkeley) (NSF PYI Award, NSF
PFF Award, and Sloan Research Fellowship recipient)
B. Clifford Neuman, 1992 (USC Information Sciences Institute)
Edward W. Felten, 1993 (co-supervised with J. Zahorjan) (Princeton University) (NSF YI Award recipient)
Chandramohan A. Thekkath, 1994 (co-supervised with H. Levy) (DEC Systems Research Center)
Robert Bedichek, 1994 (co-supervised with H. Levy) (MIT (postdoc) -> Transmeta Corp.)
Michael Rabinovich, 1994 (AT&T Bell Laboratories)
Jeffrey S. Chase, 1995 (co-supervised with H. Levy) (Duke University)
Dylan J. McNamee, 1996 (co-supervised with H. Levy) (Oregon Graduate Institute)
Representative Publications
E. Lazowska, J. Zahorjan, G. Graham, and K. Sevcik. Quantitative System Performance: Computer System Analysis Using
Queueing Network Models. Prentice-Hall, 1984.
D. Eager, E. Lazowska, and J. Zahorjan. Adaptive Load Sharing in Homogeneous Distributed Systems. IEEE Trans. on
Software Engr. SE-12,4 (May 1986).
T. Anderson, E. Lazowska, and H. Levy. The Performance Implications of Thread Management Alternatives for
Shared-Memory Multiprocessors. IEEE Trans. on Computers 38,12 (Dec. 1989). (Award paper, 1989 ACM
SIGMETRICS Conf.)
J. Chase, F. Amador, E. Lazowska, H. Levy, and R. Littlefield. The Amber System: Parallel Programming on a Network of
Multiprocessors. Proc. 12th ACM Symp. on Operating Systems Principles (Dec. 1989).
B. Bershad, T. Anderson, E. Lazowska, and H. Levy. Lightweight Remote Procedure Call. ACM Trans. on Computer
Systems 8,1 (Feb. 1990). (Award paper, 12th ACM Symp. on Operating Systems Principles.)
T. Anderson and E. Lazowska. Quartz: A Tool for Tuning Parallel Program Performance. Proc. ACM SIGMETRICS Conf.
on Measurement and Modeling of Computer Systems (May 1990).
T. Anderson, H. Levy, B. Bershad, and E. Lazowska. The Interaction of Architecture and Operating System Design. Proc.
4th International Conf. On Architectural Support for Programming Languages and Operating Systems (April 1991).
B. Bershad, T. Anderson, E. Lazowska, and H. Levy. User-Level Interprocess Communication for Shared Memory
Multiprocessors. ACM Trans. on Computer Systems 9,2 (May 1991).
T. Anderson, B. Bershad, E. Lazowska, and H. Levy. Scheduler Activations: Effective Kernel Support for the User-Level
Management of Parallelism. ACM Trans. on Computer Systems 10,1 (Feb. 1992). (Award paper, 13th ACM Symp. on
Operating Systems Principles.)
C. Thekkath, T. Nguyen, E. Moy, and E. Lazowska. Implementing Network Protocols at User Level. IEEE/ACM Trans. on
Networking 1,5 (Oct. 1993). (Award paper, ACM SIGCOMM '93 Symp.)
M. Vernon, E. Lazowska, and S. Personick, eds. R&D for the NII: Technical Challenges. EDUCOM, May 1994.
C. Thekkath, H. Levy, and E. Lazowska. Separating Data and Control Transfer in Distributed Operating Systems. Proc. 6th
International Conf. On Architectural Support for Programming Languages and Operating Systems (Oct. 1994).
J. Chase, H. Levy, M. Feeley, and E. Lazowska. Sharing and Protection in a Single Address Space Operating System. ACM
Trans. on Computer Systems 12,4 (Nov. 1994).
Recent Invited Presentations
30 in the past five years, including 20 Distinguished Lecturer Series, Keynote Speaker, or other special event
Siva Bala Narayanan graduated from the University of Washington in March 1996 with a Ph.D. in
Electrical Engineering. Since then, he has been with the Department of Medicine at the UW School
of Medicine. His research interests are signal and image processing, image compression, telemedicine
and medical informatics. He is the co-inventor of the CABINET system, a UW technology for
generating and sharing angiographic images over a wide-area network. The CABINET system, now
being licensed to a start-up venture, is currently in beta testing and is awaiting FDA approval
for commercial use.
Arun K. Somani is a full professor of electrical engineering and computer science and engineering
at the University of Washington. His research spans parallel computer system
design, fault-tolerant computing, and interconnection structures. He
directs the Dependable Parallel Computing and Networks Laboratory at the UW Department of
Electrical Engineering. He is the chief architect of the “Proteus” system, a very large-scale
multi-computer system with a very high-speed, high-bandwidth communication network, funded by
the Naval Coastal System Center.
Douglas K. Stewart is a professor of Medicine at the UW School of Medicine. After his medical
training at Harvard University, Doug came to the University of Washington in 1973 and began
working in the then newly emerging sub-specialty of coronary angiography. For the past 19 years,
he has been the Director of the Heart Catheterization Lab in the University Medical Center. During
his tenure at the University, Dr. Stewart has become respected nation-wide as a leader in the
development and implementation of angiography techniques. He is a co-inventor of the CABINET
technology.
Minoru Taya (Professor of Mechanical Engineering, Materials Science and Engineering)
Dr. Taya received his Ph.D., 1977 in theoretical and applied mechanics from Northwestern
University. He taught at the University of Delaware during 1978-1985 in the area of composite
engineering and at Tohoku University in 1989-1992 in the area of materials processing. He spent his
sabbatical leave at the University of Oxford, Riso National Laboratory, and the University of Tokyo.
He has served as the chair of ASME Materials Div. and currently is the chair of the Electronic
Materials Committee of the ASME Materials Division. Dr. Taya is a Fellow of ASME, serves as associate
editor for the J. Applied Mechanics and Materials Science and Eng., and has authored and edited five books in
the areas of composites and electronic packaging. His current interests are the design and processing
of smart materials and structures, and electronic materials (conductive adhesives, piezoelectric
composites).
Leung Tsang (Professor, Electrical Engineering)
Leung Tsang received his S.B., S.M., and Ph.D. degrees from MIT in 1971, 1973, and 1976,
respectively. He has been a Professor of Electrical Engineering at the University of Washington
since 1986. He was a faculty member at Texas A&M University between 1980 and 1983. He is a
co-author, with J. A. Kong and R. Shin, of the book Theory of Microwave Remote Sensing
published by Wiley-Interscience in 1985. He is a Fellow of IEEE and OSA. Since 1996, he has been
the Editor of the IEEE Transactions on Geoscience and Remote Sensing.
Gregory L. Zick
EDUCATION
B.S., Electrical Engineering, University of Illinois, 1970
M.S., Biomedical Engineering, University of Michigan, 1972
Ph.D., Biomedical Engineering, University of Michigan, 1974
EXPERIENCE
Chairman, University of Washington, Department of Electrical Engineering, 1993-present.
Director, Center for Imaging Systems Optimization, University of Washington, College of Engineering, 1990-present.
Associate Dean for Computing, University of Washington, College of Engineering, 1986-1993.
Program Manager, Olympus Grant, University of Washington, College of Engineering, 1984-87.
Full Professor, University of Washington, Department of Electrical Engineering, 1983-present.
Adjunct Professor, University of Washington, Department of Computer Science, 1979-present.
Associate Professor, University of Washington, Department of Electrical Engineering, 1978-83.
Assistant Professor, University of Washington, Department of Electrical Engineering, 1974-78.
CONSULTING
NeoPath Corporation, Intranet Design, 1996-97.
IBM, Higher education Internet servers, 1993-96.
ESCO Corporation, Re-engineering management and operational computer systems, 1995.
NeoPath Corporation, Real-time computer systems, 1992-93.
IBM ACIS, Information exchange and retrieval systems, 1989-93.
IBM Palo Alto Scientific Center, Heterogeneous database development, 1987-89.
IBM Palo Alto Scientific Center, Expert database systems, 1985-87.
PATENTS
Scene Decomposition of MPEG Video. HC Liu and G Zick, U.S. Patent filed June 1995.
Automatic Indexing of Cine-Angiograms. G. Zick, HC Liu, and F. Sheehan, U.S. Patent Serial No. 5,553,085, issued
February, 1996.
Solid State Reference Electrode. G Zick and SH Saulson, U.S. Patent No. 4,450,842, issued May 29, 1984.
Oxygen Sensing Electrode. G Zick, U.S. Patent No. 4,312,322, issued January 26, 1982.
Apparatus for Oxygen Partial Pressure Measurement. G Zick, U.S. Patent No. 4,269,684, issued May 26, 1981.
SELECTED PUBLICATIONS
HC Liu and GL Zick, "Scene Adaptive MPEG Encoding Algorithm Using the P-picture Based Analysis," IEEE
International Conference on Multimedia Computing and Systems, June 1996.
HC Liu and GL Zick, "Automated Determination of Scene Changes in MPEG Compressed Video," Proceedings of
ISCAS -IEEE Symposium on Circuits & Systems, April 1995.
HC Liu, F Sheehan, and GL Zick, "Image Processing for Indexing of Cine-Angiograms," Proceedings of SPIE Medical
Imaging V Conference, Vol. 2435, Feb. 1995.
HC Liu and GL Zick, "Scene Decomposition of MPEG Compressed Video," SPIE/IS&T Symposium on Electronic
Imaging Science and Technology: Digital Video Compression: Algorithms and Technologies, Vol. 2419, February
1995.
G Miller and G Zick, "The Next Step for Engineering Education: Collaborative Curriculum Development," Directorate
for Education & Human Resources, Division of Undergraduate Education "Project Impact" Conference,
May 1994.
C Yamashita and G Zick, "Prototypes for Engineering Education Exchange," Proceedings of Technology Based
Engineering Education Consortium Conference, November 1993.
DJ Dailey, K Eno, GL Zick, and J Brinkley, "A Network Model for Wide Area Access to Structural Information,"
Proceedings of 17th Annual Symposium on Computer Applications in Medical Care, pp. 497-501, 1993.
C Yamashita and GL Zick, "Combined relational and textual retrieval in a medical image archive," Proceedings of SPIE
Medical Imaging Conference, Vol. 1899-1993, pp. 528-535, February 1993.
AH Rowberg and GL Zick, "PACS, Clinical Evaluation and Future Conceptual Design," Integrated Diagnostic Imaging:
Digital PACS in Medicine, pp. 77-99, 1992.
DR Haynor, GL Zick, MB Heritage, and Y Kim, "A Layered Approach to Workstation Design for Medical Image
Viewing," Proceedings of SPIE Medical Imaging VI Conference, pp. 439-448, 1992.
D Benson and GL Zick, "Spatial and Symbolic Queries for 3-D Image Data," Proceedings of SPIE/IS&T Symposium
on Electronic Imaging Science and Technology, Vol. 1662, pp. 134-145, February 1992.
D Benson and GL Zick, "Symbolic and Spatial Database for Structural Biology," Proceedings of OOPSLA '91
Conference, pp. 329-339, October 1991.
D Benson and G Zick, "Obtaining Accuracy and Precision in Spatial Search," Technical report, DEL-91-01, Department
of Electrical Engineering, Univ. of WA, Jan. 1991.
GL Zick, L Yapp, E Lim, and C Yamashita, "Multibase: An Environment for Data Transfer and SQL-Operations
between Autonomous Databases in a Heterogeneous System," Technical report, June, 1989.
K Williams, D Benson, and GL Zick, "LANA - An Expert Database System for local area Network Design," Technical
report, September, 1987.
JL Baer, SC Kwan, GL Zick, and T Snyder, "Parallel Tag – Distribution Sort," Proceedings of 1985 International
Conference on Parallel Processing, IEEE Press 1985.
J Vanaken and GL Zick, "The Expression Processor: A Pipelined, Multiple-Processor Architecture," IEEE Transactions
on Computers, C-30, 8, pp. 525-536, 1981.