Salt Lake City plays host to SC12. Image courtesy Kla4067, Flickr.
This week, the iSGTW team is
in Salt Lake City for SC12: 'The International Conference for High Performance Computing,
Networking, Storage and Analysis'. The event is undoubtedly one of the highlights
of the industry calendar and is expected to attract over 10,000 attendees. Of course,
organizing any event of this size presents serious logistical challenges,
but when attendees are demonstrating some of the latest innovations in high
performance computing applications, those challenges take on a whole new
level of difficulty, particularly when it comes to networking.
The solution? It's simple
really: just build one of the fastest networks in the world for the week (they
don't do things by half measures here). The network, called SCinet, has a huge
790 gigabits per second of capacity and is the result of hard work and contributions
from many government, research, education, and corporate collaborators who have
volunteered time, equipment, and expertise to make the network a success.
Over 100 engineers have volunteered their time over the past year to plan and
build SCinet, using nearly $28 million worth of donated equipment. SCinet worked in
partnership with the Utah Education Network and the University of Utah to
deploy the local infrastructure needed to support the network, including
acquiring access to miles of fiber-optic cable in the Salt Lake metropolitan area.
"SCinet is the primary platform for SC exhibitors to demonstrate the most cutting edge high performance computing applications and collaborations. We support their requirements by building a sophisticated on-site network that links the exhibit floor to the largest and fastest research networks around the world," says Linda Winkler, senior network engineer at Argonne National Laboratory and chair of SCinet for SC12.
"SCinet is the primary platform for SC exhibitors to demonstrate the most cutting edge high performance computing applications and collaborations. We support their requirements by building a sophisticated on-site network that links the exhibit floor to the largest and fastest research networks around the world," says Linda Winkler, senior network engineer at Argonne National Laboratory and chair of SCinet for SC12.
"As science continues
to demand more data intensive and distributed computing – networks play an
important role. SCinet allows the networking community to work closely with
scientists to show researchers at SC firsthand how advanced network
technologies can help accelerate science. Unlike typical Internet traffic,
scientific workflows tend to demand high capacity network links for long
duration large data flows. The SCinet infrastructure was architected to meet these
demanding requirements." This year, SCinet is also providing an
experimental testbed called the SCinet Research Sandbox (SRS), which allows
researchers to showcase disruptive network research using emerging technologies
like 100-gigabit-per-second circuits as well as OpenFlow technology.
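For readers unfamiliar with OpenFlow, the sketch below shows roughly what it looks like to program a switch with it. This is a minimal illustration, not code from any SRS demonstration: it assumes the open-source Ryu controller framework and OpenFlow 1.3, and the destination address and output port are hypothetical values chosen for the example.

```python
# Minimal sketch of an OpenFlow controller app using the Ryu framework.
# (Illustrative only; the article does not say which controller, OpenFlow
# version, addresses, or ports the SRS demos actually used.)
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class HighThroughputForwarder(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        datapath = ev.msg.datapath
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser

        # Match IPv4 traffic headed for a (hypothetical) data-transfer node
        # and pin it to a dedicated high-capacity output port.
        match = parser.OFPMatch(eth_type=0x0800, ipv4_dst='10.0.0.42')
        actions = [parser.OFPActionOutput(2)]  # port 2: the fast uplink
        inst = [parser.OFPInstructionActions(
            ofproto.OFPIT_APPLY_ACTIONS, actions)]

        # Install the flow rule on the switch.
        mod = parser.OFPFlowMod(datapath=datapath, priority=100,
                                match=match, instructions=inst)
        datapath.send_msg(mod)
```

The appeal for research networks is that forwarding decisions like this one are made in software by the controller, so an experiment can steer large science flows onto dedicated paths without touching the switch's built-in routing logic.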
"In addition to supporting
the extreme demands of the HPC-based demonstrations that have become the
trademark of the conference, SCinet also seeks to foster and highlight
developments in network research that will be necessary to support the
next generation of science applications," says Brian Tierney, SRS co-chair for
SC12 and head of ESnet's Advanced Network Technologies Group. "Both 100 Gbps
networking and OpenFlow have become some of the most influential networking
technologies of this decade. SRS allows the community to showcase innovations
on these platforms while they are still in their infancy, demonstrating the
impact they may have on the entire HPC community in the future."
One of the seven projects
selected as part of the SRS program this year is a demonstration of how data produced
by the Large Hadron Collider (LHC) at CERN is analyzed using the Worldwide LHC Computing
Grid (WLCG). The session, entitled 'Efficient LHC Data Distribution across
100Gbps Networks', will show how state-of-the-art data-movement tools enable
high-throughput distribution of the approximately 25 petabytes of data the LHC
produces annually. The demo will interconnect three major WLCG Tier-2 computing
sites and the SC12 show floor using 100-gigabit-per-second technology.
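To put those figures in perspective, here is a quick back-of-envelope calculation (ours, not the demo's) of what 25 petabytes per year implies for network capacity; the 80% link-efficiency figure is an illustrative assumption.

```python
# Back-of-envelope arithmetic on LHC-scale data movement.
# The 25 PB/year and 790 Gbps figures come from the article;
# the 80% efficiency is an assumption for illustration.

PETABYTE_BITS = 1e15 * 8           # bits in one petabyte (decimal units)
SECONDS_PER_YEAR = 365 * 24 * 3600

annual_volume_pb = 25              # ~25 PB produced by the LHC per year
sustained_gbps = annual_volume_pb * PETABYTE_BITS / SECONDS_PER_YEAR / 1e9
print(f"Average sustained rate: {sustained_gbps:.1f} Gbps")   # ~6.3 Gbps

# Moving a single 1 PB dataset over a 100 Gbps link at 80% efficiency:
link_gbps, efficiency = 100, 0.8
hours = PETABYTE_BITS / (link_gbps * 1e9 * efficiency) / 3600
print(f"1 PB over 100 Gbps at 80% efficiency: {hours:.1f} hours")  # ~27.8 h

# SCinet's 790 Gbps aggregate could, in principle, carry a full year of
# LHC output in a matter of days:
scinet_gbps = 790
days = annual_volume_pb * PETABYTE_BITS / (scinet_gbps * 1e9) / 86400
print(f"25 PB over 790 Gbps: {days:.1f} days")                 # ~2.9 days
```

The point of the demo, of course, is that real transfers are bursty rather than perfectly sustained, which is exactly why dedicated 100 Gbps circuits and efficient data-movement tools matter.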
More information on this and the other projects selected for
demonstrations on the SRS is available on the SC12 website.