Project Name Particle Physics Data Grid
Application Description High Speed WAN-Based Storage and Retrieval of High Energy Physics Data
PIs Harvey B. Newman (Harvey.Newman@cern.ch), Richard Mount (Richard.Mount@slac.stanford.edu)
Sites Involved
Site | ISP/Network | Application | System components
ANL | ESnet (OC12) / MREN (OC3) | HEP simulation, analysis & visualization | ssh, ftp, Globus middleware
BNL | ESnet (OC3) | NP simulation, analysis & visualization |
CalTech | NTON / CalREN 2 (4*OC3) / ESnet (T1) | HEP simulation, analysis & visualization | GIOD, SRB
FNAL | ESnet (OC3) / MREN (OC3) | HEP simulation, analysis & visualization | LSF, SAM, Enstore, pnfs
JLAB | ESnet (T3) | NP simulation, analysis & visualization | LSF, OSM
LBNL | NTON / ESnet (OC12) / CalREN 2 (OC12) | HEP & NP simulation, analysis & visualization, data replication | LSF, STACS
SDSC | NTON / CalREN 2 (OC12) | | SRB, MCAT
SLAC | NTON / ESnet (T3=>OC3) / CalREN 2 (OC3) | HEP simulation, analysis & visualization | LSF, ssh, Objectivity
U. Wisconsin | MREN (?) | | Condor
Networks Used ESnet, NTON, MREN, CalREN 2
Bandwidth Requirements
  • 800+ Mbps (100 MBytes/sec) to access hundreds of TBytes today, rising to hundreds of PBytes over the next decade
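As a rough illustration of what the stated rate implies (dataset sizes taken from the figures above, decimal units assumed), sustaining 100 MBytes/sec means moving a 100 TByte dataset takes on the order of ten days:

```python
# Back-of-the-envelope transfer time at the stated 100 MBytes/sec.
# Sizes come from the requirement above; decimal (SI) units assumed.
rate_bytes_per_sec = 100e6      # 100 MBytes/sec ~= 800 Mbps
dataset_bytes = 100e12          # 100 TBytes

seconds = dataset_bytes / rate_bytes_per_sec
days = seconds / 86400
print(f"{seconds:.0f} s  (~{days:.1f} days)")   # 1000000 s, about 11.6 days
```

At the PByte scale projected for the next decade, the same arithmetic gives years of continuous transfer, which is why sustained wide-area throughput (not latency) dominates the requirements.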
Latency Requirements none
QoS Requirements (bandwidth, jitter, etc.) Need a QoS testbed network for measuring the impact of bulk load (e.g. FTP, TTCP, NetPerf, both UDP & TCP) on interactive performance (e.g. jitter and VoIP), and the impact of various QoS mechanisms (e.g. DiffServ) and configurations. QoS is needed so that bulk flows can use all available bandwidth while interactive applications still run acceptably.
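The bulk-flow measurements mentioned above (TTCP/NetPerf-style memory-to-memory TCP tests) amount to timing how fast one endpoint can push bytes to another. A minimal loopback sketch of the idea, not the actual code of either tool; a real measurement would run the two halves on hosts across the WAN:

```python
# Minimal TTCP/NetPerf-style memory-to-memory TCP throughput probe.
# Sender and receiver run over loopback here purely for illustration.
import socket
import threading
import time

PAYLOAD = b"x" * 65536          # 64 KB per write
TOTAL = 64 * 1024 * 1024        # move 64 MB in total

def receiver(srv, result):
    # Accept one connection and drain it, counting bytes received.
    conn, _ = srv.accept()
    got = 0
    while True:
        chunk = conn.recv(65536)
        if not chunk:
            break
        got += len(chunk)
    conn.close()
    result["bytes"] = got

srv = socket.socket()
srv.bind(("127.0.0.1", 0))      # ephemeral port
srv.listen(1)
result = {}
t = threading.Thread(target=receiver, args=(srv, result))
t.start()

cli = socket.socket()
cli.connect(("127.0.0.1", srv.getsockname()[1]))
start = time.time()
sent = 0
while sent < TOTAL:
    cli.sendall(PAYLOAD)
    sent += len(PAYLOAD)
cli.close()                     # EOF tells the receiver to stop
t.join()
elapsed = time.time() - start
srv.close()

mbytes_per_sec = result["bytes"] / elapsed / 1e6
print(f"moved {result['bytes']} bytes in {elapsed:.2f} s "
      f"({mbytes_per_sec:.0f} MBytes/sec on loopback)")
```

Running the same probe with and without competing interactive traffic, under different DiffServ configurations, is the kind of measurement the testbed described above would support.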
Other (multicast? multiple streams? etc.) No multicast requirement. Need access to SNMP information in border routers and at critical core locations, as well as the ability to monitor QoS settings in routers and switches.
Systems used and their locations (e.g. HPSS, T3E, PC cluster, Cave, etc.)
Site | Systems used
ANL | Robotic store with 80 TB, MPI, Condor, SRB, SOSA
BNL | Robotic store with 600 TB, ??
CalTech | Robotic store with 600 TB
FNAL | Robotic store with 100 TB, SAM
JLAB | Robotic store with 300 TB, Linux farm (> 100 processors)
LBNL | Robotic store with 600 TB, HPSS/STACS
SDSC | Robotic store with 300 TB uncompressed (500 TB compressed), HPSS/SRB
SLAC | Robotic store with 600 TB, Sun E10K with 64 processors @ 400 MHz, Solaris Sun farm with several hundred nodes, HPSS/OOFS
U. Wisconsin | Condor
WAN POP Contact (1 per site) ANL:Linda Winkler (lwinkler@anl.gov)
BNL: Mike O'Connor (moc@bnl.gov)
CalTech: James Patton (patton@cacr.caltech.edu)
FNAL: Phil DeMar (demar@fnal.gov)
JLAB: Keith Jonak (jonak@cebaf.gov)
LBNL: Ted Sopher (TGSopher@lbl.gov)
SDSC: Margaret Simmons (mls@sdsc.edu)
SLAC: Les Cottrell (cottrell@slac.stanford.edu)
U. Wisconsin: Bill Jensen (wej@doit.wisc.edu)
LAN Contact (1 per site) ANL:Linda Winkler (lwinkler@anl.gov)
BNL: Terry Healey (thealy@bnl.gov)
Caltech: James Patton (patton@cacr.caltech.edu)
FNAL: Phil DeMar (demar@fnal.gov)
JLAB: Keith Jonak (jonak@cebaf.gov)
LBNL: Jason Lee (Jason_Lee@lbl.gov) / Ted Sopher (TGSopher@lbl.gov)
SDSC: Jay Dombrowski (dombrowh@sdsc.edu)
SLAC: Davide Salomoni (salomoni@slac.stanford.edu)
U. Wisconsin: David Parter (dparter@cs.wisc.edu)
Application contacts (persons responsible for making the applications work well in an NGI environment)
ANL: Ian Foster (foster@mcs.anl.gov)
BNL: Bruce Gibbard (gibbard@bnl.gov)
CalTech: Harvey Newman (newman@hep.caltech.edu) & Julian Bunn (julian@cacr.caltech.edu)
FNAL: Vicky White (white@fnal.gov)
JLAB: Chip Watson (watson@jlab.gov)
LBNL: Arie Shoshani (AShoshani@lbl.gov)
SDSC: Reagan Moore (moore@sdsc.edu)
SLAC: Andrew Hanushevsky (abh@slac.stanford.edu)
U. Wisconsin: Miron Livny (miron@cs.wisc.edu)

Feedback