  • A Monitor and Control System for the Synchrotron Radiation Beam Lines at DAΦNE
  • Three synchrotron radiation beam lines have been built on DAΦNE, the Frascati electron-positron accelerator. It is possible to monitor and control all the elements on the beam lines using a modular network-distributed I/O system by National Instruments (FieldPoint) with BridgeVIEW/LabVIEW programs. Two of these beam lines have radiation safety problems, which are solved by two independent and redundant systems using mechanical switches and S7-200 PLCs by Siemens. In this article our solution is described in detail.
  • Classification of Multi-jet Topologies in e^+e^- Collisions Using Multivariate Analysis Methods and Morphological Variables
  • We propose in this paper to use several multivariate analysis methods and a new kind of variable to separate four classes of events produced at LEP2: events with 2 jets, 3 jets, 4 jets, and those having a more abundant jet topology (n jets, n > 4). Neural networks have proven to be more efficient classifiers than the other techniques. The efficiencies and purities achieved with the optimized neural network are on average 1 to 7% higher than those obtained with the other methods.
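The classification idea above can be illustrated with a toy sketch: each event is reduced to a vector of morphological variables, and a classifier maps that vector to one of the four jet-topology classes. Here a simple nearest-centroid rule stands in for the neural network of the paper; the variable names and centroid values are invented for the example.

```python
# Toy multivariate jet-topology classifier (illustrative only; the paper
# uses a trained neural network, and these centroid values are made up).
import math

CENTROIDS = {               # class -> (sphericity-like, thrust-like) centroid
    "2-jet": (0.05, 0.95),
    "3-jet": (0.25, 0.85),
    "4-jet": (0.45, 0.75),
    "n-jet": (0.65, 0.60),
}

def classify(event):
    """Assign an event (a tuple of shape variables) to the nearest class."""
    return min(CENTROIDS, key=lambda c: math.dist(event, CENTROIDS[c]))

print(classify((0.08, 0.93)))  # 2-jet
print(classify((0.60, 0.62)))  # n-jet
```

A real analysis would replace the centroid rule with the trained network and quote efficiency and purity per class, as the abstract describes.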
  • KID-KLOE Integrated Dataflow
  • KLOE is acquiring and analyzing hundreds of terabytes of data, stored as tens of millions of files. In order to simplify access to these data, a URI-based mechanism has been put in place. The KID package is an implementation of that mechanism and is presented in this paper.
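The essence of such a URI-based access layer is that analysis code names data by a logical URI and a resolver maps it to a physical replica. The sketch below is only an illustration of that idea; the "kid://" scheme string, the catalogue contents, and the availability rule are all invented, not KID's actual interface.

```python
# Illustrative URI-based data-access layer in the spirit of KID (all names
# and the catalogue layout are assumptions for the example).
from urllib.parse import urlparse

# Toy replica catalogue: logical path -> physical locations, cheapest first.
CATALOGUE = {
    "/runs/12345/recon.dat": [
        "file:///data/cache/runs/12345/recon.dat",          # local disk cache
        "rfio://castor.example.org/runs/12345/recon.dat",   # mass storage
    ],
}

def resolve(logical_uri, available=lambda url: url.startswith("file://")):
    """Map a logical URI to the first physical replica that is available."""
    parsed = urlparse(logical_uri)
    if parsed.scheme != "kid":
        raise ValueError("not a logical data URI: " + logical_uri)
    for physical in CATALOGUE.get(parsed.path, []):
        if available(physical):
            return physical
    raise FileNotFoundError(logical_uri)

print(resolve("kid://data/runs/12345/recon.dat"))
```

The benefit, as the abstract notes, is that user code never hard-codes where a file lives among tens of millions.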
  • The BaBar Experiment's Distributed Computing Model
  • In order to face the expected increase in statistics between now and 2005, the BaBar experiment at SLAC is evolving its computing model toward a distributed multi-tier system. It is foreseen that data will be spread among Tier-A centers and deleted from the SLAC center. A uniform computing environment is being deployed in the centers, the network bandwidth is continuously being increased, and data distribution tools have been designed in order to reach a transfer rate of ~100 TB of data per year. In parallel, smaller Tier-B and C sites receive subsets of data, presently in Kanga-Root [1] format and later in Objectivity [2] format. GRID tools will be used for remote job submission.
  • The Implementation of the Full ATLAS Detector Simulation Program
  • The ATLAS detector is one of the largest and most sophisticated detectors ever designed. A detailed, flexible and complete simulation program is needed in order to study the characteristics and possible problems of such a challenging apparatus and to answer all arising questions in terms of physics, design optimization, etc. To cope with these needs we are implementing an application based on the simulation framework FADS/Goofy (Framework for ATLAS Detector Simulation / Geant4-based Object-Oriented Folly) in the Geant4 environment. The user's specific code implementation is presented in detail for the different applications implemented so far, from the various components of the ATLAS spectrometer to some particular test-beam facilities. Particular emphasis is put on describing the simulation of the Muon Spectrometer and its subsystems as a test case for the implementation of the whole detector simulation program: the intrinsic complexity of the geometry description of the Muon System is one of the most demanding problems faced. The magnetic field handling, the physics impact on event processing in the presence of backgrounds from different sources, and the implementation of different possible generators (including Pythia) are also discussed.
  • The Athena Data Dictionary and Description Language
  • We have developed a data object description tool suite and service for Athena consisting of: a language grammar based upon an extended proper subset of IDL 2.0; a compiler front end based upon this language grammar, JavaCC, and a Java Reflection API-like interface; and several compiler back ends which meet specific needs in ATLAS, such as automatic generation of object converters and data object scripting interfaces. We present here details of our work and experience to date on the Athena Definition Language and Athena Data Dictionary.
  • Monitoring the BaBar Data Acquisition System
  • The BaBar data acquisition system (DAQ) transports data from the detector front-end electronics to short-term disk storage. A monitoring application (VMON) has been developed to monitor the one hundred and ninety computers in the dataflow system. Performance information for each CPU is collected and multicast across the existing data transport network. The packets are currently collected by a single UNIX workstation and archived. A ROOT-based GUI provides control and displays the DAQ performance in real time. The same GUI is reused to recover archived VMON data. VMON has been deployed and constantly monitors the BaBar dataflow system. It has been used for diagnostics and provides input to models projecting future performance. The application has no measurable impact on data taking, responds instantaneously on the human timescale to requests for information display, and uses only 3% of a 300 MHz Sun Ultra5 CPU.
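The collect-and-multicast scheme above amounts to each node packing a small performance sample into a datagram that any collector on the network can decode. The sketch below shows one way such a packet might look; the field layout is an invented illustration, not the actual VMON wire format.

```python
# Hedged sketch of a VMON-style monitoring packet: a node packs a CPU
# performance sample for multicast, and a collector unpacks it.
# The three-field layout is an assumption made for this example.
import struct

FMT = "!Hff"  # node id, CPU load [0..1], free memory in MB (network byte order)

def pack_sample(node_id, cpu_load, free_mb):
    return struct.pack(FMT, node_id, cpu_load, free_mb)

def unpack_sample(packet):
    node_id, cpu_load, free_mb = struct.unpack(FMT, packet)
    return {"node": node_id, "cpu": cpu_load, "free_mb": free_mb}

# A collector would receive such packets on a multicast UDP socket, e.g. after
#   sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
sample = unpack_sample(pack_sample(42, 0.25, 512.0))
print(sample["node"], sample["cpu"])
```

Keeping the sample this small is what lets such a monitor run with negligible CPU cost on the monitored nodes.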
  • Preliminary Design of the BES-III Trigger System
  • This article describes the preliminary design of the trigger system for the upgraded Beijing Spectrometer at the Beijing Electron Positron Collider, including background studies, event rate estimation, new techniques such as pipelining to be used in the trigger, and the principles and considerations behind the trigger design.
  • The LHC Experiments' Joint Controls Project (JCOP)
  • The development and maintenance of the control systems of the four Large Hadron Collider (LHC) experiments will require a non-negligible amount of resources and effort. In order to minimise the overall effort required, the Joint Controls Project (JCOP) was set up as a collaboration between CERN and the four LHC experiments to find and implement common solutions for the control of the LHC experiments. It is one of the few examples of such a wide collaboration, and therefore the existence of the JCOP project is extremely significant. This paper will give a brief overview of the project, its structure and its history. It will go on to summarise the various sub-projects that have been initiated under the auspices of JCOP, together with their current status. It will highlight that the JCOP general principle is to promote the use of industrial solutions wherever possible. However, this does not rule out the provision of custom solutions when non-standard devices or very large numbers of devices have to be controlled. The paper will also discuss the architecture foreseen by JCOP and where in this architecture the various types of solutions are expected to be used. Finally, although the selection of common industrial and custom solutions is a necessary condition for JCOP to succeed, the use of these solutions in themselves would not necessarily lead to the production of homogeneous control systems. Therefore, the paper will finish with a description of the JCOP Framework, which is being developed to promote the use of these common solutions, to reduce the development effort required by the various experiment development teams, and to help build and integrate control systems which can be more easily maintained.
  • The BABAR Database: Challenges, Trends and Projections
  • The BABAR database, based upon the Objectivity OO database management system, has been in production since early 1999. It has met its initial design requirements, which were to accommodate a 100 Hz event rate from the experiment at a scale of 200 TB per year. However, with increased luminosity and changes in the physics requirements, these requirements have increased significantly for the current running period and will increase again in the future. New capabilities in the underlying ODBMS product, in particular multiple federation and read-only database support, have been incorporated into a new design that is backwards compatible with existing application code while offering scaling into the multi-petabyte size regime. Other optimizations, including the increased use of tightly coupled CORBA servers and an improved awareness of space inefficiencies, are also playing a part in meeting the new scaling requirements. We discuss these optimizations and the prospects for further scaling enhancements to address the longer-term needs of the experiment.
  • The CDF Computing and Analysis System: First Experience
  • The Collider Detector at Fermilab (CDF) collaboration records and analyses proton anti-proton interactions with a center-of-mass energy of 2 TeV at the Tevatron. A new collider run, Run II, of the Tevatron started in April. During its more than two-year duration the CDF experiment expects to record about 1 PetaByte of data. With its multi-purpose detector and a center-of-mass energy at the frontier, the experimental program is large and versatile. The over 500 scientists of CDF will engage in searches for new particles, like the Higgs boson or supersymmetric particles, precision measurements of electroweak parameters, like the mass of the W boson, measurements of top quark parameters, and a large spectrum of B physics. The experiment has taken and analysed data in previous runs. For Run II, however, the computing model was changed to incorporate new methodologies, the file format was switched, and both the data handling and the analysis system were redesigned to cope with the increased demands. This paper (4-036 at CHEP 2001) gives an overview of the CDF Run II computing system with emphasis on areas where the current system does not match initial estimates and projections. For the data handling and analysis system a more detailed description is given.
  • The DØ Data Handling System
  • In this paper we highlight strategies and choices that make the DØ Data Handling system markedly different from many other experiments' systems. We emphasize how far the DØ system has come in innovating and implementing a DØ-specific Data Grid system. We discuss experiences during the first months of detector commissioning and give some future plans for the system.
  • Grid Technologies & Applications: Architecture & Achievements
  • The 18 months since CHEP 2000 have seen significant advances in Grid computing, both within and outside high energy physics. While in early 2000 Grid computing was a novel concept that most CHEP attendees were being exposed to for the first time, we now see considerable consensus on Grid architecture, a solid and widely adopted technology base, major funding initiatives, a wide variety of projects developing applications and technologies, and major deployment projects aimed at creating robust Grid infrastructures. I provide a summary of major developments and trends, focusing on the Globus open source Grid software project and the GriPhyN data grid project.
  • US Grid Projects: PPDG and iVDGL
  • From HEP Computing to Bio-Medical Research and Vice Versa: Technology Transfer and Application Results
  • We present a series of achievements associated with the transfer of simulation technologies to the bio-medical environment. We also show how the novel collaborative organization built around Geant4 has changed the traditional concept of technology transfer between the HEP domain and the bio-medical environment, configuring a two-way interaction.
  • Large Scale Cluster Computing Workshop, Fermilab, IL, May 22nd to 25th, 2001
  • FBSNG - Batch System for Farm Architecture
  • FBSNG [1] is a redesigned version of the Farm Batch System (FBS [1]), which was developed as a batch process management system for off-line Run II data processing at FNAL. FBSNG is designed for UNIX computer farms and is capable of managing up to 1000 nodes in a single farm. FBSNG allows users to start arrays of parallel processes on one or more farm computers. It uses a simplified abstract resource counting method for load balancing between computers. The resource counting approach allows FBSNG to be a simple and flexible tool for farm resource management. FBSNG scheduler features include guaranteed and controllable "fair-share" scheduling. FBSNG is easily portable across different flavors of UNIX. The system has been successfully used at Fermilab, as well as by off-site collaborators, for several years on farms of different sizes and different platforms for off-line data processing, Monte Carlo data generation and other tasks.
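The abstract resource counting mentioned above can be pictured as follows: each node advertises integer counts of named resources, a job requests some counts, and the scheduler places the job on any node with enough of everything. This is a hedged sketch of the idea only; the resource names and the tie-breaking rule ("most free CPU slots") are assumptions, not FBSNG's actual policy.

```python
# Sketch of abstract resource counting for farm load balancing
# (resource names and the selection rule are illustrative assumptions).

def pick_node(nodes, request):
    """nodes: {name: {resource: free_count}}; request: {resource: needed}.
    Return a node that can satisfy every requested count, or None."""
    candidates = [
        name for name, free in nodes.items()
        if all(free.get(r, 0) >= n for r, n in request.items())
    ]
    if not candidates:
        return None
    # Tie-break: prefer the node with the most free CPU slots.
    return max(candidates, key=lambda name: nodes[name].get("cpu", 0))

farm = {"node1": {"cpu": 2, "scratch": 10},
        "node2": {"cpu": 4, "scratch": 1}}
print(pick_node(farm, {"cpu": 1, "scratch": 5}))  # node1 (node2 lacks scratch)
```

Because the scheduler only compares counts, the same mechanism covers CPUs, scratch space, licenses, or anything else an administrator declares, which is what makes the approach flexible.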
  • The CDF Run 2 Offline Computer Farms
  • Run 2 at Fermilab began in March 2001. CDF will collect data at a maximum rate of 20 MByte/sec during the run. The offline reconstruction of these data must keep up with the data-taking rate. This reconstruction occurs on a large PC farm, which must have the capacity for quasi-real-time data reconstruction, for reprocessing of some data, and for generation and processing of Monte Carlo samples. In this paper we will give the design requirements for the farm, describe the hardware and software design used to meet those requirements, describe the early experiences with Run 2 data processing, and discuss future prospects for the farm, including some ideas about Run 2b processing.
  • Lattice QCD Production on a Commodity Cluster at Fermilab
  • Large scale QCD Monte Carlo calculations have typically been performed on either commercial supercomputers or specially built massively parallel computers such as Fermilab's ACPMAPS. Commodity clusters equipped with high performance networking equipment present an attractive alternative, achieving superior performance-to-price ratios and offering clear upgrade paths. We describe the construction and results to date of Fermilab's prototype production cluster, which consists of 80 dual Pentium III systems interconnected with Myrinet networking hardware. We describe software tools and techniques we have developed for operating system installation and administration. We discuss software optimizations using the Pentium's built-in parallel computation facilities (SSE). Finally, we present short and long term plans for the construction of larger facilities.
  • Disk Cloning Program "Dolly+" for System Management of PC Linux Clusters
  • Dolly+ is a Linux application program to clone files and disk partition images from one PC to many others. By using several techniques such as a logical ring connection, multi-threading and pipelining, it achieves high performance and scalability. For example, under typical conditions, installation on a hundred PCs takes almost the same time as on two PCs. Together with Intel PXE and the Red Hat kickstart, automatic and very fast system installation and upgrading can be performed.
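Why a hundred PCs take about as long as two: in a pipelined logical ring, each node forwards chunk k downstream while receiving chunk k+1, so the total time is about (chunks + nodes - 2) chunk-transfer times rather than chunks x nodes. The step-count model below illustrates this scaling; it is our simplification, not Dolly+ code.

```python
# Step-count model of pipelined ring cloning: node 0 holds the image,
# and in each step every node forwards one chunk to its successor.

def ring_clone_steps(n_nodes, n_chunks):
    have = [0] * n_nodes          # chunks received so far by each node
    have[0] = n_chunks            # node 0 is the image server
    steps = 0
    while min(have) < n_chunks:
        # Iterate downstream-first so each chunk moves one hop per step.
        for i in range(n_nodes - 1, 0, -1):
            if have[i - 1] > have[i]:
                have[i] += 1
        steps += 1
    return steps

print(ring_clone_steps(2, 1000))    # 1000 steps
print(ring_clone_steps(100, 1000))  # 1098 steps: only 98 more for 98 extra PCs
```

The pipeline fill cost (one extra step per extra node) is negligible once the image is split into many chunks, which matches the near-constant installation time reported above.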
  • The Linux Farm at the RCF
  • A description of the Linux Farm at the RHIC Computing Facility (RCF) is presented in this paper. The RCF is a dedicated data processing facility for RHIC, which became operational in the summer of 2000 at Brookhaven National Laboratory.
  • Operation and Optimization of a Linux PC Farm for Physics Analysis in the ZEUS Experiment
  • The ZEUS experiment has migrated its reconstruction and analysis farms to a PC-based environment. More than one year of experience has been acquired with successful operation of an analysis farm designed for several hundred users. Specially designed software has been used to provide fast and reliable access to large amounts of data (30 TB in total). After the ongoing upgrade of the HERA luminosity, higher requirements will arise in terms of data storage capacity and throughput rate. The necessity of a bigger disk cache has led to consideration of solutions based on commodity technology. PC-based file servers are being tested as a cost-effective storage system. In this article we present the hardware and software solutions deployed and discuss their performance, scalability and maintenance issues.
  • First Operation of the D0 Run II Reconstruction Farm
  • We report on the first operations of the reconstruction farms for the D0 experiment. Data were read from a tape robot to 50 PCs running Linux, processed, spooled to a central disk buffer for merging, and then written back to the tape robot. The farms are being used successfully to reconstruct the data as they come in. Transfer rates well over the 12.5 MB/sec needed for full data rates have been achieved.
  • After the First Five Years: Central Linux Support at DESY
  • We will describe how Linux is embedded into DESY's UNIX computing, our support concept and policies, the tools used and developed, and the challenges we are facing now that the number of supported PCs is rapidly approaching one thousand.
  • A Performance Measurement and Simulation for a PC-Linux Regional Center Tier-3 System
  • An affordable solution for a Tier-3 Regional Center (RC) is presented in this talk. A simple prototype has been set up for evaluation of the system performance; this prototype can easily be scaled to a bigger one. A set of measurements has been conducted on both computer and network system behaviour in the context of multiple concurrently running jobs. In the talk these measurements are presented and discussed, including effective system and network utilizations and limitations, the bottlenecks identified in different network layouts, and comparisons with simulation.
  • Managing an Ever Increasing Number of Linux PCs at DESY
  • An ever increasing number of computer systems (mainly PCs) requires elaborate management strategies and tools. In this contribution to CHEP'01 we will present and discuss new concepts and developments concerning directory services and asset management. We will in particular report on first experiences with systems currently being implemented.
  • The ALICE Data Challenges
  • Since 1998, the ALICE experiment and the CERN/IT division have jointly executed several large-scale high-throughput distributed computing exercises: the ALICE data challenges. The goals of these regular exercises are to test hardware and software components of the data acquisition and computing systems in realistic conditions and to perform an early integration of the overall ALICE computing infrastructure. This paper reports on the third ALICE Data Challenge (ADC III), which was performed at CERN from January to March 2001. The data used during ADC III were simulated physics raw data of the ALICE TPC, produced with the ALICE simulation program AliRoot. The data acquisition was based on the ALICE online framework called the ALICE Data Acquisition Test Environment (DATE) system. After event building, the data were formatted with the ROOT I/O package and a data catalogue based on MySQL was established. The Mass Storage System used during ADC III is CASTOR. Different software tools have been used to monitor the performance. DATE has demonstrated performance of more than 500 MByte/s. An aggregate data throughput of 85 MByte/s was sustained in CASTOR over several days. The total collected data amounts to 100 TBytes in 100,000 files.
  • Experiences Constructing and Running Large Shared Clusters at CERN
  • The latest steps in the steady evolution of the CERN Computer Centre have been to reduce the multitude of clusters and architectures and to concentrate on commodity hardware. An active RISC decommissioning program has been undertaken to encourage migration to Linux, and a program of merging dedicated experiment clusters into larger shared facilities has been launched. This paper describes these programs and the experiences of running the resulting multi-hundred-node shared Linux clusters.
  • The Terabyte Analysis Machine Project - The Distance Machine: Performance Report
  • The Terabyte Analysis Machine Project is developing hardware and software to analyze Terabyte-scale datasets. The Distance Machine framework provides facilities to flexibly interface application-specific indexing and partitioning algorithms to large scientific databases.
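The payoff of pluggable partitioning is that queries touch only the partitions that can contain matching records. The sketch below shows the idea with a simple range partitioner; the class and method names are invented for illustration and are not the Distance Machine API.

```python
# Illustrative range-partitioned table: inserts route records to a partition
# by key, and range queries scan only the partitions that can match.
import bisect

class RangePartitionedTable:
    def __init__(self, boundaries):
        self.boundaries = boundaries          # sorted split points on the key
        self.partitions = [[] for _ in range(len(boundaries) + 1)]

    def _bucket(self, key):
        return bisect.bisect_right(self.boundaries, key)

    def insert(self, key, record):
        self.partitions[self._bucket(key)].append((key, record))

    def query(self, lo, hi):
        """Scan only the partitions that can contain keys in [lo, hi]."""
        out = []
        for b in range(self._bucket(lo), self._bucket(hi) + 1):
            out.extend(rec for k, rec in self.partitions[b] if lo <= k <= hi)
        return out

t = RangePartitionedTable([10, 20, 30])
for k in (5, 15, 25, 35):
    t.insert(k, "obj%d" % k)
print(t.query(12, 27))  # scans only 2 of the 4 partitions
```

A framework like the one described above would let the application supply its own `_bucket`-style mapping (spatial, temporal, or physics-driven) instead of this fixed range rule.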
  • The Farm Processing System at CDF
  • At Fermilab's CDF farm, a modular and highly scalable software and control system for processing, reprocessing, Monte Carlo generation and many other tasks has been created. The system is called FPS (Farm Processing System). It consists of independent software components and allows modifications to suit other types of processing as well. FPS is accompanied by fully featured monitoring and control interfaces, including web statistics displays and a multi-platform Java control interface that allows easy management and control. The system also features automatic error recovery procedures with early warnings that allow smooth running. A general overview of the software design, along with a description of the features and limitations of the system and its components, will be presented. Run 2 experience with the system will be given as well.
  • BES Physics Analysis Environment Composed of a PC Farm
  • The Study of BES Reconstruction Code on PC Linux and HP-UX
  • This note describes the migration of the BES offline data reconstruction code from the HP-UX to the PC Linux platform. The main changes to the code, and a comparison of the results with those obtained before the modifications, are presented.
  • Fermilab Distributed Monitoring System (NGOP)
  • A Distributed Monitoring System (NGOP) that will scale to the anticipated requirements of Run II computing has been under development at Fermilab. NGOP [1] provides a framework to create Monitoring Agents for monitoring the overall state of computers and the software running on them. Several Monitoring Agents are available within NGOP that are capable of analyzing log files, checking the existence of system daemons, CPU and memory utilization, etc. NGOP also provides customizable graphical hierarchical representations of these monitored systems. NGOP is able to generate events when serious problems have occurred, as well as raise alarms when potential problems have been detected. NGOP allows corrective actions to be performed or notifications to be sent, and provides persistent storage for collected events, alarms and actions. A first implementation of NGOP was recently deployed at Fermilab. This is a fully functional prototype that satisfies most of the existing requirements. For the time being the NGOP prototype is monitoring 512 nodes. During the first few months of running, NGOP has proved to be a useful tool. Multiple problems such as node resets, offline CPUs, and dead system daemons have been detected. NGOP has provided system administrators with information required for better system tuning and configuration. The current state of deployment and future steps to improve the prototype and to implement some new features will be presented.
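The event/alarm distinction above (hard failures versus potential problems) is the core of such an agent, and can be sketched in a few lines. The check names and thresholds below are illustrative assumptions, not NGOP configuration.

```python
# Minimal sketch of an NGOP-style monitoring agent: emit "events" for hard
# failures (e.g. a dead daemon) and "alarms" for potential problems
# (e.g. high CPU load). Thresholds and field names are assumptions.

def check_node(status, cpu_warn=0.90):
    events, alarms = [], []
    for daemon in status.get("required_daemons", []):
        if daemon not in status.get("running", []):
            events.append(("dead_daemon", daemon))       # serious: event
    if status.get("cpu_load", 0.0) > cpu_warn:
        alarms.append(("high_cpu", status["cpu_load"]))  # potential: alarm
    return events, alarms

events, alarms = check_node({
    "required_daemons": ["sshd", "batchd"],
    "running": ["sshd"],
    "cpu_load": 0.95,
})
print(events)  # [('dead_daemon', 'batchd')]
print(alarms)  # [('high_cpu', 0.95)]
```

A framework then routes events to corrective actions or notifications and archives both streams, as the abstract describes.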
  • A New Interlock Design for the TESLA RF System
  • The RF system for TESLA requires a comprehensive interlock system. Usually interlock systems are organized in a hierarchical way. In order to react to different fault conditions in a fast and flexible manner, a non-hierarchical organization seems to be the better solution. At the TESLA Test Facility (TTF) at DESY we will install a non-hierarchical interlock system based on user-designed reprogrammable gate arrays (FPGAs) which incorporate an embedded microcontroller system. This system could later be used for the TESLA linear collider, replacing a strictly hierarchical design.
  • Communication between Trigger/DAQ and DCS in ATLAS
  • Within the ATLAS experiment, Trigger/DAQ and DCS are both logically and physically separated. Nevertheless there is a need to communicate. The initial problem definition and analysis suggested three subsystems that the Trigger/DAQ DCS Communication (DDC) project should support, with the ability to: 1. exchange data between Trigger/DAQ and DCS; 2. send alarm messages from DCS to Trigger/DAQ; 3. issue commands to DCS from Trigger/DAQ. Each subsystem is developed and implemented independently using a common software infrastructure. Among the various subsystems of the ATLAS Trigger/DAQ, the Online is responsible for control and configuration. It is the glue connecting the different systems, such as data flow, level-1 and high-level triggers. The DDC uses the various Online components as an interface point on the Trigger/DAQ side, with the PVSS II SCADA system on the DCS side, and addresses issues such as partitioning, time stamps, event numbers, hierarchy, authorization and security. PVSS II is a commercial product chosen by CERN to be the SCADA system for all LHC experiments. Its API provides full access to its database, which is sufficient to implement the three subsystems of the DDC software. The DDC project adopted the Online Software Process, which recommends a basic software life-cycle: problem statement, analysis, design, implementation and testing. Each phase results in a corresponding document or, in the case of implementation and testing, a piece of code. Inspection and review play a major role in the Online Software Process; the DDC documents have been inspected to detect flaws, resulting in improved quality. A first prototype of the DDC is ready and is foreseen to be used at the test beam during summer 2001.
  • Partitioning, Automation and Error Recovery in the Control and Monitoring System of an LHC Experiment
  • The Joint Controls Project (JCOP) is a collaboration between CERN and the four LHC experiments to find and implement common solutions for their control and monitoring systems. As part of this project, an Architecture Working Group was set up in order to study the requirements and devise an architectural model that would suit the four experiments. Many issues were studied by this working group: alarm handling, access control, hierarchical control, etc. This paper will report on the specific issue of hierarchical control, and in particular on partitioning, automation and error recovery.
  • Upgrade of the Control System for BEPCII
  • The BEPC will increase its luminosity tenfold with an upgrade of both the machine and the detector; this is the BEPCII project. The project will be started at the beginning of 2002 and finished within 3-4 years. In order to reach the goals of BEPCII, a number of new devices will be added to the system, such as superconducting RF cavities, new magnet power supplies and a beam feedback system, and the BEPC control system has to be upgraded. The BEPC control system was built in 1987 and was upgraded in 1994. It is an OpenVMS and CAMAC based system; some equipment is controlled by PCs. We are going to upgrade the existing system with EPICS. Several VME IOCs will be added to the system, with fieldbus and PLCs for the new equipment control. We will keep the existing system in use, such as the CAMAC hardware, PC-based sub-controls and application programs, which will be merged into the EPICS system. Recently, development of an EPICS prototype has been started. For some slow controls, a commercial SCADA product can be chosen as the development tool; we have just finished a prototype with the SCADA product Wizcon. This paper will describe the system design and development issues.
  • Technology Integration in the LHC Experiments Joint Controls Project
  • The development and maintenance of the control systems of the four LHC experiments will require a non-negligible amount of resources and effort. The Joint Controls Project (JCOP) [1] has been set up as a collaboration between CERN and the four LHC experiments to find common solutions for the LHC experiments' control systems. Although the JCOP general principle is to promote the use of industrial solutions wherever possible, custom solutions are still required when non-standard devices or very large numbers of devices have to be controlled. Furthermore, to ease the development and integration of both standard and non-standard devices into the control system, a number of software frameworks are under development. This paper will describe the various solutions being proposed by JCOP, including the Supervisory and Front-End frameworks as well as the various industrial and custom components. In addition, it will describe where these fit into the foreseen JCOP controls architecture. The paper will then highlight the Front-End Framework in more detail.
  • The KLOE Online Calibration System
  • Based on all the features of the KLOE online software, the online calibration system performs calibration quality checking in real time and automatically starts new calibration procedures when needed. A calibration manager process controls the system, implementing the interface to the online system, receiving information from the run control and translating its state transitions to a separate state machine. It acts as a "calibration run controller" and performs failure recovery when requested by a set of process checkers. The core of the system is a multi-threaded OO histogram server that receives histogramming commands from remote processes and operates on local ROOT histograms. A client library and C, Fortran and C++ application interface libraries allow the user to connect and define his own histograms, or read histograms owned by others, using an HBOOK-like interface. Several calibration processes running in parallel in a distributed, multi-platform environment can fill the same histograms, allowing fast external information checks. A monitor thread allows remote browsing for visual inspection. Pre-filtered data are read in non-privileged spy mode from the data acquisition system via the KLOE Integrated Dataflow. The main characteristics of the system are presented.
  • Some Problems of Statistical Analysis in Experiment Proposals
  • Several criteria used by physicists to quantify the ratio of signal to background are compared. An approach for taking into account the uncertainty in the estimates of signal and background is proposed.
  • Go4: Multithreaded Inter-Task Communication with ROOT - Writing Non-blocking GUIs
  • The future GSI Online-Offline-Object-Oriented analysis framework Go4, based on ROOT [CERN, R. Brun et al.], provides a mechanism to monitor and control an analysis at any time. This is achieved by running the GUI and the analysis in different tasks. To control these tasks from one non-blocking GUI, the Go4TaskHandler package was developed. It offers asynchronous inter-task communication via independent channels for commands, data, and status information. Each channel is processed by a dedicated thread and has a buffer queue as interface to the working thread. The threads are controlled by the Go4ThreadManager package, based on the ROOT TThread package. In parallel to the GUI actions, the analysis tasks can display objects like histograms in the GUI. A test GUI was implemented using the Qt widget library (Trolltech Inc.). A Qt-to-ROOT interface has been developed. The Go4 packages may be utilized for any ROOT application that needs to control independent data processing or monitoring tasks from a non-blocking GUI.
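The channel-per-thread scheme with buffer queues described above can be mimicked in a few lines with threads and queues; the sketch below uses Python's standard library rather than ROOT's TThread, and the channel and command names are ours, not Go4's. The key property is that the controlling side never blocks: it posts commands and polls the status channel.

```python
# Sketch of asynchronous inter-task communication via independent channels:
# a command queue into the analysis task and a status queue back out.
import queue
import threading

command_q, status_q = queue.Queue(), queue.Queue()

def analysis_task():
    """Worker loop: consume commands, report results on the status channel."""
    while True:
        cmd = command_q.get()          # command channel (dedicated thread)
        if cmd == "stop":
            break
        status_q.put(("done", cmd))    # status channel back to the GUI

worker = threading.Thread(target=analysis_task)
worker.start()
command_q.put("fill_histogram")        # the "GUI" posts without blocking
command_q.put("stop")
worker.join()
result = status_q.get_nowait()
print(result)  # ('done', 'fill_histogram')
```

In the real framework a third channel carries bulk data (e.g. histograms) so that status and data traffic cannot stall each other.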
  • Update of an Object Oriented Track Reconstruction Model for LHC Experiments
  • In this update report on an Object Oriented (OO) track reconstruction model, which was presented at CHEP'97, CHEP'98 and CHEP 2000, we describe new developments since the beginning of 2000. The OO model for the Kalman filtering method has been designed for high energy physics experiments at high luminosity hadron colliders. It was coded in the C++ programming language, originally for the CMS experiment at the future Large Hadron Collider (LHC) at CERN, and has later been successfully implemented into three different OO computing environments (including the level-2 trigger and offline software systems) of ATLAS (another major experiment at the LHC). For the level-2 trigger software environment, we selectively present some of the latest performance results (e.g. the B-physics event selection for the ATLAS level-2 trigger, robustness study results, etc.). For the offline environment, we present a new 3-D space point package which provides the essential offline input. A major development since CHEP 2000 is the implementation of the OO model into the new OO software framework "Athena" of the ATLAS experiment. The new modularization of this OO package makes the model more flexible and more easily implemented in different software environments. It also provides the potential to handle more complicated realistic situations (e.g. to include the calibration and alignment corrections, etc.). Some general interface issues (e.g. the design of the common track class) of the algorithms with respect to different framework environments have been investigated using this OO package.
  • SND Off-Line Framework 下载全文
  • SND is a spherical non-magnetic detector operating since 1996 at the VEPP-2M electron-positron collider in Novosibirsk. SND is performing an upgrade of its subsystems, electronics, and software for the next run at VEPP-2000. The present Fortran-based offline programs will be replaced with an object-oriented framework, which supports simulation, reconstruction and analysis activities. The new framework exploits the experience obtained in work with the current offline software and supports or extends its essential features. The main framework concept is a module, a basic processing unit consuming some data and producing new data. Every module can be parameterized at run time. A formalized description of the modules is used by the framework sequencer for the selection and ordering of a minimal subset of modules for any given task. Data persistency services are made sufficiently abstract to allow implementation for different persistency technologies. Presently there is an implementation for sequential files with packed data. The framework provides an interface for scripting languages. Together with a custom expression parser this gives support for extensible run-time histogramming. A functional prototype of the framework was implemented in the Python language and proved the concepts of the project. Currently this prototype is being reimplemented in C++.
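The sequencer idea, selecting and ordering a minimal subset of modules from a formal description of what each module consumes and produces, can be sketched as follows (a toy illustration, not the SND code; the module names are invented):

```python
def order_modules(modules, wanted):
    """modules: {name: (inputs, outputs)}. Return a minimal ordered
    subset of modules whose outputs cover `wanted`, pulling in each
    module's own inputs recursively (a tiny stand-in for the
    framework sequencer)."""
    producer = {out: name
                for name, (ins, outs) in modules.items() for out in outs}
    ordered, seen = [], set()

    def need(item):
        mod = producer[item]          # which module makes this data?
        if mod in seen:
            return
        seen.add(mod)
        for dep in modules[mod][0]:   # schedule its inputs first
            need(dep)
        ordered.append(mod)

    for item in wanted:
        need(item)
    return ordered

pipeline = {
    "unpack": ((), ("raw",)),
    "reco":   (("raw",), ("tracks",)),
    "sim":    ((), ("mc",)),
    "ana":    (("tracks",), ("hist",)),
}
sequence = order_modules(pipeline, ["hist"])  # "sim" is never scheduled
```

Asking for the histogram pulls in only the unpacking and reconstruction chain; unrelated modules stay out of the job.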
  • Hidden Adapter 下载全文
  • The need for writing software packages that do not depend explicitly on the server code, and do not need to be modified when new server interfaces arise, led to the development of the hidden adapter pattern. We show how it works, and give an example of how complete decoupling between depended-on and dependent code can be achieved using the hidden adapter pattern.
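A minimal sketch of the pattern (our own illustration, not the authors' code): the client depends only on an abstract interface, and a registry, hidden from the client, picks the right adapter at run time. A new server interface then requires only a new adapter registration, never a change to client code.

```python
class DataSource:
    """Abstract interface: the only thing the client depends on."""
    def read(self):
        raise NotImplementedError

_ADAPTERS = {}

def register_adapter(server_cls, adapter_cls):
    _ADAPTERS[server_cls] = adapter_cls

def adapt(server):
    # The "hidden" part: the client asks for a DataSource and never
    # sees which concrete adapter wraps the server object.
    return _ADAPTERS[type(server)](server)

# Two unrelated server interfaces that the client must not know about:
class OldServer:
    def fetch(self):
        return [1, 2]

class NewServer:
    def get_records(self):
        return (3, 4)

class OldAdapter(DataSource):
    def __init__(self, s): self.s = s
    def read(self): return list(self.s.fetch())

class NewAdapter(DataSource):
    def __init__(self, s): self.s = s
    def read(self): return list(self.s.get_records())

register_adapter(OldServer, OldAdapter)
register_adapter(NewServer, NewAdapter)

def client_sum(server):
    # Client code: written once, never modified for new servers.
    return sum(adapt(server).read())
```

When a third server interface appears, only a third adapter plus one `register_adapter` call is added; `client_sum` is untouched.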
  • Distributed Analysis with Java and Objectivity 下载全文
  • New experiments, including those at the LHC, will require analysis of very large datasets, which are best handled with distributed computation. We present the design and development of a prototype framework using Java and Objectivity. Our framework solves such analysis-specific problems as selecting event samples from large distributed databases, producing variable distributions, and negotiating between multiple analysis service providers. Examples from the successful application of the prototype to the analysis of data from the L3 experiment are also presented.
  • Studies for Optimization of Data Analysis Queries for HEP Using HERA-B Commissioning Data 下载全文
  • In this paper we present an overview of the ongoing studies to build a framework that supports analysis after the reprocessing phase. This framework aims to develop a standard data query language for the HEP community. The related studies have been considering the relational database model as a possible approach, as opposed to the object model. Several optimization and tuning techniques are being used in technologies like DB2 [3], Oracle [5] and ROOT [2], which are simultaneously being evaluated. The experience obtained can be seen as a valuable testbed for the future LHC and at the same time as interesting input for the development of the GRID.
  • BES Monitoring & Displaying System 下载全文
  • The BES Monitoring & Displaying System (BESMDS) is projected to monitor and display the running status of the DAQ and Slow Control systems of BES through the Web for worldwide access. It provides a real-time remote means of monitoring as well as an approach to study the environmental influence upon physics data taking. The system collects real-time data separately from the BES online subsystems via network sockets and stores the data in a database. People can access the system through its web site, which retrieves data on request from the database and can display results in dynamically created images. Its web address is http://besmds.ihep.ac.cn/
  • The CMS Field Mapping Project at the CERN EDMS 下载全文
  • Various representations of the CMS magnetic field maps in the CERN Engineering Data Management System are considered.
  • CATS: a Cellular Automaton for Tracking in Silicon for the HERA-B Vertex Detector 下载全文
  • A track reconstruction program, CATS, based on a cellular automaton has been developed for the vertex detector system of the HERA-B experiment at DESY. The segment model of the cellular automaton used for tracking can be regarded as a local discrete form of the Denby-Peterson neural net. Since 1999 CATS has been used to reconstruct data collected in HERA-B. Results on the tracking performance, the accuracy of the estimates and the computing time are presented.
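The segment model can be sketched in a few lines (a toy one-dimensional illustration, not the CATS implementation): segments join hits in adjacent layers, each segment's counter is set to one more than the best compatible upstream segment (similar slope, shared hit), and the track is read off the longest chain.

```python
def ca_track(layers, tol=0.5):
    """Toy segment-model cellular automaton. `layers` holds hit
    y-positions per detector layer; returns the hit chain of the
    longest compatible segment sequence (the track candidate)."""
    # segs[i]: [hit_a, hit_b, slope, counter] between layers i and i+1
    segs = []
    for i in range(len(layers) - 1):
        segs.append([[a, b, b - a, 1]
                     for a in layers[i] for b in layers[i + 1]])
    # evolve counters layer by layer: a segment inherits 1 + the best
    # counter of an upstream neighbour sharing a hit with similar slope
    for i in range(1, len(segs)):
        for s in segs[i]:
            for t in segs[i - 1]:
                if t[1] == s[0] and abs(t[2] - s[2]) < tol:
                    s[3] = max(s[3], t[3] + 1)
    # follow the best chain backwards from the last layer
    best = max(segs[-1], key=lambda s: s[3])
    track = [best[0], best[1]]
    for i in range(len(segs) - 2, -1, -1):
        prev = [t for t in segs[i]
                if t[1] == track[0] and abs(t[2] - (track[1] - track[0])) < tol]
        if not prev:
            break
        track.insert(0, max(prev, key=lambda s: s[3])[0])
    return track
```

With a straight track plus scattered noise hits, the counters grow only along the consistent chain, so the noise combinations never win.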
  • Ring Recognition Method Based on the Elastic Neural Net 下载全文
  • We propose an application of the elastic neural net to ring recognition in RICH detectors. The method has been developed to find rings distorted due to misalignment of detectors and contaminated by noise. The algorithm was tested on simulated data of the COMPASS RICH-1 detector. The reconstruction efficiency is 99.95% for triple LEPTO events, taking 5 ms per event.
  • Summary of the HEPVis'01 Workshop 下载全文
  • We summarise the HEPVis'01 workshop held in May 2001 at Northeastern University, Boston. General issues such as the architecture and design of frameworks and toolkits are described, as well as the status of a wide range of generic analysis and visualisation tools and experiment-specific systems.
  • Track Reconstruction in the High Rate Environment of the HERA-B Spectrometer 下载全文
  • In this paper a method of track reconstruction developed for the HERA-B main tracking system is discussed. The method, based on a track-following algorithm, is used for track finding in the field-free area and for track propagation through the inhomogeneous magnetic field. The performance of the program for simulated and real data is shown.
  • A Coherent and Non-Invasive Open Analysis Architecture and Framework with Applications in CMS 下载全文
  • The CMS IGUANA project has implemented an open analysis architecture that enables the creation of an integrated analysis environment. In this "analysis desktop" environment a physicist is able to perform most analysis-related tasks, not just the presentation and visualisation steps usually associated with analysis tools. The motivation behind IGUANA's approach is that physics analysis includes much more than just visualisation and data presentation. Many factors contribute to the increasing importance of making analysis and visualisation software an integral part of the experiment's software: object-oriented and ever more advanced data models, the GRID, and automated hierarchical storage management systems, to name just a few. At the same time the analysis toolkits should be modular and non-invasive, to be usable in different contexts within one experiment and generally across experiments. Ideally the analysis environment would appear to be perfectly customised to the experiment and the context, but would mostly consist of generic components. We describe how the IGUANA project is addressing these issues and present both the architecture and examples of how different aspects of analysis appear to the users and the developers.
  • The IGUANA Interactive Graphics Toolkit with Examples from CMS and D0 下载全文
  • IGUANA (Interactive Graphics for User ANAlysis) is a C++ toolkit for developing graphical user interfaces and high performance 2-D and 3-D graphics applications, such as data browsers and detector and event visualisation programs. The IGUANA strategy is to use freely available software (e.g. Qt, SoQt, OpenInventor, OpenGL, HEPVis) and to package and extend it to provide a general-purpose and experiment-independent toolkit. We describe the evaluation and choices of publicly available GUI/graphics software and the additional functionality currently provided by IGUANA. We demonstrate the use of IGUANA with several applications built for CMS and D0.
  • CMS Object-Oriented Analysis 下载全文
  • The CMS OO reconstruction program, ORCA, has been used since 1999 to produce large samples of reconstructed Monte-Carlo events for detector optimization, trigger and physics studies. The events are stored in several Objectivity federations at CERN, in the US, Italy and other countries. To perform their studies physicists use different event samples, ranging from complete datasets of TByte size down to only a few events out of these datasets. We describe the implementation of these requirements in the ORCA software and the way collections of events are accessed for reading, writing or copying.
  • Object Oriented Reconstruction and Particle Identification in the ATLAS Calorimeter 下载全文
  • Reconstruction and subsequent particle identification is a challenge in a complex, high luminosity environment such as that expected in the ATLAS detector at the LHC. The ATLAS software has chosen the object oriented paradigm and has recently migrated many of its software components developed earlier using procedural programming languages. The new software, which emphasizes the separation between algorithms and data objects, has been successfully integrated into the broader ATLAS framework. We present a status report of the reconstruction software, summarizing the experience gained in the migration of several software components. We examine some of the components of the calorimeter software design, which include the simulation of real-time detector effects and the online environment, and the strategies deployed for the identification of particles.
  • Prototype for a Generic Thin-Client Remote Analysis Environment for CMS 下载全文
  • The multi-tiered architecture of the highly-distributed CMS computing systems necessitates a flexible data distribution and analysis environment. We describe a prototype analysis environment which functions efficiently over wide area networks, using a server installed at the Caltech/UCSD Tier 2 prototype to analyze CMS data stored at various locations using a thin client. The analysis environment is based on existing HEP (Anaphe) and CMS (CARF, ORCA, IGUANA) software technology on the server, accessed from a variety of clients. A Java Analysis Studio (JAS, from SLAC) plug-in is being developed as a reference client. The server is operated as a "black box" on the proto-Tier-2 system. ORCA Objectivity databases (e.g. an existing large CMS muon sample) are hosted on the master and slave nodes, and remote clients can request processing of queries across the server nodes and get the histogram results returned and rendered in the client. The server is implemented in pure C++ and uses XML-RPC as a language-neutral transport. This has several benefits, including much better scalability, better integration with CARF/ORCA, and, importantly, it makes the work directly useful to other non-Java general-purpose analysis and presentation tools such as HippoDraw, Lizard, or ROOT.
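The server-side-query idea, where the thin client sends a request and only the small histogram result travels back over XML-RPC, can be sketched with the Python standard library (a toy stand-in for the C++ server; the `histogram` service and its signature are invented for illustration):

```python
import threading
import xmlrpc.client
from xmlrpc.server import SimpleXMLRPCServer

def histogram(values, nbins, lo, hi):
    """Server-side query: bin `values` and return only the counts,
    so the bulky data never crosses the network."""
    counts = [0] * nbins
    width = (hi - lo) / nbins
    for v in values:
        if lo <= v < hi:
            counts[int((v - lo) / width)] += 1
    return counts

# Serve on an ephemeral port in a background thread.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(histogram)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Thin client: language-neutral XML-RPC call, histogram comes back.
client = xmlrpc.client.ServerProxy(f"http://127.0.0.1:{port}/")
counts = client.histogram([0.1, 0.2, 0.6, 0.9], 2, 0.0, 1.0)
```

Because XML-RPC is language-neutral, the same call could come from a Java, C++ or Python client unchanged.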
  • Ensuring Long Time Access to DELPHI Data:The IDEA Project 下载全文
  • The long term accessibility of its data is an important concern of the DELPHI collaboration. It is our assumption that the storage of the data itself will be a minor issue due to progress in storage technologies. DELPHI therefore focuses on a reorganisation of the data, which should provide a flexible and coherent framework for physics analysis in the future. This paper describes the current status of the IDEA (Improved DElphi Analysis) project, which will ensure the usability of DELPHI data for future generations of physicists.
  • The Event Browser: An Intuitive Approach to Browsing BaBar Object Databases 下载全文
  • Providing efficient access to more than 300 TB of experiment data is the responsibility of the BaBar^1 Databases Group. Unlike generic tools, the Event Browser presents users with an abstraction of the BaBar data model. Multithreaded CORBA^2 servers perform database operations using small transactions, in an effort to avoid lock contention issues and provide adequate response times. The GUI client is implemented in Java and can be easily deployed throughout the community in the form of a web applet. The browser allows users to examine collections of related physics events and identify associations between the collections and the physical files in which they reside, helping administrators distribute data to other sites worldwide. This paper discusses the various aspects of the Event Browser, including requirements, design challenges and key features of the current implementation.
  • The BEPCⅡ Data Production and BESⅢ Offline Analysis Software System 下载全文
  • The BES detector has operated for about 12 years, and the BES offline data analysis environment has been developed and upgraded along with developments of the BES hardware and software. The BESⅢ software system will operate for many years, so it should keep up with new software technology and be highly flexible, powerful, stable and easy to maintain. The following points should be taken into account: 1) To benefit the collaboration and enable better exchanges with international HEP experiments, the system should be set up by adopting or referring to the newest software technology from advanced experiments in the world. 2) It should support the hundreds of existing BES software packages and serve both the old experts who are familiar with the BESII software and computing environment and the new members who will benefit from the new system. 3) Most existing BESII packages will be modified or re-designed according to the hardware changes.
  • Neural Computing in High Energy Physics 下载全文
  • Artificial neural networks (ANNs) are now widely and successfully used as tools in high energy physics. The paper covers two aspects. First, mapping ANNs onto the proposed ring and linear systolic arrays provides an efficient implementation of VLSI-based architectures, since in this case all connections among processing elements are local and regular. Second, the algorithmic organization of such structures on the basis of modular algebra is discussed, whose use can provide an essential increase in system throughput.
  • Status of the GAUDI Event-Processing Framework 下载全文
  • The GAUDI architecture and framework are designed to provide a common infrastructure and environment for simulation, filtering, reconstruction and analysis applications. Initially developed for the LHCb experiment, GAUDI has been adopted and extended by the ATLAS experiment and adopted by several other experiments, including GLAST and HARP. We describe the properties and concepts embodied by GAUDI, recent functionality additions, and how the project has evolved from a product developed by a tightly-knit team at a single site into a collaboration between multiple teams at geographically dispersed sites, based loosely on open source concepts. We describe the management infrastructure, as well as how we accommodate experiment-specific extensions and adaptations alongside an experiment-neutral kernel.
  • Adding a Scripting Interface to Gaudi 下载全文
  • Athena, the software framework for ATLAS' offline software, is based on the Gaudi framework from LHCb^1. The processing model of Gaudi is essentially that of a batch-oriented system: a user prepares a file detailing the configuration of which algorithms are to be applied to the input data of a job and the parameter values that control the behavior of each algorithm instance. The framework then reads that file once at the beginning of a job and runs to completion with no further interaction with the user. We have enhanced the processing model to include an interactive mode, where a user can control the event loop of a running job and modify the algorithms and parameters on the fly. We changed only a very small number of Gaudi classes to provide access to parameters from an embedded Python interpreter. No change was made to the Gaudi programming model, i.e., developers need not change anything to make use of this added interface. We present details of the design and implementation of the interactive Python interface for Athena.
  • Abstract Interfaces for Data Analysis: Component Architecture for Data Analysis Tools 下载全文
  • The fast turnover of software technologies, in particular in the domain of interactivity (covering user interfaces and visualisation), makes it difficult for a small group of people to produce complete and polished software tools before the underlying technologies make them obsolete. At the HepVis '99 workshop, a working group was formed to improve the production of software tools for data analysis in HENP. Besides promoting a distributed development organisation, one goal of the group is to systematically design a set of abstract interfaces using modern OO analysis and OO design techniques. An initial domain analysis has come up with several categories (components) found in typical data analysis tools: Histograms, Ntuples, Functions, Vectors, Fitter, Plotter, Analyzer and Controller. Special emphasis was put on reducing the couplings between the categories to a minimum, thus optimising the re-use and maintainability of each component individually. The interfaces have been defined in Java and C++, and implementations exist in the form of libraries and tools using C++ (Anaphe/Lizard, OpenScientist) and Java (Java Analysis Studio). A special implementation aims at accessing the Java libraries (through their abstract interfaces) from C++. This paper gives an overview of the architecture and design of the various components for data analysis as discussed in AIDA.
  • Anaphe: OO Libraries and Tools for Data Analysis 下载全文
  • The Anaphe project is an ongoing effort to provide an object oriented software environment for data analysis in HENP experiments. A range of commercial and public domain libraries is used to cover basic functionalities; on top of these libraries a set of HENP-specific C++ class libraries for histogram management, fitting, plotting and ntuple-like data analysis has been developed. In order to comply with the user requirements for a command-line driven tool, we have chosen to use a scripting language (Python) as the front-end for the data analysis tool. The loose coupling provided by the consistent use of (AIDA-compliant) abstract interfaces for each component, in combination with the use of shared libraries for their implementation, provides easy integration of existing libraries into modern scripting languages and thus allows for rapid application development. This integration is simplified even further using a specialised toolkit (SWIG) to create "shadow classes" for the Python language, which map the definitions of the abstract interfaces almost at a one-to-one level. This paper gives an overview of the architecture and design choices and presents the current status and future developments of the project.
  • The HippoDraw Application and the HippoPlot C++ Toolkit Upon which it is Built 下载全文
  • OO Software and DataModel of AMS Experiment 下载全文
  • The Alpha Magnetic Spectrometer (AMS) is an experiment to search in space for dark matter, missing matter and antimatter, scheduled for installation on the International Space Station (ISS) Alpha. The AMS detector had a precursor flight in June 1998 on board the space shuttle Discovery during STS-91; more than 100M events were collected and analyzed. The detector will have another flight in the fall of 2003, lasting more than three years, on the ISS. The data will be transmitted from the ISS to the NASA Marshall Space Flight Center (Huntsville, Alabama) and then to MIT and CERN for processing and analysis. In this report we describe the AMS software, in particular the conditions database and the data processing software.
  • High Performance RAIT 下载全文
  • The ability to move tens of terabytes in reasonable amounts of time is critical to many High Energy Physics applications. This paper examines the issues of high performance, high reliability tape storage systems, and presents the results of a 2-year ASCI Path Forward program to reliably move 1 GB/s to an archive that can last 20 years. The paper covers the requirements, approach, hardware, application software, interface descriptions, performance, measured reliability and predicted reliability, and also touches on future directions for this research. The current research allows systems to sustain 80 MB/s of uncompressable data per Fibre Channel interface, which is striped out to 8 or more drives. This looks to the application like a single tape drive from both mount and data transfer perspectives. Striping 12 RAIT systems together will provide nearly 1 GB/s to tape. Reliability is provided by a method of adding parity tapes to the data stripes. For example, adding 2 parity tapes to an 8-stripe group will allow any 2 of the 10 tapes to be lost or damaged without loss of information. The reliability of RAIT with 8 stripes and 2 parities exceeds that of mirrored tapes, while RAIT uses 10 tapes instead of the 16 tapes that a mirror would require. The goal of this paper is to convey the applicability of RAIT and when it may be useful in High Energy Physics applications.
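The parity idea can be illustrated with a single XOR parity stripe (our sketch only; tolerating two lost tapes, as RAIT above does, requires a stronger Reed-Solomon-style erasure code, but the one-parity XOR case shows the principle):

```python
def make_parity(stripes):
    """XOR parity block over equal-length data stripes: the byte-wise
    XOR of all stripes, written to a separate parity tape."""
    parity = bytearray(len(stripes[0]))
    for s in stripes:
        for i, byte in enumerate(s):
            parity[i] ^= byte
    return bytes(parity)

def recover(stripes_with_hole, parity):
    """Rebuild the single missing stripe (marked None): XOR-ing the
    parity with every surviving stripe cancels them out, leaving
    exactly the lost data."""
    missing = bytearray(parity)
    for s in stripes_with_hole:
        if s is not None:
            for i, byte in enumerate(s):
                missing[i] ^= byte
    return bytes(missing)
```

Any one tape of the group can thus be lost or damaged without loss of information, at the cost of one extra tape rather than a full mirror.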
  • dCache, a Distributed Storage Data Caching System 下载全文
  • This article is about a piece of middleware that converts a dumb, tape-based Tertiary Storage System into a multi-petabyte random access device with thousands of channels. Using typical caching mechanisms, the software optimizes access to the underlying storage system and makes better use of possibly expensive drives and robots, or allows cheap and slow devices to be integrated without introducing unacceptable performance degradation. In addition, using the standard NFS2 protocol, the dCache provides a unique view into the storage repository, hiding the physical location of the file data, whether cached or on tape only. Bulk data transfer is supported through the kerberized FTP protocol and a C API providing POSIX file access semantics. Dataset staging and disk space management are performed invisibly to the data clients. The project is a DESY-Fermilab joint effort to overcome limitations in the usage of tertiary storage resources common to many HEP labs. The distributed cache nodes may range from high performance SGI machines to commodity CERN Linux-IDE-like file server models. Different cache nodes are assumed to have different affinities to particular storage groups or file sets. Affinities may be defined manually or are calculated by the dCache based on topology considerations. Cache nodes may have different disk space management policies to match the large variety of applications, from raw data to user analysis data pools.
  • Study on Limited Projections in Micro-focus X-ray Swing Laminography 下载全文
  • This paper presents a new X-ray imaging method, Swing Laminography (SL), that is suitable for non-destructive testing of slender-shaped multi-layer samples. Computer simulations are made to compare the limited projections in SL, and the principle of choosing the Optimal Projection Angular Region (OPAR) is discussed. An experiment on a two-layer Printed Circuit Board shows that SL with 120° swing angles distributed in the OPAR can obtain separated images of each layer.
  • Experience Using Different DBMSs in Prototyping a Book-Keeper for ATLAS' DAQ Software 下载全文
  • The Online Book-keeper (OBK) was developed to keep track of past data taking activity, as well as to provide the hardware and software conditions during physics data taking to the scientists doing offline analysis. The approach adopted to build the OBK was to develop a series of prototypes, experimenting with and comparing different DBMS systems for data storage. In this paper we describe the implemented prototypes, analyse their different characteristics and present the results obtained using a common set of tests.
  • Upgrade of the ZEUS OO Tag Database for Physics Analysis at HERA 下载全文
  • The object-oriented tag database of the ZEUS experiment at HERA is based on Objectivity/DB. It is used to rapidly select events for physics analysis based on intuitive physical criteria. The total number of events currently in the database exceeds 150 million. Based on the detector configuration, different information can be stored for each event. A new version of the database software was recently released which serves clients on a multitude of batch machines, workgroup servers and desktop machines running Irix, Linux and Solaris. This replaces an earlier version which was restricted to three SGI machines. Multiple copies of the data can be stored transparently to the users, for example if a new offline reconstruction of the data is in progress. A report is given on the upgrade of the database and its superior performance compared to the old event selection method.
  • Performance Analysis of Generic vs. Sliced Tags in HepODBMS 下载全文
  • This paper presents a performance analysis of accessing tag data clustered in two different ways, namely event-wise clustering (generic tags) vs. attribute-wise clustering (sliced tags). The results show that "prefetch optimisation" in particular yields an additional performance gain of sliced tags over generic tags when only a subset of all the tag attributes is accessed.
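The two clusterings can be sketched as follows (an in-memory illustration with invented attributes, not HepODBMS itself): with sliced tags, a cut that touches one attribute reads only that one column, which is what makes prefetching a contiguous column so effective.

```python
# Generic (event-wise) clustering: one full record per event.
generic = [{"pt": 10.0 + i, "ntrk": i % 5, "emiss": 0.1 * i}
           for i in range(1000)]

# Sliced (attribute-wise) clustering: one contiguous array per attribute.
sliced = {key: [evt[key] for evt in generic]
          for key in ("pt", "ntrk", "emiss")}

def select_generic(tags, pt_cut):
    # Touches every full record even though only `pt` is needed.
    return [i for i, evt in enumerate(tags) if evt["pt"] > pt_cut]

def select_sliced(tags, pt_cut):
    # Reads only the one attribute column that the cut uses.
    return [i for i, pt in enumerate(tags["pt"]) if pt > pt_cut]
```

Both selections return identical event lists; the difference is purely in how much data the storage layer must fetch to answer the query.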
  • An Evaluation of Oracle for Persistent Data Storage and Analysis of LHC Physics Data 下载全文
  • The LHC experiments at CERN will generate huge volumes of data: several PB per year, at data rates between 100 MB/s and 1.5 GB/s. The storage and analysis of these data present a major challenge. In collaboration with other members of the former RD45 project, the central database support group at CERN has been working on this issue for several years, leading to production use of a potential solution, based on the combination of an Object Database and a Mass Storage system, both at CERN and outside.
  • A Generic Identification Scheme for Hierarchically Structured Objects 下载全文
  • The detector description database, the event data structure and the conditions database are all examples (among others) of complex collections of objects which need to be unambiguously identified, not only internally to their own management structure but also from one collection to the other. The requirements for such an identification scheme include the management of identifiers individually attached to each collected object, the possibility to formally specify these identifiers (e.g. through dictionaries), to generate optimised and compact representations for these identifiers, and to be able to use them as sorting and searching keys. We present here the generic toolkit developed in the context of the ATLAS experiment, primarily to provide the identification of the readout elements of the detector. This toolkit offers several generic or specialized components, such as: an XML-based dictionary in which the formal specification of a particular object collection is expressed, a set of binary representations for identifier objects (offering various levels of compaction), range operators meant to manipulate ranges of identifiers, and finally a collection manager similar to the STL map but optimised for an organization keyed by identifiers. All these components easily interoperate. In particular, the identifier dictionary offers means of specifying the permitted cardinalities of objects at each level of the hierarchy. This can then be translated into identifier ranges, or used as the strategy driver for high compactification of the identifiers (e.g. to store very large numbers of identified objects). Current use of this toolkit within the detector description is presented, and expected or possible other usages are discussed.
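The compact-representation idea can be sketched as bit-packing driven by a small dictionary of hierarchy levels and their permitted cardinalities (a toy version, not the ATLAS toolkit; the level names are invented). Because higher levels occupy the high bits, the packed integers also sort hierarchically, so they work directly as searching keys.

```python
class IdDictionary:
    """Toy identifier dictionary: named hierarchy levels with fixed
    maximum values, compiled into a compact bit-packed integer."""

    def __init__(self, levels):
        # levels: [(name, max_value)]; bit widths derived from the
        # permitted cardinality of each level.
        self.levels = [(name, top, max(top, 1).bit_length())
                       for name, top in levels]

    def pack(self, **fields):
        ident = 0
        for name, top, bits in self.levels:
            value = fields[name]
            if not 0 <= value <= top:
                raise ValueError(f"{name}={value} outside [0, {top}]")
            ident = (ident << bits) | value
        return ident

    def unpack(self, ident):
        fields = {}
        for name, top, bits in reversed(self.levels):
            fields[name] = ident & ((1 << bits) - 1)
            ident >>= bits
        return fields
```

A range of identifiers (e.g. all cells of one layer) then becomes a contiguous integer interval, which is what makes range operators and high compaction cheap.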
  • Farming Data for the HyperCP Experiment 下载全文
  • The main goal of the HyperCP (E871) experiment at Fermilab is to search for CP violation in Ξ and Λ decays at the ~10^-4 level. This level of precision dictates a data sample of over a billion events. The experiment collected about 231 billion raw events on about 30,000 5-GB tapes in ten months of running in 1997 and 1999. In order to analyze this huge amount of data, the collaboration has reconstructed the events on a farm of 55 dual-processor Linux-based PCs at Fermilab. A set of farm tools has been written by the collaboration to interface with the Farm Batch System (FBS and FBSNG) [1] developed by the Fermilab Computing Division, to automate much of the farming and to allow non-expert farm shifters to submit and monitor jobs through a web-based interface. Special care has been taken to produce a robust system which facilitates easy recovery from errors. The code has provisions for extensive monitoring of the data on a spill-by-spill basis, as required by the need to minimize potential systematic errors. About 36 million plots of various parameters produced from the farm analysis can be accessed through a data management system. The entire data set was farmed in eleven months, about the same time as was taken to acquire the data. We describe the architecture of the farm and our experience in operating it, and show some results from the farm analysis.
  • Object Persistency for HEP Data Using an Object-Relational Database 下载全文
  • We present an initial study of the object features of Oracle 9i, the first of the market-leading object-relational database systems that supports a true object model on the server side as well as an ODMG-style C++ language binding on the client side. We discuss how these features can be used to provide persistent object storage in the HEP environment.
  • Jefferson Lab Mass Storage and File Replication Services 下载全文
  • Jefferson Lab has implemented a scalable, distributed, high performance mass storage system: JASMine. The system is entirely implemented in Java, provides access to robotic tape storage, and includes disk cache and stage manager components. The disk manager subsystem may be used independently to manage stand-alone disk pools. The system includes a scheduler to provide policy-based access to the storage systems. Security is provided by pluggable authentication modules and is implemented at the network socket level. The tape and disk cache systems have well defined interfaces in order to provide integration with grid-based services. The system is in production, is being used to archive 1 TB per day from the experiments, and currently moves over 2 TB per day in total. This paper describes the architecture of JASMine, discusses the rationale for building the system, and presents a transparent third-party file replication service to move data to collaborating institutes using JASMine, XML, and servlet technology interfacing to grid-based file transfer mechanisms.
  • Distributing File-based Data to Remote Sites within the BABAR Collaboration 下载全文
  • BABAR [1] uses two formats for its data: Objectivity databases and ROOT [2] files. This poster concerns the distribution of the latter; for Objectivity data see [3]. The BABAR analysis data is stored in ROOT files, one per physics run and analysis selection channel, maintained in a large directory tree. Currently BABAR has more than 4.5 TBytes in 200,000 ROOT files. This data is (mostly) produced at SLAC, but is required for analysis at universities and research centres throughout the US and Europe. Two basic problems confront us when we seek to import bulk data from SLAC to an institute's local storage via the network. We must determine which files must be imported (depending on the local site requirements and which files have already been imported), and we must make optimum use of the network when transferring the data. Basic ftp-like tools (ftp, scp, etc.) do not attempt to solve the first problem. More sophisticated tools like rsync [4], the widely-used mirror/synchronisation program, compare local and remote file systems, checking for changes (based on file date, size and, if desired, an elaborate checksum) in order to copy only new or modified files. However, rsync allows only limited file selection. Also when, as in BABAR, an extremely large directory structure must be scanned, rsync can take several hours just to determine which files need to be copied. Although rsync (and scp) provides on-the-fly compression, it does not allow us to optimise the network transfer by using multiple streams, adjusting the TCP window size, or separating encrypted authentication from unencrypted data channels.
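The first problem, deciding which files to import, reduces to comparing metadata manifests rather than walking the full directory tree on every transfer. A minimal sketch (our illustration, not BABAR's actual tooling) using (size, mtime) pairs:

```python
def files_to_import(remote, local):
    """Compare {path: (size, mtime)} manifests and return the files
    that need copying: new at the remote site, or changed since the
    last import. Unchanged files are skipped entirely."""
    return sorted(path for path, meta in remote.items()
                  if local.get(path) != meta)
```

Exchanging a pre-built manifest in this way avoids the hours-long scan of 200,000 files; the network-optimisation problem (multiple streams, TCP window tuning) must still be solved separately.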
  • Simulation Analysis of the Optimal Storage Resource Allocation for Large HENP Databases 下载全文
  • Large High Energy and Nuclear Physics (HENP) databases are commonly stored on robotic tape systems because of cost considerations. Later, selected subsets of the data are cached on disk for analysis or data mining. Because of the relatively long time to mount, seek, and read a tape, it is important to minimize the number of times that data is cached to disk. Having too little disk cache will force files to be removed from disk prematurely, thus reducing the potential of their sharing with other users. Similarly, having too few tape drives will not make good use of a large disk cache, as the throughput from the tape system will form the bottleneck. Balancing the tape and disk resources is dependent on the patterns of the requests to the data. In this paper, we describe a simulation that characterizes such a system in terms of the resources and the request patterns. We learn from the simulation which parameters affect the performance of the system the most. We also observe from the simulation that there is a point beyond which it is not worth investing in additional resources, as the benefit is too marginal. We call this point the "point-of-no-benefit" (or PNB), and show that using this concept we can more easily discover the relationship of various parameters to the performance of the system.
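As a toy illustration of the simulation idea (not the authors' actual model), one can count tape staging operations for a stream of file requests against an LRU disk cache of varying size; the flattening of the resulting curve is exactly the kind of diminishing return that marks a point-of-no-benefit.

```python
from collections import OrderedDict

def tape_reads(requests, cache_slots):
    """Count tape staging operations for an LRU disk cache of given size."""
    cache = OrderedDict()
    reads = 0
    for f in requests:
        if f in cache:
            cache.move_to_end(f)          # hit: file already on disk
        else:
            reads += 1                    # miss: stage the file from tape
            cache[f] = True
            if len(cache) > cache_slots:
                cache.popitem(last=False)  # evict least recently used file
    return reads

reqs = [1, 2, 1, 3, 1, 2, 4, 1, 2, 3]
curve = {n: tape_reads(reqs, n) for n in (1, 2, 3, 4)}
print(curve)  # → {1: 10, 2: 8, 3: 5, 4: 4}
```

Here adding the fourth cache slot saves only one tape read, so for this request pattern further investment in disk would bring marginal benefit.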
  • User-Friendly Interface to the CDF Data Handling System 下载全文
  • The CDF collaboration at the Fermilab Tevatron analyses proton-antiproton interactions at a center-of-mass energy of 2 TeV. During the collider run starting this year the experiment expects to record 1 Petabyte of data and associated data samples. The Data Handling (DH) system has online and offline components. The DH offline component provides access to the stored data, to stored reconstruction output, to stored Monte Carlo data samples, and to user-owned data samples. It serves more than 450 physicists of the collaboration. Additional requirements on the offline component of the Data Handling system are simplicity and convenience for users. More than 50 million events of the CDF Run II data have already been processed using this system.
  • Automatic Schema Evolution in Root 下载全文
  • ROOT version 3 (spring 2001) supports automatic class schema evolution. In addition, this version also produces files that are self-describing. This is achieved by storing in each file a record with the description of all the persistent classes in the file. Being self-describing guarantees that a file can always be read later, its structure browsed and objects inspected, even when the library with the compiled code of these classes is missing. The schema evolution mechanism supports the frequent case when multiple data sets generated with many different class versions must be analyzed in the same session. ROOT supports the automatic generation of C++ code describing the data objects in a file.
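The idea behind self-describing files can be shown with a deliberately simplified sketch. This is not ROOT's actual machinery, just an illustration: the file carries a per-class, per-version schema record, so a reader can rebuild objects written with an older class layout and fill in defaults for members added later.

```python
# Schema records as they would be stored inside the file itself.
SCHEMAS = {
    ("Track", 1): ["px", "py"],
    ("Track", 2): ["px", "py", "pz"],   # member added in a later version
}

DEFAULTS = {"px": 0.0, "py": 0.0, "pz": 0.0}  # current in-memory layout

def read_object(cls, version, raw):
    """Rebuild an object using the schema record stored with the data."""
    obj = dict(DEFAULTS)                          # start from current layout
    obj.update(zip(SCHEMAS[(cls, version)], raw))  # fill the stored members
    return obj

old = read_object("Track", 1, [1.0, 2.0])        # written before pz existed
new = read_object("Track", 2, [1.0, 2.0, 3.0])   # written with pz
```

Because the schema travels with the data, old and new data sets can be mixed in one session without the reader needing the original compiled classes.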
  • The Role of XML in the CMS Detector Description 下载全文
  • Offline software such as simulation, reconstruction, analysis, and visualisation all need a detector description. These applications have several common but also many specific requirements for the detector description in order to build up their internal representations. To achieve this in a consistent and coherent manner, a common source of information, the detector description database, will be consulted by each of the applications. The role and suitability of XML in the design of the detector description database in the scope of the CMS detector at the LHC is discussed. Different aspects such as the data modelling capabilities of XML, tool support, and integration with C++ representations of data models are treated, and recent results of prototype implementations are presented.
  • The CDF Run II Disk Inventory Manager 下载全文
  • The Collider Detector at Fermilab (CDF) experiment records and analyses proton-antiproton interactions at a center-of-mass energy of 2 TeV. Run II of the Fermilab Tevatron started in April of this year, and the duration of the run is expected to be over two years. One of the main data handling strategies of CDF for Run II is to hide all tape access from the user and to facilitate sharing of data and thus disk space. A disk inventory manager was designed and developed over the past years to keep track of the data on disk, to coordinate user access to the data, and to stage data back from tape to disk as needed. The CDF Run II disk inventory manager consists of a server process, user and administrator command-line interfaces, and a library with the routines of the client API. Data are managed in filesets, which are groups of one or more files. The system keeps track of user access to the filesets and attempts to keep frequently accessed data on disk. Data that are not on disk are automatically staged back from tape as needed. For CDF the main staging method is based on the mt-tools package, as tapes are written according to the ANSI standard.
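A disk inventory manager of this kind can be caricatured in a few lines. This is a hypothetical sketch, not the CDF implementation: filesets are staged back from tape on a miss, and when disk space runs out the least-accessed fileset is evicted, keeping frequently accessed data on disk.

```python
class DiskInventory:
    """Toy fileset cache: stage on miss, evict the least-accessed fileset."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.on_disk = {}                 # fileset -> access count

    def request(self, fileset):
        staged = fileset not in self.on_disk
        if staged:                        # not on disk: stage back from tape
            if len(self.on_disk) >= self.capacity:
                coldest = min(self.on_disk, key=self.on_disk.get)
                del self.on_disk[coldest]  # free space for the new fileset
            self.on_disk[fileset] = 0
        self.on_disk[fileset] += 1
        return staged                     # True if a tape stage was needed

inv = DiskInventory(capacity=2)
assert inv.request("runA") is True        # cold start: staged from tape
assert inv.request("runA") is False       # already on disk
inv.request("runB")
inv.request("runC")                       # disk full: evicts colder runB
assert inv.request("runB") is True        # runB must be staged again
```

The real system additionally coordinates concurrent user access through a server process; the eviction-on-access-count policy above is only one simple way to "keep frequently accessed data on disk".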
  • CDF Run II Data File Catalog 下载全文
  • The CDF experiment started data taking in April 2001. The data are organized into datasets which contain events of similar physics properties and reconstruction version. The information about datasets is stored in the Data File Catalog, a relational database. This information is presented to the data processing framework as objects which are retrieved using compound keys. The objects and the keys are designed to be the algorithms' view of the information stored in the database. Objects may use several DB tables. A database interface management layer exists for the purpose of managing the mapping of persistent data to transient objects that can be used by the framework. This layer sits between the algorithm code and the code which reads directly from database tables. At the user end, it places a get/put interface on top of a transient class for retrieval or storage of objects of this class using a key. The Data File Catalog code makes use of this facility and contains all the code needed to manipulate the CDF Data File Catalog from a C++ program or from the command prompt. It supports an Oracle interface using OTL, and an mSQL interface. This code and the Oracle implementation of the Data File Catalog were subjected to test during the CDF Commissioning Run last fall and during the first weeks of Run II in April. It performed exceptionally well.
  • SAM Overview and Operation at the D0 Experiment 下载全文
  • SAM is a network-distributed data management system developed at Fermilab for use with Run II data. It is being employed by the D0 experiment to store, manage, deliver, and track processing of all data. We describe the design and features of the system, including resource management and data transfer mechanisms. We show the operational experience D0 has accumulated to date, including data acquisition, processing, and all levels of access and delivery. We present various configurations of the system and describe their use in the collaboration.
  • Optimizing Parallel Access to the BaBar Database System Using CORBA Servers 下载全文
  • The BaBar experiment collected around 20 TB of data during its first 6 months of running. Now, after 18 months, the data size exceeds 300 TB, and according to prognosis, this is only a small fraction of the data coming in the next few months. In order to keep up with the data, significant effort was put into tuning the database system. It led to great performance improvements, as well as to inevitable system expansion: 450 simultaneous processing nodes are used for data reconstruction alone. It is believed that further growth beyond 600 nodes will happen soon. In such an environment, many complex operations are executed simultaneously on hundreds of machines, putting a huge load on data servers and increasing network traffic. Introducing two CORBA servers halved startup time and dramatically offloaded database servers: data servers as well as lock servers. The paper describes details of the design and implementation of the two servers recently introduced in the BaBar system: the conditions OID server and the Clustering Server. The first experience of using these servers is discussed. A discussion of a Collection Server for data analysis, currently being designed, is included.
  • Managing the BaBar Object Oriented Database 下载全文
  • The BaBar experiment stores its data in an object-oriented federated database supplied by Objectivity/DB(tm). This database is currently 350 TB in size and is expected to increase considerably as the experiment matures. Management of this database requires careful planning and specialized tools in order to make the data available to physicists in an efficient and timely manner. We discuss the operational issues and management tools that were developed during the previous run to deal with this vast quantity of data at SLAC.
  • A Model of BES Data Storage Management System 下载全文
  • In this article we introduce the system structure of a model built for BES data management and storage, as well as the basic methods for establishing the system. Additionally, the analysis of the data structure, the data process, the selection of the experimental program, the image manipulation and the key techniques are discussed in detail. The model implements the setup of the system environment and all functions from data loading, database creation, data access and remote data processing to data plotting.
  • Data Transfer Using Buffered I/O API with HPSS 下载全文
  • The new KEK Central Computer System employs HPSS for data management. To gain high-performance access to HPSS easily, we built a wrapper of the client API.
  • Experience with the COMPASS Conditions Data Base 下载全文
  • The COMPASS experiment at CERN is starting data taking in summer 2001. The COMPASS off-line framework (CORAL) will use the CERN Conditions Data Base (CDB) to handle time-dependent quantities like calibration constants and data from the slow control system. We describe the use of the CDB within CORAL and the full-scale performance tests on the COMPASS Computing Farm (CCF). The CDB has been interfaced to the PVSS SCADA slow control system to continuously transfer all the data to the CDB and make them available to the users. We describe this interface and a feasibility study performed using mock data, and we predict the expected performance.
  • Building an Advanced Computing Environment with SAN Support 下载全文
  • The current computing environment of our Computing Center at IHEP uses a SAS (Server Attached Storage) architecture, attaching all the storage devices directly to the machines. This kind of storage strategy cannot properly meet the requirements of our BEPC II/BES III project. Thus we designed and implemented a SAN-based computing environment, which consists of several computing farms, a three-level storage pool, a set of storage management software and a web-based data management system. The features of our system include cross-platform data sharing, fast data access, high scalability, convenient storage management and data management.
  • Geant4 Low Energy Electromagnetic Physics 下载全文
  • The Geant4 Low Energy Electromagnetic package provides a precise treatment of electromagnetic interactions of particles with matter down to very low energies (250 eV for electrons and photons, <1 keV for hadrons and ions). It includes a variety of models for the electromagnetic processes of electrons, photons, hadrons and ions, taking into account advanced features such as shell effects and effects due to charge dependence. The comprehensive set of particle types it can handle, the variety of modeling approaches and the extended coverage of energy range make this package a unique tool among Monte Carlo codes on the market, and of relevance to several experimental domains in HEP, astroparticle physics, space science and biomedical studies.
  • Simulation for Astroparticle Experiments and Planetary Explorations:Tools and Applications 下载全文
  • We present a set of tools and general-purpose applications for the simulation of astrophysics and astroparticle experiments, concerning both physics and radiation background studies. They address the specific requirements of various typical astroparticle detectors: new-generation X- and γ-ray detectors on satellites, underground detectors for astroparticle experiments, and solar system explorations.
  • Hadronic Shower Models in GEANT4: Validation Strategy and Results 下载全文
  • Optimal exploitation of hadronic final states played a key role in the successes of all recent hadron collider experiments in HEP, and the ability to use hadronic final states will continue to be one of the decisive issues during the analysis phase of the LHC experiments. Monte Carlo implementations of hadronic shower models provided with GEANT4 facilitate the use of hadronic final states and have been developed for many years. We will give an overview of the physics underlying hadronic shower simulation, discussing the three basic types of modelling (data-driven, parametrisation-driven, and theory-driven modelling) and their respective implementation status in GEANT4. We will confront the different types of modelling with a validation suite for hadronic generators based on cross-section measurements from thin-target experiments, and expose the strengths and weaknesses of the individual approaches.
  • Comparison of GEANT4 Simulations with Testbeam Data and GEANT3 for the ATLAS Liquid Argon Calorimeter 下载全文
  • We present several comparisons of GEANT4 simulations with test beam data and GEANT3 simulations for different liquid argon (LAr) calorimeters of the ATLAS detector. All relevant parts of the test beam setup (scintillators, multi-wire proportional chambers, cryostat, etc.) are described in GEANT4 as well as in GEANT3. Muon and electron data at different energies have been compared with the Monte Carlo predictions.
  • Calculation of Energy Response of Cylindrical G-M Tubes with the EGS4 Monte Carlo Code 下载全文
  • The Ka energy responses of two types of cylindrical G-M counter tubes were calculated using an electron-photon cascade Monte Carlo code, EGS4. One type of G-M counter tube was the GJ4401 (sensitive length 9 cm, diameter 1 cm); the other was the J5 (sensitive length 2 cm, diameter 0.3 cm). The restricted sampling technique for source photons was used. Good agreement in tendency between the simulations and experiments was achieved for gamma radiation with energies ranging from 40 keV to 1.25 MeV. For the GJ4401, the difference in response between simulations and experiments at 662 keV was 34%, and for the J5 the difference was 27%.
  • A Method of Large-Scale Object Forward Compton Scattering Imaging 下载全文
  • In the field of Compton scatter imaging, the problem of how to get a meaningful image from a large-scale object is still not fully settled. The difficulty mainly lies in the method of compensating for the attenuation. Based on the principle of small-angle forward scattering, a new attenuation correction method is derived in this paper; we can distinguish air from other materials even when it is deep inside a large-scale object. To verify our method, we design a model of a one-meter-long object and perform Monte Carlo simulations with the MCNP software.
  • GBuilder: Computer-Aided Design of Detector Geometry 下载全文
  • Many tasks typical for High Energy Physics, such as simulation, event display, and maintenance of the geometry database of an experiment, require input of geometrical data. To simplify the process of preparing such data, an interactive graphical tool, GBuilder, is being developed. Definition of the geometry model in GBuilder is based on the Constructive Solid Geometry approach, where objects are defined using boolean operations on basic shapes. To provide parameterization of the model, arithmetic expressions may be used in place of numbers. A wide list of predefined materials is also available. Different drivers allow the geometry model to be output in a form suitable for specific simulation or visual analysis tools.
  • Integration of Geant4 with the Gaudi Framework 下载全文
  • The GAUDI software framework is to be used for all event-processing applications in the LHCb experiment.The GEANT4 toolkit has been integrated into GAUDI to form the basis of the LHCb simulation program GAUSS.The benefits of this approach are that it permits re-use of basic services,such as persistency,interactivity and data visualization,as well as physics algorithms that were originally developed in the context of the reconstruction and analysis programs.Following the GAUDI philosophy,the integration has been achieved by developing a number of services with abstract interfaces that can be plugged in at run-time.We describe the overall design and details of the components for interfacing the detector geometry,the primary interaction and the output from tracking particles through the detector.
  • A Standard Event Class for Monte Carlo Generators 下载全文
  • StdHepC++[1] is a CLHEP[2] Monte Carlo event class library which provides a common interface to Monte Carlo event generators. This work is an extensive redesign of the StdHep Fortran interface to use the full power of object-oriented design. A generated event maps naturally onto the Directed Acyclic Graph concept, and we have used the HepMC classes to implement this. The full implementation allows the user to combine events to simulate beam pileup and access them transparently as though they were a single event.
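The mapping of an event onto a directed acyclic graph can be sketched as follows. This is a minimal illustration in Python rather than the CLHEP/HepMC C++ classes, and all names are invented: particles are edges between production and decay vertices, and the final state falls out of a graph walk.

```python
class Particle:
    def __init__(self, pdg_id):
        self.pdg_id = pdg_id
        self.end_vertex = None            # set only if the particle decays

class Vertex:
    def __init__(self):
        self.incoming, self.outgoing = [], []

def decay(parent, child_ids):
    """Attach a decay vertex to parent, producing the listed particles."""
    v = Vertex()
    v.incoming.append(parent)
    parent.end_vertex = v
    v.outgoing.extend(Particle(pdg) for pdg in child_ids)
    return v

def final_state(particles):
    """Walk the graph and collect particles that never decayed."""
    out = []
    for p in particles:
        if p.end_vertex is None:
            out.append(p)
        else:
            out.extend(final_state(p.end_vertex.outgoing))
    return out

pi0 = Particle(111)
decay(pi0, [22, 22])                      # pi0 -> gamma gamma
print([p.pdg_id for p in final_state([pi0])])  # → [22, 22]
```

Combining events for pileup then amounts to walking several such graphs as if they shared one set of roots, which is the kind of transparency the abstract describes.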
  • HepMC_Contrib: Persistent Interface Package for HepMC 下载全文
  • A persistent interface package, HepMC_Contrib, has been developed for the C++ Monte Carlo event class library HepMC. The HepMC_Contrib package provides an interface for user programs to store/retrieve HepMC event records to/from an Objectivity database. HepMC_Contrib is designed to utilise the standard features of HepODBMS as much as possible. Two implementations of the class design were tested to reduce the size of the database. Finally, the performance of the package is discussed in terms of the size of the database.
  • HepPDT: Encapsulating the Particle Data Table 下载全文
  • As a result of discussions within the HEP community, we have written a C++ package which can be used to maintain a table of particle properties, including decay mode information. The classes allow for multiple tables and accept input from a number of standard sources. In addition, they provide a mechanism by which an event generator can employ the tabulated information to actually direct the decay of particles.
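The idea can be sketched in illustrative Python (this is not the HepPDT C++ API, and the table values are abridged): one table maps particle IDs to static properties and decay modes, and a generator draws on the branching fractions to direct a decay.

```python
# Abridged particle data table: PDG id -> properties and decay modes,
# each mode given as (branching fraction, product ids).
TABLE = {
    211: {"name": "pi+", "mass": 0.13957, "decays": [(1.0, [-13, 14])]},
    -13: {"name": "mu+", "mass": 0.10566, "decays": []},
     14: {"name": "nu_mu", "mass": 0.0, "decays": []},
}

def pick_decay(pid, rng):
    """Choose a decay mode according to the tabulated branching fractions.

    rng is a callable returning a uniform random number in [0, 1).
    """
    x = rng()
    for fraction, products in TABLE[pid]["decays"]:
        if x < fraction:
            return products
        x -= fraction
    return None                           # stable: no decay modes listed
```

For example, `pick_decay(211, lambda: 0.5)` selects the (here sole) pi+ decay to mu+ nu_mu, while `pick_decay(-13, ...)` returns `None` because the table lists the muon as stable for the generator's purposes.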
  • APE: Tflops Computers for Theoretical Particle Physics 下载全文
  • The commissioning of several large installations of APEmille computers in Europe will have been finished in autumn 2001. All these machines together make another 2 Tflops of computing power available for numerical simulations in theoretical particle physics.
  • Network Printing in a Heterogeneous Environment 下载全文
  • Mail and printing are often said to be the most visible network services for the user. Though many people talked about the paperless office a few years ago, it seems that the more digital data becomes accessible, the more it gets printed. Print management in a heterogeneous network environment typically spans all operating systems. Each of these brings its own requirements and different printing system implementations with individual user interfaces. The scope is to give the user the advantages and features of the native interface of their operating system, while making administration tasks as easy as possible by following the general idea of a centralised network service on the server side.
  • Next Generation Environment for Collaborative Research 下载全文
  • Collaborative environments supporting point-to-point and multipoint videoconferencing, document and application sharing across both local and wide area networks, video on demand (broadcast and playback) and interactive text facilities will be a crucial element for the development of the next generation of HEP experiments by geographically dispersed collaborations. The Virtual Room Videoconferencing System (VRVS) has been developed since 1995 in order to provide a low-cost, bandwidth-efficient, extensible means for videoconferencing and remote collaboration over networks within the High Energy and Nuclear Physics communities. VRVS provides a worldwide videoconferencing service and collaborative environment to the research and education communities. VRVS uses the Internet2 and ESnet high-performance network infrastructures to deploy its web-based system, which now includes more than 5790 registered hosts running VRVS software in more than 50 different countries. VRVS hosts an average of 100 multipoint videoconference and collaborative sessions worldwide every month. There are around 35 reflectors that manage the traffic flow, at HENP labs and universities in the US and Europe. So far, there are 7 Virtual Rooms for worldwide conferences (involving more than one continent), and 4 Virtual Rooms each for intra-continental conferences in the US, Europe and Asia. VRVS continues to expand and implement new digital video technologies, including H.323 ITU standard integration, MPEG-2 videoconferencing integration, shared environments, and Quality of Service.
  • An Electronic Logbook for the HEP Control Room 下载全文
  • The Control Room Logbook (CRL) is designed to improve upon and replace the paper logbooks traditionally used in the HEP accelerator control room. Its features benefit the on-line coordinator, the shift operators, and the remote observers. This paper explains some of the most attractive features for each of these roles. The features include the ability to configure the logbook for the specific needs of a collaboration, a large variety of entry types operators can add by simply clicking and dragging, and a flexible web interface for the remote observer to keep up with control room activities. The entries are saved as UTF-8 based XML files, which allowed us to give the data structure and meaning such that it can easily be parsed in the present and far into the future. The XML tag data is also indexed in a relational database, making queries on dates, keywords, entry type and other criteria feasible and fast. The CRL is used in the D0 control room. This presentation also discusses our experience with deployment, platform independence and other interesting issues that arose with the installation and use of the logbook.
  • Building a Mail Server on a Distributed Computing System 下载全文
  • Electronic mail has become an indispensable function in daily work, and server stability and performance are required. Using DCE and DFS we have built a distributed electronic mail server; that is, servers such as SMTP and IMAP are distributed symmetrically and provide seamless access.
  • The Temperature Effects on the Ion Trap Quantum Computer 下载全文
  • We consider one source of decoherence for a quantum computer composed of many trapped ions, due to the thermal effects of the system in the presence of laser-ion interaction. The upper limit of the temperature at which the logical gate operations can be carried out reliably is given, and our result is in agreement with experiment.
  • Knowledge Management and Electronic Publishing for the CNAO with EDMS 下载全文
  • The Italian Government has recently approved the construction of a National Center for Oncological Hadrontherapy (CNAO). TERA (Foundation for Oncological Hadrontherapy) will lead the high-technology projects of the CNAO, whose machine design is a spin-off to the medical world of the collaboration with CERN. The CERN EDMS (Engineering Data Management System) was initially launched at CERN to support the LHC project but has since become a general service available for all divisions and recognized experiments. As TERA is closely associated with CERN, TERA decided to profit from EDMS and to use it to support the ambitious Quality Assurance plan for the CNAO project. With this EDMS project TERA transfers know-how that has been developed in the HEP community to a social sector of major importance that also has high-density information management needs. The features available in the CERN EDMS system provide the tools for managing the complete lifecycle of any technical document, including a distributed approval process and a controlled distributed collaborative work environment using the World Wide Web. The system allows management of structures representing projects and their documents, including drawings, within working contexts and with a customizable release procedure. TERA is customizing CERN EDMS to document the CNAO project activities, to ensure that the medical accelerator and its auxiliary installations can be properly managed throughout their lifecycle, from design to maintenance and possibly dismantling. The technical performance requirements of EDMS are identical to those for the LHC and CERN in general. We will describe what we have learned about how to set up an EDMS project, and how it benefits a challenging initiative like the CNAO project of the TERA collaboration. The knowledge managed by the system will facilitate later installations of similar centers (planned for Lyon and Stockholm) and will allow the reuse of the experience gained in Italy.
  • H.323 Based Collaborative Environment for High Energy and Nuclear Physics 下载全文
  • After having evaluated various H.323 products over the past two years, KEK and the Japanese HENP community have started to move from an ISDN (H.320)-based videoconferencing environment to an IP (H.323)-based one. The primary reason for the move is to cut down the ever-increasing ISDN communication cost. At the same time, H.323 can offer us a more powerful collaborative environment. In order to make KEK a center for the H.323-based collaborative environment in Japan, PictureTel's LiveGateway as an H.320/H.323 gateway, which is essential for the smooth transition, a Cisco IP/VC 3510 as an H.323 MCU, and a Cisco 2610 as a gatekeeper were installed at KEK in March 2001, and the transition started. In this paper, we describe the collaborative environment which our users can have, together with its operational results.
  • Achieving High Data Throughput in Research Networks 下载全文
  • After less than a year of operation, the BaBar experiment at SLAC has collected almost 100 million particle collision events in a database approaching 165 TB. Around 20 TB of data has been exported via the Internet to the BaBar regional center at IN2P3 in Lyon, France, and around 40 TB of simulated data has been imported from the Lawrence Livermore National Laboratory (LLNL). BaBar collaborators plan to double data collection each year and export a third of the data to IN2P3. So within a few years the SLAC OC3 (155 Mbps) connection will be fully utilized by file transfer to France alone. Upgrades to the infrastructure are essential, and a detailed understanding of performance issues and the requirements for reliable high-throughput transfers is critical. In this talk, results from active and passive monitoring and direct measurements of throughput will be reviewed. Methods for achieving the ambitious requirements will be discussed.
  • IPv6 in ESnet 下载全文
  • The importance of the Internet to modern High Energy Physics collaborations is clearly immense, and understanding how new developments in network technology impact networks is critical to the future design of experiments. The next-generation Internet Protocol (IPv6) is being deployed on testbeds and production networks throughout the world. The protocol has been designed to solve today's Internet problems, and many of its features will be core Internet services in the future. In this talk the features of the protocol will be described. Details will be given on the deployment at sites important to High Energy Physics research and the network services operating at these sites. In particular, IPv6 deployment on the U.S. Energy Sciences Network (ESnet) will be reviewed. The connectivity and performance between High Energy Physics laboratories, universities and institutes will be discussed.
  • An Additional DNS Feature for Different Routing of Electronic Mail inside and outside of a Campus Network 下载全文
  • Several years ago DESY faced the need to change the Electronic Mail service to support it on a central cluster of servers. The centralized architecture was necessary for the deployment of unified internal E-Mail standards and better quality of service and security. To implement the new policy for the Electronic Mail service and avoid huge modifications to a few hundred network nodes, an additional DNS feature has been added to ISC's (Internet Software Consortium) software bind-4.9.7. The DNS servers running at DESY are capable of distinguishing between DNS queries coming from inside and outside of the campus network and reply with different lists of MX (Mail Exchanger) records. External hosts always get a list of MX records pointing to the central mail servers, while internal hosts may use different paths for mail exchange within the campus network. The modified version of the DNS software has been used at DESY since 1997. It is fully compliant with the original goal of the project and shows good operational performance and reliability.
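The split answer logic can be illustrated schematically. This is only a sketch of the behaviour, not ISC bind's code, and the network prefix and host names are placeholders: queries from inside the campus prefix get the internal MX list, everyone else gets the central gateway.

```python
import ipaddress

CAMPUS = ipaddress.ip_network("198.51.100.0/24")   # placeholder campus prefix
INTERNAL_MX = [(10, "mail1.campus.example"), (20, "mail2.campus.example")]
EXTERNAL_MX = [(10, "mx-gateway.campus.example")]  # central mail servers only

def mx_records(query_source_ip):
    """Return the MX list appropriate for where the DNS query came from."""
    if ipaddress.ip_address(query_source_ip) in CAMPUS:
        return INTERNAL_MX                # inside: direct internal paths
    return EXTERNAL_MX                    # outside: central mail servers
```

For example, `mx_records("198.51.100.25")` yields the internal hosts, while `mx_records("203.0.113.5")` yields only the central gateway, which is the policy the abstract describes.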
  • FLink_2: PCI Bus Interface to a 400 MB Ring 下载全文
  • Tools for Distributed Monitoring of the Campus Network with Low Latency Time 下载全文
  • In addition to the deployment of commercial management products, a number of public and self-developed tools are successfully used at Fermilab for monitoring of the campus network. A suite of tools is used, consisting of several programs running in a distributed environment to measure network parameters such as average round-trip time, traffic, throughput, error rate, DNS responses and others from different locations in the network. The system maintains a central archive of data, and makes analysis and graphical representation available via a web-based interface. The developed tools are based on integration with the well-known public software RRD, Cricket, fping, and iperf.
  • Strategy and Management of Network Security at KEK 下载全文
  • Recently, troubles related to network security, such as attacks, have occurred frequently. Our network security strategy consists of two fundamental things: monitoring and access control. To monitor the network, we have installed an intrusion detection system and have managed it since 1998. For the second thing, we arranged three categories to classify all hosts (about 5000 hosts) at KEK according to their security level. To realize these three categories, we filter incoming packets from outside KEK according to whether they have a SYN flag or not. The network monitoring and the access control have produced good effects in keeping our security level high. Since 2000 we have been carrying out the transition of the LAN from a shared-media network to a switched network. Now almost all of the LAN has been re-configured, and in this new LAN 10 Mbps/100 Mbps/1 Gbps Ethernet are supported. Currently we are planning a further speedup (10 Gbps) and redundancy of the network. Not only the LAN but also the WAN network speed will be upgraded to 10 Gbps thanks to the strong promotion of IT by the Japanese government. In this very high speed network, our current strategy will be affected and again network security becomes a big issue. This paper describes our experiences in the practice of the current strategy and management know-how, together with a discussion of the new strategy.
  • Applied Techniques for High Bandwidth Data Transfers across Wide Area Networks 下载全文
  • Large distributed systems such as Computational/Data Grids require large amounts of data to be co-located with the computing facilities for processing. From our work developing a scalable distributed network cache, we have gained experience with techniques necessary to achieve high data throughput over high-bandwidth Wide Area Networks (WAN). In this paper, we discuss several hardware and software design techniques and then describe their application to an implementation of an enhanced FTP protocol called GridFTP. We describe results from the Supercomputing 2000 conference.
  • Data Transfer over the Long Fat Networks 下载全文
  • The necessity of distributing data over the wide area network (WAN) to the physicists' home institutes will increase, and effective utilization of the network becomes crucial. However, networks in the future WAN will typically have a large bandwidth on the order of gigabits per second, with a latency of several hundred milliseconds, so that the large bandwidth-delay product extends to tens of megabytes and numerous problems are encountered. Such networks are called "long fat networks" (LFNs). In order to study data transfer operating on a long fat network, we have built PC clusters connected with a router which can simulate bandwidth limitations, delays, packet losses, and multipath effects. This router runs on FreeBSD with the DUMMYNET kernel option. On these machines we have measured the performance of bulk data transfer under numerous conditions and studied efficient transfer methods.
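The arithmetic behind the "tens of megabytes" figure is the bandwidth-delay product: the sender must keep bandwidth × round-trip time worth of bytes in flight to fill the pipe. A quick sketch:

```python
def bdp_bytes(bandwidth_bps, rtt_seconds):
    """Bytes in flight needed to keep the pipe full: bandwidth x RTT / 8."""
    return int(bandwidth_bps * rtt_seconds / 8)

# A 1 Gbit/s path with a 200 ms round trip needs a ~25 MB window,
# far beyond the classic 64 KB TCP window limit.
window = bdp_bytes(1_000_000_000, 0.200)
print(window)  # → 25000000
```

This is why LFNs require TCP window scaling or multiple parallel streams before the available bandwidth can actually be used.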
  • High Performance Multiple Stream Data Transfer 下载全文
  • The ALICE detector at the LHC (CERN) will record raw data at a rate of 1.2 gigabytes per second. Analysing all of this data at CERN will not be feasible. As originally proposed by the MONARC project, data collected at CERN will be transferred to remote centres in order to use their computing infrastructure; the remote centres will reconstruct and analyse the events and make the results available. High-rate data transfer between computing centres (Tiers) will therefore become of paramount importance. This paper presents several tests that have been made between CERN and remote centres in Padova (Italy), Torino (Italy), Catania (Italy), Lyon (France), Ohio (United States), Warsaw (Poland) and Calcutta (India). These tests consisted, in a first stage, of sending raw data from CERN to the remote centres and back, using an FTP method that allows connections over several streams at the same time. Thanks to these multiple streams it is possible to increase the rate at which the data are transferred. While several "multiple stream FTP" solutions already exist, our method is based on a parallel socket implementation which allows not only files but also objects (or any large message) to be sent in parallel. A prototype able to manage different transfers will be presented. This is the first step of a system to be implemented that will take care of the connections with the remote centres to exchange data and monitor the status of the transfers.
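The multiple-stream idea, striping one payload across several concurrent connections and reassembling it by index on the far side, can be sketched with local socket pairs standing in for the WAN links. This is a toy illustration of the principle only, not the ALICE parallel socket implementation:

```python
import socket
import threading

def parallel_send(data: bytes, n_streams: int) -> bytes:
    """Toy multi-stream transfer: split `data` into n_streams chunks,
    push each chunk over its own socket pair in a separate thread, and
    reassemble on the receiving side by chunk index.
    (Local socket pairs stand in for the WAN connections.)"""
    chunk = (len(data) + n_streams - 1) // n_streams
    pieces = [data[i * chunk:(i + 1) * chunk] for i in range(n_streams)]
    pairs = [socket.socketpair() for _ in range(n_streams)]
    received = [b""] * n_streams

    def sender(i):
        tx = pairs[i][0]
        tx.sendall(pieces[i])
        tx.shutdown(socket.SHUT_WR)  # signal end-of-stream

    def receiver(i):
        rx = pairs[i][1]
        buf = b""
        while True:
            part = rx.recv(65536)
            if not part:
                break
            buf += part
        received[i] = buf

    threads = [threading.Thread(target=f, args=(i,))
               for i in range(n_streams) for f in (sender, receiver)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    for a, b in pairs:
        a.close()
        b.close()
    return b"".join(received)

payload = bytes(range(256)) * 1000
assert parallel_send(payload, 4) == payload
```

On a real LFN each stream would be a separate TCP connection, so the aggregate in-flight data is the per-stream window multiplied by the number of streams; that is what makes striping pay off.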
  • Optimize the Security Performance of the Computing Environment of IHEP 下载全文
  • This paper gives background on crackers and then enumerates and describes some attack events that have happened in IHEP networks. Finally it describes in detail a highly efficient defence system that integrates the authors' experience and research results and has been put into practice in the IHEP network environment. The paper also gives network and information security advice, and a process for the high energy physics computing environment of the Institute of High Energy Physics that will be implemented in the future.
  • Passive Performance Monitoring and Traffic Characteristics on the SLAC internet Border 下载全文
  • Understanding how the Internet is used by HEP is critical to optimizing the performance of the inter-lab computing environment. Typically, usage requirements have been defined by discussions between collaborators; however, later analysis of the actual traffic has shown that these are often misunderstood and that actual use differs significantly from what was predicted. Passive monitoring of the real traffic provides insight into the true communications requirements and the performance of a large number of inter-communicating nodes. It can be useful in identifying performance problems that are due to factors other than Internet congestion, especially when compared to methods such as active monitoring, where traffic is generated specifically to measure its performance. Controlled active monitoring between dedicated servers often indicates what can be achieved on a network; passive monitoring of the real traffic gives a picture of the true performance. This paper discusses the method and results of collecting and analyzing flows of data obtained from the SLAC Internet border. The unique nature of HEP traffic and the needs of the HEP community are highlighted. The insights this has brought to understanding the network are reviewed, and the benefit it can bring to engineering networks is discussed.
  • Peer-to-Peer Computing for Secure High Performance Data Copying 下载全文
  • The BaBar Copy Program (bbcp) is an excellent representative of peer-to-peer (P2P) computing, and a pioneering application of its type in the P2P arena. Built upon the foundation of its predecessor, Secure Fast Copy (sfcp), bbcp incorporates significant improvements in performance and usability. As with sfcp, bbcp uses ssh for authentication, providing an elegant and simple working model: if you can ssh to a location, you can copy files to or from that location. To fully support this notion, bbcp transparently supports third-party copy operations. The program also incorporates several mechanisms to deal with firewall security, the bane of P2P computing. To achieve high performance in a wide area network, bbcp allows a user to independently specify the number of parallel network streams, the TCP window size, and the file I/O blocking factor. Using these parameters, data is pipelined from source to target to provide a uniform traffic pattern that maximizes router efficiency. For improved recoverability, bbcp also keeps track of copy operations so that an operation can be restarted from the point of failure at a later time, minimizing the amount of network traffic in the event of a copy failure. Here we present the bbcp architecture, its various features, and the reasons for their inclusion.
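Why bbcp exposes the number of streams and the TCP window as independent knobs follows from a first-order throughput model: a single TCP stream cannot exceed window/RTT, so filling a fast long-distance link takes either a large window or many streams. A back-of-envelope sketch (all figures illustrative, not bbcp defaults):

```python
import math

def streams_needed(link_bps: float, window_bytes: int, rtt_s: float) -> int:
    """First-order estimate: one TCP stream tops out at window/RTT,
    so ceil(link / per-stream throughput) streams fill the link."""
    per_stream_bps = window_bytes * 8 / rtt_s
    return math.ceil(link_bps / per_stream_bps)

# 622 Mbit/s path, 64 KiB windows, 100 ms RTT: each stream carries
# at most ~5.2 Mbit/s, so ~119 streams are needed; with an 8 MiB
# window a single stream already exceeds the link capacity.
print(streams_needed(622e6, 64 * 1024, 0.100))        # -> 119
print(streams_needed(622e6, 8 * 1024 * 1024, 0.100))  # -> 1
```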
  • DEPUTY: Analysing Architectural Structures and Checking Style 下载全文
  • DepUty (dependencies utility) can be classified as a project and process management tool. Its main goal is to assist, by means of source code analysis and graphical representation using UML, in understanding the dependencies of sub-systems and packages in CMS Object Oriented software, in understanding the architectural structure, and in scheduling code releases in modularised integration. It also allows a newcomer to understand the global structure of CMS software more easily, and to avoid circular dependencies up front or to re-factor the code in case it was already too close to the edge of non-maintainability. We discuss the various views DepUty provides to analyse package dependencies, and illustrate both the metrics and the style checking facilities it provides.
  • Software Process Improvement in CMS—Are we Different? 下载全文
  • One of the most challenging issues faced in HEP in recent years is the question of how to capitalise on software development and maintenance experience in a continuous manner. To capitalise, in our context, means to evaluate and apply new technologies as they arise, and to further evolve technologies already widely in use. It also implies the definition and adoption of standards, while ensuring reproducibility and quality of results. The CMS process improvement effort is two-pronged: it aims at continuous improvement of the way we do Object Oriented software, as well as continuous improvement in the efficiency of the working environment. In particular, the use and creation of de facto software process standards within CMS has proven to be key to our successful software process improvement program. We describe the successful CMS implementation of a software process improvement strategy, following ISO 15504 for three years now. We give the current status of the most important process families formally established in CMS, and provide the guidelines we followed both for tool development and for methodology establishment.
  • Code Organization and Configuration Management 下载全文
  • Industry experts are increasingly focusing on team productivity as the key to success. The basis of the team effort is the four-fold structure of software in terms of logical organisation, physical organisation, managerial organisation, and dynamical structure. We describe the ideas put into action within the CMS software for organising software into sub-systems and packages, and for establishing configuration management in a multi-project environment. We use a structure that maximises the independence of software development in individual areas, while at the same time emphasising the overwhelming importance of the interdependencies between the packages and components in the system. We comment on release procedures, and describe the inter-relationship between release, development, integration, and testing.
  • XML for Detector Description at GLAST 下载全文
  • The problem of representing a detector in a form which is accessible to a variety of applications, allows retrieval of information in ways natural to those applications, and is maintainable has been vexing physicists for some time. Although invented to address an entirely different problem domain, the document markup metalanguage XML is well suited to detector description. This paper describes its use for a GLAST detector. [7]
  • Experiencing CMT in Software Production of Large and Complex Projects: Issues in the Scalability of Software Production Management 下载全文
  • The configuration management tool CMT has now been in use for several years, in quite different projects (Virgo, GLAST, LHCb, Auger, Atlas, etc.). The features of the tool have continuously evolved according to the growing needs of the developers and to follow the increasing complexity of the software bases it has to service. However, the original concepts (readability, simplicity, flexibility, completeness) have been preserved, and the syntax of the core element of the system, the requirements file, has always been kept backward compatible. More and more project-specific conventions and needs have found their expression using CMT, and the focus of CMT features has evolved accordingly, progressively raising the importance of language customisation, new document generators, production of patterns, package organization and software distribution. The basic properties of the CMT toolkit will be briefly presented, but the focus of the discussion will be on these recent evolutions, through some typical examples obtained from actual projects showing specific definitions or conventions. The discussion is then extended towards the generalized question of scalability in software production and management, in the context of e.g. Grid technologies. The impact in this respect of using generic and high-level tools, such as CMT (which already offers several solutions), RPM, or the Grid technologies, will be presented. In particular, the role of formal specifications of the software configuration appears to be critical for the query mechanisms required in management operations or in remote actions.
  • Software Process in Geant4 下载全文
  • Since its earliest years of R&D [1], the GEANT4 simulation toolkit has been developed following software process standards which dictated the overall evolution of the project. The complexity of the software involved, the wide areas of application of the software product, the huge amount of code and category complexity, and the size and distributed nature of the Collaboration itself are all ingredients which involve and correlate a wide variety of software processes. Although in "production" and available to the public since December 1998, the GEANT4 software product [1] includes category domains which are still under active development; these require different treatment, also in terms of improvement of the development cycle, system testing and user support. This article describes some of the software processes as they are applied in GEANT4 for development, testing and maintenance of the software.
  • The Geometry Description Markup Language 下载全文
  • Currently a lot of effort is being put into designing complex detectors. A number of simulation and reconstruction frameworks and applications have been developed with the aim of making this job easier. A very important role in this activity is played by the geometry description of the detector apparatus layout and its working environment. However, no real common approach to representing geometry data is available, and such data can be found in various forms, from custom semi-structured text files and source code (C/C++/FORTRAN) to XML and database solutions. XML (Extensible Markup Language) has proven to provide an interesting approach for describing detector geometries, but several different and incompatible XML-based solutions exist; interoperability and geometry data exchange among different frameworks is therefore not possible at present. This article introduces a markup language for geometry descriptions. Its aim is to define a common approach for sharing and exchanging geometry description data. Its requirements and design have been driven by experience and user feedback from existing projects which have their geometry description in XML.
  • Distributed Simulation of Large Computer Systems 下载全文
  • Sequential simulation of large complex physical systems is often regarded as a computationally expensive task. In order to speed up complex discrete-event simulations, the paradigm of Parallel and Distributed Discrete Event Simulation (PDES) has been developed since the late 70s. In this paper we analyze the applicability of PDES to the modeling and analysis of large computer systems; such systems are increasingly common in the area of High Energy and Nuclear Physics, because many modern experiments make use of large "compute farms". Some feasibility tests have been performed on a prototype distributed simulator.
  • A Generic Digitization Framework for the CDF Simulation 下载全文
  • Digitization from GEANT tracking requires a predictable sequence of steps to produce raw simulated detector readout information. We have developed a software framework that simplifies the development and integration of digitizers by separating the coordination activities (sequencing and dispatching) from the actual digitization process. This separation allows the developers of digitizers to concentrate on digitization. The framework provides the sequencing infrastructure and a digitizer model, which means that all digitizers are required to follow the same sequencing rules and provide an interface that fits the model.
  • In the Land of the Dinosaurs, How to Survive: Experience with Building a Midrange Computing Cluster 下载全文
  • This paper discusses the putting into operation of a midrange computing cluster for the Nuclear Chemistry Group (NCG) of the State University of New York at Stony Brook (SUNY-SB). The NCG is one of the collaborators within the RHIC/PHENIX experiment located at Brookhaven National Laboratory (BNL). The PHENIX detector system produces about half a PB (or 500 TB) of data a year, and our goal was to provide this remote collaborating facility with the means to be part of the analysis process. The computing installation was put into operation at the beginning of the year 2000. The cluster consists of 32 peripheral machines running under Linux and a central Alpha 4100 server under Digital Unix 4.0F (later renamed Tru64 Unix). The realization process is discussed in the paper.
  • Ignominy:a Tool for Software Dependency and Metric Analysis with Examples from Large HEP Packages 下载全文
  • Ignominy is a tool developed in the CMS IGUANA project to analyse the structure of software systems. Its primary component is a dependency scanner that distills information into human-usable forms. It also includes several tools to visualise the collected data in the form of graphical views and numerical metrics. Ignominy was designed to adapt to almost any reasonable structure, and it has been used to analyse several large projects. The original purpose of Ignominy was to help us better ensure the quality of our own software, and in particular to warn us about possible structural problems early on. As part of this activity it is now used as a standard part of our release procedure; we also use it to evaluate and study the quality of external packages we plan to make use of. We describe what Ignominy can find out, and how it can be used to visualise and assess a software structure. We also discuss the inherent problems of the analysis, as well as the different approaches to modularity the tool makes quite evident. The focus is the illustration of these issues through the analysis results for several sizable HEP software projects.
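The sort of numerical metric such a dependency scanner can report is illustrated below with afferent/efferent coupling counts and the derived instability I = Ce/(Ca+Ce) (Robert Martin's metric). The package names and dependency graph are invented, and Ignominy's actual metrics may differ:

```python
# Toy dependency metrics in the spirit of a dependency scanner:
# count afferent (Ca, who depends on me) and efferent (Ce, whom I
# depend on) couplings per package, and derive the instability
# I = Ce / (Ca + Ce).  Package names are invented for illustration.

DEPS = {
    "Reco": {"Geometry", "Utils"},
    "Geometry": {"Utils"},
    "Utils": set(),
}

def metrics(deps):
    ca = {p: 0 for p in deps}
    for p, targets in deps.items():
        for t in targets:
            ca[t] += 1
    out = {}
    for p in deps:
        ce = len(deps[p])
        denom = ca[p] + ce
        out[p] = {"Ca": ca[p], "Ce": ce,
                  "I": ce / denom if denom else 0.0}
    return out

m = metrics(DEPS)
assert m["Utils"] == {"Ca": 2, "Ce": 0, "I": 0.0}  # stable leaf package
assert m["Reco"]["I"] == 1.0                       # depends only outward
```

A package with I near 0 is heavily depended upon and should be stable; one with I near 1 is free to change, which is the kind of structural signal a release procedure can check automatically.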
  • Re-usable Templates for Documenting the Elaboration and Architectural Design of the CMS Software 下载全文
  • Modern standards and definitions of deliverables for software development are provided by various standards such as PSS-05, CMMI, ECSS, V, the Rational Unified Process, and SPICE (ISO 15504). Modern document templates and the corresponding documents are based on atomic shells that cross-link and can subsequently be assembled into a set of complete documents: views of the information in the shells. This makes the information easy to maintain, and enables selective views of the documentation. We present a catalogue of document templates that has been developed in the context of the CMS CAFE forum, as well as their cross-linkage, using UML as the modelling language. The templates allow the elaboration and architectural design phases of software development to be documented. They can be used as the basis for establishing and documenting architecture, while establishing traceability to use-cases, requirements, constraints, and important technological choices in a maintainable manner.
  • CMS Software Distribution and Installation Systems: Concepts, Practical Solutions and Experience at Fermilab as a CMS Tier 1 Center 下载全文
  • The CMS Collaboration of 2000 scientists involves 150 institutions from 31 nations spread all over the world. CMS software system integration and release management is performed at CERN. Code management is based on CVS, with read or write access to the repository via a CVS server. The software configuration and release management tools (SCRAM) are being developed at CERN. Software releases are then distributed to regional centers, where the software is used by the local community for a wide variety of tasks, such as software development, detector simulation, reconstruction and physics analysis. Depending on the specific application, the system environment and local hardware requirements, different approaches and tools are used for CMS software installation at different places. This presentation describes concepts and practical solutions for a variety of ways of software distribution, with an emphasis on the CMS experience at Fermilab. Installation and usage of the different models used for the production farm, for code development and for physics analysis are described.
  • Development of the ATLAS Simulation Framework 下载全文
  • The object-oriented (OO) approach is the key technology for developing a software system in the LHC/ATLAS experiment. We have developed an OO simulation framework based on the Geant4 general-purpose simulation toolkit. Because of the complexity of simulation in ATLAS, we paid most attention to scalability in its design. Although the first application of this framework is the ATLAS full detector simulation program, it contains no experiment-specific code; it can therefore be utilized for the development of any simulation package, not only for HEP experiments but also for various other research domains. In this paper we discuss our approach to the design and implementation of the framework.
  • Simulating the Farm Production System Using the MONARC Simulation Tool 下载全文
  • The simulation program developed by the "Models of Networked Analysis at Regional Centers" (MONARC) project is a powerful and flexible tool for simulating the behavior of large-scale distributed computing systems. In this study we further validate this simulation tool on a large-scale distributed farm computing system. We also report the use of this simulation tool to identify the bottlenecks and limitations of our farm system.
  • Architecture of Collaborating Frameworks: Simulation, Visualisation, User Interface and Analysis 下载全文
  • In modern high energy and astrophysics experiments the variety of user requirements and the complexity of the problem domain often involve the collaboration of several software frameworks, with different components responsible for providing the functionality related to each domain. For instance, a common use case consists in studying the physics effects and the detector performance resulting from primary events in a given detector configuration, in order to evaluate the physics reach of the experiment or to optimise the detector design. Such a study typically involves various components: Simulation, Visualisation, Analysis and (interactive) User Interface. We focus on the design aspects of the collaboration of these frameworks and on the technologies that help to simplify the complex process of software design.
  • StoreGate: a Data Model for the ATLAS Software Architecture 下载全文
  • ATLAS [1] has recently joined Gaudi, an open project to develop a data processing framework for HEP experiments [2]. The data model is one of the areas where ATLAS has most extended the original Gaudi design to meet the experiment's own requirements. This paper describes StoreGate, the first implementation of the ATLAS Data Model.
  • Detector Description Software for Simulation,Reconstruction and Visualisation 下载全文
  • This paper describes software that reads the detector description from tag-based ASCII files and builds an independent detector representation (its only dependency being the CLHEP library) in transient mode. A second package uses this transient representation to build automatically a GEANT4 representation of the detector geometry and materials. The software supports any kind of element and material, including material mixtures built by giving the weight fractions, the volume fractions, or the number of atoms of each component. The common solid shapes (box, cube, cone, sphere, polycone, polyhedron, ...) can be used, as well as solids made from boolean operations on other solids (addition, subtraction and intersection). The geometry volumes can be placed through simple positioning, or by positioning several copies following a given formula so that each copy can have a different position and rotation; division of a volume along one of its axes is also supported for the basic solid shapes. The Simulation, Reconstruction and Visualisation (ROOT-based) software contains no detector data. Instead, this data is always accessed through a unique interface to the GEANT4 detector representation package, which manages the complicated parameterised positionings and divisions and returns GEANT4-independent objects. Both packages have been built following strict Software Engineering practices, which we also describe in the paper. The software has been used for the detector description of the PS214 (HARP) experiment at CERN, which has been taking data since April 2001. With minor modifications it is also being used for the GEANT4 simulation of the CMS experiment.
  • Putting It All Together: Experience and Challenges at the DELPHI Off-line Processing Farm 下载全文
  • After over 10 years of existence, the DELPHI off-line software counts altogether over 1500k lines. Having been written by a multitude of authors, many of whom have already left, it is very incoherent and extremely difficult to maintain: it is written in Fortran, it relies on obsolete tools, and it has to run in a distributed multi-OS computing environment. Still, as the analysis of LEP data will continue during the next 5-6 years, this code will have to be used and ported to yet more platforms. In order to ensure high efficiency in the use of our resources, we have developed several tools which hide from the user most of the intricacies of the operating system, batch system and data access. These tools are well integrated and easy to maintain. As the problems are quite typical for High Energy Physics software, we believe that the ideas we have implemented can also be useful for the next generation of experiments.
  • GEANT4 in the AliRoot Framework 下载全文
  • The development of the GEANT4 application for ALICE simulation within the AliRoot framework is described. The G3toG4 approach adopted by the ALICE collaboration is explained. An overview of the design, present implementation and functionality is presented, and the remaining problems are discussed.
  • Extensible Numerical Library in JAVA 下载全文
  • In this paper we present the current status of the project for developing a numerical library in Java. At a previous conference we presented how object-oriented techniques improve the usage, and also the development, of numerical libraries compared with the conventional way. We need many functions for data analysis which are not provided within the Java language, for example good random number generators, special functions and so on. Our development strategy focuses on ease of implementation and of adding new features, by users themselves and not only by developers. In the HPC field there are other focused efforts to develop numerical libraries in Java; however, their focus is on execution performance, not ease of extension. Following our strategy, we have designed and implemented more classes for random number generators and so on.
  • Model and Information Abstraction for Description-Driven Systems 下载全文
  • A crucial factor in the creation of adaptable systems dealing with changing requirements is the suitability of the underlying technology in allowing the evolution of the system. A reflective system utilizes an open architecture where implicit system aspects are reified to become explicit first-class (meta-data) objects. These implicit system aspects are often fundamental structures which are inaccessible and immutable, and their reification as meta-data objects can serve as the basis for changes and extensions to the system, making it self-describing. To address the evolvability issue, this paper proposes a reflective architecture based on two orthogonal abstractions: model abstraction and information abstraction. In this architecture the modeling abstractions allow the separation of the description meta-data from the system aspects they represent, so that they can be managed and versioned independently, asynchronously and explicitly.
  • PC Farms for Triggering and Online Reconstruction at HERA—B 下载全文
  • The HERA-B data acquisition and triggering systems make use of Linux PC farms for triggering and online event reconstruction. We present in this paper the requirements, implementation and performance of both PC farms. They have been fully working during the year 2000 detector and trigger commissioning run.
  • Data Acquisition System for the SND2000 Experiment 下载全文
  • SND is a spherical non-magnetic detector which has operated since 1996 at the VEPP-2M electron-positron collider in Novosibirsk. The VEPP-2M collider is now being dismantled, to be replaced by a new VEPP-2000 machine with higher energy and luminosity. The SND detector is also undergoing an upgrade of its subsystems, including electronics and software. The expected substantial growth of the event dataflow requires radical changes in the Data Acquisition (DAQ) system software. This paper describes the SND2000 software architecture and its principal components. First the main event flow processing components are considered: the readout process and the L3-trigger farm. After processing by L3, the event flow is either logged to tape or fed to calibration and slow control processes. The management of these activities is performed using auxiliary control and service software components, which are also described.
  • Application of DSPs in Data Acquisition Systems for Neutron Scattering Experiments at the IBR-2 Pulsed Reactor 下载全文
  • DSPs are widely used in data acquisition systems on neutron spectrometers at the IBR-2 pulsed reactor. In this report several electronic blocks, based on DSPs of the TI TMS320Cxxxx family and intended to solve different tasks in DAQ systems, are described.
  • A Multi Purpose DAQ System Developed for the nTOF Commissioning 下载全文
  • The neutron Time Of Flight (nTOF) facility at CERN is a high flux spallation neutron source commissioned in November 2000 and April 2001. For the commissioning phases an innovative multipurpose DAQ system was developed and used. This system is capable of handling high data rates (10 Mb/s) by using the National Instruments PCI-VXI interface board to communicate with the VME crates. The graphical user interface is based on Java and ROOT, allowing immediate visualization of the data and providing a flexible way of creating configurations for various experimental setups.
  • High Level Trigger System for the ALICE Experiment 下载全文
  • The ALICE experiment [1] at the Large Hadron Collider (LHC) at CERN will detect up to 20,000 particles in a single Pb-Pb event, resulting in a data rate of ~75 MByte/event. The event rate is limited by the bandwidth of the data storage system. Higher rates are possible by selecting interesting events and subevents (High Level Trigger) or by compressing the data efficiently with modeling techniques. Both require fast parallel pattern recognition. One possible solution for processing the detector data at such rates is a farm of clustered SMP nodes, based on off-the-shelf PCs and connected by a high-bandwidth, low-latency network.
  • The Digital Analog Optical Module (dAOM): a Technology for the AMANDA Experiment at the South Pole 下载全文
  • For the AMANDA experiment a new type of optical module, the digital analog optical module (dAOM), has been developed. It incorporates some local 'intelligence' for slow control and active electronics for analog pulse transmission. More than 20 dAOM prototypes were successfully deployed into the polar ice during the 1999/2000 antarctic season and have been running since that time. They are connected to the dAOM DAQ boards at the surface by single twisted-pair cables over distances of up to 2.7 km. CORBA-based client-server applications establish world-wide logical access to every dAOM.
  • Large Scale and Performance Tests of the ATLAS Online Software 下载全文
  • One of the sub-systems of the Trigger/DAQ system of the future ATLAS experiment is the Online Software system. It encompasses the functionality needed to configure, control and monitor the DAQ. Its architecture is based on a component structure described in the ATLAS Trigger/DAQ technical proposal. Regular integration tests ensure its smooth operation in test beam setups during its evolutionary development towards the final ATLAS online system, and feedback is returned into the development process. Studies of the system behavior have been performed on a set of up to 111 PCs, a configuration approaching the final size. Large scale and performance tests of the integrated system were performed on this setup, with emphasis on investigating the inter-dependence of the components and the performance of the communication software. Of particular interest were the run control state transitions in various configurations of the run control hierarchy. For the purpose of the tests, the software from the other Trigger/DAQ sub-systems was emulated. This paper presents a brief overview of the online system structure, its components, and the large scale integration tests and their results.
  • Neural Network Real Time Event Selection for the DIRAC Experiment 下载全文
  • The neural network real-time event selection for the DIRAC experiment at CERN is presented. It consists of two independent parts: one uses the plastic scintillators, the other the vertical scintillating fibres. The global event decision is taken in less than 250 ns. Signal events are selected with an efficiency of more than 0.99, with a background rate reduction of about 2.
  • Architecture Design of Trigger and DAQ System for Fermilab CKM Experiment 下载全文
  • The Fermilab CKM (E921) experiment studies a rare kaon decay which has a very small branching ratio and can be very hard to separate from background processes. A trigger and DAQ system is required to collect all the information necessary for background rejection and to maintain high reliability at a high beam rate. These unique challenges have emphasized the following guiding concepts: (1) Collecting background is as important as collecting good events. (2) A DAQ "event" should not be just a "snapshot" of the detector; it should be a short history record of the detector around the candidate event. The hit history provides information to understand temporary detector blindness, which is extremely important to the CKM experiment. (3) The main purpose of the trigger system should not be "knocking down the trigger rate" or "throwing out garbage events"; instead, it should classify the events and select appropriate data collecting strategies among various predefined ones for the given types of events. The following methodologies are employed in the architecture to fulfill the experiment requirements without confronting unnecessary technical difficulties: (1) Continuous digitization near the detector elements is utilized to preserve the data quality. (2) The concept of minimum synchronization is adopted to eliminate the need for time-matching signal paths. (3) A global level 1 trigger performs coincidence and veto functions using digital timing information, to avoid problems due to signal degradation in long cables. (4) The DAQ logic allows the collection of chronicle records around the interesting events with different levels of detail of ADC information, so that very low energy particles in the veto systems can be best detected. (5) A re-programmable hardware trigger (L2.5) and a software trigger (L3) sitting in the DAQ stream are planned to perform data selection functions based on full detector data, with adjustability.
  • Bus-based DAQ Architecture for the ARGO-YBJ Experiment Download Full Text
  • The ARGO-YBJ experiment is presently under construction at the Yangbajing High Altitude Cosmic Ray Laboratory (4300 m a.s.l.), 90 km north of Lhasa (Tibet, People's Republic of China). ARGO-YBJ will study fundamental issues in cosmic-ray and astroparticle physics by detecting small-size air showers. The detector covers about 71×74 square meters with a single layer of Resistive Plate Counters (RPCs), surrounded by a partially instrumented guard ring. An event-driven data-collection scheme is implemented using a custom bus protocol. Key features of this architecture are block-oriented data transfer and read-out cycles labeled by trigger number. Hardware engines in both master and DAQ boards handle bus transactions and provide event-building capability, achieving real-time data processing with no software overhead. In this paper we present the hardware design of the ARGO experiment's DAQ, which benefits from the flexible architecture of in-system reconfigurable FPGAs. The data-acquisition boards specifically developed for this application are described.
  • The PC-Based ATLAS Event Filter Prototype: Supervision Design, Implementation and Tests Download Full Text
  • The studies undertaken to prepare the Technical Design Report of the ATLAS third-level trigger (Event Filter) are performed on different prototypes based on different technologies. We present here the most recent results obtained for the supervision of the prototype based on conventional, off-the-shelf PC machines and Java mobile-agent technology.
  • On the Way to Maturity: The CLEO III Data Acquisition and Control System Download Full Text
  • For more than one year the CLEO III experiment at the Cornell electron-positron storage ring CESR has accumulated physics data using a 4-layer silicon vertex detector and a novel ring-imaging Cherenkov detector, along with a conventional drift chamber, electromagnetic calorimeter, and muon chambers. By the time of CHEP 2001 the experiment had accumulated 10 fb^-1 of data. The readout and monitoring systems control ca. 400,000 electronic channels. Detector configuration, data quality and component monitoring, and run control are only some of the slow-control tasks that have to be performed. Deploying industry standards such as CORBA (inter-platform communication), Java (remote access via Web browser), and Objectivity (event and constants database) as building blocks of the computer network has been of central importance. Object-oriented design has enabled the seamless integration of the many individual components. In our presentation we describe experiences with this distributed control system and report on the measures taken to obtain high system availability.
  • Design and Prototyping of the ATLAS High Level Trigger Download Full Text
  • This paper outlines the design and prototyping of the ATLAS High Level Trigger (HLT), which is a combined effort of the Data Collection HLT and PESA (Physics and Event Selection Architecture) subgroups within the ATLAS TDAQ collaboration. Two important issues, already outlined in the ATLAS HLT, DAQ and DCS Technical Proposal [1], will be highlighted: the treatment of the LVL2 trigger and Event Filter as aspects of a general HLT, with a view to easier migration of algorithms between the two levels; and the unification of selective data collection for LVL2 and event building.
  • The CDF Data Acquisition System for Tevatron Run II Download Full Text
  • The CDF experiment at the Fermilab Tevatron has been significantly upgraded for the collider Run II, which started in March 2001 and is scheduled to last until 2006. Instantaneous luminosities of 10^32 cm^-2 s^-1 and above are expected. A data acquisition system capable of efficiently recording the data has been one of the most critical elements of the upgrade. Key figures are the ability to deal with the short bunch spacing of 132 ns, event sizes of the order of 250 kB, and permanent logging at 20 MB/s. The design of the system and experience from the first months of data-taking operation are discussed.
  • Clustered Data Acquisition for the CMS Experiment Download Full Text
  • Powerful mainstream computing equipment and the advent of affordable multi-gigabit communication technology allow us to tackle data acquisition problems with clusters of inexpensive computers. Such networks typically incorporate heterogeneous platforms, real-time partitions, and custom devices. Therefore, one must strive for a software infrastructure that efficiently combines the nodes into a single, unified resource for the user. The overall requirements for such middleware are high efficiency and configuration flexibility. Intelligent I/O (I2O) is an industry specification that defines a uniform messaging format and execution model for processor-enabled communication equipment. Mapping this concept to a distributed computing environment and encapsulating the details of the specification into an application-programming framework allow us to provide run-time support for cluster operation. This paper gives a brief overview of XDAQ, a framework that we designed and implemented at CERN for the Compact Muon Solenoid experiment's prototype data acquisition system.
  • KONOE: A Toolkit for an Object-Oriented, Network-Distributed Online Environment Download Full Text
  • A series of software packages has been developed for small-scale data acquisition and analysis systems. Various classes were written in C++ and partly in Java, and a private protocol was introduced for object persistency. A demonstration system was built and its performance was tested in a beam test at KEK-PS.
  • ATLAS DAQ Configuration Databases Download Full Text
  • The configuration databases are an important part of the Trigger/DAQ system of the future ATLAS experiment. This paper describes their current status, giving details of the architecture, implementation, test results, and plans for future work.
  • Data Collection and Processing for the ARGO Yangbajing Experiment Download Full Text
  • ARGO-YBJ, a Chinese-Italian collaboration, is about to finish the first step of the installation of this cosmic-ray telescope, consisting of a single layer of RPCs placed at 4300 m elevation in Tibet. The detector will provide a detailed space-time picture of shower fronts initiated by primaries with energies in the range 10 GeV-500 TeV. Data taking will start at the beginning of 2002 with a fraction of the detector installed; the detector will be upgraded twice, being completed at the end of 2003. In this paper we briefly describe the dataflow, the trigger organization, the three operational steps in data taking, and the computing model used to process the data. The need for remote monitoring of the experiment is also touched upon. The processing power required for raw-data reconstruction and for Monte Carlo simulation is reported.
  • The Linux-Based Distributed Data Acquisition System for the ISTRA+ Experiment Download Full Text
  • The DAQ hardware of the ISTRA+ experiment consists of a VME system crate that contains two PCI-VME bridges interfacing two PCs with VME, an external-interrupts receiver, the readout controller for dedicated front-end electronics, the readout-controller buffer-memory module, a VME-CAMAC interface, and additional control modules. The DAQ computing consists of 6 PCs running the Linux operating system and linked into a LAN. The first PC serves the external interrupts and acquires the data from the front-end electronics; the second is the slow-control computer; the remaining PCs host the monitoring and data-analysis software. The Linux-based DAQ software provides external-interrupt processing and data acquisition, recording, and distribution between the monitoring and data-analysis tasks running on the DAQ PCs. The monitoring programs are based on two packages for data visualization: a home-written one and the ROOT system. MySQL is used as the DAQ database.
  • The CMS Event Builder Demonstrator and Results with Ethernet and Myrinet Switch Technologies Download Full Text
  • The data acquisition system for the CMS experiment at the Large Hadron Collider (LHC) will require a large, high-performance event-building network. Several architectures and switch technologies are currently being evaluated. This paper describes demonstrators that have been set up to study a small-scale event builder based on PCs emulating high-performance sources and sinks connected via Ethernet or Myrinet switches. Results from ongoing studies, including measurements of throughput and scaling, are presented.
  • The DZERO Online System Event Path Download Full Text
  • The online computing system for the DZERO experiment is used to control, monitor, and acquire data from the approximately 1-million-channel detector. This paper describes the Online Host system event data path requirements and design.
  • The BTeV DAQ and Trigger System: Some Throughput, Usability and Fault Tolerance Aspects Download Full Text
  • As presented at the last CHEP conference, BTeV triggering and data collection pose a significant challenge in construction and operation, generating 1.5 terabytes/second of raw data from over 30 million detector channels. We report on facets of the DAQ and trigger farms, and on the current design of the DAQ, especially its partitioning features to support commissioning of the detector. We are exploring collaborations with computer-science groups experienced in fault-tolerant, dynamic real-time and embedded systems to develop a system that provides the extreme flexibility and high availability required of the heterogeneous trigger farm (~ten thousand DSPs and commodity processors). We describe directions in the following areas: system modeling and analysis using the Model Integrated Computing approach, to assist in the creation of domain-specific modeling, analysis, and program-synthesis environments for building complex, large-scale computer-based systems; system configuration management, including compilable design specifications for configurable hardware components, schedules, and communication maps; and a runtime environment with hierarchical fault detection and management: a system-wide infrastructure for rapidly detecting, isolating, filtering, and reporting faults, encapsulated in intelligent active entities (agents) running on DSPs, L2/3 processors, and other supporting processors throughout the system.
  • CMS Level-1 Regional Calorimeter Trigger System Download Full Text
  • The CMS regional calorimeter trigger system detects signatures of electrons/photons, taus, jets, and missing and total transverse energy in a deadtimeless pipelined architecture. The system receives 7000 calorimeter trigger-tower energies on 1.2 Gbaud digital copper-cable serial links and processes them in a low-latency pipelined design using custom-built electronics. At the heart of the system is the Receiver Card, which uses the new generation of Gigabit Ethernet receiver chips on a mezzanine card to convert serial data to parallel data before transmission on a 160 MHz backplane for further processing by cards that sum energies and identify electrons and jets. We describe the algorithms and hardware implementation, and summarize simulation results showing that this system is capable of handling the rate requirements while triggering on physics signals with high efficiency.
  • Specification and Simulation of the ALICE Trigger and DAQ System Download Full Text
  • The ALICE Trigger and Data Acquisition (TRG/DAQ) system is required to support an aggregate event-building bandwidth of up to 4 GByte/s and a storage capability of up to 1.25 GByte/s to mass storage. The system has been decomposed into a set of hardware and software components, and prototypes of these components are being developed. It is necessary to verify the system design and its ability to reach the expected behavior and target performance, to discover possible bottlenecks and ways to correct for them, and to explore alternative algorithms and new architectures. To achieve this, the complete TRG/DAQ system has been formally specified, and verification of the expected behavior has been performed through execution of the specification. Two tools were used for this: Foresight and Ptolemy.
  • Trigger & Data Acquisition System for the ANTARES Neutrino Telescope Download Full Text
  • The ANTARES collaboration is building a deep underwater neutrino telescope to be immersed in the Mediterranean Sea 40 km off the French coast. The detector will detect the Cherenkov light emitted by muons produced in neutrino interactions using a three-dimensional matrix of optical sensors. The telescope will be made of nearly 1000 of these elementary units distributed over a wide area of about 0.1 km^2 at an average depth of 2400 m. In order to reach sub-nanosecond resolution on light-pulse detection, signals from all OMs are analyzed and digitized locally before being sent to shore through a 50 km electro-optical cable. Front-end electronics, time alignment (clock distribution), triggering, and data acquisition for such a large and remote detector represent a real challenge and required considerable R&D studies. The technical solutions adopted by the collaboration will be described and their performance discussed.
  • CDF Run II Run Control and Online Monitor Download Full Text
  • In this paper, we discuss the CDF Run II Run Control and online event monitoring system. Run Control is the top-level application that controls the data acquisition activities across 150 front-end VME crates and related service processes. Run Control is a real-time multi-threaded application implemented in Java with flexible state machines, using JDBC database connections to configure clients, and including a user-friendly and powerful graphical user interface. The CDF online event monitoring system consists of several parts: the event monitoring programs, the display used to browse their results, the server program that communicates with the display via socket connections, the error receiver that displays error messages and communicates with Run Control, and the state manager that monitors the state of the monitoring programs.
  • A Dataflow Meta-Computing Framework for Event Processing in the H1 Experiment Download Full Text
  • Linux-based networked PC clusters are replacing both the VME non-uniform direct-memory-access systems and the SMP shared-memory systems used previously for online event filtering and reconstruction. To allow optimal use of the distributed resources of PC clusters, an open software framework is presently being developed, based on a dataflow paradigm for event processing. This framework allows for the distribution of the data of physics events and associated calibration data to multiple computers from multiple input sources for processing, and the subsequent collection of the processed events at multiple outputs. The basis of the system is the event repository, essentially a first-in first-out event store that may be read and written in a manner similar to sequential file access. Events are stored in and transferred between repositories as suitably large sequences to enable high throughput. Multiple readers can read simultaneously from a single repository to receive event sequences, and multiple writers can insert event sequences into a repository; hence repositories are used for both event distribution and collection. To support synchronisation of the event flow, the repository implements barriers. A barrier must be written by all the writers of a repository before any reader can read the barrier, and a reader must read a barrier before it may receive data from behind it. Only after all readers have read the barrier is the barrier removed from the repository. A barrier may also have attached data; in this way calibration data can be distributed to all processing units. The repositories are implemented as multi-threaded CORBA objects in C++, and CORBA is used for all data transfers. Job setup scripts are written in Python, and interactive status and histogram display is provided by a Java program. Jobs run under the PBS batch system, providing shared use of resources for online triggering, offline mass reprocessing, and user analysis jobs.
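The barrier semantics described in this abstract (all writers must write a barrier before any reader sees it; all readers must read it before it is removed, and no reader may receive data from behind an unread barrier) can be sketched in a few lines. This is an illustrative single-process Python analogue with made-up class and method names, not the H1 implementation, which uses multi-threaded CORBA objects in C++:

```python
from collections import deque

class Repository:
    """FIFO event store with barrier synchronisation (illustrative sketch)."""

    def __init__(self, n_writers, n_readers):
        self.n_writers, self.n_readers = n_writers, n_readers
        self.fifo = deque()           # event sequences and barriers, in order
        self.pending = set()          # writers that have written the current barrier

    def write_events(self, writer_id, events):
        self.fifo.append(("events", events))

    def write_barrier(self, writer_id, payload=None):
        # A barrier becomes visible only once ALL writers have written it.
        self.pending.add(writer_id)
        if len(self.pending) == self.n_writers:
            self.fifo.append(("barrier", payload, set()))  # set tracks readers
            self.pending.clear()

    def read(self, reader_id):
        if not self.fifo:
            return None
        if self.fifo[0][0] == "events":
            return self.fifo.popleft()[1]      # one reader consumes the sequence
        _, payload, seen = self.fifo[0]        # barrier at the front
        if reader_id in seen:
            return None                        # already read it: wait for others
        seen.add(reader_id)
        if len(seen) == self.n_readers:
            self.fifo.popleft()                # all readers have read it: remove
        return ("barrier", payload)
```

Because event sequences behind the barrier stay queued until every reader has consumed the barrier, a barrier with attached calibration data reaches every processing unit before any event recorded after it.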
  • Quality of Service on Linux for the ATLAS TDAQ Event Building Network Download Full Text
  • Congestion control for packets sent on a network is important for DAQ systems that contain an event builder using switching-network technologies. Quality of Service (QoS) is a technique for congestion control, and recent Linux releases provide QoS in the kernel to manage network traffic. We have analyzed the packet loss and packet distribution for the event-builder prototype of the ATLAS TDAQ system, using PCs running Linux on a Gigabit Ethernet network as the testbed. The results showed that QoS using CBQ and TBF eliminated packet loss on UDP/IP transfers, whereas best-effort UDP/IP transfers suffered heavy packet loss. The results also showed that the QoS overhead was small. We conclude that QoS on Linux performs efficiently for TCP/IP and UDP/IP and will play an important role in the ATLAS TDAQ system.
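The TBF (Token Bucket Filter) mentioned above shapes traffic by letting a packet through only if enough tokens have accumulated. A minimal simulation of that rule follows; the rate, burst size, and packet size are made-up numbers for illustration, and a real deployment would configure TBF through the Linux `tc` tool rather than in application code:

```python
class TokenBucket:
    """Minimal token-bucket shaper: a packet passes only if tokens remain."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0    # token refill rate in bytes per second
        self.burst = burst_bytes      # bucket depth (maximum burst)
        self.tokens = burst_bytes     # bucket starts full
        self.last = 0.0

    def conforms(self, now, pkt_bytes):
        # Refill tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if pkt_bytes <= self.tokens:
            self.tokens -= pkt_bytes
            return True               # packet is sent immediately
        return False                  # packet is held back by the shaper

# Offer 100 packets of 1500 B in 10 ms against a 1 MB/s, 3 kB-burst shaper.
tb = TokenBucket(rate_bps=8_000_000, burst_bytes=3000)
sent = sum(tb.conforms(t * 0.0001, 1500) for t in range(100))
```

The offered load here is far above the configured rate, so only the burst plus the refilled tokens get through; the excess would be queued or dropped, which is exactly how shaping prevents the switch-buffer overflows that cause event-builder packet loss.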
  • Deployment of Globus Tools at St. Petersburg (Russia) Download Full Text
  • In this report we discuss the deployment of the Globus toolkit in a regional grid structure devoted to LHC physics analysis. One of our peculiarities is poor network connectivity between the two parts of the experimental computing nodes. In the early stages of deployment we met several technical difficulties due to bugs and malfunctions. At PNPI, our own Certificate Authority (CA) was created.
  • Querying Large Physics Data Sets Over an Information Grid Download Full Text
  • Optimising use of the Web (WWW) for LHC data analysis is a complex problem and illustrates the challenges arising from the integration of, and computation across, massive amounts of information distributed worldwide. Finding the right piece of information can, at times, be extremely time-consuming, if not impossible. So-called Grids have been proposed to facilitate LHC computing, and many groups have embarked on studies of data replication, data migration, and networking philosophies. Other aspects, such as the role of "middleware" for Grids, are emerging as requiring research. This paper positions the need for appropriate middleware that enables users to resolve physics queries across massive data sets. It identifies the role of meta-data in query resolution and the importance of Information Grids for high-energy physics analysis, rather than just computational or Data Grids. The paper identifies software being implemented at CERN to enable the querying of very large collaborating HEP data sets, initially employed for the construction of CMS detectors.
  • Java Parallel Secure Stream for Grid Computing Download Full Text
  • The emergence of high-speed wide-area networks makes grid computing a reality. However, grid applications that need reliable data transfer still have difficulty achieving optimal TCP performance, owing to the need to tune the TCP window size to improve bandwidth and reduce latency on a high-speed wide-area network. This paper presents a pure Java package called JPARSS (Java Parallel Secure Stream) that divides data into partitions sent over several parallel Java streams simultaneously, allowing Java or Web applications to achieve optimal TCP performance in a grid environment without the need to tune the TCP window size. Several experimental results are provided to show that using parallel streams is more effective than tuning the TCP window size. In addition, an X.509 certificate-based single sign-on mechanism and SSL-based connection establishment are integrated into the package. Finally, a few applications using this package are discussed.
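The core idea of splitting one logical transfer across several parallel streams and reassembling it at the receiver can be sketched as follows. This is a simplified Python analogue of what the abstract describes for Java; the round-robin chunking, lane count, and function names are illustrative assumptions, and threads stand in for per-stream sockets:

```python
import threading

def partition(data, n_streams, chunk=4):
    """Deal fixed-size chunks of `data` round-robin onto n_streams lanes."""
    lanes = [bytearray() for _ in range(n_streams)]
    for i in range(0, len(data), chunk):
        lanes[(i // chunk) % n_streams] += data[i:i + chunk]
    return lanes

def transfer(lanes):
    """Send every lane on its own thread (stand-in for one socket per stream)."""
    received = [None] * len(lanes)
    def send(idx):
        received[idx] = bytes(lanes[idx])  # a real sender would write to a socket
    threads = [threading.Thread(target=send, args=(i,)) for i in range(len(lanes))]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return received

def reassemble(received, total_len, chunk=4):
    """Interleave the lanes back into the original byte order."""
    out, offsets, lane = bytearray(), [0] * len(received), 0
    while len(out) < total_len:
        out += received[lane][offsets[lane]:offsets[lane] + chunk]
        offsets[lane] += chunk
        lane = (lane + 1) % len(received)
    return bytes(out)

data = bytes(range(20))
restored = reassemble(transfer(partition(data, 3)), len(data))
```

The benefit on a long fat pipe is that each stream's TCP window limits only its own lane, so the aggregate throughput approaches the link capacity without kernel-level window tuning.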
  • A Distributed Agent-based Architecture for Dynamic Services Download Full Text
  • A prototype system for agent-based distributed dynamic services, to be applied to the development of Data Grids for high-energy physics, is presented. The agent-based systems we are designing and developing gather, disseminate, and coordinate configuration, time-dependent state, and other information in the Grid system as a whole. These systems are being developed as an enabling technology for workflow management and other forms of end-to-end Grid system monitoring and management. The prototype is being developed in Java and is based on JINI, mobile agents, and self-organizing neural networks.
  • Moving the LHCb Monte Carlo Production System to the GRID Download Full Text
  • The fundamental elements of the LHCb Monte Carlo production system are described, covering security, job submission, execution, data handling, and bookkeeping. An analysis is given of the main requirements for GRID facilities, together with some discussion of how the GRID can enhance this system. A summary is given of the first experiences in moving the system to a GRID environment, and the first plans for interfacing the LHCb OO framework to GRID services are outlined.
  • A Comparison of GSIFTP and RFIO on a WAN Download Full Text
  • We present a comparison of the wide-area file transfer performance of the new Globus GSIFTP and CERN's RFIO tools. The full version of the paper is available on the web at the following URL.
  • Grid-Enabled Data Access in the ATLAS Athena Framework Download Full Text
  • Athena is the common framework used by the ATLAS experiment for simulation, reconstruction, and analysis. By design, Athena supports multiple persistence services and insulates users from technology-specific persistence details. Athena users, and even most Athena package developers, should neither know nor care whether data come from the grid or from local filesystems, nor whether data reside in object databases, in ROOT or ZEBRA files, or in ASCII files. In this paper we describe how Athena applications may transparently take advantage of emerging services provided by grid software today: how data generated by Athena jobs are registered in grid replica catalogs and other collection-management services, and the means by which input data are identified and located in a grid-aware collection-management environment. We outline an evolutionary path toward the incorporation of grid-based virtual data services, whereby locating data may be replaced by locating a recipe according to which those data may be generated. Several implementation scenarios, ranging from low-level grid catalog services (e.g., from Globus), through higher-level services such as the Grid Data Management Pilot (under development as part of the European DataGrid project, in collaboration with the Particle Physics Data Grid), to more conventional database services, together with a common architecture to support these various scenarios, are also described.
  • GRID Activities in ALICE Download Full Text
  • The challenge of LHC computing, with data rates in the range of several PB/year, requires the development of GRID technologies to optimize the exploitation of distributed computing power and automatic access to distributed data storage. In the framework of the EU DataGrid project, the ALICE experiment is one of the selected test applications for the early development and implementation of GRID services. Presently, about 15 ALICE sites are making use of available GRID tools, and a large-scale test production involving 9 of them was carried out with our simulation program. The results are discussed in detail, as well as future plans.
  • Globus Toolkit Support for Distributed Data-Intensive Science Download Full Text
  • In high-energy physics, terabyte- and soon petabyte-scale data collections are emerging as critical community resources. A new class of "Data Grid" infrastructure is required to support distributed access to, and analysis of, these datasets by potentially thousands of users. Data Grid technology is being deployed in numerous experiments through collaborations such as the EU DataGrid, the Grid Physics Network, and the Particle Physics Data Grid [1]. The Globus Toolkit is a widely used set of services designed to support the creation of these Grid infrastructures and applications. In this paper we survey the Globus technologies that will play a major role in the development and deployment of these Grids.
  • Bilevel Architecture for High-Throughput Computing Download Full Text
  • We have prototyped and analyzed the design of a novel approach to high-throughput computing, a core element of the emerging HENP computational grid. Independent event processing in HENP is well suited to parallel computing. The prototype facilitates the use of inexpensive mass-market components by providing fault-tolerant resilience (instead of expensive total-system reliability) via highly scalable management components. The ability to handle both hardware and software failures on a large dedicated HENP facility limits the need for user intervention. Robust data management is especially important in HENP computing, since large data flows occur before and/or after each processing task. The architecture of our active-object coordination schema implements a multi-level hierarchical agent model. It provides fault tolerance by splitting a large overall task into independent atomic processes, performed by lower-level agents that synchronize with each other via a local database. Necessary control functions are performed by higher-level agents interacting with the same database, thus managing distributed data production. The system has been tested in a production environment for simulations in the STAR experiment at RHIC. Our architectural prototype controlled processes on more than a hundred processors at a time and has run for extended periods. Twenty terabytes of simulated data have been produced. The generic nature of our two-level architectural solution for fault tolerance in a distributed environment has been demonstrated by its successful test of the grid file-replication services between BNL and LBNL.
  • Grid Data Farm for ATLAS Simulation Data Challenges Download Full Text
  • Evaluation of MOSIX-Linux Farm Performance in a GRID Environment Download Full Text
  • The MOSIX extensions to the Linux operating system allow the creation of high-performance Linux farms and an excellent integration of the farm's CPUs, whose computational power can be further increased and made more effective by networking them within the GRID environment. Following this strategy, we started to perform computational tests using two independent farms within the GRID environment. In particular, we performed a preliminary evaluation of the distributed computing efficiency of a MOSIX Linux farm in the simulation of gravitational-wave data analysis for coalescing binaries. For this task, two different techniques were compared: the classical matched-filters technique and one of its possible evolutions, based on a global optimisation technique.
  • Distributed Parallel Interactive Data Analysis Using the PROOF System Download Full Text
  • The only way terabytes of data can be processed and analyzed in a reasonable time is by using parallel processing architectures. The Parallel ROOT Facility, PROOF, is a system for the parallel interactive analysis of such datasets on clusters of heterogeneous computers. Early prototypes have confirmed the validity of the basic PROOF architecture. However, some important work still has to be done before PROOF can be used as a production facility. The basic architecture and the planned developments are described in this paper.
  • Integrating GRID Tools to Build a Computing Resource Broker: Activities of DataGrid WP1 Download Full Text
  • Resources on a computational Grid are geographically distributed, heterogeneous in nature, owned by different individuals or organizations with their own scheduling policies, and have different access-cost models with dynamically varying loads and availability conditions. This makes traditional approaches to workload management, load balancing, and scheduling inappropriate. The first work package (WP1) of the EU-funded DataGrid project is addressing the issue of optimizing the distribution of jobs onto Grid resources based on knowledge of the status and characteristics of these resources that is necessarily out-of-date (collected in a finite amount of time at a very loosely coupled site). We describe the DataGrid approach of integrating existing software components (from Condor, Globus, etc.) to build a Grid Resource Broker, and the early efforts to define a workable scheduling strategy.
  • Design and Evaluation of Dynamic Replication Strategies for a High-Performance Data Grid Download Full Text
  • Physics experiments that generate large amounts of data need to be able to share it with researchers around the world. High-performance grids facilitate the distribution of such data to geographically remote places. Dynamic replication can be used as a technique to reduce bandwidth consumption and access latency when accessing these huge amounts of data. We describe a simulation framework that we have developed to model a grid scenario, which enables comparative studies of alternative dynamic replication strategies. We present preliminary results obtained with this simulator, in which we evaluate the performance of six different replication strategies for three different kinds of access patterns. The simulation results show that the best strategy yields significant savings in latency and bandwidth consumption if the access patterns contain a moderate amount of geographical locality.
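The kind of comparison this abstract describes can be illustrated with a toy simulator. The sketch below contrasts "no replication" against a simple LRU-style local-replica cache under an access pattern with geographic locality; the strategy names, cost units, and popularity model are invented for illustration and are not the six strategies or framework from the paper:

```python
import random

def simulate(strategy, accesses, cache_size, remote_cost=10, local_cost=1):
    """Total access cost; 'cache' keeps an LRU set of locally replicated files."""
    cache, cost = [], 0
    for f in accesses:
        if strategy == "cache" and f in cache:
            cost += local_cost               # served from the local replica
            cache.remove(f)
            cache.append(f)                  # refresh LRU position
        else:
            cost += remote_cost              # fetch from the remote center
            if strategy == "cache":
                cache.append(f)              # replicate the file locally
                if len(cache) > cache_size:
                    cache.pop(0)             # evict least recently used
    return cost

random.seed(0)
# Access pattern with locality: most requests hit a small "popular" set.
accesses = [random.choice(range(5)) if random.random() < 0.8
            else random.choice(range(100)) for _ in range(1000)]
no_rep = simulate("none", accesses, cache_size=10)
with_rep = simulate("cache", accesses, cache_size=10)
```

With a locality-rich pattern, the cached strategy pays the remote cost only on first access and on cache misses, which is the mechanism behind the latency and bandwidth savings reported above.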
  • Report on the INFN-GRID Globus Evaluation Download Full Text
  • This article summarizes the activities and results of the Globus software evaluation carried out in the framework of the INFN-GRID project.
  • Workflow Management for a Cosmology Collaboratory Download Full Text
  • The Nearby Supernova Factory project will provide a unique opportunity to bring together simulation and observation to address crucial problems in particle and nuclear physics. Its goal is to significantly enhance our understanding of the nuclear processes in supernovae and to improve our ability to use both Type Ia and Type II supernovae as reference light sources (standard candles) in precision measurements of cosmological parameters. Over the past several years, astronomers and astrophysicists have been conducting in-depth sky searches with the goal of identifying supernovae in their earliest evolutionary stages and, during the 4 to 8 weeks of their most "explosive" activity, measuring their changing magnitude and spectra. The search program currently under development at LBNL is an earth-based observation program utilizing observational instruments at Haleakala and Mauna Kea, Hawaii, and Mt. Palomar, California. This new program provides a demanding testbed for the integration of computational, data-management, and collaboratory technologies. A critical element of this effort is the use of emerging workflow-management tools to permit collaborating scientists to manage data processing and storage and to integrate advanced supernova simulation into the real-time control of the experiments. This paper describes the workflow-management framework for the project, discusses security and resource-allocation requirements, and reviews emerging tools to support this important aspect of collaborative work.
  • SAM and the Particle Physics Data Grid Download Full Text
  • The D0 experiment's data and job management system software, SAM, is an operational prototype of many of the concepts being developed for Grid computing. We explain how the components of SAM map onto the Data Grid architecture. We discuss the future use of Grid components either to replace existing components of SAM or to extend its functionality and utility, work being carried out as part of the Particle Physics Data Grid (PPDG) project.
  • Resource Management in SAM: The D0 Data Grid Download Full Text
  • One of the key components of any grid architecture is managing compute and storage resources and optimizing their utilization. SAM has implemented features that allow it to exercise a "fair share" and "prioritized" policy among many groups of users. The goals are as follows: 1) implement the experiment's policies for resource usage by research group and by data-access mode, and 2) optimize resource usage to maximize the overall throughput, defined in terms of real data-processing activity. At the lowest level of the SAM architecture, called the station, the SAM system integrates data delivery and cache management with the job control and scheduling of the batch system. At the site level, for example at Fermilab, requests for data from on-site stations are managed to optimize Mass Storage System resources and network throughput. Management of resources at various geographic levels is discussed.
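A "fair share" policy of the kind described above can be sketched with a simple rule: always run the next job from the queued group whose accumulated usage is furthest below its configured share. The group names, share values, and data structures below are illustrative assumptions, not SAM's actual policy engine:

```python
def pick_next(groups, usage):
    """Fair-share pick: the queued group with the lowest usage/share ratio runs."""
    eligible = [g for g in groups if groups[g]["queued"] > 0]
    return min(eligible, key=lambda g: usage.get(g, 0) / groups[g]["share"])

# Two hypothetical research groups: "top" is entitled to 3x the share of "higgs".
groups = {"top":   {"share": 3, "queued": 5},
          "higgs": {"share": 1, "queued": 5}}
usage = {"top": 0, "higgs": 0}
order = []
for _ in range(8):                    # dispatch 8 job slots
    g = pick_next(groups, usage)
    order.append(g)
    usage[g] += 1                     # account the slot against the group
    groups[g]["queued"] -= 1
```

Because the ratio, not the raw count, drives the choice, the dispatch order converges toward the configured 3:1 share while still giving the low-share group regular slots, rather than starving it.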
  • FarMon: An Extensible, Efficient Cluster Monitoring System
  • This paper presents the design and implementation of FarMon, a flexible event monitoring system for computing clusters. Using several techniques, including DCL (Dynamic Class Loading), a module publish/subscribe/unsubscribe protocol and a directory service, we create a highly efficient, extensible and portable cluster monitoring system.
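    The publish/subscribe/unsubscribe protocol mentioned above can be sketched as a tiny event bus: monitoring modules publish events on named topics, and consumers register or deregister callbacks. The class, topic and field names below are hypothetical stand-ins, not FarMon's actual interfaces.

    ```python
    # Minimal publish/subscribe event bus in the spirit of a cluster monitor.
    class EventBus:
        def __init__(self):
            self.subscribers = {}          # topic -> list of callbacks

        def subscribe(self, topic, callback):
            self.subscribers.setdefault(topic, []).append(callback)

        def unsubscribe(self, topic, callback):
            self.subscribers.get(topic, []).remove(callback)

        def publish(self, topic, event):
            for cb in self.subscribers.get(topic, []):
                cb(event)

    bus = EventBus()
    seen = []
    bus.subscribe("node.load", seen.append)
    bus.publish("node.load", {"host": "farm01", "load1": 2.4})
    print(seen)  # [{'host': 'farm01', 'load1': 2.4}]
    ```

    The directory-service part of the design would sit on top of this, letting consumers discover which topics exist before subscribing.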
  • GMAP - Grid-aware Monte-Carlo Array Processor
  • The Monte-Carlo Array Processor (MAP) has been designed using commodity off-the-shelf (COTS) items to provide the CPU requirements of full event simulation for the LHC experiments. The solution is, however, completely general, so any CPU-intensive application with limited input requirements can be run on the system. Operating control software has been written to manage the data flow over the 100BaseT Ethernet connecting the 300 nodes (400 MHz PIIs) to the 6 master control nodes (700 MHz PIIIs, each with 500 GB of disk). An upgrade to 1000 nodes is planned. Job control software that allows the user to run the same job on all nodes, whilst allowing for small differences in initialisation parameters between nodes, has also been written. GMAP is the GRID-aware MAP control software. It allows remote job preparation and submission, using the Globus toolkit for authentication and communication. The software will be available and opens the possibility of doing massive Monte Carlo production over several remote MAP sites simultaneously.
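    Running the same job on every node while varying only small initialisation parameters, as the MAP job control software does, amounts to stamping out per-node copies of one job card. The card fields and the seed-per-node rule below are illustrative assumptions, not the actual MAP job format.

    ```python
    # Sketch: same job card on all nodes, differing only in initialisation
    # parameters (here a per-node random seed, a common Monte Carlo pattern).

    def make_job_card(base_card, node_index):
        """Copy the shared card, varying only the node-specific seed."""
        card = dict(base_card)
        card["random_seed"] = base_card["random_seed"] + node_index
        return card

    base = {"generator": "pythia", "events": 1000, "random_seed": 4357}
    cards = [make_job_card(base, i) for i in range(3)]
    print([c["random_seed"] for c in cards])  # [4357, 4358, 4359]
    ```

    Distinct seeds keep the per-node event samples statistically independent while everything else about the job stays identical.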
  • Simulation Studies in Data Replication Strategies
  • The aim of this work is to present simulation studies evaluating different data replication strategies between Regional Centers. The simulation framework developed within the "Models of Networked Analysis at Regional Centers" (MONARC) project, as a design and optimization tool for large-scale distributed systems, has been used for these modeling studies. Remote client-server access to database servers as well as ftp-like data transfers have been realistically simulated, and the performance and limitations are presented as a function of the characteristics of the protocol used and the network parameters.
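    A back-of-the-envelope version of the trade-off such simulations explore: for an object read many times at a remote site, an ftp-like replication pays the wide-area transfer cost once, while remote client-server access pays it on every read. The object size and read count below are made-up inputs for illustration, not MONARC parameters or results.

    ```python
    # Toy comparison of two replication strategies by WAN traffic volume.

    def remote_access_volume(n_reads, object_size_mb):
        # every client-server read crosses the wide-area network
        return n_reads * object_size_mb

    def replicate_then_read_volume(n_reads, object_size_mb):
        # one ftp-like bulk transfer; all subsequent reads are local
        return object_size_mb

    n, size = 50, 200   # hypothetical workload: 50 reads of a 200 MB object
    print(remote_access_volume(n, size))        # 10000 MB over the WAN
    print(replicate_then_read_volume(n, size))  # 200 MB over the WAN
    ```

    The full simulations additionally model protocol behaviour, latency and contention, which is exactly where the simple volume count above stops being adequate.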
  • CMS Grid Activities in Europe
  • The CMS experiment at the CERN LHC collider is producing large amounts of simulated data in order to provide adequate statistics for the Trigger System design. These productions are performed in a distributed environment, prototyping the hierarchical model of LHC computing centers developed by MONARC. A GRID approach is being used for interconnecting the Regional Centers. The main issues currently addressed are: automatic submission of data production requests to available production sites, data transfer among production sites, "best-replica" location, and submission of end-user analysis jobs to the appropriate Regional Center. In each production site different hardware configurations are being tested and exploited. Furthermore, robust job submission systems, which are also able to provide the needed bookkeeping of the produced data, are being developed. BOSS (Batch Object Submission System) is an interface to the local computing center scheduling system that has been developed in order to allow recording in a relational database of information produced by the jobs running on the batch facilities. A summary of the current activities and a plan for the use of DataGrid PM9 tools are presented.
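    The kind of bookkeeping described above, recording information produced by batch jobs in a relational database, can be sketched with SQLite. The table schema, field names and site names are assumptions for illustration, not the actual BOSS schema.

    ```python
    # Sketch of relational bookkeeping for batch production jobs.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("""CREATE TABLE jobs (
        job_id INTEGER PRIMARY KEY,
        site TEXT, status TEXT, events_produced INTEGER)""")

    def record_job(db, job_id, site, status, events):
        """Insert one job's metadata, as reported back from the batch facility."""
        db.execute("INSERT INTO jobs VALUES (?, ?, ?, ?)",
                   (job_id, site, status, events))

    record_job(db, 1, "Bologna", "done", 5000)
    record_job(db, 2, "CERN", "running", 0)
    done = db.execute(
        "SELECT SUM(events_produced) FROM jobs WHERE status='done'").fetchone()[0]
    print(done)  # 5000
    ```

    Keeping this state in a database rather than log files is what lets production operators ask aggregate questions (events produced per site, jobs still running) with a single query.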
  • CMS Grid Activities in the United States
  • The CMS groups in the USA are actively involved in several grid-related projects, including the DoE-funded Particle Physics Data Grid (PPDG) and the NSF-funded Grid Physics Network (GriPhyN). We present developments of: the Grid Data Management Pilot (GDMP) software; a Java Analysis Studio-based prototype remote analysis service for CMS data; tools for automating job submission schemes for large-scale distributed simulation and reconstruction runs for CMS; modeling and development of job scheduling schemes using the MONARC toolkit; and a robust execution service for distributed processors. The deployment and use of these tools at prototype Tier 1 and Tier 2 computing centers in the USA is described.
  • CMS Requirements for the Grid
  • CMS physicists need to seamlessly access their experimental data and results, independent of location and storage medium, in order to focus on the exploration for new physics signals rather than the complexities of worldwide data management. In order to achieve this goal, CMS has adopted a tiered worldwide computing model which will incorporate emerging Grid technology. CMS has started to use Grid tools for data processing, replication and migration. Important Grid components are expected to be delivered by the Data Grid projects. For these projects, CMS has created a set of long-term requirements, which are presented and discussed.
  • A Monitor and Control System for the Synchrotron Radiation Beam Lines at DAΦNE
    Classification of Multi-jet Topologies in e^+e^- Collisions Using Multivariate Analysis Methods and Morphological Variables
    KID-KLOE Integrated Dataflow
    The BaBar Experiment's Distributed Computing Model
    The Implementation of Full ATLAS Detector Simulation Program
    The Athena Data Dictionary and Description Language
    Monitoring the BaBar Data Acquisition System
    Preliminary Design of BES-III Trigger System
    The LHC Experiments' Joint Controls Project (JCOP) (Wayne Salter)
    The BABAR Database: Challenges, Trends and Projections (I. Gaponenko, A. Adesanya, et al.)
    The CDF Computing and Analysis System: First Experience (Rick Colombo, Fedor Ratnikov, et al.)
    The D0 Data Handling System (V. White, D. Adams, et al.)
    Grid Technologies & Applications: Architecture & Achievements (Ian Foster)
    US Grid Projects: PPDG and iVDGL (Richard P. Mount, SLAC)
    From HEP Computing to Bio-Medical Research and Vice Versa: Technology Transfer and Application Results (S. Chauvie, G. Cosmo, et al.)
    Large Scale Cluster Computing Workshop, Fermilab, IL, May 22nd to 25th, 2001 (Alan Silverman, Dane Skow)
    PBSNG - Batch System for Farm Architecture (J. Fromm, K. Genser, et al.)
    The CDF Run 2 Offline Computer Farms (Jaroslav Antos, Tanya Levshina, et al.)
    Lattice QCD Production on a Commodity Cluster at Fermilab (D. Holmgren, P. Mackenzie, et al.)
    Disk Cloning Program "Dolly+" for System Management of PC Linux Cluster (Atsushi Manabe)
    The Linux Farm at the RCF (A.W. Chan, R.W. Hogue, et al.)
    Operation and Optimization of a Linux PC Farm for Physics Analysis in the ZEUS Experiment (Krzysztof Wrona, Radek Kaczorowski, et al.)
    First Operation of the D0 Run II Reconstruction Farm (M. Diesburg, H. Schellman, et al.)
    After the First Five Years: Central Linux Support at DESY (Knut Woller, Thorsten Kleinwort, et al.)
    A Performance Measurement and Simulation for PC-Linux Regional Center Tier-3 System (Gongxing SUN, SATO Hiroyuki, et al.)
    Managing an Ever Increasing Number of Linux PCs at DESY (A. Gellrich, M. Ernst, et al.)
    The ALICE Data Challenges (J.P. Baud, W. Carena, et al.)
    Experiences Constructing and Running Large Shared Clusters at CERN (Vladimir Bahyl, Maite Barroso, et al.)
    The Terabyte Analysis Machine Project - The Distance Machine: Performance Report (James Annis, Koen Holtman, et al.)
    The Farm Processing System at CDF (Jaroslav Antos, Marian Babik, et al.)
    BES Physical Analyzing Environment Composed of PC Farm (Tianrong LIU, Zepu MAO, et al.)
    The Study of BES Reconstruction Codes at PC LX & HP-UX (Zepu MAO, JF QIU, et al.)
    Fermilab Distributed Monitoring System (NGOP) (T. Dawson, J. Fromm, et al.)
    A New Interlock Design for the TESLA RF System (H. Leich, J. Kahl, et al.)
    Communication between Trigger/DAQ and DCS in ATLAS (H. Burckhart, R. Hart, et al.)
    Partitioning, Automation and Error Recovery in the Control and Monitoring System of an LHC Experiment (C. Gaspar)
    Upgrade of Control System for the BEPCII (J. Zhao, C.H. Wang, et al.)
    Technology Integration in the LHC Experiments Joint Controls Project (R. Barillere, M. Beharrell, et al.)
    The KLOE Online Calibration System (E. Pasqualucci)
    Some Problems of Statistical Analysis in Experiment Proposals (S.I. Bityukov, N.V. Krasnikov)
    Go4: Multithreaded Inter-Task Communication with ROOT - Writing Non-blocking GUIs (J. Adamczewski, M. Al-Turany, et al.)
    Update of an Object Oriented Track Reconstruction Model for LHC Experiments (David Candilin, Sijin QIAN, et al.)
    SND Off-Line Framework (D.A. Bukin, V.N. Ivanchenko, et al.)
    Hidden Adapter (J.P. Wellisch)
    Distributed Analysis with Java and Objectivity (Jeremiah MANS)
    Studies for Optimization of Data Analysis Queries for HEP Using HERA-B Commissioning Data (Vasco Amaral, Guido Moerkotte, et al.)
    BES Monitoring & Displaying System (Meng WANG, Bingyun ZHANG, et al.)
    The CMS Field Mapping Project at the CERN EDMS (V.I. Klioukhine)
    CATS: a Cellular Automaton for Tracking in Silicon for the HERA-B Vertex Detector (D. Emeliyanov, I. Kisel, et al.)
    Ring Recognition Method Based on the Elastic Neural Net (S. Gorbunov, I. Kisel, et al.)
    Summary of the HEPVis'01 Workshop (G. Alverson)
    Track Reconstruction in the High Rate Environment of the HERA-B Spectrometer (Alexander Spiridonov)
    A Coherent and Non-Invasive Open Analysis Architecture and Framework with Applications in CMS (George Alverson, Ianna Osborne, et al.)
    The IGUANA Interactive Graphics Toolkit with Examples from CMS and D0 (George Alverson, Ianna Osborne, et al.)
    CMS Object-Oriented Analysis (V. Innocente, E. Meschi, et al.)
    Object Oriented Reconstruction and Particle Identification in the ATLAS Calorimeter (B. Caron, J. Collot, et al.)
    Prototype for a Generic Thin-Client Remote Analysis Environment for CMS (C.D. Steenberg, J.J. Bunn, et al.)
    Ensuring Long Time Access to DELPHI Data: The IDEA Project (Tiziano Campores, Daniel Wicke, et al.)
    The Event Browser: An Intuitive Approach to Browsing BaBar Object Databases (Adeyemi Adesanya)
    The BEPCII Data Production and BESIII Offline Analysis Software System (Zepu MAO)
    Neural Computing in High Energy Physics (O.D. Joukov, N.D. Rishe)
    Status of the GAUDI Event-Processing Framework (M. Cattaneo, I. Belyaev, et al.)
    Adding a Scripting Interface to Gaudi (Christopher T. Day, David Quarrie, et al.)
    Abstract Interfaces for Data Analysis - Component Architecture for Data Analysis Tools (G. Barrand, P. Binko, et al.)
    Anaphe - OO Libraries and Tools for Data Analysis (O. Couet, B. Ferrero-Merlino, et al.)
    The HippoDraw Application and the HippoPlot C++ Toolkit Upon which it is Built (Paul F. Kunz)
    OO Software and Data Model of AMS Experiment (Vitali Choutko, Alexei Klimentov)
    High Performance RAIT (James Hughes, Charles Milligan, et al.)
    dCache, a Distributed Storage Data Caching System (Michael Ernst, Charles Waldman, et al.)
    Study on Limited Projections in Micro-focus X-ray Swing Laminography (Ming MING, Zheng LI)
    Experience Using Different DBMSs in Prototyping a Book-Keeper for ATLAS' DAQ Software (A. Amorim, R. Jones, et al.)
    Upgrade of the ZEUS OO Tag Database for Physics Analysis at HERA (U. Ericke)
    Performance Analysis of Generic vs. Sliced Tags in HepODBMS (Kurt Stockinger)
    An Evaluation of Oracle for Persistent Data Storage and Analysis of LHC Physics Data (Eric Grancher, Maciej Marczukajtis)
    A Generic Identification Scheme for Hierarchically Structured Objects (Christian Arnault, Stan Bentvelsen, et al.)
    Farming Data for the HyperCP Experiment (Sirena Boden, Holmstrom, et al.)
    Object Persistency for HEP Data Using an Object-Relational Database (Marcin Nowak, Dirk Duellmann, et al.)
    Jefferson Lab Mass Storage and File Replication Services (Ian Bird, Ying Chen, et al.)
    Distributing File-based Data to Remote Sites within the BABAR Collaboration (Tim Adye, Alvise Dorigo, et al.)
    Simulation Analysis of the Optimal Storage Resource Allocation for Large HENP Databases (Jinbaek Kim, Arie Shoshani)
    User Friendly Interface to the CDF Data Handling System (F. Ratnikov)
    Automatic Schema Evolution in Root (Rene Brun, Fons Rademakers)
    The Role of XML in the CMS Detector Description (M. Liendl, F. van Lingen, et al.)
    The CDF Run II Disk Inventory Manager (Paul Hubbard, Stephan Lammel)
    CDF Run II Data File Catalog (J. Kowalkowski, F. Ratnikov, et al.)
    SAM Overview and Operation at the D0 Experiment (Lauri Loebel-Carpenter, Lee Lueking, et al.)
    Optimizing Parallel Access to the BaBar Database System Using CORBA Servers (Jacek Becla, Igor Gaponenko)
    Managing the BaBar Object Oriented Database (Adil Hasan, Artem Trunov)
    A Model of BES Data Storage Management System (Mei YE, Mei MA, et al.)
    Data Transfer Using Buffered I/O API with HPSS (Shigen Yahiro, Takashi Sasaki, et al.)
    Experience with the COMPASS Conditions Data Base (Takeaki Toeda, Massimo Lamanna, et al.)
    Building an Advanced Computing Environment with SAN Support (Dajian YANG, Mei MA, et al.)
    Geant4 Low Energy Electromagnetic Physics (S. Chauvie, G. Depaola, et al.)
    Simulation for Astroparticle Experiments and Planetary Explorations: Tools and Applications (A. De Angelis, A. Brunengo, et al.)
    Hadronic Shower Models in GEANT4: Validation Strategy and Results (Johannes Peter Wellisch)
    Comparison of GEANT4 Simulations with Testbeam Data and GEANT3 for the ATLAS Liquid Argon Calorimeter (D. Benchekroun, G. Karapetian, et al.)
    Calculation of Energy Response of Cylindrical G-M Tubes with EGS4 Monte Carlo Code (Boxue LIU, Yanchun WANG, et al.)
    A Method of Large-scale Object Forward Compton Scattering Imaging (Donglai HUO, Yinong LIU)
    GBuilder - Computer Aided Design of Detector Geometry (E. Tcherniasv, N. Smirnov)
    Integration of Geant4 with the Gaudi Framework (I. Belyaev, M. Frank, et al.)
    A Standard Event Class for Monte Carlo Generators (L.A. Garren, M. Fischler)
    HepMC_Contrib: Persistent Interface Package for HepMC (Takayuki SAEKI, Youhei MORITA, et al.)
    HepPDT: Encapsulating the Particle Data Table (L.A. Garren, W. Brown, et al.)
    APE - Tflops Computers for Theoretical Particle Physics (Karl Jansen, Norbert Paschedag, et al.)
    Network Printing in a Heterogeneous Environment (Christoph Beyer, Gerhard Schroth)
    Next Generation Environment for Collaborative Research (D. Collados, G. Denis, et al.)
    An Electronic Logbook for the HEP Control Room (G. Roediger, P. Pomatto, et al.)
    Building Mail Server on Distributed Computing System (Akihiro Shibata, Osamu Hamada, et al.)
    The Temperature Effects on the Ion Trap Quantum Computer (Hongmin, Jiati LIN)
    Knowledge Management and Electronic Publishing for the CNAO with EDMS (F. Gerardi, O. Rademakers-DiRosa, et al.)
    H.323 Based Collaborative Environment for High Energy and Nuclear Physics (Teiji Nakamura, Kiyoharu Hashimoto, et al.)
    Achieving High Data Throughput in Research Networks (Warren Matthews, Les Cottrell)
    IPv6 in ESnet (Warren Matthews, Bob Fink, et al.)
    An Additional DNS Feature for Different Routing of Electronic Mail inside and outside of a Campus Network (Andrey Bobyshev, Michael Ernst)
    FLink_2 - PCI Bus Interface to a 400 MB Ring (Karl-Heinz Sulanke)
    Tools for Distributed Monitoring of the Campus Network with Low Latency Time (Andrey Bobyshev)
    Strategy and Management of Network Security at KEK (Kiyoharu Hashimoto, Teiji Nakamura, et al.)
    Applied Techniques for High Bandwidth Data Transfers across Wide Area Networks (Jason Lee, Bill Allcock, et al.)
    Data Transfer over the Long Fat Networks (H. Sato, Y. Morita, et al.)
    High Performance Multiple Stream Data Transfer (F. Rademakers, P. Saiz, et al.)
    Optimize the Security Performance of the Computing Environment of IHEP (Rong-sheng XU, Bao-Xu LIU)
    Passive Performance Monitoring and Traffic Characteristics on the SLAC Internet Border (Connie Logg, Les Cottrell)
    Peer-to-Peer Computing for Secure High Performance Data Copying (Andrew Hanushevsky, Artem Trunov, et al.)
    DEPUTY: Analysing Architectural Structures and Checking Style (D. Gorshkov, J.P. Wellisch)
    Software Process Improvement in CMS - Are We Different? (J.P. Wellisch)
    Code Organization and Configuration Management (J.P. Wellisch, I. Osborne, et al.)
    XML for Detector Description at GLAST (J. Bogart, D. Favretto, et al.)
    Experiencing CMT in Software Production of Large and Complex Projects: Issues in the Scalability of Software Production Management (Christian Arnault, Bruno Mansoux, et al.)
    Software Process in Geant4 (Gabriele Cosmo)
    The Geometry Description Markup Language (Radovan Chytracek)
    Distributed Simulation of Large Computer Systems (Moreno Marzolla)
    A Generic Digitization Framework for the CDF Simulation (Jim Kowalkowski, Marc Paterno)
    In the Land of the Dinosaurs, How to Survive: Experience with Building of Midrange Computing Cluster (Andrei E. Chevel, Jerome Lauret, et al.)
    Ignominy: a Tool for Software Dependency and Metric Analysis with Examples from Large HEP Packages (Lassi A. Tuura, Lucas Taylor)
    Re-usable Templates for Documenting the Elaboration and Architectural Design of the CMS Software (J.P. Wellisch, L. Tuura, et al.)
    CMS Software Distribution and Installation Systems: Concepts, Practical Solutions and Experience at Fermilab as a CMS Tier 1 Center (Natalia M. Ratnikova, Gregory E. Graham)
    Development of the ATLAS Simulation Framework (A. Dell'Acqua, K. Amako, et al.)
    Simulating the Farm Production System Using the MONARC Simulation Tool (Y. Wu, I.C. Legrand, et al.)
    Architecture of Collaborating Frameworks: Simulation, Visualisation, User Interface and Analysis (A. Pfeiffer, R. Giannitrapani, et al.)
    StoreGate: a Data Model for the ATLAS Software Architecture (P. Calafiura, H. Ma, et al.)
    Detector Description Software for Simulation, Reconstruction and Visualisation (Pedro Arce)
    Putting It All Together: Experience and Challenges at the DELPHI Off-line Processing Farm (Jean-Damien DURAND, Ryszard GOKIELI, et al.)
    GEANT4 in the AliRoot Framework (I. Hrivnacova)
    Extensible Numerical Library in JAVA (T. Aso, H. Okazawa, et al.)
    Model and Information Abstraction for Description-Driven Systems (Florida Estrella, Zsolt Kovacs, et al.)
    PC Farms for Triggering and Online Reconstruction at HERA-B (J.M. Hernandez)
    Data Acquisition System for the SND2000 Experiment (M.N. Achasov, A.G. Bogdanchikov, et al.)
    Application of DSPs in Data Acquisition Systems for Neutron Scattering Experiments at the IBR-2 Pulsed Reactor (V. Butenko, B. Gebauer, et al.)
    A Multi Purpose DAQ System Developed for the nTOF Commissioning (V. Vlachoudis)
    High Level Trigger System for the ALICE Experiment (U. Frankenfeld, H. Helstrup, et al.)
    The Digital Analog Optical Module (dAOM) - a Technology for the AMANDA Experiment at the South Pole (Torsten Schmidt, Paolo Desiati, et al.)
    Large Scale and Performance Tests of the ATLAS Online Software (Alexandrov, H. Wolters, et al.)
    Neural Network Real Time Event Selection for the DIRAC Experiment (S. Vlachos)
    Architecture Design of Trigger and DAQ System for Fermilab CKM Experiment (Jinyuan WU)
    Bus-based DAQ Architecture for the ARGO-YBJ Experiment (A. Alosio, P. Branchini, et al.)
    The PC-Based ATLAS Event Filter Prototype: Supervision Design, Implementation and Tests (C.P. Bee, F. Etienne, et al.)
    On the Way to Maturity - The CLEO III Data Acquisition and Control System (H. Schwarthoff, V. Frolov, et al.)
    Design and Prototyping of the ATLAS High Level Trigger (J.A.C. Bogaerts)
    The CDF Data Acquisition System for Tevatron Run II (Arnd Meyer)
    Clustered Data Acquisition for the CMS Experiment (J. Gutleber, S. Erhan, et al.)
    KONOE - A Toolkit for Object-Oriented/Network-Distributed Online Environment (M. Asai, S. Enomoto, et al.)
    ATLAS DAQ Configuration Databases (I. Alexandrov, A. Amorim, et al.)
    Data Collection and Processing for ARGO Yanbajing Experiment (C. Stanescu)
    The Linux Based Distributed Data Acquisition System for the ISTRA+ Experiment (A. Filin, A. Inyakin, et al.)
    The CMS Event Builder Demonstrator and Results with Ethernet and Myrinet Switch Technologies (G. Antchev, L. Berti, et al.)
    The DZERO Online System Event Path (S. Fuess, M. Begel, et al.)
    The BTeV DAQ and Trigger System - Some Throughput, Usability and Fault Tolerance Aspects (E.E. Gottschalk, T. Bapty, et al.)
    CMS Level-1 Regional Calorimeter Trigger System (P. Chumney, S. Dasu, et al.)
    Specification and Simulation of the ALICE Trigger and DAQ System (T. Anticic, G. Di Marzo Serugendo, et al.)
    Trigger & Data Acquisition System for the ANTARES Neutrino Telescope (Herve Lafoux)
    CDF Run II Run Control and Online Monitor (T. Arisawa, W. Badgett, et al.)
    A Dataflow Meta-Computing Framework for Event Processing in the H1 Experiment (Alan Campbell, Christoph Grab, et al.)
    Quality of Service on Linux for the ATLAS TDAQ Event Building Network (Y. Yasu, Y. Nagasaka, et al.)
    Deployment of Globus Tools at St. Petersburg (Russia) (Andrei E. Chevel, Vladimir Korhkov, et al.)
    Querying Large Physics Data Sets Over an Information Grid (Nigel Baker, Zsolt Kovacs, et al.)
    Java Parallel Secure Stream for Grid Computing (Jie Chen, Walt Akers, et al.)
    A Distributed Agent-based Architecture for Dynamic Services (Harvey B. Newman, Iosif C. Legrand, et al.)
    Moving the LHCb Monte Carlo Production System to the GRID (E. van Herwijnen, P. Mato, et al.)
    A Comparison of GSIFTP and RFIO on a WAN (Rajesh Kalmady, Brian Tierney, et al.)
    Grid-Enabled Data Access in the ATLAS Athena Framework (D. Malon, S. Resconi, et al.)
    GRID Activities in ALICE (P. Cerello, T. Anticic, et al.)
    Globus Toolkit Support for Distributed Data-Intensive Science (W. Allcock, A. Chervenak, et al.)
    Bilevel Architecture for High-Throughput Computing (Pavel Newski, Alexandre Vaniachine, et al.)
    Grid Data Farm for ATLAS Simulation Data Challenges (Y. Morita, O. Tatebe, et al.)
    Evaluation of Mosix-Linux Farm Performances in GRID Environment (F. Barone, M. DERose, et al.)
    Distributed Parallel Interactive Data Analysis Using the PROOF System (Rene Brun, Fons Rademakers)
    Integrating GRID Tools to Build a Computing Resource Broker: Activities of DataGrid WP1 (C. Anglano, S. Barale, et al.)
    Design and Evaluation of Dynamic Replication Strategies for a High-Performance Data Grid (Kavitha Ranganathan, Ian Foster)
    Report on the INFN-GRID Globus Evaluation (R. Alfieri, C. Anglano, et al.)
    Workflow Management for a Cosmology Collaboratory (Stewart C. Loken, Charles McParland)
    SAM and the Particle Physics Data Grid (Lauri Loebel-Carpenter, Lee Lueking, et al.)
    Resource Management in SAM - The D0 Data Grid (Lauri Loebel-Carpenter, Lee Lueking, et al.)
    FarMon: An Extensible, Efficient Cluster Monitoring System (Yong FAN, Mei MA, et al.)
    GMAP - Grid-aware Monte-Carlo Array Processor (A. Moreton, G.D. Patel, et al.)
    Simulation Studies in Data Replication Strategies (Harvey B. Newman, Iosif C. Legrand)
    CMS Grid Activities in Europe (C. Grandi, L. Berti, et al.)
    CMS Grid Activities in the United States (I. Fisk, J. Amundson, et al.)
    CMS Requirements for the Grid (K. Holtman, J. Amundson, et al.)