We live and operate in the world of computing and computers. The Internet has drastically changed the computing world, from parallel computing to distributed computing to grid computing, and now to cloud computing. Cloud computing is a new wave in the field of information technology. Some see it as an emerging field in computer science. It consists of a set of resources and services offered through the Internet; hence, cloud computing is also called "Internet computing." The word "cloud" is a metaphor for the Web as a space where computing has been preinstalled and exists as a service. Operating systems, applications, storage, data, and processing capacity all reside on the Web, ready to be shared among users. Figure 1 shows a conceptual diagram of cloud computing.
A Green's function formalism has been applied to solve the equations of motion in classical molecular dynamics simulations. This formalism enables larger time scales to be probed for vibrational processes in carbon nanomaterials. In causal Green's function molecular dynamics (CGFMD), the total interaction potential is expanded up to quadratic terms, which enables an exact solution of the equations of motion for problems within the harmonic approximation, with reasonable energy conservation and fast temporal convergence. Unlike conventional integration algorithms in molecular dynamics, CGFMD performs matrix multiplications and diagonalizations within its main loop, which makes its computational cost high and has therefore limited its use. In this work, we propose a method to accelerate CGFMD simulations by treating the full system of N atoms as a collection of N smaller systems of size n. Diagonalization is performed for smaller nd×nd dynamical matrices rather than the full Nd×Nd matrix (d = 1, 2, or 3). The eigenvalues and eigenvectors are then used in the CGFMD equations to update the atomic positions and velocities. We applied the method to one-dimensional lattices of oscillators and found that it rapidly converges to the exact solution as n increases. The computational time of the proposed method scales linearly with N, a considerable gain over the O(N³) full diagonalization. The method also exhibits better accuracy and energy conservation than the velocity-Verlet algorithm. An OpenMP parallel version has been implemented, and tests indicate a speedup of 14× for N = 50,000 on affordable computers. Our findings indicate that CGFMD can be an alternative, competitive integration technique for molecular dynamics simulations.
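The harmonic-approximation core that this abstract describes, diagonalizing a dynamical matrix and propagating the normal modes exactly, can be sketched in a few lines. This is an illustrative reconstruction, not the authors' CGFMD implementation: the function name `exact_harmonic_step`, the unit masses and spring constants, and the fixed-end chain are assumptions made for the example.

```python
import numpy as np

def exact_harmonic_step(x0, v0, D, t):
    """Evolve x'' = -D x exactly for a time t by diagonalizing D
    (the harmonic-approximation step at the heart of Green's-function MD)."""
    w2, V = np.linalg.eigh(D)            # squared mode frequencies, eigenvectors
    w = np.sqrt(np.maximum(w2, 0.0))
    q0, p0 = V.T @ x0, V.T @ v0          # project onto normal coordinates
    c, s = np.cos(w * t), np.sin(w * t)
    # sin(w t)/w tends to t as w -> 0, so zero modes are handled safely
    sw = np.where(w > 1e-12, s / np.maximum(w, 1e-12), t)
    q = c * q0 + sw * p0                 # exact normal-mode evolution
    p = -w * s * q0 + c * p0
    return V @ q, V @ p                  # back to Cartesian coordinates

# 1D chain of N unit masses and unit springs with fixed ends:
N = 8
D = 2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
x0 = np.zeros(N); x0[0] = 0.1            # displace the first atom
v0 = np.zeros(N)

x, v = exact_harmonic_step(x0, v0, D, t=5.0)
E0 = 0.5 * v0 @ v0 + 0.5 * x0 @ D @ x0   # harmonic energy before
E  = 0.5 * v @ v + 0.5 * x @ D @ x       # ...and after: conserved to machine precision
```

Because the step is exact within the harmonic approximation, the energy is conserved regardless of the step size t, unlike velocity-Verlet, whose error grows with the step size. The O(N³) cost sits in `np.linalg.eigh`; the abstract's acceleration idea is to replace this single large diagonalization with many small nd×nd ones.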
The processing of raw data from modern astronomical instruments is nowadays often carried out using dedicated software, known as pipelines, largely run in automated operation. In this paper we describe the data reduction pipeline of the Multi Unit Spectroscopic Explorer (MUSE) integral field spectrograph operated at the ESO Paranal Observatory. This spectrograph is a complex machine: it records the data of 1152 separate spatial elements on detectors in its 24 integral field units. Efficiently handling such data requires sophisticated software with a high degree of automation and parallelization. We describe in detail the algorithms of all processing steps that operate on calibrations and science data, and explain how the raw science data are transformed into calibrated datacubes. We finally check the quality of selected procedures and output data products, and demonstrate that the pipeline provides datacubes ready for scientific analysis.
Deep Neural Networks (DNNs) are becoming an important tool in modern computing applications. Accelerating their training is a major challenge, and techniques range from distributed algorithms to low-level circuit design. In this survey, we describe the problem from a theoretical perspective, followed by approaches for its parallelization. We present trends in DNN architectures and the resulting implications for parallelization strategies. We then review and model the different types of concurrency in DNNs: from the single operator, through parallelism in network inference and training, to distributed deep learning. We discuss asynchronous stochastic optimization, distributed system architectures, communication schemes, and neural architecture search. Based on those approaches, we extrapolate potential directions for parallelism in deep learning.
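One of the concurrency types such surveys cover, synchronous data parallelism, can be illustrated with a minimal sketch: each worker computes a gradient on its own data shard, and the gradients are averaged (the role played by an all-reduce) before every identical parameter update. The workers here are simulated sequentially on a toy least-squares problem; the shard count, learning rate, and problem itself are assumptions for illustration only, not anything prescribed by the survey.

```python
import numpy as np

# Toy regression problem: recover true_w from y = X @ true_w.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w

w = np.zeros(3)
shards = np.array_split(np.arange(64), 4)    # 4 simulated workers, one shard each
for step in range(200):
    grads = []
    for idx in shards:                       # each worker: local MSE gradient
        err = X[idx] @ w - y[idx]
        grads.append(2 * X[idx].T @ err / len(idx))
    g = np.mean(grads, axis=0)               # "all-reduce": average the gradients
    w -= 0.05 * g                            # every worker applies the same update
```

Because every worker applies the same averaged gradient, the model replicas stay bit-identical; the asynchronous schemes mentioned in the abstract relax exactly this synchronization point to trade consistency for throughput.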
The parallel approach
Di Ventra, Massimiliano; Pershin, Yuriy V.
Nature Physics, 04/2013, Volume 9, Issue 4
Journal Article · Peer-reviewed · Open access
A class of two-terminal passive circuit elements that can also act as memories could be the building blocks of a form of massively parallel computation known as memcomputing.