IBM BlueGene/L System

A paper on supercomputers (in this case the IBM BlueGene/L system), presented in 2007 in the fifth year at the Faculty of Computer Science, Politehnica University of Bucharest, for the Multiprocessor Systems course. In English.

Excerpt from the paper

The 30th edition of the TOP500 list was released on Nov. 12, 2007 at SC07, the international conference on high performance computing, networking, storage and analysis, in Reno, Nevada.

The Top 10 shows five new and one substantially upgraded system with five of these changes placing at the top five positions. The latest TOP500 list, as well as the previous 29 lists, can be found on the Web at http://www.top500.org/.

Let's take a look back at how this list began.

At the beginning of the nineties, a team at Lawrence Berkeley National Laboratory, USA, started collecting data and publishing statistics about the supercomputer market. A considerable number of companies competed in the HPC market with a large variety of architectures, such as vector computers, mini vector computers, SIMD (Single Instruction, Multiple Data) and MPP (Massively Parallel Processing) systems. A clear and flexible definition was needed to decide which of these systems was a "supercomputer". This definition needed to be architecture independent, and because of Moore's Law it also had to be dynamic in nature, to cope with the constant increase in computer performance.

In early 1993 the TOP500 idea was developed in Mannheim. The basic idea was to list, twice a year, the 500 most powerful computer systems installed anywhere and to call these systems supercomputers. The problem was defining how powerful a computer system is. For this it was decided to use the performance results of the Linpack benchmark from Jack Dongarra, as this was the only benchmark for which results were available for nearly all systems of interest.

Since 1993 the TOP500 has been published twice a year using Linpack results. Over the years the TOP500 has served well as a tool to track and analyze technological, architectural, and other changes in the HPC arena.
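Conceptually, producing the list is nothing more than sorting the reported Linpack results. A minimal sketch in Python, with made-up system names and Rmax figures (the real list is of course built from measured submissions):

    # A TOP500-style ranking is a sort on the measured Linpack result (Rmax).
    # The systems and numbers below are purely illustrative.
    systems = [
        ("System A", 280_600),  # Rmax in GFlop/s
        ("System B", 478_200),
        ("System C", 126_900),
    ]

    top500 = sorted(systems, key=lambda s: s[1], reverse=True)[:500]

    for rank, (name, rmax) in enumerate(top500, start=1):
        print(f"#{rank}: {name} - {rmax / 1000:.1f} TFlop/s")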

Performance Growth and Dynamics

One trend of major interest to the HPC community is the growth of the performance levels seen in the TOP500. Figure 1 shows the evolution of the total installed performance in the TOP500. The plot represents the performance of the systems at positions 1 and 500 in the list, as well as the total accumulated performance of all 500 systems. By fitting an exponential curve to the observed data points, an extrapolation was made to the end of the decade.

Figure 1: Performance of the #1 and #500 systems and total installed performance in the TOP500, with an exponential extrapolation.
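The extrapolation in Figure 1 works because exponential growth is a straight line on a logarithmic scale, so an ordinary least-squares fit of log-performance against time suffices. A sketch of the method, with made-up data points standing in for the real list sums:

    import numpy as np

    # Hypothetical (year, total installed performance in GFlop/s) points;
    # only the fitting method is meant to be illustrative here.
    years = np.array([1993, 1997, 2001, 2005])
    perf = np.array([1.2e3, 2.2e4, 1.3e5, 2.3e6])

    # Exponential growth is linear in log space: log10(perf) ~ a*year + b.
    a, b = np.polyfit(years, np.log10(perf), deg=1)

    # Extrapolate to the end of the decade.
    for year in (2008, 2009, 2010):
        print(year, f"{10 ** (a * year + b):.2e} GFlop/s")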

The HPC market is by its very nature very dynamic. This is reflected not only in the coming and going of manufacturers but especially in the need to update and replace systems quite often to keep pace with the general performance increase. This dynamic is well reflected in the TOP500. The average replacement rate is about 160 systems every half year, or more than half the list every year, meaning that a system at position 100 at a given time will fall off the TOP500 within two to three years.

The Systems Ranked #1 Since 1993

- IBM Blue Gene/L (since 2004.11)

- NEC Earth Simulator (2002.06 - 2004.11)

- IBM ASCI White (2000.11 - 2002.06)

- Intel ASCI Red (1997.06 - 2000.11)

- Hitachi CP-PACS (1996.11 - 1997.06)

- Hitachi SR2201 (1996.06 - 1996.11)

- Fujitsu Numerical Wind Tunnel (1994.11 - 1996.06)

- Intel Paragon XP/S140 (1994.06 - 1994.11)

- Fujitsu Numerical Wind Tunnel (1993.11 - 1994.06)

- TMC CM-5 (1993.06 - 1993.11)

The No. 1 position was again claimed in 2007 by the BlueGene/L System, a joint development of IBM and the Department of Energy's (DOE) National Nuclear Security Administration (NNSA) and installed at DOE's Lawrence Livermore National Laboratory in Livermore, Calif. Although BlueGene/L has occupied the No. 1 position since November 2004, the current system has been significantly expanded and now achieves a Linpack benchmark performance of 478.2 TFlop/s ("teraflops" or trillions of calculations per second), compared to 280.6 TFlop/s six months ago before its upgrade.

The BlueGene/L machine is a first step in IBM's multi-year initiative to build a petaflop-scale machine for calculations in the area of life sciences, and is based on a different and more generalized architecture than IBM described in its announcement of the BlueGene program in December 1999. In particular, BlueGene/L is based on an embedded PowerPC processor supporting a large memory space, with standard compilers and a message passing environment, albeit with significant additions and modifications to the standard PowerPC system.

Significant progress has been made in recent years in mapping numerous compute-intensive applications, many of them grand challenges, to parallel architectures. This has been done with great success, largely out of necessity, as it has become clear that currently the only way to achieve teraFLOPS-scale computing is to garner the multiplicative benefits offered by a massively parallel machine. To scale to the next level of parallelism, in which tens of thousands of processors are utilized, the traditional approach of clustering large, fast SMPs will be increasingly limited by power consumption and footprint constraints. For example, to house supercomputers in the 2004 time frame, both the Los Alamos National Laboratory and the Lawrence Livermore National Laboratory began constructing buildings with approximately 10x more power and cooling capacity and 2-4x more floor space than existing facilities. In addition, due to the growing gap between processor cycle times and memory access times, the fastest available processors will typically deliver a continuously decreasing fraction of their peak performance, despite ever more sophisticated memory hierarchies.
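The last point can be made concrete with a simple roofline-style estimate: sustained performance is bounded by the smaller of the processor's peak rate and the rate at which memory bandwidth can feed operands. A sketch with illustrative numbers that are not tied to any particular machine:

    # Roofline-style bound: attainable FLOP rate is limited either by peak
    # compute or by memory bandwidth times arithmetic intensity (flops per
    # byte moved). All numbers are illustrative.
    def attainable_gflops(peak_gflops, bandwidth_gbs, flops_per_byte):
        return min(peak_gflops, bandwidth_gbs * flops_per_byte)

    peak = 5.6  # GFlop/s, hypothetical node peak
    bw = 5.5    # GB/s, hypothetical memory bandwidth

    for intensity in (0.25, 1.0, 4.0):  # flops per byte
        g = attainable_gflops(peak, bw, intensity)
        print(f"intensity {intensity}: {g:.2f} GFlop/s "
              f"({100 * g / peak:.0f}% of peak)")

Unless an application performs several flops per byte fetched, the memory system rather than the processor sets the sustained rate, which is why a faster clock alone yields a shrinking fraction of peak.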

The approach taken in BlueGene/L (BG/L) is substantially different. The system is built out of a very large number of nodes, each of which has a relatively modest clock rate. These nodes offer both low power consumption and low cost. The design point of BG/L utilizes IBM PowerPC embedded CMOS processors, embedded DRAM, and system-on-a-chip techniques that allow the integration of all system functions, including the compute processor, communications processor, three cache levels, and multiple high-speed interconnection networks with sophisticated routing, onto a single ASIC. Because of the relatively modest processor cycle time, the memory is close to the processor in terms of cycles. This is also advantageous for power consumption, and enables the construction of denser packages in which 1024 compute nodes can be placed within a single rack.
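A quick calculation from the figures quoted in this section shows how compact the resulting machine is; the ~1 MW power figure is the one quoted just below:

    # Packaging arithmetic based on figures quoted in this section.
    nodes_total = 65_536
    nodes_per_rack = 1_024

    racks = nodes_total // nodes_per_rack
    print(f"racks for the full machine: {racks}")  # 64

    power_w = 1e6  # ~1 MW total, as quoted below
    print(f"average power per rack: {power_w / racks / 1e3:.1f} kW")  # ~15.6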

Integration of the inter-node communication network functions onto the same ASIC as the processors reduces cost, since the need for a separate high-speed switch is eliminated. The current design goals of BG/L aim for a scalable supercomputer with up to 65,536 compute nodes and a target peak performance of 360 teraFLOPS, with extremely cost-effective characteristics and low power (~1 MW), cooling (~300 tons) and floor space (<2,500 sq ft) requirements. This peak performance figure applies only to applications that can use both processors on a node for compute tasks. Experts anticipate a large class of problems that will fully occupy one of the two processors in a node with message-passing protocol tasks and will therefore not be able to use the second processor for computation. For such applications, the target peak performance is 180 teraFLOPS.
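The 360 and 180 teraFLOPS targets follow directly from the node count quoted above once a per-processor rate is fixed. A back-of-the-envelope check, assuming the 700 MHz clock and 4 flops per cycle per processor reported in public BG/L descriptions rather than in this text:

    # Back-of-the-envelope check of the BG/L peak targets quoted above.
    # The 700 MHz clock and 4 flops/cycle figures are assumptions taken
    # from public BG/L descriptions, not from this text.
    nodes = 65_536
    clock_hz = 700e6
    flops_per_cycle = 4

    per_proc = clock_hz * flops_per_cycle  # 2.8 GFlop/s per processor

    print(f"both processors computing: {nodes * 2 * per_proc / 1e12:.0f} TFlop/s")
    print(f"one processor computing:   {nodes * 1 * per_proc / 1e12:.0f} TFlop/s")

The results, roughly 367 and 183 TFlop/s, are consistent with the 360 and 180 teraFLOPS design targets.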

The BG/L design philosophy has been influenced by other successful massively parallel machines, including QCDSP at Columbia University. In that machine, thousands of processors are connected to form a multidimensional torus with nearest-neighbor connections and simple global functions. Columbia University continues to evolve this architecture with its next-generation QCDOC machine [QCDOC], which is being developed in cooperation with IBM Research. QCDOC will also use a PowerPC processing core, in an earlier technology, with a simpler floating point unit and a simpler nearest-neighbor network.
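To make the torus idea concrete: each node is addressed by coordinates, every coordinate wraps around, and each node therefore has exactly two neighbors per dimension. A minimal sketch for a 3D torus (BG/L's own point-to-point network is a 3D torus; the extents below are illustrative, not BG/L's actual dimensions):

    # Nearest neighbors in a 3D torus: one step in each direction along
    # each axis, with wraparound at the edges. Extents are illustrative.
    DIMS = (8, 8, 16)  # torus extents in x, y, z

    def torus_neighbors(x, y, z):
        neighbors = []
        for axis, extent in enumerate(DIMS):
            for step in (-1, +1):
                coords = [x, y, z]
                coords[axis] = (coords[axis] + step) % extent  # wrap
                neighbors.append(tuple(coords))
        return neighbors

    # Even a "corner" node has all six neighbors, thanks to the wrap links.
    print(torus_neighbors(0, 0, 0))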

