SPEC CPU2000 download free

[Tested systems list: IBM eServer xSeries models, Intel Xeon processor-based systems at various clock speeds, Intel desktop motherboards, SGI Origin (R12k), Sun Netra 20, and Sun Blade models.]

Improvements to the new suites include longer run times and larger problems for benchmarks, more application diversity, greater ease of use, and standard development platforms that will allow SPEC to produce additional releases for other operating systems.

Each benchmark was tested on different platforms to determine whether it was portable, relevant, and suitable for the final SPEC CPU2000 suite. Performance results from CPU2000 cannot be compared to those from CPU95, since new benchmarks have been added and existing ones changed. Any configured delay between benchmark invocations is not counted toward the benchmark runtime. The flagsurl may be an absolute or relative path in the file system, or may refer to an http-accessible file (e.g., a URL). Per the official documentation: one way is to measure how fast the computer completes a single task; this is a speed measure.

Another way is to measure how many tasks a computer can accomplish in a certain amount of time; this is called a throughput, capacity, or rate measure. Per the official documentation: a reportable execution runs all the benchmarks in a suite with the test and train data sets as an additional verification that the benchmark binaries get correct results. The test and train workloads are not timed. Then, the reference workloads are run three times, so that the median run time can be determined for each benchmark.
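As a concrete illustration, a reportable run is normally started with a single runspec command. The sketch below uses an illustrative config file name (mytest.cfg); exact option spellings can vary between tools versions, so runspec --help is the authority.

    # Reportable speed run of the integer suite (CINT2000), using the
    # tester's own config file ("mytest.cfg" is an illustrative name).
    runspec --config=mytest.cfg --reportable int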

If not specified, the benchmark run script will look up the directory tree, from both the current working directory (pwd) and the --output location, for the presence of a 'cpu2000' directory. The value is added to the saved results. Columns without descriptions are documented as runtime parameters above. As described in section 1, it is expected that testers can reproduce other testers' results.

In particular, it must be possible for a new tester to compile both the base and peak benchmarks for an entire suite (i.e., CINT2000 or CFP2000) in one execution of runspec, with appropriate command line arguments and an appropriate configuration file, and obtain executable binaries that are, from a performance point of view, equivalent to the binaries used by the original tester. The simplest and least error-prone way to meet this requirement is for the original tester to take production hardware, production software, a SPEC config file, and the SPEC tools, and actually build the benchmarks in a single invocation of runspec on the System Under Test (SUT).
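For example, a full build of a suite in one invocation might look like the following sketch (the config file name is illustrative; option spellings are shown in long form per runspec --help):

    # Build base and peak binaries for the whole integer suite in a single
    # invocation of runspec on the SUT; a later reportable run reuses them.
    runspec --config=prod.cfg --action=build --tune=all int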

But SPEC realizes that there is a cost to benchmarking and would like to address this, for example through the rules that follow regarding cross-compilation and individual builds. However, in all cases, the tester is taken to assert that the compiled executables will exhibit the same performance as if they had all been compiled with a single invocation of runspec (see section 2).

It is permitted to use cross-compilation, that is, a building process where the benchmark executables are built on a system or systems that differ(s) from the SUT. The runspec tool must be used on all systems, typically with -a build on the host(s) and -a validate on the SUT. If all systems belong to the same product family and if the software used to build the executables is available on all systems, this does not need to be documented.
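A minimal sketch of that split, assuming a shared config file (cross.cfg, an illustrative name) and that the SPEC tools are installed on both the build host and the SUT:

    # On the build host: compile the benchmark binaries only.
    runspec --config=cross.cfg --action=build int

    # On the SUT, after the built binaries and config are made available:
    # run the benchmarks and validate their output.
    runspec --config=cross.cfg --action=validate int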

In the case of a true cross-compilation (i.e., where the build system is not from the same product family as the SUT, or the software used to build the executables is not available on the SUT), the cross-compilation must be documented as such; see section 4. It is permitted to use more than one host in a cross-compilation. If more than one host is used, they must be sufficiently equivalent so as not to violate rule 2. That is, it must be possible to build the entire suite on a single host and obtain binaries that are equivalent to the binaries produced using multiple hosts. The purpose of allowing multiple hosts is to let testers save time when recompiling many programs.

Multiple hosts may NOT be used in order to gain performance advantages due to environmental differences among the hosts. In fact, the tester must exercise great care to ensure that any environment differences are performance neutral among the hosts, for example by ensuring that each has the same version of the operating system, the same performance software, the same compilers, and the same libraries. The tester should exercise due diligence to ensure that differences that appear to be performance neutral - such as differing MHz or differing memory amounts on the build hosts - are in fact truly neutral.

Multiple hosts may NOT be used in order to work around system or compiler incompatibilities. It is permitted to build the benchmarks with multiple invocations of runspec, for example during a tuning effort. But the executables must be built using a consistent set of software.

If a change to the software environment is introduced (for example, installing a new version of the C compiler, which is expected to improve the performance of one of the floating point benchmarks), then all affected benchmarks must be rebuilt (in this example, all the C benchmarks in the floating point suite). The previous four paragraphs may appear to contradict each other. Consider, for example, a sequence of events in which the benchmarks are built over several invocations of runspec using a consistent software environment. In such a case, the tester is taken to be asserting that this sequence of events produces binaries that are, from a performance point of view, equivalent to binaries that would have been produced in a single invocation of the tools.
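For instance, if a new C compiler is installed during a floating-point tuning effort, every C-language benchmark in CFP2000 would need to be rebuilt. A sketch (config file name illustrative):

    # Rebuild all benchmarks affected by the new C compiler -- here, the
    # C-language benchmarks of the floating-point suite.
    runspec --config=tuning.cfg --action=build 177.mesa 179.art 183.equake 188.ammp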

If there is some optimization that can only be applied to individual benchmark builds and cannot be applied in a continuous build, the optimization is not allowed (rule 2). If the tester is uncertain whether a cross-compile or an individual benchmark build is equivalent to a full build on the SUT, then a full build on the SUT is required; or, in the case of a true cross-compile which is documented as such, a single runspec -a build is required on a single host.

Although full builds add to the cost of benchmarking, in some instances a full build in a single runspec invocation may be the only way to ensure that results will be reproducible. Additional rules for Base Metrics follow in section 2. No source file, variable, or subroutine name may be used within an optimization flag or compiler option.

Identifiers used in preprocessor directives to select alternative source code are also forbidden, except for a rule-compliant library substitution (section 2). For example, a compiler or preprocessor option may not define or test an identifier that the benchmark source code uses to choose among alternative code paths. Flags which substitute pre-computed results for computations performed by the benchmark are likewise forbidden, apart from a small set of documented exceptions. The use of such an excepted flag shall furthermore not count as one of the four allowed base switches.

Such substitution is acceptable only in a peak run, not in base. For feedback-directed optimization, only the training input (which is automatically selected by runspec) may be used for the run that generates the feedback data. The requirement to use only the train data set at compile time shall not be taken to forbid the use of run-time dynamic optimization tools that observe the reference execution and dynamically modify the in-memory copy of the benchmark.

However, such tools would not be allowed to affect in any way later executions of the same benchmark (for example, when running multiple times in order to determine the median run time).

Such tools would also have to be disclosed in the submission of a result, and would have to be used for the entire suite (see section 3). Flags that change a data type size to a size different from the default size of the compilation system are not allowed. Exceptions are: (a) a C long may be 32 bits or greater; (b) pointer sizes may be set to a size different from the default.
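As an illustration of the pointer-size exception, a config file might select a non-default 64-bit pointer model; the specific switch shown is only an example of a common ABI flag, not a recommendation:

    # Allowed: choosing a pointer size different from the compiler's default
    # (here via a generic 64-bit ABI switch).
    COPTIMIZE = -O2 -m64
    # Not allowed: any flag that changes the size of a default data type,
    # other than the "long" and pointer exceptions noted above.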

A flag is considered a portability flag if, and only if, it is required in order to successfully build and run the benchmark on the target system; that is, if it is possible to build and run the benchmark without this flag, then this flag is not considered a portability flag. The initial submissions for CPU2000 will include a reviewed set of portability flags on several operating systems; later submitters who propose to apply additional portability flags should prepare a justification for their use.

If the justification is accepted, the flag may be used as a portability flag. SPEC always prefers to have benchmarks obey the standard, and SPEC attempts to fix as many violations as possible before release of the suites.

But it is recognized that some violations may not be detected until years after a suite is released. In such a case, a portability switch may be the practical solution. Alternatively, the subcommittee may approve a source code fix. If a library is specified as a portability flag, SPEC may request that the table of contents of the library be included in the disclosure. In addition to the rules listed in section 2, the following apply for base metrics: the optimizations used are expected to be safe, and it is expected that system or compiler vendors would endorse the general use of these optimizations by customers who seek to achieve good application performance.

The same compiler and the same set of optimization flags (or options) must be used for all benchmarks of a given language within a benchmark suite, except for portability flags (see section 2). All flags must be applied in the same order for all benchmarks. The runspec documentation file covers how to set this up with the SPEC tools.
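A minimal config-file sketch of this per-language discipline (compilers and options are illustrative only; the section-header syntax follows the SPEC config-file conventions):

    default=default=default=default:
    CC  = cc
    CXX = c++
    FC  = f90
    # Base: one set of optimization flags per language, same order everywhere.
    COPTIMIZE   = -O2
    CXXOPTIMIZE = -O2
    FOPTIMIZE   = -O2

    # Portability flags are the per-benchmark exception (define shown is illustrative).
    176.gcc=default=default=default:
    CPORTABILITY = -DSPEC_CPU2000_LP64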

Specifically, benchmarks that are written in Fortran 77 or Fortran 90 may not use different sets of flags or different compiler invocations in a base run. In a peak run, it is permissible to use different compiler commands, as well as different flags, for each benchmark. In a feedback-directed (two-pass) build, the second pass (PASS2) is optional. For example, it is conceivable that a daemon might optimize the image automatically based on the training run, without further tester intervention.

Such a daemon would have to be noted in the full disclosure to SPEC. If additional processing steps are required, the optimization is allowed for peak only, but not for base. When a two-pass process is used, the flag(s) that explicitly control(s) the generation or the use of feedback information can be - and usually will be - different in the two compilation passes.
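A sketch of how such a two-pass build is commonly expressed for a single benchmark in peak (the PASS1/PASS2 variables follow the SPEC tools' conventions; the profile flags after the '=' are illustrative, not tied to any particular compiler):

    # Peak, feedback-directed build: compile with PASS1 flags, let the tools
    # run the train workload to collect feedback, then recompile with PASS2.
    181.mcf=peak=default=default:
    PASS1_CFLAGS = -O3 -prof_gen    # illustrative profile-generation flag
    PASS2_CFLAGS = -O3 -prof_use    # illustrative profile-use flag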

The remaining flags must be applied consistently across the two passes. An assertion flag is one that supplies semantic information that the compilation system did not derive from the source statements of the benchmark. With an assertion flag, the programmer asserts to the compiler that the program has certain properties that allow the compiler to apply more aggressive optimization techniques (for example, that there is no aliasing via C pointers).

The problem is that there can be legal programs (possibly strange, but still standard-conforming programs) where such a property does not hold.


