Version 5.1.3 – 2011-01-13

Download: http://unixetc.com/res/UnixBench.zip

UnixBench is a benchmark suite that measures basic performance characteristics of Unix-like systems. It runs a number of tests covering different aspects of system performance. Each result is compared against the score of a baseline system to produce an index value, which is generally easier to interpret than the raw score. The whole set of index values is then combined to produce an overall score for the system.


Using UnixBench:

  1. Since version 5.1, UnixBench includes both system tests and graphics tests. If you want the graphics tests, edit the Makefile to make sure the line GRAPHIC_TESTS = defined is not commented out, that the GL_LIBS variable is correct, and that the x11perf command is available. If you don't need the graphics tests, comment out the GRAPHIC_TESTS = defined line in the Makefile; be sure to comment it out rather than setting it to any other value.

  2. Run make.

  3. Execute Run to run the system tests; Run graphics for the graphics tests; Run gindex for both.

Run is written in Perl, so make sure Perl is installed on the system.
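The whole sequence, as typed from the unixbench source directory:

```
$ make              # build the benchmark programs
$ ./Run             # system tests only
$ ./Run graphics    # graphics tests only
$ ./Run gindex      # system tests + graphics tests
```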

For more information on using the tests, read “USAGE”.

For information on adding tests into the benchmark, see “WRITING_TESTS”.

Release Notes

======================== Jan 11 ==========================

v5.1.3

Fixed issue that would cause a race condition if you attempted to compile in parallel with more than 3 parallel jobs.

Kelly Lucas, Jan 13, 2011 kdlucas at gmail period com

======================== Dec 07 ==========================

v5.1.2

One big fix: if unixbench is installed in a directory whose pathname contains a space, it should now run (previously it failed).

To avoid possible clashes, the environment variables unixbench uses are now prefixed with “UB_”. These are all optional, and for most people will be completely unnecessary, but if you want you can set these:

UB_BINDIR      Directory where the test programs live.
UB_TMPDIR      Temp directory, for temp files.
UB_RESULTDIR   Directory to put results in.
UB_TESTDIR     Directory where the tests are executed.
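For example, someone who wants temp files and results kept out of the source tree might set something like the following before running the benchmark (the paths here are purely illustrative, not defaults):

```shell
# Optional overrides; unixbench falls back to its defaults when these are unset.
export UB_TMPDIR=/var/tmp/unixbench
export UB_RESULTDIR="$HOME/ub-results"
```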

Plus a couple of tiny fixes.

Ian Smith, December 26, 2007 johantheghost at yahoo period com

======================== Oct 07 ==========================

v5.1.1

It turns out that the setting of LANG is crucial to the results. This explains why people in different regions were seeing odd results, and also why runlevel 1 produced odd results – runlevel 1 doesn’t set LANG, and hence reverts to ASCII, whereas most people use a UTF-8 encoding, which is much slower in some tests (eg. shell tests).

So now we manually set LANG to “en_US.utf8”, which is configured with the variable “$language”. Don’t change this if you want to share your results. We also report the language settings in use.
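In shell terms, the script's effect is equivalent to doing this before the run (you normally don't set this yourself; Run does it via $language):

```shell
# Pin the locale so shell-heavy tests behave the same everywhere.
LANG=en_US.utf8
export LANG
```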

See “The Language Setting” in USAGE for more info. Thanks to nordi for pointing out the LANG issue.

I also added the “grep” and “sysexec” tests. These are non-index tests, and “grep” uses the system’s grep, so it’s not much use for comparing different systems. But some folks on the OpenSuSE list have been finding these useful. They aren’t in any of the main test groups; do “Run grep sysexec” to run them.

Index Changes

The setting of LANG will affect consistency with systems where this is not the default value. However, it should produce more consistent results in future.

Ian Smith, October 15, 2007 johantheghost at yahoo period com

======================== Oct 07 ==========================

v5.1

The major new feature in this version is the addition of graphical benchmarks. Since these may not compile on all systems, you can enable/disable them with the GRAPHIC_TESTS variable in the Makefile.

As before, each test is run for 3 or 10 iterations. However, we now discard the worst 1/3 of the scores before averaging the remainder. The logic is that a glitch in the system (background process waking up, for example) may make one or two runs go slow, so let’s discard those. Hopefully this will produce more consistent and repeatable results. Check the log file for a test run to see the discarded scores.
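The discard rule can be illustrated with a tiny shell computation (the scores are invented; with three iterations, the worst one is dropped):

```shell
# Three per-iteration scores, one of them ruined by a background glitch.
scores="100.0 98.0 35.0"
# Sort descending, keep the best 2 of 3, average what's left.
avg=$(printf '%s\n' $scores | sort -rn | head -n 2 \
      | awk '{ s += $1; n++ } END { printf "%.1f", s / n }')
echo "trimmed average: $avg"
```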

Made the tests compile and run on x86-64/Linux (fixed an execl bug passing int instead of pointer).

Also fixed some general bugs.

Thanks to Stefan Esser for help and testing / bug reporting.

Index Changes

The tests are now divided into categories, and each category generates its own index. This keeps the graphics test results separate from the system tests.

The “graphics” test and corresponding index are new.

The “discard the worst scores” strategy should produce slightly higher test scores, but at least they should (hopefully!) be more consistent. The scores should not be higher than the best scores you would have got with 5.0, so this should not be a huge consistency issue.

Ian Smith, October 11, 2007 johantheghost at yahoo period com

======================== Sep 07 ==========================

v5.0

All the work I’ve done on this release is Linux-based, because that’s the only Unix I have access to. I’ve tried to make it more OS-agnostic if anything; for example, it no longer has to figure out the format reported by /usr/bin/time. However, it’s possible that portability has been damaged. If anyone wants to fix this, please feel free to mail me patches.

In particular, the analysis of the system’s CPUs is done via /proc/cpuinfo. For systems which don’t have this, please make appropriate changes in getCpuInfo() and getSystemInfo().

The big change has been to make the tests multi-CPU aware. See the “Multiple CPUs” section in “USAGE” for details. Other changes:

Index Changes

The index is still based on David Niemi’s SPARCstation 20-61 (rated at 10.0), and the intention in the changes I’ve made has been to keep the tests unchanged, in order to maintain consistency with old result sets.

However, the following changes have been made to the index:

Both of these tests can be dropped, if you wish, by editing the “TEST SPECIFICATIONS” section of Run.

Ian Smith, September 20, 2007 johantheghost at yahoo period com

======================== Aug 97 ==========================

v4.1.0

Double precision Whetstone put in place instead of the old “double” benchmark.

Removal of some obsolete files.

“system” suite adds shell8.

perlbench and poll added as “exhibition” (non-index) benchmarks.

Incorporates several suggestions by Andre Derrick Balsa [email protected]

Code cleanups to reduce compiler warnings by David C Niemi [email protected] and Andy Kahn [email protected]; Digital Unix options by Andy Kahn.

======================== Jun 97 ==========================

v4.0.1

Minor change to fstime.c to fix overflow problems on fast machines. Counting is now done in units of 256 (smallest BUFSIZE) and unsigned longs are used, giving another 23 dB or so of headroom ;^) Results should be virtually identical aside from very small rounding errors.

======================== Dec 95 ==========================

v4.0

Byte no longer seems to have anything to do with this benchmark, and I was unable to reach any of the original authors; so I have taken it upon myself to clean it up.

This is version 4. Major assumptions made in these benchmarks have changed since they were written, but they are nonetheless popular (particularly for measuring hardware for Linux). Some changes made:

I am still a bit unhappy with the variance in some of the benchmarks, most notably the fstime suite; and with how long it takes to run. But I think it gets significantly more reliable results than the older version in less time.

If anyone has ideas on how to make these benchmarks faster, lower-variance, or more meaningful; or has nice, new, portable benchmarks to add, don’t hesitate to e-mail me.

David C Niemi [email protected] 7 Dec 1995

========================= May 91 ==========================

This is version 3. This set of programs should be able to determine whether your system is BSD or SysV. (It uses the output format of time (1) to find out.) If you have any problems, contact me (by email, preferably): [email protected]


The document doc/bench.doc describes the basic flow of the benchmark system. The document doc/bench3.doc describes the major changes in design of this version. As a user of the benchmarks, you should understand some of the methods that have been implemented to generate loop counts:

Tests that are compiled C code: The function wake_me(second, func) is included (from the file timeit.c). This function uses signal and alarm to set a countdown for the time requested by the benchmark administration script (Run). As soon as the clock is started, the test is run with a counter keeping track of the number of loops that the test makes. When alarm sends its signal, the loop counter value is sent to stderr and the program terminates. Since the time resolution, signal trapping and other factors don't ensure that the test runs for precisely the time that was requested, the test program is also run from the time (1) command. The real time value returned by time (1) is what is used in calculating the number of loops per second (or minute, depending on the test). Obviously, some overhead time is not taken into account, so the number of loops per second is not absolute; but the overhead of the test starting and stopping and of the signal and alarm calls is common to the overhead of real applications. If a program loads quickly, the number of loops per second increases, a phenomenon that favors systems that can load programs quickly. (Setting the sticky bit of the test programs is not considered fair play.)

Tests that use existing UNIX programs or shell scripts: The concept is the same as for the compiled tests, except that the alarm and signal handling are contained in a separate compiled program, looper (source is looper.c). Looper uses execvp to invoke the test with its arguments. Here, the overhead includes the invocation and execution of looper.
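In both cases the core is the same alarm-driven counting loop. A minimal shell sketch of the idea (an illustration, not the actual timeit.c/looper.c code):

```shell
# Count iterations of a work loop until SIGALRM arrives, then report the count.
finished=0
trap 'finished=1' ALRM          # like wake_me(): note when the alarm fires

( sleep 1; kill -ALRM $$ ) &    # like alarm(seconds), with a 1-second countdown

count=0
while [ "$finished" -eq 0 ]; do
  count=$((count + 1))          # stand-in for the benchmark's work loop
done
echo "loops completed in 1 second: $count"
```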

The index numbers are generated from a baseline file that is in pgms/index.base. You can put whatever tests you wish in this file. All you need to do is take the results/log file from your baseline machine, edit out the comment and blank lines, and sort the result (vi/ex command: 1,$!sort). The sort is necessary because the process of generating the index report uses join (1). You can regenerate the reports by running “make report”.
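A toy reconstruction of that report step, with made-up test names and scores, shows why the sort matters (join (1) needs both files sorted on the join field):

```shell
# Made-up baseline and run results, one "test-name score" pair per line.
printf 'dhry2 562.0\nfstime 39.0\n' | sort > baseline.txt
printf 'fstime 1200.0\ndhry2 9000.0\n' | sort > run.txt

# join(1) matches lines on the first field; it requires sorted input,
# which is why the log file must be sorted before generating the report.
join baseline.txt run.txt > report.txt
cat report.txt    # each line: test, baseline score, this run's score
```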

========================= Jan 90 =============================

Tom Yager has joined the effort here at BYTE; he is responsible for many refinements in the UNIX benchmarks.

The memory access tests have been deleted from the benchmarks. The file access tests have been reversed so that the test is run for a fixed time. The amount of data transferred (written, read, and copied) is the variable. !WARNING! This test can eat up a large hunk of disk space.

The initial line of all shell scripts has been changed from the SCO and XENIX form (:) to the more standard form “#! /bin/sh”. But different systems handle shell switching differently. Check the documentation on your system and find out how you are supposed to do it. Or, simpler yet, just run the benchmarks from the Bourne shell. (You may need to set SHELL=/bin/sh as well.)

The options to Run have not been checked in a while. They may no longer function. Next time, I’ll get back on them. There needs to be another option added (next time) that halts testing between each test. !WARNING! Some systems have caches that are not getting flushed before the next test or iteration is run. This can cause erroneous values.

========================= Sept 89 =============================

The database (db) programs now have a tuneable message queue space. The default set in the Run script is 1024 bytes. Other major changes are in the format of the times. We now show Arithmetic and Geometric mean and standard deviation for User Time, System Time, and Real Time. Generally, in reporting, we plan on using the Real Time values with the benchmarks run with one active user (the bench user). Comments and arguments are requested.

contact: BIX bensmith or rick_g

via https://github.com/kdlucas/byte-unixbench