Brian Homerding d5c558ff21 [lit] - Allow 1 test to report multiple micro-test results to provide support for microbenchmarks.
Summary:
These changes allow a Result object to have nested Result objects in order to
support microbenchmarks. Currently lit is restricted to reporting one result
object per test; this change adds support for tests that want to report
individual timings for individual kernels.
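
As a rough sketch of the new API (the kernel name and metric values below are
made up for illustration; addMicroResult is the method this patch adds):

import lit.Test

# One top-level result for the test as a whole...
result = lit.Test.Result(lit.Test.PASS, 'whole-program output')
result.addMetric('exec_time', lit.Test.RealMetricValue(12.5))

# ...carrying one nested result per microbenchmark kernel.
micro = lit.Test.Result(lit.Test.PASS)
micro.addMetric('exec_time', lit.Test.RealMetricValue(0.042))
result.addMicroResult('kernel0', micro)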

This revision is the result of the discussions in
https://reviews.llvm.org/D32272#794759,
https://reviews.llvm.org/D37421#f8003b27 and https://reviews.llvm.org/D38496.
It is a separation of the changes proposed in https://reviews.llvm.org/D40077.

This change will enable adding the LCALS (Livermore Compiler Analysis Loop
Suite) collection of loop kernels to the LLVM test suite using the Google
Benchmark library (https://reviews.llvm.org/D43319), with tracking of
individual kernel timings.

Previously, microbenchmarks were handled by using macros to group sets of
microbenchmarks together, building many executables while still getting one
grouped timing (MultiSource/TSVC). Recently, the Google Benchmark library was
added to the test suite and used via a litsupport plugin. However, the
one-test-one-result restriction limited its use to passing a runtime option
that runs only one microbenchmark, with several hand-written tests
(MicroBenchmarks/XRay) running the same executable many times. I will update
the litsupport plugin to use the new functionality
(https://reviews.llvm.org/D43316); a sketch of what such a plugin could do
follows.
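
Purely for illustration, a plugin could fan a Google Benchmark JSON report out
into micro-test results along these lines (attach_benchmark_results is a
hypothetical helper, not the actual D43316 code; the JSON layout is the one
Google Benchmark emits with --benchmark_out_format=json):

import json

import lit.Test

def attach_benchmark_results(result, json_path):
    # Read the report the benchmark binary wrote via
    # --benchmark_out=<json_path> --benchmark_out_format=json.
    with open(json_path) as f:
        report = json.load(f)
    # Record one nested micro-test result per benchmark entry.
    for bench in report['benchmarks']:
        micro = lit.Test.Result(lit.Test.PASS)
        micro.addMetric('exec_time',
                        lit.Test.RealMetricValue(bench['real_time']))
        result.addMicroResult(bench['name'], micro)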

These changes allow lit to report micro-test results when desired, so that one
run of one test executable can produce many precise timing results.


Reviewers: MatzeB, hfinkel, rengolin, delcypher

Differential Revision: https://reviews.llvm.org/D43314

llvm-svn: 327422
2018-03-13 16:37:59 +00:00


try:
    import ConfigParser
except ImportError:
    import configparser as ConfigParser

import lit.formats
import lit.Test


class DummyFormat(lit.formats.FileBasedTest):
    def execute(self, test, lit_config):
        # In this dummy format, expect that each test file is actually just a
        # .ini format dump of the results to report.
        source_path = test.getSourcePath()

        cfg = ConfigParser.ConfigParser()
        cfg.read(source_path)

        # Create the basic test result.
        result_code = cfg.get('global', 'result_code')
        result_output = cfg.get('global', 'result_output')
        result = lit.Test.Result(getattr(lit.Test, result_code),
                                 result_output)

        # Load additional metrics.
        for key, value_str in cfg.items('results'):
            value = eval(value_str)
            if isinstance(value, int):
                metric = lit.Test.IntMetricValue(value)
            elif isinstance(value, float):
                metric = lit.Test.RealMetricValue(value)
            else:
                raise RuntimeError("unsupported result type")
            result.addMetric(key, metric)

        # Create micro test results.
        for key, micro_name in cfg.items('micro-tests'):
            micro_result = lit.Test.Result(getattr(lit.Test, result_code))
            # Load micro test additional metrics.
            for micro_key, value_str in cfg.items('micro-results'):
                value = eval(value_str)
                if isinstance(value, int):
                    metric = lit.Test.IntMetricValue(value)
                elif isinstance(value, float):
                    metric = lit.Test.RealMetricValue(value)
                else:
                    raise RuntimeError("unsupported result type")
                micro_result.addMetric(micro_key, metric)
            result.addMicroResult(micro_name, micro_result)

        return result
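
A hypothetical input file for this format might look like the following
(section and key names must match those the format reads; the values are made
up):

[global]
result_code = PASS
result_output = Test passed.

[results]
value0 = 1
value1 = 2.3456

[micro-tests]
test0 = micro_test0

[micro-results]
micro_value0 = 4
micro_value1 = 1.2345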