[MLIR] Introduce RemarkEngine + pluggable remark streaming (YAML/Bitstream) (#152474)

This PR implements structured, tooling-friendly optimization remarks
with zero cost unless enabled:
- `RemarkEngine` collects finalized remarks within the `MLIRContext`.
- `MLIRRemarkStreamerBase` is an abstract class that streams them to a backend.
- Backends: `MLIRLLVMRemarkStreamer` (bridges to llvm::remarks →
YAML/Bitstream) or your own custom streamer.
- Optional mirroring to the `DiagnosticEngine` (`printAsEmitRemarks` +
categories).
- Off by default; no behavior change unless enabled. Thread-safe;
ordering is best-effort.


## Overview

```
Passes (reportOptimization*)
         │
         ▼
+-------------------+
|  RemarkEngine     |   collects
+-------------------+
     │         │
     │ mirror  │ stream
     ▼         ▼
emitRemark    MLIRRemarkStreamerBase (abstract)
                   │
                   ├── MLIRLLVMRemarkStreamer → llvm::remarks → YAML | Bitstream
                   └── CustomStreamer → your sink
```

## Enable the remark engine and plug in LLVM's remark streamer
```c++
// Enable once per MLIRContext. This installs the LLVM-based remark streamer.
mlir::remark::enableOptimizationRemarksWithLLVMStreamer(
    ctx, path, llvm::remarks::Format::YAML, cats);
```
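For reference, `cats` is a `mlir::remark::RemarkCategories`; each field is an optional regex matched against category names for that remark kind, e.g.:
```c++
mlir::remark::RemarkCategories cats{/*passed=*/"Vectorizer",
                                    /*missed=*/"Unroll",
                                    /*analysis=*/std::nullopt,
                                    /*failed=*/std::nullopt};
```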

## API to emit remarks
```c++
// Emit from a pass; `loc`, `categoryVectorizer`, `categoryUnroll`, and
// `myPassname1` are provided by the caller.
remark::RemarkOpts opts = remark::RemarkOpts::name("MyRemark1")
                              .category(categoryVectorizer)
                              .subCategory(myPassname1);

remark::passed(loc, opts) << "vectorized loop";

remark::missed(loc, remark::RemarkOpts::name("MyMiss1")
                        .category(categoryUnroll)
                        .subCategory("MyPass"))
    << remark::reason("not profitable at this size")     // Creates structured reason arg
    << remark::suggest("increase unroll factor to >=4"); // Creates structured suggestion arg

remark::passed(loc, opts)
    << "vectorized loop"
    << remark::metric("tripCount", 128);                  // Creates a structured metric on the fly
```
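When the remark engine is not enabled on the context, these helpers return an empty `InFlightRemark` and the `reason`/`suggest`/`metric` format strings are never materialized, so the calls are effectively free.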
Guray Ozen 2025-08-21 16:02:31 +02:00 committed by GitHub
parent 5d4aa87ca5
commit 3d41197d68
12 changed files with 1538 additions and 0 deletions

mlir/docs/Remarks.md (new file)

@@ -0,0 +1,259 @@
# Remark Infrastructure
Remarks are **structured, human- and machine-readable notes** emitted by the
compiler to explain:
- What was transformed
- What was missed
- Why it happened
The **`RemarkEngine`** collects finalized remarks during compilation and sends
them to a pluggable **streamer**. By default, MLIR integrates with LLVM's
[`llvm::remarks`](https://llvm.org/docs/Remarks.html), allowing you to:
- Stream remarks as passes run
- Serialize them to **YAML** or **LLVM bitstream** for tooling
***
## Key Points
- **Opt-in**: Disabled by default; zero overhead unless enabled.
- **Per-context**: Configured on `MLIRContext`.
- **Formats**: LLVM remarks (YAML / Bitstream) via the built-in streamer, or custom streamers.
- **Kinds**: `Passed`, `Missed`, `Failure`, `Analysis`.
- **API**: Lightweight streaming interface using `<<` (like MLIR diagnostics).
***
## How It Works
Two main components:
- **`RemarkEngine`** (owned by `MLIRContext`): Receives finalized
`InFlightRemark`s, optionally mirrors them to the `DiagnosticEngine`, and
dispatches to the installed streamer.
- **`MLIRRemarkStreamerBase`** (abstract): Backend interface with a single hook:
```c++
virtual void streamOptimizationRemark(const Remark &remark) = 0;
```
**Default backend `MLIRLLVMRemarkStreamer`**: Adapts `mlir::Remark` to LLVM's
remark format and writes YAML/Bitstream via `llvm::remarks::RemarkStreamer`.
**Ownership flow:** `MLIRContext` → `RemarkEngine` → `MLIRRemarkStreamerBase`
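When an in-flight remark is finalized, the engine dispatches it essentially as follows (a condensed view of `RemarkEngine::report` from this patch):
```c++
void RemarkEngine::report(const Remark &&remark) {
  // Stream to the installed backend, if any.
  if (remarkStreamer)
    remarkStreamer->streamOptimizationRemark(remark);
  // Optionally mirror the remark through the DiagnosticEngine.
  if (printAsEmitRemarks)
    emitRemark(remark.getLocation(), remark.getMsg());
}
```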
***
## Remark Kinds
MLIR provides four built-in remark kinds (extendable if needed):
#### 1. **Passed**
Optimization/transformation succeeded.
```
[Passed] RemarkName | Category:Vectorizer:myPass1 | Function=foo | Remark="vectorized loop", tripCount=128
```
#### 2. **Missed**
The optimization/transformation didn't apply; ideally the remark carries actionable feedback.
```
[Missed] | Category:Unroll | Function=foo | Reason="tripCount=4 < threshold=256", Suggestion="increase unroll to 128"
```
#### 3. **Failure**
The optimization/transformation was attempted but failed. This is slightly
different from the `Missed` kind.
For example, the user specifies `-use-max-register=100` when invoking the
compiler, but the attempt fails for some reason:
```bash
$ your-compiler -use-max-register=100 mycode.xyz
```
```
[Failure] Category:RegisterAllocator | Reason="Limiting to use-max-register=100 failed; it now uses 104 registers for better performance"
```
#### 4. **Analysis**
Neutral analysis results.
```
[Analysis] Category:Register | Remark="Kernel uses 168 registers"
[Analysis] Category:Register | Remark="Kernel uses 10kB local memory"
```
***
## Emitting Remarks
The `remark::*` helpers return an **in-flight remark**.
You append strings or key-value metrics using `<<`.
### Remark Options
When constructing a remark, you typically provide four `StringRef` fields:
1. **Remark name**: an identifiable name for the remark
2. **Category**: high-level classification
3. **Sub-category**: finer-grained classification (e.g., a pass name)
4. **Function name**: the function where the remark originates
### Example
```c++
#include "mlir/IR/Remarks.h"
void MyPass::runOnOperation() {
  Location loc = getOperation()->getLoc();

  remark::RemarkOpts opts = remark::RemarkOpts::name(MyRemarkName1)
                                .category(categoryVectorizer)
                                .function(fName)
                                .subCategory(myPassname1);

  // PASSED
  remark::passed(loc, opts)
      << "vectorized loop"
      << remark::metric("tripCount", 128);

  // ANALYSIS
  remark::analysis(loc, opts)
      << "Kernel uses 168 registers";

  // MISSED (with reason + suggestion)
  int tripBad = 4, threshold = 256, target = 128;
  remark::missed(loc, opts)
      << remark::reason("tripCount={0} < threshold={1}", tripBad, threshold)
      << remark::suggest("increase unroll to {0}", target);

  // FAILURE
  remark::failed(loc, opts)
      << remark::reason("failed due to unsupported pattern");
}
```
***
### Metrics and Shortcuts
Helper functions accept
[LLVM `formatv`-style](https://llvm.org/docs/ProgrammersManual.html#formatting-strings-the-formatv-function)
format strings. Formatting is performed lazily, so remarks are zero-cost when
the engine is disabled.
#### Adding Remarks
- **`remark::add(fmt, ...)`**: Shortcut for `metric("Remark", ...)`.
#### Adding Reasons
- **`remark::reason(fmt, ...)`**: Shortcut for `metric("Reason", ...)`. Used to
explain why a remark was missed or failed.
#### Adding Suggestions
- **`remark::suggest(fmt, ...)`**: Shortcut for `metric("Suggestion", ...)`.
Used to provide actionable feedback.
#### Adding Custom Metrics
- **`remark::metric(key, value)`**: Adds a structured key-value metric.
Example: tracking `TripCount`. When exported to YAML, it appears under `Args`
for machine readability:
```cpp
remark::metric("TripCount", value)
```
#### String Metrics
Passing a plain string (e.g. `<< "vectorized loop"`) is equivalent to:
```cpp
metric("Remark", "vectorized loop")
```
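These shortcuts compose on a single in-flight remark. A small sketch, assuming `loc`, `opts`, and the integer values are defined by the caller:
```c++
int tc = 4, threshold = 256, target = 128;
remark::missed(loc, opts)
    << remark::add("unrolling not applied")                           // "Remark" arg
    << remark::reason("tripCount={0} < threshold={1}", tc, threshold) // "Reason" arg
    << remark::suggest("increase unroll factor to {0}", target)       // "Suggestion" arg
    << remark::metric("tripCount", tc);                               // custom metric
```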
***
## Enabling Remarks
### 1. **With LLVMRemarkStreamer (YAML or Bitstream)**
Persists remarks to a file in the chosen format.
```c++
mlir::remark::RemarkCategories cats{/*passed=*/categoryLoopunroll,
                                    /*missed=*/std::nullopt,
                                    /*analysis=*/std::nullopt,
                                    /*failed=*/categoryLoopunroll};
mlir::remark::enableOptimizationRemarksWithLLVMStreamer(
    context, yamlFile, llvm::remarks::Format::YAML, cats);
```
**YAML format**: human-readable, easy to diff:
```yaml
--- !Passed
Pass:            'Category:SubCategory'
Name:            MyRemarkName1
DebugLoc:        { File: myfile.mlir, Line: 12, Column: 3 }
Function:        myFunc
Args:
  - Remark:          vectorized loop
  - tripCount:       '128'
```
**Bitstream format**: compact binary, suited for large runs.
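Selecting the bitstream backend is the same call with a different format; `bitstreamFile` below is simply an output path of your choosing:
```c++
mlir::remark::enableOptimizationRemarksWithLLVMStreamer(
    context, bitstreamFile, llvm::remarks::Format::Bitstream, cats);
```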
***
### 2. **With `mlir::emitRemark` (No Streamer)**
If no streamer is passed, remarks are mirrored to the `DiagnosticEngine`
using `mlir::emitRemark` (set `printAsEmitRemarks` to true):
```c++
mlir::remark::RemarkCategories cats{/*passed=*/categoryLoopunroll,
                                    /*missed=*/std::nullopt,
                                    /*analysis=*/std::nullopt,
                                    /*failed=*/categoryLoopunroll};
mlir::remark::enableOptimizationRemarks(
    context, /*streamer=*/nullptr, cats,
    /*printAsEmitRemarks=*/true);
```
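With `printAsEmitRemarks` enabled, each remark is emitted as diagnostic text in the form produced by `Remark::print`, for example:
```
[Passed] pass1 | Category:Vectorizer:myPass1 | Function=foo | Remark="vectorized loop", tripCount=128
[Missed] | Category:Unroll | Reason="tripCount=4 < threshold=256", Suggestion="increase unroll to 128"
```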
***
### 3. **With a Custom Streamer**
You can implement a custom streamer by inheriting `MLIRRemarkStreamerBase` to
consume remarks in any format.
```c++
class MyStreamer : public MLIRRemarkStreamerBase {
public:
  void streamOptimizationRemark(const Remark &remark) override {
    // Convert and write the remark to your custom format.
  }
};

auto myStreamer = std::make_unique<MyStreamer>();
remark::enableOptimizationRemarks(
    context, /*streamer=*/std::move(myStreamer), cats,
    /*printAsEmitRemarks=*/true);
```
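If your sink needs an explicit flush or close, you can also override the optional `finalize()` hook, which the engine calls from its destructor. A buffering sketch (the buffer and its flushing are illustrative, not part of the API):
```c++
class BufferingStreamer : public MLIRRemarkStreamerBase {
public:
  void streamOptimizationRemark(const Remark &remark) override {
    buffer.push_back(remark.getMsg()); // collect the rendered remark text
  }
  void finalize() override {
    for (const std::string &msg : buffer)
      llvm::errs() << msg << "\n"; // flush everything at teardown
  }

private:
  std::vector<std::string> buffer;
};
```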

@@ -34,6 +34,9 @@ class MLIRContextImpl;
class RegisteredOperationName;
class StorageUniquer;
class IRUnit;
namespace remark::detail {
class RemarkEngine;
} // namespace remark::detail
/// MLIRContext is the top-level object for a collection of MLIR operations. It
/// holds immortal uniqued objects like types, and the tables used to unique
@@ -212,6 +215,13 @@ public:
/// Returns the diagnostic engine for this context.
DiagnosticEngine &getDiagEngine();
/// Returns the remark engine for this context, or nullptr if none has been
/// set.
remark::detail::RemarkEngine *getRemarkEngine();
/// Set the remark engine for this context.
void setRemarkEngine(std::unique_ptr<remark::detail::RemarkEngine> engine);
/// Returns the storage uniquer used for creating affine constructs.
StorageUniquer &getAffineUniquer();

@@ -0,0 +1,520 @@
//===- Remarks.h - MLIR Optimization Remark --------------------*- C++ -*-===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
//
// This file defines utilities for emitting optimization remarks.
//
//===----------------------------------------------------------------------===//
#ifndef MLIR_IR_REMARKS_H
#define MLIR_IR_REMARKS_H
#include "llvm/ADT/StringExtras.h"
#include "llvm/IR/DiagnosticInfo.h"
#include "llvm/Remarks/Remark.h"
#include "llvm/Support/FormatVariadic.h"
#include "llvm/Support/Regex.h"
#include <optional>
#include "mlir/IR/Diagnostics.h"
#include "mlir/IR/MLIRContext.h"
#include "mlir/IR/Value.h"
namespace mlir::remark {
/// Defines the set of categories to accept; by default none are accepted. Each
/// provided regex is matched against the category names for that kind of
/// remark.
struct RemarkCategories {
std::optional<std::string> passed, missed, analysis, failed;
};
/// Remark kinds describe the outcome of a transformation, not the mechanics
/// of emitting/serializing remarks.
enum class RemarkKind {
RemarkUnknown = 0,
/// An optimization was applied.
RemarkPassed,
/// A profitable optimization opportunity was found but not applied.
RemarkMissed,
/// The compiler attempted the optimization but failed (e.g., legality
/// checks, or better opportunities).
RemarkFailure,
/// Informational context (e.g., analysis numbers) without a pass/fail
/// outcome.
RemarkAnalysis,
};
using namespace llvm;
/// Options to create a Remark
struct RemarkOpts {
StringRef remarkName; // Identifiable name
StringRef categoryName; // Category name (subject to regex filtering)
StringRef subCategoryName; // Subcategory name
StringRef functionName; // Function name if available
RemarkOpts() = delete;
// Construct RemarkOpts from a remark name.
static constexpr RemarkOpts name(StringRef n) {
return RemarkOpts{n, {}, {}, {}};
}
/// Return a copy with the category set.
constexpr RemarkOpts category(StringRef v) const {
return {remarkName, v, subCategoryName, functionName};
}
/// Return a copy with the subcategory set.
constexpr RemarkOpts subCategory(StringRef v) const {
return {remarkName, categoryName, v, functionName};
}
/// Return a copy with the function name set.
constexpr RemarkOpts function(StringRef v) const {
return {remarkName, categoryName, subCategoryName, v};
}
};
} // namespace mlir::remark
namespace mlir::remark::detail {
//===----------------------------------------------------------------------===//
// Remark Base Class
//===----------------------------------------------------------------------===//
class Remark {
public:
Remark(RemarkKind remarkKind, DiagnosticSeverity severity, Location loc,
RemarkOpts opts)
: remarkKind(remarkKind), functionName(opts.functionName), loc(loc),
categoryName(opts.categoryName), subCategoryName(opts.subCategoryName),
remarkName(opts.remarkName) {
if (!categoryName.empty() && !subCategoryName.empty()) {
(llvm::Twine(categoryName) + ":" + subCategoryName)
.toStringRef(fullCategoryName);
}
}
// A remark argument: a key-value pair that can be printed as a
// machine-parsable arg.
struct Arg {
std::string key;
std::string val;
Arg(llvm::StringRef m) : key("Remark"), val(m) {}
Arg(llvm::StringRef k, llvm::StringRef v) : key(k), val(v) {}
Arg(llvm::StringRef k, std::string v) : key(k), val(std::move(v)) {}
Arg(llvm::StringRef k, const char *v) : Arg(k, llvm::StringRef(v)) {}
Arg(llvm::StringRef k, Value v);
Arg(llvm::StringRef k, Type t);
Arg(llvm::StringRef k, bool b) : key(k), val(b ? "true" : "false") {}
// One constructor for all arithmetic types except bool.
template <typename T, typename = std::enable_if_t<std::is_arithmetic_v<T> &&
!std::is_same_v<T, bool>>>
Arg(llvm::StringRef k, T v) : key(k) {
if constexpr (std::is_floating_point_v<T>) {
llvm::raw_string_ostream os(val);
os << v;
} else if constexpr (std::is_signed_v<T>) {
val = llvm::itostr(static_cast<long long>(v));
} else {
val = llvm::utostr(static_cast<unsigned long long>(v));
}
}
};
void insert(llvm::StringRef s);
void insert(Arg a);
void print(llvm::raw_ostream &os, bool printLocation = false) const;
Location getLocation() const { return loc; }
/// Convert this MLIR remark into an llvm::remarks::Remark.
llvm::remarks::Remark generateRemark() const;
StringRef getFunction() const {
if (!functionName.empty())
return functionName;
return "<unknown function>";
}
llvm::StringRef getCategoryName() const { return categoryName; }
llvm::StringRef getFullCategoryName() const {
if (categoryName.empty() && subCategoryName.empty())
return {};
if (subCategoryName.empty())
return categoryName;
if (categoryName.empty())
return subCategoryName;
return fullCategoryName;
}
StringRef getRemarkName() const {
if (remarkName.empty())
return "<unknown remark name>";
return remarkName;
}
std::string getMsg() const;
ArrayRef<Arg> getArgs() const { return args; }
llvm::remarks::Type getRemarkType() const;
StringRef getRemarkTypeString() const;
protected:
/// The MLIR remark kind; used to determine the corresponding diagnostic kind
/// in the LLVM remark streamer.
RemarkKind remarkKind;
/// Name of the enclosing function-like interface, if available.
StringRef functionName;
Location loc;
/// Category name (high-level classification), e.g. "Unroll" or "UnrollAndJam".
StringRef categoryName;
/// Sub-category name (finer-grained classification), e.g. a pass name.
StringRef subCategoryName;
/// Combined name for category and sub-category
SmallString<64> fullCategoryName;
/// Remark identifier
StringRef remarkName;
/// Args collected via the streaming interface.
SmallVector<Arg, 4> args;
private:
/// Convert the MLIR diagnostic severity to LLVM diagnostic severity.
static llvm::DiagnosticSeverity
makeLLVMSeverity(DiagnosticSeverity severity) {
switch (severity) {
case DiagnosticSeverity::Note:
return llvm::DiagnosticSeverity::DS_Note;
case DiagnosticSeverity::Warning:
return llvm::DiagnosticSeverity::DS_Warning;
case DiagnosticSeverity::Error:
return llvm::DiagnosticSeverity::DS_Error;
case DiagnosticSeverity::Remark:
return llvm::DiagnosticSeverity::DS_Remark;
}
llvm_unreachable("Unknown diagnostic severity");
}
/// Convert the MLIR remark kind to LLVM diagnostic kind.
static llvm::DiagnosticKind makeLLVMKind(RemarkKind remarkKind) {
switch (remarkKind) {
case RemarkKind::RemarkUnknown:
return llvm::DiagnosticKind::DK_Generic;
case RemarkKind::RemarkPassed:
return llvm::DiagnosticKind::DK_OptimizationRemark;
case RemarkKind::RemarkMissed:
return llvm::DiagnosticKind::DK_OptimizationRemarkMissed;
case RemarkKind::RemarkFailure:
return llvm::DiagnosticKind::DK_OptimizationFailure;
case RemarkKind::RemarkAnalysis:
return llvm::DiagnosticKind::DK_OptimizationRemarkAnalysis;
}
llvm_unreachable("Unknown diagnostic kind");
}
};
inline Remark &operator<<(Remark &r, StringRef s) {
r.insert(s);
return r;
}
inline Remark &&operator<<(Remark &&r, StringRef s) {
r.insert(s);
return std::move(r);
}
inline Remark &operator<<(Remark &r, const Remark::Arg &kv) {
r.insert(kv);
return r;
}
//===----------------------------------------------------------------------===//
// Shorthand aliases for different kinds of remarks.
//===----------------------------------------------------------------------===//
template <RemarkKind K, DiagnosticSeverity S>
class OptRemarkBase final : public Remark {
public:
explicit OptRemarkBase(Location loc, RemarkOpts opts)
: Remark(K, S, loc, opts) {}
};
using OptRemarkAnalysis =
OptRemarkBase<RemarkKind::RemarkAnalysis, DiagnosticSeverity::Remark>;
using OptRemarkPass =
OptRemarkBase<RemarkKind::RemarkPassed, DiagnosticSeverity::Remark>;
using OptRemarkMissed =
OptRemarkBase<RemarkKind::RemarkMissed, DiagnosticSeverity::Remark>;
using OptRemarkFailure =
OptRemarkBase<RemarkKind::RemarkFailure, DiagnosticSeverity::Remark>;
class RemarkEngine;
//===----------------------------------------------------------------------===//
// InFlightRemark
//===----------------------------------------------------------------------===//
/// Lazy text building for zero cost string formatting.
struct LazyTextBuild {
llvm::StringRef key;
std::function<std::string()> thunk;
};
/// InFlightRemark is a RAII class that owns a Remark instance and allows
/// building the remark using the << operator. The remark is emitted when the
/// InFlightRemark instance is destroyed, which typically happens when its
/// scope ends.
/// Similar to InFlightDiagnostic, but for remarks.
class InFlightRemark {
public:
explicit InFlightRemark(std::unique_ptr<Remark> diag)
: remark(std::move(diag)) {}
InFlightRemark(RemarkEngine &eng, std::unique_ptr<Remark> diag)
: owner(&eng), remark(std::move(diag)) {}
InFlightRemark() = default; // empty ctor
InFlightRemark &operator<<(const LazyTextBuild &l) {
if (remark)
*remark << Remark::Arg(l.key, l.thunk());
return *this;
}
// Generic path, but *not* for Lazy
template <typename T, typename = std::enable_if_t<
!std::is_same_v<std::decay_t<T>, LazyTextBuild>>>
InFlightRemark &operator<<(T &&arg) {
if (remark)
*remark << std::forward<T>(arg);
return *this;
}
explicit operator bool() const { return remark != nullptr; }
~InFlightRemark();
InFlightRemark(const InFlightRemark &) = delete;
InFlightRemark &operator=(const InFlightRemark &) = delete;
InFlightRemark(InFlightRemark &&) = default;
InFlightRemark &operator=(InFlightRemark &&) = default;
private:
RemarkEngine *owner{nullptr};
std::unique_ptr<Remark> remark;
};
//===----------------------------------------------------------------------===//
// MLIR Remark Streamer
//===----------------------------------------------------------------------===//
/// Base class for MLIR remark streamers that stream optimization remarks to an
/// underlying backend. Derived classes must implement the
/// `streamOptimizationRemark` method to provide the actual streaming
/// implementation.
class MLIRRemarkStreamerBase {
public:
virtual ~MLIRRemarkStreamerBase() = default;
/// Stream an optimization remark to the underlying remark streamer. It is
/// called by the RemarkEngine to stream the optimization remarks.
///
/// It must be overridden by the derived classes to provide
/// the actual streaming implementation.
virtual void streamOptimizationRemark(const Remark &remark) = 0;
virtual void finalize() {} // optional
};
//===----------------------------------------------------------------------===//
// Remark Engine (MLIR Context will own this class)
//===----------------------------------------------------------------------===//
class RemarkEngine {
private:
/// Regex that filters missed optimization remarks; only matching ones are
/// reported.
std::optional<llvm::Regex> missFilter;
/// Regex that filters passed optimization remarks.
std::optional<llvm::Regex> passedFilter;
/// Regex that filters analysis remarks.
std::optional<llvm::Regex> analysisFilter;
/// Regex that filters failed optimization remarks.
std::optional<llvm::Regex> failedFilter;
/// The MLIR remark streamer that will be used to emit the remarks.
std::unique_ptr<MLIRRemarkStreamerBase> remarkStreamer;
/// When enabled, the engine also mirrors remarks via mlir::emitRemark.
bool printAsEmitRemarks = false;
/// Return true if missed optimization remarks are enabled for the given
/// category.
bool isMissedOptRemarkEnabled(StringRef categoryName) const;
/// Return true if passed optimization remarks are enabled for the given
/// category.
bool isPassedOptRemarkEnabled(StringRef categoryName) const;
/// Return true if analysis remarks are enabled for the given category.
bool isAnalysisOptRemarkEnabled(StringRef categoryName) const;
/// Return true if failed optimization remarks are enabled for the given
/// category.
bool isFailedOptRemarkEnabled(StringRef categoryName) const;
/// Return true if any kind of remark is enabled for the given category.
bool isAnyRemarkEnabled(StringRef categoryName) const {
return isMissedOptRemarkEnabled(categoryName) ||
isPassedOptRemarkEnabled(categoryName) ||
isFailedOptRemarkEnabled(categoryName) ||
isAnalysisOptRemarkEnabled(categoryName);
}
/// Construct an in-flight remark of type RemarkT owned by this engine; the
/// remark is reported through the main remark streamer when the
/// InFlightRemark is destroyed.
template <typename RemarkT, typename... Args>
InFlightRemark makeRemark(Args &&...args);
template <typename RemarkT>
InFlightRemark emitIfEnabled(Location loc, RemarkOpts opts,
bool (RemarkEngine::*isEnabled)(StringRef)
const);
public:
/// Default constructor is deleted, use the other constructor.
RemarkEngine() = delete;
/// Constructs the remark engine with optional category filters. If a filter
/// is not provided for a remark kind, that kind is disabled. The filters are
/// used to select which remarks are emitted.
RemarkEngine(bool printAsEmitRemarks, const RemarkCategories &cats);
/// Destructor; finalizes the remark streamer (e.g., flushes and closes any
/// output file).
~RemarkEngine();
/// Set up the remark engine with the given streamer; on failure, `errMsg` is
/// populated with the error message.
LogicalResult initialize(std::unique_ptr<MLIRRemarkStreamerBase> streamer,
std::string *errMsg);
/// Report a remark.
void report(const Remark &&remark);
/// Report a passed optimization remark; this creates an InFlightRemark that
/// can be used to build the remark using the << operator.
InFlightRemark emitOptimizationRemark(Location loc, RemarkOpts opts);
/// Report a missed optimization remark; this creates an InFlightRemark that
/// can be used to build the remark using the << operator.
InFlightRemark emitOptimizationRemarkMiss(Location loc, RemarkOpts opts);
/// Report a failed optimization remark; this creates an InFlightRemark that
/// can be used to build the remark using the << operator.
InFlightRemark emitOptimizationRemarkFailure(Location loc, RemarkOpts opts);
/// Report an analysis remark; this creates an InFlightRemark that
/// can be used to build the remark using the << operator.
InFlightRemark emitOptimizationRemarkAnalysis(Location loc, RemarkOpts opts);
};
template <typename Fn, typename... Args>
inline InFlightRemark withEngine(Fn fn, Location loc, Args &&...args) {
MLIRContext *ctx = loc->getContext();
RemarkEngine *enginePtr = ctx->getRemarkEngine();
if (LLVM_UNLIKELY(enginePtr))
return (enginePtr->*fn)(loc, std::forward<Args>(args)...);
return {};
}
} // namespace mlir::remark::detail
namespace mlir::remark {
/// Create a Reason with llvm::formatv formatting.
template <class... Ts>
inline detail::LazyTextBuild reason(const char *fmt, Ts &&...ts) {
return {"Reason", [=] { return llvm::formatv(fmt, ts...).str(); }};
}
/// Create a Suggestion with llvm::formatv formatting.
template <class... Ts>
inline detail::LazyTextBuild suggest(const char *fmt, Ts &&...ts) {
return {"Suggestion", [=] { return llvm::formatv(fmt, ts...).str(); }};
}
/// Create a Remark with llvm::formatv formatting.
template <class... Ts>
inline detail::LazyTextBuild add(const char *fmt, Ts &&...ts) {
return {"Remark", [=] { return llvm::formatv(fmt, ts...).str(); }};
}
template <class V>
inline detail::LazyTextBuild metric(StringRef key, V &&v) {
using DV = std::decay_t<V>;
return {key, [key, vv = DV(std::forward<V>(v))]() mutable {
// Reuse Arg's formatting logic and return just the value string.
return detail::Remark::Arg(key, std::move(vv)).val;
}};
}
//===----------------------------------------------------------------------===//
// Emitters
//===----------------------------------------------------------------------===//
/// Report an optimization remark that was passed.
inline detail::InFlightRemark passed(Location loc, RemarkOpts opts) {
return withEngine(&detail::RemarkEngine::emitOptimizationRemark, loc, opts);
}
/// Report an optimization remark that was missed.
inline detail::InFlightRemark missed(Location loc, RemarkOpts opts) {
return withEngine(&detail::RemarkEngine::emitOptimizationRemarkMiss, loc,
opts);
}
/// Report an optimization remark that failed.
inline detail::InFlightRemark failed(Location loc, RemarkOpts opts) {
return withEngine(&detail::RemarkEngine::emitOptimizationRemarkFailure, loc,
opts);
}
/// Report an optimization analysis remark.
inline detail::InFlightRemark analysis(Location loc, RemarkOpts opts) {
return withEngine(&detail::RemarkEngine::emitOptimizationRemarkAnalysis, loc,
opts);
}
//===----------------------------------------------------------------------===//
// Setup
//===----------------------------------------------------------------------===//
/// Set up remarks for the context. This function enables the remark engine
/// and installs the streamer used for optimization remarks. The remark
/// categories filter the remarks emitted by the engine; if a category is not
/// specified, remarks of that kind are not emitted. If `printAsEmitRemarks`
/// is true, remarks are also printed via mlir::emitRemark. 'streamer' must
/// inherit from MLIRRemarkStreamerBase and is used to stream the remarks; it
/// may be null when only mirroring to diagnostics is desired.
LogicalResult enableOptimizationRemarks(
MLIRContext &ctx,
std::unique_ptr<remark::detail::MLIRRemarkStreamerBase> streamer,
const remark::RemarkCategories &cats, bool printAsEmitRemarks = false);
} // namespace mlir::remark
#endif // MLIR_IR_REMARKS_H

@@ -0,0 +1,49 @@
//===- RemarkStreamer.h - MLIR Optimization Remark -------------*- C++ -*-===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
//
// This file defines the LLVMRemarkStreamer class, which plugs MLIR remarks
// into LLVM's remark streamer.
//
//===----------------------------------------------------------------------===//
#include "mlir/IR/Remarks.h"
#include "llvm/Remarks/RemarkStreamer.h"
#include "llvm/Support/ToolOutputFile.h"
namespace mlir::remark::detail {
/// Concrete streamer that writes LLVM optimization remarks to a file
/// (YAML or Bitstream). Lives outside core.
class LLVMRemarkStreamer final : public MLIRRemarkStreamerBase {
public:
static FailureOr<std::unique_ptr<MLIRRemarkStreamerBase>>
createToFile(llvm::StringRef path, llvm::remarks::Format fmt);
void streamOptimizationRemark(const Remark &remark) override;
void finalize() override {}
~LLVMRemarkStreamer() override;
private:
LLVMRemarkStreamer() = default;
std::unique_ptr<class llvm::remarks::RemarkStreamer> remarkStreamer;
std::unique_ptr<class llvm::ToolOutputFile> file;
};
} // namespace mlir::remark::detail
namespace mlir::remark {
/// Enable optimization remarks to a file with the given path and format.
/// The remark categories are used to filter the remarks that are emitted.
/// If the printAsEmitRemarks flag is set, remarks will also be printed using
/// mlir::emitRemarks.
LogicalResult enableOptimizationRemarksWithLLVMStreamer(
MLIRContext &ctx, StringRef filePath, llvm::remarks::Format fmt,
const RemarkCategories &cat, bool printAsEmitRemarks = false);
} // namespace mlir::remark

#endif // MLIR_REMARK_REMARKSTREAMER_H

@@ -13,6 +13,7 @@ add_subdirectory(Parser)
add_subdirectory(Pass)
add_subdirectory(Query)
add_subdirectory(Reducer)
add_subdirectory(Remark)
add_subdirectory(Rewrite)
add_subdirectory(Support)
add_subdirectory(TableGen)

@@ -33,6 +33,7 @@ add_mlir_library(MLIRIR
PatternMatch.cpp
Region.cpp
RegionKindInterface.cpp
Remarks.cpp
SymbolTable.cpp
TensorEncoding.cpp
Types.cpp

@@ -25,6 +25,7 @@
#include "mlir/IR/Location.h"
#include "mlir/IR/OpImplementation.h"
#include "mlir/IR/OperationSupport.h"
#include "mlir/IR/Remarks.h"
#include "llvm/ADT/DenseMap.h"
#include "llvm/ADT/Twine.h"
#include "llvm/Support/Allocator.h"
@@ -133,6 +134,11 @@ public:
//===--------------------------------------------------------------------===//
DiagnosticEngine diagEngine;
//===--------------------------------------------------------------------===//
// Remark
//===--------------------------------------------------------------------===//
std::unique_ptr<remark::detail::RemarkEngine> remarkEngine;
//===--------------------------------------------------------------------===//
// Options
//===--------------------------------------------------------------------===//
@@ -387,6 +393,19 @@ bool MLIRContext::hasActionHandler() { return (bool)getImpl().actionHandler; }
/// Returns the diagnostic engine for this context.
DiagnosticEngine &MLIRContext::getDiagEngine() { return getImpl().diagEngine; }
//===----------------------------------------------------------------------===//
// Remark Handlers
//===----------------------------------------------------------------------===//
void MLIRContext::setRemarkEngine(
std::unique_ptr<remark::detail::RemarkEngine> engine) {
getImpl().remarkEngine = std::move(engine);
}
remark::detail::RemarkEngine *MLIRContext::getRemarkEngine() {
return getImpl().remarkEngine.get();
}
//===----------------------------------------------------------------------===//
// Dialect and Operation Registration
//===----------------------------------------------------------------------===//

mlir/lib/IR/Remarks.cpp (new file)

@@ -0,0 +1,279 @@
//===- Remarks.cpp - MLIR Remarks -----------------------------------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
#include "mlir/IR/Remarks.h"
#include "mlir/IR/BuiltinAttributes.h"
#include "mlir/IR/Diagnostics.h"
#include "mlir/IR/Value.h"
#include "llvm/ADT/StringExtras.h"
#include "llvm/ADT/StringRef.h"
using namespace mlir::remark::detail;
//------------------------------------------------------------------------------
// Remark
//------------------------------------------------------------------------------
Remark::Arg::Arg(llvm::StringRef k, Value v) : key(k) {
llvm::raw_string_ostream os(val);
os << v;
}
Remark::Arg::Arg(llvm::StringRef k, Type t) : key(k) {
llvm::raw_string_ostream os(val);
os << t;
}
void Remark::insert(llvm::StringRef s) { args.emplace_back(s); }
void Remark::insert(Arg a) { args.push_back(std::move(a)); }
// Simple helper to print key=val list (sorted).
static void printArgs(llvm::raw_ostream &os, llvm::ArrayRef<Remark::Arg> args) {
if (args.empty())
return;
llvm::SmallVector<Remark::Arg, 8> sorted(args.begin(), args.end());
llvm::sort(sorted, [](const Remark::Arg &a, const Remark::Arg &b) {
return a.key < b.key;
});
for (size_t i = 0; i < sorted.size(); ++i) {
const auto &a = sorted[i];
os << a.key << "=";
llvm::StringRef val(a.val);
bool needsQuote = val.contains(' ') || val.contains(',') ||
val.contains('{') || val.contains('}');
if (needsQuote)
os << '"' << val << '"';
else
os << val;
if (i + 1 < sorted.size())
os << ", ";
}
}
/// Print the remark to the given output stream.
/// Example output:
// clang-format off
/// [Missed] Category: Loop | Pass:Unroller | Function=main | Reason="tripCount=4 < threshold=256"
/// [Failure] LoopOptimizer | Reason="failed due to unsupported pattern"
// clang-format on
void Remark::print(llvm::raw_ostream &os, bool printLocation) const {
// Header: [Type] pass:remarkName
StringRef type = getRemarkTypeString();
StringRef categoryName = getFullCategoryName();
StringRef name = remarkName;
os << '[' << type << "] ";
os << name << " | ";
if (!categoryName.empty())
os << "Category:" << categoryName << " | ";
if (!functionName.empty())
os << "Function=" << getFunction() << " | ";
if (printLocation) {
if (auto flc = mlir::dyn_cast<mlir::FileLineColLoc>(getLocation()))
os << " @" << flc.getFilename() << ":" << flc.getLine() << ":"
<< flc.getColumn();
}
printArgs(os, getArgs());
}
std::string Remark::getMsg() const {
std::string s;
llvm::raw_string_ostream os(s);
print(os);
os.flush();
return s;
}
llvm::StringRef Remark::getRemarkTypeString() const {
switch (remarkKind) {
case RemarkKind::RemarkUnknown:
return "Unknown";
case RemarkKind::RemarkPassed:
return "Passed";
case RemarkKind::RemarkMissed:
return "Missed";
case RemarkKind::RemarkFailure:
return "Failure";
case RemarkKind::RemarkAnalysis:
return "Analysis";
}
llvm_unreachable("Unknown remark kind");
}
llvm::remarks::Type Remark::getRemarkType() const {
switch (remarkKind) {
case RemarkKind::RemarkUnknown:
return llvm::remarks::Type::Unknown;
case RemarkKind::RemarkPassed:
return llvm::remarks::Type::Passed;
case RemarkKind::RemarkMissed:
return llvm::remarks::Type::Missed;
case RemarkKind::RemarkFailure:
return llvm::remarks::Type::Failure;
case RemarkKind::RemarkAnalysis:
return llvm::remarks::Type::Analysis;
}
llvm_unreachable("Unknown remark kind");
}
llvm::remarks::Remark Remark::generateRemark() const {
auto locLambda = [&]() -> llvm::remarks::RemarkLocation {
if (auto flc = dyn_cast<FileLineColLoc>(getLocation()))
return {flc.getFilename(), flc.getLine(), flc.getColumn()};
return {"<unknown file>", 0, 0};
};
llvm::remarks::Remark r; // The result.
r.RemarkType = getRemarkType();
r.RemarkName = getRemarkName();
// MLIR remarks are keyed by category and sub-category rather than a pass
// name; reuse the PassName field for the combined category name.
r.PassName = getFullCategoryName();
r.FunctionName = getFunction();
r.Loc = locLambda();
for (const Remark::Arg &arg : getArgs()) {
r.Args.emplace_back();
r.Args.back().Key = arg.key;
r.Args.back().Val = arg.val;
}
return r;
}
//===----------------------------------------------------------------------===//
// InFlightRemark
//===----------------------------------------------------------------------===//
InFlightRemark::~InFlightRemark() {
if (remark && owner)
owner->report(std::move(*remark));
owner = nullptr;
}
//===----------------------------------------------------------------------===//
// Remark Engine
//===----------------------------------------------------------------------===//
template <typename RemarkT, typename... Args>
InFlightRemark RemarkEngine::makeRemark(Args &&...args) {
static_assert(std::is_base_of_v<Remark, RemarkT>,
"RemarkT must derive from Remark");
return InFlightRemark(*this,
std::make_unique<RemarkT>(std::forward<Args>(args)...));
}
template <typename RemarkT>
InFlightRemark
RemarkEngine::emitIfEnabled(Location loc, RemarkOpts opts,
bool (RemarkEngine::*isEnabled)(StringRef) const) {
return (this->*isEnabled)(opts.categoryName) ? makeRemark<RemarkT>(loc, opts)
: InFlightRemark{};
}
bool RemarkEngine::isMissedOptRemarkEnabled(StringRef categoryName) const {
return missFilter && missFilter->match(categoryName);
}
bool RemarkEngine::isPassedOptRemarkEnabled(StringRef categoryName) const {
return passedFilter && passedFilter->match(categoryName);
}
bool RemarkEngine::isAnalysisOptRemarkEnabled(StringRef categoryName) const {
return analysisFilter && analysisFilter->match(categoryName);
}
bool RemarkEngine::isFailedOptRemarkEnabled(StringRef categoryName) const {
return failedFilter && failedFilter->match(categoryName);
}
InFlightRemark RemarkEngine::emitOptimizationRemark(Location loc,
RemarkOpts opts) {
return emitIfEnabled<OptRemarkPass>(loc, opts,
&RemarkEngine::isPassedOptRemarkEnabled);
}
InFlightRemark RemarkEngine::emitOptimizationRemarkMiss(Location loc,
RemarkOpts opts) {
return emitIfEnabled<OptRemarkMissed>(
loc, opts, &RemarkEngine::isMissedOptRemarkEnabled);
}
InFlightRemark RemarkEngine::emitOptimizationRemarkFailure(Location loc,
RemarkOpts opts) {
return emitIfEnabled<OptRemarkFailure>(
loc, opts, &RemarkEngine::isFailedOptRemarkEnabled);
}
InFlightRemark RemarkEngine::emitOptimizationRemarkAnalysis(Location loc,
RemarkOpts opts) {
return emitIfEnabled<OptRemarkAnalysis>(
loc, opts, &RemarkEngine::isAnalysisOptRemarkEnabled);
}
//===----------------------------------------------------------------------===//
// RemarkEngine
//===----------------------------------------------------------------------===//
void RemarkEngine::report(const Remark &&remark) {
// Stream the remark
if (remarkStreamer)
remarkStreamer->streamOptimizationRemark(remark);
// Print using MLIR's diagnostic
if (printAsEmitRemarks)
emitRemark(remark.getLocation(), remark.getMsg());
}
RemarkEngine::~RemarkEngine() {
if (remarkStreamer)
remarkStreamer->finalize();
}
llvm::LogicalResult
RemarkEngine::initialize(std::unique_ptr<MLIRRemarkStreamerBase> streamer,
std::string *errMsg) {
// If you need to validate categories/filters, do so here and set errMsg.
remarkStreamer = std::move(streamer);
return success();
}
RemarkEngine::RemarkEngine(bool printAsEmitRemarks,
const RemarkCategories &cats)
: printAsEmitRemarks(printAsEmitRemarks) {
if (cats.passed)
passedFilter = llvm::Regex(cats.passed.value());
if (cats.missed)
missFilter = llvm::Regex(cats.missed.value());
if (cats.analysis)
analysisFilter = llvm::Regex(cats.analysis.value());
if (cats.failed)
failedFilter = llvm::Regex(cats.failed.value());
}
llvm::LogicalResult mlir::remark::enableOptimizationRemarks(
MLIRContext &ctx,
std::unique_ptr<remark::detail::MLIRRemarkStreamerBase> streamer,
const remark::RemarkCategories &cats, bool printAsEmitRemarks) {
auto engine =
std::make_unique<remark::detail::RemarkEngine>(printAsEmitRemarks, cats);
std::string errMsg;
if (failed(engine->initialize(std::move(streamer), &errMsg))) {
llvm::report_fatal_error(
llvm::Twine("Failed to initialize remark engine. Error: ") + errMsg);
}
ctx.setRemarkEngine(std::move(engine));
return success();
}

@@ -0,0 +1,14 @@
add_mlir_library(MLIRRemarkStreamer
RemarkStreamer.cpp
ADDITIONAL_HEADER_DIRS
${MLIR_MAIN_INCLUDE_DIR}/mlir/Remark
LINK_LIBS PUBLIC
MLIRIR
LINK_COMPONENTS
Remarks
Core
BitstreamReader
)

@@ -0,0 +1,69 @@
#include "mlir/Remark/RemarkStreamer.h"
#include "mlir/IR/MLIRContext.h"
#include "mlir/IR/Remarks.h"
#include "llvm/Remarks/RemarkSerializer.h"
#include "llvm/Remarks/RemarkStreamer.h"
#include "llvm/Support/Error.h"
#include "llvm/Support/FileSystem.h"
#include "llvm/Support/ToolOutputFile.h"
namespace mlir::remark::detail {
FailureOr<std::unique_ptr<MLIRRemarkStreamerBase>>
LLVMRemarkStreamer::createToFile(llvm::StringRef path,
llvm::remarks::Format fmt) {
std::error_code ec;
// Use error_code ctor; YAML is text. (Bitstream also works fine here.)
auto f =
std::make_unique<llvm::ToolOutputFile>(path, ec, llvm::sys::fs::OF_Text);
if (ec)
return failure();
auto serOr = llvm::remarks::createRemarkSerializer(
fmt, llvm::remarks::SerializerMode::Separate, f->os());
if (!serOr) {
llvm::consumeError(serOr.takeError());
return failure();
}
auto rs =
std::make_unique<llvm::remarks::RemarkStreamer>(std::move(*serOr), path);
auto impl = std::unique_ptr<LLVMRemarkStreamer>(new LLVMRemarkStreamer());
impl->remarkStreamer = std::move(rs);
impl->file = std::move(f);
return std::unique_ptr<MLIRRemarkStreamerBase>(std::move(impl));
}
void LLVMRemarkStreamer::streamOptimizationRemark(const Remark &remark) {
if (!remarkStreamer->matchesFilter(remark.getCategoryName()))
return;
// First, convert the diagnostic to a remark.
llvm::remarks::Remark r = remark.generateRemark();
// Then, emit the remark through the serializer.
remarkStreamer->getSerializer().emit(r);
}
LLVMRemarkStreamer::~LLVMRemarkStreamer() {
if (file && remarkStreamer)
file->keep();
}
} // namespace mlir::remark::detail
namespace mlir::remark {
LogicalResult enableOptimizationRemarksWithLLVMStreamer(
MLIRContext &ctx, StringRef path, llvm::remarks::Format fmt,
const RemarkCategories &cat, bool printAsEmitRemarks) {
FailureOr<std::unique_ptr<detail::MLIRRemarkStreamerBase>> sOr =
detail::LLVMRemarkStreamer::createToFile(path, fmt);
if (failed(sOr))
return failure();
return remark::enableOptimizationRemarks(ctx, std::move(*sOr), cat,
printAsEmitRemarks);
}
} // namespace mlir::remark

@@ -14,6 +14,7 @@ add_mlir_unittest(MLIRIRTests
MemrefLayoutTest.cpp
OperationSupportTest.cpp
PatternMatchTest.cpp
RemarkTest.cpp
ShapedTypeTest.cpp
SymbolTableTest.cpp
TypeTest.cpp
@@ -28,3 +29,4 @@ add_mlir_unittest(MLIRIRTests
target_include_directories(MLIRIRTests PRIVATE "${MLIR_BINARY_DIR}/test/lib/Dialect/Test")
mlir_target_link_libraries(MLIRIRTests PRIVATE MLIRIR)
target_link_libraries(MLIRIRTests PRIVATE MLIRTestDialect)
target_link_libraries(MLIRIRTests PRIVATE MLIRRemarkStreamer)

@@ -0,0 +1,315 @@
//===- RemarkTest.cpp - Remark unit tests -------------------------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
#include "mlir/IR/Diagnostics.h"
#include "mlir/IR/MLIRContext.h"
#include "mlir/IR/Remarks.h"
#include "mlir/Remark/RemarkStreamer.h"
#include "mlir/Support/TypeID.h"
#include "llvm/ADT/StringRef.h"
#include "llvm/IR/LLVMRemarkStreamer.h"
#include "llvm/Remarks/RemarkFormat.h"
#include "llvm/Support/FileSystem.h"
#include "llvm/Support/LogicalResult.h"
#include "llvm/Support/YAMLParser.h"
#include "gmock/gmock.h"
#include "gtest/gtest.h"
#include <optional>
using namespace llvm;
using namespace mlir;
using namespace testing;
namespace {
TEST(Remark, TestOutputOptimizationRemark) {
std::string categoryVectorizer("Vectorizer");
std::string categoryRegister("Register");
std::string categoryUnroll("Unroll");
std::string categoryInliner("Inliner");
std::string categoryReroller("Reroller");
std::string myPassname1("myPass1");
SmallString<64> tmpPathStorage;
sys::fs::createUniquePath("remarks-%%%%%%.yaml", tmpPathStorage,
/*MakeAbsolute=*/true);
std::string yamlFile =
std::string(tmpPathStorage.data(), tmpPathStorage.size());
ASSERT_FALSE(yamlFile.empty());
{
MLIRContext context;
Location loc = UnknownLoc::get(&context);
context.printOpOnDiagnostic(true);
context.printStackTraceOnDiagnostic(true);
// Setup the remark engine
mlir::remark::RemarkCategories cats{/*passed=*/categoryVectorizer,
/*missed=*/categoryUnroll,
/*analysis=*/categoryRegister,
/*failed=*/categoryInliner};
LogicalResult isEnabled =
mlir::remark::enableOptimizationRemarksWithLLVMStreamer(
context, yamlFile, llvm::remarks::Format::YAML, cats);
ASSERT_TRUE(succeeded(isEnabled)) << "Failed to enable remark engine";
// PASS: something succeeded
remark::passed(loc, remark::RemarkOpts::name("Pass1")
.category(categoryVectorizer)
.subCategory(myPassname1)
.function("bar"))
<< "vectorized loop" << remark::metric("tripCount", 128);
// ANALYSIS: neutral insight
remark::analysis(
loc, remark::RemarkOpts::name("Analysis1").category(categoryRegister))
<< "Kernel uses 168 registers";
// MISSED: explain why + suggest a fix
remark::missed(loc, remark::RemarkOpts::name("Miss1")
.category(categoryUnroll)
.subCategory(myPassname1))
<< remark::reason("not profitable at this size")
<< remark::suggest("increase unroll factor to >=4");
// FAILURE: action attempted but failed
remark::failed(loc, remark::RemarkOpts::name("Failed1")
.category(categoryInliner)
.subCategory(myPassname1))
<< remark::reason("failed due to unsupported pattern");
// FAILURE: Won't show up
remark::failed(loc, remark::RemarkOpts::name("Failed2")
.category(categoryReroller)
.subCategory(myPassname1))
<< remark::reason("failed due to rerolling pattern");
}
// Read the file
auto bufferOrErr = MemoryBuffer::getFile(yamlFile);
ASSERT_TRUE(static_cast<bool>(bufferOrErr)) << "Failed to open remarks file";
std::string content = bufferOrErr.get()->getBuffer().str();
EXPECT_THAT(content, HasSubstr("--- !Passed"));
EXPECT_THAT(content, HasSubstr("Name: Pass1"));
EXPECT_THAT(content, HasSubstr("Pass: 'Vectorizer:myPass1'"));
EXPECT_THAT(content, HasSubstr("Function: bar"));
EXPECT_THAT(content, HasSubstr("Remark: vectorized loop"));
EXPECT_THAT(content, HasSubstr("tripCount: '128'"));
EXPECT_THAT(content, HasSubstr("--- !Analysis"));
EXPECT_THAT(content, HasSubstr("Pass: Register"));
EXPECT_THAT(content, HasSubstr("Name: Analysis1"));
EXPECT_THAT(content, HasSubstr("Function: '<unknown function>'"));
EXPECT_THAT(content, HasSubstr("Remark: Kernel uses 168 registers"));
EXPECT_THAT(content, HasSubstr("--- !Missed"));
EXPECT_THAT(content, HasSubstr("Pass: 'Unroll:myPass1'"));
EXPECT_THAT(content, HasSubstr("Name: Miss1"));
EXPECT_THAT(content, HasSubstr("Function: '<unknown function>'"));
EXPECT_THAT(content,
HasSubstr("Reason: not profitable at this size"));
EXPECT_THAT(content,
HasSubstr("Suggestion: 'increase unroll factor to >=4'"));
EXPECT_THAT(content, HasSubstr("--- !Failure"));
EXPECT_THAT(content, HasSubstr("Pass: 'Inliner:myPass1'"));
EXPECT_THAT(content, HasSubstr("Name: Failed1"));
EXPECT_THAT(content, HasSubstr("Function: '<unknown function>'"));
EXPECT_THAT(content,
HasSubstr("Reason: failed due to unsupported pattern"));
EXPECT_THAT(content, Not(HasSubstr("Failed2")));
EXPECT_THAT(content, Not(HasSubstr("Reroller")));
// Also verify document order to avoid false positives.
size_t iPassed = content.find("--- !Passed");
size_t iAnalysis = content.find("--- !Analysis");
size_t iMissed = content.find("--- !Missed");
size_t iFailure = content.find("--- !Failure");
ASSERT_NE(iPassed, std::string::npos);
ASSERT_NE(iAnalysis, std::string::npos);
ASSERT_NE(iMissed, std::string::npos);
ASSERT_NE(iFailure, std::string::npos);
EXPECT_LT(iPassed, iAnalysis);
EXPECT_LT(iAnalysis, iMissed);
EXPECT_LT(iMissed, iFailure);
}
TEST(Remark, TestNoOutputOptimizationRemark) {
const auto *pass1Msg = "My message";
std::string categoryFailName("myImportantCategory");
std::string myPassname1("myPass1");
std::string funcName("myFunc");
SmallString<64> tmpPathStorage;
sys::fs::createUniquePath("remarks-%%%%%%.yaml", tmpPathStorage,
/*MakeAbsolute=*/true);
std::string yamlFile =
std::string(tmpPathStorage.data(), tmpPathStorage.size());
ASSERT_FALSE(yamlFile.empty());
std::error_code ec =
llvm::sys::fs::remove(yamlFile, /*IgnoreNonExisting=*/true);
if (ec) {
FAIL() << "Failed to remove file " << yamlFile << ": " << ec.message();
}
{
MLIRContext context;
Location loc = UnknownLoc::get(&context);
remark::failed(loc, remark::RemarkOpts::name("myfail")
.category(categoryFailName)
.subCategory(myPassname1))
<< remark::reason(pass1Msg);
}
// No setup, so no output file should be created
// check!
bool fileExists = llvm::sys::fs::exists(yamlFile);
EXPECT_FALSE(fileExists)
<< "Expected no YAML file to be created without setupOptimizationRemarks";
}
TEST(Remark, TestOutputOptimizationRemarkDiagnostic) {
std::string categoryVectorizer("Vectorizer");
std::string categoryRegister("Register");
std::string categoryUnroll("Unroll");
std::string myPassname1("myPass1");
std::string fName("foo");
llvm::SmallVector<std::string> seenMsg;
{
MLIRContext context;
Location loc = UnknownLoc::get(&context);
context.printOpOnDiagnostic(true);
context.printStackTraceOnDiagnostic(true);
// Register a handler that captures the diagnostic.
ScopedDiagnosticHandler handler(&context, [&](Diagnostic &diag) {
seenMsg.push_back(diag.str());
return success();
});
// Setup the remark engine
mlir::remark::RemarkCategories cats{/*passed=*/categoryVectorizer,
/*missed=*/categoryUnroll,
/*analysis=*/categoryRegister,
/*failed=*/categoryUnroll};
LogicalResult isEnabled =
remark::enableOptimizationRemarks(context, nullptr, cats, true);
ASSERT_TRUE(succeeded(isEnabled)) << "Failed to enable remark engine";
// PASS: something succeeded
remark::passed(loc, remark::RemarkOpts::name("pass1")
.category(categoryVectorizer)
.function(fName)
.subCategory(myPassname1))
<< "vectorized loop" << remark::metric("tripCount", 128);
// ANALYSIS: neutral insight
remark::analysis(loc, remark::RemarkOpts::name("Analysis1")
.category(categoryRegister)
.function(fName))
<< "Kernel uses 168 registers";
// MISSED: explain why + suggest a fix
int target = 128;
int tripBad = 4;
int threshold = 256;
remark::missed(loc, {"", categoryUnroll, "unroller2", ""})
<< remark::reason("tripCount={0} < threshold={1}", tripBad, threshold);
remark::missed(loc, {"", categoryUnroll, "", ""})
<< remark::reason("tripCount={0} < threshold={1}", tripBad, threshold)
<< remark::suggest("increase unroll to {0}", target);
// FAILURE: action attempted but failed
remark::failed(loc, {"", categoryUnroll, "", ""})
<< remark::reason("failed due to unsupported pattern");
}
// clang-format off
unsigned long expectedSize = 5;
ASSERT_EQ(seenMsg.size(), expectedSize);
EXPECT_EQ(seenMsg[0], "[Passed] pass1 | Category:Vectorizer:myPass1 | Function=foo | Remark=\"vectorized loop\", tripCount=128");
EXPECT_EQ(seenMsg[1], "[Analysis] Analysis1 | Category:Register | Function=foo | Remark=\"Kernel uses 168 registers\"");
EXPECT_EQ(seenMsg[2], "[Missed] | Category:Unroll:unroller2 | Reason=\"tripCount=4 < threshold=256\"");
EXPECT_EQ(seenMsg[3], "[Missed] | Category:Unroll | Reason=\"tripCount=4 < threshold=256\", Suggestion=\"increase unroll to 128\"");
EXPECT_EQ(seenMsg[4], "[Failure] | Category:Unroll | Reason=\"failed due to unsupported pattern\"");
// clang-format on
}
/// Custom remark streamer that prints remarks to stderr.
class MyCustomStreamer : public remark::detail::MLIRRemarkStreamerBase {
public:
MyCustomStreamer() = default;
void streamOptimizationRemark(const remark::detail::Remark &remark) override {
llvm::errs() << "Custom remark: ";
remark.print(llvm::errs(), true);
llvm::errs() << "\n";
}
};
TEST(Remark, TestCustomOptimizationRemarkDiagnostic) {
testing::internal::CaptureStderr();
const auto *pass1Msg = "My message";
const auto *pass2Msg = "My another message";
const auto *pass3Msg = "Do not show this message";
std::string categoryLoopunroll("LoopUnroll");
std::string categoryInline("Inliner");
std::string myPassname1("myPass1");
std::string myPassname2("myPass2");
std::string funcName("myFunc");
std::string seenMsg = "";
{
MLIRContext context;
Location loc = UnknownLoc::get(&context);
// Setup the remark engine
mlir::remark::RemarkCategories cats{/*passed=*/categoryLoopunroll,
/*missed=*/std::nullopt,
/*analysis=*/std::nullopt,
/*failed=*/categoryLoopunroll};
LogicalResult isEnabled = remark::enableOptimizationRemarks(
context, std::make_unique<MyCustomStreamer>(), cats, true);
ASSERT_TRUE(succeeded(isEnabled)) << "Failed to enable remark engine";
// Remark 1: pass, category LoopUnroll
remark::passed(loc, {"", categoryLoopunroll, myPassname1, ""}) << pass1Msg;
// Remark 2: failure, category LoopUnroll
remark::failed(loc, {"", categoryLoopunroll, myPassname2, ""})
<< remark::reason(pass2Msg);
// Remark 3: pass, category Inline (should not be printed)
remark::passed(loc, {"", categoryInline, myPassname1, ""}) << pass3Msg;
}
llvm::errs().flush();
std::string errOut = ::testing::internal::GetCapturedStderr();
// Expect exactly two "Custom remark:" lines.
auto first = errOut.find("Custom remark:");
EXPECT_NE(first, std::string::npos);
auto second = errOut.find("Custom remark:", first + 1);
EXPECT_NE(second, std::string::npos);
auto third = errOut.find("Custom remark:", second + 1);
EXPECT_EQ(third, std::string::npos);
// Containment checks for messages.
EXPECT_NE(errOut.find(pass1Msg), std::string::npos); // printed
EXPECT_NE(errOut.find(pass2Msg), std::string::npos); // printed
EXPECT_EQ(errOut.find(pass3Msg), std::string::npos); // filtered out
}
} // namespace