Guray Ozen 3d41197d68
[MLIR] Introduce RemarkEngine + pluggable remark streaming (YAML/Bitstream) (#152474)
This PR implements structured, tooling-friendly optimization remarks
with zero cost unless enabled. It adds:
- `RemarkEngine`: collects finalized remarks within an `MLIRContext`.
- `MLIRRemarkStreamerBase`: an abstract class that streams remarks to a
  backend.
- Backends: `MLIRLLVMRemarkStreamer` (bridges to llvm::remarks →
  YAML/Bitstream) or your own custom streamer (a sketch appears below the
  overview diagram).
- Optional mirroring to the `DiagnosticEngine` (`printAsEmitRemarks` +
  categories).
- Off by default; no behavior change unless enabled. Thread-safe;
  ordering is best-effort.


## Overview

```
Passes (reportOptimization*)
         │
         ▼
+-------------------+
|  RemarkEngine     |   collects
+-------------------+
     │         │
     │ mirror  │ stream
     ▼         ▼
emitRemark    MLIRRemarkStreamerBase (abstract)
                   │
                   ├── MLIRLLVMRemarkStreamer → llvm::remarks → YAML | Bitstream
                   └── CustomStreamer → your sink
```
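
For the `CustomStreamer → your sink` path, a custom backend derives from `MLIRRemarkStreamerBase` and overrides its streaming hook. The sketch below is only illustrative: the header path, the namespaces, the hook name, and the `Remark` accessor are assumptions, not the verified API from this PR.

```
// Hypothetical custom backend. Header, namespaces, hook name, and the Remark
// accessor used here are assumptions; consult the remarks headers for the
// real interface.
#include "mlir/IR/Remarks.h"
#include "llvm/Support/raw_ostream.h"

namespace {
// Streams each finalized remark to stderr as a single line.
class StderrRemarkStreamer : public mlir::remark::detail::MLIRRemarkStreamerBase {
public:
  void streamOptimizationRemark(const mlir::remark::detail::Remark &remark) override {
    llvm::errs() << "[remark] " << remark.getRemarkName() << "\n";
  }
};
} // namespace
```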

## Enable the remark engine and plug in LLVM's remark streamer
```
// Enable once per MLIRContext. This installs `MLIRLLVMRemarkStreamer`, which
// writes the remarks in the requested format (YAML here) to `path`; `cats`
// holds the category filters.
mlir::remark::enableOptimizationRemarksToFile(
    ctx, path, llvm::remarks::Format::YAML, cats);
```
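
A fuller setup might look like the sketch below. The header includes and the `RemarkCategories` type name are assumptions made for illustration; only the `enableOptimizationRemarksToFile` call mirrors the snippet above.

```
// Hypothetical end-to-end setup. Headers and the `RemarkCategories` type name
// are assumptions; only the enableOptimizationRemarksToFile call is taken
// from the snippet above.
#include "mlir/IR/MLIRContext.h"
#include "mlir/IR/Remarks.h"
#include "llvm/Remarks/RemarkFormat.h"

static void setUpRemarks(mlir::MLIRContext &ctx) {
  // Assumed category-filter object selecting which remark categories are
  // collected (e.g. "Vectorizer", "Unroll").
  mlir::remark::RemarkCategories cats;

  // Enable once per MLIRContext; installs MLIRLLVMRemarkStreamer, which
  // bridges to llvm::remarks and writes YAML to the given file.
  mlir::remark::enableOptimizationRemarksToFile(
      ctx, "remarks.opt.yaml", llvm::remarks::Format::YAML, cats);
}
```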

## API to emit remarks
```
// Emit from a pass
remark::passed(loc, categoryVectorizer, myPassname1)
    << "vectorized loop";

remark::missed(loc, categoryUnroll, "MyPass")
    << remark::reason("not profitable at this size")     // creates a structured reason arg
    << remark::suggest("increase unroll factor to >=4"); // creates a structured suggestion arg

remark::passed(loc, categoryVectorizer, myPassname1)
    << "vectorized loop"
    << remark::metric("tripCount", 128);                 // creates a structured metric on the fly
```
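
As a usage sketch, these builders would typically be called from inside a pass with the operation's location. The pass class, helper, category string, and pass name below are made up for illustration; only the `remark::passed`/`missed`/`reason`/`suggest`/`metric` calls follow the snippet above.

```
// Hypothetical pass body. The pass class, tryToVectorize helper, category,
// and pass name are made up; the remark builder calls follow the API above.
void MyVectorizePass::runOnOperation() {
  mlir::Operation *op = getOperation();
  mlir::Location loc = op->getLoc();

  llvm::StringRef categoryVectorizer = "Vectorizer";
  llvm::StringRef passName = "my-vectorize";

  if (tryToVectorize(op)) {
    mlir::remark::passed(loc, categoryVectorizer, passName)
        << "vectorized loop"
        << mlir::remark::metric("tripCount", 128);
  } else {
    mlir::remark::missed(loc, categoryVectorizer, passName)
        << mlir::remark::reason("not profitable at this size")
        << mlir::remark::suggest("increase vector width");
  }
}
```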
2025-08-21 16:02:31 +02:00