508 Commits

Author SHA1 Message Date
Andrzej Warzyński
8db272ffcf
[mlir][SparseTensor] Re-enable tests on AArch64 (#143387)
These tests were disabled in https://reviews.llvm.org/D136273, due to:
* https://github.com/llvm/llvm-project/issues/58465

That issue has now been resolved, so we should be able to re-enable
these tests.
2025-06-20 14:25:36 +01:00
Matthias Springer
5953f191b9
[mlir][sparse_tensor] Fix memory leak in sparse_index_dense.mlir (#137454) 2025-04-28 17:24:27 +02:00
Qinkun Bao
91d2ecf0d5
[NFC] Fix some typos in libc and mlir comments (#133374) 2025-03-28 15:52:37 -04:00
Andrea Faulds
eb206e9ea8
[mlir] Rename mlir-cpu-runner to mlir-runner (#123776)
With the removal of mlir-vulkan-runner (as part of #73457) in
e7e3c45bc70904e24e2b3221ac8521e67eb84668, mlir-cpu-runner is now the
only runner for all CPU and GPU targets, and the "cpu" name has been
misleading for some time already. This commit renames it to mlir-runner.
2025-01-24 14:08:38 +01:00
Christopher Bate
ced2fc7819
[mlir][bufferization] Fix OneShotBufferize when defaultMemorySpaceFn is used (#91524)
As described in issue llvm/llvm-project#91518, a previous PR
llvm/llvm-project#78484 introduced the `defaultMemorySpaceFn` into
bufferization options, allowing one to inform OneShotBufferize that it
should use a specified function to derive the memory space attribute
from the encoding attribute attached to tensor types.

However, introducing this feature exposed unhandled edge cases,
examples of which are introduced by this change in the new test under

`test/Dialect/Bufferization/Transforms/one-shot-bufferize-encodings.mlir`.

Fixing the inconsistencies introduced by `defaultMemorySpaceFn` is
pretty simple. This change:

- Updates the `bufferization.to_memref` and `bufferization.to_tensor`
  operations to explicitly include operand and destination types,
  whereas previously they relied on type inference to deduce the
  tensor types. Since the type inference cannot recover the correct
  tensor encoding/memory space, the operand and result types must be
  explicitly included. This is a small assembly format change, but it
  touches a large number of test files.

- Makes minor updates to other bufferization functions to handle the
  changes in building the above ops.

- Updates bufferization of `tensor.from_elements` to handle memory
  space.


Integration/upgrade guide:

In downstream projects, if you have tests or MLIR files that explicitly
use
`bufferization.to_tensor` or `bufferization.to_memref`, then update
them to the new assembly format as follows:

```
%1 = bufferization.to_memref %0 : memref<10xf32>
%2 = bufferization.to_tensor %1 : memref<10xf32>
```

becomes

```
%1 = bufferization.to_memref %0 : tensor<10xf32> to memref<10xf32>
%2 = bufferization.to_tensor %1 : memref<10xf32> to tensor<10xf32>
```
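For bulk-updating downstream test files, a small Python sketch (illustrative only, not part of the commit; the `upgrade` helper is hypothetical) can rewrite the trivial case where the tensor type is textually derivable from the memref type. Anything with layouts or encodings still needs manual attention, since the regex cannot recover the encoding:

```python
import re

def upgrade(line: str) -> str:
    # Rewrite the old single-type assembly to the new two-type form.
    # Only handles the trivial case: no encoding, no layout, so the
    # tensor type is the memref type with the keyword swapped.
    m = re.match(r"(\s*%\w+ = bufferization\.to_memref %\w+ : )memref<([^>]*)>\s*$", line)
    if m:
        return f"{m.group(1)}tensor<{m.group(2)}> to memref<{m.group(2)}>"
    m = re.match(r"(\s*%\w+ = bufferization\.to_tensor %\w+ : )memref<([^>]*)>\s*$", line)
    if m:
        return f"{m.group(1)}memref<{m.group(2)}> to tensor<{m.group(2)}>"
    return line  # leave anything else untouched

print(upgrade("%1 = bufferization.to_memref %0 : memref<10xf32>"))
# -> %1 = bufferization.to_memref %0 : tensor<10xf32> to memref<10xf32>
```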
2024-11-26 09:45:57 -07:00
Matthias Springer
8e33ff7d56
[mlir][GPU][NFC] Move dump-ptx.mlir test case (#111142) 2024-10-04 15:13:20 +02:00
Mateusz Sokół
e3686f1e44
[MLIR][sparse] Fix SparseTensor test_output.py test (#110882)
This PR fixes a test failure introduced in
https://github.com/llvm/llvm-project/pull/109135
2024-10-02 15:55:00 -07:00
Mateusz Sokół
b50ce4c81e
[MLIR][sparse] Add soa property to sparse_tensor Python bindings (#109135) 2024-10-02 09:07:55 -07:00
Aart Bik
0e34dbb4f4
[mlir][sparse] fix bug with all-dense assembler (#108615)
When only all-dense "sparse" tensors occur in a function prototype, the
assembler would skip the method conversion purely based on input/output
counts. However, it should rewrite based on the presence of any
annotation.
2024-09-13 17:24:48 -07:00
Peiming Liu
f607102a0d
[mlir][sparse] partially support lowering sparse coiteration loops to scf.while/for. (#105565) 2024-08-23 10:47:44 -07:00
Zhaoshi Zheng
fe55c34d19
[MLIR][test] Run SVE and SME Integration tests using qemu-aarch64 (#101568)
To run integration tests under qemu-aarch64 on an x64 host, the following
flags are added to the cmake command when building mlir/llvm:

      -DMLIR_INCLUDE_INTEGRATION_TESTS=ON \
      -DMLIR_RUN_ARM_SVE_TESTS=ON \
      -DMLIR_RUN_ARM_SME_TESTS=ON \
      -DARM_EMULATOR_EXECUTABLE="<...>/qemu-aarch64" \
      -DARM_EMULATOR_OPTIONS="-L /usr/aarch64-linux-gnu" \

      -DARM_EMULATOR_MLIR_CPU_RUNNER_EXECUTABLE="<llvm_arm64_build_top>/bin/mlir-cpu-runner-arm64" \
      -DARM_EMULATOR_LLI_EXECUTABLE="<llvm_arm64_build_top>/bin/lli" \
      -DARM_EMULATOR_UTILS_LIB_DIR="<llvm_arm64_build_top>/lib"

The last three above are prebuilt on, or cross-built for, an aarch64
host.

This patch introduces substitutions such as "%native_mlir_runner_utils" and
uses them in the SVE/SME integration tests. When configured to run under
qemu-aarch64, the MLIR runtime utility libraries are loaded from
ARM_EMULATOR_UTILS_LIB_DIR, if set.

Some tests marked with 'UNSUPPORTED: target=aarch64{{.*}}' are still run
when ARM_EMULATOR_EXECUTABLE is configured and the default target is
not aarch64. A lit config feature 'mlir_arm_emulator' is added in
mlir/test/lit.site.cfg.py.in and to the UNSUPPORTED list of such tests.
2024-08-15 21:37:51 -07:00
Peiming Liu
a02010b3e9
[mlir][sparse] support sparsifying sparse kernels to sparse-iterator-based loop (#95858) 2024-06-17 16:50:12 -07:00
Jay Foad
d4a0154902
[llvm-project] Fix typo "seperate" (#95373) 2024-06-13 20:20:27 +01:00
Yinying Li
eb177803bf
[mlir][sparse] Change sparse_tensor.print format (#91528)
1. Remove the trailing comma after the last element of the memref and add
a closing parenthesis.
2. Change integration tests to use the new format.
2024-05-09 12:09:40 -04:00
Aart Bik
c4e5a8a4d3
[mlir][sparse] support 'batch' dimensions in sparse_tensor.print (#91411) 2024-05-07 19:01:36 -07:00
Aart Bik
5c5116556f
[mlir][sparse] force a properly sized view on pos/crd/val under codegen (#91288)
Codegen "vectors" for pos/crd/val use the capacity as memref size, not
the actual used size. Although the sparsifier itself always uses just
the defined pos/crd/val parts, printing these and passing them back to a
runtime environment could benefit from wrapping the basic pos/crd/val
getters into a proper memref view that sets the right size.
2024-05-07 09:20:56 -07:00
Peiming Liu
78885395c8
[mlir][sparse] support tensor.pad on CSR tensors (#90687) 2024-05-01 15:37:38 -07:00
Peiming Liu
7cbaaed636
[mlir][sparse] fix sparse tests that uses reshape operations. (#90637)
Due to the generalization introduced in
https://github.com/llvm/llvm-project/pull/90040
2024-04-30 11:57:16 -07:00
Peiming Liu
dbe376651a
[mlir][sparse] handle padding on sparse levels. (#90527) 2024-04-30 09:53:44 -07:00
Matthias Springer
9f3334e993
[mlir][SparseTensor] Add missing dependent dialect to pass (#88870)
This commit fixes the following error when stopping the sparse compiler
pipeline after bufferization (e.g., with `test-analysis-only`):

```
LLVM ERROR: Building op `vector.print` but it isn't known in this MLIRContext: the dialect may not be loaded or this operation hasn't been added by the dialect. See also https://mlir.llvm.org/getting_started/Faq/#registered-loaded-dependent-whats-up-with-dialects-management
```
2024-04-17 09:20:55 +02:00
Aart Bik
f388a3a446
[mlir][sparse] update doc and examples of the [dis]assemble operations (#88213)
The doc and examples of the [dis]assemble operations did not reflect all
the recent changes on order of the operands. Also clarified some of the
text.
2024-04-10 09:42:12 -07:00
Aart Bik
dc4cfdbb8f
[mlir][sparse] provide an AoS "view" into sparse runtime support lib (#87116)
Note that even though the sparse runtime support lib always uses SoA
storage for COO (and provides correct codegen by means of views into
this storage), in some rare cases we need the true physical AoS storage
as a coordinate buffer. This PR provides that functionality by means of
a (costly) coordinate buffer call.

Since this is currently only used for testing/debugging by means of the
sparse_tensor.print method, this solution is acceptable. If we ever want
a performant version of this, we should truly support AoS storage of COO
in addition to the SoA used right now.
2024-03-29 15:30:36 -07:00
Matthias Springer
a5d7fc1d10
[mlir][sparse] Fix typos in comments (#86074) 2024-03-21 12:30:48 +09:00
Matthias Springer
b1752ddf0a
[mlir][sparse] Fix memory leaks (part 4) (#85729)
This commit fixes memory leaks in sparse tensor integration tests by
adding `bufferization.dealloc_tensor` ops.

Note: Buffer deallocation will be automated in the future with the
ownership-based buffer deallocation pass, making `dealloc_tensor`
obsolete (only codegen path, not when using the runtime library).

This commit fixes the remaining memory leaks in the MLIR test suite.
`check-mlir` now passes when built with ASAN.
2024-03-19 15:38:16 +09:00
Aart Bik
f3a8af07fa
[mlir][sparse] best effort finalization of escaping empty sparse tensors (#85482)
This change lifts the restriction that purely allocated empty sparse
tensors cannot escape the method. Instead it makes a best effort to add
a finalizing operation before the escape.

This assumes that
(1) we never build sparse tensors across method boundaries
    (e.g. allocate in one, insert in other method)
(2) if we have other uses of the empty allocation in the
    same method, we assume that either that op will fail
    or will do the finalization for us.

This is best-effort, but fixes some very obvious missing cases.
2024-03-15 16:43:09 -07:00
Matthias Springer
e8e8df4c1b
[mlir][sparse] Add has_runtime_library test op (#85355)
This commit adds a new test-only op:
`sparse_tensor.has_runtime_library`. The op returns "1" if the sparse
compiler runs in runtime library mode.

This op is useful for writing test cases that require different IR
depending on whether the sparse compiler runs in runtime library or
codegen mode.

This commit fixes a memory leak in `sparse_pack_d.mlir`. This test case
uses `sparse_tensor.assemble` to create a sparse tensor SSA value from
existing buffers. The runtime library reallocates and copies the existing
buffers; the codegen path does not. Therefore, the test requires
additional deallocations when running in runtime library mode.

Alternatives considered:
- Make the codegen path allocate. "Codegen" is the "default" compilation
mode and it is handling `sparse_tensor.assemble` correctly. The issue is
with the runtime library path, which should not allocate. Therefore, it
is better to put a workaround in the runtime library path than to work
around the issue with a new flag in the codegen path.
- Add a `sparse_tensor.runtime_only` attribute to
`bufferization.dealloc_tensor`. Verifying that the attribute can only be
attached to `bufferization.dealloc_tensor` may introduce an unwanted
dependency of `MLIRSparseTensorDialect` on `MLIRBufferizationDialect`.
2024-03-15 13:35:48 +09:00
Matthias Springer
5124eedd35
[mlir][sparse] Fix memory leaks (part 3) (#85184)
This commit fixes memory leaks in sparse tensor integration tests by
adding `bufferization.dealloc_tensor` ops.

Note: Buffer deallocation will be automated in the future with the
ownership-based buffer deallocation pass, making `dealloc_tensor`
obsolete (only codegen path, not when using the runtime library).
2024-03-15 13:31:47 +09:00
Yinying Li
88986d65e4
[mlir][sparse] Fix sparse_generate test (#85009)
std::uniform_int_distribution is implementation-defined and may behave
differently across systems.
2024-03-12 21:39:37 -04:00
Yinying Li
c1ac9a09d0
[mlir][sparse] Finish migrating integration tests to use sparse_tensor.print (#84997) 2024-03-12 20:57:21 -04:00
Peiming Liu
94e27c265a
[mlir][sparse] reuse tensor.insert operation to insert elements into a sparse tensor. (#84987)
2024-03-12 16:59:17 -07:00
Yinying Li
83c9244ae4
[mlir][sparse] Migrate more tests to use sparse_tensor.print (#84833)
Continuous efforts following #84249.
2024-03-11 18:44:32 -04:00
Yinying Li
4cb5a96af6
[mlir][sparse] Migrate more tests to sparse_tensor.print (#84249)
Continuous efforts following #83946.
2024-03-07 14:02:20 -05:00
Yinying Li
6e692e726a
[mlir][sparse] Migrate to sparse_tensor.print (#83946)
Continuous efforts following #83506.
2024-03-07 14:02:01 -05:00
Peiming Liu
fc9f1d49aa
[mlir][sparse] use a consistent order between [dis]assembleOp and storage layout. (#84079)
2024-03-06 09:57:41 -08:00
Aart Bik
b6ca602658
[mlir][sparse] migrate tests to sparse_tensor.print (#84055)
Continuing the efforts started in #83357
2024-03-05 12:17:45 -08:00
Aart Bik
662d821d44
[mlir][sparse] migrate datastructure tests to sparse_tensor.print (#83956)
Continuing the efforts started in llvm#83357
2024-03-04 21:14:32 -08:00
Aart Bik
275fe3ae2d
[mlir][sparse] support complex type for sparse_tensor.print (#83934)
With an integration test example
2024-03-04 17:14:31 -08:00
Aart Bik
05390df497
[mlir][sparse] migration to sparse_tensor.print (#83926)
Continuing the efforts started in #83357
2024-03-04 15:49:09 -08:00
Aart Bik
691fc7cdcc
[mlir][sparse] add dim/lvl information to sparse_tensor.print (#83913)
More information is more testing!
Also adjusts already migrated integration tests
2024-03-04 14:32:49 -08:00
Aart Bik
2e0693abbc
[mlir][sparse][gpu] migration to sparse_tensor.print (#83510)
Continuous efforts #83357 for our sparse CUDA tests
2024-03-04 09:49:54 -08:00
Yinying Li
5899599b01
[mlir][sparse] Migration to sparse_tensor.print (#83506)
Continuous efforts #83357. Previously reverted #83377.
2024-02-29 19:18:35 -05:00
Mehdi Amini
71eead512e
Revert "[mlir][sparse] Migration to sparse_tensor.print" (#83499)
Reverts llvm/llvm-project#83377

The test does not pass on the bot.
2024-02-29 15:04:58 -08:00
Yinying Li
1ca65dd74a
[mlir][sparse] Migration to sparse_tensor.print (#83377)
Continuous efforts following #83357.
2024-02-29 14:14:31 -05:00
Aart Bik
fdf44b3777
[mlir][sparse] migrate integration tests to sparse_tensor.print (#83357)
This is the first step (of many) in cleaning up our tests to use the new
and exciting sparse_tensor.print operation instead of lengthy
extraction + print ops.
2024-02-28 16:40:46 -08:00
Peiming Liu
6bc7c9df7f
[mlir][sparse] infer returned type for sparse_tensor.to_[buffer] ops (#83343)
The sparse structure buffers might not always be memrefs with rank == 1
in the presence of batch levels.
2024-02-28 16:10:20 -08:00
Aart Bik
8394ec9ff1
[mlir][sparse] add a few more cases to sparse_tensor.print test (#83338) 2024-02-28 14:05:40 -08:00
Aart Bik
d37affb06f
[mlir][sparse] add a sparse_tensor.print operation (#83321)
This operation is mainly used for testing and debugging purposes but
provides a very convenient way to quickly inspect the contents of a
sparse tensor (all components over all stored levels).

Example:

[ [ 1, 0, 2, 0, 0, 0, 0, 0 ],
  [ 0, 0, 0, 0, 0, 0, 0, 0 ],
  [ 0, 0, 0, 0, 0, 0, 0, 0 ],
  [ 0, 0, 3, 4, 0, 5, 0, 0 ] ]

when stored sparse as DCSC prints as

---- Sparse Tensor ----
nse = 5
pos[0] : ( 0, 4,  )
crd[0] : ( 0, 2, 3, 5,  )
pos[1] : ( 0, 1, 3, 4, 5,  )
crd[1] : ( 0, 0, 3, 3, 3,  )
values : ( 1, 2, 3, 4, 5,  )
----
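The printed components can be cross-checked with a short Python sketch (not part of the commit; the `to_dcsc` helper is hypothetical) that builds the DCSC view of the matrix above: level 0 compresses the nonzero columns, level 1 the row coordinates within each stored column.

```python
# Dense 4x8 matrix from the example above.
A = [
    [1, 0, 2, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 3, 4, 0, 5, 0, 0],
]

def to_dcsc(dense):
    """Build the DCSC components: level 0 compresses the columns,
    level 1 compresses the rows within each stored column."""
    nrows, ncols = len(dense), len(dense[0])
    cols = [j for j in range(ncols) if any(dense[i][j] for i in range(nrows))]
    pos0, crd0 = [0, len(cols)], cols
    pos1, crd1, values = [0], [], []
    for j in cols:
        for i in range(nrows):
            if dense[i][j]:
                crd1.append(i)
                values.append(dense[i][j])
        pos1.append(len(crd1))
    return pos0, crd0, pos1, crd1, values

pos0, crd0, pos1, crd1, values = to_dcsc(A)
print("nse =", len(values))   # nse = 5
print("pos[0] :", pos0)       # [0, 4]
print("crd[0] :", crd0)       # [0, 2, 3, 5]
print("pos[1] :", pos1)       # [0, 1, 3, 4, 5]
print("crd[1] :", crd1)       # [0, 0, 3, 3, 3]
print("values :", values)     # [1, 2, 3, 4, 5]
```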
2024-02-28 12:33:26 -08:00
Peiming Liu
5248a98724
[mlir][sparse] support SoA COO in codegen path. (#82439)
*NOTE*: the `SoA` property only makes a difference on the codegen path;
it is ignored on the libgen path at the moment (where only SoA COO is
supported).
2024-02-20 17:06:21 -08:00
Matthias Springer
ccc20b4e56
[mlir][sparse] Fix memory leaks (part 2) (#81979)
This commit fixes memory leaks in sparse tensor integration tests by
adding `bufferization.dealloc_tensor` ops.

Note: Buffer deallocation will be automated in the future with the
ownership-based buffer deallocation pass, making `dealloc_tensor`
obsolete (only codegen path, not when using the runtime library).
2024-02-17 11:04:17 +01:00
Matthias Springer
b6c453c13f
[mlir][sparse] Fix memory leaks (part 1) (#81843)
This commit fixes memory leaks in sparse tensor integration tests by
adding `bufferization.dealloc_tensor` ops.

Note: Buffer deallocation will be automated in the future with the
ownership-based buffer deallocation pass, making `dealloc_tensor`
obsolete (only codegen path, not when using the runtime library).
2024-02-16 09:57:04 +01:00