* Don't call raw_string_ostream::flush(), which is essentially a no-op.
* Strip unneeded calls to raw_string_ostream::str(), to avoid excess
  indirection (see the sketch below).
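A minimal sketch of the simplified pattern, with a made-up helper for illustration: since raw_string_ostream writes through to its backing string, the explicit flush()/str() round trip adds nothing.

```cpp
#include "llvm/Support/raw_ostream.h"
#include <string>

// Illustrative helper (not from the patch): raw_string_ostream writes
// straight into `buffer`, so neither flush() nor str() is needed to
// observe the formatted contents.
std::string formatCount(unsigned count) {
  std::string buffer;
  llvm::raw_string_ostream os(buffer);
  os << "count = " << count;
  // Previously: os.flush(); return os.str();
  return buffer;
}
```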
This helps support generic manipulation of operations that don't (yet)
use properties to store inherent attributes.
Use this mechanism in type inference and operation equivalence.
Note that only minimal unit tests are introduced, since all upstream
dialects appear to have been updated to use properties and the
non-property behavior is essentially deprecated and untested.
The change in c1eab57 fixed the behavior of `getDiscardableAttrDictionary`
for ops that are not using properties so that it returns only discardable
attributes. The bytecode writer was relying on the old behavior and
assumed all attributes were discardable, without appropriate test
coverage. Fix that and add a test.
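As a rough sketch (not the actual writer code) of the distinction the bytecode writer now has to respect, assuming the accessors named above:

```cpp
#include "mlir/IR/Operation.h"

// Illustrative only: for an op that does not use properties,
// getAttrDictionary() still contains inherent attributes, while
// getDiscardableAttrDictionary() now returns only discardable ones.
void inspectAttrs(mlir::Operation *op) {
  mlir::DictionaryAttr all = op->getAttrDictionary();
  mlir::DictionaryAttr discardable = op->getDiscardableAttrDictionary();
  // A serializer that treats `all` as discardable would round-trip
  // inherent attributes as if they were discardable.
  (void)all;
  (void)discardable;
}
```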
system_endianness() just returns llvm::endianness::native, a
compile-time constant equivalent to std::endian::native in C++20. This
patch deprecates system_endianness() and replaces all of its invocations
with llvm::endianness::native.
While we are at it, this patch replaces
llvm::support::endianness::{big,little} with
llvm::endianness::{big,little} in those statements that happen to call
system_endianness(). It does not go out of its way to replace other
occurrences of llvm::support::endianness::{big,little}.
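A small sketch of the mechanical rewrite, with a made-up surrounding function for illustration:

```cpp
#include "llvm/Support/Endian.h"

// Illustrative only: the kind of call site this patch updates.
bool isHostLittleEndian() {
  // Before:
  //   return llvm::support::endian::system_endianness() ==
  //          llvm::support::endianness::little;
  // After: compare against the compile-time constant directly.
  return llvm::endianness::native == llvm::endianness::little;
}
```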
This partially reverts #66380, removing the assertion that the underlying
buffer of an EncodingReader is aligned to any alignment required by
resource sections. Resources know their own alignment and pad their buffers
accordingly, but the bytecode reader doesn't know that ahead of time.
Consequently, it cannot give the resource EncodingReader a base buffer
aligned to the maximum required alignment.
A simple example from the test fails without this:
```mlir
module @TestDialectResources attributes {
  bytecode.test = dense_resource<resource> : tensor<4xi32>
} {}
{-#
  dialect_resources: {
    builtin: {
      resource: "0x2000000001000000020000000300000004000000",
      resource_2: "0x2000000001000000020000000300000004000000"
    }
  }
#-}
```
Before this change, the `ByteCode` test failed on CentOS 7 with
devtoolset-9, because strings happen to be only 8-byte aligned. In
general though, strings have no alignment guarantee.
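A plain C++ sketch (not code from the reader) of why such an assertion is fragile: whether a string's data pointer happens to satisfy a larger alignment is an accident of the allocator.

```cpp
#include <cstdint>
#include <string>

// Returns true if `ptr` satisfies `alignment` (a power of two).
static bool isAligned(const void *ptr, uint64_t alignment) {
  return (reinterpret_cast<uintptr_t>(ptr) & (alignment - 1)) == 0;
}

// A std::string buffer is typically aligned only to the allocator's
// default (often 8 or 16 bytes), so asserting that it meets a resource
// section's larger requirement (e.g. 32 bytes) can spuriously fail.
bool bufferMeetsResourceAlignment(const std::string &bytecode,
                                  uint64_t requiredAlignment) {
  return isAligned(bytecode.data(), requiredAlignment);
}
```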
* Increase resource alignment in test to 32 bytes.
* Adjust test to sufficiently align buffer.
* Add test to check error when buffer is insufficiently aligned.
This makes the bytecode reader/writer work on big-endian platforms.
The only problem was related to the encoding of multi-byte integers,
where both the reader and writer code made implicit assumptions about
the endianness of the host platform.
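As an illustration of the direction of the fix rather than the actual reader/writer code, multi-byte integers can be assembled with explicit shifts so the encoding is identical on little- and big-endian hosts:

```cpp
#include <cstdint>

// Write a 32-bit value as four little-endian bytes, independent of the
// host's native byte order.
static void writeLE32(uint8_t *out, uint32_t value) {
  out[0] = static_cast<uint8_t>(value);
  out[1] = static_cast<uint8_t>(value >> 8);
  out[2] = static_cast<uint8_t>(value >> 16);
  out[3] = static_cast<uint8_t>(value >> 24);
}

// Read the same encoding back; gives the same result on s390x
// (big endian) and x86-64 (little endian).
static uint32_t readLE32(const uint8_t *in) {
  return static_cast<uint32_t>(in[0]) |
         static_cast<uint32_t>(in[1]) << 8 |
         static_cast<uint32_t>(in[2]) << 16 |
         static_cast<uint32_t>(in[3]) << 24;
}
```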
This fixes the current test failures on s390x and additionally allows
removing the UNSUPPORTED markers from all other bytecode-related test
cases - they now all pass on s390x as well.
Also add a GFAIL_SKIP to the MultiModuleWithResource unit test, as it
still fails due to an unrelated endianness bug in the decoding of
external resources.
Differential Revision: https://reviews.llvm.org/D153567
Reviewed By: mehdi_amini, jpienaar, rriddle
The bytecode reader didn't properly handle the case where resource names
conflicted and were renamed, leading to orphan handles in the IR as well
as overwriting the existing resources.
Differential Revision: https://reviews.llvm.org/D151408