566 Commits

Author SHA1 Message Date
Kazu Hirata
07eb7b7692
[llvm] Replace SmallSet with SmallPtrSet (NFC) (#154068)
This patch replaces SmallSet<T *, N> with SmallPtrSet<T *, N>.  Note
that SmallSet.h "redirects" SmallSet to SmallPtrSet for pointer
element types:

  template <typename PointeeType, unsigned N>
  class SmallSet<PointeeType *, N> : public SmallPtrSet<PointeeType *, N> {};

We only have 140 instances that rely on this "redirection", with the
vast majority of them under llvm/. Since relying on the redirection
doesn't improve readability, this patch replaces SmallSet with
SmallPtrSet for pointer element types.
2025-08-18 07:01:29 -07:00
Nikita Popov
02f3e95a42 [AutoUpgrade] Fix use after free
Determine the intrinsic ID before the name is freed during renaming.
2025-08-08 11:54:09 +02:00
Nikita Popov
c23b4fbdbb
[IR] Remove size argument from lifetime intrinsics (#150248)
Now that #149310 has restricted lifetime intrinsics to only work on
allocas, we can also drop the explicit size argument. Instead, the size
is implied by the alloca.

This removes the ability to only mark a prefix of an alloca alive/dead.
We never used that capability, so we should remove the need to handle
that possibility everywhere (though many key places, including stack
coloring, did not actually respect this).
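As a rough sketch of what the upgrade looks like at the IR level (the exact mangling shown is an assumption, not copied from the patch):

  %buf = alloca [32 x i8]
  ; before: explicit size operand, redundant with the alloca
  call void @llvm.lifetime.start.p0(i64 32, ptr %buf)
  ; after: the size is implied by the alloca operand
  call void @llvm.lifetime.start.p0(ptr %buf)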
2025-08-08 11:09:34 +02:00
Meredith Julian
be58069515
[LLVM][NVPTX] Upstream tanh intrinsic for libdevice (#149596)
Currently __nv_fast_tanhf() in libdevice maps to an nvvm intrinsic that
has not been upstreamed, which is causing issues when using the NVPTX
backend from upstream. Instead of upstreaming the intrinsic, we can use
the existing Intrinsic::tanh with the afn flag. This change
adds NVPTX backend support for ISD::TANH, adds auto-upgrade for the old
tanh_approx intrinsic to @llvm.tanh.f32 with afn flag so that libdevice
works properly upstream, and adds a basic codegen test and a case to the
auto-upgrade test.
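Illustrative IR for the upgrade; the legacy intrinsic name below is an assumption based on the description above:

  ; legacy libdevice mapping (assumed name)
  %r = call float @llvm.nvvm.tanh.approx.f32(float %x)
  ; upgraded form: the generic tanh intrinsic carrying the afn fast-math flag
  %r = call afn float @llvm.tanh.f32(float %x)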
2025-07-24 14:32:59 -07:00
Nikita Popov
92c55a315e
[IR] Only allow lifetime.start/end on allocas (#149310)
lifetime.start and lifetime.end are primarily intended for use on
allocas, to enable stack coloring and other liveness optimizations. This
is necessary because all (static) allocas are hoisted into the entry
block, so lifetime markers are the only way to convey the actual
lifetimes.

However, lifetime.start and lifetime.end are currently *allowed* to be
used on non-alloca pointers. We don't actually do this in practice, but
just the mere fact that this is possible breaks the core purpose of the
lifetime markers, which is stack coloring of allocas. Stack coloring can
only work correctly if all lifetime markers for an alloca are
analyzable.

* If a lifetime marker may operate on multiple allocas via a select/phi,
we don't know which lifetime actually starts/ends and handle it
incorrectly (https://github.com/llvm/llvm-project/issues/104776).
* Stack coloring operates on the assumption that all lifetime markers
are visible, and not, for example, hidden behind a function call or
escaped pointer. It's not possible to change this, as part of the
purpose of lifetime markers is that they work even in the presence of
escaped pointers, where simple use analysis is insufficient.

I don't think there is any way to have coherent semantics for lifetime
markers on allocas, while also permitting them on arbitrary pointer
values.

This PR restricts lifetimes to operate on allocas only. As a followup, I
will also drop the size argument, which is superfluous if we always
operate on an alloca. (This change also renders various code handling
lifetime markers on non-allocas dead. I plan to clean up that kind of
code after dropping the size argument as well.)

In practice, I've only found a few places that currently produce
lifetimes on non-allocas:

* CoroEarly replaces the promise alloca with the result of an intrinsic,
which will later be replaced back with an alloca. I think this is the
only place where there is some legitimate loss of functionality, but I
don't think this is particularly important (I don't think we'd expect
the promise in a coroutine to admit useful lifetime optimization.)
* SafeStack moves unsafe allocas onto a separate frame. We can safely
drop lifetimes here, as SafeStack performs its own stack coloring.
* Similar for AddressSanitizer, it also moves allocas into separate
memory.
* LSR sometimes replaces the lifetime argument with a GEP chain of the
alloca (where the offsets ultimately cancel out). This is just
unnecessary. (Fixed separately in
https://github.com/llvm/llvm-project/pull/149492.)
* InferAddrSpaces sometimes makes lifetimes operate on an addrspacecast
of an alloca. I don't think this is necessary.
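A minimal sketch of what remains valid versus what the Verifier now rejects (the size operand still exists here; it was dropped by the later change listed above):

  %a = alloca i64
  %b = alloca i64
  ; still fine: markers operate directly on an alloca
  call void @llvm.lifetime.start.p0(i64 8, ptr %a)
  call void @llvm.lifetime.end.p0(i64 8, ptr %a)
  ; now rejected: the marker's operand is not an alloca
  %p = select i1 %c, ptr %a, ptr %b
  call void @llvm.lifetime.start.p0(i64 8, ptr %p)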
2025-07-21 15:04:50 +02:00
David Green
9fcea2e465 [ARM] Add neon vector support for roundeven
As per #142559, this marks froundeven as legal for Neon and upgrades the
existing arm.neon.vrintn intrinsics.
2025-07-04 15:27:33 +01:00
David Green
ec35065789 [ARM] Add neon vector support for rint
As per #142559, this marks frint as legal for Neon and upgrades the existing
arm.neon.vrintx intrinsics.
2025-07-03 21:27:48 +01:00
David Green
1f8f477bd0 [ARM] Add neon vector support for trunc
As per #142559, this marks ftrunc as legal for Neon and upgrades the existing
arm.neon.vrintz intrinsics.
2025-07-03 07:41:13 +01:00
David Green
5332534b9c [ARM] Add neon vector support for ceil
As per #142559, this marks fceil as legal for Neon and upgrades the existing
arm.neon.vrintp intrinsics.
2025-07-01 15:41:10 +01:00
David Green
6bd9ff04af [ARM] Add neon vector support for round
As per #142559, this marks fround as legal for Neon and upgrades the existing
arm.neon.vrinta intrinsics.
2025-06-30 17:15:26 +01:00
David Green
dcc9e36b18
[ARM] Add neon vector support for floor (#142559)
This marks ffloor as legal provided that armv8 and neon are present (or
fullfp16 for the fp16 instructions). The existing arm_neon_vrintm
intrinsics are auto-upgraded to llvm.floor.

If this is OK I will update the other vrint intrinsics.
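A sketch of the auto-upgrade, assuming the usual vector type mangling:

  ; legacy ARM NEON intrinsic
  %r = call <4 x float> @llvm.arm.neon.vrintm.v4f32(<4 x float> %x)
  ; upgraded to the generic intrinsic
  %r = call <4 x float> @llvm.floor.v4f32(<4 x float> %x)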
2025-06-29 11:37:16 +01:00
Nikita Popov
9a6a87da6e [AutoUpgrade] Remove unnecessary name check (NFCI)
If only the name is incorrect (due to added overload), but the
signature is correct, we should go through the generic remangling
upgrade.
2025-06-23 14:56:24 +02:00
Durgadoss R
3e5d50f9c6
[NVPTX] Add cta_group support to TMA G2S intrinsics (#143178)
This patch extends the TMA G2S intrinsics with support for
cta_group::1/2, available from Blackwell onwards.
The existing intrinsics are auto-upgraded with a default
value of '0' for the `cta_group` flag operand.

* lit tests are added for all combinations of the newer variants.
* Negative tests are added to validate the error handling when the value
   of the cta_group flag falls out of range.
* The generated PTX is verified with a 12.8 ptxas executable.

Signed-off-by: Durgadoss R <durgadossr@nvidia.com>
2025-06-12 15:20:39 +05:30
Jeremy Morse
459475020a Reapply 76197ea6f91f after removing an assertion
Specifically this is the assertion in BasicBlock.cpp. Now that we're not
examining or setting that flag consistently (because it'll be deleted in
about an hour) there's no need to keep this assertion.

Original commit title:

[DebugInfo][RemoveDIs] Remove some debug intrinsic-only codepaths (#143451)
2025-06-11 17:35:29 +01:00
Jeremy Morse
76197ea6f9 Revert "[DebugInfo][RemoveDIs] Remove some debug intrinsic-only codepaths (#143451)"
This reverts commit c71a2e688828ab3ede4fb54168a674ff68396f61.

/me squints -- this is hitting an assertion I thought had been deleted,
will revert and investigate for a bit.
2025-06-11 14:52:17 +01:00
Jeremy Morse
c71a2e6888
[DebugInfo][RemoveDIs] Remove some debug intrinsic-only codepaths (#143451)
These are opportunistic deletions as more places that make use of the
IsNewDbgInfoFormat flag are removed. It should (TM)(R) all be dead code
now that `IsNewDbgInfoFormat` should be true everywhere.

* FastISel: we don't need to do debug-aware instruction counting any more,
because there are no debug instructions.
* AutoUpgrade: you can no longer avoid auto-upgrading of intrinsics to
records.
* DIBuilder: delete the code for creating debug intrinsics (!)
* LoopUtils: no need to handle debug instructions, they don't exist.
2025-06-11 14:43:15 +01:00
Jeremy Morse
3d7aa961ac
[DebugInfo][RemoveDIs] Use autoupgrader to convert old debug-info (#143452)
By chance, two things have prevented the autoupgrade path from being
exercised much so far:
* LLParser setting the debug-info mode to "old" on seeing intrinsics,
* The test in AutoUpgrade.cpp wanting to upgrade into a "new" debug-info
block.

In practice, this appears to mean this code path hasn't seen the various
invalid inputs that can come its way. This commit does a number of
things:
* Tolerates the various illegal inputs that can be written with debug
intrinsics, and that must be tolerated until the Verifier runs.
* Printing illegal/null DbgRecord fields must succeed.
* Verifier errors need to localise the function/block where the error is.
* Tests that now see debug records will print debug-record errors.

Plus a few new tests for other intrinsic-to-debug-record failure modes
I found. There are also two edge cases:
* Some of the unit tests switch back and forth between intrinsic and
record modes at will; I've deleted coverage and some assertions to
tolerate this, as intrinsic support is now Gone (TM).
* In sroa-extract-bits.ll, the order of debug records flips. This is
because the autoupgrader upgrades in the opposite order to the basic
block conversion routines... which doesn't change the record order, but
_does_ change the use-list order in Metadata! This should (TM) have no
consequence for the correctness of LLVM, but will change the order of
various records and the order of DWARF record output too.

I tried to reduce this patch to a smaller collection of changes, but
they're all intertwined, sorry.
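For reference, the conversion this path performs is roughly the following (a sketch; the metadata numbering is illustrative):

  ; old debug intrinsic form
  call void @llvm.dbg.value(metadata i32 %x, metadata !12, metadata !DIExpression()), !dbg !15
  ; new non-instruction debug record, attached to the following instruction
  #dbg_value(i32 %x, !12, !DIExpression(), !15)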
2025-06-11 13:56:30 +01:00
David Green
6baaa0afc3
[ARM] Handle roundeven for MVE. (#142557)
Now that #141786 handles scalar and neon types, this adds MVE
definitions and legalization for llvm.roundeven intrinsics. The existing
llvm.arm.mve.vrintn are auto-upgraded to llvm.roundeven like other vrint
instructions, so should continue to work.
2025-06-08 18:23:50 +01:00
Alex MacLean
3a84a4e55d
Reland "[NVPTX] Unify and extend barrier{.cta} intrinsic support" (#141143)
Note: This relands #140615 adding a ".count" suffix to the non-".all"
variants.

Our current support for barrier intrinsics is confusing and incomplete,
with multiple intrinsics mapping to the same instruction and intrinsic
names not clearly conveying their semantics. Further, we lack support
for some variants. This change unifies the IR representation into a
single, consistently named set of intrinsics.

- llvm.nvvm.barrier.cta.sync.aligned.all(i32)
- llvm.nvvm.barrier.cta.sync.aligned.count(i32, i32)
- llvm.nvvm.barrier.cta.arrive.aligned.count(i32, i32)
- llvm.nvvm.barrier.cta.sync.all(i32)
- llvm.nvvm.barrier.cta.sync.count(i32, i32)
- llvm.nvvm.barrier.cta.arrive.count(i32, i32)

The following Auto-Upgrade rules are used to maintain compatibility with
IR using the legacy intrinsics:

* llvm.nvvm.barrier0 --> llvm.nvvm.barrier.cta.sync.aligned.all(0)
* llvm.nvvm.barrier.n --> llvm.nvvm.barrier.cta.sync.aligned.all(x)
* llvm.nvvm.bar.sync --> llvm.nvvm.barrier.cta.sync.aligned.all(x)
* llvm.nvvm.barrier --> llvm.nvvm.barrier.cta.sync.aligned.count(x, y)
* llvm.nvvm.barrier.sync --> llvm.nvvm.barrier.cta.sync.all(x)
* llvm.nvvm.barrier.sync.cnt --> llvm.nvvm.barrier.cta.sync.count(x, y)
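For example, the first rule above corresponds to roughly this IR rewrite (a sketch):

  ; legacy form
  call void @llvm.nvvm.barrier0()
  ; upgraded form
  call void @llvm.nvvm.barrier.cta.sync.aligned.all(i32 0)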
2025-05-22 19:38:10 -07:00
Alex Maclean
e72d8b2553 Revert "[NVPTX] Unify and extend barrier{.cta} intrinsic support (#140615)"
This reverts commit 735209c0688b10a66c24750422b35d8c2ad01bb5.
2025-05-22 17:28:43 +00:00
Alex MacLean
735209c068
[NVPTX] Unify and extend barrier{.cta} intrinsic support (#140615)
Our current support for barrier intrinsics is confusing and incomplete,
with multiple intrinsics mapping to the same instruction and intrinsic
names not clearly conveying their semantics. Further, we lack support
for some variants. This change unifies the IR representation into a
single, consistently named set of intrinsics.

- llvm.nvvm.barrier.cta.sync.aligned.all(i32)
- llvm.nvvm.barrier.cta.sync.aligned(i32, i32)
- llvm.nvvm.barrier.cta.arrive.aligned(i32, i32)
- llvm.nvvm.barrier.cta.sync.all(i32)
- llvm.nvvm.barrier.cta.sync(i32, i32)
- llvm.nvvm.barrier.cta.arrive(i32, i32)

The following Auto-Upgrade rules are used to maintain compatibility with
IR using the legacy intrinsics:

* llvm.nvvm.barrier0 --> llvm.nvvm.barrier.cta.sync.aligned.all(0)
* llvm.nvvm.barrier.n --> llvm.nvvm.barrier.cta.sync.aligned.all(x)
* llvm.nvvm.bar.sync --> llvm.nvvm.barrier.cta.sync.aligned.all(x)
* llvm.nvvm.barrier --> llvm.nvvm.barrier.cta.sync.aligned(x, y)
* llvm.nvvm.barrier.sync --> llvm.nvvm.barrier.cta.sync.all(x)
* llvm.nvvm.barrier.sync.cnt --> llvm.nvvm.barrier.cta.sync(x, y)
2025-05-21 08:14:15 -07:00
Alexander Richardson
07e2ba445d
[AMDGPU] Set AS8 address width to 48 bits
Of the 128 bits of a buffer descriptor, only 48 are address bits, so
following the discussion on https://discourse.llvm.org/t/clarifiying-the-semantics-of-ptrtoint/83987/54,
the logical conclusion is to set the index width to 48 bits instead of
the current value of 128.

Most of the test changes are mechanical datalayout updates, but there
is one actual change: the ptrmask test now uses .i48 instead of .i128
and I had to update SelectionDAGBuilder to correctly extend the mask.

Reviewed By: krzysz00

Pull Request: https://github.com/llvm/llvm-project/pull/139419
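In datalayout terms this is the per-address-space pointer spec p<AS>:<size>:<abi>[:<pref>[:<idx>]]; a sketch of the relevant AS8 component only (illustrative fragment, not the full AMDGPU string):

  ; before: no explicit index width, so it defaults to the 128-bit pointer size
  p8:128:128
  ; after: explicit 48-bit index width
  p8:128:128:128:48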
2025-05-19 17:26:05 -07:00
Shilei Tian
7314d38b72
[NFC][AutoUpgrade] Use ConstantPointerNull::get instead of Constant::getNullValue for known pointer types (#139984)
This is a preparation change for upcoming PRs that will update the
semantics of `ConstantPointerNull`, making it represent an actual
`nullptr` rather than a zero-valued pointer.
2025-05-15 10:03:20 -04:00
Thurston Dang
0dd2c9f7a3 Fix-forward build error from #132489
Replace deprecated use of getDeclaration that was added in #132489

llvm/lib/IR/AutoUpgrade.cpp:1480:26: error: 'getDeclaration' is deprecated: Use getOrInsertDeclaration instead [-Werror,-Wdeprecated-declarations]
 1480 |       NewFn = Intrinsic::getDeclaration(
      |                          ^~~~~~~~~~~~~~
      |                          getOrInsertDeclaration
2025-05-14 21:44:13 +00:00
Jessica Clarke
864f0ff4ef
[clang][IR] Overload @llvm.thread.pointer to support non-AS0 targets (#132489)
Thread-local globals live, by default, in the default globals address
space, which may not be 0, so we need to overload @llvm.thread.pointer
to support other address spaces, and use the default globals address
space in Clang.
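A sketch of the overload, assuming the default globals address space is 1 (the mangling shown is an assumption):

  ; previously fixed to address space 0
  declare ptr @llvm.thread.pointer()
  ; now overloaded on the result pointer type
  declare ptr addrspace(1) @llvm.thread.pointer.p1()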
2025-05-14 21:51:56 +01:00
Alex MacLean
eb5280938b
[NVPTX] Fixup AutoUpgrade of llvm.nvvm.atomic.load.{inc,dec}.32 (#138907)
The previous implementation failed to account for the fact that these
intrinsics have an overloaded pointer type. This version handles the
pointer type and adds tests for llvm.nvvm.atomic.load.add.{f32,f64}.
2025-05-08 07:17:32 -07:00
Craig Topper
123758b1f4
[IRBuilder] Add versions of createInsertVector/createExtractVector that take a uint64_t index. (#138324)
Most callers want a constant index. Instead of making every caller
create a ConstantInt, we can do it in IRBuilder. This is similar to
createInsertElement/createExtractElement.
2025-05-02 16:10:18 -07:00
Alex MacLean
a643ac4395
[NVPTX] Remove 'param' variants of nvvm.ptr.* intrinsics (#137065)
After #136008 these intrinsics are no longer inserted by the
compiler and can be upgraded to addrspacecasts.
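Sketch of the upgrade; the intrinsic mangling below is an assumption (NVPTX uses addrspace(101) for param):

  ; legacy intrinsic (assumed mangling)
  %g = call ptr @llvm.nvvm.ptr.param.to.gen.p0.p101(ptr addrspace(101) %p)
  ; upgraded form
  %g = addrspacecast ptr addrspace(101) %p to ptr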
2025-04-25 10:44:12 -07:00
modiking
3d04da5bc0
[NVPTX] Add support for Shared Cluster Memory address space [2/2] (#136768)
Adds support for new Shared Cluster Memory Address Space
(SHARED_CLUSTER, addrspace 7). See
https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#distributed-shared-memory
for details.

Follow-up to https://github.com/llvm/llvm-project/pull/135444

1. Update existing codegen/intrinsics in LLVM and MLIR that now use this
address space.
2. Auto-upgrade previous intrinsics that used SMEM (addrspace 3) but
were really taking a shared-cluster pointer, so that they use the new
address space.
2025-04-22 16:50:45 -07:00
Alex MacLean
09069088dd
[NVPTX] Add auto-upgrade rules for fabs.{f,d,ftz.f} (#136150)
These auto-upgrade rules are required after these intrinsics were
removed in #135644
2025-04-17 11:44:58 -07:00
Alex MacLean
7daa5010ab
[NVPTX] Cleanup and document nvvm.fabs intrinsics, adding f16 support (#135644)
This change unifies the NVVM intrinsics for floating point absolute
value into two new overloaded intrinsics "llvm.nvvm.fabs.*" and
"llvm.nvvm.fabs.ftz.*". Documentation has been added specifying the
semantics of these intrinsics to clarify how they differ from
"llvm.fabs.*". In addition, support for these new intrinsics is
extended to cover the f16 variants.
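Illustrative uses of the new overloaded intrinsics (the type mangling is assumed):

  %a = call float @llvm.nvvm.fabs.f32(float %x)
  %b = call float @llvm.nvvm.fabs.ftz.f32(float %x)
  ; f16 support added by this change
  %c = call half @llvm.nvvm.fabs.f16(half %h)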
2025-04-17 07:37:24 -07:00
Nikita Popov
c3c0b27f2d
[Intrinsics] Add support for range attributes (#135642)
Add support for specifying range attributes in Intrinsics.td. Use this
to specify the ucmp/scmp range [-1,2).

This case is trickier than existing intrinsic attributes, because we
need to create the attribute with the correct bitwidth. As such, the
attribute construction now needs to be aware of the function type.

We also need to be careful to no longer assign attributes to intrinsics
with invalid signatures, as we'd otherwise make invalid assumptions
about the number of arguments, etc.

Fixes https://github.com/llvm/llvm-project/issues/130179.
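The resulting declaration looks roughly like this (the mangling shown is an assumption):

  ; three-way comparison result is always in [-1, 2)
  declare range(i8 -1, 2) i8 @llvm.scmp.i8.i32(i32, i32)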
2025-04-17 11:11:00 +02:00
Alex MacLean
a6853cd9af
[NVPTX] Auto-Upgrade llvm.nvvm.atomic.load.{inc,dec}.32 (#134111)
These intrinsics can be upgraded to an atomicrmw instruction.
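A sketch of the upgrade (the intrinsic mangling and memory ordering shown are assumptions):

  ; legacy NVVM intrinsic
  %old = call i32 @llvm.nvvm.atomic.load.inc.32.p0(ptr %p, i32 %bound)
  ; upgraded to a wrapping-increment atomicrmw
  %old = atomicrmw uinc_wrap ptr %p, i32 %bound seq_cst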
2025-04-08 13:44:11 -07:00
Rahul Joshi
74b7abf154
[IRBuilder] Add new overload for CreateIntrinsic (#131942)
Add a new `CreateIntrinsic` overload with no `Types`, useful for
creating calls to non-overloaded intrinsics that don't need additional
mangling.
2025-03-31 08:10:34 -07:00
Alex MacLean
10fd5b925f
[NVPTX] Auto-Upgrade !"align" metadata on return values to stackalign (#131726)
This commit follows up 0191307b by auto-upgrading !"align" metadata on
return values to stackalign. This allows us to remove all logic to check
the metadata from NVPTXUtilities.
2025-03-24 12:00:44 -07:00
Alex MacLean
30ff508614
[NVPTX] Auto-Upgrade llvm.nvvm.swap.lo.hi.b64 to llvm.fshl (#132098)
After 3c8c2914e067e132af951f70d2b3577fe049e19a the lowering of 64-bit
funnel shifts has been improved to the point where this intrinsic is no
longer needed.
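The upgrade is roughly the following: a 64-bit funnel shift of a value with itself by 32 swaps the two 32-bit halves (a sketch):

  ; legacy intrinsic
  %r = call i64 @llvm.nvvm.swap.lo.hi.b64(i64 %x)
  ; upgraded form: fshl(x, x, 32) == (x << 32) | (x >> 32)
  %r = call i64 @llvm.fshl.i64(i64 %x, i64 %x, i64 32)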
2025-03-20 19:21:24 -07:00
AdityaK
b17af9d8ee
[NFC][llvm/IR] comparison of unsigned expression in ‘>= 0’ is always true (#130843) 2025-03-14 11:59:29 -07:00
Frederik Harwath
c979ce7e36
Add IRBuilder::CreateFMA (#131112)
This commit adds a function for creating fma intrinsic calls to the
IRBuilder. If the "IsFPConstrained" flag of the builder is set, the
function creates a call to "experimental.constrained.fma" instead of
"llvm.fma". To support the creation of the constrained intrinsic, a
function "CreateConstrainedFPIntrinsic" is introduced.
2025-03-14 13:20:58 +01:00
Alex MacLean
6c2e170d04
[NVPTX] Convert vector function nvvm.annotations to attributes (#127736)
Replace some more nvvm.annotations with function attributes,
auto-upgrading the annotations as needed. These new attributes will be
more idiomatic and compile-time efficient than the annotations.

- !"maxntid[xyz]" -> "nvvm.maxntid"
- !"reqntid[xyz]" -> "nvvm.reqntid"
- !"cluster_dim_[xyz]" -> "nvvm.cluster_dim"
2025-02-26 08:45:27 -08:00
Alex MacLean
a282b6c486
[NVPTX] Convert scalar function nvvm.annotations to attributes (#125908)
Replace some more nvvm.annotations with function attributes,
auto-upgrading the annotations as needed. These new attributes will be
more idiomatic and compile-time efficient than the annotations.

- !"maxclusterrank" / !"cluster_max_blocks" -> "nvvm.maxclusterrank"
- !"minctasm" -> "nvvm.minctasm"
- !"maxnreg" -> "nvvm.maxnreg"
2025-02-12 07:33:22 -08:00
Alex MacLean
de7438e472
[NVPTX] Auto-Upgrade some nvvm.annotations to attributes (#119261)
Add a new AutoUpgrade function to convert some legacy nvvm.annotations
metadata to function-level attributes. These attributes are quicker to
look up, so they improve compile time, and they are more idiomatic than
metadata, which should not carry required information that changes the
meaning of the program.

Currently supported annotations are:

- !"kernel" -> ptx_kernel calling convention
- !"align" -> alignstack parameter attributes (return not yet supported)
2025-01-29 16:27:27 -08:00
David Green
547bfda56b
[AArch64] Improve bcvtn2 and remove aarch64_neon_bfcvt intrinsics (#120363)
This started out as trying to combine bf16 fpround to BFCVT2
instructions, but ended up removing the aarch64.neon.bfcvt intrinsics in
favour of generating fpround instructions directly. This simplifies the
patterns and can lead to other optimizations. The BFCVT2 instruction is
adjusted to make sure the types are valid, and a bfcvt2 is now
generated in more places. The old intrinsics are auto-upgraded to fptrunc
instructions too.
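Sketch of the auto-upgrade of the scalar intrinsic:

  ; legacy AArch64 NEON intrinsic
  %r = call bfloat @llvm.aarch64.neon.bfcvt(float %x)
  ; upgraded form
  %r = fptrunc float %x to bfloat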
2025-01-21 09:16:04 +00:00
Nikita Popov
0b1ae8963e [AutoUpgrade] Avoid unnecessary pointer bitcasts (NFCI)
Not needed with opaque pointers.
2025-01-20 09:55:35 +01:00
Dan Gohman
c5ab70c508
[WebAssembly] Add -i128:128 to the datalayout string. (#119204)
Clang [defaults to aligning `__int128_t` to 16 bytes], while LLVM
`datalayout` strings [default to aligning `i128` to 8 bytes]. Wasm is
currently using the defaults for both, so it's inconsistent. Fix this by
adding `-i128:128` to Wasm's `datalayout` string so that it aligns
`i128` to 16 bytes too.

This is similar to
[llvm/llvm-project@dbad963](dbad963a69)
for SPARC.

This fixes rust-lang/rust#133991; see that issue for further discussion.

[defaults to aligning `__int128_t` to 16 bytes]:
f8b4182f07/clang/lib/Basic/TargetInfo.cpp (L77)
[default to aligning `i128` to 8 bytes]:
https://llvm.org/docs/LangRef.html#langref-datalayout
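Illustrative only (not the literal Wasm datalayout string); the point is the added -i128:128- component:

  target datalayout = "e-m:e-p:32:32-i64:64-i128:128-n32:64-S128"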
2024-12-10 09:21:58 -08:00
Lei Huang
a13ec9cd54
[PowerPC] Update data layout aligment of i128 to 16 (#118004)
Fix 64-bit PowerPC part of
https://github.com/llvm/llvm-project/issues/102783.
2024-12-09 18:02:24 -05:00
Graham Hunter
ed5aaddd7b
[IR] Vector extract last active element intrinsic (#113587)
As discussed in #112738, it may be better to have an intrinsic to represent vector element extracts based on mask bits. This intrinsic is for the case of extracting the last active element, if any, or a default value if the mask is all-false.

The target-agnostic SelectionDAG lowering is similar to the IR in #106560.
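A sketch of how a call might look; the intrinsic name and signature here are assumptions based on the description, not copied from the patch:

  ; returns the last element of %data whose mask bit is set, else %passthru
  %r = call i32 @llvm.experimental.vector.extract.last.active.v4i32(
           <4 x i32> %data, <4 x i1> %mask, i32 %passthru)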
2024-11-14 17:48:43 +00:00
yingopq
86e4beb702
[MIPS] LLVM data layout give i128 an alignment of 16 for mips64 (#112084)
Fix parts of #102783.
2024-11-06 16:14:30 +01:00
Alex MacLean
fb33af08e4
[NVPTX] Remove nvvm.ldg.global.* intrinsics (#112834)
Remove these intrinsics which can be better represented by load
instructions with `!invariant.load` metadata:

- llvm.nvvm.ldg.global.i
- llvm.nvvm.ldg.global.f
- llvm.nvvm.ldg.global.p
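These now map to ordinary invariant loads, roughly as follows (the legacy signature shown is an assumption):

  ; legacy intrinsic (assumed mangling; the second operand was the alignment)
  %v = call i32 @llvm.nvvm.ldg.global.i.i32.p1(ptr addrspace(1) %p, i32 4)
  ; replacement
  %v = load i32, ptr addrspace(1) %p, align 4, !invariant.load !0
  !0 = !{}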
2024-10-27 16:14:13 -07:00
goldsteinn
c85611e858
[SimplifyLibCall][Attribute] Fix bug where we may keep range attr with incompatible type (#112649)
In a variety of places we change the bitwidth of a parameter but don't
update the attributes.

The issue in this case is from the `range` attribute when inlining
`__memset_chk`. `optimizeMemSetChk` will replace an `i32` with an
`i8`, and if the `i32` had a `range` attribute associated with it, this
will cause an error.

Fixes #112633
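Roughly the failing shape, as a sketch (the attribute placement is an assumption):

  ; the i32 fill value carries a range attribute at the call site
  %r = call ptr @__memset_chk(ptr %dst, i32 range(i32 0, 256) %val, i64 %len, i64 %objsz)
  ; optimizeMemSetChk rewrites this to @llvm.memset with an i8 value operand;
  ; keeping the i32 range attribute on that i8 argument is what triggered the error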
2024-10-17 10:32:55 -05:00
Jay Foad
85c17e4092
[LLVM] Make more use of IRBuilder::CreateIntrinsic. NFC. (#112706)
Convert many instances of:
  Fn = Intrinsic::getOrInsertDeclaration(...);
  CreateCall(Fn, ...)
to the equivalent CreateIntrinsic call.
2024-10-17 16:20:43 +01:00