Sanjay Patel f0dd12ec5c [x86] use zero-extending load of a byte outside of loops too (2nd try)
The first attempt missed changing test files for tools
(update_llc_test_checks.py).

Original commit message:

This implements the main suggested change from issue #56498.
Using the shorter (non-extending) instruction with only
-Oz ("minsize") rather than -Os ("optsize") is left as a
possible follow-up.
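
For reference, -Os maps to the "optsize" function attribute in
IR and -Oz maps to "minsize" (plus optsize). A hypothetical
sketch of the two cases (function names are made up); the
follow-up would keep the shorter movb only for the minsize one:

  define i8 @size_opt(ptr %p) optsize {
    %v = load i8, ptr %p, align 1
    ret i8 %v
  }

  define i8 @min_size(ptr %p) minsize optsize {
    %v = load i8, ptr %p, align 1
    ret i8 %v
  }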

As noted in the bug report, the zero-extending load may have
shorter latency/better throughput across a wide range of x86
micro-arches, and it avoids a potential false dependency.
The cost is an extra instruction byte.
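
As a rough sketch of the effect (not one of the tests touched
by this patch; the function name is made up), a plain byte
load such as:

  define i8 @load_byte(ptr %p) {
    %v = load i8, ptr %p, align 1
    ret i8 %v
  }

previously compiled for x86-64 to the 2-byte load

  movb (%rdi), %al

and now gets the 3-byte zero-extending form

  movzbl (%rdi), %eax

which writes the full register, so it cannot create a false
dependency on the prior value of %rax.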

This could cause perf ups and downs from secondary effects,
but I don't think it is possible to account for those in
advance, and they will likely also depend on the exact
micro-arch.
This does bring LLVM x86 codegen more in line with existing
gcc codegen, so if problems are exposed they are more likely
to occur for both compilers.

Differential Revision: https://reviews.llvm.org/D129775
2022-07-19 21:27:08 -04:00

; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
; RUN: llc < %s -mtriple=i686-unknown-linux-gnu | FileCheck %s --check-prefix=X86
; RUN: llc < %s -mtriple=x86_64-unknown-linux-gnu | FileCheck %s --check-prefix=X64
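; The function name suggests this was reduced from a crash; it stores two i2
; values packed into the low 4 bits of a single byte.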
%destTy = type { i2, i2 }
define void @crash(i64 %x0, i64 %y0, ptr nocapture %dest) nounwind {
; X86-LABEL: crash:
; X86: # %bb.0:
; X86-NEXT: movl {{[0-9]+}}(%esp), %eax
; X86-NEXT: movzbl {{[0-9]+}}(%esp), %ecx
; X86-NEXT: movzbl {{[0-9]+}}(%esp), %edx
; X86-NEXT: shlb $2, %dl
; X86-NEXT: andb $3, %cl
; X86-NEXT: orb %dl, %cl
; X86-NEXT: andb $15, %cl
; X86-NEXT: movb %cl, (%eax)
; X86-NEXT: retl
;
; X64-LABEL: crash:
; X64: # %bb.0:
; X64-NEXT: shlb $2, %sil
; X64-NEXT: andb $3, %dil
; X64-NEXT: orb %sil, %dil
; X64-NEXT: andb $15, %dil
; X64-NEXT: movb %dil, (%rdx)
; X64-NEXT: retq
%x1 = trunc i64 %x0 to i2
%y1 = trunc i64 %y0 to i2
%1 = insertelement <2 x i2> undef, i2 %x1, i32 0
%2 = insertelement <2 x i2> %1, i2 %y1, i32 1
store <2 x i2> %2, ptr %dest, align 1
ret void
}