2059 Commits

Author SHA1 Message Date
Craig Topper
292ee93a87
[CodeGen] Use Register in SwitchLoweringUtils. NFC (#109092)
Use an empty Register() instead of -1U.
2024-09-18 09:43:21 -07:00
Craig Topper
9d3ab1c36e [SelectionDAGBuilder] Use Register in more places. NFC 2024-09-17 23:49:58 -07:00
Craig Topper
fe012bd52d [SelectionDAG] Use Register around RegisterSDNode related functions. NFC
RegisterSDNode itself already stored a Register.
2024-09-17 23:26:56 -07:00
anjenner
4af249fe6e
Add usub_cond and usub_sat operations to atomicrmw (#105568)
These both perform conditional subtraction, returning the minuend and
zero, respectively, if the difference would be negative.
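For illustration, a minimal IR sketch of the new operations (pointer/value types and the ordering are chosen for the example):

```llvm
define i32 @dec_if_possible(ptr %counter) {
  ; usub_cond: subtract 1 only if the difference would not be negative;
  ; otherwise the stored value is left as the minuend.
  %old = atomicrmw usub_cond ptr %counter, i32 1 seq_cst
  ret i32 %old
}

define i32 @dec_clamped(ptr %counter) {
  ; usub_sat: subtract 1, clamping the stored result at zero.
  %old = atomicrmw usub_sat ptr %counter, i32 1 seq_cst
  ret i32 %old
}
```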
2024-09-06 16:19:20 +01:00
Sam Tebbs
44cfbef1b3
[AArch64] Lower partial add reduction to udot or svdot (#101010)
This patch introduces lowering of the partial add reduction intrinsic to
a udot or svdot for AArch64. This also involves adding a
`shouldExpandPartialReductionIntrinsic` target hook, from which AArch64
returns false in the cases where the intrinsic can be lowered.
2024-09-02 14:06:14 +01:00
Dávid Ferenc Szabó
e9eaf19eb6
[CodeGen] Allow mixed scalar type constraints for inline asm (#65465)
GCC supports code like "asm volatile ("" : "=r" (i) : "0" (f))", where i
is an integer type and f is a floating-point type. Currently this code
produces an error with Clang. The change allows mixed scalar types
between input and output constraints.
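At the IR level, the now-accepted pattern looks roughly like this (constraint string and types are illustrative):

```llvm
define i32 @bitmove(float %f) {
  ; The output is an integer register while the tied input "0" is a float;
  ; previously this mixed scalar tie was rejected.
  %i = call i32 asm "", "=r,0"(float %f)
  ret i32 %i
}
```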

Co-authored-by: Matt Arsenault <Matthew.Arsenault@amd.com>
2024-08-29 22:53:28 +04:00
Stephen Tozer
3d08ade7bd
[ExtendLifetimes] Implement llvm.fake.use to extend variable lifetimes (#86149)
This patch is part of a set of patches that add an `-fextend-lifetimes`
flag to clang, which extends the lifetimes of local variables and
parameters for improved debuggability. In addition to that flag, the
patch series adds a pragma to selectively disable `-fextend-lifetimes`,
and an `-fextend-this-ptr` flag which functions as `-fextend-lifetimes`
for `this` pointers only. All changes and tests in these patches were
written by Wolfgang Pieb (@wolfy1961), while Stephen Tozer (@SLTozer)
has handled review and merging. The extend-lifetimes flag is intended to
eventually be enabled by `-Og`, as discussed in the RFC
here:

https://discourse.llvm.org/t/rfc-redefine-og-o1-and-add-a-new-level-of-og/72850

This patch implements a new intrinsic instruction in LLVM,
`llvm.fake.use` in IR and `FAKE_USE` in MIR, that takes a single operand
and has no effect other than "using" its operand, to ensure that its
operand remains live until after the fake use. This patch does not emit
fake uses anywhere; the next patch in this sequence causes them to be
emitted from the clang frontend, such that for each variable (or `this`) a
fake.use of that variable's value is inserted at the end of that variable's
scope. This patch covers everything post-frontend, which
is largely just the basic plumbing for a new intrinsic/instruction,
along with a few steps to preserve the fake uses through optimizations
(such as moving them ahead of a tail call or translating them through
SROA).
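
A minimal sketch of what an emitted fake use might look like in IR, assuming the variadic declaration form:

```llvm
declare void @llvm.fake.use(...)

define void @f(i32 %x) {
entry:
  %y = add i32 %x, 1
  ; Keeps %x "used", and therefore live, up to this point even if %y is
  ; its only real consumer; the call has no other effect.
  call void (...) @llvm.fake.use(i32 %x)
  ret void
}
```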

Co-authored-by: Stephen Tozer <stephen.tozer@sony.com>
2024-08-29 17:53:32 +01:00
Matt Arsenault
7b7b0b95b2
DAG: Check if is_fpclass is custom, instead of isLegalOrCustom (#105577)
For some reason, isOperationLegalOrCustom is not the same as
isOperationLegal || isOperationCustom. Unfortunately, it checks
if the type is legal, which makes it useless for custom lowering
on non-legal types (which is always ppcf128).

Really, the DAG builder shouldn't be expanding this; it makes it
difficult to work with. It's only here to work
around the DAG requiring legal integer types the same size as
the FP type after type legalization.
2024-08-29 14:05:43 +04:00
Changpeng Fang
41b55071a1
DAG: Change round-mode operand type to i32 for FPTRUNC_ROUND (#106424)
We need this immediate type to be consistent. This is the pre-commit for
https://github.com/llvm/llvm-project/pull/105761
2024-08-28 11:16:41 -07:00
Maciej Gabka
95d2d1cba0
Move stepvector intrinsic out of experimental namespace (#98043)
This patch moves the stepvector intrinsic out of the experimental
namespace.

This intrinsic has existed in LLVM for several years now and is widely used.
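
After the move, the intrinsic is spelled without the experimental prefix; a minimal sketch:

```llvm
; Produces <0, 1, 2, 3> for a 4 x i32 result.
declare <4 x i32> @llvm.stepvector.v4i32()

define <4 x i32> @steps() {
  %v = call <4 x i32> @llvm.stepvector.v4i32()
  ret <4 x i32> %v
}
```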
2024-08-28 12:48:20 +01:00
Tianqing Wang
7f87b5bf0e
[SelectionDAG][X86] Preserve unpredictable metadata for conditional branches in SelectionDAG, as well as JCCs generated by X86 backend. (#102101)
This builds on 09515f2c2, which preserves unpredictable metadata in
CodeGen for `select`. This patch does it for conditional branches.
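A minimal IR sketch of the metadata being preserved (illustrative only):

```llvm
define i32 @sel(i1 %c, i32 %a, i32 %b) {
entry:
  ; !unpredictable marks the branch as hard to predict; this change keeps
  ; the annotation through SDAG and onto the X86-generated JCC.
  br i1 %c, label %t, label %f, !unpredictable !0
t:
  ret i32 %a
f:
  ret i32 %b
}

!0 = !{}
```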
2024-08-19 11:04:48 +08:00
Craig Topper
067f2e9f18 [SelectionDAG] Use getSignedConstant/getAllOnesConstant. 2024-08-17 00:04:01 -07:00
Kevin McAfee
3e7ca5f1ef
[SDAG] Read-only intrinsics must have WillReturn and !Throws attributes to be treated as loads (#99999)
This change avoids deleting `!willReturn` intrinsics for which the
return value is unused when building the SDAG. Currently, calls to
read-only intrinsics not marked with `IntrWillReturn` cannot be deleted
at the LLVM IR level but may be deleted when building the SDAG. 
These calls are unsafe to remove from the IR because the functions are
`!willReturn` and should also be unsafe to remove from the SDAG for
the same reason. This change aligns the behavior of the SDAG to that
of LLVM IR. This change also requires that intrinsics not have the
`Throws` attribute to be treated as loads for the same reason.
2024-08-16 11:48:57 -07:00
Craig Topper
7afb51e035
[SelectionDAG][X86] Add SelectionDAG::getSignedConstant and use it in a few places. (#104555)
PR #80309 proposes to have users of APInt's uint64_t
constructor opt in to implicit truncation. Currently, that patch
requires SelectionDAG::getConstant to opt in.

This patch adds getSignedConstant so we can start fixing some of the
cases that require implicit truncation.
2024-08-16 09:21:11 -07:00
YunQiang Su
fb9e685fc4
Intrinsic: introduce minimumnum and maximumnum for IR and SelectionDAG (#96649)
C23 introduced new functions fminimum_num and fmaximum_num, and they
follow the minimumNumber and maximumNumber of IEEE754-2019. Let's
introduce new intrinsics to support them.

This patch introduces support only for scalar values. Support for the
  vector (vp, vp.reduce, vector.reduce) and
  experimental.constrained
variants will be added in future patches.

With this patch, MIPSr6 and LoongArch can work out of box with
fcanonical and fmax/fmin.

AArch64/PowerPC64 can use the same logic as MIPSr6 and LoongArch, while
they have no fcanonical support yet.
I will add it in future patches.

The FMIN/FMAX instructions of RISC-V follow the
minimumNumber/maximumNumber of IEEE754-2019. We can just add it in a
future patch.

Background

https://discourse.llvm.org/t/rfc-fix-llvm-min-f-and-llvm-max-f-intrinsics/79735
Currently we have fminnum/fmaxnum, which have different behavior on
different platforms for NUM vs sNaN:
   1) Fallback to fmin(3)/fmax(3): return qNaN.
   2) ARM64/ARM32+Neon: same as libc.
   3) MIPSr6/LoongArch/RISC-V: return NUM.

The fix for fminnum/fmaxnum to follow minNUM/maxNUM of IEEE754-2008
will be submitted as separate patches.
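
A scalar IR sketch of the new minimumnum/maximumnum intrinsics (overload suffixes illustrative):

```llvm
declare float @llvm.minimumnum.f32(float, float)
declare float @llvm.maximumnum.f32(float, float)

define float @clamp(float %x, float %lo, float %hi) {
  ; Per IEEE754-2019 minimumNumber/maximumNumber, a NaN operand yields
  ; the other (numeric) operand rather than NaN.
  %t = call float @llvm.maximumnum.f32(float %x, float %lo)
  %r = call float @llvm.minimumnum.f32(float %t, float %hi)
  ret float %r
}
```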
2024-08-15 14:09:36 +08:00
Alex MacLean
ccc31278bd
[NVPTX] support switch statement with brx.idx (reland) (#102550)
Add custom lowering for `BR_JT` DAG nodes to the `brx.idx` PTX
instruction ([PTX ISA 9.7.13.4. Control Flow Instructions: brx.idx]
(https://docs.nvidia.com/cuda/parallel-thread-execution/#control-flow-instructions-brx-idx)).
Depending on the heuristics in DAG selection, `switch` statements may
now be lowered using `brx.idx`.

Note: this fixes the previous issue in #102400 by adding the isBarrier
attribute to BRX_END.
2024-08-09 11:46:37 -07:00
Artem Belevich
2756879000
Revert "[NVPTX] support switch statement with brx.idx" (#102530)
Reverts llvm/llvm-project#102400

Causes LLVM to crash on some tests.
2024-08-08 13:31:10 -07:00
Alex MacLean
ba97697189
[NVPTX] support switch statement with brx.idx (#102400)
Add custom lowering for `BR_JT` DAG nodes to the `brx.idx` PTX
instruction ([PTX ISA 9.7.13.4. Control Flow Instructions: brx.idx]
(https://docs.nvidia.com/cuda/parallel-thread-execution/#control-flow-instructions-brx-idx)).
Depending on the heuristics in DAG selection, `switch` statements may
now be lowered using `brx.idx`.
2024-08-08 13:19:26 -07:00
Nikita Popov
0564d0665b [SDAG] Transfer gep nusw/nuw to SDAG
The resulting add is nuw if either the gep was nuw or it was
nusw+nneg. Previously only inbounds+nneg was handled.

Test via wasm load offsets, which seems to most directly expose
these SDAG flags.
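
A small IR sketch of the flags involved (the index masking is just one way to make the index provably non-negative):

```llvm
define ptr @at(ptr %base, i64 %i) {
  ; nusw plus a non-negative index lets the resulting address add be
  ; tagged nuw when it is transferred to SDAG.
  %nonneg = and i64 %i, 1023
  %p = getelementptr nusw i32, ptr %base, i64 %nonneg
  ret ptr %p
}
```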
2024-08-07 09:26:10 +02:00
Alexis Engelke
da0e66e64c
[CodeGen][NFC] Add wrapper method for MBBMap (#101893)
This is a preparation for changing the data structure of MBBMap.
2024-08-04 18:34:26 +02:00
Zequan Wu
ae6dc64ec6 Reapply "[Clang] Fix nomerge attribute not working with __builtin_trap(), __debugbreak(), __builtin_verbose_trap() (#101549)"
This reverts commit 667598d84b16d1789ce90b231565e9e7bfdbe77d and fixes failed tests: llvm/test/CodeGen/X86/nomerge.ll and llvm/test/MC/AArch64/local-bounds-single-trap.ll.
2024-08-01 15:54:50 -07:00
Haowei Wu
667598d84b Revert "[Clang] Fix nomerge attribute not working with __builtin_trap(), __debugbreak(), __builtin_verbose_trap() (#101549)"
This reverts commit 5e84646982d1ec9bc94e48dde4b47f03c044a156, which
broke 'nomerge.ll' test on llvm bots.
2024-08-01 14:46:36 -07:00
Zequan Wu
5e84646982
[Clang] Fix nomerge attribute not working with __builtin_trap(), __debugbreak(), __builtin_verbose_trap() (#101549)
1. It fixes the problem of llvm.trap() not getting the nomerge
attribute.
2. It sets the nomerge flag for the node if the instruction has the
nomerge attribute.

This is a copy of https://reviews.llvm.org/D146164. This only attempts
to fix `nomerge` for `__builtin_trap()`, `__debugbreak()`,
`__builtin_verbose_trap()`, not working for non-trap builtins.

Fixes #53011
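
A rough IR sketch of the call-site attribute that is now propagated to the node (hypothetical function body):

```llvm
declare void @llvm.trap()

define void @check(i1 %c1, i1 %c2) {
entry:
  br i1 %c1, label %trap1, label %next
trap1:
  ; nomerge on each call site asks CodeGen not to fold the two traps
  ; into a single trap instruction.
  call void @llvm.trap() #0
  unreachable
next:
  br i1 %c2, label %trap2, label %exit
trap2:
  call void @llvm.trap() #0
  unreachable
exit:
  ret void
}

attributes #0 = { nomerge }
```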
2024-08-01 16:13:39 -04:00
Kazu Hirata
34d48279b8
[llvm] Initialize SmallVector with ranges (NFC) (#100948) 2024-07-28 22:08:12 -07:00
Matt Arsenault
6f83a031e4
CodeGen: Move current call site out of MachineModuleInfo (#100369)
I do not understand what this is for, but it's only used in
SelectionDAGBuilder, so move it to FunctionLoweringInfo like other
function scope DAG builder state. The intrinsics are not documented
in the LangRef or Intrinsics.td.

This removes the last piece of codegen state from MachineModuleInfo.
2024-07-26 13:02:12 +04:00
Vitaly Buka
455990d18f
Reland "SelectionDAG: Avoid using MachineFunction::getMMI" (#99779)
Reverts llvm/llvm-project#99777

Co-authored-by: Matt Arsenault <Matthew.Arsenault@amd.com>
2024-07-24 10:38:53 +04:00
Craig Topper
b8d2b775f2
[SelectionDAGBuilder] Avoid const_cast on call to matchSelectPattern. NFC (#100053)
By making the LHS and RHS const pointers, we can use the const signature
of matchSelectPattern.
2024-07-23 08:58:02 -07:00
Craig Topper
1c798e0b07
[SelectionDAGBuilder][RISCV] Fix crash when using a memory constraint with scalable vector type. (#99821)
We need to use the minimum size of the scalable type and the correct
stack ID.

The code in the PR is still invalid because the instruction used doesn't
have a pointer operand. This is diagnosed later when the assembler
parses it.

Fixes #99782
2024-07-22 09:41:57 -07:00
Vitaly Buka
98c0e55d9d
Revert "SelectionDAG: Avoid using MachineFunction::getMMI" (#99777)
Reverts llvm/llvm-project#99696

https://lab.llvm.org/buildbot/#/builders/164/builds/1262
2024-07-20 12:20:50 -07:00
Joseph Huber
615b7eeaa9 Reapply "[LLVM][LTO] Factor out RTLib calls and allow them to be dropped (#98512)"
This reverts commit 740161a9b98c9920dedf1852b5f1c94d0a683af5.

I moved the `ISD` dependencies into the CodeGen portion of the handling;
it's a little awkward, but it's the easiest solution I can think of for
now.
2024-07-20 09:29:31 -05:00
Matt Arsenault
c2019a37bd
SelectionDAG: Avoid using MachineFunction::getMMI (#99696) 2024-07-20 10:53:41 +04:00
NAKAMURA Takumi
740161a9b9 Revert "[LLVM][LTO] Factor out RTLib calls and allow them to be dropped (#98512)"
This reverts commit c05126bdfc3b02daa37d11056fa43db1a6cdef69.
(llvmorg-19-init-17714-gc05126bdfc3b)
See #99610
2024-07-20 12:36:57 +09:00
Matt Arsenault
0f0cfcff2c
CodeGen: Avoid some references to MachineFunction's getMMI (#99652)
MachineFunctions probably should not include a backreference to
the owning MachineModuleInfo. Most of these references were used
just to query the MCContext, which MachineFunction already directly
stores. Other contexts are using it to query the LLVMContext, which
can already be accessed through the IR function reference.
2024-07-19 22:09:05 +04:00
Lawrence Benson
177ce1900f
[LLVM] Add llvm.experimental.vector.compress intrinsic (#92289)
This PR adds a new vector intrinsic `@llvm.experimental.vector.compress`
to "compress" data within a vector based on a selection mask, i.e., it
moves all selected values (i.e., where `mask[i] == 1`) to consecutive
lanes in the result vector. A `passthru` vector can be provided, from
which remaining lanes are filled.
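A fixed-width IR sketch (name mangling shown for a 4 x i32 value; illustrative):

```llvm
declare <4 x i32> @llvm.experimental.vector.compress.v4i32(<4 x i32>, <4 x i1>, <4 x i32>)

define <4 x i32> @compress(<4 x i32> %v, <4 x i1> %m, <4 x i32> %pt) {
  ; Lanes of %v with a set mask bit are packed into the low lanes of the
  ; result; the remaining lanes come from the passthru %pt.
  %r = call <4 x i32> @llvm.experimental.vector.compress.v4i32(<4 x i32> %v, <4 x i1> %m, <4 x i32> %pt)
  ret <4 x i32> %r
}
```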

The main reason for this is that the existing
`@llvm.masked.compressstore` has very strong constraints in that it can
only write values that were selected, resulting in guard branches for
all targets except AVX-512 (and even there the AMD implementation is
_very_ slow). More instruction sets support "compress" logic, but only
within registers. So to store the values, an additional store is needed.
But this combination is likely significantly faster on many targets as it
avoids branches.

In follow-up PRs, my plan is to add target-specific lowerings for x86,
SVE, and possibly RISCV. I also want to combine this with a store
instruction, as this is probably a common case and we can avoid some
memory writes in that case.

See [discussion in
forum](https://discourse.llvm.org/t/new-intrinsic-for-masked-vector-compress-without-store/78663)
for initial discussion on the design.
2024-07-17 14:24:24 +02:00
Amara Emerson
f270a4dd66
[AArch64] Don't tail call memset if it would convert to a bzero. (#98969)
Well, not quite that simple. We can tail call memset since it returns its
first argument, but bzero doesn't do that, and therefore we can end up
miscompiling.

This patch also refactors the logic out of isInTailCallPosition() into the callers.
As a result memcpy and memmove are also modified to do the same thing
for consistency.

rdar://131419786
2024-07-17 01:31:52 -07:00
Joseph Huber
c05126bdfc
[LLVM][LTO] Factor out RTLib calls and allow them to be dropped (#98512)
Summary:
The LTO pass and LLD linker have logic in them that forces extraction
and prevent internalization of needed runtime calls. However, these
currently take all RTLibcalls into account, even if the target does not
support them. The target opts out of a libcall if it sets its name to
nullptr. This patch pulls this logic out into a class in the header so
that LTO / lld can use it to determine if a symbol actually needs to be
kept.

This is important for targets like AMDGPU that want to be able to use
`lld` to perform the final link step but do not want the overhead of
uncalled functions (which trivially adds about a second to the link time).
2024-07-16 06:22:09 -05:00
Ahmed Bougacha
d286efeb1d
[AArch64][PAC] Lower direct authenticated calls to ptrauth constants. (#97664)
This tries to turn indirect ptrauth calls into direct calls, using
`ConstantPtrAuth::isKnownEquivalent` to compare the `ConstantPtrAuth`
target with the ptrauth call bundle.

This should be straightforward, other than the somewhat awkward GISel
handling, which has a handshake between CallLowering and IRTranslator to
elide the ptrauth when possible.
2024-07-15 16:14:43 -07:00
Farzon Lotfi
0b58f34c98
[X86][CodeGen] Add base trig intrinsic lowerings (#96222)
This change is an implementation of
https://github.com/llvm/llvm-project/issues/87367's investigation on
supporting IEEE math operations as intrinsics.
Which was discussed in this RFC:
https://discourse.llvm.org/t/rfc-all-the-math-intrinsics/78294

This change adds constraint intrinsics and some lowering cases for
`acos`, `asin`, `atan`, `cosh`, `sinh`, and `tanh`.
The only x86 specific change was for f80.
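The scalar forms being lowered look like this in IR (f32 shown; the f80 variants are the x86-specific part mentioned above):

```llvm
declare float @llvm.acos.f32(float)
declare float @llvm.asin.f32(float)
declare float @llvm.atan.f32(float)
declare float @llvm.cosh.f32(float)
declare float @llvm.sinh.f32(float)
declare float @llvm.tanh.f32(float)

define float @gd(float %x) {
  ; Gudermannian function, gd(x) = atan(sinh(x)), as a small usage example.
  %s = call float @llvm.sinh.f32(float %x)
  %g = call float @llvm.atan.f32(float %s)
  ret float %g
}
```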

https://github.com/llvm/llvm-project/issues/70079
https://github.com/llvm/llvm-project/issues/70080
https://github.com/llvm/llvm-project/issues/70081
https://github.com/llvm/llvm-project/issues/70083
https://github.com/llvm/llvm-project/issues/70084
https://github.com/llvm/llvm-project/issues/95966
    
The x86 lowering is going to be done in three PR changes, with this being
the first.
A second PR will be put up for the Loop Vectorizer and then the SLPVectorizer.

The constraint intrinsics are also going to be in multiple parts, but
just two.
This part covers just the LLVM-specific changes; part 2 will cover the
Clang-specific changes and legalization for backends that have special
legalization requirements, like AArch64 and wasm.
2024-07-11 15:58:43 -04:00
Daniel Kiss
1782810b84 [Clang][ARM][AArch64] Always emit protection attributes for functions. (#82819)
So far, the branch protection, sign return address, and guarded control stack
attributes are only emitted as module flags to indicate that functions need
to be generated with those features.
The problem is that in an LTO build the module flags are merged with the
`min` rule, which means that if one of the modules is not built with sign
return address then the features will be turned off for all functions,
because the functions take the branch-protection and sign-return-address
features from the module flags. Sign-return-address is a function-level
option, so functions from files compiled with -mbranch-protection=pac-ret
are expected to be protected.
The inliner might also inline functions with different sets of flags, as it
doesn't consider the module flags.

This patch adds the attributes to all functions and drops the checking of
the module flags for code generation.
The module flag is still used for generating the ELF markers.
It also drops the "true"/"false" values from the
branch-protection-enforcement, branch-protection-pauth-lr, and
guarded-control-stack attributes, as presence of the attribute means on,
absence means off, and there is no other option.

Relanded with test fixes.
2024-07-10 11:32:41 +02:00
Daniel Kiss
4b2daeccc7
Revert "[Clang][ARM][AArch64] Alway emit protection attributes for functions." (#98284)
Reverts llvm/llvm-project#82819
2024-07-10 10:22:38 +02:00
Daniel Kiss
e15d67cfc2
[Clang][ARM][AArch64] Always emit protection attributes for functions. (#82819)
So far, the branch protection, sign return address, and guarded control stack
attributes are only emitted as module flags to indicate that functions need
to be generated with those features.
The problem is that in an LTO build the module flags are merged with the
`min` rule, which means that if one of the modules is not built with sign
return address then the features will be turned off for all functions,
because the functions take the branch-protection and sign-return-address
features from the module flags. Sign-return-address is a function-level
option, so functions from files compiled with -mbranch-protection=pac-ret
are expected to be protected.
The inliner might also inline functions with different sets of flags, as it
doesn't consider the module flags.

This patch adds the attributes to all functions and drops the checking of
the module flags for code generation.
The module flag is still used for generating the ELF markers.
It also drops the "true"/"false" values from the
branch-protection-enforcement, branch-protection-pauth-lr, and
guarded-control-stack attributes, as presence of the attribute means on,
absence means off, and there is no other option.
2024-07-10 10:06:14 +02:00
Shengchen Kan
a48305e0f9 [X86][CodeGen] Convert masked.load/store to CLOAD/CSTORE node only when vector size = 1
This fixes the crash when building llvm-test-suite with avx512f + cf.
2024-07-05 15:50:21 +08:00
Shengchen Kan
c60b9307d0 Revert "[X86][CodeGen] Convert masked.load/store to CLOAD/CSTORE node only when vector size = 1"
This reverts commit 74984dee51307779a3eab10a8cd6102be37e1081.

It caused AArch64 test sve-nontemporal-masked-ldst.ll to fail.
2024-07-05 15:14:30 +08:00
Shengchen Kan
74984dee51 [X86][CodeGen] Convert masked.load/store to CLOAD/CSTORE node only when vector size = 1
This fixes the crash when building llvm-test-suite with avx512f + cf.
2024-07-05 14:35:42 +08:00
Nicholas Guy
6222c8f030
[IR][LangRef] Add partial reduction add intrinsic (#94499)
Adds the llvm.experimental.partial.reduce.add.* overloaded intrinsic,
this intrinsic represents add reductions that result in a narrower
vector.
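
A minimal IR sketch of the overloaded intrinsic (mangling illustrative), reducing a 16-lane input into a 4-lane accumulator:

```llvm
declare <4 x i32> @llvm.experimental.partial.reduce.add.v4i32.v16i32(<4 x i32>, <16 x i32>)

define <4 x i32> @partial_sum(<4 x i32> %acc, <16 x i32> %in) {
  ; The 16 input lanes are summed into the 4 accumulator lanes; how the
  ; lanes are distributed across the accumulator is target-dependent.
  %r = call <4 x i32> @llvm.experimental.partial.reduce.add.v4i32.v16i32(<4 x i32> %acc, <16 x i32> %in)
  ret <4 x i32> %r
}
```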
2024-07-04 13:32:42 +01:00
Kazu Hirata
58fd3bea6d
[CodeGen] Use range-based for loops (NFC) (#97467) 2024-07-02 16:36:13 -07:00
Igor Kudrin
23db37c51c
[CodeGen] Do not emit TRAP for unreachable after @llvm.trap (#94570)
With `--trap-unreachable`, `clang` can emit double `TRAP` instructions
for code that contains a call to `__builtin_trap()`:

```
> cat test.c
void test() { __builtin_trap(); }
> clang test.c --target=x86_64 -mllvm --trap-unreachable -O1 -S -o -
...
test:
...
  ud2
  ud2
...
```

`SimplifyCFGPass` inserts `unreachable` after a call to a `noreturn`
function, and later this instruction causes `TRAP/G_TRAP` to be emitted
in `SelectionDAGBuilder::visitUnreachable()` or
`IRTranslator::translateUnreachable()` if
`TargetOptions.TrapUnreachable` is set.

The patch checks the instruction before `unreachable` and avoids
inserting an additional trap.
2024-07-02 15:36:02 -07:00
Daniil Kovalev
1488fb4153
[PAC][AArch64] Lower ptrauth constants in code (#96879)
This re-applies #94241 after fixing buildbot failure, see
https://lab.llvm.org/buildbot/#/builders/51/builds/570

According to the standard, `constexpr` variables and `const` variables
initialized with constant expressions can be used in lambdas w/o
capturing - see https://en.cppreference.com/w/cpp/language/lambda.
However, MSVC used on buildkite seems to ignore that rule and does not
allow using such uncaptured variables in lambdas: we have "error C3493:
'Mask16' cannot be implicitly captured because no default capture mode
has been specified" - see
https://buildkite.com/llvm-project/github-pull-requests/builds/73238

Explicitly capturing such a variable, however, makes buildbot fail with
"error: lambda capture 'Mask16' is not required to be captured for this
use [-Werror,-Wunused-lambda-capture]" - see
https://lab.llvm.org/buildbot/#/builders/51/builds/570.

Fix both cases by using the `0xffff` value directly instead of giving it a
name.

Original PR description below.

Depends on #94240.

Define the following pseudos for lowering ptrauth constants in code:

- non-`extern_weak`:
  - no GOT load needed: `MOVaddrPAC` - similar to `MOVaddr`, with added
PAC;
  - GOT load needed: `LOADgotPAC` - similar to `LOADgot`, with added PAC;
- `extern_weak`: `LOADauthptrstatic` - similar to `LOADgot`, but use a
special stub slot named `sym$auth_ptr$key$disc` filled by dynamic linker
during relocation resolving instead of a GOT slot.

---------

Co-authored-by: Ahmed Bougacha <ahmed@bougacha.org>
2024-06-28 07:29:38 +03:00
Shengchen Kan
15fc801cf0
[X86][CodeGen] Support hoisting load/store with conditional faulting (#96720)
1. Add TTI interface for conditional load/store.
2. Mark 1 x i16/i32/i64 masked load/store legal so that it's not
   legalized in the scalarize-masked-mem-intrin pass.
3. Visit 1 x i16/i32/i64 masked load/store to build a target-specific
   CLOAD/CSTORE node to avoid error in
   `DAGTypeLegalizer::ScalarizeVectorResult`.
4. Combine DAG to simplify the nodes for CLOAD/CSTORE.
5. Lower CLOAD/CSTORE to CFCMOV by pattern match.

This is CodeGen part of #95515
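
For reference, the 1-element masked load that step 2 keeps legal looks roughly like this in IR:

```llvm
declare <1 x i32> @llvm.masked.load.v1i32.p0(ptr, i32, <1 x i1>, <1 x i32>)

define <1 x i32> @cond_load(ptr %p, <1 x i1> %m, <1 x i32> %pt) {
  ; With a single lane, this can be selected as a CLOAD node and then a
  ; CFCMOV conditional-faulting load instead of being scalarized.
  %v = call <1 x i32> @llvm.masked.load.v1i32.p0(ptr %p, i32 4, <1 x i1> %m, <1 x i32> %pt)
  ret <1 x i32> %v
}
```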
2024-06-27 17:01:55 +08:00
Daniil Kovalev
99251f5a11
Revert "[PAC][AArch64] Lower ptrauth constants in code (#94241)" (#96865)
This reverts #94241.

See buildbot failure
https://lab.llvm.org/buildbot/#/builders/51/builds/570
2024-06-27 11:10:38 +03:00