From e44c62684d9e0e82636d8e629f97f0f95b351bb7 Mon Sep 17 00:00:00 2001 From: Adam Sawicki Date: Fri, 15 Jun 2018 14:30:39 +0200 Subject: [PATCH] Added debug macro VMA_DEBUG_INITIALIZE_ALLOCATIONS to initialize contents of allocations with a bit pattern. Documented it. Added test for it. Fixed some indentation. --- README.md | 2 +- .../_v_k__k_h_r_dedicated_allocation.html | 100 -- docs/html/about_the_library.html | 84 -- ...ction.html => debugging_memory_usage.html} | 21 +- docs/html/group__general.html | 643 ------------ docs/html/group__layer1.html | 308 ------ docs/html/group__layer2.html | 988 ------------------ docs/html/group__layer3.html | 290 ----- docs/html/index.html | 9 +- docs/html/modules.html | 81 -- docs/html/persistently_mapped_memory.html | 81 -- docs/html/search/all_2.js | 1 - docs/html/search/all_3.js | 1 + docs/html/search/groups_0.html | 26 - docs/html/search/groups_0.js | 4 - docs/html/search/groups_1.html | 26 - docs/html/search/groups_1.js | 6 - docs/html/search/pages_1.js | 1 - docs/html/search/pages_2.js | 1 + ...truct_vma_memory_requirements-members.html | 81 -- docs/html/struct_vma_memory_requirements.html | 181 ---- docs/html/thread_safety.html | 83 -- docs/html/user_guide.html | 254 ----- docs/html/vk__mem__alloc_8h.html | 4 +- docs/html/vk__mem__alloc_8h_source.html | 218 ++-- src/Tests.cpp | 105 +- src/VmaUsage.h | 1 + src/vk_mem_alloc.h | 248 +++-- 28 files changed, 399 insertions(+), 3449 deletions(-) delete mode 100644 docs/html/_v_k__k_h_r_dedicated_allocation.html delete mode 100644 docs/html/about_the_library.html rename docs/html/{corruption_detection.html => debugging_memory_usage.html} (73%) delete mode 100644 docs/html/group__general.html delete mode 100644 docs/html/group__layer1.html delete mode 100644 docs/html/group__layer2.html delete mode 100644 docs/html/group__layer3.html delete mode 100644 docs/html/modules.html delete mode 100644 docs/html/persistently_mapped_memory.html delete mode 100644 
docs/html/search/groups_0.html delete mode 100644 docs/html/search/groups_0.js delete mode 100644 docs/html/search/groups_1.html delete mode 100644 docs/html/search/groups_1.js delete mode 100644 docs/html/struct_vma_memory_requirements-members.html delete mode 100644 docs/html/struct_vma_memory_requirements.html delete mode 100644 docs/html/thread_safety.html delete mode 100644 docs/html/user_guide.html diff --git a/README.md b/README.md index 309b179..10be08b 100644 --- a/README.md +++ b/README.md @@ -50,7 +50,7 @@ Additional features: - Debug annotations: Associate string with name or opaque pointer to your own data with every allocation. - JSON dump: Obtain a string in JSON format with detailed map of internal state, including list of allocations and gaps between them. - Convert this JSON dump into a picture to visualize your memory. See [tools/VmaDumpVis](tools/VmaDumpVis/README.md). -- Margins: Enable validation of a magic number before and after every allocation to detect out-of-bounds memory corruption. +- Debugging incorrect memory usage: Enable initialization of all allocated memory with a bit pattern to detect usage of uninitialized or freed memory. Enable validation of a magic number before and after every allocation to detect out-of-bounds memory corruption. # Prequisites diff --git a/docs/html/_v_k__k_h_r_dedicated_allocation.html b/docs/html/_v_k__k_h_r_dedicated_allocation.html deleted file mode 100644 index fded3c2..0000000 --- a/docs/html/_v_k__k_h_r_dedicated_allocation.html +++ /dev/null @@ -1,100 +0,0 @@ - - - - - - - -Vulkan Memory Allocator: VK_KHR_dedicated_allocation - - - - - - - - - -
-
- - - - - - -
-
Vulkan Memory Allocator -
-
-
- - - - - - - - -
-
- - -
- -
- - -
-
-
-
VK_KHR_dedicated_allocation
-
-
-

VK_KHR_dedicated_allocation is a Vulkan extension which can be used to improve performance on some GPUs. It augments the Vulkan API with the possibility to query the driver whether it prefers a particular buffer or image to have its own, dedicated allocation (a separate VkDeviceMemory block) for better efficiency - so it can perform some internal optimizations.

-

The extension is supported by this library. It will be used automatically when enabled. To enable it:

-

1. When creating the Vulkan device, check if the following 2 device extensions are supported (call vkEnumerateDeviceExtensionProperties()). If yes, enable them (fill VkDeviceCreateInfo::ppEnabledExtensionNames).

-
    -
  • VK_KHR_get_memory_requirements2
  • -
  • VK_KHR_dedicated_allocation
  • -
-

If you enabled these extensions:

-

2. Query the device for pointers to the following 2 extension functions, using vkGetDeviceProcAddr(). Pass them in the structure VmaVulkanFunctions while creating your VmaAllocator.

-
    -
  • vkGetBufferMemoryRequirements2KHR
  • -
  • vkGetImageMemoryRequirements2KHR
  • -
-

Other members of this structure can be null as long as you leave VMA_STATIC_VULKAN_FUNCTIONS defined to 1, which is the default.

-
VmaVulkanFunctions vulkanFunctions = {};
vulkanFunctions.vkGetBufferMemoryRequirements2KHR = (PFN_vkGetBufferMemoryRequirements2KHR)vkGetDeviceProcAddr(device, "vkGetBufferMemoryRequirements2KHR");
vulkanFunctions.vkGetImageMemoryRequirements2KHR = (PFN_vkGetImageMemoryRequirements2KHR)vkGetDeviceProcAddr(device, "vkGetImageMemoryRequirements2KHR");
VmaAllocatorCreateInfo allocatorInfo = {};
allocatorInfo.pVulkanFunctions = &vulkanFunctions;
// Fill other members of allocatorInfo...

3. Use the VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT flag when creating your VmaAllocator to inform the library that you have enabled the required extensions and want the library to use them.

-
vmaCreateAllocator(&allocatorInfo, &allocator);

That's all. The extension will be automatically used whenever you create a buffer using vmaCreateBuffer() or image using vmaCreateImage().

-

When using the extension together with Vulkan Validation Layer, you will receive warnings like this:

vkBindBufferMemory(): Binding memory to buffer 0x33 but vkGetBufferMemoryRequirements() has not been called on that buffer.
-

This is OK; you can safely ignore the warning. It happens because you use the function vkGetBufferMemoryRequirements2KHR() instead of the standard vkGetBufferMemoryRequirements(), and the validation layer seems to be unaware of it.

-

To learn more about this extension, see:

- -
- - - - diff --git a/docs/html/about_the_library.html b/docs/html/about_the_library.html deleted file mode 100644 index de48ac6..0000000 --- a/docs/html/about_the_library.html +++ /dev/null @@ -1,84 +0,0 @@ - - - - - - - -Vulkan Memory Allocator: About the library - - - - - - - - - -
-
- - - - - - -
-
Vulkan Memory Allocator -
-
-
- - - - - - - - -
-
- - -
- -
- - -
-
-
-
About the library
-
-
-

-Features not supported

-

Features deliberately excluded from the scope of this library:

-
    -
  • Data transfer - issuing commands that transfer data between buffers or images, any usage of VkCommandBuffer or VkQueue and related synchronization is the responsibility of the user.
  • -
  • Support for any programming languages other than C/C++. Bindings to other languages are welcomed as external projects.
  • -
-
- - - - diff --git a/docs/html/corruption_detection.html b/docs/html/debugging_memory_usage.html similarity index 73% rename from docs/html/corruption_detection.html rename to docs/html/debugging_memory_usage.html index d05cdb9..dd84301 100644 --- a/docs/html/corruption_detection.html +++ b/docs/html/debugging_memory_usage.html @@ -5,7 +5,7 @@ -Vulkan Memory Allocator: Corruption detection +Vulkan Memory Allocator: Debugging incorrect memory usage @@ -63,13 +63,19 @@ $(function() {
-
Corruption detection
+
Debugging incorrect memory usage
-

If you suspect a bug caused by memory being overwritten out of bounds of an allocation, you can use debug features of this library to verify this.

-

+

If you suspect a bug with memory usage, like usage of uninitialized memory or memory being overwritten out of bounds of an allocation, you can use debug features of this library to verify this.

+

+Memory initialization

+

If you experience a bug with incorrect data in your program and you suspect uninitialized memory to be used, you can enable automatic memory initialization to verify this. To do it, define macro VMA_DEBUG_INITIALIZE_ALLOCATIONS to 1.

+
#define VMA_DEBUG_INITIALIZE_ALLOCATIONS 1
#include "vk_mem_alloc.h"

It makes memory of all new allocations initialized to bit pattern 0xDCDCDCDC. Before an allocation is destroyed, its memory is filled with bit pattern 0xEFEFEFEF. Memory is automatically mapped and unmapped if necessary.

+

If you find these values while debugging your program, chances are good that you incorrectly read Vulkan memory that is allocated but not initialized, or that has already been freed, respectively.

+

Memory initialization works only with memory types that are HOST_VISIBLE. It also works with dedicated allocations. It doesn't work with allocations created with the VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT flag, as they cannot be mapped.

+

Margins

-

By default, allocations are laid your in memory blocks next to each other if possible (considering required alignment, bufferImageGranularity, and nonCoherentAtomSize).

+

By default, allocations are laid out in memory blocks next to each other if possible (considering required alignment, bufferImageGranularity, and nonCoherentAtomSize).

Allocations without margin
@@ -78,11 +84,12 @@ Margins

Allocations with margin

If your bug goes away after enabling margins, it means it may be caused by memory being overwritten outside of allocation boundaries. It is not 100% certain though. Change in application behavior may also be caused by different order and distribution of allocations across memory blocks after margins are applied.

-

The margin is applied also before first and after last allocation in a block. It may happen only once between two adjacent allocations.

+

The margin is also applied before the first and after the last allocation in a block. It may occur only once between two adjacent allocations.

+

Margins work with all types of memory.

Margin is applied only to allocations made out of memory blocks, not to dedicated allocations, which have their own memory block of a specific size. It is thus not applied to allocations made using the VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT flag, or to those automatically promoted to a dedicated allocation, e.g. due to their large size or when recommended by the VK_KHR_dedicated_allocation extension.

Margins appear in JSON dump as part of free space.

Note that enabling margins increases memory usage and fragmentation.

-

+

Corruption detection

You can additionally define macro VMA_DEBUG_DETECT_CORRUPTION to 1 to enable validation of contents of the margins.

#define VMA_DEBUG_MARGIN 16
#define VMA_DEBUG_DETECT_CORRUPTION 1
#include "vk_mem_alloc.h"

When this feature is enabled, the number of bytes specified as VMA_DEBUG_MARGIN (it must be a multiple of 4) before and after every allocation is filled with a magic number. This idea is also known as a "canary". Memory is automatically mapped and unmapped if necessary.

diff --git a/docs/html/group__general.html b/docs/html/group__general.html deleted file mode 100644 index cbb5a09..0000000 --- a/docs/html/group__general.html +++ /dev/null @@ -1,643 +0,0 @@ - - - - - - - -Vulkan Memory Allocator: General - - - - - - - - - -
-
- - - - - - -
-
Vulkan Memory Allocator -
-
-
- - - - - - - -
- -
-
- - -
- -
- -
- -
-
General
-
-
- - - - - - - - - - - - - - - - - -

-Classes

struct  VmaDeviceMemoryCallbacks
 Set of callbacks that the library will call for vkAllocateMemory and vkFreeMemory. More...
 
struct  VmaVulkanFunctions
 Pointers to some Vulkan functions - a subset used by the library. More...
 
struct  VmaAllocatorCreateInfo
 Description of an Allocator to be created. More...
 
struct  VmaStatInfo
 Calculated statistics of memory usage in entire allocator. More...
 
struct  VmaStats
 General statistics from current state of Allocator. More...
 
- - - -

-Macros

#define VMA_STATS_STRING_ENABLED   1
 
- - - - - - - - - - - - - - - - - - - - - - - - - - - -

-Typedefs

typedef void(VKAPI_PTR * PFN_vmaAllocateDeviceMemoryFunction) (VmaAllocator allocator, uint32_t memoryType, VkDeviceMemory memory, VkDeviceSize size)
 Callback function called after successful vkAllocateMemory. More...
 
typedef void(VKAPI_PTR * PFN_vmaFreeDeviceMemoryFunction) (VmaAllocator allocator, uint32_t memoryType, VkDeviceMemory memory, VkDeviceSize size)
 Callback function called before vkFreeMemory. More...
 
typedef struct VmaDeviceMemoryCallbacks VmaDeviceMemoryCallbacks
 Set of callbacks that the library will call for vkAllocateMemory and vkFreeMemory. More...
 
typedef enum VmaAllocatorCreateFlagBits VmaAllocatorCreateFlagBits
 Flags for created VmaAllocator. More...
 
typedef VkFlags VmaAllocatorCreateFlags
 
typedef struct VmaVulkanFunctions VmaVulkanFunctions
 Pointers to some Vulkan functions - a subset used by the library. More...
 
typedef struct VmaAllocatorCreateInfo VmaAllocatorCreateInfo
 Description of an Allocator to be created. More...
 
typedef struct VmaStatInfo VmaStatInfo
 Calculated statistics of memory usage in entire allocator. More...
 
typedef struct VmaStats VmaStats
 General statistics from current state of Allocator. More...
 
- - - - -

-Enumerations

enum  VmaAllocatorCreateFlagBits { VMA_ALLOCATOR_CREATE_EXTERNALLY_SYNCHRONIZED_BIT = 0x00000001, -VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT = 0x00000002, -VMA_ALLOCATOR_CREATE_FLAG_BITS_MAX_ENUM = 0x7FFFFFFF - }
 Flags for created VmaAllocator. More...
 
- - - - - - - - - - - - - - - - - - - - - - - - - -

-Functions

VkResult vmaCreateAllocator (const VmaAllocatorCreateInfo *pCreateInfo, VmaAllocator *pAllocator)
 Creates Allocator object. More...
 
void vmaDestroyAllocator (VmaAllocator allocator)
 Destroys allocator object. More...
 
void vmaGetPhysicalDeviceProperties (VmaAllocator allocator, const VkPhysicalDeviceProperties **ppPhysicalDeviceProperties)
 
void vmaGetMemoryProperties (VmaAllocator allocator, const VkPhysicalDeviceMemoryProperties **ppPhysicalDeviceMemoryProperties)
 
void vmaGetMemoryTypeProperties (VmaAllocator allocator, uint32_t memoryTypeIndex, VkMemoryPropertyFlags *pFlags)
 Given Memory Type Index, returns Property Flags of this memory type. More...
 
void vmaSetCurrentFrameIndex (VmaAllocator allocator, uint32_t frameIndex)
 Sets index of the current frame. More...
 
void vmaCalculateStats (VmaAllocator allocator, VmaStats *pStats)
 Retrieves statistics from current state of the Allocator. More...
 
void vmaBuildStatsString (VmaAllocator allocator, char **ppStatsString, VkBool32 detailedMap)
 Builds and returns statistics as string in JSON format. More...
 
void vmaFreeStatsString (VmaAllocator allocator, char *pStatsString)
 
-

Detailed Description

-

Macro Definition Documentation

- -

◆ VMA_STATS_STRING_ENABLED

- -
-
- - - - -
#define VMA_STATS_STRING_ENABLED   1
-
- -
-
-

Typedef Documentation

- -

◆ PFN_vmaAllocateDeviceMemoryFunction

- -
-
- - - - -
typedef void(VKAPI_PTR * PFN_vmaAllocateDeviceMemoryFunction) (VmaAllocator allocator, uint32_t memoryType, VkDeviceMemory memory, VkDeviceSize size)
-
- -

Callback function called after successful vkAllocateMemory.

- -
-
- -

◆ PFN_vmaFreeDeviceMemoryFunction

- -
-
- - - - -
typedef void(VKAPI_PTR * PFN_vmaFreeDeviceMemoryFunction) (VmaAllocator allocator, uint32_t memoryType, VkDeviceMemory memory, VkDeviceSize size)
-
- -

Callback function called before vkFreeMemory.

- -
-
- -

◆ VmaAllocatorCreateFlagBits

- -
-
- -

Flags for created VmaAllocator.

- -
-
- -

◆ VmaAllocatorCreateFlags

- -
-
- - - - -
typedef VkFlags VmaAllocatorCreateFlags
-
- -
-
- -

◆ VmaAllocatorCreateInfo

- -
-
- -

Description of an Allocator to be created.

- -
-
- -

◆ VmaDeviceMemoryCallbacks

- -
-
- -

Set of callbacks that the library will call for vkAllocateMemory and vkFreeMemory.

-

Provided for informative purpose, e.g. to gather statistics about number of allocations or total amount of memory allocated in Vulkan.

-

Used in VmaAllocatorCreateInfo::pDeviceMemoryCallbacks.

- -
-
- -

◆ VmaStatInfo

- -
-
- - - - -
typedef struct VmaStatInfo VmaStatInfo
-
- -

Calculated statistics of memory usage in entire allocator.

- -
-
- -

◆ VmaStats

- -
-
- - - - -
typedef struct VmaStats VmaStats
-
- -

General statistics from current state of Allocator.

- -
-
- -

◆ VmaVulkanFunctions

- -
-
- - - - -
typedef struct VmaVulkanFunctions VmaVulkanFunctions
-
- -

Pointers to some Vulkan functions - a subset used by the library.

-

Used in VmaAllocatorCreateInfo::pVulkanFunctions.

- -
-
-

Enumeration Type Documentation

- -

◆ VmaAllocatorCreateFlagBits

- -
-
- - - - -
enum VmaAllocatorCreateFlagBits
-
- -

Flags for created VmaAllocator.

- - - - -
Enumerator
VMA_ALLOCATOR_CREATE_EXTERNALLY_SYNCHRONIZED_BIT 

Allocator and all objects created from it will not be synchronized internally, so you must guarantee they are used from only one thread at a time or synchronized externally by you.

-

Using this flag may increase performance because internal mutexes are not used.

-
VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT 

Enables usage of VK_KHR_dedicated_allocation extension.

-

Using this extension will automatically allocate dedicated blocks of memory for some buffers and images instead of suballocating space for them out of bigger memory blocks (as if you explicitly used the VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT flag) when it is recommended by the driver. It may improve performance on some GPUs.

-

You may set this flag only if you found out that following device extensions are supported, you enabled them while creating Vulkan device passed as VmaAllocatorCreateInfo::device, and you want them to be used internally by this library:

-
    -
  • VK_KHR_get_memory_requirements2
  • -
  • VK_KHR_dedicated_allocation
  • -
-

If this flag is enabled, you must also provide VmaAllocatorCreateInfo::pVulkanFunctions and fill at least members: VmaVulkanFunctions::vkGetBufferMemoryRequirements2KHR, VmaVulkanFunctions::vkGetImageMemoryRequirements2KHR, because they are never imported statically.

-

When this flag is set, you can experience following warnings reported by Vulkan validation layer. You can ignore them.

-
-

vkBindBufferMemory(): Binding memory to buffer 0x2d but vkGetBufferMemoryRequirements() has not been called on that buffer.

-
-
VMA_ALLOCATOR_CREATE_FLAG_BITS_MAX_ENUM 
- -
-
-

Function Documentation

- -

◆ vmaBuildStatsString()

- -
-
- - - - - - - - - - - - - - - - - - - - - - - - -
void vmaBuildStatsString (VmaAllocator allocator,
char ** ppStatsString,
VkBool32 detailedMap 
)
-
- -

Builds and returns statistics as string in JSON format.

-
Parameters
- - -
[out]ppStatsStringMust be freed using vmaFreeStatsString() function.
-
-
- -
-
- -

◆ vmaCalculateStats()

- -
-
- - - - - - - - - - - - - - - - - - -
void vmaCalculateStats (VmaAllocator allocator,
VmaStatspStats 
)
-
- -

Retrieves statistics from current state of the Allocator.

- -
-
- -

◆ vmaCreateAllocator()

- -
-
- - - - - - - - - - - - - - - - - - -
VkResult vmaCreateAllocator (const VmaAllocatorCreateInfopCreateInfo,
VmaAllocator * pAllocator 
)
-
- -

Creates Allocator object.

- -
-
- -

◆ vmaDestroyAllocator()

- -
-
- - - - - - - - -
void vmaDestroyAllocator (VmaAllocator allocator)
-
- -

Destroys allocator object.

- -
-
- -

◆ vmaFreeStatsString()

- -
-
- - - - - - - - - - - - - - - - - - -
void vmaFreeStatsString (VmaAllocator allocator,
char * pStatsString 
)
-
- -
-
- -

◆ vmaGetMemoryProperties()

- -
-
- - - - - - - - - - - - - - - - - - -
void vmaGetMemoryProperties (VmaAllocator allocator,
const VkPhysicalDeviceMemoryProperties ** ppPhysicalDeviceMemoryProperties 
)
-
-

PhysicalDeviceMemoryProperties are fetched from physicalDevice by the allocator. You can access it here, without fetching it again on your own.

- -
-
- -

◆ vmaGetMemoryTypeProperties()

- -
-
- - - - - - - - - - - - - - - - - - - - - - - - -
void vmaGetMemoryTypeProperties (VmaAllocator allocator,
uint32_t memoryTypeIndex,
VkMemoryPropertyFlags * pFlags 
)
-
- -

Given Memory Type Index, returns Property Flags of this memory type.

-

This is just a convenience function. Same information can be obtained using vmaGetMemoryProperties().

- -
-
- -

◆ vmaGetPhysicalDeviceProperties()

- -
-
- - - - - - - - - - - - - - - - - - -
void vmaGetPhysicalDeviceProperties (VmaAllocator allocator,
const VkPhysicalDeviceProperties ** ppPhysicalDeviceProperties 
)
-
-

PhysicalDeviceProperties are fetched from physicalDevice by the allocator. You can access it here, without fetching it again on your own.

- -
-
- -

◆ vmaSetCurrentFrameIndex()

- -
-
- - - - - - - - - - - - - - - - - - -
void vmaSetCurrentFrameIndex (VmaAllocator allocator,
uint32_t frameIndex 
)
-
- -

Sets index of the current frame.

-

This function must be used if you make allocations with VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT and VMA_ALLOCATION_CREATE_CAN_MAKE_OTHER_LOST_BIT flags to inform the allocator when a new frame begins. Allocations queried using vmaGetAllocationInfo() cannot become lost in the current frame.

- -
-
-
- - - - diff --git a/docs/html/group__layer1.html b/docs/html/group__layer1.html deleted file mode 100644 index a77dca8..0000000 --- a/docs/html/group__layer1.html +++ /dev/null @@ -1,308 +0,0 @@ - - - - - - - -Vulkan Memory Allocator: Layer 1 Choosing Memory Type - - - - - - - - - -
-
- - - - - - -
-
Vulkan Memory Allocator -
-
-
- - - - - - - -
- -
-
- - -
- -
- -
- -
-
Layer 1 Choosing Memory Type
-
-
- - - - -

-Classes

struct  VmaAllocationCreateInfo
 
- - - - - - - - - - -

-Typedefs

typedef enum VmaMemoryUsage VmaMemoryUsage
 
typedef enum VmaAllocationCreateFlagBits VmaAllocationCreateFlagBits
 Flags to be passed as VmaAllocationCreateInfo::flags. More...
 
typedef VkFlags VmaAllocationCreateFlags
 
typedef struct VmaAllocationCreateInfo VmaAllocationCreateInfo
 
- - - - - - -

-Enumerations

enum  VmaMemoryUsage {
-  VMA_MEMORY_USAGE_UNKNOWN = 0, -VMA_MEMORY_USAGE_GPU_ONLY = 1, -VMA_MEMORY_USAGE_CPU_ONLY = 2, -VMA_MEMORY_USAGE_CPU_TO_GPU = 3, -
-  VMA_MEMORY_USAGE_GPU_TO_CPU = 4, -VMA_MEMORY_USAGE_MAX_ENUM = 0x7FFFFFFF -
- }
 
enum  VmaAllocationCreateFlagBits {
-  VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT = 0x00000001, -VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT = 0x00000002, -VMA_ALLOCATION_CREATE_PERSISTENT_MAP_BIT = 0x00000004, -VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT = 0x00000008, -
-  VMA_ALLOCATION_CREATE_CAN_MAKE_OTHER_LOST_BIT = 0x00000010, -VMA_ALLOCATION_CREATE_FLAG_BITS_MAX_ENUM = 0x7FFFFFFF -
- }
 Flags to be passed as VmaAllocationCreateInfo::flags. More...
 
- - - -

-Functions

VkResult vmaFindMemoryTypeIndex (VmaAllocator allocator, uint32_t memoryTypeBits, const VmaAllocationCreateInfo *pAllocationCreateInfo, uint32_t *pMemoryTypeIndex)
 
-

Detailed Description

-

Typedef Documentation

- -

◆ VmaAllocationCreateFlagBits

- -
-
- -

Flags to be passed as VmaAllocationCreateInfo::flags.

- -
-
- -

◆ VmaAllocationCreateFlags

- -
-
- - - - -
typedef VkFlags VmaAllocationCreateFlags
-
- -
-
- -

◆ VmaAllocationCreateInfo

- -
-
- -
-
- -

◆ VmaMemoryUsage

- -
-
- - - - -
typedef enum VmaMemoryUsage VmaMemoryUsage
-
- -
-
-

Enumeration Type Documentation

- -

◆ VmaAllocationCreateFlagBits

- -
-
- - - - -
enum VmaAllocationCreateFlagBits
-
- -

Flags to be passed as VmaAllocationCreateInfo::flags.

- - - - - - - -
Enumerator
VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT 

Set this flag if the allocation should have its own memory block.

-

Use it for special, big resources, like fullscreen images used as attachments.

-

This flag must also be used for host visible resources that you want to map simultaneously because otherwise they might end up as regions of the same VkDeviceMemory, while mapping same VkDeviceMemory multiple times simultaneously is illegal.

-

You should not use this flag if VmaAllocationCreateInfo::pool is not null.

-
VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT 

Set this flag to only try to allocate from existing VkDeviceMemory blocks and never create new such block.

-

If new allocation cannot be placed in any of the existing blocks, allocation fails with VK_ERROR_OUT_OF_DEVICE_MEMORY error.

-

You should not use VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT and VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT at the same time. It makes no sense.

-

If VmaAllocationCreateInfo::pool is not null, this flag is implied and ignored.

-
VMA_ALLOCATION_CREATE_PERSISTENT_MAP_BIT 

Set this flag to use a memory that will be persistently mapped and retrieve pointer to it.

-

Pointer to mapped memory will be returned through VmaAllocationInfo::pMappedData. You cannot map the memory on your own as multiple mappings of a single VkDeviceMemory are illegal.

-

If VmaAllocationCreateInfo::pool is not null, usage of this flag must match usage of flag VMA_POOL_CREATE_PERSISTENT_MAP_BIT used during pool creation.

-

It is valid to use this flag for an allocation made from a memory type that is not HOST_VISIBLE. This flag is then ignored and the memory is not mapped. This is useful if you need an allocation that is efficient to use on GPU (DEVICE_LOCAL) and still want to map it directly if possible on platforms that support it (e.g. Intel GPU).

-
VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT 

Allocation created with this flag can become lost as a result of another allocation with VMA_ALLOCATION_CREATE_CAN_MAKE_OTHER_LOST_BIT flag, so you must check it before use.

-

To check if allocation is not lost, call vmaGetAllocationInfo() and check if VmaAllocationInfo::deviceMemory is not VK_NULL_HANDLE.

-

For details about supporting lost allocations, see Lost Allocations chapter of User Guide on Main Page.

-
VMA_ALLOCATION_CREATE_CAN_MAKE_OTHER_LOST_BIT 

While creating allocation using this flag, other allocations that were created with flag VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT can become lost.

-

For details about supporting lost allocations, see Lost Allocations chapter of User Guide on Main Page.

-
VMA_ALLOCATION_CREATE_FLAG_BITS_MAX_ENUM 
- -
-
- -

◆ VmaMemoryUsage

- -
-
- - - - -
enum VmaMemoryUsage
-
- - - - - - - -
Enumerator
VMA_MEMORY_USAGE_UNKNOWN 

No intended memory usage specified.

-
VMA_MEMORY_USAGE_GPU_ONLY 

Memory will be used on device only, so faster access from the device is preferred. No need to be mappable on host.

-
VMA_MEMORY_USAGE_CPU_ONLY 

Memory will be mapped on host. Could be used for transfer to/from device.

-

Guarantees to be HOST_VISIBLE and HOST_COHERENT.

-
VMA_MEMORY_USAGE_CPU_TO_GPU 

Memory will be used for frequent (dynamic) updates from host and reads on device (upload).

-

Guarantees to be HOST_VISIBLE.

-
VMA_MEMORY_USAGE_GPU_TO_CPU 

Memory will be used for frequent writing on device and readback on host (download).

-

Guarantees to be HOST_VISIBLE.

-
VMA_MEMORY_USAGE_MAX_ENUM 
- -
-
-

Function Documentation

- -

◆ vmaFindMemoryTypeIndex()

- -
-
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
VkResult vmaFindMemoryTypeIndex (VmaAllocator allocator,
uint32_t memoryTypeBits,
const VmaAllocationCreateInfopAllocationCreateInfo,
uint32_t * pMemoryTypeIndex 
)
-
-

This algorithm tries to find a memory type that:

-
    -
  • Is allowed by memoryTypeBits.
  • -
  • Contains all the flags from pAllocationCreateInfo->requiredFlags.
  • -
  • Matches intended usage.
  • -
  • Has as many flags from pAllocationCreateInfo->preferredFlags as possible.
  • -
-
Returns
Returns VK_ERROR_FEATURE_NOT_PRESENT if not found. Receiving such a result from this function or any other allocating function probably means that your device doesn't support any memory type with the requested features for the specific type of resource you want to use it for. Please check the parameters of your resource, like image layout (OPTIMAL versus LINEAR) or mip level count.
- -
-
-
- - - - diff --git a/docs/html/group__layer2.html b/docs/html/group__layer2.html deleted file mode 100644 index 1024a4f..0000000 --- a/docs/html/group__layer2.html +++ /dev/null @@ -1,988 +0,0 @@ - - - - - - - -Vulkan Memory Allocator: Layer 2 Allocating Memory - - - - - - - - - -
-
- - - - - - -
-
Vulkan Memory Allocator -
-
-
- - - - - - - -
- -
-
- - -
- -
- -
- -
-
Layer 2 Allocating Memory
-
-
- - - - - - - - - - - - - - - - - -

-Classes

struct  VmaPoolCreateInfo
 Describes parameter of created VmaPool. More...
 
struct  VmaPoolStats
 Describes parameter of existing VmaPool. More...
 
struct  VmaAllocationInfo
 Parameters of VmaAllocation objects, that can be retrieved using function vmaGetAllocationInfo(). More...
 
struct  VmaDefragmentationInfo
 Optional configuration parameters to be passed to function vmaDefragment(). More...
 
struct  VmaDefragmentationStats
 Statistics returned by function vmaDefragment(). More...
 
- - - - - - - - - - - - - - - - - - - - - -

-Typedefs

typedef enum VmaPoolCreateFlagBits VmaPoolCreateFlagBits
 Flags to be passed as VmaPoolCreateInfo::flags. More...
 
typedef VkFlags VmaPoolCreateFlags
 
typedef struct VmaPoolCreateInfo VmaPoolCreateInfo
 Describes parameter of created VmaPool. More...
 
typedef struct VmaPoolStats VmaPoolStats
 Describes parameter of existing VmaPool. More...
 
typedef struct VmaAllocationInfo VmaAllocationInfo
 Parameters of VmaAllocation objects, that can be retrieved using function vmaGetAllocationInfo(). More...
 
typedef struct VmaDefragmentationInfo VmaDefragmentationInfo
 Optional configuration parameters to be passed to function vmaDefragment(). More...
 
typedef struct VmaDefragmentationStats VmaDefragmentationStats
 Statistics returned by function vmaDefragment(). More...
 
- - - - -

-Enumerations

enum  VmaPoolCreateFlagBits { VMA_POOL_CREATE_PERSISTENT_MAP_BIT = 0x00000001, -VMA_POOL_CREATE_IGNORE_BUFFER_IMAGE_GRANULARITY_BIT = 0x00000002, -VMA_POOL_CREATE_FLAG_BITS_MAX_ENUM = 0x7FFFFFFF - }
 Flags to be passed as VmaPoolCreateInfo::flags. More...
 
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

-Functions

VkResult vmaCreatePool (VmaAllocator allocator, const VmaPoolCreateInfo *pCreateInfo, VmaPool *pPool)
 Allocates Vulkan device memory and creates VmaPool object. More...
 
void vmaDestroyPool (VmaAllocator allocator, VmaPool pool)
 Destroys VmaPool object and frees Vulkan device memory. More...
 
void vmaGetPoolStats (VmaAllocator allocator, VmaPool pool, VmaPoolStats *pPoolStats)
 Retrieves statistics of existing VmaPool object. More...
 
void vmaMakePoolAllocationsLost (VmaAllocator allocator, VmaPool pool, size_t *pLostAllocationCount)
 Marks all allocations in given pool as lost if they are not used in current frame or VmaPoolCreateInfo::frameInUseCount back from now. More...
 
VkResult vmaAllocateMemory (VmaAllocator allocator, const VkMemoryRequirements *pVkMemoryRequirements, const VmaAllocationCreateInfo *pCreateInfo, VmaAllocation *pAllocation, VmaAllocationInfo *pAllocationInfo)
 General purpose memory allocation. More...
 
VkResult vmaAllocateMemoryForBuffer (VmaAllocator allocator, VkBuffer buffer, const VmaAllocationCreateInfo *pCreateInfo, VmaAllocation *pAllocation, VmaAllocationInfo *pAllocationInfo)
 
VkResult vmaAllocateMemoryForImage (VmaAllocator allocator, VkImage image, const VmaAllocationCreateInfo *pCreateInfo, VmaAllocation *pAllocation, VmaAllocationInfo *pAllocationInfo)
 Function similar to vmaAllocateMemoryForBuffer(). More...
 
void vmaFreeMemory (VmaAllocator allocator, VmaAllocation allocation)
 Frees memory previously allocated using vmaAllocateMemory(), vmaAllocateMemoryForBuffer(), or vmaAllocateMemoryForImage(). More...
 
void vmaGetAllocationInfo (VmaAllocator allocator, VmaAllocation allocation, VmaAllocationInfo *pAllocationInfo)
 Returns current information about specified allocation. More...
 
void vmaSetAllocationUserData (VmaAllocator allocator, VmaAllocation allocation, void *pUserData)
 Sets pUserData in given allocation to new value. More...
 
void vmaCreateLostAllocation (VmaAllocator allocator, VmaAllocation *pAllocation)
 Creates new allocation that is in lost state from the beginning. More...
 
VkResult vmaMapMemory (VmaAllocator allocator, VmaAllocation allocation, void **ppData)
 
void vmaUnmapMemory (VmaAllocator allocator, VmaAllocation allocation)
 
void vmaUnmapPersistentlyMappedMemory (VmaAllocator allocator)
 Unmaps persistently mapped memory of types that are HOST_COHERENT and DEVICE_LOCAL. More...
 
VkResult vmaMapPersistentlyMappedMemory (VmaAllocator allocator)
 Maps back persistently mapped memory of types that are HOST_COHERENT and DEVICE_LOCAL. More...
 
VkResult vmaDefragment (VmaAllocator allocator, VmaAllocation *pAllocations, size_t allocationCount, VkBool32 *pAllocationsChanged, const VmaDefragmentationInfo *pDefragmentationInfo, VmaDefragmentationStats *pDefragmentationStats)
 Compacts memory by moving allocations. More...
 
-

Detailed Description

-

Typedef Documentation

- -

◆ VmaAllocationInfo

- -
-
- - - - -
typedef struct VmaAllocationInfo VmaAllocationInfo
-
- -

Parameters of VmaAllocation objects that can be retrieved using function vmaGetAllocationInfo().

- -
-
- -

◆ VmaDefragmentationInfo

- -
-
- -

Optional configuration parameters to be passed to function vmaDefragment().

- -
-
- -

◆ VmaDefragmentationStats

- -
-
- -

Statistics returned by function vmaDefragment().

- -
-
- -

◆ VmaPoolCreateFlagBits

- -
-
- -

Flags to be passed as VmaPoolCreateInfo::flags.

- -
-
- -

◆ VmaPoolCreateFlags

- -
-
- - - - -
typedef VkFlags VmaPoolCreateFlags
-
- -
-
- -

◆ VmaPoolCreateInfo

- -
-
- - - - -
typedef struct VmaPoolCreateInfo VmaPoolCreateInfo
-
- -

Describes parameters of a created VmaPool.

- -
-
- -

◆ VmaPoolStats

- -
-
- - - - -
typedef struct VmaPoolStats VmaPoolStats
-
- -

Describes parameters of an existing VmaPool.

- -
-
-

Enumeration Type Documentation

- -

◆ VmaPoolCreateFlagBits

- -
-
- - - - -
enum VmaPoolCreateFlagBits
-
- -

Flags to be passed as VmaPoolCreateInfo::flags.

- - - - -
Enumerator
VMA_POOL_CREATE_PERSISTENT_MAP_BIT 

Set this flag to use a memory that will be persistently mapped.

-

Each allocation made from this pool will have VmaAllocationInfo::pMappedData available.

-

Usage of this flag must match usage of VMA_ALLOCATION_CREATE_PERSISTENT_MAP_BIT flag for every allocation made from this pool.

-
VMA_POOL_CREATE_IGNORE_BUFFER_IMAGE_GRANULARITY_BIT 

Use this flag if you always allocate only buffers and linear images or only optimal images out of this pool and so Buffer-Image Granularity can be ignored.

-

This is an optional optimization flag.

-

If you always allocate using vmaCreateBuffer(), vmaCreateImage(), vmaAllocateMemoryForBuffer(), then you don't need to use this flag, because the allocator knows the exact type of your allocations, so it can handle Buffer-Image Granularity in the optimal way.

-

If you also allocate using vmaAllocateMemoryForImage() or vmaAllocateMemory(), the exact type of such allocations is not known, so the allocator must be conservative in handling Buffer-Image Granularity, which can lead to suboptimal allocation (wasted memory). In that case, if you can make sure you always allocate only buffers and linear images or only optimal images out of this pool, use this flag to make the allocator disregard Buffer-Image Granularity and so make allocations more optimal.
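The conservative handling described above boils down to page alignment. The following self-contained sketch (a hypothetical helper, not VMA's internal code) shows how an offset is rounded up to bufferImageGranularity so that a linear resource placed after an optimal-tiling image never shares a granularity "page" with it:

```cpp
#include <cstdint>

// Hypothetical helper, not VMA's internal code: round an allocation offset up
// to the device's bufferImageGranularity so that two resources of conflicting
// tiling never share a granularity "page".
uint64_t AlignUpToGranularity(uint64_t offset, uint64_t bufferImageGranularity)
{
    return (offset + bufferImageGranularity - 1)
        / bufferImageGranularity * bufferImageGranularity;
}
```

For example, with a hypothetical granularity of 1024, an optimal image ending at offset 4396 forces the next conflicting resource to start at offset 5120 - the gap in between is the wasted memory this flag lets you avoid.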

-
VMA_POOL_CREATE_FLAG_BITS_MAX_ENUM 
- -
-
-

Function Documentation

- -

◆ vmaAllocateMemory()

- -
-
VkResult vmaAllocateMemory (VmaAllocator allocator,
const VkMemoryRequirements * pVkMemoryRequirements,
const VmaAllocationCreateInfopCreateInfo,
VmaAllocation * pAllocation,
VmaAllocationInfopAllocationInfo 
)
-
- -

General purpose memory allocation.

-
Parameters
- - - -
[out]pAllocationHandle to allocated memory.
[out]pAllocationInfoOptional. Information about allocated memory. It can be later fetched using function vmaGetAllocationInfo().
-
-
-

You should free the memory using vmaFreeMemory().

-

It is recommended to use vmaAllocateMemoryForBuffer(), vmaAllocateMemoryForImage(), vmaCreateBuffer(), vmaCreateImage() instead whenever possible.
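A minimal sketch of the general-purpose path, assuming allocator, device, and buffer were created elsewhere:

```cpp
VkMemoryRequirements memReq;
vkGetBufferMemoryRequirements(device, buffer, &memReq);

VmaAllocationCreateInfo createInfo = {};
createInfo.usage = VMA_MEMORY_USAGE_GPU_ONLY;

VmaAllocation allocation;
VmaAllocationInfo allocInfo;
if(vmaAllocateMemory(allocator, &memReq, &createInfo, &allocation, &allocInfo) == VK_SUCCESS)
{
    // Binding is your responsibility on this general-purpose path.
    vkBindBufferMemory(device, buffer, allocInfo.deviceMemory, allocInfo.offset);
    // ... use the buffer ...
    vmaFreeMemory(allocator, allocation);
}
```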

- -
-
- -

◆ vmaAllocateMemoryForBuffer()

- -
-
VkResult vmaAllocateMemoryForBuffer (VmaAllocator allocator,
VkBuffer buffer,
const VmaAllocationCreateInfopCreateInfo,
VmaAllocation * pAllocation,
VmaAllocationInfopAllocationInfo 
)
-
-
Parameters
- - - -
[out]pAllocationHandle to allocated memory.
[out]pAllocationInfoOptional. Information about allocated memory. It can be later fetched using function vmaGetAllocationInfo().
-
-
-

You should free the memory using vmaFreeMemory().

- -
-
- -

◆ vmaAllocateMemoryForImage()

- -
-
VkResult vmaAllocateMemoryForImage (VmaAllocator allocator,
VkImage image,
const VmaAllocationCreateInfopCreateInfo,
VmaAllocation * pAllocation,
VmaAllocationInfopAllocationInfo 
)
-
- -

Function similar to vmaAllocateMemoryForBuffer().

- -
-
- -

◆ vmaCreateLostAllocation()

- -
-
- - - - - - - - - - - - - - - - - - -
void vmaCreateLostAllocation (VmaAllocator allocator,
VmaAllocation * pAllocation 
)
-
- -

Creates new allocation that is in lost state from the beginning.

-

It can be useful if you need a dummy, non-null allocation.

-

You still need to destroy created object using vmaFreeMemory().

-

Returned allocation is not tied to any specific memory pool or memory type and not bound to any image or buffer. It has size = 0. It cannot be turned into a real, non-empty allocation.
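A minimal sketch (assuming an initialized allocator) of using a lost allocation as a dummy handle:

```cpp
VmaAllocation dummyAlloc;
vmaCreateLostAllocation(allocator, &dummyAlloc);

VmaAllocationInfo info;
vmaGetAllocationInfo(allocator, dummyAlloc, &info);
// The allocation is lost from the beginning: not bound to any memory, size == 0.

vmaFreeMemory(allocator, dummyAlloc); // must still be destroyed
```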

- -
-
- -

◆ vmaCreatePool()

- -
-
- - - - - - - - - - - - - - - - - - - - - - - - -
VkResult vmaCreatePool (VmaAllocator allocator,
const VmaPoolCreateInfopCreateInfo,
VmaPool * pPool 
)
-
- -

Allocates Vulkan device memory and creates VmaPool object.

-
Parameters
- - - - -
allocatorAllocator object.
pCreateInfoParameters of pool to create.
[out]pPoolHandle to created pool.
-
-
- -
-
- -

◆ vmaDefragment()

- -
-
VkResult vmaDefragment (VmaAllocator allocator,
VmaAllocation * pAllocations,
size_t allocationCount,
VkBool32 * pAllocationsChanged,
const VmaDefragmentationInfopDefragmentationInfo,
VmaDefragmentationStatspDefragmentationStats 
)
-
- -

Compacts memory by moving allocations.

-
Parameters
- - - - - - -
pAllocationsArray of allocations that can be moved during this compaction.
allocationCountNumber of elements in pAllocations and pAllocationsChanged arrays.
[out]pAllocationsChangedArray of boolean values that will indicate whether matching allocation in pAllocations array has been moved. This parameter is optional. Pass null if you don't need this information.
pDefragmentationInfoConfiguration parameters. Optional - pass null to use default values.
[out]pDefragmentationStatsStatistics returned by the function. Optional - pass null if you don't need this information.
-
-
-
Returns
VK_SUCCESS if completed, VK_INCOMPLETE if succeeded but didn't make all possible optimizations because limits specified in pDefragmentationInfo have been reached, negative error code in case of error.
-

This function works by moving allocations to different places (different VkDeviceMemory objects and/or different offsets) in order to optimize memory usage. Only allocations that are in pAllocations array can be moved. All other allocations are considered nonmovable in this call. Basic rules:

-
    -
  • Only allocations made in memory types that have VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT flag can be compacted. You may pass other allocations but it makes no sense - these will never be moved.
  • -
  • You may pass allocations made with VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT but it makes no sense - they will never be moved.
  • -
  • Both allocations made with or without VMA_ALLOCATION_CREATE_PERSISTENT_MAP_BIT flag can be compacted. If not persistently mapped, memory will be mapped temporarily inside this function if needed, so it shouldn't be mapped by you for the time of this call.
  • -
  • You must not pass same VmaAllocation object multiple times in pAllocations array.
  • -
-

The function also frees empty VkDeviceMemory blocks.

-

After allocation has been moved, its VmaAllocationInfo::deviceMemory and/or VmaAllocationInfo::offset changes. You must query them again using vmaGetAllocationInfo() if you need them.

-

If an allocation has been moved, data in memory is copied to new place automatically, but if it was bound to a buffer or an image, you must destroy that object yourself, create new one and bind it to the new memory pointed by the allocation. You must use vkDestroyBuffer(), vkDestroyImage(), vkCreateBuffer(), vkCreateImage() for that purpose and NOT vmaDestroyBuffer(), vmaDestroyImage(), vmaCreateBuffer(), vmaCreateImage()! Example:

-
VkDevice device = ...;
-VmaAllocator allocator = ...;
-std::vector<VkBuffer> buffers = ...;
-std::vector<VmaAllocation> allocations = ...;
-
-std::vector<VkBool32> allocationsChanged(allocations.size());
-vmaDefragment(allocator, allocations.data(), allocations.size(), allocationsChanged.data(), nullptr, nullptr);
-
-for(size_t i = 0; i < allocations.size(); ++i)
-{
-    if(allocationsChanged[i])
-    {
-        VmaAllocationInfo allocInfo;
-        vmaGetAllocationInfo(allocator, allocations[i], &allocInfo);
-
-        vkDestroyBuffer(device, buffers[i], nullptr);
-
-        VkBufferCreateInfo bufferInfo = ...;
-        vkCreateBuffer(device, &bufferInfo, nullptr, &buffers[i]);
-
-        // You can make a dummy call to vkGetBufferMemoryRequirements here to silence validation layer warning.
-
-        vkBindBufferMemory(device, buffers[i], allocInfo.deviceMemory, allocInfo.offset);
-    }
-}
-

This function may be time-consuming, so you shouldn't call it too often (like every frame or after every resource creation/destruction), but rather on special occasions (like when reloading a game level or when you have just destroyed a lot of objects).

- -
-
- -

◆ vmaDestroyPool()

- -
-
- - - - - - - - - - - - - - - - - - -
void vmaDestroyPool (VmaAllocator allocator,
VmaPool pool 
)
-
- -

Destroys VmaPool object and frees Vulkan device memory.

- -
-
- -

◆ vmaFreeMemory()

- -
-
- - - - - - - - - - - - - - - - - - -
void vmaFreeMemory (VmaAllocator allocator,
VmaAllocation allocation 
)
-
- -

Frees memory previously allocated using vmaAllocateMemory(), vmaAllocateMemoryForBuffer(), or vmaAllocateMemoryForImage().

- -
-
- -

◆ vmaGetAllocationInfo()

- -
-
- - - - - - - - - - - - - - - - - - - - - - - - -
void vmaGetAllocationInfo (VmaAllocator allocator,
VmaAllocation allocation,
VmaAllocationInfopAllocationInfo 
)
-
- -

Returns current information about specified allocation.

- -
-
- -

◆ vmaGetPoolStats()

- -
-
- - - - - - - - - - - - - - - - - - - - - - - - -
void vmaGetPoolStats (VmaAllocator allocator,
VmaPool pool,
VmaPoolStatspPoolStats 
)
-
- -

Retrieves statistics of existing VmaPool object.

-
Parameters
- - - - -
allocatorAllocator object.
poolPool object.
[out]pPoolStatsStatistics of specified pool.
-
-
- -
-
- -

◆ vmaMakePoolAllocationsLost()

- -
-
- - - - - - - - - - - - - - - - - - - - - - - - -
void vmaMakePoolAllocationsLost (VmaAllocator allocator,
VmaPool pool,
size_t * pLostAllocationCount 
)
-
- -

Marks all allocations in given pool as lost if they are not used in current frame or VmaPoolCreateInfo::frameInUseCount back from now.

-
Parameters
- - - - -
allocatorAllocator object.
poolPool.
[out]pLostAllocationCountNumber of allocations marked as lost. Optional - pass null if you don't need this information.
-
-
- -
-
- -

◆ vmaMapMemory()

- -
-
- - - - - - - - - - - - - - - - - - - - - - - - -
VkResult vmaMapMemory (VmaAllocator allocator,
VmaAllocation allocation,
void ** ppData 
)
-
-

Feel free to use vkMapMemory on these memory blocks on your own if you want, but just for convenience and to make sure the correct offset and size are always specified, usage of vmaMapMemory() / vmaUnmapMemory() is recommended.

-

Do not use it on memory allocated with VMA_ALLOCATION_CREATE_PERSISTENT_MAP_BIT, as multiple mappings of the same VkDeviceMemory are illegal.
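A minimal usage sketch, assuming allocator and a host-visible, non-persistently-mapped allocation created elsewhere:

```cpp
void* pData;
if(vmaMapMemory(allocator, allocation, &pData) == VK_SUCCESS)
{
    // pData already points at the beginning of this particular allocation.
    memcpy(pData, srcData, srcDataSize);
    vmaUnmapMemory(allocator, allocation);
}
```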

- -
-
- -

◆ vmaMapPersistentlyMappedMemory()

- -
-
- - - - - - - - -
VkResult vmaMapPersistentlyMappedMemory (VmaAllocator allocator)
-
- -

Maps back persistently mapped memory of types that are HOST_COHERENT and DEVICE_LOCAL.

-

See vmaUnmapPersistentlyMappedMemory().

-

After this call, VmaAllocationInfo::pMappedData of some allocations may have a different value than before calling vmaUnmapPersistentlyMappedMemory().

- -
-
- -

◆ vmaSetAllocationUserData()

- -
-
- - - - - - - - - - - - - - - - - - - - - - - - -
void vmaSetAllocationUserData (VmaAllocator allocator,
VmaAllocation allocation,
void * pUserData 
)
-
- -

Sets pUserData in given allocation to new value.

- -
-
- -

◆ vmaUnmapMemory()

- -
-
- - - - - - - - - - - - - - - - - - -
void vmaUnmapMemory (VmaAllocator allocator,
VmaAllocation allocation 
)
-
- -
-
- -

◆ vmaUnmapPersistentlyMappedMemory()

- -
-
- - - - - - - - -
void vmaUnmapPersistentlyMappedMemory (VmaAllocator allocator)
-
- -

Unmaps persistently mapped memory of types that are HOST_COHERENT and DEVICE_LOCAL.

-

This is an optional performance optimization. On AMD GPUs on Windows, Vulkan memory from the type that has both DEVICE_LOCAL and HOST_VISIBLE flags should not be mapped for the time of any call to vkQueueSubmit() or vkQueuePresent(). Although legal, that would cause performance degradation because WDDM migrates such memory to system RAM. To ensure this, you can unmap all persistently mapped memory using this function. Example:

-
vmaUnmapPersistentlyMappedMemory(allocator);
-vkQueueSubmit(...)
-vmaMapPersistentlyMappedMemory(allocator);
-

After this call VmaAllocationInfo::pMappedData of some allocations may become null.

-

This call is reference-counted. Memory is mapped again after you call vmaMapPersistentlyMappedMemory() same number of times that you called vmaUnmapPersistentlyMappedMemory().
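The reference-counting rule above can be modeled with a small, self-contained sketch (a hypothetical type, not VMA's internal implementation): memory becomes mapped again only when Map() has been called as many times as Unmap().

```cpp
#include <cstdint>

// Hypothetical model of the reference-counted unmap/map semantics described
// above; not VMA's internal code.
struct PersistentMapRefCount
{
    uint32_t unmapCount = 0; // outstanding vmaUnmapPersistentlyMappedMemory() calls

    void Unmap() { ++unmapCount; }

    void Map()
    {
        if(unmapCount > 0)
            --unmapCount;
    }

    // Memory is mapped only when every Unmap has been balanced by a Map.
    bool IsMapped() const { return unmapCount == 0; }
};
```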

- -
-
-
- - - - diff --git a/docs/html/group__layer3.html b/docs/html/group__layer3.html deleted file mode 100644 index 0c38ea5..0000000 --- a/docs/html/group__layer3.html +++ /dev/null @@ -1,290 +0,0 @@ - - - - - - - -Vulkan Memory Allocator: Layer 3 Creating Buffers and Images - - - - - - - - - -
Vulkan Memory Allocator
Layer 3 Creating Buffers and Images
-
-
- - - - - - - - - - - - - -

-Functions

VkResult vmaCreateBuffer (VmaAllocator allocator, const VkBufferCreateInfo *pBufferCreateInfo, const VmaAllocationCreateInfo *pAllocationCreateInfo, VkBuffer *pBuffer, VmaAllocation *pAllocation, VmaAllocationInfo *pAllocationInfo)
 
void vmaDestroyBuffer (VmaAllocator allocator, VkBuffer buffer, VmaAllocation allocation)
 Destroys Vulkan buffer and frees allocated memory. More...
 
VkResult vmaCreateImage (VmaAllocator allocator, const VkImageCreateInfo *pImageCreateInfo, const VmaAllocationCreateInfo *pAllocationCreateInfo, VkImage *pImage, VmaAllocation *pAllocation, VmaAllocationInfo *pAllocationInfo)
 Function similar to vmaCreateBuffer(). More...
 
void vmaDestroyImage (VmaAllocator allocator, VkImage image, VmaAllocation allocation)
 Destroys Vulkan image and frees allocated memory. More...
 
-

Detailed Description

-

Function Documentation

- -

◆ vmaCreateBuffer()

- -
-
VkResult vmaCreateBuffer (VmaAllocator allocator,
const VkBufferCreateInfo * pBufferCreateInfo,
const VmaAllocationCreateInfopAllocationCreateInfo,
VkBuffer * pBuffer,
VmaAllocation * pAllocation,
VmaAllocationInfopAllocationInfo 
)
-
-
Parameters
- - - - -
[out]pBufferBuffer that was created.
[out]pAllocationAllocation that was created.
[out]pAllocationInfoOptional. Information about allocated memory. It can be later fetched using function vmaGetAllocationInfo().
-
-
-

This function automatically:

-
    -
  1. Creates buffer.
  2. -
  3. Allocates appropriate memory for it.
  4. -
  5. Binds the buffer with the memory.
  6. -
-

If any of these operations fail, buffer and allocation are not created, the returned value is a negative error code, and *pBuffer and *pAllocation are null.

-

If the function succeeded, you must destroy both buffer and allocation when you no longer need them using either convenience function vmaDestroyBuffer() or separately, using vkDestroyBuffer() and vmaFreeMemory().

- -
-
- -

◆ vmaCreateImage()

- -
-
VkResult vmaCreateImage (VmaAllocator allocator,
const VkImageCreateInfo * pImageCreateInfo,
const VmaAllocationCreateInfopAllocationCreateInfo,
VkImage * pImage,
VmaAllocation * pAllocation,
VmaAllocationInfopAllocationInfo 
)
-
- -

Function similar to vmaCreateBuffer().

- -
-
- -

◆ vmaDestroyBuffer()

- -
-
- - - - - - - - - - - - - - - - - - - - - - - - -
void vmaDestroyBuffer (VmaAllocator allocator,
VkBuffer buffer,
VmaAllocation allocation 
)
-
- -

Destroys Vulkan buffer and frees allocated memory.

-

This is just a convenience function equivalent to:

-
vkDestroyBuffer(device, buffer, allocationCallbacks);
-vmaFreeMemory(allocator, allocation);
-
-
- -

◆ vmaDestroyImage()

- -
-
- - - - - - - - - - - - - - - - - - - - - - - - -
void vmaDestroyImage (VmaAllocator allocator,
VkImage image,
VmaAllocation allocation 
)
-
- -

Destroys Vulkan image and frees allocated memory.

-

This is just a convenience function equivalent to:

-
vkDestroyImage(device, image, allocationCallbacks);
-vmaFreeMemory(allocator, allocation);
-
-
-
- - - - diff --git a/docs/html/index.html b/docs/html/index.html index 773bd84..e4dc71d 100644 --- a/docs/html/index.html +++ b/docs/html/index.html @@ -62,7 +62,7 @@ $(function() {
Vulkan Memory Allocator
-

Version 2.1.0-alpha.3 (2018-06-11)

+

Version 2.1.0-alpha.3 (2018-06-14)

Copyright (c) 2017-2018 Advanced Micro Devices, Inc. All rights reserved.
License: MIT

Documentation of all members: vk_mem_alloc.h

@@ -106,9 +106,10 @@ Table of contents
  • Allocation names
  • -
  • Corruption detection diff --git a/docs/html/modules.html b/docs/html/modules.html deleted file mode 100644 index 94017fd..0000000 --- a/docs/html/modules.html +++ /dev/null @@ -1,81 +0,0 @@ - - - - - - - -Vulkan Memory Allocator: Modules - - - - - - - - - -
    Modules
    -
    - - - - - diff --git a/docs/html/persistently_mapped_memory.html b/docs/html/persistently_mapped_memory.html deleted file mode 100644 index dc98cc8..0000000 --- a/docs/html/persistently_mapped_memory.html +++ /dev/null @@ -1,81 +0,0 @@ - - - - - - - -Vulkan Memory Allocator: Persistently mapped memory - - - - - - - - - -
    Persistently mapped memory
    -
    -
    -

If you need to map memory on the host, it may happen that two allocations are assigned to the same VkDeviceMemory block. If you then map them both at the same time, it will cause an error, because mapping a single memory block multiple times is illegal in Vulkan.

    -

It is safer, more convenient and more efficient to use a special feature designed for that: persistently mapped memory. Allocations made with the VMA_ALLOCATION_CREATE_PERSISTENT_MAP_BIT flag set in VmaAllocationCreateInfo::flags are returned from device memory blocks that stay mapped all the time, so you can just access the CPU pointer to the memory. The VmaAllocationInfo::pMappedData pointer is already offset to the beginning of the particular allocation. Example:

    -
    VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
    bufCreateInfo.size = 1024;
    bufCreateInfo.usage = VK_BUFFER_USAGE_TRANSFER_SRC_BIT;
VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_CPU_ONLY;
allocCreateInfo.flags = VMA_ALLOCATION_CREATE_PERSISTENT_MAP_BIT;
VkBuffer buf;
VmaAllocation alloc;
VmaAllocationInfo allocInfo;
vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, &allocInfo);
    // Buffer is immediately mapped. You can access its memory.
    memcpy(allocInfo.pMappedData, myData, 1024);

Memory in Vulkan doesn't need to be unmapped before using it e.g. for transfers, but if you are not sure whether it's HOST_COHERENT (here it surely is, because it's created with VMA_MEMORY_USAGE_CPU_ONLY), you should check it. If it's not, you should call vkInvalidateMappedMemoryRanges() before reading from and vkFlushMappedMemoryRanges() after writing to mapped memory on the CPU. Example:

    -
    VkMemoryPropertyFlags memFlags;
    vmaGetMemoryTypeProperties(allocator, allocInfo.memoryType, &memFlags);
    if((memFlags & VK_MEMORY_PROPERTY_HOST_COHERENT_BIT) == 0)
    {
    VkMappedMemoryRange memRange = { VK_STRUCTURE_TYPE_MAPPED_MEMORY_RANGE };
    memRange.memory = allocInfo.deviceMemory;
    memRange.offset = allocInfo.offset;
    memRange.size = allocInfo.size;
    vkFlushMappedMemoryRanges(device, 1, &memRange);
    }

On AMD GPUs on Windows, Vulkan memory from the type that has both DEVICE_LOCAL and HOST_VISIBLE flags should not be mapped for the time of any call to vkQueueSubmit() or vkQueuePresent(). Although legal, that would cause performance degradation because WDDM migrates such memory to system RAM. To ensure this, you can unmap all persistently mapped memory using just one function call. For details, see functions vmaUnmapPersistentlyMappedMemory() and vmaMapPersistentlyMappedMemory().

    -
    - - - - diff --git a/docs/html/search/all_2.js b/docs/html/search/all_2.js index 5289cc4..e5f807d 100644 --- a/docs/html/search/all_2.js +++ b/docs/html/search/all_2.js @@ -2,6 +2,5 @@ var searchData= [ ['choosing_20memory_20type',['Choosing memory type',['../choosing_memory_type.html',1,'index']]], ['configuration',['Configuration',['../configuration.html',1,'index']]], - ['corruption_20detection',['Corruption detection',['../corruption_detection.html',1,'index']]], ['custom_20memory_20pools',['Custom memory pools',['../custom_memory_pools.html',1,'index']]] ]; diff --git a/docs/html/search/all_3.js b/docs/html/search/all_3.js index 8789b19..82a7aa0 100644 --- a/docs/html/search/all_3.js +++ b/docs/html/search/all_3.js @@ -1,5 +1,6 @@ var searchData= [ + ['debugging_20incorrect_20memory_20usage',['Debugging incorrect memory usage',['../debugging_memory_usage.html',1,'index']]], ['defragmentation',['Defragmentation',['../defragmentation.html',1,'index']]], ['device',['device',['../struct_vma_allocator_create_info.html#ad924ddd77b04039c88d0c09b0ffcd500',1,'VmaAllocatorCreateInfo']]], ['devicememory',['deviceMemory',['../struct_vma_allocation_info.html#ae0bfb7dfdf79a76ffefc9a94677a2f67',1,'VmaAllocationInfo']]], diff --git a/docs/html/search/groups_0.html b/docs/html/search/groups_0.html deleted file mode 100644 index 1ede28d..0000000 --- a/docs/html/search/groups_0.html +++ /dev/null @@ -1,26 +0,0 @@ - - - - - - - - - -
    -
    Loading...
    -
    - -
    Searching...
    -
    No Matches
    - -
    - - diff --git a/docs/html/search/groups_0.js b/docs/html/search/groups_0.js deleted file mode 100644 index 025ecae..0000000 --- a/docs/html/search/groups_0.js +++ /dev/null @@ -1,4 +0,0 @@ -var searchData= -[ - ['general',['General',['../group__general.html',1,'']]] -]; diff --git a/docs/html/search/groups_1.html b/docs/html/search/groups_1.html deleted file mode 100644 index 3c05216..0000000 --- a/docs/html/search/groups_1.html +++ /dev/null @@ -1,26 +0,0 @@ - - - - - - - - - -
    -
    Loading...
    -
    - -
    Searching...
    -
    No Matches
    - -
    - - diff --git a/docs/html/search/groups_1.js b/docs/html/search/groups_1.js deleted file mode 100644 index 058f893..0000000 --- a/docs/html/search/groups_1.js +++ /dev/null @@ -1,6 +0,0 @@ -var searchData= -[ - ['layer_201_20choosing_20memory_20type',['Layer 1 Choosing Memory Type',['../group__layer1.html',1,'']]], - ['layer_202_20allocating_20memory',['Layer 2 Allocating Memory',['../group__layer2.html',1,'']]], - ['layer_203_20creating_20buffers_20and_20images',['Layer 3 Creating Buffers and Images',['../group__layer3.html',1,'']]] -]; diff --git a/docs/html/search/pages_1.js b/docs/html/search/pages_1.js index 5289cc4..e5f807d 100644 --- a/docs/html/search/pages_1.js +++ b/docs/html/search/pages_1.js @@ -2,6 +2,5 @@ var searchData= [ ['choosing_20memory_20type',['Choosing memory type',['../choosing_memory_type.html',1,'index']]], ['configuration',['Configuration',['../configuration.html',1,'index']]], - ['corruption_20detection',['Corruption detection',['../corruption_detection.html',1,'index']]], ['custom_20memory_20pools',['Custom memory pools',['../custom_memory_pools.html',1,'index']]] ]; diff --git a/docs/html/search/pages_2.js b/docs/html/search/pages_2.js index 07d814d..12ea7b1 100644 --- a/docs/html/search/pages_2.js +++ b/docs/html/search/pages_2.js @@ -1,4 +1,5 @@ var searchData= [ + ['debugging_20incorrect_20memory_20usage',['Debugging incorrect memory usage',['../debugging_memory_usage.html',1,'index']]], ['defragmentation',['Defragmentation',['../defragmentation.html',1,'index']]] ]; diff --git a/docs/html/struct_vma_memory_requirements-members.html b/docs/html/struct_vma_memory_requirements-members.html deleted file mode 100644 index aefd96c..0000000 --- a/docs/html/struct_vma_memory_requirements-members.html +++ /dev/null @@ -1,81 +0,0 @@ - - - - - - - -Vulkan Memory Allocator: Member List - - - - - - - - - -
    VmaMemoryRequirements Member List
    -
    - - - - - diff --git a/docs/html/struct_vma_memory_requirements.html b/docs/html/struct_vma_memory_requirements.html deleted file mode 100644 index c3572f1..0000000 --- a/docs/html/struct_vma_memory_requirements.html +++ /dev/null @@ -1,181 +0,0 @@ - - - - - - - -Vulkan Memory Allocator: VmaMemoryRequirements Struct Reference - - - - - - - - - -
    VmaMemoryRequirements Struct Reference
    -
    -
    - -

    #include <vk_mem_alloc.h>

    - - - - - - - - - - - - - - - - -

    -Public Attributes

    VmaMemoryRequirementFlags flags
     
    VmaMemoryUsage usage
     Intended usage of memory. More...
     
    VkMemoryPropertyFlags requiredFlags
     Flags that must be set in a Memory Type chosen for an allocation. More...
     
    VkMemoryPropertyFlags preferredFlags
     Flags that preferably should be set in a Memory Type chosen for an allocation. More...
     
    void * pUserData
     Custom general-purpose pointer that will be stored in VmaAllocation, can be read as VmaAllocationInfo::pUserData and changed using vmaSetAllocationUserData(). More...
     
    -

    Member Data Documentation

    - -

    ◆ flags

    - -
    -
    - - - - -
    VmaMemoryRequirementFlags VmaMemoryRequirements::flags
    -
    - -
    -
    - -

    ◆ preferredFlags

    - -
    -
    - - - - -
    VkMemoryPropertyFlags VmaMemoryRequirements::preferredFlags
    -
    - -

    Flags that preferably should be set in a Memory Type chosen for an allocation.

    -

Set to 0 if no additional flags are preferred and only requiredFlags should be used. If not 0, it must be a superset of, or equal to, requiredFlags.
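The superset requirement can be expressed as a simple bitwise check. This is an illustrative helper, not part of the library API (VkMemoryPropertyFlags is stood in by uint32_t so the sketch is self-contained):

```cpp
#include <cstdint>

// Illustrative helper, not part of the VMA API: preferredFlags, if nonzero,
// must contain every bit of requiredFlags ("superset or equal").
bool PreferredFlagsValid(uint32_t requiredFlags, uint32_t preferredFlags)
{
    return preferredFlags == 0 ||
        (preferredFlags & requiredFlags) == requiredFlags;
}
```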

    - -
    -
    - -

    ◆ pUserData

    - -
    -
    - - - - -
    void* VmaMemoryRequirements::pUserData
    -
    - -

    Custom general-purpose pointer that will be stored in VmaAllocation, can be read as VmaAllocationInfo::pUserData and changed using vmaSetAllocationUserData().

    - -
    -
    - -

    ◆ requiredFlags

    - -
    -
    - - - - -
    VkMemoryPropertyFlags VmaMemoryRequirements::requiredFlags
    -
    - -

    Flags that must be set in a Memory Type chosen for an allocation.

    -

    Leave 0 if you specify requirement via usage.

    - -
    -
    - -

    ◆ usage

    - -
    -
    - - - - -
    VmaMemoryUsage VmaMemoryRequirements::usage
    -
    - -

    Intended usage of memory.

    -

    Leave VMA_MEMORY_USAGE_UNKNOWN if you specify requiredFlags. You can also use both.

    - -
    -
    -
    The documentation for this struct was generated from the following file: -
    - - - - diff --git a/docs/html/thread_safety.html b/docs/html/thread_safety.html deleted file mode 100644 index 339e09d..0000000 --- a/docs/html/thread_safety.html +++ /dev/null @@ -1,83 +0,0 @@ - - - - - - - -Vulkan Memory Allocator: Thread safety - - - - - - - - - -
    Thread safety
    -
    -
    -
      -
    • The library has no global state, so separate VmaAllocator objects can be used independently. There should be no need to create multiple such objects though - one per VkDevice is enough.
    • -
    • By default, all calls to functions that take VmaAllocator as first parameter are safe to call from multiple threads simultaneously because they are synchronized internally when needed.
    • -
    • When the allocator is created with VMA_ALLOCATOR_CREATE_EXTERNALLY_SYNCHRONIZED_BIT flag, calls to functions that take such VmaAllocator object must be synchronized externally.
    • -
    • Access to a VmaAllocation object must be externally synchronized. For example, you must not call vmaGetAllocationInfo() and vmaMapMemory() from different threads at the same time if you pass the same VmaAllocation object to these functions.
    • -
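The last rule above can be followed by guarding each shared allocation handle with a mutex. A self-contained sketch (hypothetical wrapper, not part of the library; VmaAllocation is stood in by void*):

```cpp
#include <mutex>

// Hypothetical wrapper, not part of the VMA API: serialize all access to a
// single allocation handle shared between threads.
struct SyncedAllocation
{
    std::mutex mutex;
    void* allocation = nullptr; // stand-in for VmaAllocation

    // Run func with exclusive access to the allocation handle.
    template<typename F>
    void WithLock(F&& func)
    {
        std::lock_guard<std::mutex> lock(mutex);
        func(allocation);
    }
};
```

Inside the callback you could, for example, call vmaGetAllocationInfo() or vmaMapMemory() on the wrapped handle without racing other threads.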
    -
    - - - - diff --git a/docs/html/user_guide.html b/docs/html/user_guide.html deleted file mode 100644 index 1fbfab0..0000000 --- a/docs/html/user_guide.html +++ /dev/null @@ -1,254 +0,0 @@ - - - - - - - -Vulkan Memory Allocator: User guide - - - - - - - - - -
    User guide
    -
    -
    -

    -Quick start

    -

    In your project code:

    -
      -
    1. Include "vk_mem_alloc.h" file wherever you want to use the library.
    2. -
3. In exactly one C++ file, define the following macro before the include to build the library implementation.
    4. -
    -
    #define VMA_IMPLEMENTATION
    -#include "vk_mem_alloc.h"
    -

    At program startup:

    -
      -
    1. Initialize Vulkan to have VkPhysicalDevice and VkDevice object.
    2. -
    3. Fill VmaAllocatorCreateInfo structure and create VmaAllocator object by calling vmaCreateAllocator().
    4. -
    -
    VmaAllocatorCreateInfo allocatorInfo = {};
    -allocatorInfo.physicalDevice = physicalDevice;
    -allocatorInfo.device = device;
    -
    -VmaAllocator allocator;
    -vmaCreateAllocator(&allocatorInfo, &allocator);
    -

    When you want to create a buffer or image:

    -
      -
    1. Fill VkBufferCreateInfo / VkImageCreateInfo structure.
    2. -
    3. Fill VmaAllocationCreateInfo structure.
    4. -
    5. Call vmaCreateBuffer() / vmaCreateImage() to get VkBuffer/VkImage with memory already allocated and bound to it.
    6. -
    -
    VkBufferCreateInfo bufferInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
    -bufferInfo.size = 65536;
    -bufferInfo.usage = VK_BUFFER_USAGE_VERTEX_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;
    -
    -VmaAllocationCreateInfo allocInfo = {};
    -allocInfo.usage = VMA_MEMORY_USAGE_GPU_ONLY;
    -
    -VkBuffer buffer;
    -VmaAllocation allocation;
    -vmaCreateBuffer(allocator, &bufferInfo, &allocInfo, &buffer, &allocation, nullptr);
    -

    Don't forget to destroy your objects when no longer needed:

    -
    vmaDestroyBuffer(allocator, buffer, allocation);
    -vmaDestroyAllocator(allocator);
    -

    -Persistently mapped memory

    -

    If you need to map memory on the host, it may happen that two allocations are assigned to the same VkDeviceMemory block, so if you map them both at the same time, it will cause an error, because mapping a single memory block multiple times is illegal in Vulkan.

    -

    It is safer, more convenient and more efficient to use a special feature designed for that: persistently mapped memory. Allocations made with the VMA_ALLOCATION_CREATE_PERSISTENT_MAP_BIT flag set in VmaAllocationCreateInfo::flags are returned from device memory blocks that stay mapped all the time, so you can just access the CPU pointer to them. The VmaAllocationInfo::pMappedData pointer is already offset to the beginning of the particular allocation. Example:

    -
    VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
    -bufCreateInfo.size = 1024;
    -bufCreateInfo.usage = VK_BUFFER_USAGE_TRANSFER_SRC_BIT;
    -
    -VmaAllocationCreateInfo allocCreateInfo = {};
    -allocCreateInfo.usage = VMA_MEMORY_USAGE_CPU_ONLY;
    -allocCreateInfo.flags = VMA_ALLOCATION_CREATE_PERSISTENT_MAP_BIT;
    -
    -VkBuffer buf;
    -VmaAllocation alloc;
    -VmaAllocationInfo allocInfo;
    -vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, &allocInfo);
    -
    -// Buffer is immediately mapped. You can access its memory.
    -memcpy(allocInfo.pMappedData, myData, 1024);
    -

    Memory in Vulkan doesn't need to be unmapped before using it e.g. for transfers, but if you are not sure whether it's HOST_COHERENT (here it surely is, because it's created with VMA_MEMORY_USAGE_CPU_ONLY), you should check it. If it's not, you should call vkInvalidateMappedMemoryRanges() before reading from and vkFlushMappedMemoryRanges() after writing to mapped memory on the CPU. Example:

    -
    VkMemoryPropertyFlags memFlags;
    -vmaGetMemoryTypeProperties(allocator, allocInfo.memoryType, &memFlags);
    -if((memFlags & VK_MEMORY_PROPERTY_HOST_COHERENT_BIT) == 0)
    -{
    -    VkMappedMemoryRange memRange = { VK_STRUCTURE_TYPE_MAPPED_MEMORY_RANGE };
    -    memRange.memory = allocInfo.deviceMemory;
    -    memRange.offset = allocInfo.offset;
    -    memRange.size   = allocInfo.size;
    -    vkFlushMappedMemoryRanges(device, 1, &memRange);
    -}
    -

    On AMD GPUs on Windows, Vulkan memory from the type that has both DEVICE_LOCAL and HOST_VISIBLE flags should not be mapped for the duration of any call to vkQueueSubmit() or vkQueuePresent(). Although legal, that would cause performance degradation, because WDDM migrates such memory to system RAM. To ensure this, you can unmap all persistently mapped memory using just one function call. For details, see functions vmaUnmapPersistentlyMappedMemory() and vmaMapPersistentlyMappedMemory().

    -

    -Custom memory pools

    -

    The library automatically creates and manages a default memory pool for each memory type available on the device. A pool contains a number of VkDeviceMemory blocks. You can create a custom pool and allocate memory out of it. This can be useful if you want to:

    -
      -
• Keep a certain kind of allocations separate from others.
• Enforce a particular size of Vulkan memory blocks.
• Limit the maximum amount of Vulkan memory allocated for that pool.
    -

    To use custom memory pools:

    -
      -
1. Fill VmaPoolCreateInfo structure.
2. Call vmaCreatePool() to obtain VmaPool handle.
3. When making an allocation, set VmaAllocationCreateInfo::pool to this handle. You don't need to specify any other parameters of this structure, like usage.
    -

    Example:

    -
    // Create a pool that could have at most 2 blocks, 128 MB each.
    -VmaPoolCreateInfo poolCreateInfo = {};
    -poolCreateInfo.memoryTypeIndex = ...
    -poolCreateInfo.blockSize = 128ull * 1024 * 1024;
    -poolCreateInfo.maxBlockCount = 2;
    -
    -VmaPool pool;
    -vmaCreatePool(allocator, &poolCreateInfo, &pool);
    -
    -// Allocate a buffer out of it.
    -VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
    -bufCreateInfo.size = 1024;
    -bufCreateInfo.usage = VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;
    -
    -VmaAllocationCreateInfo allocCreateInfo = {};
    -allocCreateInfo.pool = pool;
    -
    -VkBuffer buf;
    -VmaAllocation alloc;
    -VmaAllocationInfo allocInfo;
    -vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, &allocInfo);
    -

    You have to free all allocations made from this pool before destroying it.

    -
    vmaDestroyBuffer(allocator, buf, alloc);
    -vmaDestroyPool(allocator, pool);
    -

    -Defragmentation

    -

    Interleaved allocations and deallocations of many objects of varying size can cause fragmentation, which can lead to a situation where the library is unable to find a continuous range of free memory for a new allocation, even though there is enough free space in total, just scattered across many small free ranges between existing allocations.

    -

    To mitigate this problem, you can use vmaDefragment(). Given a set of allocations, this function can move them to compact used memory, ensure more continuous free space and possibly also free some VkDeviceMemory. It works only on allocations made from memory types that are HOST_VISIBLE. Allocations are modified to point to the new VkDeviceMemory and offset. Data in this memory is also memmove-ed to the new place. However, if you have images or buffers bound to these allocations (and you certainly do), you need to destroy them, recreate them, and bind them to the new place in memory.

    -

    For further details and example code, see documentation of function vmaDefragment().

    -
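The compaction idea can be illustrated with a toy model (plain C++, no Vulkan; the names ToyAlloc and CompactBlock are hypothetical, not part of the library): sliding every allocation down to the lowest free offset merges the scattered free ranges into one contiguous range at the end of the block.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// Toy model of defragmentation: each allocation has an offset and size
// inside a fixed-size block. Compacting slides every allocation down to
// the lowest free offset, coalescing all free space at the end.
struct ToyAlloc { uint64_t offset; uint64_t size; };

// Returns the size of the single contiguous free range left at the end.
uint64_t CompactBlock(std::vector<ToyAlloc>& allocs, uint64_t blockSize)
{
    std::sort(allocs.begin(), allocs.end(),
        [](const ToyAlloc& a, const ToyAlloc& b) { return a.offset < b.offset; });
    uint64_t cursor = 0;
    for(ToyAlloc& a : allocs)
    {
        a.offset = cursor;   // "memmove" the allocation down
        cursor += a.size;
    }
    return blockSize - cursor;
}
```

The real library additionally moves the bytes and requires you to recreate and rebind the buffers/images, which this sketch omits.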

    -Lost allocations

    -

    If your game oversubscribes video memory, it may still work OK in previous-generation graphics APIs (DirectX 9, 10, 11, OpenGL), because resources are automatically paged to system RAM. In Vulkan you can't rely on that: when you run out of memory, an allocation just fails. If you have more data (e.g. textures) than can fit into VRAM and you don't need it all at once, you may want to upload it to the GPU on demand and "push out" data that hasn't been used for a long time to make room for new data, effectively using VRAM (or a certain memory pool) as a form of cache. Vulkan Memory Allocator can help you with that by supporting the concept of "lost allocations".

    -

    To create an allocation that can become lost, include the VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT flag in VmaAllocationCreateInfo::flags. Before using a buffer or image bound to such an allocation in every new frame, you need to check whether it has become lost: call vmaGetAllocationInfo() and see if VmaAllocationInfo::deviceMemory is not VK_NULL_HANDLE. If the allocation is lost, you should not use it or the buffer/image bound to it. You also mustn't forget to destroy the allocation and the buffer/image.

    -

    To create an allocation that can make some other allocations lost to make room for it, use VMA_ALLOCATION_CREATE_CAN_MAKE_OTHER_LOST_BIT flag. You will usually use both flags VMA_ALLOCATION_CREATE_CAN_MAKE_OTHER_LOST_BIT and VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT at the same time.

    -

    Warning! The current implementation uses a quite naive, brute-force algorithm, which can make allocation calls that use the VMA_ALLOCATION_CREATE_CAN_MAKE_OTHER_LOST_BIT flag quite slow. A new, more efficient algorithm and data structure to speed this up is planned for the future.

    -

    When interleaving creation of new allocations with usage of existing ones, how do you make sure that an allocation won't become lost while it's used in the current frame?

    -

    It is ensured because vmaGetAllocationInfo() not only returns allocation parameters and checks whether the allocation is lost, but when it's not, it also atomically marks it as used in the current frame, which makes it impossible for it to become lost in that frame. It uses a lockless algorithm, so it works fast and doesn't involve locking any internal mutex.

    -

    What if my allocation may still be in use by the GPU when it's rendering a previous frame while I already submit new frame on the CPU?

    -

    You can make sure that allocations "touched" by vmaGetAllocationInfo() will not become lost for a number of additional frames back from the current one by specifying this number as VmaAllocatorCreateInfo::frameInUseCount (for default memory pool) and VmaPoolCreateInfo::frameInUseCount (for custom pool).

    -
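The frame-aging rule described above can be sketched as a toy model (plain C++, no Vulkan; the names ToyLostAlloc, Touch and CanBecomeLost are hypothetical): an allocation touched at frame T stays protected until the current frame index exceeds T + frameInUseCount.

```cpp
#include <cassert>
#include <cstdint>

// Toy model of the lost-allocation aging rule: vmaGetAllocationInfo()
// "touches" an allocation by stamping it with the current frame index;
// it may only become lost once it has not been touched for more than
// frameInUseCount frames.
struct ToyLostAlloc
{
    uint32_t lastUseFrameIndex = 0;

    void Touch(uint32_t currentFrame) { lastUseFrameIndex = currentFrame; }

    bool CanBecomeLost(uint32_t currentFrame, uint32_t frameInUseCount) const
    {
        // Protected for the frame it was touched in, plus frameInUseCount
        // additional frames after it.
        return lastUseFrameIndex + frameInUseCount < currentFrame;
    }
};
```

With frameInUseCount = 1, an allocation touched at frame 10 stays safe through frame 11 and may only become lost from frame 12 on.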

    How do you inform the library when new frame starts?

    -

    You need to call function vmaSetCurrentFrameIndex().

    -

    Example code:

    -
    struct MyBuffer
    -{
    -    VkBuffer m_Buf = nullptr;
    -    VmaAllocation m_Alloc = nullptr;
    -
    -    // Called when the buffer is really needed in the current frame.
    -    void EnsureBuffer();
    -};
    -
    -void MyBuffer::EnsureBuffer()
    -{
    -    // Buffer has been created.
    -    if(m_Buf != VK_NULL_HANDLE)
    -    {
    -        // Check if its allocation is not lost + mark it as used in current frame.
    -        VmaAllocationInfo allocInfo;
    -        vmaGetAllocationInfo(allocator, m_Alloc, &allocInfo);
    -        if(allocInfo.deviceMemory != VK_NULL_HANDLE)
    -        {
    -            // It's all OK - safe to use m_Buf.
    -            return;
    -        }
    -    }
    -
    -    // Buffer does not exist yet or is lost - destroy and recreate it.
    -
    -    vmaDestroyBuffer(allocator, m_Buf, m_Alloc);
    -
    -    VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
    -    bufCreateInfo.size = 1024;
    -    bufCreateInfo.usage = VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;
    -
    -    VmaAllocationCreateInfo allocCreateInfo = {};
    -    allocCreateInfo.usage = VMA_MEMORY_USAGE_GPU_ONLY;
    -    allocCreateInfo.flags = VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT |
    -        VMA_ALLOCATION_CREATE_CAN_MAKE_OTHER_LOST_BIT;
    -
    -    vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &m_Buf, &m_Alloc, nullptr);
    -}
    -

    When using lost allocations, you may see some Vulkan validation layer warnings about overlapping regions of memory bound to different kinds of buffers and images. This is still valid as long as you implement proper handling of lost allocations (like in the example above) and don't use the allocations that are lost.

    -

    The library uses the following algorithm for allocation, in order:

    -
      -
1. Try to find a free range of memory in existing blocks.
2. If that fails, try to create a new block of VkDeviceMemory, with the preferred block size.
3. If that fails, try to create such a block with size/2 and size/4.
4. If that fails and the VMA_ALLOCATION_CREATE_CAN_MAKE_OTHER_LOST_BIT flag was specified, try to find space in existing blocks, possibly making some other allocations lost.
5. If that fails, try to allocate a separate VkDeviceMemory for this allocation, just like when you use VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT.
6. If that fails, choose another memory type that meets the requirements specified in VmaAllocationCreateInfo and go to point 1.
7. If that fails, return VK_ERROR_OUT_OF_DEVICE_MEMORY.
    -
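Steps 2, 3 and 5 of this chain can be sketched as a toy model (plain C++, no Vulkan; ChooseBlockSize is a hypothetical name, and the heap-budget check stands in for a real vkAllocateMemory failure): given a remaining heap budget, try the preferred block size, then half and a quarter of it, and finally fall back to a dedicated allocation of exactly the requested size.

```cpp
#include <cassert>
#include <cstdint>

// Toy model of the block-size fallback: returns the VkDeviceMemory block
// size that would be requested, or 0 if even a dedicated allocation of
// exactly allocSize does not fit into the remaining heap budget.
uint64_t ChooseBlockSize(uint64_t preferredBlockSize,
                         uint64_t allocSize,
                         uint64_t heapBudget)
{
    // Try the preferred size, then 1/2 and 1/4 of it.
    const uint64_t candidates[3] = {
        preferredBlockSize, preferredBlockSize / 2, preferredBlockSize / 4 };
    for(uint64_t size : candidates)
    {
        if(size >= allocSize && size <= heapBudget)
            return size;
    }
    // Last resort: a dedicated allocation of exactly the requested size.
    return allocSize <= heapBudget ? allocSize : 0;
}
```

A zero result corresponds to the point where the real library would move on to another memory type, and finally to VK_ERROR_OUT_OF_DEVICE_MEMORY.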
    - - - - diff --git a/docs/html/vk__mem__alloc_8h.html b/docs/html/vk__mem__alloc_8h.html index 01ecea4..8a066b6 100644 --- a/docs/html/vk__mem__alloc_8h.html +++ b/docs/html/vk__mem__alloc_8h.html @@ -1146,7 +1146,7 @@ Functions -

    Corruption detection is enabled only when VMA_DEBUG_DETECT_CORRUPTION macro is defined to nonzero, VMA_DEBUG_MARGIN is defined to nonzero and only for memory types that are HOST_VISIBLE and HOST_COHERENT. For more information, see Corruption detection.

    +

    Corruption detection is enabled only when VMA_DEBUG_DETECT_CORRUPTION macro is defined to nonzero, VMA_DEBUG_MARGIN is defined to nonzero and only for memory types that are HOST_VISIBLE and HOST_COHERENT. For more information, see Corruption detection.

    Possible return values:

    • VK_ERROR_FEATURE_NOT_PRESENT - corruption detection is not enabled for any of specified memory types.
    • @@ -1184,7 +1184,7 @@ Functions
  • Checks magic number in margins around all allocations in given memory pool in search for corruptions.

    -

    Corruption detection is enabled only when VMA_DEBUG_DETECT_CORRUPTION macro is defined to nonzero, VMA_DEBUG_MARGIN is defined to nonzero and the pool is created in memory type that is HOST_VISIBLE and HOST_COHERENT. For more information, see Corruption detection.

    +

    Corruption detection is enabled only when VMA_DEBUG_DETECT_CORRUPTION macro is defined to nonzero, VMA_DEBUG_MARGIN is defined to nonzero and the pool is created in memory type that is HOST_VISIBLE and HOST_COHERENT. For more information, see Corruption detection.

    Possible return values:

    • VK_ERROR_FEATURE_NOT_PRESENT - corruption detection is not enabled for specified pool.
    • diff --git a/docs/html/vk__mem__alloc_8h_source.html b/docs/html/vk__mem__alloc_8h_source.html index 1d913f3..8354ec9 100644 --- a/docs/html/vk__mem__alloc_8h_source.html +++ b/docs/html/vk__mem__alloc_8h_source.html @@ -62,166 +62,166 @@ $(function() {
      vk_mem_alloc.h
    -Go to the documentation of this file.
    1 //
    2 // Copyright (c) 2017-2018 Advanced Micro Devices, Inc. All rights reserved.
    3 //
    4 // Permission is hereby granted, free of charge, to any person obtaining a copy
    5 // of this software and associated documentation files (the "Software"), to deal
    6 // in the Software without restriction, including without limitation the rights
    7 // to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
    8 // copies of the Software, and to permit persons to whom the Software is
    9 // furnished to do so, subject to the following conditions:
    10 //
    11 // The above copyright notice and this permission notice shall be included in
    12 // all copies or substantial portions of the Software.
    13 //
    14 // THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
    15 // IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
    16 // FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
    17 // AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
    18 // LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
    19 // OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
    20 // THE SOFTWARE.
    21 //
    22 
    23 #ifndef AMD_VULKAN_MEMORY_ALLOCATOR_H
    24 #define AMD_VULKAN_MEMORY_ALLOCATOR_H
    25 
    26 #ifdef __cplusplus
    27 extern "C" {
    28 #endif
    29 
    1157 #include <vulkan/vulkan.h>
    1158 
    1159 #if !defined(VMA_DEDICATED_ALLOCATION)
    1160  #if VK_KHR_get_memory_requirements2 && VK_KHR_dedicated_allocation
    1161  #define VMA_DEDICATED_ALLOCATION 1
    1162  #else
    1163  #define VMA_DEDICATED_ALLOCATION 0
    1164  #endif
    1165 #endif
    1166 
    1176 VK_DEFINE_HANDLE(VmaAllocator)
    1177 
    1178 typedef void (VKAPI_PTR *PFN_vmaAllocateDeviceMemoryFunction)(
    1180  VmaAllocator allocator,
    1181  uint32_t memoryType,
    1182  VkDeviceMemory memory,
    1183  VkDeviceSize size);
    1185 typedef void (VKAPI_PTR *PFN_vmaFreeDeviceMemoryFunction)(
    1186  VmaAllocator allocator,
    1187  uint32_t memoryType,
    1188  VkDeviceMemory memory,
    1189  VkDeviceSize size);
    1190 
    1204 
    1234 
    1237 typedef VkFlags VmaAllocatorCreateFlags;
    1238 
    1243 typedef struct VmaVulkanFunctions {
    1244  PFN_vkGetPhysicalDeviceProperties vkGetPhysicalDeviceProperties;
    1245  PFN_vkGetPhysicalDeviceMemoryProperties vkGetPhysicalDeviceMemoryProperties;
    1246  PFN_vkAllocateMemory vkAllocateMemory;
    1247  PFN_vkFreeMemory vkFreeMemory;
    1248  PFN_vkMapMemory vkMapMemory;
    1249  PFN_vkUnmapMemory vkUnmapMemory;
    1250  PFN_vkFlushMappedMemoryRanges vkFlushMappedMemoryRanges;
    1251  PFN_vkInvalidateMappedMemoryRanges vkInvalidateMappedMemoryRanges;
    1252  PFN_vkBindBufferMemory vkBindBufferMemory;
    1253  PFN_vkBindImageMemory vkBindImageMemory;
    1254  PFN_vkGetBufferMemoryRequirements vkGetBufferMemoryRequirements;
    1255  PFN_vkGetImageMemoryRequirements vkGetImageMemoryRequirements;
    1256  PFN_vkCreateBuffer vkCreateBuffer;
    1257  PFN_vkDestroyBuffer vkDestroyBuffer;
    1258  PFN_vkCreateImage vkCreateImage;
    1259  PFN_vkDestroyImage vkDestroyImage;
    1260 #if VMA_DEDICATED_ALLOCATION
    1261  PFN_vkGetBufferMemoryRequirements2KHR vkGetBufferMemoryRequirements2KHR;
    1262  PFN_vkGetImageMemoryRequirements2KHR vkGetImageMemoryRequirements2KHR;
    1263 #endif
    1265 
    1268 {
    1270  VmaAllocatorCreateFlags flags;
    1272 
    1273  VkPhysicalDevice physicalDevice;
    1275 
    1276  VkDevice device;
    1278 
    1281 
    1282  const VkAllocationCallbacks* pAllocationCallbacks;
    1284 
    1323  const VkDeviceSize* pHeapSizeLimit;
    1337 
    1339 VkResult vmaCreateAllocator(
    1340  const VmaAllocatorCreateInfo* pCreateInfo,
    1341  VmaAllocator* pAllocator);
    1342 
    1344 void vmaDestroyAllocator(
    1345  VmaAllocator allocator);
    1346 
    1352  VmaAllocator allocator,
    1353  const VkPhysicalDeviceProperties** ppPhysicalDeviceProperties);
    1354 
    1360  VmaAllocator allocator,
    1361  const VkPhysicalDeviceMemoryProperties** ppPhysicalDeviceMemoryProperties);
    1362 
    1370  VmaAllocator allocator,
    1371  uint32_t memoryTypeIndex,
    1372  VkMemoryPropertyFlags* pFlags);
    1373 
    1383  VmaAllocator allocator,
    1384  uint32_t frameIndex);
    1385 
    1388 typedef struct VmaStatInfo
    1389 {
    1391  uint32_t blockCount;
    1397  VkDeviceSize usedBytes;
    1399  VkDeviceSize unusedBytes;
    1400  VkDeviceSize allocationSizeMin, allocationSizeAvg, allocationSizeMax;
    1401  VkDeviceSize unusedRangeSizeMin, unusedRangeSizeAvg, unusedRangeSizeMax;
    1402 } VmaStatInfo;
    1403 
    1405 typedef struct VmaStats
    1406 {
    1407  VmaStatInfo memoryType[VK_MAX_MEMORY_TYPES];
    1408  VmaStatInfo memoryHeap[VK_MAX_MEMORY_HEAPS];
    1410 } VmaStats;
    1411 
    1413 void vmaCalculateStats(
    1414  VmaAllocator allocator,
    1415  VmaStats* pStats);
    1416 
    1417 #define VMA_STATS_STRING_ENABLED 1
    1418 
    1419 #if VMA_STATS_STRING_ENABLED
    1420 
    1422 
    1424 void vmaBuildStatsString(
    1425  VmaAllocator allocator,
    1426  char** ppStatsString,
    1427  VkBool32 detailedMap);
    1428 
    1429 void vmaFreeStatsString(
    1430  VmaAllocator allocator,
    1431  char* pStatsString);
    1432 
    1433 #endif // #if VMA_STATS_STRING_ENABLED
    1434 
    1443 VK_DEFINE_HANDLE(VmaPool)
    1444 
    1445 typedef enum VmaMemoryUsage
    1446 {
    1495 } VmaMemoryUsage;
    1496 
    1511 
    1561 
    1565 
    1567 {
    1569  VmaAllocationCreateFlags flags;
    1580  VkMemoryPropertyFlags requiredFlags;
    1585  VkMemoryPropertyFlags preferredFlags;
    1593  uint32_t memoryTypeBits;
    1606  void* pUserData;
    1608 
    1625 VkResult vmaFindMemoryTypeIndex(
    1626  VmaAllocator allocator,
    1627  uint32_t memoryTypeBits,
    1628  const VmaAllocationCreateInfo* pAllocationCreateInfo,
    1629  uint32_t* pMemoryTypeIndex);
    1630 
    1644  VmaAllocator allocator,
    1645  const VkBufferCreateInfo* pBufferCreateInfo,
    1646  const VmaAllocationCreateInfo* pAllocationCreateInfo,
    1647  uint32_t* pMemoryTypeIndex);
    1648 
    1662  VmaAllocator allocator,
    1663  const VkImageCreateInfo* pImageCreateInfo,
    1664  const VmaAllocationCreateInfo* pAllocationCreateInfo,
    1665  uint32_t* pMemoryTypeIndex);
    1666 
    1687 
    1690 typedef VkFlags VmaPoolCreateFlags;
    1691 
    1694 typedef struct VmaPoolCreateInfo {
    1700  VmaPoolCreateFlags flags;
    1705  VkDeviceSize blockSize;
    1734 
    1737 typedef struct VmaPoolStats {
    1740  VkDeviceSize size;
    1743  VkDeviceSize unusedSize;
    1756  VkDeviceSize unusedRangeSizeMax;
    1757 } VmaPoolStats;
    1758 
    1765 VkResult vmaCreatePool(
    1766  VmaAllocator allocator,
    1767  const VmaPoolCreateInfo* pCreateInfo,
    1768  VmaPool* pPool);
    1769 
    1772 void vmaDestroyPool(
    1773  VmaAllocator allocator,
    1774  VmaPool pool);
    1775 
    1782 void vmaGetPoolStats(
    1783  VmaAllocator allocator,
    1784  VmaPool pool,
    1785  VmaPoolStats* pPoolStats);
    1786 
    1794  VmaAllocator allocator,
    1795  VmaPool pool,
    1796  size_t* pLostAllocationCount);
    1797 
    1812 VkResult vmaCheckPoolCorruption(VmaAllocator allocator, VmaPool pool);
    1813 
    1838 VK_DEFINE_HANDLE(VmaAllocation)
    1839 
    1840 
    1842 typedef struct VmaAllocationInfo {
    1847  uint32_t memoryType;
    1856  VkDeviceMemory deviceMemory;
    1861  VkDeviceSize offset;
    1866  VkDeviceSize size;
    1880  void* pUserData;
    1882 
    1893 VkResult vmaAllocateMemory(
    1894  VmaAllocator allocator,
    1895  const VkMemoryRequirements* pVkMemoryRequirements,
    1896  const VmaAllocationCreateInfo* pCreateInfo,
    1897  VmaAllocation* pAllocation,
    1898  VmaAllocationInfo* pAllocationInfo);
    1899 
    1907  VmaAllocator allocator,
    1908  VkBuffer buffer,
    1909  const VmaAllocationCreateInfo* pCreateInfo,
    1910  VmaAllocation* pAllocation,
    1911  VmaAllocationInfo* pAllocationInfo);
    1912 
    1914 VkResult vmaAllocateMemoryForImage(
    1915  VmaAllocator allocator,
    1916  VkImage image,
    1917  const VmaAllocationCreateInfo* pCreateInfo,
    1918  VmaAllocation* pAllocation,
    1919  VmaAllocationInfo* pAllocationInfo);
    1920 
    1922 void vmaFreeMemory(
    1923  VmaAllocator allocator,
    1924  VmaAllocation allocation);
    1925 
    1943  VmaAllocator allocator,
    1944  VmaAllocation allocation,
    1945  VmaAllocationInfo* pAllocationInfo);
    1946 
    1961 VkBool32 vmaTouchAllocation(
    1962  VmaAllocator allocator,
    1963  VmaAllocation allocation);
    1964 
    1979  VmaAllocator allocator,
    1980  VmaAllocation allocation,
    1981  void* pUserData);
    1982 
    1994  VmaAllocator allocator,
    1995  VmaAllocation* pAllocation);
    1996 
    2031 VkResult vmaMapMemory(
    2032  VmaAllocator allocator,
    2033  VmaAllocation allocation,
    2034  void** ppData);
    2035 
    2040 void vmaUnmapMemory(
    2041  VmaAllocator allocator,
    2042  VmaAllocation allocation);
    2043 
    2056 void vmaFlushAllocation(VmaAllocator allocator, VmaAllocation allocation, VkDeviceSize offset, VkDeviceSize size);
    2057 
    2070 void vmaInvalidateAllocation(VmaAllocator allocator, VmaAllocation allocation, VkDeviceSize offset, VkDeviceSize size);
    2071 
    2088 VkResult vmaCheckCorruption(VmaAllocator allocator, uint32_t memoryTypeBits);
    2089 
    2091 typedef struct VmaDefragmentationInfo {
    2096  VkDeviceSize maxBytesToMove;
    2103 
    2105 typedef struct VmaDefragmentationStats {
    2107  VkDeviceSize bytesMoved;
    2109  VkDeviceSize bytesFreed;
    2115 
    2198 VkResult vmaDefragment(
    2199  VmaAllocator allocator,
    2200  VmaAllocation* pAllocations,
    2201  size_t allocationCount,
    2202  VkBool32* pAllocationsChanged,
    2203  const VmaDefragmentationInfo *pDefragmentationInfo,
    2204  VmaDefragmentationStats* pDefragmentationStats);
    2205 
    2218 VkResult vmaBindBufferMemory(
    2219  VmaAllocator allocator,
    2220  VmaAllocation allocation,
    2221  VkBuffer buffer);
    2222 
    2235 VkResult vmaBindImageMemory(
    2236  VmaAllocator allocator,
    2237  VmaAllocation allocation,
    2238  VkImage image);
    2239 
    2266 VkResult vmaCreateBuffer(
    2267  VmaAllocator allocator,
    2268  const VkBufferCreateInfo* pBufferCreateInfo,
    2269  const VmaAllocationCreateInfo* pAllocationCreateInfo,
    2270  VkBuffer* pBuffer,
    2271  VmaAllocation* pAllocation,
    2272  VmaAllocationInfo* pAllocationInfo);
    2273 
    2285 void vmaDestroyBuffer(
    2286  VmaAllocator allocator,
    2287  VkBuffer buffer,
    2288  VmaAllocation allocation);
    2289 
    2291 VkResult vmaCreateImage(
    2292  VmaAllocator allocator,
    2293  const VkImageCreateInfo* pImageCreateInfo,
    2294  const VmaAllocationCreateInfo* pAllocationCreateInfo,
    2295  VkImage* pImage,
    2296  VmaAllocation* pAllocation,
    2297  VmaAllocationInfo* pAllocationInfo);
    2298 
    2310 void vmaDestroyImage(
    2311  VmaAllocator allocator,
    2312  VkImage image,
    2313  VmaAllocation allocation);
    2314 
    2315 #ifdef __cplusplus
    2316 }
    2317 #endif
    2318 
    2319 #endif // AMD_VULKAN_MEMORY_ALLOCATOR_H
    2320 
    2321 // For Visual Studio IntelliSense.
    2322 #if defined(__cplusplus) && defined(__INTELLISENSE__)
    2323 #define VMA_IMPLEMENTATION
    2324 #endif
    2325 
    2326 #ifdef VMA_IMPLEMENTATION
    2327 #undef VMA_IMPLEMENTATION
    2328 
    2329 #include <cstdint>
    2330 #include <cstdlib>
    2331 #include <cstring>
    2332 
    2333 /*******************************************************************************
    2334 CONFIGURATION SECTION
    2335 
    2336 Define some of these macros before each #include of this header, or change them
    2337 here if you need behavior other than the default, depending on your environment.
    2338 */
    2339 
    2340 /*
    2341 Define this macro to 1 to make the library fetch pointers to Vulkan functions
    2342 internally, like:
    2343 
    2344  vulkanFunctions.vkAllocateMemory = &vkAllocateMemory;
    2345 
    2346 Define to 0 if you are going to provide your own pointers to Vulkan functions via
    2347 VmaAllocatorCreateInfo::pVulkanFunctions.
    2348 */
    2349 #if !defined(VMA_STATIC_VULKAN_FUNCTIONS) && !defined(VK_NO_PROTOTYPES)
    2350 #define VMA_STATIC_VULKAN_FUNCTIONS 1
    2351 #endif
    2352 
    2353 // Define this macro to 1 to make the library use STL containers instead of its own implementation.
    2354 //#define VMA_USE_STL_CONTAINERS 1
    2355 
    2356 /* Set this macro to 1 to make the library include and use STL containers:
    2357 std::pair, std::vector, std::list, std::unordered_map.
    2358 
    2359 Set it to 0 or leave it undefined to make the library use its own implementation of
    2360 the containers.
    2361 */
    2362 #if VMA_USE_STL_CONTAINERS
    2363  #define VMA_USE_STL_VECTOR 1
    2364  #define VMA_USE_STL_UNORDERED_MAP 1
    2365  #define VMA_USE_STL_LIST 1
    2366 #endif
    2367 
    2368 #if VMA_USE_STL_VECTOR
    2369  #include <vector>
    2370 #endif
    2371 
    2372 #if VMA_USE_STL_UNORDERED_MAP
    2373  #include <unordered_map>
    2374 #endif
    2375 
    2376 #if VMA_USE_STL_LIST
    2377  #include <list>
    2378 #endif
    2379 
    2380 /*
    2381 Following headers are used in this CONFIGURATION section only, so feel free to
    2382 remove them if not needed.
    2383 */
    2384 #include <cassert> // for assert
    2385 #include <algorithm> // for min, max
    2386 #include <mutex> // for std::mutex
    2387 #include <atomic> // for std::atomic
    2388 
    2389 #ifndef VMA_NULL
    2390  // Value used as null pointer. Define it to e.g.: nullptr, NULL, 0, (void*)0.
    2391  #define VMA_NULL nullptr
    2392 #endif
    2393 
    2394 #if defined(__APPLE__) || defined(__ANDROID__)
    2395 #include <cstdlib>
    2396 void *aligned_alloc(size_t alignment, size_t size)
    2397 {
    2398  // alignment must be >= sizeof(void*)
    2399  if(alignment < sizeof(void*))
    2400  {
    2401  alignment = sizeof(void*);
    2402  }
    2403 
    2404  void *pointer;
    2405  if(posix_memalign(&pointer, alignment, size) == 0)
    2406  return pointer;
    2407  return VMA_NULL;
    2408 }
    2409 #endif
    2410 
    2411 // If your compiler is not compatible with C++11 and the definition of the
    2412 // aligned_alloc() function is missing, uncommenting the following line may help:
    2413 
    2414 //#include <malloc.h>
    2415 
    2416 // Normal assert to check for programmer's errors, especially in Debug configuration.
    2417 #ifndef VMA_ASSERT
    2418  #ifdef _DEBUG
    2419  #define VMA_ASSERT(expr) assert(expr)
    2420  #else
    2421  #define VMA_ASSERT(expr)
    2422  #endif
    2423 #endif
    2424 
    2425 // Assert that will be called very often, like inside data structures e.g. operator[].
    2426 // Making it non-empty can make program slow.
    2427 #ifndef VMA_HEAVY_ASSERT
    2428  #ifdef _DEBUG
    2429  #define VMA_HEAVY_ASSERT(expr) //VMA_ASSERT(expr)
    2430  #else
    2431  #define VMA_HEAVY_ASSERT(expr)
    2432  #endif
    2433 #endif
    2434 
    2435 #ifndef VMA_ALIGN_OF
    2436  #define VMA_ALIGN_OF(type) (__alignof(type))
    2437 #endif
    2438 
    2439 #ifndef VMA_SYSTEM_ALIGNED_MALLOC
    2440  #if defined(_WIN32)
    2441  #define VMA_SYSTEM_ALIGNED_MALLOC(size, alignment) (_aligned_malloc((size), (alignment)))
    2442  #else
    2443  #define VMA_SYSTEM_ALIGNED_MALLOC(size, alignment) (aligned_alloc((alignment), (size) ))
    2444  #endif
    2445 #endif
    2446 
    2447 #ifndef VMA_SYSTEM_FREE
    2448  #if defined(_WIN32)
    2449  #define VMA_SYSTEM_FREE(ptr) _aligned_free(ptr)
    2450  #else
    2451  #define VMA_SYSTEM_FREE(ptr) free(ptr)
    2452  #endif
    2453 #endif
    2454 
    2455 #ifndef VMA_MIN
    2456  #define VMA_MIN(v1, v2) (std::min((v1), (v2)))
    2457 #endif
    2458 
    2459 #ifndef VMA_MAX
    2460  #define VMA_MAX(v1, v2) (std::max((v1), (v2)))
    2461 #endif
    2462 
    2463 #ifndef VMA_SWAP
    2464  #define VMA_SWAP(v1, v2) std::swap((v1), (v2))
    2465 #endif
    2466 
    2467 #ifndef VMA_SORT
    2468  #define VMA_SORT(beg, end, cmp) std::sort(beg, end, cmp)
    2469 #endif
    2470 
    2471 #ifndef VMA_DEBUG_LOG
    2472  #define VMA_DEBUG_LOG(format, ...)
    2473  /*
    2474  #define VMA_DEBUG_LOG(format, ...) do { \
    2475  printf(format, __VA_ARGS__); \
    2476  printf("\n"); \
    2477  } while(false)
    2478  */
    2479 #endif
    2480 
    2481 // Define this macro to 1 to enable functions: vmaBuildStatsString, vmaFreeStatsString.
    2482 #if VMA_STATS_STRING_ENABLED
    2483  static inline void VmaUint32ToStr(char* outStr, size_t strLen, uint32_t num)
    2484  {
    2485  snprintf(outStr, strLen, "%u", static_cast<unsigned int>(num));
    2486  }
    2487  static inline void VmaUint64ToStr(char* outStr, size_t strLen, uint64_t num)
    2488  {
    2489  snprintf(outStr, strLen, "%llu", static_cast<unsigned long long>(num));
    2490  }
    2491  static inline void VmaPtrToStr(char* outStr, size_t strLen, const void* ptr)
    2492  {
    2493  snprintf(outStr, strLen, "%p", ptr);
    2494  }
    2495 #endif
    2496 
    2497 #ifndef VMA_MUTEX
    2498  class VmaMutex
    2499  {
    2500  public:
    2501  VmaMutex() { }
    2502  ~VmaMutex() { }
    2503  void Lock() { m_Mutex.lock(); }
    2504  void Unlock() { m_Mutex.unlock(); }
    2505  private:
    2506  std::mutex m_Mutex;
    2507  };
    2508  #define VMA_MUTEX VmaMutex
    2509 #endif
    2510 
    2511 /*
    2512 If providing your own implementation, you need to implement a subset of std::atomic:
    2513 
    2514 - Constructor(uint32_t desired)
    2515 - uint32_t load() const
    2516 - void store(uint32_t desired)
    2517 - bool compare_exchange_weak(uint32_t& expected, uint32_t desired)
    2518 */
    2519 #ifndef VMA_ATOMIC_UINT32
    2520  #define VMA_ATOMIC_UINT32 std::atomic<uint32_t>
    2521 #endif
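A minimal replacement satisfying the subset listed above might look like this sketch. It simply wraps std::atomic<uint32_t> to show the required interface (MyAtomicUint32 is a hypothetical name); the same four operations could also be implemented with platform intrinsics.

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>

// Sketch of a type providing exactly the subset of std::atomic<uint32_t>
// that the library requires: a constructor from uint32_t, load(), store(),
// and compare_exchange_weak().
class MyAtomicUint32
{
public:
    MyAtomicUint32(uint32_t desired) : m_Value(desired) { }
    uint32_t load() const { return m_Value.load(); }
    void store(uint32_t desired) { m_Value.store(desired); }
    bool compare_exchange_weak(uint32_t& expected, uint32_t desired)
    {
        // On mismatch this returns false and writes the actual value
        // into `expected`, matching std::atomic semantics.
        return m_Value.compare_exchange_weak(expected, desired);
    }
private:
    std::atomic<uint32_t> m_Value;
};
```

You would then define VMA_ATOMIC_UINT32 as MyAtomicUint32 before including the implementation.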
    2522 
    2523 #ifndef VMA_BEST_FIT
    2524 
    2536  #define VMA_BEST_FIT (1)
    2537 #endif
    2538 
    2539 #ifndef VMA_DEBUG_ALWAYS_DEDICATED_MEMORY
    2540 
    2544  #define VMA_DEBUG_ALWAYS_DEDICATED_MEMORY (0)
    2545 #endif
    2546 
    2547 #ifndef VMA_DEBUG_ALIGNMENT
    2548 
    2552  #define VMA_DEBUG_ALIGNMENT (1)
    2553 #endif
    2554 
    2555 #ifndef VMA_DEBUG_MARGIN
    2556 
    2560  #define VMA_DEBUG_MARGIN (0)
    2561 #endif
    2562 
    2563 #ifndef VMA_DEBUG_DETECT_CORRUPTION
    2564 
    2569  #define VMA_DEBUG_DETECT_CORRUPTION (0)
    2570 #endif
    2571 
    2572 #ifndef VMA_DEBUG_GLOBAL_MUTEX
    2573 
    2577  #define VMA_DEBUG_GLOBAL_MUTEX (0)
    2578 #endif
    2579 
    2580 #ifndef VMA_DEBUG_MIN_BUFFER_IMAGE_GRANULARITY
    2581 
    2585  #define VMA_DEBUG_MIN_BUFFER_IMAGE_GRANULARITY (1)
    2586 #endif
    2587 
    2588 #ifndef VMA_SMALL_HEAP_MAX_SIZE
    2589  #define VMA_SMALL_HEAP_MAX_SIZE (1024ull * 1024 * 1024)
    2591 #endif
    2592 
    2593 #ifndef VMA_DEFAULT_LARGE_HEAP_BLOCK_SIZE
    2594  #define VMA_DEFAULT_LARGE_HEAP_BLOCK_SIZE (256ull * 1024 * 1024)
    2596 #endif
    2597 
    2598 #ifndef VMA_CLASS_NO_COPY
    2599  #define VMA_CLASS_NO_COPY(className) \
    2600  private: \
    2601  className(const className&) = delete; \
    2602  className& operator=(const className&) = delete;
    2603 #endif
    2604 
    2605 static const uint32_t VMA_FRAME_INDEX_LOST = UINT32_MAX;
    2606 
    2607 // Decimal 2139416166, float NaN, little-endian binary 66 E6 84 7F.
    2608 static const uint32_t VMA_CORRUPTION_DETECTION_MAGIC_VALUE = 0x7F84E666;
    2609 
    2610 /*******************************************************************************
    2611 END OF CONFIGURATION
    2612 */
    2613 
    2614 static VkAllocationCallbacks VmaEmptyAllocationCallbacks = {
    2615  VMA_NULL, VMA_NULL, VMA_NULL, VMA_NULL, VMA_NULL, VMA_NULL };
    2616 
    2617 // Returns number of bits set to 1 in (v).
    2618 static inline uint32_t VmaCountBitsSet(uint32_t v)
    2619 {
    2620  uint32_t c = v - ((v >> 1) & 0x55555555);
    2621  c = ((c >> 2) & 0x33333333) + (c & 0x33333333);
    2622  c = ((c >> 4) + c) & 0x0F0F0F0F;
    2623  c = ((c >> 8) + c) & 0x00FF00FF;
    2624  c = ((c >> 16) + c) & 0x0000FFFF;
    2625  return c;
    2626 }
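The SWAR (SIMD-within-a-register) population count above can be exercised in isolation; this standalone copy mirrors the function body:

```cpp
#include <cstdint>

// Standalone copy of the bit-counting trick used by VmaCountBitsSet:
// pairwise bit sums are folded into progressively wider fields.
static inline uint32_t CountBitsSet(uint32_t v)
{
    uint32_t c = v - ((v >> 1) & 0x55555555);
    c = ((c >> 2) & 0x33333333) + (c & 0x33333333);
    c = ((c >> 4) + c) & 0x0F0F0F0F;
    c = ((c >> 8) + c) & 0x00FF00FF;
    c = ((c >> 16) + c) & 0x0000FFFF;
    return c;
}
```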
    2627 
    2628 // Aligns given value up to nearest multiple of align value. For example: VmaAlignUp(11, 8) = 16.
    2629 // Use types like uint32_t, uint64_t as T.
    2630 template <typename T>
    2631 static inline T VmaAlignUp(T val, T align)
    2632 {
    2633  return (val + align - 1) / align * align;
    2634 }
    2635 // Aligns given value down to nearest multiple of align value. For example: VmaAlignDown(11, 8) = 8.
    2636 // Use types like uint32_t, uint64_t as T.
    2637 template <typename T>
    2638 static inline T VmaAlignDown(T val, T align)
    2639 {
    2640  return val / align * align;
    2641 }
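The two alignment helpers can be checked against the examples in their comments; a standalone copy:

```cpp
#include <cstdint>

// Standalone copies of the alignment helpers. They require align > 0;
// VMA only calls them with power-of-two alignments, but this integer
// arithmetic is correct for any positive align.
template <typename T>
static inline T AlignUp(T val, T align) { return (val + align - 1) / align * align; }

template <typename T>
static inline T AlignDown(T val, T align) { return val / align * align; }
```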
    2642 
    2643 // Division with mathematical rounding to nearest number.
    2644 template <typename T>
    2645 inline T VmaRoundDiv(T x, T y)
    2646 {
    2647  return (x + (y / (T)2)) / y;
    2648 }
    2649 
    2650 #ifndef VMA_SORT
    2651 
    2652 template<typename Iterator, typename Compare>
    2653 Iterator VmaQuickSortPartition(Iterator beg, Iterator end, Compare cmp)
    2654 {
    2655  Iterator centerValue = end; --centerValue;
    2656  Iterator insertIndex = beg;
    2657  for(Iterator memTypeIndex = beg; memTypeIndex < centerValue; ++memTypeIndex)
    2658  {
    2659  if(cmp(*memTypeIndex, *centerValue))
    2660  {
    2661  if(insertIndex != memTypeIndex)
    2662  {
    2663  VMA_SWAP(*memTypeIndex, *insertIndex);
    2664  }
    2665  ++insertIndex;
    2666  }
    2667  }
    2668  if(insertIndex != centerValue)
    2669  {
    2670  VMA_SWAP(*insertIndex, *centerValue);
    2671  }
    2672  return insertIndex;
    2673 }
    2674 
    2675 template<typename Iterator, typename Compare>
    2676 void VmaQuickSort(Iterator beg, Iterator end, Compare cmp)
    2677 {
    2678  if(beg < end)
    2679  {
    2680  Iterator it = VmaQuickSortPartition<Iterator, Compare>(beg, end, cmp);
    2681  VmaQuickSort<Iterator, Compare>(beg, it, cmp);
    2682  VmaQuickSort<Iterator, Compare>(it + 1, end, cmp);
    2683  }
    2684 }
    2685 
    2686 #define VMA_SORT(beg, end, cmp) VmaQuickSort(beg, end, cmp)
    2687 
    2688 #endif // #ifndef VMA_SORT
    2689 
    2690 /*
    2691 Returns true if two memory blocks occupy overlapping pages.
    2692 ResourceA must be at a lower memory offset than ResourceB.
    2693 
    2694 Algorithm is based on "Vulkan 1.0.39 - A Specification (with all registered Vulkan extensions)"
    2695 chapter 11.6 "Resource Memory Association", paragraph "Buffer-Image Granularity".
    2696 */
    2697 static inline bool VmaBlocksOnSamePage(
    2698  VkDeviceSize resourceAOffset,
    2699  VkDeviceSize resourceASize,
    2700  VkDeviceSize resourceBOffset,
    2701  VkDeviceSize pageSize)
    2702 {
    2703  VMA_ASSERT(resourceAOffset + resourceASize <= resourceBOffset && resourceASize > 0 && pageSize > 0);
    2704  VkDeviceSize resourceAEnd = resourceAOffset + resourceASize - 1;
    2705  VkDeviceSize resourceAEndPage = resourceAEnd & ~(pageSize - 1);
    2706  VkDeviceSize resourceBStart = resourceBOffset;
    2707  VkDeviceSize resourceBStartPage = resourceBStart & ~(pageSize - 1);
    2708  return resourceAEndPage == resourceBStartPage;
    2709 }
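The page test above can be tried standalone. This copy (with `DeviceSize` standing in for `VkDeviceSize`) shows that two resources conflict only when the last page touched by A is the first page touched by B; `pageSize` must be a power of two, as `bufferImageGranularity` is:

```cpp
#include <cstdint>

typedef uint64_t DeviceSize; // stand-in for VkDeviceSize

// Standalone copy of the bufferImageGranularity page-overlap test.
static inline bool BlocksOnSamePage(
    DeviceSize resourceAOffset, DeviceSize resourceASize,
    DeviceSize resourceBOffset, DeviceSize pageSize)
{
    DeviceSize resourceAEnd = resourceAOffset + resourceASize - 1;
    DeviceSize resourceAEndPage = resourceAEnd & ~(pageSize - 1);      // page of A's last byte
    DeviceSize resourceBStartPage = resourceBOffset & ~(pageSize - 1); // page of B's first byte
    return resourceAEndPage == resourceBStartPage;
}
```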
    2710 
    2711 enum VmaSuballocationType
    2712 {
    2713  VMA_SUBALLOCATION_TYPE_FREE = 0,
    2714  VMA_SUBALLOCATION_TYPE_UNKNOWN = 1,
    2715  VMA_SUBALLOCATION_TYPE_BUFFER = 2,
    2716  VMA_SUBALLOCATION_TYPE_IMAGE_UNKNOWN = 3,
    2717  VMA_SUBALLOCATION_TYPE_IMAGE_LINEAR = 4,
    2718  VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL = 5,
    2719  VMA_SUBALLOCATION_TYPE_MAX_ENUM = 0x7FFFFFFF
    2720 };
    2721 
    2722 /*
    2723 Returns true if given suballocation types could conflict and must respect
    2724 VkPhysicalDeviceLimits::bufferImageGranularity. They conflict if one is buffer
    2725 or linear image and the other is an optimal image. If the type is unknown, behave
    2726 conservatively.
    2727 */
    2728 static inline bool VmaIsBufferImageGranularityConflict(
    2729  VmaSuballocationType suballocType1,
    2730  VmaSuballocationType suballocType2)
    2731 {
    2732  if(suballocType1 > suballocType2)
    2733  {
    2734  VMA_SWAP(suballocType1, suballocType2);
    2735  }
    2736 
    2737  switch(suballocType1)
    2738  {
    2739  case VMA_SUBALLOCATION_TYPE_FREE:
    2740  return false;
    2741  case VMA_SUBALLOCATION_TYPE_UNKNOWN:
    2742  return true;
    2743  case VMA_SUBALLOCATION_TYPE_BUFFER:
    2744  return
    2745  suballocType2 == VMA_SUBALLOCATION_TYPE_IMAGE_UNKNOWN ||
    2746  suballocType2 == VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL;
    2747  case VMA_SUBALLOCATION_TYPE_IMAGE_UNKNOWN:
    2748  return
    2749  suballocType2 == VMA_SUBALLOCATION_TYPE_IMAGE_UNKNOWN ||
    2750  suballocType2 == VMA_SUBALLOCATION_TYPE_IMAGE_LINEAR ||
    2751  suballocType2 == VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL;
    2752  case VMA_SUBALLOCATION_TYPE_IMAGE_LINEAR:
    2753  return
    2754  suballocType2 == VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL;
    2755  case VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL:
    2756  return false;
    2757  default:
    2758  VMA_ASSERT(0);
    2759  return true;
    2760  }
    2761 }
    2762 
    2763 static void VmaWriteMagicValue(void* pData, VkDeviceSize offset)
    2764 {
    2765  uint32_t* pDst = (uint32_t*)((char*)pData + offset);
    2766  const size_t numberCount = VMA_DEBUG_MARGIN / sizeof(uint32_t);
    2767  for(size_t i = 0; i < numberCount; ++i, ++pDst)
    2768  {
    2769  *pDst = VMA_CORRUPTION_DETECTION_MAGIC_VALUE;
    2770  }
    2771 }
    2772 
    2773 static bool VmaValidateMagicValue(const void* pData, VkDeviceSize offset)
    2774 {
    2775  const uint32_t* pSrc = (const uint32_t*)((const char*)pData + offset);
    2776  const size_t numberCount = VMA_DEBUG_MARGIN / sizeof(uint32_t);
    2777  for(size_t i = 0; i < numberCount; ++i, ++pSrc)
    2778  {
    2779  if(*pSrc != VMA_CORRUPTION_DETECTION_MAGIC_VALUE)
    2780  {
    2781  return false;
    2782  }
    2783  }
    2784  return true;
    2785 }
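Note that `VMA_DEBUG_MARGIN` is 0 by default, so the two loops above execute zero iterations unless a margin is configured. A standalone round trip of the same write/validate pattern, assuming a 16-byte margin:

```cpp
#include <cstddef>
#include <cstdint>

static const uint32_t MAGIC = 0x7F84E666; // same NaN bit pattern as above
static const size_t MARGIN = 16;          // assumed stand-in for VMA_DEBUG_MARGIN

// Fill MARGIN bytes at the given offset with the magic value.
static void WriteMagic(void* pData, size_t offset)
{
    uint32_t* pDst = (uint32_t*)((char*)pData + offset);
    for(size_t i = 0; i < MARGIN / sizeof(uint32_t); ++i, ++pDst)
        *pDst = MAGIC;
}

// Return false if any word of the margin was overwritten.
static bool ValidateMagic(const void* pData, size_t offset)
{
    const uint32_t* pSrc = (const uint32_t*)((const char*)pData + offset);
    for(size_t i = 0; i < MARGIN / sizeof(uint32_t); ++i, ++pSrc)
        if(*pSrc != MAGIC)
            return false;
    return true;
}
```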
    2786 
    2787 // Helper RAII class to lock a mutex in constructor and unlock it in destructor (at the end of scope).
    2788 struct VmaMutexLock
    2789 {
    2790  VMA_CLASS_NO_COPY(VmaMutexLock)
    2791 public:
    2792  VmaMutexLock(VMA_MUTEX& mutex, bool useMutex) :
    2793  m_pMutex(useMutex ? &mutex : VMA_NULL)
    2794  {
    2795  if(m_pMutex)
    2796  {
    2797  m_pMutex->Lock();
    2798  }
    2799  }
    2800 
    2801  ~VmaMutexLock()
    2802  {
    2803  if(m_pMutex)
    2804  {
    2805  m_pMutex->Unlock();
    2806  }
    2807  }
    2808 
    2809 private:
    2810  VMA_MUTEX* m_pMutex;
    2811 };
    2812 
    2813 #if VMA_DEBUG_GLOBAL_MUTEX
    2814  static VMA_MUTEX gDebugGlobalMutex;
    2815  #define VMA_DEBUG_GLOBAL_MUTEX_LOCK VmaMutexLock debugGlobalMutexLock(gDebugGlobalMutex, true);
    2816 #else
    2817  #define VMA_DEBUG_GLOBAL_MUTEX_LOCK
    2818 #endif
    2819 
    2820 // Minimum size of a free suballocation to register it in the free suballocation collection.
    2821 static const VkDeviceSize VMA_MIN_FREE_SUBALLOCATION_SIZE_TO_REGISTER = 16;
    2822 
    2823 /*
    2824 Performs binary search and returns an iterator to the first element that is
    2825 greater than or equal to (key), according to comparison (cmp).
    2826 
    2827 Cmp should return true if its first argument is less than its second argument.
    2828 
    2829 The returned value is the found element, if present in the collection, or the
    2830 place where a new element with value (key) should be inserted.
    2831 */
    2832 template <typename IterT, typename KeyT, typename CmpT>
    2833 static IterT VmaBinaryFindFirstNotLess(IterT beg, IterT end, const KeyT &key, CmpT cmp)
    2834 {
    2835  size_t down = 0, up = (end - beg);
    2836  while(down < up)
    2837  {
    2838  const size_t mid = (down + up) / 2;
    2839  if(cmp(*(beg+mid), key))
    2840  {
    2841  down = mid + 1;
    2842  }
    2843  else
    2844  {
    2845  up = mid;
    2846  }
    2847  }
    2848  return beg + down;
    2849 }
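The lower-bound search above behaves like `std::lower_bound`; a standalone copy shows the returned iterator is the insertion point for the key:

```cpp
#include <cstddef>

// Standalone copy of the binary search: returns an iterator to the first
// element that is not less than key, i.e. the insertion point for key.
template <typename IterT, typename KeyT, typename CmpT>
static IterT BinaryFindFirstNotLess(IterT beg, IterT end, const KeyT& key, CmpT cmp)
{
    size_t down = 0, up = (end - beg);
    while(down < up)
    {
        const size_t mid = (down + up) / 2;
        if(cmp(*(beg + mid), key))
            down = mid + 1;
        else
            up = mid;
    }
    return beg + down;
}
```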
    2850 
    2852 // Memory allocation
    2853 
    2854 static void* VmaMalloc(const VkAllocationCallbacks* pAllocationCallbacks, size_t size, size_t alignment)
    2855 {
    2856  if((pAllocationCallbacks != VMA_NULL) &&
    2857  (pAllocationCallbacks->pfnAllocation != VMA_NULL))
    2858  {
    2859  return (*pAllocationCallbacks->pfnAllocation)(
    2860  pAllocationCallbacks->pUserData,
    2861  size,
    2862  alignment,
    2863  VK_SYSTEM_ALLOCATION_SCOPE_OBJECT);
    2864  }
    2865  else
    2866  {
    2867  return VMA_SYSTEM_ALIGNED_MALLOC(size, alignment);
    2868  }
    2869 }
    2870 
    2871 static void VmaFree(const VkAllocationCallbacks* pAllocationCallbacks, void* ptr)
    2872 {
    2873  if((pAllocationCallbacks != VMA_NULL) &&
    2874  (pAllocationCallbacks->pfnFree != VMA_NULL))
    2875  {
    2876  (*pAllocationCallbacks->pfnFree)(pAllocationCallbacks->pUserData, ptr);
    2877  }
    2878  else
    2879  {
    2880  VMA_SYSTEM_FREE(ptr);
    2881  }
    2882 }
    2883 
    2884 template<typename T>
    2885 static T* VmaAllocate(const VkAllocationCallbacks* pAllocationCallbacks)
    2886 {
    2887  return (T*)VmaMalloc(pAllocationCallbacks, sizeof(T), VMA_ALIGN_OF(T));
    2888 }
    2889 
    2890 template<typename T>
    2891 static T* VmaAllocateArray(const VkAllocationCallbacks* pAllocationCallbacks, size_t count)
    2892 {
    2893  return (T*)VmaMalloc(pAllocationCallbacks, sizeof(T) * count, VMA_ALIGN_OF(T));
    2894 }
    2895 
    2896 #define vma_new(allocator, type) new(VmaAllocate<type>(allocator))(type)
    2897 
    2898 #define vma_new_array(allocator, type, count) new(VmaAllocateArray<type>((allocator), (count)))(type)
    2899 
    2900 template<typename T>
    2901 static void vma_delete(const VkAllocationCallbacks* pAllocationCallbacks, T* ptr)
    2902 {
    2903  ptr->~T();
    2904  VmaFree(pAllocationCallbacks, ptr);
    2905 }
    2906 
    2907 template<typename T>
    2908 static void vma_delete_array(const VkAllocationCallbacks* pAllocationCallbacks, T* ptr, size_t count)
    2909 {
    2910  if(ptr != VMA_NULL)
    2911  {
    2912  for(size_t i = count; i--; )
    2913  {
    2914  ptr[i].~T();
    2915  }
    2916  VmaFree(pAllocationCallbacks, ptr);
    2917  }
    2918 }
    2919 
    2920 // STL-compatible allocator.
    2921 template<typename T>
    2922 class VmaStlAllocator
    2923 {
    2924 public:
    2925  const VkAllocationCallbacks* const m_pCallbacks;
    2926  typedef T value_type;
    2927 
    2928  VmaStlAllocator(const VkAllocationCallbacks* pCallbacks) : m_pCallbacks(pCallbacks) { }
    2929  template<typename U> VmaStlAllocator(const VmaStlAllocator<U>& src) : m_pCallbacks(src.m_pCallbacks) { }
    2930 
    2931  T* allocate(size_t n) { return VmaAllocateArray<T>(m_pCallbacks, n); }
    2932  void deallocate(T* p, size_t n) { VmaFree(m_pCallbacks, p); }
    2933 
    2934  template<typename U>
    2935  bool operator==(const VmaStlAllocator<U>& rhs) const
    2936  {
    2937  return m_pCallbacks == rhs.m_pCallbacks;
    2938  }
    2939  template<typename U>
    2940  bool operator!=(const VmaStlAllocator<U>& rhs) const
    2941  {
    2942  return m_pCallbacks != rhs.m_pCallbacks;
    2943  }
    2944 
    2945  VmaStlAllocator& operator=(const VmaStlAllocator& x) = delete;
    2946 };
    2947 
    2948 #if VMA_USE_STL_VECTOR
    2949 
    2950 #define VmaVector std::vector
    2951 
    2952 template<typename T, typename allocatorT>
    2953 static void VmaVectorInsert(std::vector<T, allocatorT>& vec, size_t index, const T& item)
    2954 {
    2955  vec.insert(vec.begin() + index, item);
    2956 }
    2957 
    2958 template<typename T, typename allocatorT>
    2959 static void VmaVectorRemove(std::vector<T, allocatorT>& vec, size_t index)
    2960 {
    2961  vec.erase(vec.begin() + index);
    2962 }
    2963 
    2964 #else // #if VMA_USE_STL_VECTOR
    2965 
    2966 /* Class with interface compatible with a subset of std::vector.
    2967 T must be POD because constructors and destructors are not called and memcpy is
    2968 used for these objects. */
    2969 template<typename T, typename AllocatorT>
    2970 class VmaVector
    2971 {
    2972 public:
    2973  typedef T value_type;
    2974 
    2975  VmaVector(const AllocatorT& allocator) :
    2976  m_Allocator(allocator),
    2977  m_pArray(VMA_NULL),
    2978  m_Count(0),
    2979  m_Capacity(0)
    2980  {
    2981  }
    2982 
    2983  VmaVector(size_t count, const AllocatorT& allocator) :
    2984  m_Allocator(allocator),
    2985  m_pArray(count ? (T*)VmaAllocateArray<T>(allocator.m_pCallbacks, count) : VMA_NULL),
    2986  m_Count(count),
    2987  m_Capacity(count)
    2988  {
    2989  }
    2990 
    2991  VmaVector(const VmaVector<T, AllocatorT>& src) :
    2992  m_Allocator(src.m_Allocator),
    2993  m_pArray(src.m_Count ? (T*)VmaAllocateArray<T>(src.m_Allocator.m_pCallbacks, src.m_Count) : VMA_NULL),
    2994  m_Count(src.m_Count),
    2995  m_Capacity(src.m_Count)
    2996  {
    2997  if(m_Count != 0)
    2998  {
    2999  memcpy(m_pArray, src.m_pArray, m_Count * sizeof(T));
    3000  }
    3001  }
    3002 
    3003  ~VmaVector()
    3004  {
    3005  VmaFree(m_Allocator.m_pCallbacks, m_pArray);
    3006  }
    3007 
    3008  VmaVector& operator=(const VmaVector<T, AllocatorT>& rhs)
    3009  {
    3010  if(&rhs != this)
    3011  {
    3012  resize(rhs.m_Count);
    3013  if(m_Count != 0)
    3014  {
    3015  memcpy(m_pArray, rhs.m_pArray, m_Count * sizeof(T));
    3016  }
    3017  }
    3018  return *this;
    3019  }
    3020 
    3021  bool empty() const { return m_Count == 0; }
    3022  size_t size() const { return m_Count; }
    3023  T* data() { return m_pArray; }
    3024  const T* data() const { return m_pArray; }
    3025 
    3026  T& operator[](size_t index)
    3027  {
    3028  VMA_HEAVY_ASSERT(index < m_Count);
    3029  return m_pArray[index];
    3030  }
    3031  const T& operator[](size_t index) const
    3032  {
    3033  VMA_HEAVY_ASSERT(index < m_Count);
    3034  return m_pArray[index];
    3035  }
    3036 
    3037  T& front()
    3038  {
    3039  VMA_HEAVY_ASSERT(m_Count > 0);
    3040  return m_pArray[0];
    3041  }
    3042  const T& front() const
    3043  {
    3044  VMA_HEAVY_ASSERT(m_Count > 0);
    3045  return m_pArray[0];
    3046  }
    3047  T& back()
    3048  {
    3049  VMA_HEAVY_ASSERT(m_Count > 0);
    3050  return m_pArray[m_Count - 1];
    3051  }
    3052  const T& back() const
    3053  {
    3054  VMA_HEAVY_ASSERT(m_Count > 0);
    3055  return m_pArray[m_Count - 1];
    3056  }
    3057 
    3058  void reserve(size_t newCapacity, bool freeMemory = false)
    3059  {
    3060  newCapacity = VMA_MAX(newCapacity, m_Count);
    3061 
    3062  if((newCapacity < m_Capacity) && !freeMemory)
    3063  {
    3064  newCapacity = m_Capacity;
    3065  }
    3066 
    3067  if(newCapacity != m_Capacity)
    3068  {
    3069  T* const newArray = newCapacity ? VmaAllocateArray<T>(m_Allocator.m_pCallbacks, newCapacity) : VMA_NULL;
    3070  if(m_Count != 0)
    3071  {
    3072  memcpy(newArray, m_pArray, m_Count * sizeof(T));
    3073  }
    3074  VmaFree(m_Allocator.m_pCallbacks, m_pArray);
    3075  m_Capacity = newCapacity;
    3076  m_pArray = newArray;
    3077  }
    3078  }
    3079 
    3080  void resize(size_t newCount, bool freeMemory = false)
    3081  {
    3082  size_t newCapacity = m_Capacity;
    3083  if(newCount > m_Capacity)
    3084  {
    3085  newCapacity = VMA_MAX(newCount, VMA_MAX(m_Capacity * 3 / 2, (size_t)8));
    3086  }
    3087  else if(freeMemory)
    3088  {
    3089  newCapacity = newCount;
    3090  }
    3091 
    3092  if(newCapacity != m_Capacity)
    3093  {
    3094  T* const newArray = newCapacity ? VmaAllocateArray<T>(m_Allocator.m_pCallbacks, newCapacity) : VMA_NULL;
    3095  const size_t elementsToCopy = VMA_MIN(m_Count, newCount);
    3096  if(elementsToCopy != 0)
    3097  {
    3098  memcpy(newArray, m_pArray, elementsToCopy * sizeof(T));
    3099  }
    3100  VmaFree(m_Allocator.m_pCallbacks, m_pArray);
    3101  m_Capacity = newCapacity;
    3102  m_pArray = newArray;
    3103  }
    3104 
    3105  m_Count = newCount;
    3106  }
    3107 
    3108  void clear(bool freeMemory = false)
    3109  {
    3110  resize(0, freeMemory);
    3111  }
    3112 
    3113  void insert(size_t index, const T& src)
    3114  {
    3115  VMA_HEAVY_ASSERT(index <= m_Count);
    3116  const size_t oldCount = size();
    3117  resize(oldCount + 1);
    3118  if(index < oldCount)
    3119  {
    3120  memmove(m_pArray + (index + 1), m_pArray + index, (oldCount - index) * sizeof(T));
    3121  }
    3122  m_pArray[index] = src;
    3123  }
    3124 
    3125  void remove(size_t index)
    3126  {
    3127  VMA_HEAVY_ASSERT(index < m_Count);
    3128  const size_t oldCount = size();
    3129  if(index < oldCount - 1)
    3130  {
    3131  memmove(m_pArray + index, m_pArray + (index + 1), (oldCount - index - 1) * sizeof(T));
    3132  }
    3133  resize(oldCount - 1);
    3134  }
    3135 
    3136  void push_back(const T& src)
    3137  {
    3138  const size_t newIndex = size();
    3139  resize(newIndex + 1);
    3140  m_pArray[newIndex] = src;
    3141  }
    3142 
    3143  void pop_back()
    3144  {
    3145  VMA_HEAVY_ASSERT(m_Count > 0);
    3146  resize(size() - 1);
    3147  }
    3148 
    3149  void push_front(const T& src)
    3150  {
    3151  insert(0, src);
    3152  }
    3153 
    3154  void pop_front()
    3155  {
    3156  VMA_HEAVY_ASSERT(m_Count > 0);
    3157  remove(0);
    3158  }
    3159 
    3160  typedef T* iterator;
    3161 
    3162  iterator begin() { return m_pArray; }
    3163  iterator end() { return m_pArray + m_Count; }
    3164 
    3165 private:
    3166  AllocatorT m_Allocator;
    3167  T* m_pArray;
    3168  size_t m_Count;
    3169  size_t m_Capacity;
    3170 };
    3171 
    3172 template<typename T, typename allocatorT>
    3173 static void VmaVectorInsert(VmaVector<T, allocatorT>& vec, size_t index, const T& item)
    3174 {
    3175  vec.insert(index, item);
    3176 }
    3177 
    3178 template<typename T, typename allocatorT>
    3179 static void VmaVectorRemove(VmaVector<T, allocatorT>& vec, size_t index)
    3180 {
    3181  vec.remove(index);
    3182 }
    3183 
    3184 #endif // #if VMA_USE_STL_VECTOR
    3185 
    3186 template<typename CmpLess, typename VectorT>
    3187 size_t VmaVectorInsertSorted(VectorT& vector, const typename VectorT::value_type& value)
    3188 {
    3189  const size_t indexToInsert = VmaBinaryFindFirstNotLess(
    3190  vector.data(),
    3191  vector.data() + vector.size(),
    3192  value,
    3193  CmpLess()) - vector.data();
    3194  VmaVectorInsert(vector, indexToInsert, value);
    3195  return indexToInsert;
    3196 }
    3197 
    3198 template<typename CmpLess, typename VectorT>
    3199 bool VmaVectorRemoveSorted(VectorT& vector, const typename VectorT::value_type& value)
    3200 {
    3201  CmpLess comparator;
    3202  typename VectorT::iterator it = VmaBinaryFindFirstNotLess(
    3203  vector.begin(),
    3204  vector.end(),
    3205  value,
    3206  comparator);
    3207  if((it != vector.end()) && !comparator(*it, value) && !comparator(value, *it))
    3208  {
    3209  size_t indexToRemove = it - vector.begin();
    3210  VmaVectorRemove(vector, indexToRemove);
    3211  return true;
    3212  }
    3213  return false;
    3214 }
    3215 
    3216 template<typename CmpLess, typename VectorT>
    3217 size_t VmaVectorFindSorted(const VectorT& vector, const typename VectorT::value_type& value)
    3218 {
    3219  CmpLess comparator;
    3220  const typename VectorT::value_type* it = VmaBinaryFindFirstNotLess(
    3221  vector.data(),
    3222  vector.data() + vector.size(),
    3223  value,
    3224  comparator);
    3225  if(it != vector.data() + vector.size() && !comparator(*it, value) && !comparator(value, *it))
    3226  {
    3227  return it - vector.data();
    3228  }
    3229  else
    3230  {
    3231  return vector.size();
    3232  }
    3233 }
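The sorted-vector helpers above keep a flat vector ordered by inserting at the lower bound. An equivalent sketch using `std::vector` and `std::lower_bound` (helper name `InsertSorted` is illustrative, not part of the library):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// std::vector equivalent of VmaVectorInsertSorted: find the lower bound
// and insert there, so the vector stays sorted. Returns the index used.
template <typename T>
static size_t InsertSorted(std::vector<T>& vec, const T& value)
{
    const size_t index =
        std::lower_bound(vec.begin(), vec.end(), value) - vec.begin();
    vec.insert(vec.begin() + index, value);
    return index;
}
```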
    3234 
    3236 // class VmaPoolAllocator
    3237 
    3238 /*
    3239 Allocator for objects of type T using a list of arrays (pools) to speed up
    3240 allocation. The number of elements that can be allocated is not bounded,
    3241 because the allocator can create multiple blocks.
    3242 */
    3243 template<typename T>
    3244 class VmaPoolAllocator
    3245 {
    3246  VMA_CLASS_NO_COPY(VmaPoolAllocator)
    3247 public:
    3248  VmaPoolAllocator(const VkAllocationCallbacks* pAllocationCallbacks, size_t itemsPerBlock);
    3249  ~VmaPoolAllocator();
    3250  void Clear();
    3251  T* Alloc();
    3252  void Free(T* ptr);
    3253 
    3254 private:
    3255  union Item
    3256  {
    3257  uint32_t NextFreeIndex;
    3258  T Value;
    3259  };
    3260 
    3261  struct ItemBlock
    3262  {
    3263  Item* pItems;
    3264  uint32_t FirstFreeIndex;
    3265  };
    3266 
    3267  const VkAllocationCallbacks* m_pAllocationCallbacks;
    3268  size_t m_ItemsPerBlock;
    3269  VmaVector< ItemBlock, VmaStlAllocator<ItemBlock> > m_ItemBlocks;
    3270 
    3271  ItemBlock& CreateNewBlock();
    3272 };
    3273 
    3274 template<typename T>
    3275 VmaPoolAllocator<T>::VmaPoolAllocator(const VkAllocationCallbacks* pAllocationCallbacks, size_t itemsPerBlock) :
    3276  m_pAllocationCallbacks(pAllocationCallbacks),
    3277  m_ItemsPerBlock(itemsPerBlock),
    3278  m_ItemBlocks(VmaStlAllocator<ItemBlock>(pAllocationCallbacks))
    3279 {
    3280  VMA_ASSERT(itemsPerBlock > 0);
    3281 }
    3282 
    3283 template<typename T>
    3284 VmaPoolAllocator<T>::~VmaPoolAllocator()
    3285 {
    3286  Clear();
    3287 }
    3288 
    3289 template<typename T>
    3290 void VmaPoolAllocator<T>::Clear()
    3291 {
    3292  for(size_t i = m_ItemBlocks.size(); i--; )
    3293  vma_delete_array(m_pAllocationCallbacks, m_ItemBlocks[i].pItems, m_ItemsPerBlock);
    3294  m_ItemBlocks.clear();
    3295 }
    3296 
    3297 template<typename T>
    3298 T* VmaPoolAllocator<T>::Alloc()
    3299 {
    3300  for(size_t i = m_ItemBlocks.size(); i--; )
    3301  {
    3302  ItemBlock& block = m_ItemBlocks[i];
    3303  // This block has some free items: Use first one.
    3304  if(block.FirstFreeIndex != UINT32_MAX)
    3305  {
    3306  Item* const pItem = &block.pItems[block.FirstFreeIndex];
    3307  block.FirstFreeIndex = pItem->NextFreeIndex;
    3308  return &pItem->Value;
    3309  }
    3310  }
    3311 
    3312  // No block has free item: Create new one and use it.
    3313  ItemBlock& newBlock = CreateNewBlock();
    3314  Item* const pItem = &newBlock.pItems[0];
    3315  newBlock.FirstFreeIndex = pItem->NextFreeIndex;
    3316  return &pItem->Value;
    3317 }
    3318 
    3319 template<typename T>
    3320 void VmaPoolAllocator<T>::Free(T* ptr)
    3321 {
    3322  // Search all memory blocks to find ptr.
    3323  for(size_t i = 0; i < m_ItemBlocks.size(); ++i)
    3324  {
    3325  ItemBlock& block = m_ItemBlocks[i];
    3326 
    3327  // Casting to union.
    3328  Item* pItemPtr;
    3329  memcpy(&pItemPtr, &ptr, sizeof(pItemPtr));
    3330 
    3331  // Check if pItemPtr is in address range of this block.
    3332  if((pItemPtr >= block.pItems) && (pItemPtr < block.pItems + m_ItemsPerBlock))
    3333  {
    3334  const uint32_t index = static_cast<uint32_t>(pItemPtr - block.pItems);
    3335  pItemPtr->NextFreeIndex = block.FirstFreeIndex;
    3336  block.FirstFreeIndex = index;
    3337  return;
    3338  }
    3339  }
    3340  VMA_ASSERT(0 && "Pointer doesn't belong to this memory pool.");
    3341 }
    3342 
    3343 template<typename T>
    3344 typename VmaPoolAllocator<T>::ItemBlock& VmaPoolAllocator<T>::CreateNewBlock()
    3345 {
    3346  ItemBlock newBlock = {
    3347  vma_new_array(m_pAllocationCallbacks, Item, m_ItemsPerBlock), 0 };
    3348 
    3349  m_ItemBlocks.push_back(newBlock);
    3350 
    3351  // Set up singly-linked list of all free items in this block.
    3352  for(uint32_t i = 0; i < m_ItemsPerBlock - 1; ++i)
    3353  newBlock.pItems[i].NextFreeIndex = i + 1;
    3354  newBlock.pItems[m_ItemsPerBlock - 1].NextFreeIndex = UINT32_MAX;
    3355  return m_ItemBlocks.back();
    3356 }
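The free-list scheme used by `VmaPoolAllocator` stores, in each free slot, the index of the next free slot, reusing the same storage that otherwise holds the payload (the `Item` union above). A minimal standalone sketch of that intrusive free list (names `MiniPool`/`Slot` are illustrative):

```cpp
#include <cstdint>
#include <vector>

// Each free slot records the index of the next free slot; UINT32_MAX
// terminates the list, mirroring VmaPoolAllocator's Item union.
struct Slot { uint32_t nextFreeIndex; };

struct MiniPool
{
    std::vector<Slot> items;
    uint32_t firstFree;

    explicit MiniPool(uint32_t count) : items(count), firstFree(0)
    {
        for(uint32_t i = 0; i + 1 < count; ++i)
            items[i].nextFreeIndex = i + 1;
        items[count - 1].nextFreeIndex = UINT32_MAX; // end of free list
    }
    uint32_t Alloc() // pop the first free slot
    {
        const uint32_t i = firstFree;
        firstFree = items[i].nextFreeIndex;
        return i;
    }
    void Free(uint32_t i) // push the slot back onto the free list
    {
        items[i].nextFreeIndex = firstFree;
        firstFree = i;
    }
};
```

Allocation and deallocation are both O(1), which is why the library prefers this over scanning for free slots.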
    3357 
    3359 // class VmaRawList, VmaList
    3360 
    3361 #if VMA_USE_STL_LIST
    3362 
    3363 #define VmaList std::list
    3364 
    3365 #else // #if VMA_USE_STL_LIST
    3366 
    3367 template<typename T>
    3368 struct VmaListItem
    3369 {
    3370  VmaListItem* pPrev;
    3371  VmaListItem* pNext;
    3372  T Value;
    3373 };
    3374 
    3375 // Doubly linked list.
    3376 template<typename T>
    3377 class VmaRawList
    3378 {
    3379  VMA_CLASS_NO_COPY(VmaRawList)
    3380 public:
    3381  typedef VmaListItem<T> ItemType;
    3382 
    3383  VmaRawList(const VkAllocationCallbacks* pAllocationCallbacks);
    3384  ~VmaRawList();
    3385  void Clear();
    3386 
    3387  size_t GetCount() const { return m_Count; }
    3388  bool IsEmpty() const { return m_Count == 0; }
    3389 
    3390  ItemType* Front() { return m_pFront; }
    3391  const ItemType* Front() const { return m_pFront; }
    3392  ItemType* Back() { return m_pBack; }
    3393  const ItemType* Back() const { return m_pBack; }
    3394 
    3395  ItemType* PushBack();
    3396  ItemType* PushFront();
    3397  ItemType* PushBack(const T& value);
    3398  ItemType* PushFront(const T& value);
    3399  void PopBack();
    3400  void PopFront();
    3401 
    3402  // Item can be null - it means PushBack.
    3403  ItemType* InsertBefore(ItemType* pItem);
    3404  // Item can be null - it means PushFront.
    3405  ItemType* InsertAfter(ItemType* pItem);
    3406 
    3407  ItemType* InsertBefore(ItemType* pItem, const T& value);
    3408  ItemType* InsertAfter(ItemType* pItem, const T& value);
    3409 
    3410  void Remove(ItemType* pItem);
    3411 
    3412 private:
    3413  const VkAllocationCallbacks* const m_pAllocationCallbacks;
    3414  VmaPoolAllocator<ItemType> m_ItemAllocator;
    3415  ItemType* m_pFront;
    3416  ItemType* m_pBack;
    3417  size_t m_Count;
    3418 };
    3419 
    3420 template<typename T>
    3421 VmaRawList<T>::VmaRawList(const VkAllocationCallbacks* pAllocationCallbacks) :
    3422  m_pAllocationCallbacks(pAllocationCallbacks),
    3423  m_ItemAllocator(pAllocationCallbacks, 128),
    3424  m_pFront(VMA_NULL),
    3425  m_pBack(VMA_NULL),
    3426  m_Count(0)
    3427 {
    3428 }
    3429 
    3430 template<typename T>
    3431 VmaRawList<T>::~VmaRawList()
    3432 {
    3433  // Intentionally not calling Clear, because returning all items to
    3434  // m_ItemAllocator as free would be unnecessary computation.
    3435 }
    3436 
    3437 template<typename T>
    3438 void VmaRawList<T>::Clear()
    3439 {
    3440  if(IsEmpty() == false)
    3441  {
    3442  ItemType* pItem = m_pBack;
    3443  while(pItem != VMA_NULL)
    3444  {
    3445  ItemType* const pPrevItem = pItem->pPrev;
    3446  m_ItemAllocator.Free(pItem);
    3447  pItem = pPrevItem;
    3448  }
    3449  m_pFront = VMA_NULL;
    3450  m_pBack = VMA_NULL;
    3451  m_Count = 0;
    3452  }
    3453 }
    3454 
    3455 template<typename T>
    3456 VmaListItem<T>* VmaRawList<T>::PushBack()
    3457 {
    3458  ItemType* const pNewItem = m_ItemAllocator.Alloc();
    3459  pNewItem->pNext = VMA_NULL;
    3460  if(IsEmpty())
    3461  {
    3462  pNewItem->pPrev = VMA_NULL;
    3463  m_pFront = pNewItem;
    3464  m_pBack = pNewItem;
    3465  m_Count = 1;
    3466  }
    3467  else
    3468  {
    3469  pNewItem->pPrev = m_pBack;
    3470  m_pBack->pNext = pNewItem;
    3471  m_pBack = pNewItem;
    3472  ++m_Count;
    3473  }
    3474  return pNewItem;
    3475 }
    3476 
    3477 template<typename T>
    3478 VmaListItem<T>* VmaRawList<T>::PushFront()
    3479 {
    3480  ItemType* const pNewItem = m_ItemAllocator.Alloc();
    3481  pNewItem->pPrev = VMA_NULL;
    3482  if(IsEmpty())
    3483  {
    3484  pNewItem->pNext = VMA_NULL;
    3485  m_pFront = pNewItem;
    3486  m_pBack = pNewItem;
    3487  m_Count = 1;
    3488  }
    3489  else
    3490  {
    3491  pNewItem->pNext = m_pFront;
    3492  m_pFront->pPrev = pNewItem;
    3493  m_pFront = pNewItem;
    3494  ++m_Count;
    3495  }
    3496  return pNewItem;
    3497 }
    3498 
    3499 template<typename T>
    3500 VmaListItem<T>* VmaRawList<T>::PushBack(const T& value)
    3501 {
    3502  ItemType* const pNewItem = PushBack();
    3503  pNewItem->Value = value;
    3504  return pNewItem;
    3505 }
    3506 
    3507 template<typename T>
    3508 VmaListItem<T>* VmaRawList<T>::PushFront(const T& value)
    3509 {
    3510  ItemType* const pNewItem = PushFront();
    3511  pNewItem->Value = value;
    3512  return pNewItem;
    3513 }
    3514 
    3515 template<typename T>
    3516 void VmaRawList<T>::PopBack()
    3517 {
    3518  VMA_HEAVY_ASSERT(m_Count > 0);
    3519  ItemType* const pBackItem = m_pBack;
    3520  ItemType* const pPrevItem = pBackItem->pPrev;
    3521  if(pPrevItem != VMA_NULL)
    3522  {
    3523  pPrevItem->pNext = VMA_NULL;
    3524  }
    3525  m_pBack = pPrevItem;
    3526  m_ItemAllocator.Free(pBackItem);
    3527  --m_Count;
    3528 }
    3529 
    3530 template<typename T>
    3531 void VmaRawList<T>::PopFront()
    3532 {
    3533  VMA_HEAVY_ASSERT(m_Count > 0);
    3534  ItemType* const pFrontItem = m_pFront;
    3535  ItemType* const pNextItem = pFrontItem->pNext;
    3536  if(pNextItem != VMA_NULL)
    3537  {
    3538  pNextItem->pPrev = VMA_NULL;
    3539  }
    3540  m_pFront = pNextItem;
    3541  m_ItemAllocator.Free(pFrontItem);
    3542  --m_Count;
    3543 }
    3544 
    3545 template<typename T>
    3546 void VmaRawList<T>::Remove(ItemType* pItem)
    3547 {
    3548  VMA_HEAVY_ASSERT(pItem != VMA_NULL);
    3549  VMA_HEAVY_ASSERT(m_Count > 0);
    3550 
    3551  if(pItem->pPrev != VMA_NULL)
    3552  {
    3553  pItem->pPrev->pNext = pItem->pNext;
    3554  }
    3555  else
    3556  {
    3557  VMA_HEAVY_ASSERT(m_pFront == pItem);
    3558  m_pFront = pItem->pNext;
    3559  }
    3560 
    3561  if(pItem->pNext != VMA_NULL)
    3562  {
    3563  pItem->pNext->pPrev = pItem->pPrev;
    3564  }
    3565  else
    3566  {
    3567  VMA_HEAVY_ASSERT(m_pBack == pItem);
    3568  m_pBack = pItem->pPrev;
    3569  }
    3570 
    3571  m_ItemAllocator.Free(pItem);
    3572  --m_Count;
    3573 }
    3574 
    3575 template<typename T>
    3576 VmaListItem<T>* VmaRawList<T>::InsertBefore(ItemType* pItem)
    3577 {
    3578  if(pItem != VMA_NULL)
    3579  {
    3580  ItemType* const prevItem = pItem->pPrev;
    3581  ItemType* const newItem = m_ItemAllocator.Alloc();
    3582  newItem->pPrev = prevItem;
    3583  newItem->pNext = pItem;
    3584  pItem->pPrev = newItem;
    3585  if(prevItem != VMA_NULL)
    3586  {
    3587  prevItem->pNext = newItem;
    3588  }
    3589  else
    3590  {
    3591  VMA_HEAVY_ASSERT(m_pFront == pItem);
    3592  m_pFront = newItem;
    3593  }
    3594  ++m_Count;
    3595  return newItem;
    3596  }
    3597  else
    3598  return PushBack();
    3599 }
    3600 
    3601 template<typename T>
    3602 VmaListItem<T>* VmaRawList<T>::InsertAfter(ItemType* pItem)
    3603 {
    3604  if(pItem != VMA_NULL)
    3605  {
    3606  ItemType* const nextItem = pItem->pNext;
    3607  ItemType* const newItem = m_ItemAllocator.Alloc();
    3608  newItem->pNext = nextItem;
    3609  newItem->pPrev = pItem;
    3610  pItem->pNext = newItem;
    3611  if(nextItem != VMA_NULL)
    3612  {
    3613  nextItem->pPrev = newItem;
    3614  }
    3615  else
    3616  {
    3617  VMA_HEAVY_ASSERT(m_pBack == pItem);
    3618  m_pBack = newItem;
    3619  }
    3620  ++m_Count;
    3621  return newItem;
    3622  }
    3623  else
    3624  return PushFront();
    3625 }
    3626 
    3627 template<typename T>
    3628 VmaListItem<T>* VmaRawList<T>::InsertBefore(ItemType* pItem, const T& value)
    3629 {
    3630  ItemType* const newItem = InsertBefore(pItem);
    3631  newItem->Value = value;
    3632  return newItem;
    3633 }
    3634 
    3635 template<typename T>
    3636 VmaListItem<T>* VmaRawList<T>::InsertAfter(ItemType* pItem, const T& value)
    3637 {
    3638  ItemType* const newItem = InsertAfter(pItem);
    3639  newItem->Value = value;
    3640  return newItem;
    3641 }
    3642 
    3643 template<typename T, typename AllocatorT>
    3644 class VmaList
    3645 {
    3646  VMA_CLASS_NO_COPY(VmaList)
    3647 public:
    3648  class iterator
    3649  {
    3650  public:
    3651  iterator() :
    3652  m_pList(VMA_NULL),
    3653  m_pItem(VMA_NULL)
    3654  {
    3655  }
    3656 
    3657  T& operator*() const
    3658  {
    3659  VMA_HEAVY_ASSERT(m_pItem != VMA_NULL);
    3660  return m_pItem->Value;
    3661  }
    3662  T* operator->() const
    3663  {
    3664  VMA_HEAVY_ASSERT(m_pItem != VMA_NULL);
    3665  return &m_pItem->Value;
    3666  }
    3667 
    3668  iterator& operator++()
    3669  {
    3670  VMA_HEAVY_ASSERT(m_pItem != VMA_NULL);
    3671  m_pItem = m_pItem->pNext;
    3672  return *this;
    3673  }
    3674  iterator& operator--()
    3675  {
    3676  if(m_pItem != VMA_NULL)
    3677  {
    3678  m_pItem = m_pItem->pPrev;
    3679  }
    3680  else
    3681  {
    3682  VMA_HEAVY_ASSERT(!m_pList->IsEmpty());
    3683  m_pItem = m_pList->Back();
    3684  }
    3685  return *this;
    3686  }
    3687 
    3688  iterator operator++(int)
    3689  {
    3690  iterator result = *this;
    3691  ++*this;
    3692  return result;
    3693  }
    3694  iterator operator--(int)
    3695  {
    3696  iterator result = *this;
    3697  --*this;
    3698  return result;
    3699  }
    3700 
    3701  bool operator==(const iterator& rhs) const
    3702  {
    3703  VMA_HEAVY_ASSERT(m_pList == rhs.m_pList);
    3704  return m_pItem == rhs.m_pItem;
    3705  }
    3706  bool operator!=(const iterator& rhs) const
    3707  {
    3708  VMA_HEAVY_ASSERT(m_pList == rhs.m_pList);
    3709  return m_pItem != rhs.m_pItem;
    3710  }
    3711 
    3712  private:
    3713  VmaRawList<T>* m_pList;
    3714  VmaListItem<T>* m_pItem;
    3715 
    3716  iterator(VmaRawList<T>* pList, VmaListItem<T>* pItem) :
    3717  m_pList(pList),
    3718  m_pItem(pItem)
    3719  {
    3720  }
    3721 
    3722  friend class VmaList<T, AllocatorT>;
    3723  };
    3724 
    3725  class const_iterator
    3726  {
    3727  public:
    3728  const_iterator() :
    3729  m_pList(VMA_NULL),
    3730  m_pItem(VMA_NULL)
    3731  {
    3732  }
    3733 
    3734  const_iterator(const iterator& src) :
    3735  m_pList(src.m_pList),
    3736  m_pItem(src.m_pItem)
    3737  {
    3738  }
    3739 
    3740  const T& operator*() const
    3741  {
    3742  VMA_HEAVY_ASSERT(m_pItem != VMA_NULL);
    3743  return m_pItem->Value;
    3744  }
    3745  const T* operator->() const
    3746  {
    3747  VMA_HEAVY_ASSERT(m_pItem != VMA_NULL);
    3748  return &m_pItem->Value;
    3749  }
    3750 
    3751  const_iterator& operator++()
    3752  {
    3753  VMA_HEAVY_ASSERT(m_pItem != VMA_NULL);
    3754  m_pItem = m_pItem->pNext;
    3755  return *this;
    3756  }
    3757  const_iterator& operator--()
    3758  {
    3759  if(m_pItem != VMA_NULL)
    3760  {
    3761  m_pItem = m_pItem->pPrev;
    3762  }
    3763  else
    3764  {
    3765  VMA_HEAVY_ASSERT(!m_pList->IsEmpty());
    3766  m_pItem = m_pList->Back();
    3767  }
    3768  return *this;
    3769  }
    3770 
    3771  const_iterator operator++(int)
    3772  {
    3773  const_iterator result = *this;
    3774  ++*this;
    3775  return result;
    3776  }
    3777  const_iterator operator--(int)
    3778  {
    3779  const_iterator result = *this;
    3780  --*this;
    3781  return result;
    3782  }
    3783 
    3784  bool operator==(const const_iterator& rhs) const
    3785  {
    3786  VMA_HEAVY_ASSERT(m_pList == rhs.m_pList);
    3787  return m_pItem == rhs.m_pItem;
    3788  }
    3789  bool operator!=(const const_iterator& rhs) const
    3790  {
    3791  VMA_HEAVY_ASSERT(m_pList == rhs.m_pList);
    3792  return m_pItem != rhs.m_pItem;
    3793  }
    3794 
    3795  private:
    3796  const_iterator(const VmaRawList<T>* pList, const VmaListItem<T>* pItem) :
    3797  m_pList(pList),
    3798  m_pItem(pItem)
    3799  {
    3800  }
    3801 
    3802  const VmaRawList<T>* m_pList;
    3803  const VmaListItem<T>* m_pItem;
    3804 
    3805  friend class VmaList<T, AllocatorT>;
    3806  };
    3807 
    3808  VmaList(const AllocatorT& allocator) : m_RawList(allocator.m_pCallbacks) { }
    3809 
    3810  bool empty() const { return m_RawList.IsEmpty(); }
    3811  size_t size() const { return m_RawList.GetCount(); }
    3812 
    3813  iterator begin() { return iterator(&m_RawList, m_RawList.Front()); }
    3814  iterator end() { return iterator(&m_RawList, VMA_NULL); }
    3815 
    3816  const_iterator cbegin() const { return const_iterator(&m_RawList, m_RawList.Front()); }
    3817  const_iterator cend() const { return const_iterator(&m_RawList, VMA_NULL); }
    3818 
    3819  void clear() { m_RawList.Clear(); }
    3820  void push_back(const T& value) { m_RawList.PushBack(value); }
    3821  void erase(iterator it) { m_RawList.Remove(it.m_pItem); }
    3822  iterator insert(iterator it, const T& value) { return iterator(&m_RawList, m_RawList.InsertBefore(it.m_pItem, value)); }
    3823 
    3824 private:
    3825  VmaRawList<T> m_RawList;
    3826 };
    3827 
    3828 #endif // #if VMA_USE_STL_LIST
    3829 
    3831 // class VmaMap
    3832 
    3833 // Unused in this version.
    3834 #if 0
    3835 
    3836 #if VMA_USE_STL_UNORDERED_MAP
    3837 
    3838 #define VmaPair std::pair
    3839 
    3840 #define VMA_MAP_TYPE(KeyT, ValueT) \
    3841  std::unordered_map< KeyT, ValueT, std::hash<KeyT>, std::equal_to<KeyT>, VmaStlAllocator< std::pair<KeyT, ValueT> > >
    3842 
    3843 #else // #if VMA_USE_STL_UNORDERED_MAP
    3844 
    3845 template<typename T1, typename T2>
    3846 struct VmaPair
    3847 {
    3848  T1 first;
    3849  T2 second;
    3850 
    3851  VmaPair() : first(), second() { }
    3852  VmaPair(const T1& firstSrc, const T2& secondSrc) : first(firstSrc), second(secondSrc) { }
    3853 };
    3854 
    3855 /* Class compatible with a subset of the interface of std::unordered_map.
    3856 KeyT and ValueT must be POD because they are stored in VmaVector.
    3857 */
    3858 template<typename KeyT, typename ValueT>
    3859 class VmaMap
    3860 {
    3861 public:
    3862  typedef VmaPair<KeyT, ValueT> PairType;
    3863  typedef PairType* iterator;
    3864 
    3865  VmaMap(const VmaStlAllocator<PairType>& allocator) : m_Vector(allocator) { }
    3866 
    3867  iterator begin() { return m_Vector.begin(); }
    3868  iterator end() { return m_Vector.end(); }
    3869 
    3870  void insert(const PairType& pair);
    3871  iterator find(const KeyT& key);
    3872  void erase(iterator it);
    3873 
    3874 private:
    3875  VmaVector< PairType, VmaStlAllocator<PairType> > m_Vector;
    3876 };
    3877 
    3878 #define VMA_MAP_TYPE(KeyT, ValueT) VmaMap<KeyT, ValueT>
    3879 
    3880 template<typename FirstT, typename SecondT>
    3881 struct VmaPairFirstLess
    3882 {
    3883  bool operator()(const VmaPair<FirstT, SecondT>& lhs, const VmaPair<FirstT, SecondT>& rhs) const
    3884  {
    3885  return lhs.first < rhs.first;
    3886  }
    3887  bool operator()(const VmaPair<FirstT, SecondT>& lhs, const FirstT& rhsFirst) const
    3888  {
    3889  return lhs.first < rhsFirst;
    3890  }
    3891 };
    3892 
    3893 template<typename KeyT, typename ValueT>
    3894 void VmaMap<KeyT, ValueT>::insert(const PairType& pair)
    3895 {
    3896  const size_t indexToInsert = VmaBinaryFindFirstNotLess(
    3897  m_Vector.data(),
    3898  m_Vector.data() + m_Vector.size(),
    3899  pair,
    3900  VmaPairFirstLess<KeyT, ValueT>()) - m_Vector.data();
    3901  VmaVectorInsert(m_Vector, indexToInsert, pair);
    3902 }
    3903 
    3904 template<typename KeyT, typename ValueT>
    3905 VmaPair<KeyT, ValueT>* VmaMap<KeyT, ValueT>::find(const KeyT& key)
    3906 {
    3907  PairType* it = VmaBinaryFindFirstNotLess(
    3908  m_Vector.data(),
    3909  m_Vector.data() + m_Vector.size(),
    3910  key,
    3911  VmaPairFirstLess<KeyT, ValueT>());
    3912  if((it != m_Vector.end()) && (it->first == key))
    3913  {
    3914  return it;
    3915  }
    3916  else
    3917  {
    3918  return m_Vector.end();
    3919  }
    3920 }
    3921 
    3922 template<typename KeyT, typename ValueT>
    3923 void VmaMap<KeyT, ValueT>::erase(iterator it)
    3924 {
    3925  VmaVectorRemove(m_Vector, it - m_Vector.begin());
    3926 }
    3927 
    3928 #endif // #if VMA_USE_STL_UNORDERED_MAP
    3929 
    3930 #endif // #if 0
    3931 
    3933 
    3934 class VmaDeviceMemoryBlock;
    3935 
    3936 enum VMA_CACHE_OPERATION { VMA_CACHE_FLUSH, VMA_CACHE_INVALIDATE };
    3937 
    3938 struct VmaAllocation_T
    3939 {
    3940  VMA_CLASS_NO_COPY(VmaAllocation_T)
    3941 private:
    3942  static const uint8_t MAP_COUNT_FLAG_PERSISTENT_MAP = 0x80;
    3943 
    3944  enum FLAGS
    3945  {
    3946  FLAG_USER_DATA_STRING = 0x01,
    3947  };
    3948 
    3949 public:
    3950  enum ALLOCATION_TYPE
    3951  {
    3952  ALLOCATION_TYPE_NONE,
    3953  ALLOCATION_TYPE_BLOCK,
    3954  ALLOCATION_TYPE_DEDICATED,
    3955  };
    3956 
    3957  VmaAllocation_T(uint32_t currentFrameIndex, bool userDataString) :
    3958  m_Alignment(1),
    3959  m_Size(0),
    3960  m_pUserData(VMA_NULL),
    3961  m_LastUseFrameIndex(currentFrameIndex),
    3962  m_Type((uint8_t)ALLOCATION_TYPE_NONE),
    3963  m_SuballocationType((uint8_t)VMA_SUBALLOCATION_TYPE_UNKNOWN),
    3964  m_MapCount(0),
    3965  m_Flags(userDataString ? (uint8_t)FLAG_USER_DATA_STRING : 0)
    3966  {
    3967 #if VMA_STATS_STRING_ENABLED
    3968  m_CreationFrameIndex = currentFrameIndex;
    3969  m_BufferImageUsage = 0;
    3970 #endif
    3971  }
    3972 
    3973  ~VmaAllocation_T()
    3974  {
    3975  VMA_ASSERT((m_MapCount & ~MAP_COUNT_FLAG_PERSISTENT_MAP) == 0 && "Allocation was not unmapped before destruction.");
    3976 
    3977  // Check that the owned user data string was freed.
    3978  VMA_ASSERT(m_pUserData == VMA_NULL);
    3979  }
    3980 
    3981  void InitBlockAllocation(
    3982  VmaPool hPool,
    3983  VmaDeviceMemoryBlock* block,
    3984  VkDeviceSize offset,
    3985  VkDeviceSize alignment,
    3986  VkDeviceSize size,
    3987  VmaSuballocationType suballocationType,
    3988  bool mapped,
    3989  bool canBecomeLost)
    3990  {
    3991  VMA_ASSERT(m_Type == ALLOCATION_TYPE_NONE);
    3992  VMA_ASSERT(block != VMA_NULL);
    3993  m_Type = (uint8_t)ALLOCATION_TYPE_BLOCK;
    3994  m_Alignment = alignment;
    3995  m_Size = size;
    3996  m_MapCount = mapped ? MAP_COUNT_FLAG_PERSISTENT_MAP : 0;
    3997  m_SuballocationType = (uint8_t)suballocationType;
    3998  m_BlockAllocation.m_hPool = hPool;
    3999  m_BlockAllocation.m_Block = block;
    4000  m_BlockAllocation.m_Offset = offset;
    4001  m_BlockAllocation.m_CanBecomeLost = canBecomeLost;
    4002  }
    4003 
    4004  void InitLost()
    4005  {
    4006  VMA_ASSERT(m_Type == ALLOCATION_TYPE_NONE);
    4007  VMA_ASSERT(m_LastUseFrameIndex.load() == VMA_FRAME_INDEX_LOST);
    4008  m_Type = (uint8_t)ALLOCATION_TYPE_BLOCK;
    4009  m_BlockAllocation.m_hPool = VK_NULL_HANDLE;
    4010  m_BlockAllocation.m_Block = VMA_NULL;
    4011  m_BlockAllocation.m_Offset = 0;
    4012  m_BlockAllocation.m_CanBecomeLost = true;
    4013  }
    4014 
    4015  void ChangeBlockAllocation(
    4016  VmaAllocator hAllocator,
    4017  VmaDeviceMemoryBlock* block,
    4018  VkDeviceSize offset);
    4019 
    4020  // A non-null pMappedData means the allocation was created with the MAPPED flag.
    4021  void InitDedicatedAllocation(
    4022  uint32_t memoryTypeIndex,
    4023  VkDeviceMemory hMemory,
    4024  VmaSuballocationType suballocationType,
    4025  void* pMappedData,
    4026  VkDeviceSize size)
    4027  {
    4028  VMA_ASSERT(m_Type == ALLOCATION_TYPE_NONE);
    4029  VMA_ASSERT(hMemory != VK_NULL_HANDLE);
    4030  m_Type = (uint8_t)ALLOCATION_TYPE_DEDICATED;
    4031  m_Alignment = 0;
    4032  m_Size = size;
    4033  m_SuballocationType = (uint8_t)suballocationType;
    4034  m_MapCount = (pMappedData != VMA_NULL) ? MAP_COUNT_FLAG_PERSISTENT_MAP : 0;
    4035  m_DedicatedAllocation.m_MemoryTypeIndex = memoryTypeIndex;
    4036  m_DedicatedAllocation.m_hMemory = hMemory;
    4037  m_DedicatedAllocation.m_pMappedData = pMappedData;
    4038  }
    4039 
    4040  ALLOCATION_TYPE GetType() const { return (ALLOCATION_TYPE)m_Type; }
    4041  VkDeviceSize GetAlignment() const { return m_Alignment; }
    4042  VkDeviceSize GetSize() const { return m_Size; }
    4043  bool IsUserDataString() const { return (m_Flags & FLAG_USER_DATA_STRING) != 0; }
    4044  void* GetUserData() const { return m_pUserData; }
    4045  void SetUserData(VmaAllocator hAllocator, void* pUserData);
    4046  VmaSuballocationType GetSuballocationType() const { return (VmaSuballocationType)m_SuballocationType; }
    4047 
    4048  VmaDeviceMemoryBlock* GetBlock() const
    4049  {
    4050  VMA_ASSERT(m_Type == ALLOCATION_TYPE_BLOCK);
    4051  return m_BlockAllocation.m_Block;
    4052  }
    4053  VkDeviceSize GetOffset() const;
    4054  VkDeviceMemory GetMemory() const;
    4055  uint32_t GetMemoryTypeIndex() const;
    4056  bool IsPersistentMap() const { return (m_MapCount & MAP_COUNT_FLAG_PERSISTENT_MAP) != 0; }
    4057  void* GetMappedData() const;
    4058  bool CanBecomeLost() const;
    4059  VmaPool GetPool() const;
    4060 
    4061  uint32_t GetLastUseFrameIndex() const
    4062  {
    4063  return m_LastUseFrameIndex.load();
    4064  }
    4065  bool CompareExchangeLastUseFrameIndex(uint32_t& expected, uint32_t desired)
    4066  {
    4067  return m_LastUseFrameIndex.compare_exchange_weak(expected, desired);
    4068  }
    4069  /*
    4070  - If hAllocation.LastUseFrameIndex + frameInUseCount < allocator.CurrentFrameIndex,
    4071  makes it lost by setting LastUseFrameIndex = VMA_FRAME_INDEX_LOST and returns true.
    4072  - Else, returns false.
    4073 
    4074  If hAllocation is already lost, assert - you should not call it then.
    4075  If hAllocation was not created with CAN_BECOME_LOST_BIT, assert.
    4076  */
    4077  bool MakeLost(uint32_t currentFrameIndex, uint32_t frameInUseCount);
    4078 
    4079  void DedicatedAllocCalcStatsInfo(VmaStatInfo& outInfo)
    4080  {
    4081  VMA_ASSERT(m_Type == ALLOCATION_TYPE_DEDICATED);
    4082  outInfo.blockCount = 1;
    4083  outInfo.allocationCount = 1;
    4084  outInfo.unusedRangeCount = 0;
    4085  outInfo.usedBytes = m_Size;
    4086  outInfo.unusedBytes = 0;
    4087  outInfo.allocationSizeMin = outInfo.allocationSizeMax = m_Size;
    4088  outInfo.unusedRangeSizeMin = UINT64_MAX;
    4089  outInfo.unusedRangeSizeMax = 0;
    4090  }
    4091 
    4092  void BlockAllocMap();
    4093  void BlockAllocUnmap();
    4094  VkResult DedicatedAllocMap(VmaAllocator hAllocator, void** ppData);
    4095  void DedicatedAllocUnmap(VmaAllocator hAllocator);
    4096 
    4097 #if VMA_STATS_STRING_ENABLED
    4098  uint32_t GetCreationFrameIndex() const { return m_CreationFrameIndex; }
    4099  uint32_t GetBufferImageUsage() const { return m_BufferImageUsage; }
    4100 
    4101  void InitBufferImageUsage(uint32_t bufferImageUsage)
    4102  {
    4103  VMA_ASSERT(m_BufferImageUsage == 0);
    4104  m_BufferImageUsage = bufferImageUsage;
    4105  }
    4106 
    4107  void PrintParameters(class VmaJsonWriter& json) const;
    4108 #endif
    4109 
    4110 private:
    4111  VkDeviceSize m_Alignment;
    4112  VkDeviceSize m_Size;
    4113  void* m_pUserData;
    4114  VMA_ATOMIC_UINT32 m_LastUseFrameIndex;
    4115  uint8_t m_Type; // ALLOCATION_TYPE
    4116  uint8_t m_SuballocationType; // VmaSuballocationType
    4117  // Bit 0x80 is set when allocation was created with VMA_ALLOCATION_CREATE_MAPPED_BIT.
    4118  // Bits with mask 0x7F are reference counter for vmaMapMemory()/vmaUnmapMemory().
    4119  uint8_t m_MapCount;
    4120  uint8_t m_Flags; // enum FLAGS
    4121 
    4122  // Allocation out of VmaDeviceMemoryBlock.
    4123  struct BlockAllocation
    4124  {
    4125  VmaPool m_hPool; // Null if the allocation belongs to general memory.
    4126  VmaDeviceMemoryBlock* m_Block;
    4127  VkDeviceSize m_Offset;
    4128  bool m_CanBecomeLost;
    4129  };
    4130 
    4131  // Allocation for an object that has its own private VkDeviceMemory.
    4132  struct DedicatedAllocation
    4133  {
    4134  uint32_t m_MemoryTypeIndex;
    4135  VkDeviceMemory m_hMemory;
    4136  void* m_pMappedData; // Not null means memory is mapped.
    4137  };
    4138 
    4139  union
    4140  {
    4141  // Allocation out of VmaDeviceMemoryBlock.
    4142  BlockAllocation m_BlockAllocation;
    4143  // Allocation for an object that has its own private VkDeviceMemory.
    4144  DedicatedAllocation m_DedicatedAllocation;
    4145  };
    4146 
    4147 #if VMA_STATS_STRING_ENABLED
    4148  uint32_t m_CreationFrameIndex;
    4149  uint32_t m_BufferImageUsage; // 0 if unknown.
    4150 #endif
    4151 
    4152  void FreeUserDataString(VmaAllocator hAllocator);
    4153 };
    4154 
    4155 /*
    4156 Represents a region of a VmaDeviceMemoryBlock that is either assigned and returned
    4157 as an allocated memory block, or free.
    4158 */
    4159 struct VmaSuballocation
    4160 {
    4161  VkDeviceSize offset;
    4162  VkDeviceSize size;
    4163  VmaAllocation hAllocation;
    4164  VmaSuballocationType type;
    4165 };
    4166 
    4167 typedef VmaList< VmaSuballocation, VmaStlAllocator<VmaSuballocation> > VmaSuballocationList;
    4168 
    4169 // Cost of making one additional allocation lost, expressed in bytes.
    4170 static const VkDeviceSize VMA_LOST_ALLOCATION_COST = 1048576;
    4171 
    4172 /*
    4173 Parameters of planned allocation inside a VmaDeviceMemoryBlock.
    4174 
    4175 If canMakeOtherLost was false:
    4176 - item points to a FREE suballocation.
    4177 - itemsToMakeLostCount is 0.
    4178 
    4179 If canMakeOtherLost was true:
    4180 - item points to the first of a sequence of suballocations, each of which is either
    4181  FREE or points to a VmaAllocation that can become lost.
    4182 - itemsToMakeLostCount is the number of VmaAllocations that need to be made lost for
    4183  the requested allocation to succeed.
    4184 */
    4185 struct VmaAllocationRequest
    4186 {
    4187  VkDeviceSize offset;
    4188  VkDeviceSize sumFreeSize; // Sum size of free items that overlap with proposed allocation.
    4189  VkDeviceSize sumItemSize; // Sum size of items to make lost that overlap with proposed allocation.
    4190  VmaSuballocationList::iterator item;
    4191  size_t itemsToMakeLostCount;
    4192 
    4193  VkDeviceSize CalcCost() const
    4194  {
    4195  return sumItemSize + itemsToMakeLostCount * VMA_LOST_ALLOCATION_COST;
    4196  }
    4197 };
    4198 
    4199 /*
    4200 Data structure used for bookkeeping of allocations and unused ranges of memory
    4201 in a single VkDeviceMemory block.
    4202 */
    4203 class VmaBlockMetadata
    4204 {
    4205  VMA_CLASS_NO_COPY(VmaBlockMetadata)
    4206 public:
    4207  VmaBlockMetadata(VmaAllocator hAllocator);
    4208  ~VmaBlockMetadata();
    4209  void Init(VkDeviceSize size);
    4210 
    4211  // Validates all data structures inside this object. If not valid, returns false.
    4212  bool Validate() const;
    4213  VkDeviceSize GetSize() const { return m_Size; }
    4214  size_t GetAllocationCount() const { return m_Suballocations.size() - m_FreeCount; }
    4215  VkDeviceSize GetSumFreeSize() const { return m_SumFreeSize; }
    4216  VkDeviceSize GetUnusedRangeSizeMax() const;
    4217  // Returns true if this block is empty - contains only a single free suballocation.
    4218  bool IsEmpty() const;
    4219 
    4220  void CalcAllocationStatInfo(VmaStatInfo& outInfo) const;
    4221  void AddPoolStats(VmaPoolStats& inoutStats) const;
    4222 
    4223 #if VMA_STATS_STRING_ENABLED
    4224  void PrintDetailedMap(class VmaJsonWriter& json) const;
    4225 #endif
    4226 
    4227  // Tries to find a place for suballocation with given parameters inside this block.
    4228  // If succeeded, fills pAllocationRequest and returns true.
    4229  // If failed, returns false.
    4230  bool CreateAllocationRequest(
    4231  uint32_t currentFrameIndex,
    4232  uint32_t frameInUseCount,
    4233  VkDeviceSize bufferImageGranularity,
    4234  VkDeviceSize allocSize,
    4235  VkDeviceSize allocAlignment,
    4236  VmaSuballocationType allocType,
    4237  bool canMakeOtherLost,
    4238  VmaAllocationRequest* pAllocationRequest);
    4239 
    4240  bool MakeRequestedAllocationsLost(
    4241  uint32_t currentFrameIndex,
    4242  uint32_t frameInUseCount,
    4243  VmaAllocationRequest* pAllocationRequest);
    4244 
    4245  uint32_t MakeAllocationsLost(uint32_t currentFrameIndex, uint32_t frameInUseCount);
    4246 
    4247  VkResult CheckCorruption(const void* pBlockData);
    4248 
    4249  // Makes actual allocation based on request. Request must already be checked and valid.
    4250  void Alloc(
    4251  const VmaAllocationRequest& request,
    4252  VmaSuballocationType type,
    4253  VkDeviceSize allocSize,
    4254  VmaAllocation hAllocation);
    4255 
    4256  // Frees suballocation assigned to given memory region.
    4257  void Free(const VmaAllocation allocation);
    4258  void FreeAtOffset(VkDeviceSize offset);
    4259 
    4260 private:
    4261  VkDeviceSize m_Size;
    4262  uint32_t m_FreeCount;
    4263  VkDeviceSize m_SumFreeSize;
    4264  VmaSuballocationList m_Suballocations;
    4265  // Suballocations that are free and have size greater than certain threshold.
    4266  // Sorted by size, ascending.
    4267  VmaVector< VmaSuballocationList::iterator, VmaStlAllocator< VmaSuballocationList::iterator > > m_FreeSuballocationsBySize;
    4268 
    4269  bool ValidateFreeSuballocationList() const;
    4270 
    4271  // Checks if a suballocation with the given parameters can be placed at given suballocItem.
    4272  // If yes, fills pOffset and returns true. If no, returns false.
    4273  bool CheckAllocation(
    4274  uint32_t currentFrameIndex,
    4275  uint32_t frameInUseCount,
    4276  VkDeviceSize bufferImageGranularity,
    4277  VkDeviceSize allocSize,
    4278  VkDeviceSize allocAlignment,
    4279  VmaSuballocationType allocType,
    4280  VmaSuballocationList::const_iterator suballocItem,
    4281  bool canMakeOtherLost,
    4282  VkDeviceSize* pOffset,
    4283  size_t* itemsToMakeLostCount,
    4284  VkDeviceSize* pSumFreeSize,
    4285  VkDeviceSize* pSumItemSize) const;
    4286  // Merges the given free suballocation with the following one, which must also be free.
    4287  void MergeFreeWithNext(VmaSuballocationList::iterator item);
    4288  // Releases given suballocation, making it free.
    4289  // Merges it with adjacent free suballocations if applicable.
    4290  // Returns iterator to new free suballocation at this place.
    4291  VmaSuballocationList::iterator FreeSuballocation(VmaSuballocationList::iterator suballocItem);
    4292  // Inserts the given free suballocation into the sorted list
    4293  // m_FreeSuballocationsBySize if it qualifies.
    4294  void RegisterFreeSuballocation(VmaSuballocationList::iterator item);
    4295  // Removes the given free suballocation from the sorted list
    4296  // m_FreeSuballocationsBySize if present.
    4297  void UnregisterFreeSuballocation(VmaSuballocationList::iterator item);
    4298 };
    4299 
    4300 /*
    4301 Represents a single block of device memory (`VkDeviceMemory`) with all the
    4302 data about its regions (aka suballocations, #VmaAllocation), assigned and free.
    4303 
    4304 Thread-safety: This class must be externally synchronized.
    4305 */
    4306 class VmaDeviceMemoryBlock
    4307 {
    4308  VMA_CLASS_NO_COPY(VmaDeviceMemoryBlock)
    4309 public:
    4310  VmaBlockMetadata m_Metadata;
    4311 
    4312  VmaDeviceMemoryBlock(VmaAllocator hAllocator);
    4313 
    4314  ~VmaDeviceMemoryBlock()
    4315  {
    4316  VMA_ASSERT(m_MapCount == 0 && "VkDeviceMemory block is being destroyed while it is still mapped.");
    4317  VMA_ASSERT(m_hMemory == VK_NULL_HANDLE);
    4318  }
    4319 
    4320  // Always call after construction.
    4321  void Init(
    4322  uint32_t newMemoryTypeIndex,
    4323  VkDeviceMemory newMemory,
    4324  VkDeviceSize newSize,
    4325  uint32_t id);
    4326  // Always call before destruction.
    4327  void Destroy(VmaAllocator allocator);
    4328 
    4329  VkDeviceMemory GetDeviceMemory() const { return m_hMemory; }
    4330  uint32_t GetMemoryTypeIndex() const { return m_MemoryTypeIndex; }
    4331  uint32_t GetId() const { return m_Id; }
    4332  void* GetMappedData() const { return m_pMappedData; }
    4333 
    4334  // Validates all data structures inside this object. If not valid, returns false.
    4335  bool Validate() const;
    4336 
    4337  VkResult CheckCorruption(VmaAllocator hAllocator);
    4338 
    4339  // ppData can be null.
    4340  VkResult Map(VmaAllocator hAllocator, uint32_t count, void** ppData);
    4341  void Unmap(VmaAllocator hAllocator, uint32_t count);
    4342 
    4343  VkResult WriteMagicValueAroundAllocation(VmaAllocator hAllocator, VkDeviceSize allocOffset, VkDeviceSize allocSize);
    4344  VkResult ValidateMagicValueAroundAllocation(VmaAllocator hAllocator, VkDeviceSize allocOffset, VkDeviceSize allocSize);
    4345 
    4346  VkResult BindBufferMemory(
    4347  const VmaAllocator hAllocator,
    4348  const VmaAllocation hAllocation,
    4349  VkBuffer hBuffer);
    4350  VkResult BindImageMemory(
    4351  const VmaAllocator hAllocator,
    4352  const VmaAllocation hAllocation,
    4353  VkImage hImage);
    4354 
    4355 private:
    4356  uint32_t m_MemoryTypeIndex;
    4357  uint32_t m_Id;
    4358  VkDeviceMemory m_hMemory;
    4359 
    4360  // Protects access to m_hMemory so it's not used by multiple threads simultaneously, e.g. by vkMapMemory or vkBindBufferMemory.
    4361  // Also protects m_MapCount, m_pMappedData.
    4362  VMA_MUTEX m_Mutex;
    4363  uint32_t m_MapCount;
    4364  void* m_pMappedData;
    4365 };
    4366 
    4367 struct VmaPointerLess
    4368 {
    4369  bool operator()(const void* lhs, const void* rhs) const
    4370  {
    4371  return lhs < rhs;
    4372  }
    4373 };
    4374 
    4375 class VmaDefragmentator;
    4376 
    4377 /*
    4378 Sequence of VmaDeviceMemoryBlock. Represents memory blocks allocated for a specific
    4379 Vulkan memory type.
    4380 
    4381 Synchronized internally with a mutex.
    4382 */
    4383 struct VmaBlockVector
    4384 {
    4385  VMA_CLASS_NO_COPY(VmaBlockVector)
    4386 public:
    4387  VmaBlockVector(
    4388  VmaAllocator hAllocator,
    4389  uint32_t memoryTypeIndex,
    4390  VkDeviceSize preferredBlockSize,
    4391  size_t minBlockCount,
    4392  size_t maxBlockCount,
    4393  VkDeviceSize bufferImageGranularity,
    4394  uint32_t frameInUseCount,
    4395  bool isCustomPool);
    4396  ~VmaBlockVector();
    4397 
    4398  VkResult CreateMinBlocks();
    4399 
    4400  uint32_t GetMemoryTypeIndex() const { return m_MemoryTypeIndex; }
    4401  VkDeviceSize GetPreferredBlockSize() const { return m_PreferredBlockSize; }
    4402  VkDeviceSize GetBufferImageGranularity() const { return m_BufferImageGranularity; }
    4403  uint32_t GetFrameInUseCount() const { return m_FrameInUseCount; }
    4404 
    4405  void GetPoolStats(VmaPoolStats* pStats);
    4406 
    4407  bool IsEmpty() const { return m_Blocks.empty(); }
    4408  bool IsCorruptionDetectionEnabled() const;
    4409 
    4410  VkResult Allocate(
    4411  VmaPool hCurrentPool,
    4412  uint32_t currentFrameIndex,
    4413  VkDeviceSize size,
    4414  VkDeviceSize alignment,
    4415  const VmaAllocationCreateInfo& createInfo,
    4416  VmaSuballocationType suballocType,
    4417  VmaAllocation* pAllocation);
    4418 
    4419  void Free(
    4420  VmaAllocation hAllocation);
    4421 
    4422  // Adds statistics of this BlockVector to pStats.
    4423  void AddStats(VmaStats* pStats);
    4424 
    4425 #if VMA_STATS_STRING_ENABLED
    4426  void PrintDetailedMap(class VmaJsonWriter& json);
    4427 #endif
    4428 
    4429  void MakePoolAllocationsLost(
    4430  uint32_t currentFrameIndex,
    4431  size_t* pLostAllocationCount);
    4432  VkResult CheckCorruption();
    4433 
    4434  VmaDefragmentator* EnsureDefragmentator(
    4435  VmaAllocator hAllocator,
    4436  uint32_t currentFrameIndex);
    4437 
    4438  VkResult Defragment(
    4439  VmaDefragmentationStats* pDefragmentationStats,
    4440  VkDeviceSize& maxBytesToMove,
    4441  uint32_t& maxAllocationsToMove);
    4442 
    4443  void DestroyDefragmentator();
    4444 
    4445 private:
    4446  friend class VmaDefragmentator;
    4447 
    4448  const VmaAllocator m_hAllocator;
    4449  const uint32_t m_MemoryTypeIndex;
    4450  const VkDeviceSize m_PreferredBlockSize;
    4451  const size_t m_MinBlockCount;
    4452  const size_t m_MaxBlockCount;
    4453  const VkDeviceSize m_BufferImageGranularity;
    4454  const uint32_t m_FrameInUseCount;
    4455  const bool m_IsCustomPool;
    4456  VMA_MUTEX m_Mutex;
    4457  // Incrementally sorted by sumFreeSize, ascending.
    4458  VmaVector< VmaDeviceMemoryBlock*, VmaStlAllocator<VmaDeviceMemoryBlock*> > m_Blocks;
    4459  /* There can be at most one block that is completely empty - a
    4460  hysteresis to avoid the pessimistic case of alternating creation and
    4461  destruction of a VkDeviceMemory block. */
    4462  bool m_HasEmptyBlock;
    4463  VmaDefragmentator* m_pDefragmentator;
    4464  uint32_t m_NextBlockId;
    4465 
    4466  VkDeviceSize CalcMaxBlockSize() const;
    4467 
    4468  // Finds and removes given block from vector.
    4469  void Remove(VmaDeviceMemoryBlock* pBlock);
    4470 
    4471  // Performs single step in sorting m_Blocks. They may not be fully sorted
    4472  // after this call.
    4473  void IncrementallySortBlocks();
    4474 
    4475  VkResult CreateBlock(VkDeviceSize blockSize, size_t* pNewBlockIndex);
    4476 };
    4477 
    4478 struct VmaPool_T
    4479 {
    4480  VMA_CLASS_NO_COPY(VmaPool_T)
    4481 public:
    4482  VmaBlockVector m_BlockVector;
    4483 
    4484  VmaPool_T(
    4485  VmaAllocator hAllocator,
    4486  const VmaPoolCreateInfo& createInfo);
    4487  ~VmaPool_T();
    4488 
    4489  VmaBlockVector& GetBlockVector() { return m_BlockVector; }
    4490  uint32_t GetId() const { return m_Id; }
    4491  void SetId(uint32_t id) { VMA_ASSERT(m_Id == 0); m_Id = id; }
    4492 
    4493 #if VMA_STATS_STRING_ENABLED
    4494  //void PrintDetailedMap(class VmaStringBuilder& sb);
    4495 #endif
    4496 
    4497 private:
    4498  uint32_t m_Id;
    4499 };
    4500 
    4501 class VmaDefragmentator
    4502 {
    4503  VMA_CLASS_NO_COPY(VmaDefragmentator)
    4504 private:
    4505  const VmaAllocator m_hAllocator;
    4506  VmaBlockVector* const m_pBlockVector;
    4507  uint32_t m_CurrentFrameIndex;
    4508  VkDeviceSize m_BytesMoved;
    4509  uint32_t m_AllocationsMoved;
    4510 
    4511  struct AllocationInfo
    4512  {
    4513  VmaAllocation m_hAllocation;
    4514  VkBool32* m_pChanged;
    4515 
    4516  AllocationInfo() :
    4517  m_hAllocation(VK_NULL_HANDLE),
    4518  m_pChanged(VMA_NULL)
    4519  {
    4520  }
    4521  };
    4522 
    4523  struct AllocationInfoSizeGreater
    4524  {
    4525  bool operator()(const AllocationInfo& lhs, const AllocationInfo& rhs) const
    4526  {
    4527  return lhs.m_hAllocation->GetSize() > rhs.m_hAllocation->GetSize();
    4528  }
    4529  };
    4530 
    4531  // Used between AddAllocation and Defragment.
    4532  VmaVector< AllocationInfo, VmaStlAllocator<AllocationInfo> > m_Allocations;
    4533 
    4534  struct BlockInfo
    4535  {
    4536  VmaDeviceMemoryBlock* m_pBlock;
    4537  bool m_HasNonMovableAllocations;
    4538  VmaVector< AllocationInfo, VmaStlAllocator<AllocationInfo> > m_Allocations;
    4539 
    4540  BlockInfo(const VkAllocationCallbacks* pAllocationCallbacks) :
    4541  m_pBlock(VMA_NULL),
    4542  m_HasNonMovableAllocations(true),
    4543  m_Allocations(pAllocationCallbacks),
    4544  m_pMappedDataForDefragmentation(VMA_NULL)
    4545  {
    4546  }
    4547 
    4548  void CalcHasNonMovableAllocations()
    4549  {
    4550  const size_t blockAllocCount = m_pBlock->m_Metadata.GetAllocationCount();
    4551  const size_t defragmentAllocCount = m_Allocations.size();
    4552  m_HasNonMovableAllocations = blockAllocCount != defragmentAllocCount;
    4553  }
    4554 
    4555  void SortAllocationsBySizeDescecnding()
    4556  {
    4557  VMA_SORT(m_Allocations.begin(), m_Allocations.end(), AllocationInfoSizeGreater());
    4558  }
    4559 
    4560  VkResult EnsureMapping(VmaAllocator hAllocator, void** ppMappedData);
    4561  void Unmap(VmaAllocator hAllocator);
    4562 
    4563  private:
    4564  // Not null if mapped for defragmentation only, not originally mapped.
    4565  void* m_pMappedDataForDefragmentation;
    4566  };
    4567 
    4568  struct BlockPointerLess
    4569  {
    4570  bool operator()(const BlockInfo* pLhsBlockInfo, const VmaDeviceMemoryBlock* pRhsBlock) const
    4571  {
    4572  return pLhsBlockInfo->m_pBlock < pRhsBlock;
    4573  }
    4574  bool operator()(const BlockInfo* pLhsBlockInfo, const BlockInfo* pRhsBlockInfo) const
    4575  {
    4576  return pLhsBlockInfo->m_pBlock < pRhsBlockInfo->m_pBlock;
    4577  }
    4578  };
    4579 
    4580  // 1. Blocks with some non-movable allocations go first.
    4581  // 2. Blocks with smaller sumFreeSize go first.
    4582  struct BlockInfoCompareMoveDestination
    4583  {
    4584  bool operator()(const BlockInfo* pLhsBlockInfo, const BlockInfo* pRhsBlockInfo) const
    4585  {
    4586  if(pLhsBlockInfo->m_HasNonMovableAllocations && !pRhsBlockInfo->m_HasNonMovableAllocations)
    4587  {
    4588  return true;
    4589  }
    4590  if(!pLhsBlockInfo->m_HasNonMovableAllocations && pRhsBlockInfo->m_HasNonMovableAllocations)
    4591  {
    4592  return false;
    4593  }
    4594  if(pLhsBlockInfo->m_pBlock->m_Metadata.GetSumFreeSize() < pRhsBlockInfo->m_pBlock->m_Metadata.GetSumFreeSize())
    4595  {
    4596  return true;
    4597  }
    4598  return false;
    4599  }
    4600  };
    4601 
    4602  typedef VmaVector< BlockInfo*, VmaStlAllocator<BlockInfo*> > BlockInfoVector;
    4603  BlockInfoVector m_Blocks;
    4604 
    4605  VkResult DefragmentRound(
    4606  VkDeviceSize maxBytesToMove,
    4607  uint32_t maxAllocationsToMove);
    4608 
    4609  static bool MoveMakesSense(
    4610  size_t dstBlockIndex, VkDeviceSize dstOffset,
    4611  size_t srcBlockIndex, VkDeviceSize srcOffset);
    4612 
    4613 public:
    4614  VmaDefragmentator(
    4615  VmaAllocator hAllocator,
    4616  VmaBlockVector* pBlockVector,
    4617  uint32_t currentFrameIndex);
    4618 
    4619  ~VmaDefragmentator();
    4620 
    4621  VkDeviceSize GetBytesMoved() const { return m_BytesMoved; }
    4622  uint32_t GetAllocationsMoved() const { return m_AllocationsMoved; }
    4623 
    4624  void AddAllocation(VmaAllocation hAlloc, VkBool32* pChanged);
    4625 
    4626  VkResult Defragment(
    4627  VkDeviceSize maxBytesToMove,
    4628  uint32_t maxAllocationsToMove);
    4629 };
    4630 
    4631 // Main allocator object.
    4632 struct VmaAllocator_T
    4633 {
    4634  VMA_CLASS_NO_COPY(VmaAllocator_T)
    4635 public:
    4636  bool m_UseMutex;
    4637  bool m_UseKhrDedicatedAllocation;
    4638  VkDevice m_hDevice;
    4639  bool m_AllocationCallbacksSpecified;
    4640  VkAllocationCallbacks m_AllocationCallbacks;
    4641  VmaDeviceMemoryCallbacks m_DeviceMemoryCallbacks;
    4642 
    4643  // Number of bytes free out of the limit, or VK_WHOLE_SIZE if there is no limit for that heap.
    4644  VkDeviceSize m_HeapSizeLimit[VK_MAX_MEMORY_HEAPS];
    4645  VMA_MUTEX m_HeapSizeLimitMutex;
    4646 
    4647  VkPhysicalDeviceProperties m_PhysicalDeviceProperties;
    4648  VkPhysicalDeviceMemoryProperties m_MemProps;
    4649 
    4650  // Default pools.
    4651  VmaBlockVector* m_pBlockVectors[VK_MAX_MEMORY_TYPES];
    4652 
    4653  // Each vector is sorted by memory (handle value).
    4654  typedef VmaVector< VmaAllocation, VmaStlAllocator<VmaAllocation> > AllocationVectorType;
    4655  AllocationVectorType* m_pDedicatedAllocations[VK_MAX_MEMORY_TYPES];
    4656  VMA_MUTEX m_DedicatedAllocationsMutex[VK_MAX_MEMORY_TYPES];
    4657 
    4658  VmaAllocator_T(const VmaAllocatorCreateInfo* pCreateInfo);
    4659  ~VmaAllocator_T();
    4660 
    4661  const VkAllocationCallbacks* GetAllocationCallbacks() const
    4662  {
    4663  return m_AllocationCallbacksSpecified ? &m_AllocationCallbacks : 0;
    4664  }
    4665  const VmaVulkanFunctions& GetVulkanFunctions() const
    4666  {
    4667  return m_VulkanFunctions;
    4668  }
    4669 
    4670  VkDeviceSize GetBufferImageGranularity() const
    4671  {
    4672  return VMA_MAX(
    4673  static_cast<VkDeviceSize>(VMA_DEBUG_MIN_BUFFER_IMAGE_GRANULARITY),
    4674  m_PhysicalDeviceProperties.limits.bufferImageGranularity);
    4675  }
    4676 
    4677  uint32_t GetMemoryHeapCount() const { return m_MemProps.memoryHeapCount; }
    4678  uint32_t GetMemoryTypeCount() const { return m_MemProps.memoryTypeCount; }
    4679 
    4680  uint32_t MemoryTypeIndexToHeapIndex(uint32_t memTypeIndex) const
    4681  {
    4682  VMA_ASSERT(memTypeIndex < m_MemProps.memoryTypeCount);
    4683  return m_MemProps.memoryTypes[memTypeIndex].heapIndex;
    4684  }
    4685  // True when specific memory type is HOST_VISIBLE but not HOST_COHERENT.
    4686  bool IsMemoryTypeNonCoherent(uint32_t memTypeIndex) const
    4687  {
    4688  return (m_MemProps.memoryTypes[memTypeIndex].propertyFlags & (VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT | VK_MEMORY_PROPERTY_HOST_COHERENT_BIT)) ==
    4689  VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
    4690  }
    4691  // Minimum alignment for all allocations in specific memory type.
    4692  VkDeviceSize GetMemoryTypeMinAlignment(uint32_t memTypeIndex) const
    4693  {
    4694  return IsMemoryTypeNonCoherent(memTypeIndex) ?
    4695  VMA_MAX((VkDeviceSize)VMA_DEBUG_ALIGNMENT, m_PhysicalDeviceProperties.limits.nonCoherentAtomSize) :
    4696  (VkDeviceSize)VMA_DEBUG_ALIGNMENT;
    4697  }
    4698 
    4699  bool IsIntegratedGpu() const
    4700  {
    4701  return m_PhysicalDeviceProperties.deviceType == VK_PHYSICAL_DEVICE_TYPE_INTEGRATED_GPU;
    4702  }
    4703 
    4704  void GetBufferMemoryRequirements(
    4705  VkBuffer hBuffer,
    4706  VkMemoryRequirements& memReq,
    4707  bool& requiresDedicatedAllocation,
    4708  bool& prefersDedicatedAllocation) const;
    4709  void GetImageMemoryRequirements(
    4710  VkImage hImage,
    4711  VkMemoryRequirements& memReq,
    4712  bool& requiresDedicatedAllocation,
    4713  bool& prefersDedicatedAllocation) const;
    4714 
    4715  // Main allocation function.
    4716  VkResult AllocateMemory(
    4717  const VkMemoryRequirements& vkMemReq,
    4718  bool requiresDedicatedAllocation,
    4719  bool prefersDedicatedAllocation,
    4720  VkBuffer dedicatedBuffer,
    4721  VkImage dedicatedImage,
    4722  const VmaAllocationCreateInfo& createInfo,
    4723  VmaSuballocationType suballocType,
    4724  VmaAllocation* pAllocation);
    4725 
    4726  // Main deallocation function.
    4727  void FreeMemory(const VmaAllocation allocation);
    4728 
    4729  void CalculateStats(VmaStats* pStats);
    4730 
    4731 #if VMA_STATS_STRING_ENABLED
    4732  void PrintDetailedMap(class VmaJsonWriter& json);
    4733 #endif
    4734 
    4735  VkResult Defragment(
    4736  VmaAllocation* pAllocations,
    4737  size_t allocationCount,
    4738  VkBool32* pAllocationsChanged,
    4739  const VmaDefragmentationInfo* pDefragmentationInfo,
    4740  VmaDefragmentationStats* pDefragmentationStats);
    4741 
    4742  void GetAllocationInfo(VmaAllocation hAllocation, VmaAllocationInfo* pAllocationInfo);
    4743  bool TouchAllocation(VmaAllocation hAllocation);
    4744 
    4745  VkResult CreatePool(const VmaPoolCreateInfo* pCreateInfo, VmaPool* pPool);
    4746  void DestroyPool(VmaPool pool);
    4747  void GetPoolStats(VmaPool pool, VmaPoolStats* pPoolStats);
    4748 
    4749  void SetCurrentFrameIndex(uint32_t frameIndex);
    4750 
    4751  void MakePoolAllocationsLost(
    4752  VmaPool hPool,
    4753  size_t* pLostAllocationCount);
    4754  VkResult CheckPoolCorruption(VmaPool hPool);
    4755  VkResult CheckCorruption(uint32_t memoryTypeBits);
    4756 
    4757  void CreateLostAllocation(VmaAllocation* pAllocation);
    4758 
    4759  VkResult AllocateVulkanMemory(const VkMemoryAllocateInfo* pAllocateInfo, VkDeviceMemory* pMemory);
    4760  void FreeVulkanMemory(uint32_t memoryType, VkDeviceSize size, VkDeviceMemory hMemory);
    4761 
    4762  VkResult Map(VmaAllocation hAllocation, void** ppData);
    4763  void Unmap(VmaAllocation hAllocation);
    4764 
    4765  VkResult BindBufferMemory(VmaAllocation hAllocation, VkBuffer hBuffer);
    4766  VkResult BindImageMemory(VmaAllocation hAllocation, VkImage hImage);
    4767 
    4768  void FlushOrInvalidateAllocation(
    4769  VmaAllocation hAllocation,
    4770  VkDeviceSize offset, VkDeviceSize size,
    4771  VMA_CACHE_OPERATION op);
    4772 
    4773 private:
    4774  VkDeviceSize m_PreferredLargeHeapBlockSize;
    4775 
    4776  VkPhysicalDevice m_PhysicalDevice;
    4777  VMA_ATOMIC_UINT32 m_CurrentFrameIndex;
    4778 
    4779  VMA_MUTEX m_PoolsMutex;
    4780  // Protected by m_PoolsMutex. Sorted by pointer value.
    4781  VmaVector<VmaPool, VmaStlAllocator<VmaPool> > m_Pools;
    4782  uint32_t m_NextPoolId;
    4783 
    4784  VmaVulkanFunctions m_VulkanFunctions;
    4785 
    4786  void ImportVulkanFunctions(const VmaVulkanFunctions* pVulkanFunctions);
    4787 
    4788  VkDeviceSize CalcPreferredBlockSize(uint32_t memTypeIndex);
    4789 
    4790  VkResult AllocateMemoryOfType(
    4791  VkDeviceSize size,
    4792  VkDeviceSize alignment,
    4793  bool dedicatedAllocation,
    4794  VkBuffer dedicatedBuffer,
    4795  VkImage dedicatedImage,
    4796  const VmaAllocationCreateInfo& createInfo,
    4797  uint32_t memTypeIndex,
    4798  VmaSuballocationType suballocType,
    4799  VmaAllocation* pAllocation);
    4800 
    4801  // Allocates and registers new VkDeviceMemory specifically for a single allocation.
    4802  VkResult AllocateDedicatedMemory(
    4803  VkDeviceSize size,
    4804  VmaSuballocationType suballocType,
    4805  uint32_t memTypeIndex,
    4806  bool map,
    4807  bool isUserDataString,
    4808  void* pUserData,
    4809  VkBuffer dedicatedBuffer,
    4810  VkImage dedicatedImage,
    4811  VmaAllocation* pAllocation);
    4812 
    4813  // Frees allocation that was registered as dedicated memory.
    4814  void FreeDedicatedMemory(VmaAllocation allocation);
    4815 };
    4816 
    4818 // Memory allocation #2 after VmaAllocator_T definition
    4819 
    4820 static void* VmaMalloc(VmaAllocator hAllocator, size_t size, size_t alignment)
    4821 {
    4822  return VmaMalloc(&hAllocator->m_AllocationCallbacks, size, alignment);
    4823 }
    4824 
    4825 static void VmaFree(VmaAllocator hAllocator, void* ptr)
    4826 {
    4827  VmaFree(&hAllocator->m_AllocationCallbacks, ptr);
    4828 }
    4829 
    4830 template<typename T>
    4831 static T* VmaAllocate(VmaAllocator hAllocator)
    4832 {
    4833  return (T*)VmaMalloc(hAllocator, sizeof(T), VMA_ALIGN_OF(T));
    4834 }
    4835 
    4836 template<typename T>
    4837 static T* VmaAllocateArray(VmaAllocator hAllocator, size_t count)
    4838 {
    4839  return (T*)VmaMalloc(hAllocator, sizeof(T) * count, VMA_ALIGN_OF(T));
    4840 }
    4841 
    4842 template<typename T>
    4843 static void vma_delete(VmaAllocator hAllocator, T* ptr)
    4844 {
    4845  if(ptr != VMA_NULL)
    4846  {
    4847  ptr->~T();
    4848  VmaFree(hAllocator, ptr);
    4849  }
    4850 }
    4851 
    4852 template<typename T>
    4853 static void vma_delete_array(VmaAllocator hAllocator, T* ptr, size_t count)
    4854 {
    4855  if(ptr != VMA_NULL)
    4856  {
    4857  for(size_t i = count; i--; )
    4858  ptr[i].~T();
    4859  VmaFree(hAllocator, ptr);
    4860  }
    4861 }
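The `vma_delete` / `vma_delete_array` helpers above separate destructor invocation from releasing the raw memory, destroying array elements in reverse order before freeing. A minimal standalone sketch of the same pattern, using `std::malloc`/`std::free` and hypothetical names (`pool_new_array`, `pool_delete_array`) in place of the VMA allocation callbacks:

```cpp
#include <cassert>
#include <cstdlib>
#include <new>

// Counts live instances so destructor calls are observable.
struct Tracked
{
    static int liveCount;
    Tracked()  { ++liveCount; }
    ~Tracked() { --liveCount; }
};
int Tracked::liveCount = 0;

// Mirrors vma_delete_array: destroy elements in reverse order, then free raw memory.
template<typename T>
void pool_delete_array(T* ptr, size_t count)
{
    if(ptr != nullptr)
    {
        for(size_t i = count; i--; )
            ptr[i].~T();
        std::free(ptr);
    }
}

// Counterpart: allocate raw memory, then placement-new each element.
template<typename T>
T* pool_new_array(size_t count)
{
    T* ptr = static_cast<T*>(std::malloc(sizeof(T) * count));
    for(size_t i = 0; i < count; ++i)
        new(ptr + i) T();
    return ptr;
}
```

Splitting construction from allocation this way is what lets VMA route all host memory through user-supplied `VkAllocationCallbacks`.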
    4862 
    4864 // VmaStringBuilder
    4865 
    4866 #if VMA_STATS_STRING_ENABLED
    4867 
    4868 class VmaStringBuilder
    4869 {
    4870 public:
    4871  VmaStringBuilder(VmaAllocator alloc) : m_Data(VmaStlAllocator<char>(alloc->GetAllocationCallbacks())) { }
    4872  size_t GetLength() const { return m_Data.size(); }
    4873  const char* GetData() const { return m_Data.data(); }
    4874 
    4875  void Add(char ch) { m_Data.push_back(ch); }
    4876  void Add(const char* pStr);
    4877  void AddNewLine() { Add('\n'); }
    4878  void AddNumber(uint32_t num);
    4879  void AddNumber(uint64_t num);
    4880  void AddPointer(const void* ptr);
    4881 
    4882 private:
    4883  VmaVector< char, VmaStlAllocator<char> > m_Data;
    4884 };
    4885 
    4886 void VmaStringBuilder::Add(const char* pStr)
    4887 {
    4888  const size_t strLen = strlen(pStr);
    4889  if(strLen > 0)
    4890  {
    4891  const size_t oldCount = m_Data.size();
    4892  m_Data.resize(oldCount + strLen);
    4893  memcpy(m_Data.data() + oldCount, pStr, strLen);
    4894  }
    4895 }
    4896 
    4897 void VmaStringBuilder::AddNumber(uint32_t num)
    4898 {
    4899  char buf[11];
    4900  VmaUint32ToStr(buf, sizeof(buf), num);
    4901  Add(buf);
    4902 }
    4903 
    4904 void VmaStringBuilder::AddNumber(uint64_t num)
    4905 {
    4906  char buf[21];
    4907  VmaUint64ToStr(buf, sizeof(buf), num);
    4908  Add(buf);
    4909 }
    4910 
    4911 void VmaStringBuilder::AddPointer(const void* ptr)
    4912 {
    4913  char buf[21];
    4914  VmaPtrToStr(buf, sizeof(buf), ptr);
    4915  Add(buf);
    4916 }
    4917 
    4918 #endif // #if VMA_STATS_STRING_ENABLED
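`AddNumber` above formats into fixed stack buffers: `char buf[11]` for `uint32_t` and `char buf[21]` for `uint64_t`. Those sizes come from the maximum decimal digit counts (10 and 20) plus a terminating NUL. A quick standalone check of that arithmetic, using `snprintf` rather than VMA's `VmaUint32ToStr`/`VmaUint64ToStr`:

```cpp
#include <cassert>
#include <cstdint>
#include <cstdio>

// Number of decimal digits snprintf emits for a value.
// UINT32_MAX = 4294967295           -> 10 digits, so char buf[11] suffices.
// UINT64_MAX = 18446744073709551615 -> 20 digits, so char buf[21] suffices.
size_t DecimalLength(uint64_t v)
{
    char buf[32];
    return (size_t)std::snprintf(buf, sizeof(buf), "%llu", (unsigned long long)v);
}
```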
    4919 
    4921 // VmaJsonWriter
    4922 
    4923 #if VMA_STATS_STRING_ENABLED
    4924 
    4925 class VmaJsonWriter
    4926 {
    4927  VMA_CLASS_NO_COPY(VmaJsonWriter)
    4928 public:
    4929  VmaJsonWriter(const VkAllocationCallbacks* pAllocationCallbacks, VmaStringBuilder& sb);
    4930  ~VmaJsonWriter();
    4931 
    4932  void BeginObject(bool singleLine = false);
    4933  void EndObject();
    4934 
    4935  void BeginArray(bool singleLine = false);
    4936  void EndArray();
    4937 
    4938  void WriteString(const char* pStr);
    4939  void BeginString(const char* pStr = VMA_NULL);
    4940  void ContinueString(const char* pStr);
    4941  void ContinueString(uint32_t n);
    4942  void ContinueString(uint64_t n);
    4943  void ContinueString_Pointer(const void* ptr);
    4944  void EndString(const char* pStr = VMA_NULL);
    4945 
    4946  void WriteNumber(uint32_t n);
    4947  void WriteNumber(uint64_t n);
    4948  void WriteBool(bool b);
    4949  void WriteNull();
    4950 
    4951 private:
    4952  static const char* const INDENT;
    4953 
    4954  enum COLLECTION_TYPE
    4955  {
    4956  COLLECTION_TYPE_OBJECT,
    4957  COLLECTION_TYPE_ARRAY,
    4958  };
    4959  struct StackItem
    4960  {
    4961  COLLECTION_TYPE type;
    4962  uint32_t valueCount;
    4963  bool singleLineMode;
    4964  };
    4965 
    4966  VmaStringBuilder& m_SB;
    4967  VmaVector< StackItem, VmaStlAllocator<StackItem> > m_Stack;
    4968  bool m_InsideString;
    4969 
    4970  void BeginValue(bool isString);
    4971  void WriteIndent(bool oneLess = false);
    4972 };
    4973 
    4974 const char* const VmaJsonWriter::INDENT = "  ";
    4975 
    4976 VmaJsonWriter::VmaJsonWriter(const VkAllocationCallbacks* pAllocationCallbacks, VmaStringBuilder& sb) :
    4977  m_SB(sb),
    4978  m_Stack(VmaStlAllocator<StackItem>(pAllocationCallbacks)),
    4979  m_InsideString(false)
    4980 {
    4981 }
    4982 
    4983 VmaJsonWriter::~VmaJsonWriter()
    4984 {
    4985  VMA_ASSERT(!m_InsideString);
    4986  VMA_ASSERT(m_Stack.empty());
    4987 }
    4988 
    4989 void VmaJsonWriter::BeginObject(bool singleLine)
    4990 {
    4991  VMA_ASSERT(!m_InsideString);
    4992 
    4993  BeginValue(false);
    4994  m_SB.Add('{');
    4995 
    4996  StackItem item;
    4997  item.type = COLLECTION_TYPE_OBJECT;
    4998  item.valueCount = 0;
    4999  item.singleLineMode = singleLine;
    5000  m_Stack.push_back(item);
    5001 }
    5002 
    5003 void VmaJsonWriter::EndObject()
    5004 {
    5005  VMA_ASSERT(!m_InsideString);
    5006 
    5007  WriteIndent(true);
    5008  m_SB.Add('}');
    5009 
    5010  VMA_ASSERT(!m_Stack.empty() && m_Stack.back().type == COLLECTION_TYPE_OBJECT);
    5011  m_Stack.pop_back();
    5012 }
    5013 
    5014 void VmaJsonWriter::BeginArray(bool singleLine)
    5015 {
    5016  VMA_ASSERT(!m_InsideString);
    5017 
    5018  BeginValue(false);
    5019  m_SB.Add('[');
    5020 
    5021  StackItem item;
    5022  item.type = COLLECTION_TYPE_ARRAY;
    5023  item.valueCount = 0;
    5024  item.singleLineMode = singleLine;
    5025  m_Stack.push_back(item);
    5026 }
    5027 
    5028 void VmaJsonWriter::EndArray()
    5029 {
    5030  VMA_ASSERT(!m_InsideString);
    5031 
    5032  WriteIndent(true);
    5033  m_SB.Add(']');
    5034 
    5035  VMA_ASSERT(!m_Stack.empty() && m_Stack.back().type == COLLECTION_TYPE_ARRAY);
    5036  m_Stack.pop_back();
    5037 }
    5038 
    5039 void VmaJsonWriter::WriteString(const char* pStr)
    5040 {
    5041  BeginString(pStr);
    5042  EndString();
    5043 }
    5044 
    5045 void VmaJsonWriter::BeginString(const char* pStr)
    5046 {
    5047  VMA_ASSERT(!m_InsideString);
    5048 
    5049  BeginValue(true);
    5050  m_SB.Add('"');
    5051  m_InsideString = true;
    5052  if(pStr != VMA_NULL && pStr[0] != '\0')
    5053  {
    5054  ContinueString(pStr);
    5055  }
    5056 }
    5057 
    5058 void VmaJsonWriter::ContinueString(const char* pStr)
    5059 {
    5060  VMA_ASSERT(m_InsideString);
    5061 
    5062  const size_t strLen = strlen(pStr);
    5063  for(size_t i = 0; i < strLen; ++i)
    5064  {
    5065  char ch = pStr[i];
    5066  if(ch == '\\')
    5067  {
    5068  m_SB.Add("\\\\");
    5069  }
    5070  else if(ch == '"')
    5071  {
    5072  m_SB.Add("\\\"");
    5073  }
    5074  else if(ch >= 32)
    5075  {
    5076  m_SB.Add(ch);
    5077  }
    5078  else switch(ch)
    5079  {
    5080  case '\b':
    5081  m_SB.Add("\\b");
    5082  break;
    5083  case '\f':
    5084  m_SB.Add("\\f");
    5085  break;
    5086  case '\n':
    5087  m_SB.Add("\\n");
    5088  break;
    5089  case '\r':
    5090  m_SB.Add("\\r");
    5091  break;
    5092  case '\t':
    5093  m_SB.Add("\\t");
    5094  break;
    5095  default:
    5096  VMA_ASSERT(0 && "Character not currently supported.");
    5097  break;
    5098  }
    5099  }
    5100 }
    5101 
    5102 void VmaJsonWriter::ContinueString(uint32_t n)
    5103 {
    5104  VMA_ASSERT(m_InsideString);
    5105  m_SB.AddNumber(n);
    5106 }
    5107 
    5108 void VmaJsonWriter::ContinueString(uint64_t n)
    5109 {
    5110  VMA_ASSERT(m_InsideString);
    5111  m_SB.AddNumber(n);
    5112 }
    5113 
    5114 void VmaJsonWriter::ContinueString_Pointer(const void* ptr)
    5115 {
    5116  VMA_ASSERT(m_InsideString);
    5117  m_SB.AddPointer(ptr);
    5118 }
    5119 
    5120 void VmaJsonWriter::EndString(const char* pStr)
    5121 {
    5122  VMA_ASSERT(m_InsideString);
    5123  if(pStr != VMA_NULL && pStr[0] != '\0')
    5124  {
    5125  ContinueString(pStr);
    5126  }
    5127  m_SB.Add('"');
    5128  m_InsideString = false;
    5129 }
    5130 
    5131 void VmaJsonWriter::WriteNumber(uint32_t n)
    5132 {
    5133  VMA_ASSERT(!m_InsideString);
    5134  BeginValue(false);
    5135  m_SB.AddNumber(n);
    5136 }
    5137 
    5138 void VmaJsonWriter::WriteNumber(uint64_t n)
    5139 {
    5140  VMA_ASSERT(!m_InsideString);
    5141  BeginValue(false);
    5142  m_SB.AddNumber(n);
    5143 }
    5144 
    5145 void VmaJsonWriter::WriteBool(bool b)
    5146 {
    5147  VMA_ASSERT(!m_InsideString);
    5148  BeginValue(false);
    5149  m_SB.Add(b ? "true" : "false");
    5150 }
    5151 
    5152 void VmaJsonWriter::WriteNull()
    5153 {
    5154  VMA_ASSERT(!m_InsideString);
    5155  BeginValue(false);
    5156  m_SB.Add("null");
    5157 }
    5158 
    5159 void VmaJsonWriter::BeginValue(bool isString)
    5160 {
    5161  if(!m_Stack.empty())
    5162  {
    5163  StackItem& currItem = m_Stack.back();
    5164  if(currItem.type == COLLECTION_TYPE_OBJECT &&
    5165  currItem.valueCount % 2 == 0)
    5166  {
    5167  VMA_ASSERT(isString);
    5168  }
    5169 
    5170  if(currItem.type == COLLECTION_TYPE_OBJECT &&
    5171  currItem.valueCount % 2 != 0)
    5172  {
    5173  m_SB.Add(": ");
    5174  }
    5175  else if(currItem.valueCount > 0)
    5176  {
    5177  m_SB.Add(", ");
    5178  WriteIndent();
    5179  }
    5180  else
    5181  {
    5182  WriteIndent();
    5183  }
    5184  ++currItem.valueCount;
    5185  }
    5186 }
    5187 
    5188 void VmaJsonWriter::WriteIndent(bool oneLess)
    5189 {
    5190  if(!m_Stack.empty() && !m_Stack.back().singleLineMode)
    5191  {
    5192  m_SB.AddNewLine();
    5193 
    5194  size_t count = m_Stack.size();
    5195  if(count > 0 && oneLess)
    5196  {
    5197  --count;
    5198  }
    5199  for(size_t i = 0; i < count; ++i)
    5200  {
    5201  m_SB.Add(INDENT);
    5202  }
    5203  }
    5204 }
    5205 
    5206 #endif // #if VMA_STATS_STRING_ENABLED
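`VmaJsonWriter::BeginValue` above decides the separator from the parity of `valueCount` in the innermost open collection: inside an object, even-indexed values are keys and odd-indexed values follow a key, so a key gets `", "` before it and a value gets `": "`. A stripped-down sketch of just that separator logic (a hypothetical `MiniJsonWriter` with `std::string` in place of `VmaStringBuilder`, and without the indentation handling):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Minimal sketch of VmaJsonWriter's separator bookkeeping.
class MiniJsonWriter
{
public:
    void BeginObject() { BeginValue(); m_Out += '{'; m_ValueCount.push_back(0); }
    void EndObject()   { m_Out += '}'; m_ValueCount.pop_back(); }
    void WriteString(const std::string& s) { BeginValue(); m_Out += '"' + s + '"'; }
    void WriteNumber(unsigned n) { BeginValue(); m_Out += std::to_string(n); }
    const std::string& Str() const { return m_Out; }
private:
    std::string m_Out;
    std::vector<unsigned> m_ValueCount; // values written in each open collection
    void BeginValue()
    {
        if(!m_ValueCount.empty())
        {
            unsigned& count = m_ValueCount.back();
            if(count % 2 != 0)  m_Out += ": "; // value following its key
            else if(count > 0)  m_Out += ", "; // next key after a key-value pair
            ++count;
        }
    }
};
```

Writing a small object then reads naturally: `BeginObject(); WriteString("Size"); WriteNumber(256); EndObject();` produces `{"Size": 256}`.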
    5207 
    5209 
    5210 void VmaAllocation_T::SetUserData(VmaAllocator hAllocator, void* pUserData)
    5211 {
    5212  if(IsUserDataString())
    5213  {
    5214  VMA_ASSERT(pUserData == VMA_NULL || pUserData != m_pUserData);
    5215 
    5216  FreeUserDataString(hAllocator);
    5217 
    5218  if(pUserData != VMA_NULL)
    5219  {
    5220  const char* const newStrSrc = (char*)pUserData;
    5221  const size_t newStrLen = strlen(newStrSrc);
    5222  char* const newStrDst = vma_new_array(hAllocator, char, newStrLen + 1);
    5223  memcpy(newStrDst, newStrSrc, newStrLen + 1);
    5224  m_pUserData = newStrDst;
    5225  }
    5226  }
    5227  else
    5228  {
    5229  m_pUserData = pUserData;
    5230  }
    5231 }
    5232 
    5233 void VmaAllocation_T::ChangeBlockAllocation(
    5234  VmaAllocator hAllocator,
    5235  VmaDeviceMemoryBlock* block,
    5236  VkDeviceSize offset)
    5237 {
    5238  VMA_ASSERT(block != VMA_NULL);
    5239  VMA_ASSERT(m_Type == ALLOCATION_TYPE_BLOCK);
    5240 
    5241  // Move mapping reference counter from old block to new block.
    5242  if(block != m_BlockAllocation.m_Block)
    5243  {
    5244  uint32_t mapRefCount = m_MapCount & ~MAP_COUNT_FLAG_PERSISTENT_MAP;
    5245  if(IsPersistentMap())
    5246  ++mapRefCount;
    5247  m_BlockAllocation.m_Block->Unmap(hAllocator, mapRefCount);
    5248  block->Map(hAllocator, mapRefCount, VMA_NULL);
    5249  }
    5250 
    5251  m_BlockAllocation.m_Block = block;
    5252  m_BlockAllocation.m_Offset = offset;
    5253 }
    5254 
    5255 VkDeviceSize VmaAllocation_T::GetOffset() const
    5256 {
    5257  switch(m_Type)
    5258  {
    5259  case ALLOCATION_TYPE_BLOCK:
    5260  return m_BlockAllocation.m_Offset;
    5261  case ALLOCATION_TYPE_DEDICATED:
    5262  return 0;
    5263  default:
    5264  VMA_ASSERT(0);
    5265  return 0;
    5266  }
    5267 }
    5268 
    5269 VkDeviceMemory VmaAllocation_T::GetMemory() const
    5270 {
    5271  switch(m_Type)
    5272  {
    5273  case ALLOCATION_TYPE_BLOCK:
    5274  return m_BlockAllocation.m_Block->GetDeviceMemory();
    5275  case ALLOCATION_TYPE_DEDICATED:
    5276  return m_DedicatedAllocation.m_hMemory;
    5277  default:
    5278  VMA_ASSERT(0);
    5279  return VK_NULL_HANDLE;
    5280  }
    5281 }
    5282 
    5283 uint32_t VmaAllocation_T::GetMemoryTypeIndex() const
    5284 {
    5285  switch(m_Type)
    5286  {
    5287  case ALLOCATION_TYPE_BLOCK:
    5288  return m_BlockAllocation.m_Block->GetMemoryTypeIndex();
    5289  case ALLOCATION_TYPE_DEDICATED:
    5290  return m_DedicatedAllocation.m_MemoryTypeIndex;
    5291  default:
    5292  VMA_ASSERT(0);
    5293  return UINT32_MAX;
    5294  }
    5295 }
    5296 
    5297 void* VmaAllocation_T::GetMappedData() const
    5298 {
    5299  switch(m_Type)
    5300  {
    5301  case ALLOCATION_TYPE_BLOCK:
    5302  if(m_MapCount != 0)
    5303  {
    5304  void* pBlockData = m_BlockAllocation.m_Block->GetMappedData();
    5305  VMA_ASSERT(pBlockData != VMA_NULL);
    5306  return (char*)pBlockData + m_BlockAllocation.m_Offset;
    5307  }
    5308  else
    5309  {
    5310  return VMA_NULL;
    5311  }
    5312  break;
    5313  case ALLOCATION_TYPE_DEDICATED:
    5314  VMA_ASSERT((m_DedicatedAllocation.m_pMappedData != VMA_NULL) == (m_MapCount != 0));
    5315  return m_DedicatedAllocation.m_pMappedData;
    5316  default:
    5317  VMA_ASSERT(0);
    5318  return VMA_NULL;
    5319  }
    5320 }
    5321 
    5322 bool VmaAllocation_T::CanBecomeLost() const
    5323 {
    5324  switch(m_Type)
    5325  {
    5326  case ALLOCATION_TYPE_BLOCK:
    5327  return m_BlockAllocation.m_CanBecomeLost;
    5328  case ALLOCATION_TYPE_DEDICATED:
    5329  return false;
    5330  default:
    5331  VMA_ASSERT(0);
    5332  return false;
    5333  }
    5334 }
    5335 
    5336 VmaPool VmaAllocation_T::GetPool() const
    5337 {
    5338  VMA_ASSERT(m_Type == ALLOCATION_TYPE_BLOCK);
    5339  return m_BlockAllocation.m_hPool;
    5340 }
    5341 
    5342 bool VmaAllocation_T::MakeLost(uint32_t currentFrameIndex, uint32_t frameInUseCount)
    5343 {
    5344  VMA_ASSERT(CanBecomeLost());
    5345 
    5346  /*
    5347  Warning: This is a carefully designed algorithm.
    5348  Do not modify unless you really know what you're doing :)
    5349  */
    5350  uint32_t localLastUseFrameIndex = GetLastUseFrameIndex();
    5351  for(;;)
    5352  {
    5353  if(localLastUseFrameIndex == VMA_FRAME_INDEX_LOST)
    5354  {
    5355  VMA_ASSERT(0);
    5356  return false;
    5357  }
    5358  else if(localLastUseFrameIndex + frameInUseCount >= currentFrameIndex)
    5359  {
    5360  return false;
    5361  }
    5362  else // Last use time earlier than current time.
    5363  {
    5364  if(CompareExchangeLastUseFrameIndex(localLastUseFrameIndex, VMA_FRAME_INDEX_LOST))
    5365  {
    5366  // Setting hAllocation.LastUseFrameIndex atomic to VMA_FRAME_INDEX_LOST is enough to mark it as LOST.
    5367  // Calling code just needs to unregister this allocation in owning VmaDeviceMemoryBlock.
    5368  return true;
    5369  }
    5370  }
    5371  }
    5372 }
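`MakeLost` above is a compare-exchange retry loop: it keeps re-reading the last-use frame index until it either observes the allocation as already lost or still in use (and bails out), or successfully swaps in the LOST sentinel, which is what marks the allocation lost for all threads. The same pattern with `std::atomic`, as a sketch — `FRAME_INDEX_LOST` here is a stand-in for `VMA_FRAME_INDEX_LOST`, and unlike the member function above this free function returns `false` rather than asserting when the allocation is already lost:

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>

static const uint32_t FRAME_INDEX_LOST = UINT32_MAX; // sentinel for "lost"

// Returns true only if this call transitioned the allocation to "lost".
// lastUseFrameIndex is shared between threads, hence the CAS retry loop.
bool MakeLostSketch(std::atomic<uint32_t>& lastUseFrameIndex,
                    uint32_t currentFrameIndex,
                    uint32_t frameInUseCount)
{
    uint32_t localLastUse = lastUseFrameIndex.load();
    for(;;)
    {
        if(localLastUse == FRAME_INDEX_LOST)
            return false; // another thread already made it lost
        if(localLastUse + frameInUseCount >= currentFrameIndex)
            return false; // still potentially in use by in-flight frames
        // Old enough: try to publish the LOST marker. On failure,
        // compare_exchange_weak refreshes localLastUse and we retry.
        if(lastUseFrameIndex.compare_exchange_weak(localLastUse, FRAME_INDEX_LOST))
            return true;
    }
}
```

Exactly one of any number of concurrent callers can win the CAS, which is why setting the atomic is sufficient to mark the allocation lost without a lock.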
    5373 
    5374 #if VMA_STATS_STRING_ENABLED
    5375 
    5376 // Correspond to values of enum VmaSuballocationType.
    5377 static const char* VMA_SUBALLOCATION_TYPE_NAMES[] = {
    5378  "FREE",
    5379  "UNKNOWN",
    5380  "BUFFER",
    5381  "IMAGE_UNKNOWN",
    5382  "IMAGE_LINEAR",
    5383  "IMAGE_OPTIMAL",
    5384 };
    5385 
    5386 void VmaAllocation_T::PrintParameters(class VmaJsonWriter& json) const
    5387 {
    5388  json.WriteString("Type");
    5389  json.WriteString(VMA_SUBALLOCATION_TYPE_NAMES[m_SuballocationType]);
    5390 
    5391  json.WriteString("Size");
    5392  json.WriteNumber(m_Size);
    5393 
    5394  if(m_pUserData != VMA_NULL)
    5395  {
    5396  json.WriteString("UserData");
    5397  if(IsUserDataString())
    5398  {
    5399  json.WriteString((const char*)m_pUserData);
    5400  }
    5401  else
    5402  {
    5403  json.BeginString();
    5404  json.ContinueString_Pointer(m_pUserData);
    5405  json.EndString();
    5406  }
    5407  }
    5408 
    5409  json.WriteString("CreationFrameIndex");
    5410  json.WriteNumber(m_CreationFrameIndex);
    5411 
    5412  json.WriteString("LastUseFrameIndex");
    5413  json.WriteNumber(GetLastUseFrameIndex());
    5414 
    5415  if(m_BufferImageUsage != 0)
    5416  {
    5417  json.WriteString("Usage");
    5418  json.WriteNumber(m_BufferImageUsage);
    5419  }
    5420 }
    5421 
    5422 #endif
    5423 
    5424 void VmaAllocation_T::FreeUserDataString(VmaAllocator hAllocator)
    5425 {
    5426  VMA_ASSERT(IsUserDataString());
    5427  if(m_pUserData != VMA_NULL)
    5428  {
    5429  char* const oldStr = (char*)m_pUserData;
    5430  const size_t oldStrLen = strlen(oldStr);
    5431  vma_delete_array(hAllocator, oldStr, oldStrLen + 1);
    5432  m_pUserData = VMA_NULL;
    5433  }
    5434 }
    5435 
    5436 void VmaAllocation_T::BlockAllocMap()
    5437 {
    5438  VMA_ASSERT(GetType() == ALLOCATION_TYPE_BLOCK);
    5439 
    5440  if((m_MapCount & ~MAP_COUNT_FLAG_PERSISTENT_MAP) < 0x7F)
    5441  {
    5442  ++m_MapCount;
    5443  }
    5444  else
    5445  {
    5446  VMA_ASSERT(0 && "Allocation mapped too many times simultaneously.");
    5447  }
    5448 }
    5449 
    5450 void VmaAllocation_T::BlockAllocUnmap()
    5451 {
    5452  VMA_ASSERT(GetType() == ALLOCATION_TYPE_BLOCK);
    5453 
    5454  if((m_MapCount & ~MAP_COUNT_FLAG_PERSISTENT_MAP) != 0)
    5455  {
    5456  --m_MapCount;
    5457  }
    5458  else
    5459  {
    5460  VMA_ASSERT(0 && "Unmapping allocation not previously mapped.");
    5461  }
    5462 }
    5463 
    5464 VkResult VmaAllocation_T::DedicatedAllocMap(VmaAllocator hAllocator, void** ppData)
    5465 {
    5466  VMA_ASSERT(GetType() == ALLOCATION_TYPE_DEDICATED);
    5467 
    5468  if(m_MapCount != 0)
    5469  {
    5470  if((m_MapCount & ~MAP_COUNT_FLAG_PERSISTENT_MAP) < 0x7F)
    5471  {
    5472  VMA_ASSERT(m_DedicatedAllocation.m_pMappedData != VMA_NULL);
    5473  *ppData = m_DedicatedAllocation.m_pMappedData;
    5474  ++m_MapCount;
    5475  return VK_SUCCESS;
    5476  }
    5477  else
    5478  {
    5479  VMA_ASSERT(0 && "Dedicated allocation mapped too many times simultaneously.");
    5480  return VK_ERROR_MEMORY_MAP_FAILED;
    5481  }
    5482  }
    5483  else
    5484  {
    5485  VkResult result = (*hAllocator->GetVulkanFunctions().vkMapMemory)(
    5486  hAllocator->m_hDevice,
    5487  m_DedicatedAllocation.m_hMemory,
    5488  0, // offset
    5489  VK_WHOLE_SIZE,
    5490  0, // flags
    5491  ppData);
    5492  if(result == VK_SUCCESS)
    5493  {
    5494  m_DedicatedAllocation.m_pMappedData = *ppData;
    5495  m_MapCount = 1;
    5496  }
    5497  return result;
    5498  }
    5499 }
    5500 
    5501 void VmaAllocation_T::DedicatedAllocUnmap(VmaAllocator hAllocator)
    5502 {
    5503  VMA_ASSERT(GetType() == ALLOCATION_TYPE_DEDICATED);
    5504 
    5505  if((m_MapCount & ~MAP_COUNT_FLAG_PERSISTENT_MAP) != 0)
    5506  {
    5507  --m_MapCount;
    5508  if(m_MapCount == 0)
    5509  {
    5510  m_DedicatedAllocation.m_pMappedData = VMA_NULL;
    5511  (*hAllocator->GetVulkanFunctions().vkUnmapMemory)(
    5512  hAllocator->m_hDevice,
    5513  m_DedicatedAllocation.m_hMemory);
    5514  }
    5515  }
    5516  else
    5517  {
    5518  VMA_ASSERT(0 && "Unmapping dedicated allocation not previously mapped.");
    5519  }
    5520 }
    5521 
    5522 #if VMA_STATS_STRING_ENABLED
    5523 
    5524 static void VmaPrintStatInfo(VmaJsonWriter& json, const VmaStatInfo& stat)
    5525 {
    5526  json.BeginObject();
    5527 
    5528  json.WriteString("Blocks");
    5529  json.WriteNumber(stat.blockCount);
    5530 
    5531  json.WriteString("Allocations");
    5532  json.WriteNumber(stat.allocationCount);
    5533 
    5534  json.WriteString("UnusedRanges");
    5535  json.WriteNumber(stat.unusedRangeCount);
    5536 
    5537  json.WriteString("UsedBytes");
    5538  json.WriteNumber(stat.usedBytes);
    5539 
    5540  json.WriteString("UnusedBytes");
    5541  json.WriteNumber(stat.unusedBytes);
    5542 
    5543  if(stat.allocationCount > 1)
    5544  {
    5545  json.WriteString("AllocationSize");
    5546  json.BeginObject(true);
    5547  json.WriteString("Min");
    5548  json.WriteNumber(stat.allocationSizeMin);
    5549  json.WriteString("Avg");
    5550  json.WriteNumber(stat.allocationSizeAvg);
    5551  json.WriteString("Max");
    5552  json.WriteNumber(stat.allocationSizeMax);
    5553  json.EndObject();
    5554  }
    5555 
    5556  if(stat.unusedRangeCount > 1)
    5557  {
    5558  json.WriteString("UnusedRangeSize");
    5559  json.BeginObject(true);
    5560  json.WriteString("Min");
    5561  json.WriteNumber(stat.unusedRangeSizeMin);
    5562  json.WriteString("Avg");
    5563  json.WriteNumber(stat.unusedRangeSizeAvg);
    5564  json.WriteString("Max");
    5565  json.WriteNumber(stat.unusedRangeSizeMax);
    5566  json.EndObject();
    5567  }
    5568 
    5569  json.EndObject();
    5570 }
    5571 
    5572 #endif // #if VMA_STATS_STRING_ENABLED
    5573 
    5574 struct VmaSuballocationItemSizeLess
    5575 {
    5576  bool operator()(
    5577  const VmaSuballocationList::iterator lhs,
    5578  const VmaSuballocationList::iterator rhs) const
    5579  {
    5580  return lhs->size < rhs->size;
    5581  }
    5582  bool operator()(
    5583  const VmaSuballocationList::iterator lhs,
    5584  VkDeviceSize rhsSize) const
    5585  {
    5586  return lhs->size < rhsSize;
    5587  }
    5588 };
    5589 
    5591 // class VmaBlockMetadata
    5592 
    5593 VmaBlockMetadata::VmaBlockMetadata(VmaAllocator hAllocator) :
    5594  m_Size(0),
    5595  m_FreeCount(0),
    5596  m_SumFreeSize(0),
    5597  m_Suballocations(VmaStlAllocator<VmaSuballocation>(hAllocator->GetAllocationCallbacks())),
    5598  m_FreeSuballocationsBySize(VmaStlAllocator<VmaSuballocationList::iterator>(hAllocator->GetAllocationCallbacks()))
    5599 {
    5600 }
    5601 
    5602 VmaBlockMetadata::~VmaBlockMetadata()
    5603 {
    5604 }
    5605 
    5606 void VmaBlockMetadata::Init(VkDeviceSize size)
    5607 {
    5608  m_Size = size;
    5609  m_FreeCount = 1;
    5610  m_SumFreeSize = size;
    5611 
    5612  VmaSuballocation suballoc = {};
    5613  suballoc.offset = 0;
    5614  suballoc.size = size;
    5615  suballoc.type = VMA_SUBALLOCATION_TYPE_FREE;
    5616  suballoc.hAllocation = VK_NULL_HANDLE;
    5617 
    5618  m_Suballocations.push_back(suballoc);
    5619  VmaSuballocationList::iterator suballocItem = m_Suballocations.end();
    5620  --suballocItem;
    5621  m_FreeSuballocationsBySize.push_back(suballocItem);
    5622 }
    5623 
    5624 bool VmaBlockMetadata::Validate() const
    5625 {
    5626  if(m_Suballocations.empty())
    5627  {
    5628  return false;
    5629  }
    5630 
    5631  // Expected offset of new suballocation as calculated from previous ones.
    5632  VkDeviceSize calculatedOffset = 0;
    5633  // Expected number of free suballocations as calculated from traversing their list.
    5634  uint32_t calculatedFreeCount = 0;
    5635  // Expected sum size of free suballocations as calculated from traversing their list.
    5636  VkDeviceSize calculatedSumFreeSize = 0;
    5637  // Expected number of free suballocations that should be registered in
    5638  // m_FreeSuballocationsBySize calculated from traversing their list.
    5639  size_t freeSuballocationsToRegister = 0;
    5640  // True if previous visited suballocation was free.
    5641  bool prevFree = false;
    5642 
    5643  for(VmaSuballocationList::const_iterator suballocItem = m_Suballocations.cbegin();
    5644  suballocItem != m_Suballocations.cend();
    5645  ++suballocItem)
    5646  {
    5647  const VmaSuballocation& subAlloc = *suballocItem;
    5648 
    5649  // Actual offset of this suballocation doesn't match expected one.
    5650  if(subAlloc.offset != calculatedOffset)
    5651  {
    5652  return false;
    5653  }
    5654 
    5655  const bool currFree = (subAlloc.type == VMA_SUBALLOCATION_TYPE_FREE);
    5656  // Two adjacent free suballocations are invalid. They should be merged.
    5657  if(prevFree && currFree)
    5658  {
    5659  return false;
    5660  }
    5661 
    5662  if(currFree != (subAlloc.hAllocation == VK_NULL_HANDLE))
    5663  {
    5664  return false;
    5665  }
    5666 
    5667  if(currFree)
    5668  {
    5669  calculatedSumFreeSize += subAlloc.size;
    5670  ++calculatedFreeCount;
    5671  if(subAlloc.size >= VMA_MIN_FREE_SUBALLOCATION_SIZE_TO_REGISTER)
    5672  {
    5673  ++freeSuballocationsToRegister;
    5674  }
    5675 
    5676  // Margin required between allocations - every free space must be at least that large.
    5677  if(subAlloc.size < VMA_DEBUG_MARGIN)
    5678  {
    5679  return false;
    5680  }
    5681  }
    5682  else
    5683  {
    5684  if(subAlloc.hAllocation->GetOffset() != subAlloc.offset)
    5685  {
    5686  return false;
    5687  }
    5688  if(subAlloc.hAllocation->GetSize() != subAlloc.size)
    5689  {
    5690  return false;
    5691  }
    5692 
    5693  // Margin required between allocations - previous allocation must be free.
    5694  if(VMA_DEBUG_MARGIN > 0 && !prevFree)
    5695  {
    5696  return false;
    5697  }
    5698  }
    5699 
    5700  calculatedOffset += subAlloc.size;
    5701  prevFree = currFree;
    5702  }
    5703 
    5704  // Number of free suballocations registered in m_FreeSuballocationsBySize doesn't
    5705  // match expected one.
    5706  if(m_FreeSuballocationsBySize.size() != freeSuballocationsToRegister)
    5707  {
    5708  return false;
    5709  }
    5710 
    5711  VkDeviceSize lastSize = 0;
    5712  for(size_t i = 0; i < m_FreeSuballocationsBySize.size(); ++i)
    5713  {
    5714  VmaSuballocationList::iterator suballocItem = m_FreeSuballocationsBySize[i];
    5715 
    5716  // Only free suballocations can be registered in m_FreeSuballocationsBySize.
    5717  if(suballocItem->type != VMA_SUBALLOCATION_TYPE_FREE)
    5718  {
    5719  return false;
    5720  }
    5721  // They must be sorted by size ascending.
    5722  if(suballocItem->size < lastSize)
    5723  {
    5724  return false;
    5725  }
    5726 
    5727  lastSize = suballocItem->size;
    5728  }
    5729 
    5730  // Check if totals match calculated values.
    5731  if(!ValidateFreeSuballocationList() ||
    5732  (calculatedOffset != m_Size) ||
    5733  (calculatedSumFreeSize != m_SumFreeSize) ||
    5734  (calculatedFreeCount != m_FreeCount))
    5735  {
    5736  return false;
    5737  }
    5738 
    5739  return true;
    5740 }
    5741 
    5742 VkDeviceSize VmaBlockMetadata::GetUnusedRangeSizeMax() const
    5743 {
    5744  if(!m_FreeSuballocationsBySize.empty())
    5745  {
    5746  return m_FreeSuballocationsBySize.back()->size;
    5747  }
    5748  else
    5749  {
    5750  return 0;
    5751  }
    5752 }
    5753 
    5754 bool VmaBlockMetadata::IsEmpty() const
    5755 {
    5756  return (m_Suballocations.size() == 1) && (m_FreeCount == 1);
    5757 }
    5758 
    5759 void VmaBlockMetadata::CalcAllocationStatInfo(VmaStatInfo& outInfo) const
    5760 {
    5761  outInfo.blockCount = 1;
    5762 
    5763  const uint32_t rangeCount = (uint32_t)m_Suballocations.size();
    5764  outInfo.allocationCount = rangeCount - m_FreeCount;
    5765  outInfo.unusedRangeCount = m_FreeCount;
    5766 
    5767  outInfo.unusedBytes = m_SumFreeSize;
    5768  outInfo.usedBytes = m_Size - outInfo.unusedBytes;
    5769 
    5770  outInfo.allocationSizeMin = UINT64_MAX;
    5771  outInfo.allocationSizeMax = 0;
    5772  outInfo.unusedRangeSizeMin = UINT64_MAX;
    5773  outInfo.unusedRangeSizeMax = 0;
    5774 
    5775  for(VmaSuballocationList::const_iterator suballocItem = m_Suballocations.cbegin();
    5776  suballocItem != m_Suballocations.cend();
    5777  ++suballocItem)
    5778  {
    5779  const VmaSuballocation& suballoc = *suballocItem;
    5780  if(suballoc.type != VMA_SUBALLOCATION_TYPE_FREE)
    5781  {
    5782  outInfo.allocationSizeMin = VMA_MIN(outInfo.allocationSizeMin, suballoc.size);
    5783  outInfo.allocationSizeMax = VMA_MAX(outInfo.allocationSizeMax, suballoc.size);
    5784  }
    5785  else
    5786  {
    5787  outInfo.unusedRangeSizeMin = VMA_MIN(outInfo.unusedRangeSizeMin, suballoc.size);
    5788  outInfo.unusedRangeSizeMax = VMA_MAX(outInfo.unusedRangeSizeMax, suballoc.size);
    5789  }
    5790  }
    5791 }
    5792 
    5793 void VmaBlockMetadata::AddPoolStats(VmaPoolStats& inoutStats) const
    5794 {
    5795  const uint32_t rangeCount = (uint32_t)m_Suballocations.size();
    5796 
    5797  inoutStats.size += m_Size;
    5798  inoutStats.unusedSize += m_SumFreeSize;
    5799  inoutStats.allocationCount += rangeCount - m_FreeCount;
    5800  inoutStats.unusedRangeCount += m_FreeCount;
    5801  inoutStats.unusedRangeSizeMax = VMA_MAX(inoutStats.unusedRangeSizeMax, GetUnusedRangeSizeMax());
    5802 }
    5803 
    5804 #if VMA_STATS_STRING_ENABLED
    5805 
    5806 void VmaBlockMetadata::PrintDetailedMap(class VmaJsonWriter& json) const
    5807 {
    5808  json.BeginObject();
    5809 
    5810  json.WriteString("TotalBytes");
    5811  json.WriteNumber(m_Size);
    5812 
    5813  json.WriteString("UnusedBytes");
    5814  json.WriteNumber(m_SumFreeSize);
    5815 
    5816  json.WriteString("Allocations");
    5817  json.WriteNumber((uint64_t)m_Suballocations.size() - m_FreeCount);
    5818 
    5819  json.WriteString("UnusedRanges");
    5820  json.WriteNumber(m_FreeCount);
    5821 
    5822  json.WriteString("Suballocations");
    5823  json.BeginArray();
    5824  size_t i = 0;
    5825  for(VmaSuballocationList::const_iterator suballocItem = m_Suballocations.cbegin();
    5826  suballocItem != m_Suballocations.cend();
    5827  ++suballocItem, ++i)
    5828  {
    5829  json.BeginObject(true);
    5830 
    5831  json.WriteString("Offset");
    5832  json.WriteNumber(suballocItem->offset);
    5833 
    5834  if(suballocItem->type == VMA_SUBALLOCATION_TYPE_FREE)
    5835  {
    5836  json.WriteString("Type");
    5837  json.WriteString(VMA_SUBALLOCATION_TYPE_NAMES[VMA_SUBALLOCATION_TYPE_FREE]);
    5838 
    5839  json.WriteString("Size");
    5840  json.WriteNumber(suballocItem->size);
    5841  }
    5842  else
    5843  {
    5844  suballocItem->hAllocation->PrintParameters(json);
    5845  }
    5846 
    5847  json.EndObject();
    5848  }
    5849  json.EndArray();
    5850 
    5851  json.EndObject();
    5852 }
    5853 
    5854 #endif // #if VMA_STATS_STRING_ENABLED
    5855 
    5856 /*
    5857 How many suitable free suballocations to analyze before choosing the best one.
    5858 - Set to 1 to use First-Fit algorithm - first suitable free suballocation will
    5859  be chosen.
    5860 - Set to UINT32_MAX to use Best-Fit/Worst-Fit algorithm - all suitable free
    5861  suballocations will be analyzed and the best one will be chosen.
    5862 - Any other value is also acceptable.
    5863 */
    5864 //static const uint32_t MAX_SUITABLE_SUBALLOCATIONS_TO_CHECK = 8;
    5865 
    5866 bool VmaBlockMetadata::CreateAllocationRequest(
    5867  uint32_t currentFrameIndex,
    5868  uint32_t frameInUseCount,
    5869  VkDeviceSize bufferImageGranularity,
    5870  VkDeviceSize allocSize,
    5871  VkDeviceSize allocAlignment,
    5872  VmaSuballocationType allocType,
    5873  bool canMakeOtherLost,
    5874  VmaAllocationRequest* pAllocationRequest)
    5875 {
    5876  VMA_ASSERT(allocSize > 0);
    5877  VMA_ASSERT(allocType != VMA_SUBALLOCATION_TYPE_FREE);
    5878  VMA_ASSERT(pAllocationRequest != VMA_NULL);
    5879  VMA_HEAVY_ASSERT(Validate());
    5880 
    5881  // There is not enough total free space in this block to fulfill the request: Early return.
    5882  if(canMakeOtherLost == false && m_SumFreeSize < allocSize + 2 * VMA_DEBUG_MARGIN)
    5883  {
    5884  return false;
    5885  }
    5886 
    5887  // New algorithm, efficiently searching freeSuballocationsBySize.
    5888  const size_t freeSuballocCount = m_FreeSuballocationsBySize.size();
    5889  if(freeSuballocCount > 0)
    5890  {
    5891  if(VMA_BEST_FIT)
    5892  {
    5893  // Find first free suballocation with size not less than allocSize + 2 * VMA_DEBUG_MARGIN.
    5894  VmaSuballocationList::iterator* const it = VmaBinaryFindFirstNotLess(
    5895  m_FreeSuballocationsBySize.data(),
    5896  m_FreeSuballocationsBySize.data() + freeSuballocCount,
    5897  allocSize + 2 * VMA_DEBUG_MARGIN,
    5898  VmaSuballocationItemSizeLess());
    5899  size_t index = it - m_FreeSuballocationsBySize.data();
    5900  for(; index < freeSuballocCount; ++index)
    5901  {
    5902  if(CheckAllocation(
    5903  currentFrameIndex,
    5904  frameInUseCount,
    5905  bufferImageGranularity,
    5906  allocSize,
    5907  allocAlignment,
    5908  allocType,
    5909  m_FreeSuballocationsBySize[index],
    5910  false, // canMakeOtherLost
    5911  &pAllocationRequest->offset,
    5912  &pAllocationRequest->itemsToMakeLostCount,
    5913  &pAllocationRequest->sumFreeSize,
    5914  &pAllocationRequest->sumItemSize))
    5915  {
    5916  pAllocationRequest->item = m_FreeSuballocationsBySize[index];
    5917  return true;
    5918  }
    5919  }
    5920  }
    5921  else
    5922  {
    5923  // Search starting from the biggest suballocations.
    5924  for(size_t index = freeSuballocCount; index--; )
    5925  {
    5926  if(CheckAllocation(
    5927  currentFrameIndex,
    5928  frameInUseCount,
    5929  bufferImageGranularity,
    5930  allocSize,
    5931  allocAlignment,
    5932  allocType,
    5933  m_FreeSuballocationsBySize[index],
    5934  false, // canMakeOtherLost
    5935  &pAllocationRequest->offset,
    5936  &pAllocationRequest->itemsToMakeLostCount,
    5937  &pAllocationRequest->sumFreeSize,
    5938  &pAllocationRequest->sumItemSize))
    5939  {
    5940  pAllocationRequest->item = m_FreeSuballocationsBySize[index];
    5941  return true;
    5942  }
    5943  }
    5944  }
    5945  }
    5946 
    5947  if(canMakeOtherLost)
    5948  {
    5949  // Brute-force algorithm. TODO: Come up with something better.
    5950 
    5951  pAllocationRequest->sumFreeSize = VK_WHOLE_SIZE;
    5952  pAllocationRequest->sumItemSize = VK_WHOLE_SIZE;
    5953 
    5954  VmaAllocationRequest tmpAllocRequest = {};
    5955  for(VmaSuballocationList::iterator suballocIt = m_Suballocations.begin();
    5956  suballocIt != m_Suballocations.end();
    5957  ++suballocIt)
    5958  {
    5959  if(suballocIt->type == VMA_SUBALLOCATION_TYPE_FREE ||
    5960  suballocIt->hAllocation->CanBecomeLost())
    5961  {
    5962  if(CheckAllocation(
    5963  currentFrameIndex,
    5964  frameInUseCount,
    5965  bufferImageGranularity,
    5966  allocSize,
    5967  allocAlignment,
    5968  allocType,
    5969  suballocIt,
    5970  canMakeOtherLost,
    5971  &tmpAllocRequest.offset,
    5972  &tmpAllocRequest.itemsToMakeLostCount,
    5973  &tmpAllocRequest.sumFreeSize,
    5974  &tmpAllocRequest.sumItemSize))
    5975  {
    5976  tmpAllocRequest.item = suballocIt;
    5977 
    5978  if(tmpAllocRequest.CalcCost() < pAllocationRequest->CalcCost())
    5979  {
    5980  *pAllocationRequest = tmpAllocRequest;
    5981  }
    5982  }
    5983  }
    5984  }
    5985 
    5986  if(pAllocationRequest->sumItemSize != VK_WHOLE_SIZE)
    5987  {
    5988  return true;
    5989  }
    5990  }
    5991 
    5992  return false;
    5993 }
    5994 
    5995 bool VmaBlockMetadata::MakeRequestedAllocationsLost(
    5996  uint32_t currentFrameIndex,
    5997  uint32_t frameInUseCount,
    5998  VmaAllocationRequest* pAllocationRequest)
    5999 {
    6000  while(pAllocationRequest->itemsToMakeLostCount > 0)
    6001  {
    6002  if(pAllocationRequest->item->type == VMA_SUBALLOCATION_TYPE_FREE)
    6003  {
    6004  ++pAllocationRequest->item;
    6005  }
    6006  VMA_ASSERT(pAllocationRequest->item != m_Suballocations.end());
    6007  VMA_ASSERT(pAllocationRequest->item->hAllocation != VK_NULL_HANDLE);
    6008  VMA_ASSERT(pAllocationRequest->item->hAllocation->CanBecomeLost());
    6009  if(pAllocationRequest->item->hAllocation->MakeLost(currentFrameIndex, frameInUseCount))
    6010  {
    6011  pAllocationRequest->item = FreeSuballocation(pAllocationRequest->item);
    6012  --pAllocationRequest->itemsToMakeLostCount;
    6013  }
    6014  else
    6015  {
    6016  return false;
    6017  }
    6018  }
    6019 
    6020  VMA_HEAVY_ASSERT(Validate());
    6021  VMA_ASSERT(pAllocationRequest->item != m_Suballocations.end());
    6022  VMA_ASSERT(pAllocationRequest->item->type == VMA_SUBALLOCATION_TYPE_FREE);
    6023 
    6024  return true;
    6025 }
    6026 
    6027 uint32_t VmaBlockMetadata::MakeAllocationsLost(uint32_t currentFrameIndex, uint32_t frameInUseCount)
    6028 {
    6029  uint32_t lostAllocationCount = 0;
    6030  for(VmaSuballocationList::iterator it = m_Suballocations.begin();
    6031  it != m_Suballocations.end();
    6032  ++it)
    6033  {
    6034  if(it->type != VMA_SUBALLOCATION_TYPE_FREE &&
    6035  it->hAllocation->CanBecomeLost() &&
    6036  it->hAllocation->MakeLost(currentFrameIndex, frameInUseCount))
    6037  {
    6038  it = FreeSuballocation(it);
    6039  ++lostAllocationCount;
    6040  }
    6041  }
    6042  return lostAllocationCount;
    6043 }
    6044 
    6045 VkResult VmaBlockMetadata::CheckCorruption(const void* pBlockData)
    6046 {
    6047  for(VmaSuballocationList::iterator it = m_Suballocations.begin();
    6048  it != m_Suballocations.end();
    6049  ++it)
    6050  {
    6051  if(it->type != VMA_SUBALLOCATION_TYPE_FREE)
    6052  {
    6053  if(!VmaValidateMagicValue(pBlockData, it->offset - VMA_DEBUG_MARGIN))
    6054  {
    6055  VMA_ASSERT(0 && "MEMORY CORRUPTION DETECTED BEFORE VALIDATED ALLOCATION!");
    6056  return VK_ERROR_VALIDATION_FAILED_EXT;
    6057  }
    6058  if(!VmaValidateMagicValue(pBlockData, it->offset + it->size))
    6059  {
    6060  VMA_ASSERT(0 && "MEMORY CORRUPTION DETECTED AFTER VALIDATED ALLOCATION!");
    6061  return VK_ERROR_VALIDATION_FAILED_EXT;
    6062  }
    6063  }
    6064  }
    6065 
    6066  return VK_SUCCESS;
    6067 }
    6068 
    6069 void VmaBlockMetadata::Alloc(
    6070  const VmaAllocationRequest& request,
    6071  VmaSuballocationType type,
    6072  VkDeviceSize allocSize,
    6073  VmaAllocation hAllocation)
    6074 {
    6075  VMA_ASSERT(request.item != m_Suballocations.end());
    6076  VmaSuballocation& suballoc = *request.item;
    6077  // Given suballocation is a free block.
    6078  VMA_ASSERT(suballoc.type == VMA_SUBALLOCATION_TYPE_FREE);
    6079  // Given offset is inside this suballocation.
    6080  VMA_ASSERT(request.offset >= suballoc.offset);
    6081  const VkDeviceSize paddingBegin = request.offset - suballoc.offset;
    6082  VMA_ASSERT(suballoc.size >= paddingBegin + allocSize);
    6083  const VkDeviceSize paddingEnd = suballoc.size - paddingBegin - allocSize;
    6084 
    6085  // Unregister this free suballocation from m_FreeSuballocationsBySize and update
    6086  // it to become used.
    6087  UnregisterFreeSuballocation(request.item);
    6088 
    6089  suballoc.offset = request.offset;
    6090  suballoc.size = allocSize;
    6091  suballoc.type = type;
    6092  suballoc.hAllocation = hAllocation;
    6093 
    6094  // If there are any free bytes remaining at the end, insert new free suballocation after current one.
    6095  if(paddingEnd)
    6096  {
    6097  VmaSuballocation paddingSuballoc = {};
    6098  paddingSuballoc.offset = request.offset + allocSize;
    6099  paddingSuballoc.size = paddingEnd;
    6100  paddingSuballoc.type = VMA_SUBALLOCATION_TYPE_FREE;
    6101  VmaSuballocationList::iterator next = request.item;
    6102  ++next;
    6103  const VmaSuballocationList::iterator paddingEndItem =
    6104  m_Suballocations.insert(next, paddingSuballoc);
    6105  RegisterFreeSuballocation(paddingEndItem);
    6106  }
    6107 
    6108  // If there are any free bytes remaining at the beginning, insert new free suballocation before current one.
    6109  if(paddingBegin)
    6110  {
    6111  VmaSuballocation paddingSuballoc = {};
    6112  paddingSuballoc.offset = request.offset - paddingBegin;
    6113  paddingSuballoc.size = paddingBegin;
    6114  paddingSuballoc.type = VMA_SUBALLOCATION_TYPE_FREE;
    6115  const VmaSuballocationList::iterator paddingBeginItem =
    6116  m_Suballocations.insert(request.item, paddingSuballoc);
    6117  RegisterFreeSuballocation(paddingBeginItem);
    6118  }
    6119 
    6120  // Update totals.
    6121  m_FreeCount = m_FreeCount - 1;
    6122  if(paddingBegin > 0)
    6123  {
    6124  ++m_FreeCount;
    6125  }
    6126  if(paddingEnd > 0)
    6127  {
    6128  ++m_FreeCount;
    6129  }
    6130  m_SumFreeSize -= allocSize;
    6131 }
    6132 
    6133 void VmaBlockMetadata::Free(const VmaAllocation allocation)
    6134 {
    6135  for(VmaSuballocationList::iterator suballocItem = m_Suballocations.begin();
    6136  suballocItem != m_Suballocations.end();
    6137  ++suballocItem)
    6138  {
    6139  VmaSuballocation& suballoc = *suballocItem;
    6140  if(suballoc.hAllocation == allocation)
    6141  {
    6142  FreeSuballocation(suballocItem);
    6143  VMA_HEAVY_ASSERT(Validate());
    6144  return;
    6145  }
    6146  }
    6147  VMA_ASSERT(0 && "Not found!");
    6148 }
    6149 
    6150 void VmaBlockMetadata::FreeAtOffset(VkDeviceSize offset)
    6151 {
    6152  for(VmaSuballocationList::iterator suballocItem = m_Suballocations.begin();
    6153  suballocItem != m_Suballocations.end();
    6154  ++suballocItem)
    6155  {
    6156  VmaSuballocation& suballoc = *suballocItem;
    6157  if(suballoc.offset == offset)
    6158  {
    6159  FreeSuballocation(suballocItem);
    6160  return;
    6161  }
    6162  }
    6163  VMA_ASSERT(0 && "Not found!");
    6164 }
    6165 
    6166 bool VmaBlockMetadata::ValidateFreeSuballocationList() const
    6167 {
    6168  VkDeviceSize lastSize = 0;
    6169  for(size_t i = 0, count = m_FreeSuballocationsBySize.size(); i < count; ++i)
    6170  {
    6171  const VmaSuballocationList::iterator it = m_FreeSuballocationsBySize[i];
    6172 
    6173  if(it->type != VMA_SUBALLOCATION_TYPE_FREE)
    6174  {
    6175  VMA_ASSERT(0);
    6176  return false;
    6177  }
    6178  if(it->size < VMA_MIN_FREE_SUBALLOCATION_SIZE_TO_REGISTER)
    6179  {
    6180  VMA_ASSERT(0);
    6181  return false;
    6182  }
    6183  if(it->size < lastSize)
    6184  {
    6185  VMA_ASSERT(0);
    6186  return false;
    6187  }
    6188 
    6189  lastSize = it->size;
    6190  }
    6191  return true;
    6192 }
    6193 
    6194 bool VmaBlockMetadata::CheckAllocation(
    6195  uint32_t currentFrameIndex,
    6196  uint32_t frameInUseCount,
    6197  VkDeviceSize bufferImageGranularity,
    6198  VkDeviceSize allocSize,
    6199  VkDeviceSize allocAlignment,
    6200  VmaSuballocationType allocType,
    6201  VmaSuballocationList::const_iterator suballocItem,
    6202  bool canMakeOtherLost,
    6203  VkDeviceSize* pOffset,
    6204  size_t* itemsToMakeLostCount,
    6205  VkDeviceSize* pSumFreeSize,
    6206  VkDeviceSize* pSumItemSize) const
    6207 {
    6208  VMA_ASSERT(allocSize > 0);
    6209  VMA_ASSERT(allocType != VMA_SUBALLOCATION_TYPE_FREE);
    6210  VMA_ASSERT(suballocItem != m_Suballocations.cend());
    6211  VMA_ASSERT(pOffset != VMA_NULL);
    6212 
    6213  *itemsToMakeLostCount = 0;
    6214  *pSumFreeSize = 0;
    6215  *pSumItemSize = 0;
    6216 
    6217  if(canMakeOtherLost)
    6218  {
    6219  if(suballocItem->type == VMA_SUBALLOCATION_TYPE_FREE)
    6220  {
    6221  *pSumFreeSize = suballocItem->size;
    6222  }
    6223  else
    6224  {
    6225  if(suballocItem->hAllocation->CanBecomeLost() &&
    6226  suballocItem->hAllocation->GetLastUseFrameIndex() + frameInUseCount < currentFrameIndex)
    6227  {
    6228  ++*itemsToMakeLostCount;
    6229  *pSumItemSize = suballocItem->size;
    6230  }
    6231  else
    6232  {
    6233  return false;
    6234  }
    6235  }
    6236 
    6237  // Remaining size is too small for this request: Early return.
    6238  if(m_Size - suballocItem->offset < allocSize)
    6239  {
    6240  return false;
    6241  }
    6242 
    6243  // Start from offset equal to beginning of this suballocation.
    6244  *pOffset = suballocItem->offset;
    6245 
    6246  // Apply VMA_DEBUG_MARGIN at the beginning.
    6247  if(VMA_DEBUG_MARGIN > 0)
    6248  {
    6249  *pOffset += VMA_DEBUG_MARGIN;
    6250  }
    6251 
    6252  // Apply alignment.
    6253  *pOffset = VmaAlignUp(*pOffset, allocAlignment);
    6254 
    6255  // Check previous suballocations for BufferImageGranularity conflicts.
    6256  // Make bigger alignment if necessary.
    6257  if(bufferImageGranularity > 1)
    6258  {
    6259  bool bufferImageGranularityConflict = false;
    6260  VmaSuballocationList::const_iterator prevSuballocItem = suballocItem;
    6261  while(prevSuballocItem != m_Suballocations.cbegin())
    6262  {
    6263  --prevSuballocItem;
    6264  const VmaSuballocation& prevSuballoc = *prevSuballocItem;
    6265  if(VmaBlocksOnSamePage(prevSuballoc.offset, prevSuballoc.size, *pOffset, bufferImageGranularity))
    6266  {
    6267  if(VmaIsBufferImageGranularityConflict(prevSuballoc.type, allocType))
    6268  {
    6269  bufferImageGranularityConflict = true;
    6270  break;
    6271  }
    6272  }
    6273  else
    6274  // Already on previous page.
    6275  break;
    6276  }
    6277  if(bufferImageGranularityConflict)
    6278  {
    6279  *pOffset = VmaAlignUp(*pOffset, bufferImageGranularity);
    6280  }
    6281  }
    6282 
    6283  // Now that we have final *pOffset, check if we are past suballocItem.
    6284  // If yes, return false - this function should be called for another suballocItem as starting point.
    6285  if(*pOffset >= suballocItem->offset + suballocItem->size)
    6286  {
    6287  return false;
    6288  }
    6289 
    6290  // Calculate padding at the beginning based on current offset.
    6291  const VkDeviceSize paddingBegin = *pOffset - suballocItem->offset;
    6292 
    6293  // Calculate required margin at the end.
    6294  const VkDeviceSize requiredEndMargin = VMA_DEBUG_MARGIN;
    6295 
    6296  const VkDeviceSize totalSize = paddingBegin + allocSize + requiredEndMargin;
    6297  // Another early return check.
    6298  if(suballocItem->offset + totalSize > m_Size)
    6299  {
    6300  return false;
    6301  }
    6302 
    6303  // Advance lastSuballocItem until desired size is reached.
    6304  // Update itemsToMakeLostCount.
    6305  VmaSuballocationList::const_iterator lastSuballocItem = suballocItem;
    6306  if(totalSize > suballocItem->size)
    6307  {
    6308  VkDeviceSize remainingSize = totalSize - suballocItem->size;
    6309  while(remainingSize > 0)
    6310  {
    6311  ++lastSuballocItem;
    6312  if(lastSuballocItem == m_Suballocations.cend())
    6313  {
    6314  return false;
    6315  }
    6316  if(lastSuballocItem->type == VMA_SUBALLOCATION_TYPE_FREE)
    6317  {
    6318  *pSumFreeSize += lastSuballocItem->size;
    6319  }
    6320  else
    6321  {
    6322  VMA_ASSERT(lastSuballocItem->hAllocation != VK_NULL_HANDLE);
    6323  if(lastSuballocItem->hAllocation->CanBecomeLost() &&
    6324  lastSuballocItem->hAllocation->GetLastUseFrameIndex() + frameInUseCount < currentFrameIndex)
    6325  {
    6326  ++*itemsToMakeLostCount;
    6327  *pSumItemSize += lastSuballocItem->size;
    6328  }
    6329  else
    6330  {
    6331  return false;
    6332  }
    6333  }
    6334  remainingSize = (lastSuballocItem->size < remainingSize) ?
    6335  remainingSize - lastSuballocItem->size : 0;
    6336  }
    6337  }
    6338 
    6339  // Check next suballocations for BufferImageGranularity conflicts.
    6340  // If conflict exists, we must mark more allocations lost or fail.
    6341  if(bufferImageGranularity > 1)
    6342  {
    6343  VmaSuballocationList::const_iterator nextSuballocItem = lastSuballocItem;
    6344  ++nextSuballocItem;
    6345  while(nextSuballocItem != m_Suballocations.cend())
    6346  {
    6347  const VmaSuballocation& nextSuballoc = *nextSuballocItem;
    6348  if(VmaBlocksOnSamePage(*pOffset, allocSize, nextSuballoc.offset, bufferImageGranularity))
    6349  {
    6350  if(VmaIsBufferImageGranularityConflict(allocType, nextSuballoc.type))
    6351  {
    6352  VMA_ASSERT(nextSuballoc.hAllocation != VK_NULL_HANDLE);
    6353  if(nextSuballoc.hAllocation->CanBecomeLost() &&
    6354  nextSuballoc.hAllocation->GetLastUseFrameIndex() + frameInUseCount < currentFrameIndex)
    6355  {
    6356  ++*itemsToMakeLostCount;
    6357  }
    6358  else
    6359  {
    6360  return false;
    6361  }
    6362  }
    6363  }
    6364  else
    6365  {
    6366  // Already on next page.
    6367  break;
    6368  }
    6369  ++nextSuballocItem;
    6370  }
    6371  }
    6372  }
    6373  else
    6374  {
    6375  const VmaSuballocation& suballoc = *suballocItem;
    6376  VMA_ASSERT(suballoc.type == VMA_SUBALLOCATION_TYPE_FREE);
    6377 
    6378  *pSumFreeSize = suballoc.size;
    6379 
    6380  // Size of this suballocation is too small for this request: Early return.
    6381  if(suballoc.size < allocSize)
    6382  {
    6383  return false;
    6384  }
    6385 
    6386  // Start from offset equal to beginning of this suballocation.
    6387  *pOffset = suballoc.offset;
    6388 
    6389  // Apply VMA_DEBUG_MARGIN at the beginning.
    6390  if(VMA_DEBUG_MARGIN > 0)
    6391  {
    6392  *pOffset += VMA_DEBUG_MARGIN;
    6393  }
    6394 
    6395  // Apply alignment.
    6396  *pOffset = VmaAlignUp(*pOffset, allocAlignment);
    6397 
    6398  // Check previous suballocations for BufferImageGranularity conflicts.
    6399  // Make bigger alignment if necessary.
    6400  if(bufferImageGranularity > 1)
    6401  {
    6402  bool bufferImageGranularityConflict = false;
    6403  VmaSuballocationList::const_iterator prevSuballocItem = suballocItem;
    6404  while(prevSuballocItem != m_Suballocations.cbegin())
    6405  {
    6406  --prevSuballocItem;
    6407  const VmaSuballocation& prevSuballoc = *prevSuballocItem;
    6408  if(VmaBlocksOnSamePage(prevSuballoc.offset, prevSuballoc.size, *pOffset, bufferImageGranularity))
    6409  {
    6410  if(VmaIsBufferImageGranularityConflict(prevSuballoc.type, allocType))
    6411  {
    6412  bufferImageGranularityConflict = true;
    6413  break;
    6414  }
    6415  }
    6416  else
    6417  // Already on previous page.
    6418  break;
    6419  }
    6420  if(bufferImageGranularityConflict)
    6421  {
    6422  *pOffset = VmaAlignUp(*pOffset, bufferImageGranularity);
    6423  }
    6424  }
    6425 
    6426  // Calculate padding at the beginning based on current offset.
    6427  const VkDeviceSize paddingBegin = *pOffset - suballoc.offset;
    6428 
    6429  // Calculate required margin at the end.
    6430  const VkDeviceSize requiredEndMargin = VMA_DEBUG_MARGIN;
    6431 
    6432  // Fail if requested size plus margin before and after is bigger than size of this suballocation.
    6433  if(paddingBegin + allocSize + requiredEndMargin > suballoc.size)
    6434  {
    6435  return false;
    6436  }
    6437 
    6438  // Check next suballocations for BufferImageGranularity conflicts.
    6439  // If conflict exists, allocation cannot be made here.
    6440  if(bufferImageGranularity > 1)
    6441  {
    6442  VmaSuballocationList::const_iterator nextSuballocItem = suballocItem;
    6443  ++nextSuballocItem;
    6444  while(nextSuballocItem != m_Suballocations.cend())
    6445  {
    6446  const VmaSuballocation& nextSuballoc = *nextSuballocItem;
    6447  if(VmaBlocksOnSamePage(*pOffset, allocSize, nextSuballoc.offset, bufferImageGranularity))
    6448  {
    6449  if(VmaIsBufferImageGranularityConflict(allocType, nextSuballoc.type))
    6450  {
    6451  return false;
    6452  }
    6453  }
    6454  else
    6455  {
    6456  // Already on next page.
    6457  break;
    6458  }
    6459  ++nextSuballocItem;
    6460  }
    6461  }
    6462  }
    6463 
    6464  // All tests passed: Success. pOffset is already filled.
    6465  return true;
    6466 }
    6467 
    6468 void VmaBlockMetadata::MergeFreeWithNext(VmaSuballocationList::iterator item)
    6469 {
    6470  VMA_ASSERT(item != m_Suballocations.end());
    6471  VMA_ASSERT(item->type == VMA_SUBALLOCATION_TYPE_FREE);
    6472 
    6473  VmaSuballocationList::iterator nextItem = item;
    6474  ++nextItem;
    6475  VMA_ASSERT(nextItem != m_Suballocations.end());
    6476  VMA_ASSERT(nextItem->type == VMA_SUBALLOCATION_TYPE_FREE);
    6477 
    6478  item->size += nextItem->size;
    6479  --m_FreeCount;
    6480  m_Suballocations.erase(nextItem);
    6481 }
    6482 
    6483 VmaSuballocationList::iterator VmaBlockMetadata::FreeSuballocation(VmaSuballocationList::iterator suballocItem)
    6484 {
    6485  // Change this suballocation to be marked as free.
    6486  VmaSuballocation& suballoc = *suballocItem;
    6487  suballoc.type = VMA_SUBALLOCATION_TYPE_FREE;
    6488  suballoc.hAllocation = VK_NULL_HANDLE;
    6489 
    6490  // Update totals.
    6491  ++m_FreeCount;
    6492  m_SumFreeSize += suballoc.size;
    6493 
    6494  // Merge with previous and/or next suballocation if it's also free.
    6495  bool mergeWithNext = false;
    6496  bool mergeWithPrev = false;
    6497 
    6498  VmaSuballocationList::iterator nextItem = suballocItem;
    6499  ++nextItem;
    6500  if((nextItem != m_Suballocations.end()) && (nextItem->type == VMA_SUBALLOCATION_TYPE_FREE))
    6501  {
    6502  mergeWithNext = true;
    6503  }
    6504 
    6505  VmaSuballocationList::iterator prevItem = suballocItem;
    6506  if(suballocItem != m_Suballocations.begin())
    6507  {
    6508  --prevItem;
    6509  if(prevItem->type == VMA_SUBALLOCATION_TYPE_FREE)
    6510  {
    6511  mergeWithPrev = true;
    6512  }
    6513  }
    6514 
    6515  if(mergeWithNext)
    6516  {
    6517  UnregisterFreeSuballocation(nextItem);
    6518  MergeFreeWithNext(suballocItem);
    6519  }
    6520 
    6521  if(mergeWithPrev)
    6522  {
    6523  UnregisterFreeSuballocation(prevItem);
    6524  MergeFreeWithNext(prevItem);
    6525  RegisterFreeSuballocation(prevItem);
    6526  return prevItem;
    6527  }
    6528  else
    6529  {
    6530  RegisterFreeSuballocation(suballocItem);
    6531  return suballocItem;
    6532  }
    6533 }
    6534 
    6535 void VmaBlockMetadata::RegisterFreeSuballocation(VmaSuballocationList::iterator item)
    6536 {
    6537  VMA_ASSERT(item->type == VMA_SUBALLOCATION_TYPE_FREE);
    6538  VMA_ASSERT(item->size > 0);
    6539 
    6540  // You may want to enable this validation at the beginning or at the end of
    6541  // this function, depending on what you want to check.
    6542  VMA_HEAVY_ASSERT(ValidateFreeSuballocationList());
    6543 
    6544  if(item->size >= VMA_MIN_FREE_SUBALLOCATION_SIZE_TO_REGISTER)
    6545  {
    6546  if(m_FreeSuballocationsBySize.empty())
    6547  {
    6548  m_FreeSuballocationsBySize.push_back(item);
    6549  }
    6550  else
    6551  {
    6552  VmaVectorInsertSorted<VmaSuballocationItemSizeLess>(m_FreeSuballocationsBySize, item);
    6553  }
    6554  }
    6555 
    6556  //VMA_HEAVY_ASSERT(ValidateFreeSuballocationList());
    6557 }
    6558 
    6559 
    6560 void VmaBlockMetadata::UnregisterFreeSuballocation(VmaSuballocationList::iterator item)
    6561 {
    6562  VMA_ASSERT(item->type == VMA_SUBALLOCATION_TYPE_FREE);
    6563  VMA_ASSERT(item->size > 0);
    6564 
    6565  // You may want to enable this validation at the beginning or at the end of
    6566  // this function, depending on what you want to check.
    6567  VMA_HEAVY_ASSERT(ValidateFreeSuballocationList());
    6568 
    6569  if(item->size >= VMA_MIN_FREE_SUBALLOCATION_SIZE_TO_REGISTER)
    6570  {
    6571  VmaSuballocationList::iterator* const it = VmaBinaryFindFirstNotLess(
    6572  m_FreeSuballocationsBySize.data(),
    6573  m_FreeSuballocationsBySize.data() + m_FreeSuballocationsBySize.size(),
    6574  item,
    6575  VmaSuballocationItemSizeLess());
    6576  for(size_t index = it - m_FreeSuballocationsBySize.data();
    6577  index < m_FreeSuballocationsBySize.size();
    6578  ++index)
    6579  {
    6580  if(m_FreeSuballocationsBySize[index] == item)
    6581  {
    6582  VmaVectorRemove(m_FreeSuballocationsBySize, index);
    6583  return;
    6584  }
    6585  VMA_ASSERT((m_FreeSuballocationsBySize[index]->size == item->size) && "Not found.");
    6586  }
    6587  VMA_ASSERT(0 && "Not found.");
    6588  }
    6589 
    6590  //VMA_HEAVY_ASSERT(ValidateFreeSuballocationList());
    6591 }
    6592 
    6593 ////////////////////////////////////////////////////////////////////////////////
    6594 // class VmaDeviceMemoryBlock
    6595 
    6596 VmaDeviceMemoryBlock::VmaDeviceMemoryBlock(VmaAllocator hAllocator) :
    6597  m_Metadata(hAllocator),
    6598  m_MemoryTypeIndex(UINT32_MAX),
    6599  m_Id(0),
    6600  m_hMemory(VK_NULL_HANDLE),
    6601  m_MapCount(0),
    6602  m_pMappedData(VMA_NULL)
    6603 {
    6604 }
    6605 
    6606 void VmaDeviceMemoryBlock::Init(
    6607  uint32_t newMemoryTypeIndex,
    6608  VkDeviceMemory newMemory,
    6609  VkDeviceSize newSize,
    6610  uint32_t id)
    6611 {
    6612  VMA_ASSERT(m_hMemory == VK_NULL_HANDLE);
    6613 
    6614  m_MemoryTypeIndex = newMemoryTypeIndex;
    6615  m_Id = id;
    6616  m_hMemory = newMemory;
    6617 
    6618  m_Metadata.Init(newSize);
    6619 }
    6620 
    6621 void VmaDeviceMemoryBlock::Destroy(VmaAllocator allocator)
    6622 {
    6623  // This is the most important assert in the entire library.
    6624  // Hitting it means you have some memory leak - unreleased VmaAllocation objects.
    6625  VMA_ASSERT(m_Metadata.IsEmpty() && "Some allocations were not freed before destruction of this memory block!");
    6626 
    6627  VMA_ASSERT(m_hMemory != VK_NULL_HANDLE);
    6628  allocator->FreeVulkanMemory(m_MemoryTypeIndex, m_Metadata.GetSize(), m_hMemory);
    6629  m_hMemory = VK_NULL_HANDLE;
    6630 }
    6631 
    6632 bool VmaDeviceMemoryBlock::Validate() const
    6633 {
    6634  if((m_hMemory == VK_NULL_HANDLE) ||
    6635  (m_Metadata.GetSize() == 0))
    6636  {
    6637  return false;
    6638  }
    6639 
    6640  return m_Metadata.Validate();
    6641 }
    6642 
    6643 VkResult VmaDeviceMemoryBlock::CheckCorruption(VmaAllocator hAllocator)
    6644 {
    6645  void* pData = VMA_NULL;
    6646  VkResult res = Map(hAllocator, 1, &pData);
    6647  if(res != VK_SUCCESS)
    6648  {
    6649  return res;
    6650  }
    6651 
    6652  res = m_Metadata.CheckCorruption(pData);
    6653 
    6654  Unmap(hAllocator, 1);
    6655 
    6656  return res;
    6657 }
    6658 
    6659 VkResult VmaDeviceMemoryBlock::Map(VmaAllocator hAllocator, uint32_t count, void** ppData)
    6660 {
    6661  if(count == 0)
    6662  {
    6663  return VK_SUCCESS;
    6664  }
    6665 
    6666  VmaMutexLock lock(m_Mutex, hAllocator->m_UseMutex);
    6667  if(m_MapCount != 0)
    6668  {
    6669  m_MapCount += count;
    6670  VMA_ASSERT(m_pMappedData != VMA_NULL);
    6671  if(ppData != VMA_NULL)
    6672  {
    6673  *ppData = m_pMappedData;
    6674  }
    6675  return VK_SUCCESS;
    6676  }
    6677  else
    6678  {
    6679  VkResult result = (*hAllocator->GetVulkanFunctions().vkMapMemory)(
    6680  hAllocator->m_hDevice,
    6681  m_hMemory,
    6682  0, // offset
    6683  VK_WHOLE_SIZE,
    6684  0, // flags
    6685  &m_pMappedData);
    6686  if(result == VK_SUCCESS)
    6687  {
    6688  if(ppData != VMA_NULL)
    6689  {
    6690  *ppData = m_pMappedData;
    6691  }
    6692  m_MapCount = count;
    6693  }
    6694  return result;
    6695  }
    6696 }
    6697 
    6698 void VmaDeviceMemoryBlock::Unmap(VmaAllocator hAllocator, uint32_t count)
    6699 {
    6700  if(count == 0)
    6701  {
    6702  return;
    6703  }
    6704 
    6705  VmaMutexLock lock(m_Mutex, hAllocator->m_UseMutex);
    6706  if(m_MapCount >= count)
    6707  {
    6708  m_MapCount -= count;
    6709  if(m_MapCount == 0)
    6710  {
    6711  m_pMappedData = VMA_NULL;
    6712  (*hAllocator->GetVulkanFunctions().vkUnmapMemory)(hAllocator->m_hDevice, m_hMemory);
    6713  }
    6714  }
    6715  else
    6716  {
    6717  VMA_ASSERT(0 && "VkDeviceMemory block is being unmapped while it was not previously mapped.");
    6718  }
    6719 }
    6720 
    6721 VkResult VmaDeviceMemoryBlock::WriteMagicValueAroundAllocation(VmaAllocator hAllocator, VkDeviceSize allocOffset, VkDeviceSize allocSize)
    6722 {
    6723  VMA_ASSERT(VMA_DEBUG_MARGIN > 0 && VMA_DEBUG_MARGIN % 4 == 0 && VMA_DEBUG_DETECT_CORRUPTION);
    6724  VMA_ASSERT(allocOffset >= VMA_DEBUG_MARGIN);
    6725 
    6726  void* pData;
    6727  VkResult res = Map(hAllocator, 1, &pData);
    6728  if(res != VK_SUCCESS)
    6729  {
    6730  return res;
    6731  }
    6732 
    6733  VmaWriteMagicValue(pData, allocOffset - VMA_DEBUG_MARGIN);
    6734  VmaWriteMagicValue(pData, allocOffset + allocSize);
    6735 
    6736  Unmap(hAllocator, 1);
    6737 
    6738  return VK_SUCCESS;
    6739 }
    6740 
    6741 VkResult VmaDeviceMemoryBlock::ValidateMagicValueAroundAllocation(VmaAllocator hAllocator, VkDeviceSize allocOffset, VkDeviceSize allocSize)
    6742 {
    6743  VMA_ASSERT(VMA_DEBUG_MARGIN > 0 && VMA_DEBUG_MARGIN % 4 == 0 && VMA_DEBUG_DETECT_CORRUPTION);
    6744  VMA_ASSERT(allocOffset >= VMA_DEBUG_MARGIN);
    6745 
    6746  void* pData;
    6747  VkResult res = Map(hAllocator, 1, &pData);
    6748  if(res != VK_SUCCESS)
    6749  {
    6750  return res;
    6751  }
    6752 
    6753  if(!VmaValidateMagicValue(pData, allocOffset - VMA_DEBUG_MARGIN))
    6754  {
    6755  VMA_ASSERT(0 && "MEMORY CORRUPTION DETECTED BEFORE FREED ALLOCATION!");
    6756  }
    6757  else if(!VmaValidateMagicValue(pData, allocOffset + allocSize))
    6758  {
    6759  VMA_ASSERT(0 && "MEMORY CORRUPTION DETECTED AFTER FREED ALLOCATION!");
    6760  }
    6761 
    6762  Unmap(hAllocator, 1);
    6763 
    6764  return VK_SUCCESS;
    6765 }
    6766 
    6767 VkResult VmaDeviceMemoryBlock::BindBufferMemory(
    6768  const VmaAllocator hAllocator,
    6769  const VmaAllocation hAllocation,
    6770  VkBuffer hBuffer)
    6771 {
    6772  VMA_ASSERT(hAllocation->GetType() == VmaAllocation_T::ALLOCATION_TYPE_BLOCK &&
    6773  hAllocation->GetBlock() == this);
    6774  // This lock is important so that we don't call vkBind... and/or vkMap... simultaneously on the same VkDeviceMemory from multiple threads.
    6775  VmaMutexLock lock(m_Mutex, hAllocator->m_UseMutex);
    6776  return hAllocator->GetVulkanFunctions().vkBindBufferMemory(
    6777  hAllocator->m_hDevice,
    6778  hBuffer,
    6779  m_hMemory,
    6780  hAllocation->GetOffset());
    6781 }
    6782 
    6783 VkResult VmaDeviceMemoryBlock::BindImageMemory(
    6784  const VmaAllocator hAllocator,
    6785  const VmaAllocation hAllocation,
    6786  VkImage hImage)
    6787 {
    6788  VMA_ASSERT(hAllocation->GetType() == VmaAllocation_T::ALLOCATION_TYPE_BLOCK &&
    6789  hAllocation->GetBlock() == this);
    6790  // This lock is important so that we don't call vkBind... and/or vkMap... simultaneously on the same VkDeviceMemory from multiple threads.
    6791  VmaMutexLock lock(m_Mutex, hAllocator->m_UseMutex);
    6792  return hAllocator->GetVulkanFunctions().vkBindImageMemory(
    6793  hAllocator->m_hDevice,
    6794  hImage,
    6795  m_hMemory,
    6796  hAllocation->GetOffset());
    6797 }
    6798 
    6799 static void InitStatInfo(VmaStatInfo& outInfo)
    6800 {
    6801  memset(&outInfo, 0, sizeof(outInfo));
    6802  outInfo.allocationSizeMin = UINT64_MAX;
    6803  outInfo.unusedRangeSizeMin = UINT64_MAX;
    6804 }
    6805 
    6806 // Adds statistics srcInfo into inoutInfo, like: inoutInfo += srcInfo.
    6807 static void VmaAddStatInfo(VmaStatInfo& inoutInfo, const VmaStatInfo& srcInfo)
    6808 {
    6809  inoutInfo.blockCount += srcInfo.blockCount;
    6810  inoutInfo.allocationCount += srcInfo.allocationCount;
    6811  inoutInfo.unusedRangeCount += srcInfo.unusedRangeCount;
    6812  inoutInfo.usedBytes += srcInfo.usedBytes;
    6813  inoutInfo.unusedBytes += srcInfo.unusedBytes;
    6814  inoutInfo.allocationSizeMin = VMA_MIN(inoutInfo.allocationSizeMin, srcInfo.allocationSizeMin);
    6815  inoutInfo.allocationSizeMax = VMA_MAX(inoutInfo.allocationSizeMax, srcInfo.allocationSizeMax);
    6816  inoutInfo.unusedRangeSizeMin = VMA_MIN(inoutInfo.unusedRangeSizeMin, srcInfo.unusedRangeSizeMin);
    6817  inoutInfo.unusedRangeSizeMax = VMA_MAX(inoutInfo.unusedRangeSizeMax, srcInfo.unusedRangeSizeMax);
    6818 }
    6819 
    6820 static void VmaPostprocessCalcStatInfo(VmaStatInfo& inoutInfo)
    6821 {
    6822  inoutInfo.allocationSizeAvg = (inoutInfo.allocationCount > 0) ?
    6823  VmaRoundDiv<VkDeviceSize>(inoutInfo.usedBytes, inoutInfo.allocationCount) : 0;
    6824  inoutInfo.unusedRangeSizeAvg = (inoutInfo.unusedRangeCount > 0) ?
    6825  VmaRoundDiv<VkDeviceSize>(inoutInfo.unusedBytes, inoutInfo.unusedRangeCount) : 0;
    6826 }
    6827 
    6828 VmaPool_T::VmaPool_T(
    6829  VmaAllocator hAllocator,
    6830  const VmaPoolCreateInfo& createInfo) :
    6831  m_BlockVector(
    6832  hAllocator,
    6833  createInfo.memoryTypeIndex,
    6834  createInfo.blockSize,
    6835  createInfo.minBlockCount,
    6836  createInfo.maxBlockCount,
    6837  (createInfo.flags & VMA_POOL_CREATE_IGNORE_BUFFER_IMAGE_GRANULARITY_BIT) != 0 ? 1 : hAllocator->GetBufferImageGranularity(),
    6838  createInfo.frameInUseCount,
    6839  true), // isCustomPool
    6840  m_Id(0)
    6841 {
    6842 }
    6843 
    6844 VmaPool_T::~VmaPool_T()
    6845 {
    6846 }
    6847 
    6848 #if VMA_STATS_STRING_ENABLED
    6849 
    6850 #endif // #if VMA_STATS_STRING_ENABLED
    6851 
    6852 VmaBlockVector::VmaBlockVector(
    6853  VmaAllocator hAllocator,
    6854  uint32_t memoryTypeIndex,
    6855  VkDeviceSize preferredBlockSize,
    6856  size_t minBlockCount,
    6857  size_t maxBlockCount,
    6858  VkDeviceSize bufferImageGranularity,
    6859  uint32_t frameInUseCount,
    6860  bool isCustomPool) :
    6861  m_hAllocator(hAllocator),
    6862  m_MemoryTypeIndex(memoryTypeIndex),
    6863  m_PreferredBlockSize(preferredBlockSize),
    6864  m_MinBlockCount(minBlockCount),
    6865  m_MaxBlockCount(maxBlockCount),
    6866  m_BufferImageGranularity(bufferImageGranularity),
    6867  m_FrameInUseCount(frameInUseCount),
    6868  m_IsCustomPool(isCustomPool),
    6869  m_Blocks(VmaStlAllocator<VmaDeviceMemoryBlock*>(hAllocator->GetAllocationCallbacks())),
    6870  m_HasEmptyBlock(false),
    6871  m_pDefragmentator(VMA_NULL),
    6872  m_NextBlockId(0)
    6873 {
    6874 }
    6875 
    6876 VmaBlockVector::~VmaBlockVector()
    6877 {
    6878  VMA_ASSERT(m_pDefragmentator == VMA_NULL);
    6879 
    6880  for(size_t i = m_Blocks.size(); i--; )
    6881  {
    6882  m_Blocks[i]->Destroy(m_hAllocator);
    6883  vma_delete(m_hAllocator, m_Blocks[i]);
    6884  }
    6885 }
    6886 
    6887 VkResult VmaBlockVector::CreateMinBlocks()
    6888 {
    6889  for(size_t i = 0; i < m_MinBlockCount; ++i)
    6890  {
    6891  VkResult res = CreateBlock(m_PreferredBlockSize, VMA_NULL);
    6892  if(res != VK_SUCCESS)
    6893  {
    6894  return res;
    6895  }
    6896  }
    6897  return VK_SUCCESS;
    6898 }
    6899 
    6900 void VmaBlockVector::GetPoolStats(VmaPoolStats* pStats)
    6901 {
    6902  pStats->size = 0;
    6903  pStats->unusedSize = 0;
    6904  pStats->allocationCount = 0;
    6905  pStats->unusedRangeCount = 0;
    6906  pStats->unusedRangeSizeMax = 0;
    6907 
    6908  VmaMutexLock lock(m_Mutex, m_hAllocator->m_UseMutex);
    6909 
    6910  for(uint32_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex)
    6911  {
    6912  const VmaDeviceMemoryBlock* const pBlock = m_Blocks[blockIndex];
    6913  VMA_ASSERT(pBlock);
    6914  VMA_HEAVY_ASSERT(pBlock->Validate());
    6915  pBlock->m_Metadata.AddPoolStats(*pStats);
    6916  }
    6917 }
    6918 
    6919 bool VmaBlockVector::IsCorruptionDetectionEnabled() const
    6920 {
    6921  const uint32_t requiredMemFlags = VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT | VK_MEMORY_PROPERTY_HOST_COHERENT_BIT;
    6922  return (VMA_DEBUG_DETECT_CORRUPTION != 0) &&
    6923  (VMA_DEBUG_MARGIN > 0) &&
    6924  (m_hAllocator->m_MemProps.memoryTypes[m_MemoryTypeIndex].propertyFlags & requiredMemFlags) == requiredMemFlags;
    6925 }
    6926 
    6927 static const uint32_t VMA_ALLOCATION_TRY_COUNT = 32;
    6928 
    6929 VkResult VmaBlockVector::Allocate(
    6930  VmaPool hCurrentPool,
    6931  uint32_t currentFrameIndex,
    6932  VkDeviceSize size,
    6933  VkDeviceSize alignment,
    6934  const VmaAllocationCreateInfo& createInfo,
    6935  VmaSuballocationType suballocType,
    6936  VmaAllocation* pAllocation)
    6937 {
    6938  // Early reject: requested allocation size is larger than the maximum block size for this block vector.
    6939  if(size + 2 * VMA_DEBUG_MARGIN > m_PreferredBlockSize)
    6940  {
    6941  return VK_ERROR_OUT_OF_DEVICE_MEMORY;
    6942  }
    6943 
    6944  const bool mapped = (createInfo.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0;
    6945  const bool isUserDataString = (createInfo.flags & VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT) != 0;
    6946 
    6947  VmaMutexLock lock(m_Mutex, m_hAllocator->m_UseMutex);
    6948 
    6949  // 1. Search existing allocations. Try to allocate without making other allocations lost.
    6950  // Forward order in m_Blocks - prefer blocks with smallest amount of free space.
    6951  for(size_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex )
    6952  {
    6953  VmaDeviceMemoryBlock* const pCurrBlock = m_Blocks[blockIndex];
    6954  VMA_ASSERT(pCurrBlock);
    6955  VmaAllocationRequest currRequest = {};
    6956  if(pCurrBlock->m_Metadata.CreateAllocationRequest(
    6957  currentFrameIndex,
    6958  m_FrameInUseCount,
    6959  m_BufferImageGranularity,
    6960  size,
    6961  alignment,
    6962  suballocType,
    6963  false, // canMakeOtherLost
    6964  &currRequest))
    6965  {
    6966  // Allocate from pCurrBlock.
    6967  VMA_ASSERT(currRequest.itemsToMakeLostCount == 0);
    6968 
    6969  if(mapped)
    6970  {
    6971  VkResult res = pCurrBlock->Map(m_hAllocator, 1, VMA_NULL);
    6972  if(res != VK_SUCCESS)
    6973  {
    6974  return res;
    6975  }
    6976  }
    6977 
    6978  // We no longer have an empty block.
    6979  if(pCurrBlock->m_Metadata.IsEmpty())
    6980  {
    6981  m_HasEmptyBlock = false;
    6982  }
    6983 
    6984  *pAllocation = vma_new(m_hAllocator, VmaAllocation_T)(currentFrameIndex, isUserDataString);
    6985  pCurrBlock->m_Metadata.Alloc(currRequest, suballocType, size, *pAllocation);
    6986  (*pAllocation)->InitBlockAllocation(
    6987  hCurrentPool,
    6988  pCurrBlock,
    6989  currRequest.offset,
    6990  alignment,
    6991  size,
    6992  suballocType,
    6993  mapped,
    6994  (createInfo.flags & VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT) != 0);
    6995  VMA_HEAVY_ASSERT(pCurrBlock->Validate());
    6996  VMA_DEBUG_LOG(" Returned from existing allocation #%u", (uint32_t)blockIndex);
    6997  (*pAllocation)->SetUserData(m_hAllocator, createInfo.pUserData);
    6998  if(IsCorruptionDetectionEnabled())
    6999  {
    7000  VkResult res = pCurrBlock->WriteMagicValueAroundAllocation(m_hAllocator, currRequest.offset, size);
    7001  VMA_ASSERT(res == VK_SUCCESS && "Couldn't map block memory to write magic value.");
    7002  }
    7003  return VK_SUCCESS;
    7004  }
    7005  }
    7006 
    7007  const bool canCreateNewBlock =
    7008  ((createInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) == 0) &&
    7009  (m_Blocks.size() < m_MaxBlockCount);
    7010 
    7011  // 2. Try to create new block.
    7012  if(canCreateNewBlock)
    7013  {
    7014  // Calculate optimal size for new block.
    7015  VkDeviceSize newBlockSize = m_PreferredBlockSize;
    7016  uint32_t newBlockSizeShift = 0;
    7017  const uint32_t NEW_BLOCK_SIZE_SHIFT_MAX = 3;
    7018 
    7019  // Allocating blocks of other sizes is allowed only in default pools.
    7020  // In custom pools block size is fixed.
    7021  if(m_IsCustomPool == false)
    7022  {
    7023  // Allocate 1/8, 1/4, 1/2 as first blocks.
    7024  const VkDeviceSize maxExistingBlockSize = CalcMaxBlockSize();
    7025  for(uint32_t i = 0; i < NEW_BLOCK_SIZE_SHIFT_MAX; ++i)
    7026  {
    7027  const VkDeviceSize smallerNewBlockSize = newBlockSize / 2;
    7028  if(smallerNewBlockSize > maxExistingBlockSize && smallerNewBlockSize >= size * 2)
    7029  {
    7030  newBlockSize = smallerNewBlockSize;
    7031  ++newBlockSizeShift;
    7032  }
    7033  else
    7034  {
    7035  break;
    7036  }
    7037  }
    7038  }
    7039 
    7040  size_t newBlockIndex = 0;
    7041  VkResult res = CreateBlock(newBlockSize, &newBlockIndex);
    7042  // Allocation of this size failed? Try 1/2, 1/4, 1/8 of m_PreferredBlockSize.
    7043  if(m_IsCustomPool == false)
    7044  {
    7045  while(res < 0 && newBlockSizeShift < NEW_BLOCK_SIZE_SHIFT_MAX)
    7046  {
    7047  const VkDeviceSize smallerNewBlockSize = newBlockSize / 2;
    7048  if(smallerNewBlockSize >= size)
    7049  {
    7050  newBlockSize = smallerNewBlockSize;
    7051  ++newBlockSizeShift;
    7052  res = CreateBlock(newBlockSize, &newBlockIndex);
    7053  }
    7054  else
    7055  {
    7056  break;
    7057  }
    7058  }
    7059  }
    7060 
    7061  if(res == VK_SUCCESS)
    7062  {
    7063  VmaDeviceMemoryBlock* const pBlock = m_Blocks[newBlockIndex];
    7064  VMA_ASSERT(pBlock->m_Metadata.GetSize() >= size);
    7065 
    7066  if(mapped)
    7067  {
    7068  res = pBlock->Map(m_hAllocator, 1, VMA_NULL);
    7069  if(res != VK_SUCCESS)
    7070  {
    7071  return res;
    7072  }
    7073  }
    7074 
    7075  // Allocate from pBlock. Because it is empty, allocRequest can be trivially filled.
    7076  VmaAllocationRequest allocRequest;
    7077  if(pBlock->m_Metadata.CreateAllocationRequest(
    7078  currentFrameIndex,
    7079  m_FrameInUseCount,
    7080  m_BufferImageGranularity,
    7081  size,
    7082  alignment,
    7083  suballocType,
    7084  false, // canMakeOtherLost
    7085  &allocRequest))
    7086  {
    7087  *pAllocation = vma_new(m_hAllocator, VmaAllocation_T)(currentFrameIndex, isUserDataString);
    7088  pBlock->m_Metadata.Alloc(allocRequest, suballocType, size, *pAllocation);
    7089  (*pAllocation)->InitBlockAllocation(
    7090  hCurrentPool,
    7091  pBlock,
    7092  allocRequest.offset,
    7093  alignment,
    7094  size,
    7095  suballocType,
    7096  mapped,
    7097  (createInfo.flags & VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT) != 0);
    7098  VMA_HEAVY_ASSERT(pBlock->Validate());
    7099  VMA_DEBUG_LOG("    Created new allocation Size=%llu", size);
    7100  (*pAllocation)->SetUserData(m_hAllocator, createInfo.pUserData);
    7101  if(IsCorruptionDetectionEnabled())
    7102  {
    7103  res = pBlock->WriteMagicValueAroundAllocation(m_hAllocator, allocRequest.offset, size);
    7104  VMA_ASSERT(res == VK_SUCCESS && "Couldn't map block memory to write magic value.");
    7105  }
    7106  return VK_SUCCESS;
    7107  }
    7108  else
    7109  {
    7110  // Allocation from empty block failed, possibly due to VMA_DEBUG_MARGIN or alignment.
    7111  return VK_ERROR_OUT_OF_DEVICE_MEMORY;
    7112  }
    7113  }
    7114  }
    7115 
    7116  const bool canMakeOtherLost = (createInfo.flags & VMA_ALLOCATION_CREATE_CAN_MAKE_OTHER_LOST_BIT) != 0;
    7117 
    7118  // 3. Try to allocate from existing blocks with making other allocations lost.
    7119  if(canMakeOtherLost)
    7120  {
    7121  uint32_t tryIndex = 0;
    7122  for(; tryIndex < VMA_ALLOCATION_TRY_COUNT; ++tryIndex)
    7123  {
    7124  VmaDeviceMemoryBlock* pBestRequestBlock = VMA_NULL;
    7125  VmaAllocationRequest bestRequest = {};
    7126  VkDeviceSize bestRequestCost = VK_WHOLE_SIZE;
    7127 
    7128  // 1. Search existing allocations.
    7129  // Forward order in m_Blocks - prefer blocks with smallest amount of free space.
    7130  for(size_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex )
    7131  {
    7132  VmaDeviceMemoryBlock* const pCurrBlock = m_Blocks[blockIndex];
    7133  VMA_ASSERT(pCurrBlock);
    7134  VmaAllocationRequest currRequest = {};
    7135  if(pCurrBlock->m_Metadata.CreateAllocationRequest(
    7136  currentFrameIndex,
    7137  m_FrameInUseCount,
    7138  m_BufferImageGranularity,
    7139  size,
    7140  alignment,
    7141  suballocType,
    7142  canMakeOtherLost,
    7143  &currRequest))
    7144  {
    7145  const VkDeviceSize currRequestCost = currRequest.CalcCost();
    7146  if(pBestRequestBlock == VMA_NULL ||
    7147  currRequestCost < bestRequestCost)
    7148  {
    7149  pBestRequestBlock = pCurrBlock;
    7150  bestRequest = currRequest;
    7151  bestRequestCost = currRequestCost;
    7152 
    7153  if(bestRequestCost == 0)
    7154  {
    7155  break;
    7156  }
    7157  }
    7158  }
    7159  }
    7160 
    7161  if(pBestRequestBlock != VMA_NULL)
    7162  {
    7163  if(mapped)
    7164  {
    7165  VkResult res = pBestRequestBlock->Map(m_hAllocator, 1, VMA_NULL);
    7166  if(res != VK_SUCCESS)
    7167  {
    7168  return res;
    7169  }
    7170  }
    7171 
    7172  if(pBestRequestBlock->m_Metadata.MakeRequestedAllocationsLost(
    7173  currentFrameIndex,
    7174  m_FrameInUseCount,
    7175  &bestRequest))
    7176  {
    7177  // We no longer have an empty block.
    7178  if(pBestRequestBlock->m_Metadata.IsEmpty())
    7179  {
    7180  m_HasEmptyBlock = false;
    7181  }
    7182  // Allocate from this pBlock.
    7183  *pAllocation = vma_new(m_hAllocator, VmaAllocation_T)(currentFrameIndex, isUserDataString);
    7184  pBestRequestBlock->m_Metadata.Alloc(bestRequest, suballocType, size, *pAllocation);
    7185  (*pAllocation)->InitBlockAllocation(
    7186  hCurrentPool,
    7187  pBestRequestBlock,
    7188  bestRequest.offset,
    7189  alignment,
    7190  size,
    7191  suballocType,
    7192  mapped,
    7193  (createInfo.flags & VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT) != 0);
    7194  VMA_HEAVY_ASSERT(pBestRequestBlock->Validate());
    7195  VMA_DEBUG_LOG("    Returned from existing allocation");
    7196  (*pAllocation)->SetUserData(m_hAllocator, createInfo.pUserData);
    7197  if(IsCorruptionDetectionEnabled())
    7198  {
    7199  VkResult res = pBestRequestBlock->WriteMagicValueAroundAllocation(m_hAllocator, bestRequest.offset, size);
    7200  VMA_ASSERT(res == VK_SUCCESS && "Couldn't map block memory to write magic value.");
    7201  }
    7202  return VK_SUCCESS;
    7203  }
    7204  // else: Some allocations must have been touched while we are here. Next try.
    7205  }
    7206  else
    7207  {
    7208  // Could not find place in any of the blocks - break outer loop.
    7209  break;
    7210  }
    7211  }
    7212  /* Maximum number of tries exceeded - a very unlikely event that happens when many
    7213  other threads are simultaneously touching allocations, making it impossible to
    7214  make them lost at the same time as we try to allocate. */
    7215  if(tryIndex == VMA_ALLOCATION_TRY_COUNT)
    7216  {
    7217  return VK_ERROR_TOO_MANY_OBJECTS;
    7218  }
    7219  }
    7220 
    7221  return VK_ERROR_OUT_OF_DEVICE_MEMORY;
    7222 }
    7223 
    7224 void VmaBlockVector::Free(
    7225  VmaAllocation hAllocation)
    7226 {
    7227  VmaDeviceMemoryBlock* pBlockToDelete = VMA_NULL;
    7228 
    7229  // Scope for lock.
    7230  {
    7231  VmaMutexLock lock(m_Mutex, m_hAllocator->m_UseMutex);
    7232 
    7233  VmaDeviceMemoryBlock* pBlock = hAllocation->GetBlock();
    7234 
    7235  if(IsCorruptionDetectionEnabled())
    7236  {
    7237  VkResult res = pBlock->ValidateMagicValueAroundAllocation(m_hAllocator, hAllocation->GetOffset(), hAllocation->GetSize());
    7238  VMA_ASSERT(res == VK_SUCCESS && "Couldn't map block memory to validate magic value.");
    7239  }
    7240 
    7241  if(hAllocation->IsPersistentMap())
    7242  {
    7243  pBlock->Unmap(m_hAllocator, 1);
    7244  }
    7245 
    7246  pBlock->m_Metadata.Free(hAllocation);
    7247  VMA_HEAVY_ASSERT(pBlock->Validate());
    7248 
    7249  VMA_DEBUG_LOG("    Freed from MemoryTypeIndex=%u", m_MemoryTypeIndex);
    7250 
    7251  // pBlock became empty after this deallocation.
    7252  if(pBlock->m_Metadata.IsEmpty())
    7253  {
    7254  // We already have an empty block. We don't want two, so delete this one.
    7255  if(m_HasEmptyBlock && m_Blocks.size() > m_MinBlockCount)
    7256  {
    7257  pBlockToDelete = pBlock;
    7258  Remove(pBlock);
    7259  }
    7260  // We now have our first empty block.
    7261  else
    7262  {
    7263  m_HasEmptyBlock = true;
    7264  }
    7265  }
    7266  // pBlock didn't become empty, but we have another empty block - find and free that one.
    7267  // (This is optional, heuristics.)
    7268  else if(m_HasEmptyBlock)
    7269  {
    7270  VmaDeviceMemoryBlock* pLastBlock = m_Blocks.back();
    7271  if(pLastBlock->m_Metadata.IsEmpty() && m_Blocks.size() > m_MinBlockCount)
    7272  {
    7273  pBlockToDelete = pLastBlock;
    7274  m_Blocks.pop_back();
    7275  m_HasEmptyBlock = false;
    7276  }
    7277  }
    7278 
    7279  IncrementallySortBlocks();
    7280  }
    7281 
     7282  // Destruction of an empty block. Deferred until this point, outside of the mutex
     7283  // lock, for performance reasons.
    7284  if(pBlockToDelete != VMA_NULL)
    7285  {
    7286  VMA_DEBUG_LOG(" Deleted empty allocation");
    7287  pBlockToDelete->Destroy(m_hAllocator);
    7288  vma_delete(m_hAllocator, pBlockToDelete);
    7289  }
    7290 }
    7291 
    7292 VkDeviceSize VmaBlockVector::CalcMaxBlockSize() const
    7293 {
    7294  VkDeviceSize result = 0;
    7295  for(size_t i = m_Blocks.size(); i--; )
    7296  {
    7297  result = VMA_MAX(result, m_Blocks[i]->m_Metadata.GetSize());
    7298  if(result >= m_PreferredBlockSize)
    7299  {
    7300  break;
    7301  }
    7302  }
    7303  return result;
    7304 }
    7305 
    7306 void VmaBlockVector::Remove(VmaDeviceMemoryBlock* pBlock)
    7307 {
    7308  for(uint32_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex)
    7309  {
    7310  if(m_Blocks[blockIndex] == pBlock)
    7311  {
    7312  VmaVectorRemove(m_Blocks, blockIndex);
    7313  return;
    7314  }
    7315  }
    7316  VMA_ASSERT(0);
    7317 }
    7318 
    7319 void VmaBlockVector::IncrementallySortBlocks()
    7320 {
    7321  // Bubble sort only until first swap.
    7322  for(size_t i = 1; i < m_Blocks.size(); ++i)
    7323  {
    7324  if(m_Blocks[i - 1]->m_Metadata.GetSumFreeSize() > m_Blocks[i]->m_Metadata.GetSumFreeSize())
    7325  {
    7326  VMA_SWAP(m_Blocks[i - 1], m_Blocks[i]);
    7327  return;
    7328  }
    7329  }
    7330 }
    7331 
    7332 VkResult VmaBlockVector::CreateBlock(VkDeviceSize blockSize, size_t* pNewBlockIndex)
    7333 {
    7334  VkMemoryAllocateInfo allocInfo = { VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO };
    7335  allocInfo.memoryTypeIndex = m_MemoryTypeIndex;
    7336  allocInfo.allocationSize = blockSize;
    7337  VkDeviceMemory mem = VK_NULL_HANDLE;
    7338  VkResult res = m_hAllocator->AllocateVulkanMemory(&allocInfo, &mem);
    7339  if(res < 0)
    7340  {
    7341  return res;
    7342  }
    7343 
    7344  // New VkDeviceMemory successfully created.
    7345 
    7346  // Create new Allocation for it.
    7347  VmaDeviceMemoryBlock* const pBlock = vma_new(m_hAllocator, VmaDeviceMemoryBlock)(m_hAllocator);
    7348  pBlock->Init(
    7349  m_MemoryTypeIndex,
    7350  mem,
    7351  allocInfo.allocationSize,
    7352  m_NextBlockId++);
    7353 
    7354  m_Blocks.push_back(pBlock);
    7355  if(pNewBlockIndex != VMA_NULL)
    7356  {
    7357  *pNewBlockIndex = m_Blocks.size() - 1;
    7358  }
    7359 
    7360  return VK_SUCCESS;
    7361 }
    7362 
    7363 #if VMA_STATS_STRING_ENABLED
    7364 
    7365 void VmaBlockVector::PrintDetailedMap(class VmaJsonWriter& json)
    7366 {
    7367  VmaMutexLock lock(m_Mutex, m_hAllocator->m_UseMutex);
    7368 
    7369  json.BeginObject();
    7370 
    7371  if(m_IsCustomPool)
    7372  {
    7373  json.WriteString("MemoryTypeIndex");
    7374  json.WriteNumber(m_MemoryTypeIndex);
    7375 
    7376  json.WriteString("BlockSize");
    7377  json.WriteNumber(m_PreferredBlockSize);
    7378 
    7379  json.WriteString("BlockCount");
    7380  json.BeginObject(true);
    7381  if(m_MinBlockCount > 0)
    7382  {
    7383  json.WriteString("Min");
    7384  json.WriteNumber((uint64_t)m_MinBlockCount);
    7385  }
    7386  if(m_MaxBlockCount < SIZE_MAX)
    7387  {
    7388  json.WriteString("Max");
    7389  json.WriteNumber((uint64_t)m_MaxBlockCount);
    7390  }
    7391  json.WriteString("Cur");
    7392  json.WriteNumber((uint64_t)m_Blocks.size());
    7393  json.EndObject();
    7394 
    7395  if(m_FrameInUseCount > 0)
    7396  {
    7397  json.WriteString("FrameInUseCount");
    7398  json.WriteNumber(m_FrameInUseCount);
    7399  }
    7400  }
    7401  else
    7402  {
    7403  json.WriteString("PreferredBlockSize");
    7404  json.WriteNumber(m_PreferredBlockSize);
    7405  }
    7406 
    7407  json.WriteString("Blocks");
    7408  json.BeginObject();
    7409  for(size_t i = 0; i < m_Blocks.size(); ++i)
    7410  {
    7411  json.BeginString();
    7412  json.ContinueString(m_Blocks[i]->GetId());
    7413  json.EndString();
    7414 
    7415  m_Blocks[i]->m_Metadata.PrintDetailedMap(json);
    7416  }
    7417  json.EndObject();
    7418 
    7419  json.EndObject();
    7420 }
    7421 
    7422 #endif // #if VMA_STATS_STRING_ENABLED
    7423 
    7424 VmaDefragmentator* VmaBlockVector::EnsureDefragmentator(
    7425  VmaAllocator hAllocator,
    7426  uint32_t currentFrameIndex)
    7427 {
    7428  if(m_pDefragmentator == VMA_NULL)
    7429  {
    7430  m_pDefragmentator = vma_new(m_hAllocator, VmaDefragmentator)(
    7431  hAllocator,
    7432  this,
    7433  currentFrameIndex);
    7434  }
    7435 
    7436  return m_pDefragmentator;
    7437 }
    7438 
    7439 VkResult VmaBlockVector::Defragment(
    7440  VmaDefragmentationStats* pDefragmentationStats,
    7441  VkDeviceSize& maxBytesToMove,
    7442  uint32_t& maxAllocationsToMove)
    7443 {
    7444  if(m_pDefragmentator == VMA_NULL)
    7445  {
    7446  return VK_SUCCESS;
    7447  }
    7448 
    7449  VmaMutexLock lock(m_Mutex, m_hAllocator->m_UseMutex);
    7450 
    7451  // Defragment.
    7452  VkResult result = m_pDefragmentator->Defragment(maxBytesToMove, maxAllocationsToMove);
    7453 
    7454  // Accumulate statistics.
    7455  if(pDefragmentationStats != VMA_NULL)
    7456  {
    7457  const VkDeviceSize bytesMoved = m_pDefragmentator->GetBytesMoved();
    7458  const uint32_t allocationsMoved = m_pDefragmentator->GetAllocationsMoved();
    7459  pDefragmentationStats->bytesMoved += bytesMoved;
    7460  pDefragmentationStats->allocationsMoved += allocationsMoved;
    7461  VMA_ASSERT(bytesMoved <= maxBytesToMove);
    7462  VMA_ASSERT(allocationsMoved <= maxAllocationsToMove);
    7463  maxBytesToMove -= bytesMoved;
    7464  maxAllocationsToMove -= allocationsMoved;
    7465  }
    7466 
    7467  // Free empty blocks.
    7468  m_HasEmptyBlock = false;
    7469  for(size_t blockIndex = m_Blocks.size(); blockIndex--; )
    7470  {
    7471  VmaDeviceMemoryBlock* pBlock = m_Blocks[blockIndex];
    7472  if(pBlock->m_Metadata.IsEmpty())
    7473  {
    7474  if(m_Blocks.size() > m_MinBlockCount)
    7475  {
    7476  if(pDefragmentationStats != VMA_NULL)
    7477  {
    7478  ++pDefragmentationStats->deviceMemoryBlocksFreed;
    7479  pDefragmentationStats->bytesFreed += pBlock->m_Metadata.GetSize();
    7480  }
    7481 
    7482  VmaVectorRemove(m_Blocks, blockIndex);
    7483  pBlock->Destroy(m_hAllocator);
    7484  vma_delete(m_hAllocator, pBlock);
    7485  }
    7486  else
    7487  {
    7488  m_HasEmptyBlock = true;
    7489  }
    7490  }
    7491  }
    7492 
    7493  return result;
    7494 }
    7495 
    7496 void VmaBlockVector::DestroyDefragmentator()
    7497 {
    7498  if(m_pDefragmentator != VMA_NULL)
    7499  {
    7500  vma_delete(m_hAllocator, m_pDefragmentator);
    7501  m_pDefragmentator = VMA_NULL;
    7502  }
    7503 }
    7504 
    7505 void VmaBlockVector::MakePoolAllocationsLost(
    7506  uint32_t currentFrameIndex,
    7507  size_t* pLostAllocationCount)
    7508 {
    7509  VmaMutexLock lock(m_Mutex, m_hAllocator->m_UseMutex);
    7510  size_t lostAllocationCount = 0;
    7511  for(uint32_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex)
    7512  {
    7513  VmaDeviceMemoryBlock* const pBlock = m_Blocks[blockIndex];
    7514  VMA_ASSERT(pBlock);
    7515  lostAllocationCount += pBlock->m_Metadata.MakeAllocationsLost(currentFrameIndex, m_FrameInUseCount);
    7516  }
    7517  if(pLostAllocationCount != VMA_NULL)
    7518  {
    7519  *pLostAllocationCount = lostAllocationCount;
    7520  }
    7521 }
    7522 
    7523 VkResult VmaBlockVector::CheckCorruption()
    7524 {
    7525  if(!IsCorruptionDetectionEnabled())
    7526  {
    7527  return VK_ERROR_FEATURE_NOT_PRESENT;
    7528  }
    7529 
    7530  VmaMutexLock lock(m_Mutex, m_hAllocator->m_UseMutex);
    7531  for(uint32_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex)
    7532  {
    7533  VmaDeviceMemoryBlock* const pBlock = m_Blocks[blockIndex];
    7534  VMA_ASSERT(pBlock);
    7535  VkResult res = pBlock->CheckCorruption(m_hAllocator);
    7536  if(res != VK_SUCCESS)
    7537  {
    7538  return res;
    7539  }
    7540  }
    7541  return VK_SUCCESS;
    7542 }
    7543 
    7544 void VmaBlockVector::AddStats(VmaStats* pStats)
    7545 {
    7546  const uint32_t memTypeIndex = m_MemoryTypeIndex;
    7547  const uint32_t memHeapIndex = m_hAllocator->MemoryTypeIndexToHeapIndex(memTypeIndex);
    7548 
    7549  VmaMutexLock lock(m_Mutex, m_hAllocator->m_UseMutex);
    7550 
    7551  for(uint32_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex)
    7552  {
    7553  const VmaDeviceMemoryBlock* const pBlock = m_Blocks[blockIndex];
    7554  VMA_ASSERT(pBlock);
    7555  VMA_HEAVY_ASSERT(pBlock->Validate());
    7556  VmaStatInfo allocationStatInfo;
    7557  pBlock->m_Metadata.CalcAllocationStatInfo(allocationStatInfo);
    7558  VmaAddStatInfo(pStats->total, allocationStatInfo);
    7559  VmaAddStatInfo(pStats->memoryType[memTypeIndex], allocationStatInfo);
    7560  VmaAddStatInfo(pStats->memoryHeap[memHeapIndex], allocationStatInfo);
    7561  }
    7562 }
    7563 
     7564 ////////////////////////////////////////////////////////////////////////////////
    7565 // VmaDefragmentator members definition
    7566 
    7567 VmaDefragmentator::VmaDefragmentator(
    7568  VmaAllocator hAllocator,
    7569  VmaBlockVector* pBlockVector,
    7570  uint32_t currentFrameIndex) :
    7571  m_hAllocator(hAllocator),
    7572  m_pBlockVector(pBlockVector),
    7573  m_CurrentFrameIndex(currentFrameIndex),
    7574  m_BytesMoved(0),
    7575  m_AllocationsMoved(0),
    7576  m_Allocations(VmaStlAllocator<AllocationInfo>(hAllocator->GetAllocationCallbacks())),
    7577  m_Blocks(VmaStlAllocator<BlockInfo*>(hAllocator->GetAllocationCallbacks()))
    7578 {
    7579 }
    7580 
    7581 VmaDefragmentator::~VmaDefragmentator()
    7582 {
    7583  for(size_t i = m_Blocks.size(); i--; )
    7584  {
    7585  vma_delete(m_hAllocator, m_Blocks[i]);
    7586  }
    7587 }
    7588 
    7589 void VmaDefragmentator::AddAllocation(VmaAllocation hAlloc, VkBool32* pChanged)
    7590 {
    7591  AllocationInfo allocInfo;
    7592  allocInfo.m_hAllocation = hAlloc;
    7593  allocInfo.m_pChanged = pChanged;
    7594  m_Allocations.push_back(allocInfo);
    7595 }
    7596 
    7597 VkResult VmaDefragmentator::BlockInfo::EnsureMapping(VmaAllocator hAllocator, void** ppMappedData)
    7598 {
    7599  // It has already been mapped for defragmentation.
    7600  if(m_pMappedDataForDefragmentation)
    7601  {
    7602  *ppMappedData = m_pMappedDataForDefragmentation;
    7603  return VK_SUCCESS;
    7604  }
    7605 
    7606  // It is originally mapped.
    7607  if(m_pBlock->GetMappedData())
    7608  {
    7609  *ppMappedData = m_pBlock->GetMappedData();
    7610  return VK_SUCCESS;
    7611  }
    7612 
    7613  // Map on first usage.
    7614  VkResult res = m_pBlock->Map(hAllocator, 1, &m_pMappedDataForDefragmentation);
    7615  *ppMappedData = m_pMappedDataForDefragmentation;
    7616  return res;
    7617 }
    7618 
    7619 void VmaDefragmentator::BlockInfo::Unmap(VmaAllocator hAllocator)
    7620 {
    7621  if(m_pMappedDataForDefragmentation != VMA_NULL)
    7622  {
    7623  m_pBlock->Unmap(hAllocator, 1);
    7624  }
    7625 }
    7626 
    7627 VkResult VmaDefragmentator::DefragmentRound(
    7628  VkDeviceSize maxBytesToMove,
    7629  uint32_t maxAllocationsToMove)
    7630 {
    7631  if(m_Blocks.empty())
    7632  {
    7633  return VK_SUCCESS;
    7634  }
    7635 
    7636  size_t srcBlockIndex = m_Blocks.size() - 1;
    7637  size_t srcAllocIndex = SIZE_MAX;
    7638  for(;;)
    7639  {
    7640  // 1. Find next allocation to move.
    7641  // 1.1. Start from last to first m_Blocks - they are sorted from most "destination" to most "source".
    7642  // 1.2. Then start from last to first m_Allocations - they are sorted from largest to smallest.
    7643  while(srcAllocIndex >= m_Blocks[srcBlockIndex]->m_Allocations.size())
    7644  {
    7645  if(m_Blocks[srcBlockIndex]->m_Allocations.empty())
    7646  {
    7647  // Finished: no more allocations to process.
    7648  if(srcBlockIndex == 0)
    7649  {
    7650  return VK_SUCCESS;
    7651  }
    7652  else
    7653  {
    7654  --srcBlockIndex;
    7655  srcAllocIndex = SIZE_MAX;
    7656  }
    7657  }
    7658  else
    7659  {
    7660  srcAllocIndex = m_Blocks[srcBlockIndex]->m_Allocations.size() - 1;
    7661  }
    7662  }
    7663 
    7664  BlockInfo* pSrcBlockInfo = m_Blocks[srcBlockIndex];
    7665  AllocationInfo& allocInfo = pSrcBlockInfo->m_Allocations[srcAllocIndex];
    7666 
    7667  const VkDeviceSize size = allocInfo.m_hAllocation->GetSize();
    7668  const VkDeviceSize srcOffset = allocInfo.m_hAllocation->GetOffset();
    7669  const VkDeviceSize alignment = allocInfo.m_hAllocation->GetAlignment();
    7670  const VmaSuballocationType suballocType = allocInfo.m_hAllocation->GetSuballocationType();
    7671 
    7672  // 2. Try to find new place for this allocation in preceding or current block.
    7673  for(size_t dstBlockIndex = 0; dstBlockIndex <= srcBlockIndex; ++dstBlockIndex)
    7674  {
    7675  BlockInfo* pDstBlockInfo = m_Blocks[dstBlockIndex];
    7676  VmaAllocationRequest dstAllocRequest;
    7677  if(pDstBlockInfo->m_pBlock->m_Metadata.CreateAllocationRequest(
    7678  m_CurrentFrameIndex,
    7679  m_pBlockVector->GetFrameInUseCount(),
    7680  m_pBlockVector->GetBufferImageGranularity(),
    7681  size,
    7682  alignment,
    7683  suballocType,
    7684  false, // canMakeOtherLost
    7685  &dstAllocRequest) &&
    7686  MoveMakesSense(
    7687  dstBlockIndex, dstAllocRequest.offset, srcBlockIndex, srcOffset))
    7688  {
    7689  VMA_ASSERT(dstAllocRequest.itemsToMakeLostCount == 0);
    7690 
    7691  // Reached limit on number of allocations or bytes to move.
    7692  if((m_AllocationsMoved + 1 > maxAllocationsToMove) ||
    7693  (m_BytesMoved + size > maxBytesToMove))
    7694  {
    7695  return VK_INCOMPLETE;
    7696  }
    7697 
    7698  void* pDstMappedData = VMA_NULL;
    7699  VkResult res = pDstBlockInfo->EnsureMapping(m_hAllocator, &pDstMappedData);
    7700  if(res != VK_SUCCESS)
    7701  {
    7702  return res;
    7703  }
    7704 
    7705  void* pSrcMappedData = VMA_NULL;
    7706  res = pSrcBlockInfo->EnsureMapping(m_hAllocator, &pSrcMappedData);
    7707  if(res != VK_SUCCESS)
    7708  {
    7709  return res;
    7710  }
    7711 
    7712  // THE PLACE WHERE ACTUAL DATA COPY HAPPENS.
    7713  memcpy(
    7714  reinterpret_cast<char*>(pDstMappedData) + dstAllocRequest.offset,
    7715  reinterpret_cast<char*>(pSrcMappedData) + srcOffset,
    7716  static_cast<size_t>(size));
    7717 
    7718  if(VMA_DEBUG_MARGIN > 0)
    7719  {
    7720  VmaWriteMagicValue(pDstMappedData, dstAllocRequest.offset - VMA_DEBUG_MARGIN);
    7721  VmaWriteMagicValue(pDstMappedData, dstAllocRequest.offset + size);
    7722  }
    7723 
    7724  pDstBlockInfo->m_pBlock->m_Metadata.Alloc(dstAllocRequest, suballocType, size, allocInfo.m_hAllocation);
    7725  pSrcBlockInfo->m_pBlock->m_Metadata.FreeAtOffset(srcOffset);
    7726 
    7727  allocInfo.m_hAllocation->ChangeBlockAllocation(m_hAllocator, pDstBlockInfo->m_pBlock, dstAllocRequest.offset);
    7728 
    7729  if(allocInfo.m_pChanged != VMA_NULL)
    7730  {
    7731  *allocInfo.m_pChanged = VK_TRUE;
    7732  }
    7733 
    7734  ++m_AllocationsMoved;
    7735  m_BytesMoved += size;
    7736 
    7737  VmaVectorRemove(pSrcBlockInfo->m_Allocations, srcAllocIndex);
    7738 
    7739  break;
    7740  }
    7741  }
    7742 
    7743  // If not processed, this allocInfo remains in pBlockInfo->m_Allocations for next round.
    7744 
    7745  if(srcAllocIndex > 0)
    7746  {
    7747  --srcAllocIndex;
    7748  }
    7749  else
    7750  {
    7751  if(srcBlockIndex > 0)
    7752  {
    7753  --srcBlockIndex;
    7754  srcAllocIndex = SIZE_MAX;
    7755  }
    7756  else
    7757  {
    7758  return VK_SUCCESS;
    7759  }
    7760  }
    7761  }
    7762 }
    7763 
    7764 VkResult VmaDefragmentator::Defragment(
    7765  VkDeviceSize maxBytesToMove,
    7766  uint32_t maxAllocationsToMove)
    7767 {
    7768  if(m_Allocations.empty())
    7769  {
    7770  return VK_SUCCESS;
    7771  }
    7772 
    7773  // Create block info for each block.
    7774  const size_t blockCount = m_pBlockVector->m_Blocks.size();
    7775  for(size_t blockIndex = 0; blockIndex < blockCount; ++blockIndex)
    7776  {
    7777  BlockInfo* pBlockInfo = vma_new(m_hAllocator, BlockInfo)(m_hAllocator->GetAllocationCallbacks());
    7778  pBlockInfo->m_pBlock = m_pBlockVector->m_Blocks[blockIndex];
    7779  m_Blocks.push_back(pBlockInfo);
    7780  }
    7781 
    7782  // Sort them by m_pBlock pointer value.
    7783  VMA_SORT(m_Blocks.begin(), m_Blocks.end(), BlockPointerLess());
    7784 
     7785  // Move allocation infos from m_Allocations to appropriate m_Blocks[blockIndex].m_Allocations.
    7786  for(size_t blockIndex = 0, allocCount = m_Allocations.size(); blockIndex < allocCount; ++blockIndex)
    7787  {
    7788  AllocationInfo& allocInfo = m_Allocations[blockIndex];
     7789  // Now that we are inside VmaBlockVector::m_Mutex, we can make a final check that this allocation was not lost.
    7790  if(allocInfo.m_hAllocation->GetLastUseFrameIndex() != VMA_FRAME_INDEX_LOST)
    7791  {
    7792  VmaDeviceMemoryBlock* pBlock = allocInfo.m_hAllocation->GetBlock();
    7793  BlockInfoVector::iterator it = VmaBinaryFindFirstNotLess(m_Blocks.begin(), m_Blocks.end(), pBlock, BlockPointerLess());
    7794  if(it != m_Blocks.end() && (*it)->m_pBlock == pBlock)
    7795  {
    7796  (*it)->m_Allocations.push_back(allocInfo);
    7797  }
    7798  else
    7799  {
    7800  VMA_ASSERT(0);
    7801  }
    7802  }
    7803  }
    7804  m_Allocations.clear();
    7805 
    7806  for(size_t blockIndex = 0; blockIndex < blockCount; ++blockIndex)
    7807  {
    7808  BlockInfo* pBlockInfo = m_Blocks[blockIndex];
    7809  pBlockInfo->CalcHasNonMovableAllocations();
    7810  pBlockInfo->SortAllocationsBySizeDescecnding();
    7811  }
    7812 
     7813  // Sort m_Blocks this time by the main criterion, from most "destination" to most "source" blocks.
    7814  VMA_SORT(m_Blocks.begin(), m_Blocks.end(), BlockInfoCompareMoveDestination());
    7815 
    7816  // Execute defragmentation rounds (the main part).
    7817  VkResult result = VK_SUCCESS;
    7818  for(size_t round = 0; (round < 2) && (result == VK_SUCCESS); ++round)
    7819  {
    7820  result = DefragmentRound(maxBytesToMove, maxAllocationsToMove);
    7821  }
    7822 
    7823  // Unmap blocks that were mapped for defragmentation.
    7824  for(size_t blockIndex = 0; blockIndex < blockCount; ++blockIndex)
    7825  {
    7826  m_Blocks[blockIndex]->Unmap(m_hAllocator);
    7827  }
    7828 
    7829  return result;
    7830 }
    7831 
    7832 bool VmaDefragmentator::MoveMakesSense(
    7833  size_t dstBlockIndex, VkDeviceSize dstOffset,
    7834  size_t srcBlockIndex, VkDeviceSize srcOffset)
    7835 {
    7836  if(dstBlockIndex < srcBlockIndex)
    7837  {
    7838  return true;
    7839  }
    7840  if(dstBlockIndex > srcBlockIndex)
    7841  {
    7842  return false;
    7843  }
    7844  if(dstOffset < srcOffset)
    7845  {
    7846  return true;
    7847  }
    7848  return false;
    7849 }
    7850 
     7851 ////////////////////////////////////////////////////////////////////////////////
    7852 // VmaAllocator_T
    7853 
    7854 VmaAllocator_T::VmaAllocator_T(const VmaAllocatorCreateInfo* pCreateInfo) :
    7855  m_UseMutex((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_EXTERNALLY_SYNCHRONIZED_BIT) == 0),
    7856  m_UseKhrDedicatedAllocation((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT) != 0),
    7857  m_hDevice(pCreateInfo->device),
    7858  m_AllocationCallbacksSpecified(pCreateInfo->pAllocationCallbacks != VMA_NULL),
    7859  m_AllocationCallbacks(pCreateInfo->pAllocationCallbacks ?
    7860  *pCreateInfo->pAllocationCallbacks : VmaEmptyAllocationCallbacks),
    7861  m_PreferredLargeHeapBlockSize(0),
    7862  m_PhysicalDevice(pCreateInfo->physicalDevice),
    7863  m_CurrentFrameIndex(0),
    7864  m_Pools(VmaStlAllocator<VmaPool>(GetAllocationCallbacks())),
    7865  m_NextPoolId(0)
    7866 {
    7867  if(VMA_DEBUG_DETECT_CORRUPTION)
    7868  {
     7869  // Needs to be a multiple of uint32_t size because we are going to write VMA_CORRUPTION_DETECTION_MAGIC_VALUE to it.
    7870  VMA_ASSERT(VMA_DEBUG_MARGIN % sizeof(uint32_t) == 0);
    7871  }
    7872 
    7873  VMA_ASSERT(pCreateInfo->physicalDevice && pCreateInfo->device);
    7874 
    7875 #if !(VMA_DEDICATED_ALLOCATION)
     7876  if((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT) != 0)
    7877  {
    7878  VMA_ASSERT(0 && "VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT set but required extensions are disabled by preprocessor macros.");
    7879  }
    7880 #endif
    7881 
     7882  memset(&m_DeviceMemoryCallbacks, 0, sizeof(m_DeviceMemoryCallbacks));
    7883  memset(&m_PhysicalDeviceProperties, 0, sizeof(m_PhysicalDeviceProperties));
    7884  memset(&m_MemProps, 0, sizeof(m_MemProps));
    7885 
    7886  memset(&m_pBlockVectors, 0, sizeof(m_pBlockVectors));
    7887  memset(&m_pDedicatedAllocations, 0, sizeof(m_pDedicatedAllocations));
    7888 
    7889  for(uint32_t i = 0; i < VK_MAX_MEMORY_HEAPS; ++i)
    7890  {
    7891  m_HeapSizeLimit[i] = VK_WHOLE_SIZE;
    7892  }
    7893 
    7894  if(pCreateInfo->pDeviceMemoryCallbacks != VMA_NULL)
    7895  {
    7896  m_DeviceMemoryCallbacks.pfnAllocate = pCreateInfo->pDeviceMemoryCallbacks->pfnAllocate;
    7897  m_DeviceMemoryCallbacks.pfnFree = pCreateInfo->pDeviceMemoryCallbacks->pfnFree;
    7898  }
    7899 
    7900  ImportVulkanFunctions(pCreateInfo->pVulkanFunctions);
    7901 
    7902  (*m_VulkanFunctions.vkGetPhysicalDeviceProperties)(m_PhysicalDevice, &m_PhysicalDeviceProperties);
    7903  (*m_VulkanFunctions.vkGetPhysicalDeviceMemoryProperties)(m_PhysicalDevice, &m_MemProps);
    7904 
    7905  m_PreferredLargeHeapBlockSize = (pCreateInfo->preferredLargeHeapBlockSize != 0) ?
    7906  pCreateInfo->preferredLargeHeapBlockSize : static_cast<VkDeviceSize>(VMA_DEFAULT_LARGE_HEAP_BLOCK_SIZE);
    7907 
    7908  if(pCreateInfo->pHeapSizeLimit != VMA_NULL)
    7909  {
    7910  for(uint32_t heapIndex = 0; heapIndex < GetMemoryHeapCount(); ++heapIndex)
    7911  {
    7912  const VkDeviceSize limit = pCreateInfo->pHeapSizeLimit[heapIndex];
    7913  if(limit != VK_WHOLE_SIZE)
    7914  {
    7915  m_HeapSizeLimit[heapIndex] = limit;
    7916  if(limit < m_MemProps.memoryHeaps[heapIndex].size)
    7917  {
    7918  m_MemProps.memoryHeaps[heapIndex].size = limit;
    7919  }
    7920  }
    7921  }
    7922  }
    7923 
    7924  for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
    7925  {
    7926  const VkDeviceSize preferredBlockSize = CalcPreferredBlockSize(memTypeIndex);
    7927 
    7928  m_pBlockVectors[memTypeIndex] = vma_new(this, VmaBlockVector)(
    7929  this,
    7930  memTypeIndex,
    7931  preferredBlockSize,
    7932  0,
    7933  SIZE_MAX,
    7934  GetBufferImageGranularity(),
    7935  pCreateInfo->frameInUseCount,
    7936  false); // isCustomPool
     7937  // No need to call m_pBlockVectors[memTypeIndex]->CreateMinBlocks here,
     7938  // because minBlockCount is 0.
    7939  m_pDedicatedAllocations[memTypeIndex] = vma_new(this, AllocationVectorType)(VmaStlAllocator<VmaAllocation>(GetAllocationCallbacks()));
    7940 
    7941  }
    7942 }
    7943 
    7944 VmaAllocator_T::~VmaAllocator_T()
    7945 {
    7946  VMA_ASSERT(m_Pools.empty());
    7947 
    7948  for(size_t i = GetMemoryTypeCount(); i--; )
    7949  {
    7950  vma_delete(this, m_pDedicatedAllocations[i]);
    7951  vma_delete(this, m_pBlockVectors[i]);
    7952  }
    7953 }
    7954 
    7955 void VmaAllocator_T::ImportVulkanFunctions(const VmaVulkanFunctions* pVulkanFunctions)
    7956 {
    7957 #if VMA_STATIC_VULKAN_FUNCTIONS == 1
    7958  m_VulkanFunctions.vkGetPhysicalDeviceProperties = &vkGetPhysicalDeviceProperties;
    7959  m_VulkanFunctions.vkGetPhysicalDeviceMemoryProperties = &vkGetPhysicalDeviceMemoryProperties;
    7960  m_VulkanFunctions.vkAllocateMemory = &vkAllocateMemory;
    7961  m_VulkanFunctions.vkFreeMemory = &vkFreeMemory;
    7962  m_VulkanFunctions.vkMapMemory = &vkMapMemory;
    7963  m_VulkanFunctions.vkUnmapMemory = &vkUnmapMemory;
    7964  m_VulkanFunctions.vkFlushMappedMemoryRanges = &vkFlushMappedMemoryRanges;
    7965  m_VulkanFunctions.vkInvalidateMappedMemoryRanges = &vkInvalidateMappedMemoryRanges;
    7966  m_VulkanFunctions.vkBindBufferMemory = &vkBindBufferMemory;
    7967  m_VulkanFunctions.vkBindImageMemory = &vkBindImageMemory;
    7968  m_VulkanFunctions.vkGetBufferMemoryRequirements = &vkGetBufferMemoryRequirements;
    7969  m_VulkanFunctions.vkGetImageMemoryRequirements = &vkGetImageMemoryRequirements;
    7970  m_VulkanFunctions.vkCreateBuffer = &vkCreateBuffer;
    7971  m_VulkanFunctions.vkDestroyBuffer = &vkDestroyBuffer;
    7972  m_VulkanFunctions.vkCreateImage = &vkCreateImage;
    7973  m_VulkanFunctions.vkDestroyImage = &vkDestroyImage;
    7974 #if VMA_DEDICATED_ALLOCATION
    7975  if(m_UseKhrDedicatedAllocation)
    7976  {
    7977  m_VulkanFunctions.vkGetBufferMemoryRequirements2KHR =
    7978  (PFN_vkGetBufferMemoryRequirements2KHR)vkGetDeviceProcAddr(m_hDevice, "vkGetBufferMemoryRequirements2KHR");
    7979  m_VulkanFunctions.vkGetImageMemoryRequirements2KHR =
    7980  (PFN_vkGetImageMemoryRequirements2KHR)vkGetDeviceProcAddr(m_hDevice, "vkGetImageMemoryRequirements2KHR");
    7981  }
    7982 #endif // #if VMA_DEDICATED_ALLOCATION
    7983 #endif // #if VMA_STATIC_VULKAN_FUNCTIONS == 1
    7984 
    7985 #define VMA_COPY_IF_NOT_NULL(funcName) \
    7986  if(pVulkanFunctions->funcName != VMA_NULL) m_VulkanFunctions.funcName = pVulkanFunctions->funcName;
    7987 
    7988  if(pVulkanFunctions != VMA_NULL)
    7989  {
    7990  VMA_COPY_IF_NOT_NULL(vkGetPhysicalDeviceProperties);
    7991  VMA_COPY_IF_NOT_NULL(vkGetPhysicalDeviceMemoryProperties);
    7992  VMA_COPY_IF_NOT_NULL(vkAllocateMemory);
    7993  VMA_COPY_IF_NOT_NULL(vkFreeMemory);
    7994  VMA_COPY_IF_NOT_NULL(vkMapMemory);
    7995  VMA_COPY_IF_NOT_NULL(vkUnmapMemory);
    7996  VMA_COPY_IF_NOT_NULL(vkFlushMappedMemoryRanges);
    7997  VMA_COPY_IF_NOT_NULL(vkInvalidateMappedMemoryRanges);
    7998  VMA_COPY_IF_NOT_NULL(vkBindBufferMemory);
    7999  VMA_COPY_IF_NOT_NULL(vkBindImageMemory);
    8000  VMA_COPY_IF_NOT_NULL(vkGetBufferMemoryRequirements);
    8001  VMA_COPY_IF_NOT_NULL(vkGetImageMemoryRequirements);
    8002  VMA_COPY_IF_NOT_NULL(vkCreateBuffer);
    8003  VMA_COPY_IF_NOT_NULL(vkDestroyBuffer);
    8004  VMA_COPY_IF_NOT_NULL(vkCreateImage);
    8005  VMA_COPY_IF_NOT_NULL(vkDestroyImage);
    8006 #if VMA_DEDICATED_ALLOCATION
    8007  VMA_COPY_IF_NOT_NULL(vkGetBufferMemoryRequirements2KHR);
    8008  VMA_COPY_IF_NOT_NULL(vkGetImageMemoryRequirements2KHR);
    8009 #endif
    8010  }
    8011 
    8012 #undef VMA_COPY_IF_NOT_NULL
    8013 
    8014  // If these asserts are hit, you must either #define VMA_STATIC_VULKAN_FUNCTIONS 1
    8015  // or pass valid pointers as VmaAllocatorCreateInfo::pVulkanFunctions.
    8016  VMA_ASSERT(m_VulkanFunctions.vkGetPhysicalDeviceProperties != VMA_NULL);
    8017  VMA_ASSERT(m_VulkanFunctions.vkGetPhysicalDeviceMemoryProperties != VMA_NULL);
    8018  VMA_ASSERT(m_VulkanFunctions.vkAllocateMemory != VMA_NULL);
    8019  VMA_ASSERT(m_VulkanFunctions.vkFreeMemory != VMA_NULL);
    8020  VMA_ASSERT(m_VulkanFunctions.vkMapMemory != VMA_NULL);
    8021  VMA_ASSERT(m_VulkanFunctions.vkUnmapMemory != VMA_NULL);
    8022  VMA_ASSERT(m_VulkanFunctions.vkFlushMappedMemoryRanges != VMA_NULL);
    8023  VMA_ASSERT(m_VulkanFunctions.vkInvalidateMappedMemoryRanges != VMA_NULL);
    8024  VMA_ASSERT(m_VulkanFunctions.vkBindBufferMemory != VMA_NULL);
    8025  VMA_ASSERT(m_VulkanFunctions.vkBindImageMemory != VMA_NULL);
    8026  VMA_ASSERT(m_VulkanFunctions.vkGetBufferMemoryRequirements != VMA_NULL);
    8027  VMA_ASSERT(m_VulkanFunctions.vkGetImageMemoryRequirements != VMA_NULL);
    8028  VMA_ASSERT(m_VulkanFunctions.vkCreateBuffer != VMA_NULL);
    8029  VMA_ASSERT(m_VulkanFunctions.vkDestroyBuffer != VMA_NULL);
    8030  VMA_ASSERT(m_VulkanFunctions.vkCreateImage != VMA_NULL);
    8031  VMA_ASSERT(m_VulkanFunctions.vkDestroyImage != VMA_NULL);
    8032 #if VMA_DEDICATED_ALLOCATION
    8033  if(m_UseKhrDedicatedAllocation)
    8034  {
    8035  VMA_ASSERT(m_VulkanFunctions.vkGetBufferMemoryRequirements2KHR != VMA_NULL);
    8036  VMA_ASSERT(m_VulkanFunctions.vkGetImageMemoryRequirements2KHR != VMA_NULL);
    8037  }
    8038 #endif
    8039 }
    8040 
    8041 VkDeviceSize VmaAllocator_T::CalcPreferredBlockSize(uint32_t memTypeIndex)
    8042 {
    8043  const uint32_t heapIndex = MemoryTypeIndexToHeapIndex(memTypeIndex);
    8044  const VkDeviceSize heapSize = m_MemProps.memoryHeaps[heapIndex].size;
    8045  const bool isSmallHeap = heapSize <= VMA_SMALL_HEAP_MAX_SIZE;
    8046  return isSmallHeap ? (heapSize / 8) : m_PreferredLargeHeapBlockSize;
    8047 }
    8048 
    8049 VkResult VmaAllocator_T::AllocateMemoryOfType(
    8050  VkDeviceSize size,
    8051  VkDeviceSize alignment,
    8052  bool dedicatedAllocation,
    8053  VkBuffer dedicatedBuffer,
    8054  VkImage dedicatedImage,
    8055  const VmaAllocationCreateInfo& createInfo,
    8056  uint32_t memTypeIndex,
    8057  VmaSuballocationType suballocType,
    8058  VmaAllocation* pAllocation)
    8059 {
    8060  VMA_ASSERT(pAllocation != VMA_NULL);
     8061  VMA_DEBUG_LOG("  AllocateMemory: MemoryTypeIndex=%u, Size=%llu", memTypeIndex, size);
    8062 
    8063  VmaAllocationCreateInfo finalCreateInfo = createInfo;
    8064 
    8065  // If memory type is not HOST_VISIBLE, disable MAPPED.
    8066  if((finalCreateInfo.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0 &&
    8067  (m_MemProps.memoryTypes[memTypeIndex].propertyFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) == 0)
    8068  {
    8069  finalCreateInfo.flags &= ~VMA_ALLOCATION_CREATE_MAPPED_BIT;
    8070  }
    8071 
    8072  VmaBlockVector* const blockVector = m_pBlockVectors[memTypeIndex];
    8073  VMA_ASSERT(blockVector);
    8074 
    8075  const VkDeviceSize preferredBlockSize = blockVector->GetPreferredBlockSize();
    8076  bool preferDedicatedMemory =
    8077  VMA_DEBUG_ALWAYS_DEDICATED_MEMORY ||
    8078  dedicatedAllocation ||
     8079  // Heuristics: Allocate dedicated memory if requested size is greater than half of preferred block size.
    8080  size > preferredBlockSize / 2;
    8081 
    8082  if(preferDedicatedMemory &&
    8083  (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) == 0 &&
    8084  finalCreateInfo.pool == VK_NULL_HANDLE)
    8085  {
     8086  finalCreateInfo.flags |= VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT;
    8087  }
    8088 
    8089  if((finalCreateInfo.flags & VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT) != 0)
    8090  {
    8091  if((finalCreateInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) != 0)
    8092  {
    8093  return VK_ERROR_OUT_OF_DEVICE_MEMORY;
    8094  }
    8095  else
    8096  {
    8097  return AllocateDedicatedMemory(
    8098  size,
    8099  suballocType,
    8100  memTypeIndex,
    8101  (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0,
    8102  (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT) != 0,
    8103  finalCreateInfo.pUserData,
    8104  dedicatedBuffer,
    8105  dedicatedImage,
    8106  pAllocation);
    8107  }
    8108  }
    8109  else
    8110  {
    8111  VkResult res = blockVector->Allocate(
    8112  VK_NULL_HANDLE, // hCurrentPool
    8113  m_CurrentFrameIndex.load(),
    8114  size,
    8115  alignment,
    8116  finalCreateInfo,
    8117  suballocType,
    8118  pAllocation);
    8119  if(res == VK_SUCCESS)
    8120  {
    8121  return res;
    8122  }
    8123 
    8124  // Try dedicated memory.
    8125  if((finalCreateInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) != 0)
    8126  {
    8127  return VK_ERROR_OUT_OF_DEVICE_MEMORY;
    8128  }
    8129  else
    8130  {
    8131  res = AllocateDedicatedMemory(
    8132  size,
    8133  suballocType,
    8134  memTypeIndex,
    8135  (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0,
    8136  (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT) != 0,
    8137  finalCreateInfo.pUserData,
    8138  dedicatedBuffer,
    8139  dedicatedImage,
    8140  pAllocation);
    8141  if(res == VK_SUCCESS)
    8142  {
    8143  // Succeeded: AllocateDedicatedMemory function already filled pAllocation, nothing more to do here.
    8144  VMA_DEBUG_LOG(" Allocated as DedicatedMemory");
    8145  return VK_SUCCESS;
    8146  }
    8147  else
    8148  {
    8149  // Everything failed: Return error code.
    8150  VMA_DEBUG_LOG(" vkAllocateMemory FAILED");
    8151  return res;
    8152  }
    8153  }
    8154  }
    8155 }
    8156 
    8157 VkResult VmaAllocator_T::AllocateDedicatedMemory(
    8158  VkDeviceSize size,
    8159  VmaSuballocationType suballocType,
    8160  uint32_t memTypeIndex,
    8161  bool map,
    8162  bool isUserDataString,
    8163  void* pUserData,
    8164  VkBuffer dedicatedBuffer,
    8165  VkImage dedicatedImage,
    8166  VmaAllocation* pAllocation)
    8167 {
    8168  VMA_ASSERT(pAllocation);
    8169 
    8170  VkMemoryAllocateInfo allocInfo = { VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO };
    8171  allocInfo.memoryTypeIndex = memTypeIndex;
    8172  allocInfo.allocationSize = size;
    8173 
    8174 #if VMA_DEDICATED_ALLOCATION
    8175  VkMemoryDedicatedAllocateInfoKHR dedicatedAllocInfo = { VK_STRUCTURE_TYPE_MEMORY_DEDICATED_ALLOCATE_INFO_KHR };
    8176  if(m_UseKhrDedicatedAllocation)
    8177  {
    8178  if(dedicatedBuffer != VK_NULL_HANDLE)
    8179  {
    8180  VMA_ASSERT(dedicatedImage == VK_NULL_HANDLE);
    8181  dedicatedAllocInfo.buffer = dedicatedBuffer;
    8182  allocInfo.pNext = &dedicatedAllocInfo;
    8183  }
    8184  else if(dedicatedImage != VK_NULL_HANDLE)
    8185  {
    8186  dedicatedAllocInfo.image = dedicatedImage;
    8187  allocInfo.pNext = &dedicatedAllocInfo;
    8188  }
    8189  }
    8190 #endif // #if VMA_DEDICATED_ALLOCATION
    8191 
    8192  // Allocate VkDeviceMemory.
    8193  VkDeviceMemory hMemory = VK_NULL_HANDLE;
    8194  VkResult res = AllocateVulkanMemory(&allocInfo, &hMemory);
    8195  if(res < 0)
    8196  {
    8197  VMA_DEBUG_LOG(" vkAllocateMemory FAILED");
    8198  return res;
    8199  }
    8200 
    8201  void* pMappedData = VMA_NULL;
    8202  if(map)
    8203  {
    8204  res = (*m_VulkanFunctions.vkMapMemory)(
    8205  m_hDevice,
    8206  hMemory,
    8207  0,
    8208  VK_WHOLE_SIZE,
    8209  0,
    8210  &pMappedData);
    8211  if(res < 0)
    8212  {
    8213  VMA_DEBUG_LOG(" vkMapMemory FAILED");
    8214  FreeVulkanMemory(memTypeIndex, size, hMemory);
    8215  return res;
    8216  }
    8217  }
    8218 
    8219  *pAllocation = vma_new(this, VmaAllocation_T)(m_CurrentFrameIndex.load(), isUserDataString);
    8220  (*pAllocation)->InitDedicatedAllocation(memTypeIndex, hMemory, suballocType, pMappedData, size);
    8221  (*pAllocation)->SetUserData(this, pUserData);
    8222 
    8223  // Register it in m_pDedicatedAllocations.
    8224  {
    8225  VmaMutexLock lock(m_DedicatedAllocationsMutex[memTypeIndex], m_UseMutex);
    8226  AllocationVectorType* pDedicatedAllocations = m_pDedicatedAllocations[memTypeIndex];
    8227  VMA_ASSERT(pDedicatedAllocations);
    8228  VmaVectorInsertSorted<VmaPointerLess>(*pDedicatedAllocations, *pAllocation);
    8229  }
    8230 
    8231  VMA_DEBUG_LOG(" Allocated DedicatedMemory MemoryTypeIndex=#%u", memTypeIndex);
    8232 
    8233  return VK_SUCCESS;
    8234 }
    8235 
    8236 void VmaAllocator_T::GetBufferMemoryRequirements(
    8237  VkBuffer hBuffer,
    8238  VkMemoryRequirements& memReq,
    8239  bool& requiresDedicatedAllocation,
    8240  bool& prefersDedicatedAllocation) const
    8241 {
    8242 #if VMA_DEDICATED_ALLOCATION
    8243  if(m_UseKhrDedicatedAllocation)
    8244  {
    8245  VkBufferMemoryRequirementsInfo2KHR memReqInfo = { VK_STRUCTURE_TYPE_BUFFER_MEMORY_REQUIREMENTS_INFO_2_KHR };
    8246  memReqInfo.buffer = hBuffer;
    8247 
    8248  VkMemoryDedicatedRequirementsKHR memDedicatedReq = { VK_STRUCTURE_TYPE_MEMORY_DEDICATED_REQUIREMENTS_KHR };
    8249 
    8250  VkMemoryRequirements2KHR memReq2 = { VK_STRUCTURE_TYPE_MEMORY_REQUIREMENTS_2_KHR };
    8251  memReq2.pNext = &memDedicatedReq;
    8252 
    8253  (*m_VulkanFunctions.vkGetBufferMemoryRequirements2KHR)(m_hDevice, &memReqInfo, &memReq2);
    8254 
    8255  memReq = memReq2.memoryRequirements;
    8256  requiresDedicatedAllocation = (memDedicatedReq.requiresDedicatedAllocation != VK_FALSE);
    8257  prefersDedicatedAllocation = (memDedicatedReq.prefersDedicatedAllocation != VK_FALSE);
    8258  }
    8259  else
    8260 #endif // #if VMA_DEDICATED_ALLOCATION
    8261  {
    8262  (*m_VulkanFunctions.vkGetBufferMemoryRequirements)(m_hDevice, hBuffer, &memReq);
    8263  requiresDedicatedAllocation = false;
    8264  prefersDedicatedAllocation = false;
    8265  }
    8266 }
    8267 
    8268 void VmaAllocator_T::GetImageMemoryRequirements(
    8269  VkImage hImage,
    8270  VkMemoryRequirements& memReq,
    8271  bool& requiresDedicatedAllocation,
    8272  bool& prefersDedicatedAllocation) const
    8273 {
    8274 #if VMA_DEDICATED_ALLOCATION
    8275  if(m_UseKhrDedicatedAllocation)
    8276  {
    8277  VkImageMemoryRequirementsInfo2KHR memReqInfo = { VK_STRUCTURE_TYPE_IMAGE_MEMORY_REQUIREMENTS_INFO_2_KHR };
    8278  memReqInfo.image = hImage;
    8279 
    8280  VkMemoryDedicatedRequirementsKHR memDedicatedReq = { VK_STRUCTURE_TYPE_MEMORY_DEDICATED_REQUIREMENTS_KHR };
    8281 
    8282  VkMemoryRequirements2KHR memReq2 = { VK_STRUCTURE_TYPE_MEMORY_REQUIREMENTS_2_KHR };
    8283  memReq2.pNext = &memDedicatedReq;
    8284 
    8285  (*m_VulkanFunctions.vkGetImageMemoryRequirements2KHR)(m_hDevice, &memReqInfo, &memReq2);
    8286 
    8287  memReq = memReq2.memoryRequirements;
    8288  requiresDedicatedAllocation = (memDedicatedReq.requiresDedicatedAllocation != VK_FALSE);
    8289  prefersDedicatedAllocation = (memDedicatedReq.prefersDedicatedAllocation != VK_FALSE);
    8290  }
    8291  else
    8292 #endif // #if VMA_DEDICATED_ALLOCATION
    8293  {
    8294  (*m_VulkanFunctions.vkGetImageMemoryRequirements)(m_hDevice, hImage, &memReq);
    8295  requiresDedicatedAllocation = false;
    8296  prefersDedicatedAllocation = false;
    8297  }
    8298 }
    8299 
    8300 VkResult VmaAllocator_T::AllocateMemory(
    8301  const VkMemoryRequirements& vkMemReq,
    8302  bool requiresDedicatedAllocation,
    8303  bool prefersDedicatedAllocation,
    8304  VkBuffer dedicatedBuffer,
    8305  VkImage dedicatedImage,
    8306  const VmaAllocationCreateInfo& createInfo,
    8307  VmaSuballocationType suballocType,
    8308  VmaAllocation* pAllocation)
    8309 {
    8310  if((createInfo.flags & VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT) != 0 &&
    8311  (createInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) != 0)
    8312  {
    8313  VMA_ASSERT(0 && "Specifying VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT together with VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT makes no sense.");
    8314  return VK_ERROR_OUT_OF_DEVICE_MEMORY;
    8315  }
    8316  if((createInfo.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0 &&
    8317  (createInfo.flags & VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT) != 0)
    8318  {
    8319  VMA_ASSERT(0 && "Specifying VMA_ALLOCATION_CREATE_MAPPED_BIT together with VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT is invalid.");
    8320  return VK_ERROR_OUT_OF_DEVICE_MEMORY;
    8321  }
    8322  if(requiresDedicatedAllocation)
    8323  {
    8324  if((createInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) != 0)
    8325  {
    8326  VMA_ASSERT(0 && "VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT specified while dedicated allocation is required.");
    8327  return VK_ERROR_OUT_OF_DEVICE_MEMORY;
    8328  }
    8329  if(createInfo.pool != VK_NULL_HANDLE)
    8330  {
    8331  VMA_ASSERT(0 && "Pool specified while dedicated allocation is required.");
    8332  return VK_ERROR_OUT_OF_DEVICE_MEMORY;
    8333  }
    8334  }
    8335  if((createInfo.pool != VK_NULL_HANDLE) &&
    8336  ((createInfo.flags & (VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT)) != 0))
    8337  {
    8338  VMA_ASSERT(0 && "Specifying VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT when pool != null is invalid.");
    8339  return VK_ERROR_OUT_OF_DEVICE_MEMORY;
    8340  }
    8341 
    8342  if(createInfo.pool != VK_NULL_HANDLE)
    8343  {
    8344  const VkDeviceSize alignmentForPool = VMA_MAX(
    8345  vkMemReq.alignment,
    8346  GetMemoryTypeMinAlignment(createInfo.pool->m_BlockVector.GetMemoryTypeIndex()));
    8347  return createInfo.pool->m_BlockVector.Allocate(
    8348  createInfo.pool,
    8349  m_CurrentFrameIndex.load(),
    8350  vkMemReq.size,
    8351  alignmentForPool,
    8352  createInfo,
    8353  suballocType,
    8354  pAllocation);
    8355  }
    8356  else
    8357  {
    8358  // Bit mask of Vulkan memory types acceptable for this allocation.
    8359  uint32_t memoryTypeBits = vkMemReq.memoryTypeBits;
    8360  uint32_t memTypeIndex = UINT32_MAX;
    8361  VkResult res = vmaFindMemoryTypeIndex(this, memoryTypeBits, &createInfo, &memTypeIndex);
    8362  if(res == VK_SUCCESS)
    8363  {
    8364  VkDeviceSize alignmentForMemType = VMA_MAX(
    8365  vkMemReq.alignment,
    8366  GetMemoryTypeMinAlignment(memTypeIndex));
    8367 
    8368  res = AllocateMemoryOfType(
    8369  vkMemReq.size,
    8370  alignmentForMemType,
    8371  requiresDedicatedAllocation || prefersDedicatedAllocation,
    8372  dedicatedBuffer,
    8373  dedicatedImage,
    8374  createInfo,
    8375  memTypeIndex,
    8376  suballocType,
    8377  pAllocation);
    8378  // Succeeded on first try.
    8379  if(res == VK_SUCCESS)
    8380  {
    8381  return res;
    8382  }
    8383  // Allocation from this memory type failed. Try other compatible memory types.
    8384  else
    8385  {
    8386  for(;;)
    8387  {
    8388  // Remove old memTypeIndex from list of possibilities.
    8389  memoryTypeBits &= ~(1u << memTypeIndex);
    8390  // Find alternative memTypeIndex.
    8391  res = vmaFindMemoryTypeIndex(this, memoryTypeBits, &createInfo, &memTypeIndex);
    8392  if(res == VK_SUCCESS)
    8393  {
    8394  alignmentForMemType = VMA_MAX(
    8395  vkMemReq.alignment,
    8396  GetMemoryTypeMinAlignment(memTypeIndex));
    8397 
    8398  res = AllocateMemoryOfType(
    8399  vkMemReq.size,
    8400  alignmentForMemType,
    8401  requiresDedicatedAllocation || prefersDedicatedAllocation,
    8402  dedicatedBuffer,
    8403  dedicatedImage,
    8404  createInfo,
    8405  memTypeIndex,
    8406  suballocType,
    8407  pAllocation);
    8408  // Allocation from this alternative memory type succeeded.
    8409  if(res == VK_SUCCESS)
    8410  {
    8411  return res;
    8412  }
    8413  // else: Allocation from this memory type failed. Try next one - next loop iteration.
    8414  }
    8415  // No other matching memory type index could be found.
    8416  else
    8417  {
    8418  // Not returning res, which is VK_ERROR_FEATURE_NOT_PRESENT, because we already failed to allocate once.
    8419  return VK_ERROR_OUT_OF_DEVICE_MEMORY;
    8420  }
    8421  }
    8422  }
    8423  }
    8424  // Can't find any single memory type matching requirements. res is VK_ERROR_FEATURE_NOT_PRESENT.
    8425  else
    8426  return res;
    8427  }
    8428 }
    8429 
    8430 void VmaAllocator_T::FreeMemory(const VmaAllocation allocation)
    8431 {
    8432  VMA_ASSERT(allocation);
    8433 
    8434  if(allocation->CanBecomeLost() == false ||
    8435  allocation->GetLastUseFrameIndex() != VMA_FRAME_INDEX_LOST)
    8436  {
    8437  switch(allocation->GetType())
    8438  {
    8439  case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
    8440  {
    8441  VmaBlockVector* pBlockVector = VMA_NULL;
    8442  VmaPool hPool = allocation->GetPool();
    8443  if(hPool != VK_NULL_HANDLE)
    8444  {
    8445  pBlockVector = &hPool->m_BlockVector;
    8446  }
    8447  else
    8448  {
    8449  const uint32_t memTypeIndex = allocation->GetMemoryTypeIndex();
    8450  pBlockVector = m_pBlockVectors[memTypeIndex];
    8451  }
    8452  pBlockVector->Free(allocation);
    8453  }
    8454  break;
    8455  case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
    8456  FreeDedicatedMemory(allocation);
    8457  break;
    8458  default:
    8459  VMA_ASSERT(0);
    8460  }
    8461  }
    8462 
    8463  allocation->SetUserData(this, VMA_NULL);
    8464  vma_delete(this, allocation);
    8465 }
    8466 
    8467 void VmaAllocator_T::CalculateStats(VmaStats* pStats)
    8468 {
    8469  // Initialize.
    8470  InitStatInfo(pStats->total);
    8471  for(size_t i = 0; i < VK_MAX_MEMORY_TYPES; ++i)
    8472  InitStatInfo(pStats->memoryType[i]);
    8473  for(size_t i = 0; i < VK_MAX_MEMORY_HEAPS; ++i)
    8474  InitStatInfo(pStats->memoryHeap[i]);
    8475 
    8476  // Process default pools.
    8477  for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
    8478  {
    8479  VmaBlockVector* const pBlockVector = m_pBlockVectors[memTypeIndex];
    8480  VMA_ASSERT(pBlockVector);
    8481  pBlockVector->AddStats(pStats);
    8482  }
    8483 
    8484  // Process custom pools.
    8485  {
    8486  VmaMutexLock lock(m_PoolsMutex, m_UseMutex);
    8487  for(size_t poolIndex = 0, poolCount = m_Pools.size(); poolIndex < poolCount; ++poolIndex)
    8488  {
    8489  m_Pools[poolIndex]->GetBlockVector().AddStats(pStats);
    8490  }
    8491  }
    8492 
    8493  // Process dedicated allocations.
    8494  for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
    8495  {
    8496  const uint32_t memHeapIndex = MemoryTypeIndexToHeapIndex(memTypeIndex);
    8497  VmaMutexLock dedicatedAllocationsLock(m_DedicatedAllocationsMutex[memTypeIndex], m_UseMutex);
    8498  AllocationVectorType* const pDedicatedAllocVector = m_pDedicatedAllocations[memTypeIndex];
    8499  VMA_ASSERT(pDedicatedAllocVector);
    8500  for(size_t allocIndex = 0, allocCount = pDedicatedAllocVector->size(); allocIndex < allocCount; ++allocIndex)
    8501  {
    8502  VmaStatInfo allocationStatInfo;
    8503  (*pDedicatedAllocVector)[allocIndex]->DedicatedAllocCalcStatsInfo(allocationStatInfo);
    8504  VmaAddStatInfo(pStats->total, allocationStatInfo);
    8505  VmaAddStatInfo(pStats->memoryType[memTypeIndex], allocationStatInfo);
    8506  VmaAddStatInfo(pStats->memoryHeap[memHeapIndex], allocationStatInfo);
    8507  }
    8508  }
    8509 
    8510  // Postprocess.
    8511  VmaPostprocessCalcStatInfo(pStats->total);
    8512  for(size_t i = 0; i < GetMemoryTypeCount(); ++i)
    8513  VmaPostprocessCalcStatInfo(pStats->memoryType[i]);
    8514  for(size_t i = 0; i < GetMemoryHeapCount(); ++i)
    8515  VmaPostprocessCalcStatInfo(pStats->memoryHeap[i]);
    8516 }
    8517 
    8518 static const uint32_t VMA_VENDOR_ID_AMD = 4098;
    8519 
    8520 VkResult VmaAllocator_T::Defragment(
    8521  VmaAllocation* pAllocations,
    8522  size_t allocationCount,
    8523  VkBool32* pAllocationsChanged,
    8524  const VmaDefragmentationInfo* pDefragmentationInfo,
    8525  VmaDefragmentationStats* pDefragmentationStats)
    8526 {
    8527  if(pAllocationsChanged != VMA_NULL)
    8528  {
    8529  memset(pAllocationsChanged, 0, allocationCount * sizeof(*pAllocationsChanged));
    8530  }
    8531  if(pDefragmentationStats != VMA_NULL)
    8532  {
    8533  memset(pDefragmentationStats, 0, sizeof(*pDefragmentationStats));
    8534  }
    8535 
    8536  const uint32_t currentFrameIndex = m_CurrentFrameIndex.load();
    8537 
    8538  VmaMutexLock poolsLock(m_PoolsMutex, m_UseMutex);
    8539 
    8540  const size_t poolCount = m_Pools.size();
    8541 
    8542  // Dispatch pAllocations among defragmentators. Create them in BlockVectors when necessary.
    8543  for(size_t allocIndex = 0; allocIndex < allocationCount; ++allocIndex)
    8544  {
    8545  VmaAllocation hAlloc = pAllocations[allocIndex];
    8546  VMA_ASSERT(hAlloc);
    8547  const uint32_t memTypeIndex = hAlloc->GetMemoryTypeIndex();
    8548  // DedicatedAlloc cannot be defragmented.
    8549  if((hAlloc->GetType() == VmaAllocation_T::ALLOCATION_TYPE_BLOCK) &&
    8550  // Only HOST_VISIBLE memory types can be defragmented.
    8551  ((m_MemProps.memoryTypes[memTypeIndex].propertyFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) != 0) &&
    8552  // Lost allocation cannot be defragmented.
    8553  (hAlloc->GetLastUseFrameIndex() != VMA_FRAME_INDEX_LOST))
    8554  {
    8555  VmaBlockVector* pAllocBlockVector = VMA_NULL;
    8556 
    8557  const VmaPool hAllocPool = hAlloc->GetPool();
    8558  // This allocation belongs to a custom pool.
    8559  if(hAllocPool != VK_NULL_HANDLE)
    8560  {
    8561  pAllocBlockVector = &hAllocPool->GetBlockVector();
    8562  }
    8563  // This allocation belongs to the general pool.
    8564  else
    8565  {
    8566  pAllocBlockVector = m_pBlockVectors[memTypeIndex];
    8567  }
    8568 
    8569  VmaDefragmentator* const pDefragmentator = pAllocBlockVector->EnsureDefragmentator(this, currentFrameIndex);
    8570 
    8571  VkBool32* const pChanged = (pAllocationsChanged != VMA_NULL) ?
    8572  &pAllocationsChanged[allocIndex] : VMA_NULL;
    8573  pDefragmentator->AddAllocation(hAlloc, pChanged);
    8574  }
    8575  }
    8576 
    8577  VkResult result = VK_SUCCESS;
    8578 
    8579  // ======== Main processing.
    8580 
    8581  VkDeviceSize maxBytesToMove = SIZE_MAX;
    8582  uint32_t maxAllocationsToMove = UINT32_MAX;
    8583  if(pDefragmentationInfo != VMA_NULL)
    8584  {
    8585  maxBytesToMove = pDefragmentationInfo->maxBytesToMove;
    8586  maxAllocationsToMove = pDefragmentationInfo->maxAllocationsToMove;
    8587  }
    8588 
    8589  // Process standard memory.
    8590  for(uint32_t memTypeIndex = 0;
    8591  (memTypeIndex < GetMemoryTypeCount()) && (result == VK_SUCCESS);
    8592  ++memTypeIndex)
    8593  {
    8594  // Only HOST_VISIBLE memory types can be defragmented.
    8595  if((m_MemProps.memoryTypes[memTypeIndex].propertyFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) != 0)
    8596  {
    8597  result = m_pBlockVectors[memTypeIndex]->Defragment(
    8598  pDefragmentationStats,
    8599  maxBytesToMove,
    8600  maxAllocationsToMove);
    8601  }
    8602  }
    8603 
    8604  // Process custom pools.
    8605  for(size_t poolIndex = 0; (poolIndex < poolCount) && (result == VK_SUCCESS); ++poolIndex)
    8606  {
    8607  result = m_Pools[poolIndex]->GetBlockVector().Defragment(
    8608  pDefragmentationStats,
    8609  maxBytesToMove,
    8610  maxAllocationsToMove);
    8611  }
    8612 
    8613  // ======== Destroy defragmentators.
    8614 
    8615  // Process custom pools.
    8616  for(size_t poolIndex = poolCount; poolIndex--; )
    8617  {
    8618  m_Pools[poolIndex]->GetBlockVector().DestroyDefragmentator();
    8619  }
    8620 
    8621  // Process standard memory.
    8622  for(uint32_t memTypeIndex = GetMemoryTypeCount(); memTypeIndex--; )
    8623  {
    8624  if((m_MemProps.memoryTypes[memTypeIndex].propertyFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) != 0)
    8625  {
    8626  m_pBlockVectors[memTypeIndex]->DestroyDefragmentator();
    8627  }
    8628  }
    8629 
    8630  return result;
    8631 }
    8632 
    8633 void VmaAllocator_T::GetAllocationInfo(VmaAllocation hAllocation, VmaAllocationInfo* pAllocationInfo)
    8634 {
    8635  if(hAllocation->CanBecomeLost())
    8636  {
    8637  /*
    8638  Warning: This is a carefully designed algorithm.
    8639  Do not modify unless you really know what you're doing :)
    8640  */
    8641  uint32_t localCurrFrameIndex = m_CurrentFrameIndex.load();
    8642  uint32_t localLastUseFrameIndex = hAllocation->GetLastUseFrameIndex();
    8643  for(;;)
    8644  {
    8645  if(localLastUseFrameIndex == VMA_FRAME_INDEX_LOST)
    8646  {
    8647  pAllocationInfo->memoryType = UINT32_MAX;
    8648  pAllocationInfo->deviceMemory = VK_NULL_HANDLE;
    8649  pAllocationInfo->offset = 0;
    8650  pAllocationInfo->size = hAllocation->GetSize();
    8651  pAllocationInfo->pMappedData = VMA_NULL;
    8652  pAllocationInfo->pUserData = hAllocation->GetUserData();
    8653  return;
    8654  }
    8655  else if(localLastUseFrameIndex == localCurrFrameIndex)
    8656  {
    8657  pAllocationInfo->memoryType = hAllocation->GetMemoryTypeIndex();
    8658  pAllocationInfo->deviceMemory = hAllocation->GetMemory();
    8659  pAllocationInfo->offset = hAllocation->GetOffset();
    8660  pAllocationInfo->size = hAllocation->GetSize();
    8661  pAllocationInfo->pMappedData = VMA_NULL;
    8662  pAllocationInfo->pUserData = hAllocation->GetUserData();
    8663  return;
    8664  }
    8665  else // Last use time earlier than current time.
    8666  {
    8667  if(hAllocation->CompareExchangeLastUseFrameIndex(localLastUseFrameIndex, localCurrFrameIndex))
    8668  {
    8669  localLastUseFrameIndex = localCurrFrameIndex;
    8670  }
    8671  }
    8672  }
    8673  }
    8674  else
    8675  {
    8676 #if VMA_STATS_STRING_ENABLED
    8677  uint32_t localCurrFrameIndex = m_CurrentFrameIndex.load();
    8678  uint32_t localLastUseFrameIndex = hAllocation->GetLastUseFrameIndex();
    8679  for(;;)
    8680  {
    8681  VMA_ASSERT(localLastUseFrameIndex != VMA_FRAME_INDEX_LOST);
    8682  if(localLastUseFrameIndex == localCurrFrameIndex)
    8683  {
    8684  break;
    8685  }
    8686  else // Last use time earlier than current time.
    8687  {
    8688  if(hAllocation->CompareExchangeLastUseFrameIndex(localLastUseFrameIndex, localCurrFrameIndex))
    8689  {
    8690  localLastUseFrameIndex = localCurrFrameIndex;
    8691  }
    8692  }
    8693  }
    8694 #endif
    8695 
    8696  pAllocationInfo->memoryType = hAllocation->GetMemoryTypeIndex();
    8697  pAllocationInfo->deviceMemory = hAllocation->GetMemory();
    8698  pAllocationInfo->offset = hAllocation->GetOffset();
    8699  pAllocationInfo->size = hAllocation->GetSize();
    8700  pAllocationInfo->pMappedData = hAllocation->GetMappedData();
    8701  pAllocationInfo->pUserData = hAllocation->GetUserData();
    8702  }
    8703 }
    8704 
    8705 bool VmaAllocator_T::TouchAllocation(VmaAllocation hAllocation)
    8706 {
    8707  // This is a stripped-down version of VmaAllocator_T::GetAllocationInfo.
    8708  if(hAllocation->CanBecomeLost())
    8709  {
    8710  uint32_t localCurrFrameIndex = m_CurrentFrameIndex.load();
    8711  uint32_t localLastUseFrameIndex = hAllocation->GetLastUseFrameIndex();
    8712  for(;;)
    8713  {
    8714  if(localLastUseFrameIndex == VMA_FRAME_INDEX_LOST)
    8715  {
    8716  return false;
    8717  }
    8718  else if(localLastUseFrameIndex == localCurrFrameIndex)
    8719  {
    8720  return true;
    8721  }
    8722  else // Last use time earlier than current time.
    8723  {
    8724  if(hAllocation->CompareExchangeLastUseFrameIndex(localLastUseFrameIndex, localCurrFrameIndex))
    8725  {
    8726  localLastUseFrameIndex = localCurrFrameIndex;
    8727  }
    8728  }
    8729  }
    8730  }
    8731  else
    8732  {
    8733 #if VMA_STATS_STRING_ENABLED
    8734  uint32_t localCurrFrameIndex = m_CurrentFrameIndex.load();
    8735  uint32_t localLastUseFrameIndex = hAllocation->GetLastUseFrameIndex();
    8736  for(;;)
    8737  {
    8738  VMA_ASSERT(localLastUseFrameIndex != VMA_FRAME_INDEX_LOST);
    8739  if(localLastUseFrameIndex == localCurrFrameIndex)
    8740  {
    8741  break;
    8742  }
    8743  else // Last use time earlier than current time.
    8744  {
    8745  if(hAllocation->CompareExchangeLastUseFrameIndex(localLastUseFrameIndex, localCurrFrameIndex))
    8746  {
    8747  localLastUseFrameIndex = localCurrFrameIndex;
    8748  }
    8749  }
    8750  }
    8751 #endif
    8752 
    8753  return true;
    8754  }
    8755 }
    8756 
    8757 VkResult VmaAllocator_T::CreatePool(const VmaPoolCreateInfo* pCreateInfo, VmaPool* pPool)
    8758 {
    8759  VMA_DEBUG_LOG(" CreatePool: MemoryTypeIndex=%u", pCreateInfo->memoryTypeIndex);
    8760 
    8761  VmaPoolCreateInfo newCreateInfo = *pCreateInfo;
    8762 
    8763  if(newCreateInfo.maxBlockCount == 0)
    8764  {
    8765  newCreateInfo.maxBlockCount = SIZE_MAX;
    8766  }
    8767  if(newCreateInfo.blockSize == 0)
    8768  {
    8769  newCreateInfo.blockSize = CalcPreferredBlockSize(newCreateInfo.memoryTypeIndex);
    8770  }
    8771 
    8772  *pPool = vma_new(this, VmaPool_T)(this, newCreateInfo);
    8773 
    8774  VkResult res = (*pPool)->m_BlockVector.CreateMinBlocks();
    8775  if(res != VK_SUCCESS)
    8776  {
    8777  vma_delete(this, *pPool);
    8778  *pPool = VMA_NULL;
    8779  return res;
    8780  }
    8781 
    8782  // Add to m_Pools.
    8783  {
    8784  VmaMutexLock lock(m_PoolsMutex, m_UseMutex);
    8785  (*pPool)->SetId(m_NextPoolId++);
    8786  VmaVectorInsertSorted<VmaPointerLess>(m_Pools, *pPool);
    8787  }
    8788 
    8789  return VK_SUCCESS;
    8790 }
    8791 
    8792 void VmaAllocator_T::DestroyPool(VmaPool pool)
    8793 {
    8794  // Remove from m_Pools.
    8795  {
    8796  VmaMutexLock lock(m_PoolsMutex, m_UseMutex);
    8797  bool success = VmaVectorRemoveSorted<VmaPointerLess>(m_Pools, pool);
    8798  VMA_ASSERT(success && "Pool not found in Allocator.");
    8799  }
    8800 
    8801  vma_delete(this, pool);
    8802 }
    8803 
    8804 void VmaAllocator_T::GetPoolStats(VmaPool pool, VmaPoolStats* pPoolStats)
    8805 {
    8806  pool->m_BlockVector.GetPoolStats(pPoolStats);
    8807 }
    8808 
    8809 void VmaAllocator_T::SetCurrentFrameIndex(uint32_t frameIndex)
    8810 {
    8811  m_CurrentFrameIndex.store(frameIndex);
    8812 }
    8813 
    8814 void VmaAllocator_T::MakePoolAllocationsLost(
    8815  VmaPool hPool,
    8816  size_t* pLostAllocationCount)
    8817 {
    8818  hPool->m_BlockVector.MakePoolAllocationsLost(
    8819  m_CurrentFrameIndex.load(),
    8820  pLostAllocationCount);
    8821 }
    8822 
    8823 VkResult VmaAllocator_T::CheckPoolCorruption(VmaPool hPool)
    8824 {
    8825  return hPool->m_BlockVector.CheckCorruption();
    8826 }
    8827 
    8828 VkResult VmaAllocator_T::CheckCorruption(uint32_t memoryTypeBits)
    8829 {
    8830  VkResult finalRes = VK_ERROR_FEATURE_NOT_PRESENT;
    8831 
    8832  // Process default pools.
    8833  for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
    8834  {
    8835  if(((1u << memTypeIndex) & memoryTypeBits) != 0)
    8836  {
    8837  VmaBlockVector* const pBlockVector = m_pBlockVectors[memTypeIndex];
    8838  VMA_ASSERT(pBlockVector);
    8839  VkResult localRes = pBlockVector->CheckCorruption();
    8840  switch(localRes)
    8841  {
    8842  case VK_ERROR_FEATURE_NOT_PRESENT:
    8843  break;
    8844  case VK_SUCCESS:
    8845  finalRes = VK_SUCCESS;
    8846  break;
    8847  default:
    8848  return localRes;
    8849  }
    8850  }
    8851  }
    8852 
    8853  // Process custom pools.
    8854  {
    8855  VmaMutexLock lock(m_PoolsMutex, m_UseMutex);
    8856  for(size_t poolIndex = 0, poolCount = m_Pools.size(); poolIndex < poolCount; ++poolIndex)
    8857  {
    8858  if(((1u << m_Pools[poolIndex]->GetBlockVector().GetMemoryTypeIndex()) & memoryTypeBits) != 0)
    8859  {
    8860  VkResult localRes = m_Pools[poolIndex]->GetBlockVector().CheckCorruption();
    8861  switch(localRes)
    8862  {
    8863  case VK_ERROR_FEATURE_NOT_PRESENT:
    8864  break;
    8865  case VK_SUCCESS:
    8866  finalRes = VK_SUCCESS;
    8867  break;
    8868  default:
    8869  return localRes;
    8870  }
    8871  }
    8872  }
    8873  }
    8874 
    8875  return finalRes;
    8876 }
    8877 
    8878 void VmaAllocator_T::CreateLostAllocation(VmaAllocation* pAllocation)
    8879 {
    8880  *pAllocation = vma_new(this, VmaAllocation_T)(VMA_FRAME_INDEX_LOST, false);
    8881  (*pAllocation)->InitLost();
    8882 }
    8883 
    8884 VkResult VmaAllocator_T::AllocateVulkanMemory(const VkMemoryAllocateInfo* pAllocateInfo, VkDeviceMemory* pMemory)
    8885 {
    8886  const uint32_t heapIndex = MemoryTypeIndexToHeapIndex(pAllocateInfo->memoryTypeIndex);
    8887 
    8888  VkResult res;
    8889  if(m_HeapSizeLimit[heapIndex] != VK_WHOLE_SIZE)
    8890  {
    8891  VmaMutexLock lock(m_HeapSizeLimitMutex, m_UseMutex);
    8892  if(m_HeapSizeLimit[heapIndex] >= pAllocateInfo->allocationSize)
    8893  {
    8894  res = (*m_VulkanFunctions.vkAllocateMemory)(m_hDevice, pAllocateInfo, GetAllocationCallbacks(), pMemory);
    8895  if(res == VK_SUCCESS)
    8896  {
    8897  m_HeapSizeLimit[heapIndex] -= pAllocateInfo->allocationSize;
    8898  }
    8899  }
    8900  else
    8901  {
    8902  res = VK_ERROR_OUT_OF_DEVICE_MEMORY;
    8903  }
    8904  }
    8905  else
    8906  {
    8907  res = (*m_VulkanFunctions.vkAllocateMemory)(m_hDevice, pAllocateInfo, GetAllocationCallbacks(), pMemory);
    8908  }
    8909 
    8910  if(res == VK_SUCCESS && m_DeviceMemoryCallbacks.pfnAllocate != VMA_NULL)
    8911  {
    8912  (*m_DeviceMemoryCallbacks.pfnAllocate)(this, pAllocateInfo->memoryTypeIndex, *pMemory, pAllocateInfo->allocationSize);
    8913  }
    8914 
    8915  return res;
    8916 }
    8917 
    8918 void VmaAllocator_T::FreeVulkanMemory(uint32_t memoryType, VkDeviceSize size, VkDeviceMemory hMemory)
    8919 {
    8920  if(m_DeviceMemoryCallbacks.pfnFree != VMA_NULL)
    8921  {
    8922  (*m_DeviceMemoryCallbacks.pfnFree)(this, memoryType, hMemory, size);
    8923  }
    8924 
    8925  (*m_VulkanFunctions.vkFreeMemory)(m_hDevice, hMemory, GetAllocationCallbacks());
    8926 
    8927  const uint32_t heapIndex = MemoryTypeIndexToHeapIndex(memoryType);
    8928  if(m_HeapSizeLimit[heapIndex] != VK_WHOLE_SIZE)
    8929  {
    8930  VmaMutexLock lock(m_HeapSizeLimitMutex, m_UseMutex);
    8931  m_HeapSizeLimit[heapIndex] += size;
    8932  }
    8933 }
    8934 
    8935 VkResult VmaAllocator_T::Map(VmaAllocation hAllocation, void** ppData)
    8936 {
    8937  if(hAllocation->CanBecomeLost())
    8938  {
    8939  return VK_ERROR_MEMORY_MAP_FAILED;
    8940  }
    8941 
    8942  switch(hAllocation->GetType())
    8943  {
    8944  case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
    8945  {
    8946  VmaDeviceMemoryBlock* const pBlock = hAllocation->GetBlock();
    8947  char *pBytes = VMA_NULL;
    8948  VkResult res = pBlock->Map(this, 1, (void**)&pBytes);
    8949  if(res == VK_SUCCESS)
    8950  {
    8951  *ppData = pBytes + (ptrdiff_t)hAllocation->GetOffset();
    8952  hAllocation->BlockAllocMap();
    8953  }
    8954  return res;
    8955  }
    8956  case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
    8957  return hAllocation->DedicatedAllocMap(this, ppData);
    8958  default:
    8959  VMA_ASSERT(0);
    8960  return VK_ERROR_MEMORY_MAP_FAILED;
    8961  }
    8962 }
    8963 
    8964 void VmaAllocator_T::Unmap(VmaAllocation hAllocation)
    8965 {
    8966  switch(hAllocation->GetType())
    8967  {
    8968  case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
    8969  {
    8970  VmaDeviceMemoryBlock* const pBlock = hAllocation->GetBlock();
    8971  hAllocation->BlockAllocUnmap();
    8972  pBlock->Unmap(this, 1);
    8973  }
    8974  break;
    8975  case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
    8976  hAllocation->DedicatedAllocUnmap(this);
    8977  break;
    8978  default:
    8979  VMA_ASSERT(0);
    8980  }
    8981 }
    8982 
    8983 VkResult VmaAllocator_T::BindBufferMemory(VmaAllocation hAllocation, VkBuffer hBuffer)
    8984 {
    8985  VkResult res = VK_SUCCESS;
    8986  switch(hAllocation->GetType())
    8987  {
    8988  case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
    8989  res = GetVulkanFunctions().vkBindBufferMemory(
    8990  m_hDevice,
    8991  hBuffer,
    8992  hAllocation->GetMemory(),
    8993  0); //memoryOffset
    8994  break;
    8995  case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
    8996  {
    8997  VmaDeviceMemoryBlock* pBlock = hAllocation->GetBlock();
    8998  VMA_ASSERT(pBlock && "Binding buffer to allocation that doesn't belong to any block. Is the allocation lost?");
    8999  res = pBlock->BindBufferMemory(this, hAllocation, hBuffer);
    9000  break;
    9001  }
    9002  default:
    9003  VMA_ASSERT(0);
    9004  }
    9005  return res;
    9006 }
    9007 
    9008 VkResult VmaAllocator_T::BindImageMemory(VmaAllocation hAllocation, VkImage hImage)
    9009 {
    9010  VkResult res = VK_SUCCESS;
    9011  switch(hAllocation->GetType())
    9012  {
    9013  case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
    9014  res = GetVulkanFunctions().vkBindImageMemory(
    9015  m_hDevice,
    9016  hImage,
    9017  hAllocation->GetMemory(),
    9018  0); //memoryOffset
    9019  break;
    9020  case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
    9021  {
    9022  VmaDeviceMemoryBlock* pBlock = hAllocation->GetBlock();
    9023  VMA_ASSERT(pBlock && "Binding image to allocation that doesn't belong to any block. Is the allocation lost?");
    9024  res = pBlock->BindImageMemory(this, hAllocation, hImage);
    9025  break;
    9026  }
    9027  default:
    9028  VMA_ASSERT(0);
    9029  }
    9030  return res;
    9031 }
    9032 
    9033 void VmaAllocator_T::FlushOrInvalidateAllocation(
    9034  VmaAllocation hAllocation,
    9035  VkDeviceSize offset, VkDeviceSize size,
    9036  VMA_CACHE_OPERATION op)
    9037 {
    9038  const uint32_t memTypeIndex = hAllocation->GetMemoryTypeIndex();
    9039  if(size > 0 && IsMemoryTypeNonCoherent(memTypeIndex))
    9040  {
    9041  const VkDeviceSize allocationSize = hAllocation->GetSize();
    9042  VMA_ASSERT(offset <= allocationSize);
    9043 
    9044  const VkDeviceSize nonCoherentAtomSize = m_PhysicalDeviceProperties.limits.nonCoherentAtomSize;
    9045 
    9046  VkMappedMemoryRange memRange = { VK_STRUCTURE_TYPE_MAPPED_MEMORY_RANGE };
    9047  memRange.memory = hAllocation->GetMemory();
    9048 
    9049  switch(hAllocation->GetType())
    9050  {
    9051  case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
    9052  memRange.offset = VmaAlignDown(offset, nonCoherentAtomSize);
    9053  if(size == VK_WHOLE_SIZE)
    9054  {
    9055  memRange.size = allocationSize - memRange.offset;
    9056  }
    9057  else
    9058  {
    9059  VMA_ASSERT(offset + size <= allocationSize);
    9060  memRange.size = VMA_MIN(
    9061  VmaAlignUp(size + (offset - memRange.offset), nonCoherentAtomSize),
    9062  allocationSize - memRange.offset);
    9063  }
    9064  break;
    9065 
    9066  case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
    9067  {
    9068  // 1. Still within this allocation.
    9069  memRange.offset = VmaAlignDown(offset, nonCoherentAtomSize);
    9070  if(size == VK_WHOLE_SIZE)
    9071  {
    9072  size = allocationSize - offset;
    9073  }
    9074  else
    9075  {
    9076  VMA_ASSERT(offset + size <= allocationSize);
    9077  }
    9078  memRange.size = VmaAlignUp(size + (offset - memRange.offset), nonCoherentAtomSize);
    9079 
    9080  // 2. Adjust to whole block.
    9081  const VkDeviceSize allocationOffset = hAllocation->GetOffset();
    9082  VMA_ASSERT(allocationOffset % nonCoherentAtomSize == 0);
    9083  const VkDeviceSize blockSize = hAllocation->GetBlock()->m_Metadata.GetSize();
    9084  memRange.offset += allocationOffset;
    9085  memRange.size = VMA_MIN(memRange.size, blockSize - memRange.offset);
    9086 
    9087  break;
    9088  }
    9089 
    9090  default:
    9091  VMA_ASSERT(0);
    9092  }
    9093 
    9094  switch(op)
    9095  {
    9096  case VMA_CACHE_FLUSH:
    9097  (*GetVulkanFunctions().vkFlushMappedMemoryRanges)(m_hDevice, 1, &memRange);
    9098  break;
    9099  case VMA_CACHE_INVALIDATE:
    9100  (*GetVulkanFunctions().vkInvalidateMappedMemoryRanges)(m_hDevice, 1, &memRange);
    9101  break;
    9102  default:
    9103  VMA_ASSERT(0);
    9104  }
    9105  }
    9106  // else: Just ignore this call.
    9107 }
    9108 
    9109 void VmaAllocator_T::FreeDedicatedMemory(VmaAllocation allocation)
    9110 {
    9111  VMA_ASSERT(allocation && allocation->GetType() == VmaAllocation_T::ALLOCATION_TYPE_DEDICATED);
    9112 
    9113  const uint32_t memTypeIndex = allocation->GetMemoryTypeIndex();
    9114  {
    9115  VmaMutexLock lock(m_DedicatedAllocationsMutex[memTypeIndex], m_UseMutex);
    9116  AllocationVectorType* const pDedicatedAllocations = m_pDedicatedAllocations[memTypeIndex];
    9117  VMA_ASSERT(pDedicatedAllocations);
    9118  bool success = VmaVectorRemoveSorted<VmaPointerLess>(*pDedicatedAllocations, allocation);
    9119  VMA_ASSERT(success);
    9120  }
    9121 
    9122  VkDeviceMemory hMemory = allocation->GetMemory();
    9123 
    9124  if(allocation->GetMappedData() != VMA_NULL)
    9125  {
    9126  (*m_VulkanFunctions.vkUnmapMemory)(m_hDevice, hMemory);
    9127  }
    9128 
    9129  FreeVulkanMemory(memTypeIndex, allocation->GetSize(), hMemory);
    9130 
    9131  VMA_DEBUG_LOG(" Freed DedicatedMemory MemoryTypeIndex=%u", memTypeIndex);
    9132 }
    9133 
    9134 #if VMA_STATS_STRING_ENABLED
    9135 
    9136 void VmaAllocator_T::PrintDetailedMap(VmaJsonWriter& json)
    9137 {
    9138  bool dedicatedAllocationsStarted = false;
    9139  for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
    9140  {
    9141  VmaMutexLock dedicatedAllocationsLock(m_DedicatedAllocationsMutex[memTypeIndex], m_UseMutex);
    9142  AllocationVectorType* const pDedicatedAllocVector = m_pDedicatedAllocations[memTypeIndex];
    9143  VMA_ASSERT(pDedicatedAllocVector);
    9144  if(pDedicatedAllocVector->empty() == false)
    9145  {
    9146  if(dedicatedAllocationsStarted == false)
    9147  {
    9148  dedicatedAllocationsStarted = true;
    9149  json.WriteString("DedicatedAllocations");
    9150  json.BeginObject();
    9151  }
    9152 
    9153  json.BeginString("Type ");
    9154  json.ContinueString(memTypeIndex);
    9155  json.EndString();
    9156 
    9157  json.BeginArray();
    9158 
    9159  for(size_t i = 0; i < pDedicatedAllocVector->size(); ++i)
    9160  {
    9161  json.BeginObject(true);
    9162  const VmaAllocation hAlloc = (*pDedicatedAllocVector)[i];
    9163  hAlloc->PrintParameters(json);
    9164  json.EndObject();
    9165  }
    9166 
    9167  json.EndArray();
    9168  }
    9169  }
    9170  if(dedicatedAllocationsStarted)
    9171  {
    9172  json.EndObject();
    9173  }
    9174 
    9175  {
    9176  bool allocationsStarted = false;
    9177  for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
    9178  {
    9179  if(m_pBlockVectors[memTypeIndex]->IsEmpty() == false)
    9180  {
    9181  if(allocationsStarted == false)
    9182  {
    9183  allocationsStarted = true;
    9184  json.WriteString("DefaultPools");
    9185  json.BeginObject();
    9186  }
    9187 
    9188  json.BeginString("Type ");
    9189  json.ContinueString(memTypeIndex);
    9190  json.EndString();
    9191 
    9192  m_pBlockVectors[memTypeIndex]->PrintDetailedMap(json);
    9193  }
    9194  }
    9195  if(allocationsStarted)
    9196  {
    9197  json.EndObject();
    9198  }
    9199  }
    9200 
    9201  {
    9202  VmaMutexLock lock(m_PoolsMutex, m_UseMutex);
    9203  const size_t poolCount = m_Pools.size();
    9204  if(poolCount > 0)
    9205  {
    9206  json.WriteString("Pools");
    9207  json.BeginObject();
    9208  for(size_t poolIndex = 0; poolIndex < poolCount; ++poolIndex)
    9209  {
    9210  json.BeginString();
    9211  json.ContinueString(m_Pools[poolIndex]->GetId());
    9212  json.EndString();
    9213 
    9214  m_Pools[poolIndex]->m_BlockVector.PrintDetailedMap(json);
    9215  }
    9216  json.EndObject();
    9217  }
    9218  }
    9219 }
    9220 
    9221 #endif // #if VMA_STATS_STRING_ENABLED
    9222 
    9223 static VkResult AllocateMemoryForImage(
    9224  VmaAllocator allocator,
    9225  VkImage image,
    9226  const VmaAllocationCreateInfo* pAllocationCreateInfo,
    9227  VmaSuballocationType suballocType,
    9228  VmaAllocation* pAllocation)
    9229 {
    9230  VMA_ASSERT(allocator && (image != VK_NULL_HANDLE) && pAllocationCreateInfo && pAllocation);
    9231 
    9232  VkMemoryRequirements vkMemReq = {};
    9233  bool requiresDedicatedAllocation = false;
    9234  bool prefersDedicatedAllocation = false;
    9235  allocator->GetImageMemoryRequirements(image, vkMemReq,
    9236  requiresDedicatedAllocation, prefersDedicatedAllocation);
    9237 
    9238  return allocator->AllocateMemory(
    9239  vkMemReq,
    9240  requiresDedicatedAllocation,
    9241  prefersDedicatedAllocation,
    9242  VK_NULL_HANDLE, // dedicatedBuffer
    9243  image, // dedicatedImage
    9244  *pAllocationCreateInfo,
    9245  suballocType,
    9246  pAllocation);
    9247 }
    9248 
    9249 ////////////////////////////////////////////////////////////////////////////////
    9250 // Public interface
    9251 
    9252 VkResult vmaCreateAllocator(
    9253  const VmaAllocatorCreateInfo* pCreateInfo,
    9254  VmaAllocator* pAllocator)
    9255 {
    9256  VMA_ASSERT(pCreateInfo && pAllocator);
    9257  VMA_DEBUG_LOG("vmaCreateAllocator");
    9258  *pAllocator = vma_new(pCreateInfo->pAllocationCallbacks, VmaAllocator_T)(pCreateInfo);
    9259  return VK_SUCCESS;
    9260 }
    9261 
    9262 void vmaDestroyAllocator(
    9263  VmaAllocator allocator)
    9264 {
    9265  if(allocator != VK_NULL_HANDLE)
    9266  {
    9267  VMA_DEBUG_LOG("vmaDestroyAllocator");
    9268  VkAllocationCallbacks allocationCallbacks = allocator->m_AllocationCallbacks;
    9269  vma_delete(&allocationCallbacks, allocator);
    9270  }
    9271 }
    9272 
    9273 void vmaGetPhysicalDeviceProperties(
    9274  VmaAllocator allocator,
    9275  const VkPhysicalDeviceProperties **ppPhysicalDeviceProperties)
    9276 {
    9277  VMA_ASSERT(allocator && ppPhysicalDeviceProperties);
    9278  *ppPhysicalDeviceProperties = &allocator->m_PhysicalDeviceProperties;
    9279 }
    9280 
    9281 void vmaGetMemoryProperties(
    9282  VmaAllocator allocator,
    9283  const VkPhysicalDeviceMemoryProperties** ppPhysicalDeviceMemoryProperties)
    9284 {
    9285  VMA_ASSERT(allocator && ppPhysicalDeviceMemoryProperties);
    9286  *ppPhysicalDeviceMemoryProperties = &allocator->m_MemProps;
    9287 }
    9288 
    9289 void vmaGetMemoryTypeProperties(
    9290  VmaAllocator allocator,
    9291  uint32_t memoryTypeIndex,
    9292  VkMemoryPropertyFlags* pFlags)
    9293 {
    9294  VMA_ASSERT(allocator && pFlags);
    9295  VMA_ASSERT(memoryTypeIndex < allocator->GetMemoryTypeCount());
    9296  *pFlags = allocator->m_MemProps.memoryTypes[memoryTypeIndex].propertyFlags;
    9297 }
    9298 
    9299 void vmaSetCurrentFrameIndex(
    9300  VmaAllocator allocator,
    9301  uint32_t frameIndex)
    9302 {
    9303  VMA_ASSERT(allocator);
    9304  VMA_ASSERT(frameIndex != VMA_FRAME_INDEX_LOST);
    9305 
    9306  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9307 
    9308  allocator->SetCurrentFrameIndex(frameIndex);
    9309 }
    9310 
    9311 void vmaCalculateStats(
    9312  VmaAllocator allocator,
    9313  VmaStats* pStats)
    9314 {
    9315  VMA_ASSERT(allocator && pStats);
    9316  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9317  allocator->CalculateStats(pStats);
    9318 }
    9319 
    9320 #if VMA_STATS_STRING_ENABLED
    9321 
    9322 void vmaBuildStatsString(
    9323  VmaAllocator allocator,
    9324  char** ppStatsString,
    9325  VkBool32 detailedMap)
    9326 {
    9327  VMA_ASSERT(allocator && ppStatsString);
    9328  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9329 
    9330  VmaStringBuilder sb(allocator);
    9331  {
    9332  VmaJsonWriter json(allocator->GetAllocationCallbacks(), sb);
    9333  json.BeginObject();
    9334 
    9335  VmaStats stats;
    9336  allocator->CalculateStats(&stats);
    9337 
    9338  json.WriteString("Total");
    9339  VmaPrintStatInfo(json, stats.total);
    9340 
    9341  for(uint32_t heapIndex = 0; heapIndex < allocator->GetMemoryHeapCount(); ++heapIndex)
    9342  {
    9343  json.BeginString("Heap ");
    9344  json.ContinueString(heapIndex);
    9345  json.EndString();
    9346  json.BeginObject();
    9347 
    9348  json.WriteString("Size");
    9349  json.WriteNumber(allocator->m_MemProps.memoryHeaps[heapIndex].size);
    9350 
    9351  json.WriteString("Flags");
    9352  json.BeginArray(true);
    9353  if((allocator->m_MemProps.memoryHeaps[heapIndex].flags & VK_MEMORY_HEAP_DEVICE_LOCAL_BIT) != 0)
    9354  {
    9355  json.WriteString("DEVICE_LOCAL");
    9356  }
    9357  json.EndArray();
    9358 
    9359  if(stats.memoryHeap[heapIndex].blockCount > 0)
    9360  {
    9361  json.WriteString("Stats");
    9362  VmaPrintStatInfo(json, stats.memoryHeap[heapIndex]);
    9363  }
    9364 
    9365  for(uint32_t typeIndex = 0; typeIndex < allocator->GetMemoryTypeCount(); ++typeIndex)
    9366  {
    9367  if(allocator->MemoryTypeIndexToHeapIndex(typeIndex) == heapIndex)
    9368  {
    9369  json.BeginString("Type ");
    9370  json.ContinueString(typeIndex);
    9371  json.EndString();
    9372 
    9373  json.BeginObject();
    9374 
    9375  json.WriteString("Flags");
    9376  json.BeginArray(true);
    9377  VkMemoryPropertyFlags flags = allocator->m_MemProps.memoryTypes[typeIndex].propertyFlags;
    9378  if((flags & VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT) != 0)
    9379  {
    9380  json.WriteString("DEVICE_LOCAL");
    9381  }
    9382  if((flags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) != 0)
    9383  {
    9384  json.WriteString("HOST_VISIBLE");
    9385  }
    9386  if((flags & VK_MEMORY_PROPERTY_HOST_COHERENT_BIT) != 0)
    9387  {
    9388  json.WriteString("HOST_COHERENT");
    9389  }
    9390  if((flags & VK_MEMORY_PROPERTY_HOST_CACHED_BIT) != 0)
    9391  {
    9392  json.WriteString("HOST_CACHED");
    9393  }
    9394  if((flags & VK_MEMORY_PROPERTY_LAZILY_ALLOCATED_BIT) != 0)
    9395  {
    9396  json.WriteString("LAZILY_ALLOCATED");
    9397  }
    9398  json.EndArray();
    9399 
    9400  if(stats.memoryType[typeIndex].blockCount > 0)
    9401  {
    9402  json.WriteString("Stats");
    9403  VmaPrintStatInfo(json, stats.memoryType[typeIndex]);
    9404  }
    9405 
    9406  json.EndObject();
    9407  }
    9408  }
    9409 
    9410  json.EndObject();
    9411  }
    9412  if(detailedMap == VK_TRUE)
    9413  {
    9414  allocator->PrintDetailedMap(json);
    9415  }
    9416 
    9417  json.EndObject();
    9418  }
    9419 
    9420  const size_t len = sb.GetLength();
    9421  char* const pChars = vma_new_array(allocator, char, len + 1);
    9422  if(len > 0)
    9423  {
    9424  memcpy(pChars, sb.GetData(), len);
    9425  }
    9426  pChars[len] = '\0';
    9427  *ppStatsString = pChars;
    9428 }
    9429 
    9430 void vmaFreeStatsString(
    9431  VmaAllocator allocator,
    9432  char* pStatsString)
    9433 {
    9434  if(pStatsString != VMA_NULL)
    9435  {
    9436  VMA_ASSERT(allocator);
    9437  size_t len = strlen(pStatsString);
    9438  vma_delete_array(allocator, pStatsString, len + 1);
    9439  }
    9440 }
    9441 
    9442 #endif // #if VMA_STATS_STRING_ENABLED
    9443 
    9444 /*
    9445 This function is not protected by any mutex because it just reads immutable data.
    9446 */
    9447 VkResult vmaFindMemoryTypeIndex(
    9448  VmaAllocator allocator,
    9449  uint32_t memoryTypeBits,
    9450  const VmaAllocationCreateInfo* pAllocationCreateInfo,
    9451  uint32_t* pMemoryTypeIndex)
    9452 {
    9453  VMA_ASSERT(allocator != VK_NULL_HANDLE);
    9454  VMA_ASSERT(pAllocationCreateInfo != VMA_NULL);
    9455  VMA_ASSERT(pMemoryTypeIndex != VMA_NULL);
    9456 
    9457  if(pAllocationCreateInfo->memoryTypeBits != 0)
    9458  {
    9459  memoryTypeBits &= pAllocationCreateInfo->memoryTypeBits;
    9460  }
    9461 
    9462  uint32_t requiredFlags = pAllocationCreateInfo->requiredFlags;
    9463  uint32_t preferredFlags = pAllocationCreateInfo->preferredFlags;
    9464 
    9465  const bool mapped = (pAllocationCreateInfo->flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0;
    9466  if(mapped)
    9467  {
    9468  preferredFlags |= VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
    9469  }
    9470 
    9471  // Convert usage to requiredFlags and preferredFlags.
    9472  switch(pAllocationCreateInfo->usage)
    9473  {
    9474  case VMA_MEMORY_USAGE_UNKNOWN:
    9475  break;
    9476  case VMA_MEMORY_USAGE_GPU_ONLY:
    9477  if(!allocator->IsIntegratedGpu() || (preferredFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) == 0)
    9478  {
    9479  preferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
    9480  }
    9481  break;
    9482  case VMA_MEMORY_USAGE_CPU_ONLY:
    9483  requiredFlags |= VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT | VK_MEMORY_PROPERTY_HOST_COHERENT_BIT;
    9484  break;
    9485  case VMA_MEMORY_USAGE_CPU_TO_GPU:
    9486  requiredFlags |= VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
    9487  if(!allocator->IsIntegratedGpu() || (preferredFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) == 0)
    9488  {
    9489  preferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
    9490  }
    9491  break;
    9492  case VMA_MEMORY_USAGE_GPU_TO_CPU:
    9493  requiredFlags |= VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
    9494  preferredFlags |= VK_MEMORY_PROPERTY_HOST_COHERENT_BIT | VK_MEMORY_PROPERTY_HOST_CACHED_BIT;
    9495  break;
    9496  default:
    9497  break;
    9498  }
    9499 
    9500  *pMemoryTypeIndex = UINT32_MAX;
    9501  uint32_t minCost = UINT32_MAX;
    9502  for(uint32_t memTypeIndex = 0, memTypeBit = 1;
    9503  memTypeIndex < allocator->GetMemoryTypeCount();
    9504  ++memTypeIndex, memTypeBit <<= 1)
    9505  {
    9506  // This memory type is acceptable according to memoryTypeBits bitmask.
    9507  if((memTypeBit & memoryTypeBits) != 0)
    9508  {
    9509  const VkMemoryPropertyFlags currFlags =
    9510  allocator->m_MemProps.memoryTypes[memTypeIndex].propertyFlags;
    9511  // This memory type contains requiredFlags.
    9512  if((requiredFlags & ~currFlags) == 0)
    9513  {
    9514  // Calculate cost as number of bits from preferredFlags not present in this memory type.
    9515  uint32_t currCost = VmaCountBitsSet(preferredFlags & ~currFlags);
    9516  // Remember memory type with lowest cost.
    9517  if(currCost < minCost)
    9518  {
    9519  *pMemoryTypeIndex = memTypeIndex;
    9520  if(currCost == 0)
    9521  {
    9522  return VK_SUCCESS;
    9523  }
    9524  minCost = currCost;
    9525  }
    9526  }
    9527  }
    9528  }
    9529  return (*pMemoryTypeIndex != UINT32_MAX) ? VK_SUCCESS : VK_ERROR_FEATURE_NOT_PRESENT;
    9530 }
    9531 
    9532 VkResult vmaFindMemoryTypeIndexForBufferInfo(
    9533  VmaAllocator allocator,
    9534  const VkBufferCreateInfo* pBufferCreateInfo,
    9535  const VmaAllocationCreateInfo* pAllocationCreateInfo,
    9536  uint32_t* pMemoryTypeIndex)
    9537 {
    9538  VMA_ASSERT(allocator != VK_NULL_HANDLE);
    9539  VMA_ASSERT(pBufferCreateInfo != VMA_NULL);
    9540  VMA_ASSERT(pAllocationCreateInfo != VMA_NULL);
    9541  VMA_ASSERT(pMemoryTypeIndex != VMA_NULL);
    9542 
    9543  const VkDevice hDev = allocator->m_hDevice;
    9544  VkBuffer hBuffer = VK_NULL_HANDLE;
    9545  VkResult res = allocator->GetVulkanFunctions().vkCreateBuffer(
    9546  hDev, pBufferCreateInfo, allocator->GetAllocationCallbacks(), &hBuffer);
    9547  if(res == VK_SUCCESS)
    9548  {
    9549  VkMemoryRequirements memReq = {};
    9550  allocator->GetVulkanFunctions().vkGetBufferMemoryRequirements(
    9551  hDev, hBuffer, &memReq);
    9552 
    9553  res = vmaFindMemoryTypeIndex(
    9554  allocator,
    9555  memReq.memoryTypeBits,
    9556  pAllocationCreateInfo,
    9557  pMemoryTypeIndex);
    9558 
    9559  allocator->GetVulkanFunctions().vkDestroyBuffer(
    9560  hDev, hBuffer, allocator->GetAllocationCallbacks());
    9561  }
    9562  return res;
    9563 }
    9564 
    9565 VkResult vmaFindMemoryTypeIndexForImageInfo(
    9566  VmaAllocator allocator,
    9567  const VkImageCreateInfo* pImageCreateInfo,
    9568  const VmaAllocationCreateInfo* pAllocationCreateInfo,
    9569  uint32_t* pMemoryTypeIndex)
    9570 {
    9571  VMA_ASSERT(allocator != VK_NULL_HANDLE);
    9572  VMA_ASSERT(pImageCreateInfo != VMA_NULL);
    9573  VMA_ASSERT(pAllocationCreateInfo != VMA_NULL);
    9574  VMA_ASSERT(pMemoryTypeIndex != VMA_NULL);
    9575 
    9576  const VkDevice hDev = allocator->m_hDevice;
    9577  VkImage hImage = VK_NULL_HANDLE;
    9578  VkResult res = allocator->GetVulkanFunctions().vkCreateImage(
    9579  hDev, pImageCreateInfo, allocator->GetAllocationCallbacks(), &hImage);
    9580  if(res == VK_SUCCESS)
    9581  {
    9582  VkMemoryRequirements memReq = {};
    9583  allocator->GetVulkanFunctions().vkGetImageMemoryRequirements(
    9584  hDev, hImage, &memReq);
    9585 
    9586  res = vmaFindMemoryTypeIndex(
    9587  allocator,
    9588  memReq.memoryTypeBits,
    9589  pAllocationCreateInfo,
    9590  pMemoryTypeIndex);
    9591 
    9592  allocator->GetVulkanFunctions().vkDestroyImage(
    9593  hDev, hImage, allocator->GetAllocationCallbacks());
    9594  }
    9595  return res;
    9596 }
    9597 
    9598 VkResult vmaCreatePool(
    9599  VmaAllocator allocator,
    9600  const VmaPoolCreateInfo* pCreateInfo,
    9601  VmaPool* pPool)
    9602 {
    9603  VMA_ASSERT(allocator && pCreateInfo && pPool);
    9604 
    9605  VMA_DEBUG_LOG("vmaCreatePool");
    9606 
    9607  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9608 
    9609  return allocator->CreatePool(pCreateInfo, pPool);
    9610 }
    9611 
    9612 void vmaDestroyPool(
    9613  VmaAllocator allocator,
    9614  VmaPool pool)
    9615 {
    9616  VMA_ASSERT(allocator);
    9617 
    9618  if(pool == VK_NULL_HANDLE)
    9619  {
    9620  return;
    9621  }
    9622 
    9623  VMA_DEBUG_LOG("vmaDestroyPool");
    9624 
    9625  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9626 
    9627  allocator->DestroyPool(pool);
    9628 }
    9629 
    9630 void vmaGetPoolStats(
    9631  VmaAllocator allocator,
    9632  VmaPool pool,
    9633  VmaPoolStats* pPoolStats)
    9634 {
    9635  VMA_ASSERT(allocator && pool && pPoolStats);
    9636 
    9637  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9638 
    9639  allocator->GetPoolStats(pool, pPoolStats);
    9640 }
    9641 
    9642 void vmaMakePoolAllocationsLost(
    9643  VmaAllocator allocator,
    9644  VmaPool pool,
    9645  size_t* pLostAllocationCount)
    9646 {
    9647  VMA_ASSERT(allocator && pool);
    9648 
    9649  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9650 
    9651  allocator->MakePoolAllocationsLost(pool, pLostAllocationCount);
    9652 }
    9653 
    9654 VkResult vmaCheckPoolCorruption(VmaAllocator allocator, VmaPool pool)
    9655 {
    9656  VMA_ASSERT(allocator && pool);
    9657 
    9658  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9659 
    9660  VMA_DEBUG_LOG("vmaCheckPoolCorruption");
    9661 
    9662  return allocator->CheckPoolCorruption(pool);
    9663 }
    9664 
    9665 VkResult vmaAllocateMemory(
    9666  VmaAllocator allocator,
    9667  const VkMemoryRequirements* pVkMemoryRequirements,
    9668  const VmaAllocationCreateInfo* pCreateInfo,
    9669  VmaAllocation* pAllocation,
    9670  VmaAllocationInfo* pAllocationInfo)
    9671 {
    9672  VMA_ASSERT(allocator && pVkMemoryRequirements && pCreateInfo && pAllocation);
    9673 
    9674  VMA_DEBUG_LOG("vmaAllocateMemory");
    9675 
    9676  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9677 
    9678  VkResult result = allocator->AllocateMemory(
    9679  *pVkMemoryRequirements,
    9680  false, // requiresDedicatedAllocation
    9681  false, // prefersDedicatedAllocation
    9682  VK_NULL_HANDLE, // dedicatedBuffer
    9683  VK_NULL_HANDLE, // dedicatedImage
    9684  *pCreateInfo,
    9685  VMA_SUBALLOCATION_TYPE_UNKNOWN,
    9686  pAllocation);
    9687 
    9688  if(pAllocationInfo && result == VK_SUCCESS)
    9689  {
    9690  allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
    9691  }
    9692 
    9693  return result;
    9694 }
    9695 
    9696 VkResult vmaAllocateMemoryForBuffer(
    9697  VmaAllocator allocator,
    9698  VkBuffer buffer,
    9699  const VmaAllocationCreateInfo* pCreateInfo,
    9700  VmaAllocation* pAllocation,
    9701  VmaAllocationInfo* pAllocationInfo)
    9702 {
    9703  VMA_ASSERT(allocator && buffer != VK_NULL_HANDLE && pCreateInfo && pAllocation);
    9704 
    9705  VMA_DEBUG_LOG("vmaAllocateMemoryForBuffer");
    9706 
    9707  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9708 
    9709  VkMemoryRequirements vkMemReq = {};
    9710  bool requiresDedicatedAllocation = false;
    9711  bool prefersDedicatedAllocation = false;
    9712  allocator->GetBufferMemoryRequirements(buffer, vkMemReq,
    9713  requiresDedicatedAllocation,
    9714  prefersDedicatedAllocation);
    9715 
    9716  VkResult result = allocator->AllocateMemory(
    9717  vkMemReq,
    9718  requiresDedicatedAllocation,
    9719  prefersDedicatedAllocation,
    9720  buffer, // dedicatedBuffer
    9721  VK_NULL_HANDLE, // dedicatedImage
    9722  *pCreateInfo,
    9723  VMA_SUBALLOCATION_TYPE_BUFFER,
    9724  pAllocation);
    9725 
    9726  if(pAllocationInfo && result == VK_SUCCESS)
    9727  {
    9728  allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
    9729  }
    9730 
    9731  return result;
    9732 }
    9733 
    9734 VkResult vmaAllocateMemoryForImage(
    9735  VmaAllocator allocator,
    9736  VkImage image,
    9737  const VmaAllocationCreateInfo* pCreateInfo,
    9738  VmaAllocation* pAllocation,
    9739  VmaAllocationInfo* pAllocationInfo)
    9740 {
    9741  VMA_ASSERT(allocator && image != VK_NULL_HANDLE && pCreateInfo && pAllocation);
    9742 
    9743  VMA_DEBUG_LOG("vmaAllocateMemoryForImage");
    9744 
    9745  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9746 
    9747  VkResult result = AllocateMemoryForImage(
    9748  allocator,
    9749  image,
    9750  pCreateInfo,
    9751  VMA_SUBALLOCATION_TYPE_IMAGE_UNKNOWN,
    9752  pAllocation);
    9753 
    9754  if(pAllocationInfo && result == VK_SUCCESS)
    9755  {
    9756  allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
    9757  }
    9758 
    9759  return result;
    9760 }
    9761 
    9762 void vmaFreeMemory(
    9763  VmaAllocator allocator,
    9764  VmaAllocation allocation)
    9765 {
    9766  VMA_ASSERT(allocator);
    9767  VMA_DEBUG_LOG("vmaFreeMemory");
    9768  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9769  if(allocation != VK_NULL_HANDLE)
    9770  {
    9771  allocator->FreeMemory(allocation);
    9772  }
    9773 }
    9774 
    9775 void vmaGetAllocationInfo(
    9776  VmaAllocator allocator,
    9777  VmaAllocation allocation,
    9778  VmaAllocationInfo* pAllocationInfo)
    9779 {
    9780  VMA_ASSERT(allocator && allocation && pAllocationInfo);
    9781 
    9782  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9783 
    9784  allocator->GetAllocationInfo(allocation, pAllocationInfo);
    9785 }
    9786 
    9787 VkBool32 vmaTouchAllocation(
    9788  VmaAllocator allocator,
    9789  VmaAllocation allocation)
    9790 {
    9791  VMA_ASSERT(allocator && allocation);
    9792 
    9793  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9794 
    9795  return allocator->TouchAllocation(allocation);
    9796 }
    9797 
    9798 void vmaSetAllocationUserData(
    9799  VmaAllocator allocator,
    9800  VmaAllocation allocation,
    9801  void* pUserData)
    9802 {
    9803  VMA_ASSERT(allocator && allocation);
    9804 
    9805  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9806 
    9807  allocation->SetUserData(allocator, pUserData);
    9808 }
    9809 
    9810 void vmaCreateLostAllocation(
    9811  VmaAllocator allocator,
    9812  VmaAllocation* pAllocation)
    9813 {
    9814  VMA_ASSERT(allocator && pAllocation);
    9815 
    9816  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9817 
    9818  allocator->CreateLostAllocation(pAllocation);
    9819 }
    9820 
    9821 VkResult vmaMapMemory(
    9822  VmaAllocator allocator,
    9823  VmaAllocation allocation,
    9824  void** ppData)
    9825 {
    9826  VMA_ASSERT(allocator && allocation && ppData);
    9827 
    9828  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9829 
    9830  return allocator->Map(allocation, ppData);
    9831 }
    9832 
    9833 void vmaUnmapMemory(
    9834  VmaAllocator allocator,
    9835  VmaAllocation allocation)
    9836 {
    9837  VMA_ASSERT(allocator && allocation);
    9838 
    9839  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9840 
    9841  allocator->Unmap(allocation);
    9842 }
    9843 
    9844 void vmaFlushAllocation(VmaAllocator allocator, VmaAllocation allocation, VkDeviceSize offset, VkDeviceSize size)
    9845 {
    9846  VMA_ASSERT(allocator && allocation);
    9847 
    9848  VMA_DEBUG_LOG("vmaFlushAllocation");
    9849 
    9850  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9851 
    9852  allocator->FlushOrInvalidateAllocation(allocation, offset, size, VMA_CACHE_FLUSH);
    9853 }
    9854 
    9855 void vmaInvalidateAllocation(VmaAllocator allocator, VmaAllocation allocation, VkDeviceSize offset, VkDeviceSize size)
    9856 {
    9857  VMA_ASSERT(allocator && allocation);
    9858 
    9859  VMA_DEBUG_LOG("vmaInvalidateAllocation");
    9860 
    9861  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9862 
    9863  allocator->FlushOrInvalidateAllocation(allocation, offset, size, VMA_CACHE_INVALIDATE);
    9864 }
    9865 
    9866 VkResult vmaCheckCorruption(VmaAllocator allocator, uint32_t memoryTypeBits)
    9867 {
    9868  VMA_ASSERT(allocator);
    9869 
    9870  VMA_DEBUG_LOG("vmaCheckCorruption");
    9871 
    9872  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9873 
    9874  return allocator->CheckCorruption(memoryTypeBits);
    9875 }
    9876 
    9877 VkResult vmaDefragment(
    9878  VmaAllocator allocator,
    9879  VmaAllocation* pAllocations,
    9880  size_t allocationCount,
    9881  VkBool32* pAllocationsChanged,
    9882  const VmaDefragmentationInfo *pDefragmentationInfo,
    9883  VmaDefragmentationStats* pDefragmentationStats)
    9884 {
    9885  VMA_ASSERT(allocator && pAllocations);
    9886 
    9887  VMA_DEBUG_LOG("vmaDefragment");
    9888 
    9889  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9890 
    9891  return allocator->Defragment(pAllocations, allocationCount, pAllocationsChanged, pDefragmentationInfo, pDefragmentationStats);
    9892 }
    9893 
    9894 VkResult vmaBindBufferMemory(
    9895  VmaAllocator allocator,
    9896  VmaAllocation allocation,
    9897  VkBuffer buffer)
    9898 {
    9899  VMA_ASSERT(allocator && allocation && buffer);
    9900 
    9901  VMA_DEBUG_LOG("vmaBindBufferMemory");
    9902 
    9903  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9904 
    9905  return allocator->BindBufferMemory(allocation, buffer);
    9906 }
    9907 
    9908 VkResult vmaBindImageMemory(
    9909  VmaAllocator allocator,
    9910  VmaAllocation allocation,
    9911  VkImage image)
    9912 {
    9913  VMA_ASSERT(allocator && allocation && image);
    9914 
    9915  VMA_DEBUG_LOG("vmaBindImageMemory");
    9916 
    9917  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9918 
    9919  return allocator->BindImageMemory(allocation, image);
    9920 }
    9921 
    9922 VkResult vmaCreateBuffer(
    9923  VmaAllocator allocator,
    9924  const VkBufferCreateInfo* pBufferCreateInfo,
    9925  const VmaAllocationCreateInfo* pAllocationCreateInfo,
    9926  VkBuffer* pBuffer,
    9927  VmaAllocation* pAllocation,
    9928  VmaAllocationInfo* pAllocationInfo)
    9929 {
    9930  VMA_ASSERT(allocator && pBufferCreateInfo && pAllocationCreateInfo && pBuffer && pAllocation);
    9931 
    9932  VMA_DEBUG_LOG("vmaCreateBuffer");
    9933 
    9934  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9935 
    9936  *pBuffer = VK_NULL_HANDLE;
    9937  *pAllocation = VK_NULL_HANDLE;
    9938 
    9939  // 1. Create VkBuffer.
    9940  VkResult res = (*allocator->GetVulkanFunctions().vkCreateBuffer)(
    9941  allocator->m_hDevice,
    9942  pBufferCreateInfo,
    9943  allocator->GetAllocationCallbacks(),
    9944  pBuffer);
    9945  if(res >= 0)
    9946  {
    9947  // 2. vkGetBufferMemoryRequirements.
    9948  VkMemoryRequirements vkMemReq = {};
    9949  bool requiresDedicatedAllocation = false;
    9950  bool prefersDedicatedAllocation = false;
    9951  allocator->GetBufferMemoryRequirements(*pBuffer, vkMemReq,
    9952  requiresDedicatedAllocation, prefersDedicatedAllocation);
    9953 
    9954  // Make sure alignment requirements for specific buffer usages reported
    9955  // in Physical Device Properties are included in alignment reported by memory requirements.
    9956  if((pBufferCreateInfo->usage & VK_BUFFER_USAGE_UNIFORM_TEXEL_BUFFER_BIT) != 0)
    9957  {
    9958  VMA_ASSERT(vkMemReq.alignment %
    9959  allocator->m_PhysicalDeviceProperties.limits.minTexelBufferOffsetAlignment == 0);
    9960  }
    9961  if((pBufferCreateInfo->usage & VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT) != 0)
    9962  {
    9963  VMA_ASSERT(vkMemReq.alignment %
    9964  allocator->m_PhysicalDeviceProperties.limits.minUniformBufferOffsetAlignment == 0);
    9965  }
    9966  if((pBufferCreateInfo->usage & VK_BUFFER_USAGE_STORAGE_BUFFER_BIT) != 0)
    9967  {
    9968  VMA_ASSERT(vkMemReq.alignment %
    9969  allocator->m_PhysicalDeviceProperties.limits.minStorageBufferOffsetAlignment == 0);
    9970  }
    9971 
    9972  // 3. Allocate memory using allocator.
    9973  res = allocator->AllocateMemory(
    9974  vkMemReq,
    9975  requiresDedicatedAllocation,
    9976  prefersDedicatedAllocation,
    9977  *pBuffer, // dedicatedBuffer
    9978  VK_NULL_HANDLE, // dedicatedImage
    9979  *pAllocationCreateInfo,
    9980  VMA_SUBALLOCATION_TYPE_BUFFER,
    9981  pAllocation);
    9982  if(res >= 0)
    9983  {
    9984  // 4. Bind buffer with memory.
    9985  res = allocator->BindBufferMemory(*pAllocation, *pBuffer);
    9986  if(res >= 0)
    9987  {
    9988  // All steps succeeded.
    9989  #if VMA_STATS_STRING_ENABLED
    9990  (*pAllocation)->InitBufferImageUsage(pBufferCreateInfo->usage);
    9991  #endif
    9992  if(pAllocationInfo != VMA_NULL)
    9993  {
    9994  allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
    9995  }
    9996  return VK_SUCCESS;
    9997  }
    9998  allocator->FreeMemory(*pAllocation);
    9999  *pAllocation = VK_NULL_HANDLE;
    10000  (*allocator->GetVulkanFunctions().vkDestroyBuffer)(allocator->m_hDevice, *pBuffer, allocator->GetAllocationCallbacks());
    10001  *pBuffer = VK_NULL_HANDLE;
    10002  return res;
    10003  }
    10004  (*allocator->GetVulkanFunctions().vkDestroyBuffer)(allocator->m_hDevice, *pBuffer, allocator->GetAllocationCallbacks());
    10005  *pBuffer = VK_NULL_HANDLE;
    10006  return res;
    10007  }
    10008  return res;
    10009 }
    10010 
    10011 void vmaDestroyBuffer(
    10012  VmaAllocator allocator,
    10013  VkBuffer buffer,
    10014  VmaAllocation allocation)
    10015 {
    10016  VMA_ASSERT(allocator);
    10017  VMA_DEBUG_LOG("vmaDestroyBuffer");
    10018  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    10019  if(buffer != VK_NULL_HANDLE)
    10020  {
    10021  (*allocator->GetVulkanFunctions().vkDestroyBuffer)(allocator->m_hDevice, buffer, allocator->GetAllocationCallbacks());
    10022  }
    10023  if(allocation != VK_NULL_HANDLE)
    10024  {
    10025  allocator->FreeMemory(allocation);
    10026  }
    10027 }
    10028 
    10029 VkResult vmaCreateImage(
    10030  VmaAllocator allocator,
    10031  const VkImageCreateInfo* pImageCreateInfo,
    10032  const VmaAllocationCreateInfo* pAllocationCreateInfo,
    10033  VkImage* pImage,
    10034  VmaAllocation* pAllocation,
    10035  VmaAllocationInfo* pAllocationInfo)
    10036 {
    10037  VMA_ASSERT(allocator && pImageCreateInfo && pAllocationCreateInfo && pImage && pAllocation);
    10038 
    10039  VMA_DEBUG_LOG("vmaCreateImage");
    10040 
    10041  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    10042 
    10043  *pImage = VK_NULL_HANDLE;
    10044  *pAllocation = VK_NULL_HANDLE;
    10045 
    10046  // 1. Create VkImage.
    10047  VkResult res = (*allocator->GetVulkanFunctions().vkCreateImage)(
    10048  allocator->m_hDevice,
    10049  pImageCreateInfo,
    10050  allocator->GetAllocationCallbacks(),
    10051  pImage);
    10052  if(res >= 0)
    10053  {
    10054  VmaSuballocationType suballocType = pImageCreateInfo->tiling == VK_IMAGE_TILING_OPTIMAL ?
    10055  VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL :
    10056  VMA_SUBALLOCATION_TYPE_IMAGE_LINEAR;
    10057 
    10058  // 2. Allocate memory using allocator.
    10059  res = AllocateMemoryForImage(allocator, *pImage, pAllocationCreateInfo, suballocType, pAllocation);
    10060  if(res >= 0)
    10061  {
    10062  // 3. Bind image with memory.
    10063  res = allocator->BindImageMemory(*pAllocation, *pImage);
    10064  if(res >= 0)
    10065  {
    10066  // All steps succeeded.
    10067  #if VMA_STATS_STRING_ENABLED
    10068  (*pAllocation)->InitBufferImageUsage(pImageCreateInfo->usage);
    10069  #endif
    10070  if(pAllocationInfo != VMA_NULL)
    10071  {
    10072  allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
    10073  }
    10074  return VK_SUCCESS;
    10075  }
    10076  allocator->FreeMemory(*pAllocation);
    10077  *pAllocation = VK_NULL_HANDLE;
    10078  (*allocator->GetVulkanFunctions().vkDestroyImage)(allocator->m_hDevice, *pImage, allocator->GetAllocationCallbacks());
    10079  *pImage = VK_NULL_HANDLE;
    10080  return res;
    10081  }
    10082  (*allocator->GetVulkanFunctions().vkDestroyImage)(allocator->m_hDevice, *pImage, allocator->GetAllocationCallbacks());
    10083  *pImage = VK_NULL_HANDLE;
    10084  return res;
    10085  }
    10086  return res;
    10087 }
    10088 
    10089 void vmaDestroyImage(
    10090  VmaAllocator allocator,
    10091  VkImage image,
    10092  VmaAllocation allocation)
    10093 {
    10094  VMA_ASSERT(allocator);
    10095  VMA_DEBUG_LOG("vmaDestroyImage");
    10096  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    10097  if(image != VK_NULL_HANDLE)
    10098  {
    10099  (*allocator->GetVulkanFunctions().vkDestroyImage)(allocator->m_hDevice, image, allocator->GetAllocationCallbacks());
    10100  }
    10101  if(allocation != VK_NULL_HANDLE)
    10102  {
    10103  allocator->FreeMemory(allocation);
    10104  }
    10105 }
    10106 
    10107 #endif // #ifdef VMA_IMPLEMENTATION
    +Go to the documentation of this file.
    1 //
    2 // Copyright (c) 2017-2018 Advanced Micro Devices, Inc. All rights reserved.
    3 //
    4 // Permission is hereby granted, free of charge, to any person obtaining a copy
    5 // of this software and associated documentation files (the "Software"), to deal
    6 // in the Software without restriction, including without limitation the rights
    7 // to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
    8 // copies of the Software, and to permit persons to whom the Software is
    9 // furnished to do so, subject to the following conditions:
    10 //
    11 // The above copyright notice and this permission notice shall be included in
    12 // all copies or substantial portions of the Software.
    13 //
    14 // THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
    15 // IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
    16 // FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
    17 // AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
    18 // LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
    19 // OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
    20 // THE SOFTWARE.
    21 //
    22 
    23 #ifndef AMD_VULKAN_MEMORY_ALLOCATOR_H
    24 #define AMD_VULKAN_MEMORY_ALLOCATOR_H
    25 
    26 #ifdef __cplusplus
    27 extern "C" {
    28 #endif
    29 
    1184 #include <vulkan/vulkan.h>
    1185 
    1186 #if !defined(VMA_DEDICATED_ALLOCATION)
    1187  #if VK_KHR_get_memory_requirements2 && VK_KHR_dedicated_allocation
    1188  #define VMA_DEDICATED_ALLOCATION 1
    1189  #else
    1190  #define VMA_DEDICATED_ALLOCATION 0
    1191  #endif
    1192 #endif
    1193 
    1203 VK_DEFINE_HANDLE(VmaAllocator)
    1204 
    1205 typedef void (VKAPI_PTR *PFN_vmaAllocateDeviceMemoryFunction)(
    1207  VmaAllocator allocator,
    1208  uint32_t memoryType,
    1209  VkDeviceMemory memory,
    1210  VkDeviceSize size);
    1212 typedef void (VKAPI_PTR *PFN_vmaFreeDeviceMemoryFunction)(
    1213  VmaAllocator allocator,
    1214  uint32_t memoryType,
    1215  VkDeviceMemory memory,
    1216  VkDeviceSize size);
    1217 
    1231 
    1261 
    1264 typedef VkFlags VmaAllocatorCreateFlags;
    1265 
    1270 typedef struct VmaVulkanFunctions {
    1271  PFN_vkGetPhysicalDeviceProperties vkGetPhysicalDeviceProperties;
    1272  PFN_vkGetPhysicalDeviceMemoryProperties vkGetPhysicalDeviceMemoryProperties;
    1273  PFN_vkAllocateMemory vkAllocateMemory;
    1274  PFN_vkFreeMemory vkFreeMemory;
    1275  PFN_vkMapMemory vkMapMemory;
    1276  PFN_vkUnmapMemory vkUnmapMemory;
    1277  PFN_vkFlushMappedMemoryRanges vkFlushMappedMemoryRanges;
    1278  PFN_vkInvalidateMappedMemoryRanges vkInvalidateMappedMemoryRanges;
    1279  PFN_vkBindBufferMemory vkBindBufferMemory;
    1280  PFN_vkBindImageMemory vkBindImageMemory;
    1281  PFN_vkGetBufferMemoryRequirements vkGetBufferMemoryRequirements;
    1282  PFN_vkGetImageMemoryRequirements vkGetImageMemoryRequirements;
    1283  PFN_vkCreateBuffer vkCreateBuffer;
    1284  PFN_vkDestroyBuffer vkDestroyBuffer;
    1285  PFN_vkCreateImage vkCreateImage;
    1286  PFN_vkDestroyImage vkDestroyImage;
    1287 #if VMA_DEDICATED_ALLOCATION
    1288  PFN_vkGetBufferMemoryRequirements2KHR vkGetBufferMemoryRequirements2KHR;
    1289  PFN_vkGetImageMemoryRequirements2KHR vkGetImageMemoryRequirements2KHR;
    1290 #endif
    1292 
    1295 {
    1297  VmaAllocatorCreateFlags flags;
    1299 
    1300  VkPhysicalDevice physicalDevice;
    1302 
    1303  VkDevice device;
    1305 
    1308 
    1309  const VkAllocationCallbacks* pAllocationCallbacks;
    1311 
    1350  const VkDeviceSize* pHeapSizeLimit;
    1364 
    1366 VkResult vmaCreateAllocator(
    1367  const VmaAllocatorCreateInfo* pCreateInfo,
    1368  VmaAllocator* pAllocator);
    1369 
    1371 void vmaDestroyAllocator(
    1372  VmaAllocator allocator);
    1373 
    1379  VmaAllocator allocator,
    1380  const VkPhysicalDeviceProperties** ppPhysicalDeviceProperties);
    1381 
    1387  VmaAllocator allocator,
    1388  const VkPhysicalDeviceMemoryProperties** ppPhysicalDeviceMemoryProperties);
    1389 
    1397  VmaAllocator allocator,
    1398  uint32_t memoryTypeIndex,
    1399  VkMemoryPropertyFlags* pFlags);
    1400 
    1410  VmaAllocator allocator,
    1411  uint32_t frameIndex);
    1412 
    1415 typedef struct VmaStatInfo
    1416 {
    1418  uint32_t blockCount;
    1424  VkDeviceSize usedBytes;
    1426  VkDeviceSize unusedBytes;
    1427  VkDeviceSize allocationSizeMin, allocationSizeAvg, allocationSizeMax;
    1428  VkDeviceSize unusedRangeSizeMin, unusedRangeSizeAvg, unusedRangeSizeMax;
    1429 } VmaStatInfo;
    1430 
    1432 typedef struct VmaStats
    1433 {
    1434  VmaStatInfo memoryType[VK_MAX_MEMORY_TYPES];
    1435  VmaStatInfo memoryHeap[VK_MAX_MEMORY_HEAPS];
    1437 } VmaStats;
    1438 
    1440 void vmaCalculateStats(
    1441  VmaAllocator allocator,
    1442  VmaStats* pStats);
    1443 
    1444 #define VMA_STATS_STRING_ENABLED 1
    1445 
    1446 #if VMA_STATS_STRING_ENABLED
    1447 
    1449 
    1451 void vmaBuildStatsString(
    1452  VmaAllocator allocator,
    1453  char** ppStatsString,
    1454  VkBool32 detailedMap);
    1455 
    1456 void vmaFreeStatsString(
    1457  VmaAllocator allocator,
    1458  char* pStatsString);
    1459 
    1460 #endif // #if VMA_STATS_STRING_ENABLED
    1461 
    1470 VK_DEFINE_HANDLE(VmaPool)
    1471 
    1472 typedef enum VmaMemoryUsage
    1473 {
    1522 } VmaMemoryUsage;
    1523 
    1538 
    1588 
    1592 
    1594 {
    1596  VmaAllocationCreateFlags flags;
    1607  VkMemoryPropertyFlags requiredFlags;
    1612  VkMemoryPropertyFlags preferredFlags;
    1620  uint32_t memoryTypeBits;
    1633  void* pUserData;
    1635 
    1652 VkResult vmaFindMemoryTypeIndex(
    1653  VmaAllocator allocator,
    1654  uint32_t memoryTypeBits,
    1655  const VmaAllocationCreateInfo* pAllocationCreateInfo,
    1656  uint32_t* pMemoryTypeIndex);
    1657 
    1671  VmaAllocator allocator,
    1672  const VkBufferCreateInfo* pBufferCreateInfo,
    1673  const VmaAllocationCreateInfo* pAllocationCreateInfo,
    1674  uint32_t* pMemoryTypeIndex);
    1675 
    1689  VmaAllocator allocator,
    1690  const VkImageCreateInfo* pImageCreateInfo,
    1691  const VmaAllocationCreateInfo* pAllocationCreateInfo,
    1692  uint32_t* pMemoryTypeIndex);
    1693 
    1714 
    1717 typedef VkFlags VmaPoolCreateFlags;
    1718 
    1721 typedef struct VmaPoolCreateInfo {
    1727  VmaPoolCreateFlags flags;
    1732  VkDeviceSize blockSize;
    1761 
    1764 typedef struct VmaPoolStats {
    1767  VkDeviceSize size;
    1770  VkDeviceSize unusedSize;
    1783  VkDeviceSize unusedRangeSizeMax;
    1784 } VmaPoolStats;
    1785 
    1792 VkResult vmaCreatePool(
    1793  VmaAllocator allocator,
    1794  const VmaPoolCreateInfo* pCreateInfo,
    1795  VmaPool* pPool);
    1796 
    1799 void vmaDestroyPool(
    1800  VmaAllocator allocator,
    1801  VmaPool pool);
    1802 
    1809 void vmaGetPoolStats(
    1810  VmaAllocator allocator,
    1811  VmaPool pool,
    1812  VmaPoolStats* pPoolStats);
    1813 
    1821  VmaAllocator allocator,
    1822  VmaPool pool,
    1823  size_t* pLostAllocationCount);
    1824 
    1839 VkResult vmaCheckPoolCorruption(VmaAllocator allocator, VmaPool pool);
    1840 
    1865 VK_DEFINE_HANDLE(VmaAllocation)
    1866 
    1867 
    1869 typedef struct VmaAllocationInfo {
    1874  uint32_t memoryType;
    1883  VkDeviceMemory deviceMemory;
    1888  VkDeviceSize offset;
    1893  VkDeviceSize size;
    1907  void* pUserData;
    1909 
    1920 VkResult vmaAllocateMemory(
    1921  VmaAllocator allocator,
    1922  const VkMemoryRequirements* pVkMemoryRequirements,
    1923  const VmaAllocationCreateInfo* pCreateInfo,
    1924  VmaAllocation* pAllocation,
    1925  VmaAllocationInfo* pAllocationInfo);
    1926 
    1934  VmaAllocator allocator,
    1935  VkBuffer buffer,
    1936  const VmaAllocationCreateInfo* pCreateInfo,
    1937  VmaAllocation* pAllocation,
    1938  VmaAllocationInfo* pAllocationInfo);
    1939 
    1941 VkResult vmaAllocateMemoryForImage(
    1942  VmaAllocator allocator,
    1943  VkImage image,
    1944  const VmaAllocationCreateInfo* pCreateInfo,
    1945  VmaAllocation* pAllocation,
    1946  VmaAllocationInfo* pAllocationInfo);
    1947 
    1949 void vmaFreeMemory(
    1950  VmaAllocator allocator,
    1951  VmaAllocation allocation);
    1952 
    1970  VmaAllocator allocator,
    1971  VmaAllocation allocation,
    1972  VmaAllocationInfo* pAllocationInfo);
    1973 
    1988 VkBool32 vmaTouchAllocation(
    1989  VmaAllocator allocator,
    1990  VmaAllocation allocation);
    1991 
    2006  VmaAllocator allocator,
    2007  VmaAllocation allocation,
    2008  void* pUserData);
    2009 
    2021  VmaAllocator allocator,
    2022  VmaAllocation* pAllocation);
    2023 
    2058 VkResult vmaMapMemory(
    2059  VmaAllocator allocator,
    2060  VmaAllocation allocation,
    2061  void** ppData);
    2062 
    2067 void vmaUnmapMemory(
    2068  VmaAllocator allocator,
    2069  VmaAllocation allocation);
    2070 
    2083 void vmaFlushAllocation(VmaAllocator allocator, VmaAllocation allocation, VkDeviceSize offset, VkDeviceSize size);
    2084 
    2097 void vmaInvalidateAllocation(VmaAllocator allocator, VmaAllocation allocation, VkDeviceSize offset, VkDeviceSize size);
    2098 
    2115 VkResult vmaCheckCorruption(VmaAllocator allocator, uint32_t memoryTypeBits);
    2116 
    2118 typedef struct VmaDefragmentationInfo {
    2123  VkDeviceSize maxBytesToMove;
    2130 
    2132 typedef struct VmaDefragmentationStats {
    2134  VkDeviceSize bytesMoved;
    2136  VkDeviceSize bytesFreed;
    2142 
    2225 VkResult vmaDefragment(
    2226  VmaAllocator allocator,
    2227  VmaAllocation* pAllocations,
    2228  size_t allocationCount,
    2229  VkBool32* pAllocationsChanged,
    2230  const VmaDefragmentationInfo *pDefragmentationInfo,
    2231  VmaDefragmentationStats* pDefragmentationStats);
    2232 
    2245 VkResult vmaBindBufferMemory(
    2246  VmaAllocator allocator,
    2247  VmaAllocation allocation,
    2248  VkBuffer buffer);
    2249 
    2262 VkResult vmaBindImageMemory(
    2263  VmaAllocator allocator,
    2264  VmaAllocation allocation,
    2265  VkImage image);
    2266 
    2293 VkResult vmaCreateBuffer(
    2294  VmaAllocator allocator,
    2295  const VkBufferCreateInfo* pBufferCreateInfo,
    2296  const VmaAllocationCreateInfo* pAllocationCreateInfo,
    2297  VkBuffer* pBuffer,
    2298  VmaAllocation* pAllocation,
    2299  VmaAllocationInfo* pAllocationInfo);
    2300 
    2312 void vmaDestroyBuffer(
    2313  VmaAllocator allocator,
    2314  VkBuffer buffer,
    2315  VmaAllocation allocation);
    2316 
    2318 VkResult vmaCreateImage(
    2319  VmaAllocator allocator,
    2320  const VkImageCreateInfo* pImageCreateInfo,
    2321  const VmaAllocationCreateInfo* pAllocationCreateInfo,
    2322  VkImage* pImage,
    2323  VmaAllocation* pAllocation,
    2324  VmaAllocationInfo* pAllocationInfo);
    2325 
    2337 void vmaDestroyImage(
    2338  VmaAllocator allocator,
    2339  VkImage image,
    2340  VmaAllocation allocation);
    2341 
    2342 #ifdef __cplusplus
    2343 }
    2344 #endif
    2345 
    2346 #endif // AMD_VULKAN_MEMORY_ALLOCATOR_H
    2347 
    2348 // For Visual Studio IntelliSense.
    2349 #if defined(__cplusplus) && defined(__INTELLISENSE__)
    2350 #define VMA_IMPLEMENTATION
    2351 #endif
    2352 
    2353 #ifdef VMA_IMPLEMENTATION
    2354 #undef VMA_IMPLEMENTATION
    2355 
    2356 #include <cstdint>
    2357 #include <cstdlib>
    2358 #include <cstring>
    2359 
    2360 /*******************************************************************************
    2361 CONFIGURATION SECTION
    2362 
    2363 Define some of these macros before each #include of this header or change them
    2364 here if you need other than the default behavior, depending on your environment.
    2365 */
    2366 
    2367 /*
    2368 Define this macro to 1 to make the library fetch pointers to Vulkan functions
    2369 internally, like:
    2370 
    2371  vulkanFunctions.vkAllocateMemory = &vkAllocateMemory;
    2372 
    2373 Define to 0 if you are going to provide your own pointers to Vulkan functions via
    2374 VmaAllocatorCreateInfo::pVulkanFunctions.
    2375 */
    2376 #if !defined(VMA_STATIC_VULKAN_FUNCTIONS) && !defined(VK_NO_PROTOTYPES)
    2377 #define VMA_STATIC_VULKAN_FUNCTIONS 1
    2378 #endif
    2379 
    2380 // Define this macro to 1 to make the library use STL containers instead of its own implementation.
    2381 //#define VMA_USE_STL_CONTAINERS 1
    2382 
    2383 /* Set this macro to 1 to make the library include and use STL containers:
    2384 std::pair, std::vector, std::list, std::unordered_map.
    2385 
    2386 Set it to 0 or leave it undefined to make the library use its own implementation of
    2387 the containers.
    2388 */
    2389 #if VMA_USE_STL_CONTAINERS
    2390  #define VMA_USE_STL_VECTOR 1
    2391  #define VMA_USE_STL_UNORDERED_MAP 1
    2392  #define VMA_USE_STL_LIST 1
    2393 #endif
    2394 
    2395 #if VMA_USE_STL_VECTOR
    2396  #include <vector>
    2397 #endif
    2398 
    2399 #if VMA_USE_STL_UNORDERED_MAP
    2400  #include <unordered_map>
    2401 #endif
    2402 
    2403 #if VMA_USE_STL_LIST
    2404  #include <list>
    2405 #endif
    2406 
    2407 /*
    2408 The following headers are used in this CONFIGURATION section only, so feel free to
    2409 remove them if not needed.
    2410 */
    2411 #include <cassert> // for assert
    2412 #include <algorithm> // for min, max
    2413 #include <mutex> // for std::mutex
    2414 #include <atomic> // for std::atomic
    2415 
    2416 #ifndef VMA_NULL
    2417  // Value used as null pointer. Define it to e.g.: nullptr, NULL, 0, (void*)0.
    2418  #define VMA_NULL nullptr
    2419 #endif
    2420 
    2421 #if defined(__APPLE__) || defined(__ANDROID__)
    2422 #include <cstdlib>
    2423 void *aligned_alloc(size_t alignment, size_t size)
    2424 {
    2425  // alignment must be >= sizeof(void*)
    2426  if(alignment < sizeof(void*))
    2427  {
    2428  alignment = sizeof(void*);
    2429  }
    2430 
    2431  void *pointer;
    2432  if(posix_memalign(&pointer, alignment, size) == 0)
    2433  return pointer;
    2434  return VMA_NULL;
    2435 }
    2436 #endif
    2437 
    2438 // If your compiler is not compatible with C++11 and definition of
    2439 // aligned_alloc() function is missing, uncommenting the following line may help:
    2440 
    2441 //#include <malloc.h>
    2442 
    2443 // Normal assert to check for programmer's errors, especially in Debug configuration.
    2444 #ifndef VMA_ASSERT
    2445  #ifdef _DEBUG
    2446  #define VMA_ASSERT(expr) assert(expr)
    2447  #else
    2448  #define VMA_ASSERT(expr)
    2449  #endif
    2450 #endif
    2451 
    2452 // Assert that will be called very often, like inside data structures e.g. operator[].
    2453 // Making it non-empty can make the program slow.
    2454 #ifndef VMA_HEAVY_ASSERT
    2455  #ifdef _DEBUG
    2456  #define VMA_HEAVY_ASSERT(expr) //VMA_ASSERT(expr)
    2457  #else
    2458  #define VMA_HEAVY_ASSERT(expr)
    2459  #endif
    2460 #endif
    2461 
    2462 #ifndef VMA_ALIGN_OF
    2463  #define VMA_ALIGN_OF(type) (__alignof(type))
    2464 #endif
    2465 
    2466 #ifndef VMA_SYSTEM_ALIGNED_MALLOC
    2467  #if defined(_WIN32)
    2468  #define VMA_SYSTEM_ALIGNED_MALLOC(size, alignment) (_aligned_malloc((size), (alignment)))
    2469  #else
    2470  #define VMA_SYSTEM_ALIGNED_MALLOC(size, alignment) (aligned_alloc((alignment), (size) ))
    2471  #endif
    2472 #endif
    2473 
    2474 #ifndef VMA_SYSTEM_FREE
    2475  #if defined(_WIN32)
    2476  #define VMA_SYSTEM_FREE(ptr) _aligned_free(ptr)
    2477  #else
    2478  #define VMA_SYSTEM_FREE(ptr) free(ptr)
    2479  #endif
    2480 #endif
    2481 
    2482 #ifndef VMA_MIN
    2483  #define VMA_MIN(v1, v2) (std::min((v1), (v2)))
    2484 #endif
    2485 
    2486 #ifndef VMA_MAX
    2487  #define VMA_MAX(v1, v2) (std::max((v1), (v2)))
    2488 #endif
    2489 
    2490 #ifndef VMA_SWAP
    2491  #define VMA_SWAP(v1, v2) std::swap((v1), (v2))
    2492 #endif
    2493 
    2494 #ifndef VMA_SORT
    2495  #define VMA_SORT(beg, end, cmp) std::sort(beg, end, cmp)
    2496 #endif
    2497 
    2498 #ifndef VMA_DEBUG_LOG
    2499  #define VMA_DEBUG_LOG(format, ...)
    2500  /*
    2501  #define VMA_DEBUG_LOG(format, ...) do { \
    2502  printf(format, __VA_ARGS__); \
    2503  printf("\n"); \
    2504  } while(false)
    2505  */
    2506 #endif
    2507 
    2508 // Define this macro to 1 to enable functions: vmaBuildStatsString, vmaFreeStatsString.
    2509 #if VMA_STATS_STRING_ENABLED
    2510  static inline void VmaUint32ToStr(char* outStr, size_t strLen, uint32_t num)
    2511  {
    2512  snprintf(outStr, strLen, "%u", static_cast<unsigned int>(num));
    2513  }
    2514  static inline void VmaUint64ToStr(char* outStr, size_t strLen, uint64_t num)
    2515  {
    2516  snprintf(outStr, strLen, "%llu", static_cast<unsigned long long>(num));
    2517  }
    2518  static inline void VmaPtrToStr(char* outStr, size_t strLen, const void* ptr)
    2519  {
    2520  snprintf(outStr, strLen, "%p", ptr);
    2521  }
    2522 #endif
    2523 
    2524 #ifndef VMA_MUTEX
    2525  class VmaMutex
    2526  {
    2527  public:
    2528  VmaMutex() { }
    2529  ~VmaMutex() { }
    2530  void Lock() { m_Mutex.lock(); }
    2531  void Unlock() { m_Mutex.unlock(); }
    2532  private:
    2533  std::mutex m_Mutex;
    2534  };
    2535  #define VMA_MUTEX VmaMutex
    2536 #endif
    2537 
    2538 /*
    2539 If providing your own implementation, you need to implement a subset of std::atomic:
    2540 
    2541 - Constructor(uint32_t desired)
    2542 - uint32_t load() const
    2543 - void store(uint32_t desired)
    2544 - bool compare_exchange_weak(uint32_t& expected, uint32_t desired)
    2545 */
    2546 #ifndef VMA_ATOMIC_UINT32
    2547  #define VMA_ATOMIC_UINT32 std::atomic<uint32_t>
    2548 #endif
    2549 
    2550 #ifndef VMA_BEST_FIT
    2551 
    2563  #define VMA_BEST_FIT (1)
    2564 #endif
    2565 
    2566 #ifndef VMA_DEBUG_ALWAYS_DEDICATED_MEMORY
    2567 
    2571  #define VMA_DEBUG_ALWAYS_DEDICATED_MEMORY (0)
    2572 #endif
    2573 
    2574 #ifndef VMA_DEBUG_ALIGNMENT
    2575 
    2579  #define VMA_DEBUG_ALIGNMENT (1)
    2580 #endif
    2581 
    2582 #ifndef VMA_DEBUG_MARGIN
    2583 
    2587  #define VMA_DEBUG_MARGIN (0)
    2588 #endif
    2589 
    2590 #ifndef VMA_DEBUG_INITIALIZE_ALLOCATIONS
    2591 
    2595  #define VMA_DEBUG_INITIALIZE_ALLOCATIONS (0)
    2596 #endif
    2597 
    2598 #ifndef VMA_DEBUG_DETECT_CORRUPTION
    2599 
    2604  #define VMA_DEBUG_DETECT_CORRUPTION (0)
    2605 #endif
    2606 
    2607 #ifndef VMA_DEBUG_GLOBAL_MUTEX
    2608 
    2612  #define VMA_DEBUG_GLOBAL_MUTEX (0)
    2613 #endif
    2614 
    2615 #ifndef VMA_DEBUG_MIN_BUFFER_IMAGE_GRANULARITY
    2616 
    2620  #define VMA_DEBUG_MIN_BUFFER_IMAGE_GRANULARITY (1)
    2621 #endif
    2622 
    2623 #ifndef VMA_SMALL_HEAP_MAX_SIZE
    2624  #define VMA_SMALL_HEAP_MAX_SIZE (1024ull * 1024 * 1024)
    2626 #endif
    2627 
    2628 #ifndef VMA_DEFAULT_LARGE_HEAP_BLOCK_SIZE
    2629  #define VMA_DEFAULT_LARGE_HEAP_BLOCK_SIZE (256ull * 1024 * 1024)
    2631 #endif
    2632 
    2633 #ifndef VMA_CLASS_NO_COPY
    2634  #define VMA_CLASS_NO_COPY(className) \
    2635  private: \
    2636  className(const className&) = delete; \
    2637  className& operator=(const className&) = delete;
    2638 #endif
    2639 
    2640 static const uint32_t VMA_FRAME_INDEX_LOST = UINT32_MAX;
    2641 
    2642 // Decimal 2139416166, float NaN, little-endian binary 66 E6 84 7F.
    2643 static const uint32_t VMA_CORRUPTION_DETECTION_MAGIC_VALUE = 0x7F84E666;
    2644 
    2645 static const uint8_t VMA_ALLOCATION_FILL_PATTERN_CREATED = 0xDC;
    2646 static const uint8_t VMA_ALLOCATION_FILL_PATTERN_DESTROYED = 0xEF;
    2647 
    2648 /*******************************************************************************
    2649 END OF CONFIGURATION
    2650 */
    2651 
    2652 static VkAllocationCallbacks VmaEmptyAllocationCallbacks = {
    2653  VMA_NULL, VMA_NULL, VMA_NULL, VMA_NULL, VMA_NULL, VMA_NULL };
    2654 
    2655 // Returns number of bits set to 1 in (v).
    2656 static inline uint32_t VmaCountBitsSet(uint32_t v)
    2657 {
    2658  uint32_t c = v - ((v >> 1) & 0x55555555);
    2659  c = ((c >> 2) & 0x33333333) + (c & 0x33333333);
    2660  c = ((c >> 4) + c) & 0x0F0F0F0F;
    2661  c = ((c >> 8) + c) & 0x00FF00FF;
    2662  c = ((c >> 16) + c) & 0x0000FFFF;
    2663  return c;
    2664 }
    2665 
    2666 // Aligns given value up to nearest multiple of align value. For example: VmaAlignUp(11, 8) = 16.
    2667 // Use types like uint32_t, uint64_t as T.
    2668 template <typename T>
    2669 static inline T VmaAlignUp(T val, T align)
    2670 {
    2671  return (val + align - 1) / align * align;
    2672 }
    2673 // Aligns given value down to nearest multiple of align value. For example: VmaAlignDown(11, 8) = 8.
    2674 // Use types like uint32_t, uint64_t as T.
    2675 template <typename T>
    2676 static inline T VmaAlignDown(T val, T align)
    2677 {
    2678  return val / align * align;
    2679 }
    2680 
    2681 // Integer division with mathematical rounding to the nearest value.
    2682 template <typename T>
    2683 inline T VmaRoundDiv(T x, T y)
    2684 {
    2685  return (x + (y / (T)2)) / y;
    2686 }
    2687 
    2688 #ifndef VMA_SORT
    2689 
    2690 template<typename Iterator, typename Compare>
    2691 Iterator VmaQuickSortPartition(Iterator beg, Iterator end, Compare cmp)
    2692 {
    2693  Iterator centerValue = end; --centerValue;
    2694  Iterator insertIndex = beg;
    2695  for(Iterator memTypeIndex = beg; memTypeIndex < centerValue; ++memTypeIndex)
    2696  {
    2697  if(cmp(*memTypeIndex, *centerValue))
    2698  {
    2699  if(insertIndex != memTypeIndex)
    2700  {
    2701  VMA_SWAP(*memTypeIndex, *insertIndex);
    2702  }
    2703  ++insertIndex;
    2704  }
    2705  }
    2706  if(insertIndex != centerValue)
    2707  {
    2708  VMA_SWAP(*insertIndex, *centerValue);
    2709  }
    2710  return insertIndex;
    2711 }
    2712 
    2713 template<typename Iterator, typename Compare>
    2714 void VmaQuickSort(Iterator beg, Iterator end, Compare cmp)
    2715 {
    2716  if(beg < end)
    2717  {
    2718  Iterator it = VmaQuickSortPartition<Iterator, Compare>(beg, end, cmp);
    2719  VmaQuickSort<Iterator, Compare>(beg, it, cmp);
    2720  VmaQuickSort<Iterator, Compare>(it + 1, end, cmp);
    2721  }
    2722 }
    2723 
    2724 #define VMA_SORT(beg, end, cmp) VmaQuickSort(beg, end, cmp)
    2725 
    2726 #endif // #ifndef VMA_SORT
    2727 
    2728 /*
    2729 Returns true if two memory blocks occupy overlapping pages.
    2730 ResourceA must be at a lower memory offset than ResourceB.
    2731 
    2732 Algorithm is based on "Vulkan 1.0.39 - A Specification (with all registered Vulkan extensions)"
    2733 chapter 11.6 "Resource Memory Association", paragraph "Buffer-Image Granularity".
    2734 */
    2735 static inline bool VmaBlocksOnSamePage(
    2736  VkDeviceSize resourceAOffset,
    2737  VkDeviceSize resourceASize,
    2738  VkDeviceSize resourceBOffset,
    2739  VkDeviceSize pageSize)
    2740 {
    2741  VMA_ASSERT(resourceAOffset + resourceASize <= resourceBOffset && resourceASize > 0 && pageSize > 0);
    2742  VkDeviceSize resourceAEnd = resourceAOffset + resourceASize - 1;
    2743  VkDeviceSize resourceAEndPage = resourceAEnd & ~(pageSize - 1);
    2744  VkDeviceSize resourceBStart = resourceBOffset;
    2745  VkDeviceSize resourceBStartPage = resourceBStart & ~(pageSize - 1);
    2746  return resourceAEndPage == resourceBStartPage;
    2747 }
    2748 
    2749 enum VmaSuballocationType
    2750 {
    2751  VMA_SUBALLOCATION_TYPE_FREE = 0,
    2752  VMA_SUBALLOCATION_TYPE_UNKNOWN = 1,
    2753  VMA_SUBALLOCATION_TYPE_BUFFER = 2,
    2754  VMA_SUBALLOCATION_TYPE_IMAGE_UNKNOWN = 3,
    2755  VMA_SUBALLOCATION_TYPE_IMAGE_LINEAR = 4,
    2756  VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL = 5,
    2757  VMA_SUBALLOCATION_TYPE_MAX_ENUM = 0x7FFFFFFF
    2758 };
    2759 
    2760 /*
    2761 Returns true if given suballocation types could conflict and must respect
    2762 VkPhysicalDeviceLimits::bufferImageGranularity. They conflict if one is buffer
    2763 or linear image and the other one is an optimal image. If a type is unknown, behave
    2764 conservatively.
    2765 */
    2766 static inline bool VmaIsBufferImageGranularityConflict(
    2767  VmaSuballocationType suballocType1,
    2768  VmaSuballocationType suballocType2)
    2769 {
    2770  if(suballocType1 > suballocType2)
    2771  {
    2772  VMA_SWAP(suballocType1, suballocType2);
    2773  }
    2774 
    2775  switch(suballocType1)
    2776  {
    2777  case VMA_SUBALLOCATION_TYPE_FREE:
    2778  return false;
    2779  case VMA_SUBALLOCATION_TYPE_UNKNOWN:
    2780  return true;
    2781  case VMA_SUBALLOCATION_TYPE_BUFFER:
    2782  return
    2783  suballocType2 == VMA_SUBALLOCATION_TYPE_IMAGE_UNKNOWN ||
    2784  suballocType2 == VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL;
    2785  case VMA_SUBALLOCATION_TYPE_IMAGE_UNKNOWN:
    2786  return
    2787  suballocType2 == VMA_SUBALLOCATION_TYPE_IMAGE_UNKNOWN ||
    2788  suballocType2 == VMA_SUBALLOCATION_TYPE_IMAGE_LINEAR ||
    2789  suballocType2 == VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL;
    2790  case VMA_SUBALLOCATION_TYPE_IMAGE_LINEAR:
    2791  return
    2792  suballocType2 == VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL;
    2793  case VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL:
    2794  return false;
    2795  default:
    2796  VMA_ASSERT(0);
    2797  return true;
    2798  }
    2799 }
    2800 
    2801 static void VmaWriteMagicValue(void* pData, VkDeviceSize offset)
    2802 {
    2803  uint32_t* pDst = (uint32_t*)((char*)pData + offset);
    2804  const size_t numberCount = VMA_DEBUG_MARGIN / sizeof(uint32_t);
    2805  for(size_t i = 0; i < numberCount; ++i, ++pDst)
    2806  {
    2807  *pDst = VMA_CORRUPTION_DETECTION_MAGIC_VALUE;
    2808  }
    2809 }
    2810 
    2811 static bool VmaValidateMagicValue(const void* pData, VkDeviceSize offset)
    2812 {
    2813  const uint32_t* pSrc = (const uint32_t*)((const char*)pData + offset);
    2814  const size_t numberCount = VMA_DEBUG_MARGIN / sizeof(uint32_t);
    2815  for(size_t i = 0; i < numberCount; ++i, ++pSrc)
    2816  {
    2817  if(*pSrc != VMA_CORRUPTION_DETECTION_MAGIC_VALUE)
    2818  {
    2819  return false;
    2820  }
    2821  }
    2822  return true;
    2823 }
    2824 
    2825 // Helper RAII class to lock a mutex in constructor and unlock it in destructor (at the end of scope).
    2826 struct VmaMutexLock
    2827 {
    2828  VMA_CLASS_NO_COPY(VmaMutexLock)
    2829 public:
    2830  VmaMutexLock(VMA_MUTEX& mutex, bool useMutex) :
    2831  m_pMutex(useMutex ? &mutex : VMA_NULL)
    2832  {
    2833  if(m_pMutex)
    2834  {
    2835  m_pMutex->Lock();
    2836  }
    2837  }
    2838 
    2839  ~VmaMutexLock()
    2840  {
    2841  if(m_pMutex)
    2842  {
    2843  m_pMutex->Unlock();
    2844  }
    2845  }
    2846 
    2847 private:
    2848  VMA_MUTEX* m_pMutex;
    2849 };
    2850 
    2851 #if VMA_DEBUG_GLOBAL_MUTEX
    2852  static VMA_MUTEX gDebugGlobalMutex;
    2853  #define VMA_DEBUG_GLOBAL_MUTEX_LOCK VmaMutexLock debugGlobalMutexLock(gDebugGlobalMutex, true);
    2854 #else
    2855  #define VMA_DEBUG_GLOBAL_MUTEX_LOCK
    2856 #endif
    2857 
    2858 // Minimum size of a free suballocation to register it in the free suballocation collection.
    2859 static const VkDeviceSize VMA_MIN_FREE_SUBALLOCATION_SIZE_TO_REGISTER = 16;
    2860 
    2861 /*
    2862 Performs binary search and returns iterator to the first element that is greater
    2863 than or equal to (key), according to comparison (cmp).
    2864 
    2865 Cmp should return true if its first argument is less than its second argument.
    2866 
    2867 The returned iterator points to the found element, if it is present in the
    2868 collection, or to the place where a new element with value (key) should be inserted.
    2869 */
    2870 template <typename IterT, typename KeyT, typename CmpT>
    2871 static IterT VmaBinaryFindFirstNotLess(IterT beg, IterT end, const KeyT &key, CmpT cmp)
    2872 {
    2873  size_t down = 0, up = (end - beg);
    2874  while(down < up)
    2875  {
    2876  const size_t mid = (down + up) / 2;
    2877  if(cmp(*(beg+mid), key))
    2878  {
    2879  down = mid + 1;
    2880  }
    2881  else
    2882  {
    2883  up = mid;
    2884  }
    2885  }
    2886  return beg + down;
    2887 }
    2888 
    2889 ////////////////////////////////////////////////////////////////////////////////
    2890 // Memory allocation
    2891 
    2892 static void* VmaMalloc(const VkAllocationCallbacks* pAllocationCallbacks, size_t size, size_t alignment)
    2893 {
    2894  if((pAllocationCallbacks != VMA_NULL) &&
    2895  (pAllocationCallbacks->pfnAllocation != VMA_NULL))
    2896  {
    2897  return (*pAllocationCallbacks->pfnAllocation)(
    2898  pAllocationCallbacks->pUserData,
    2899  size,
    2900  alignment,
    2901  VK_SYSTEM_ALLOCATION_SCOPE_OBJECT);
    2902  }
    2903  else
    2904  {
    2905  return VMA_SYSTEM_ALIGNED_MALLOC(size, alignment);
    2906  }
    2907 }
    2908 
    2909 static void VmaFree(const VkAllocationCallbacks* pAllocationCallbacks, void* ptr)
    2910 {
    2911  if((pAllocationCallbacks != VMA_NULL) &&
    2912  (pAllocationCallbacks->pfnFree != VMA_NULL))
    2913  {
    2914  (*pAllocationCallbacks->pfnFree)(pAllocationCallbacks->pUserData, ptr);
    2915  }
    2916  else
    2917  {
    2918  VMA_SYSTEM_FREE(ptr);
    2919  }
    2920 }
    2921 
    2922 template<typename T>
    2923 static T* VmaAllocate(const VkAllocationCallbacks* pAllocationCallbacks)
    2924 {
    2925  return (T*)VmaMalloc(pAllocationCallbacks, sizeof(T), VMA_ALIGN_OF(T));
    2926 }
    2927 
    2928 template<typename T>
    2929 static T* VmaAllocateArray(const VkAllocationCallbacks* pAllocationCallbacks, size_t count)
    2930 {
    2931  return (T*)VmaMalloc(pAllocationCallbacks, sizeof(T) * count, VMA_ALIGN_OF(T));
    2932 }
    2933 
    2934 #define vma_new(allocator, type) new(VmaAllocate<type>(allocator))(type)
    2935 
    2936 #define vma_new_array(allocator, type, count) new(VmaAllocateArray<type>((allocator), (count)))(type)
    2937 
    2938 template<typename T>
    2939 static void vma_delete(const VkAllocationCallbacks* pAllocationCallbacks, T* ptr)
    2940 {
    2941  ptr->~T();
    2942  VmaFree(pAllocationCallbacks, ptr);
    2943 }
    2944 
    2945 template<typename T>
    2946 static void vma_delete_array(const VkAllocationCallbacks* pAllocationCallbacks, T* ptr, size_t count)
    2947 {
    2948  if(ptr != VMA_NULL)
    2949  {
    2950  for(size_t i = count; i--; )
    2951  {
    2952  ptr[i].~T();
    2953  }
    2954  VmaFree(pAllocationCallbacks, ptr);
    2955  }
    2956 }
    2957 
    2958 // STL-compatible allocator.
    2959 template<typename T>
    2960 class VmaStlAllocator
    2961 {
    2962 public:
    2963  const VkAllocationCallbacks* const m_pCallbacks;
    2964  typedef T value_type;
    2965 
    2966  VmaStlAllocator(const VkAllocationCallbacks* pCallbacks) : m_pCallbacks(pCallbacks) { }
    2967  template<typename U> VmaStlAllocator(const VmaStlAllocator<U>& src) : m_pCallbacks(src.m_pCallbacks) { }
    2968 
    2969  T* allocate(size_t n) { return VmaAllocateArray<T>(m_pCallbacks, n); }
    2970  void deallocate(T* p, size_t n) { VmaFree(m_pCallbacks, p); }
    2971 
    2972  template<typename U>
    2973  bool operator==(const VmaStlAllocator<U>& rhs) const
    2974  {
    2975  return m_pCallbacks == rhs.m_pCallbacks;
    2976  }
    2977  template<typename U>
    2978  bool operator!=(const VmaStlAllocator<U>& rhs) const
    2979  {
    2980  return m_pCallbacks != rhs.m_pCallbacks;
    2981  }
    2982 
    2983  VmaStlAllocator& operator=(const VmaStlAllocator& x) = delete;
    2984 };
    2985 
    2986 #if VMA_USE_STL_VECTOR
    2987 
    2988 #define VmaVector std::vector
    2989 
    2990 template<typename T, typename allocatorT>
    2991 static void VmaVectorInsert(std::vector<T, allocatorT>& vec, size_t index, const T& item)
    2992 {
    2993  vec.insert(vec.begin() + index, item);
    2994 }
    2995 
    2996 template<typename T, typename allocatorT>
    2997 static void VmaVectorRemove(std::vector<T, allocatorT>& vec, size_t index)
    2998 {
    2999  vec.erase(vec.begin() + index);
    3000 }
    3001 
    3002 #else // #if VMA_USE_STL_VECTOR
    3003 
    3004 /* Class with interface compatible with a subset of std::vector.
    3005 T must be POD because constructors and destructors are not called and memcpy is
    3006 used for these objects. */
    3007 template<typename T, typename AllocatorT>
    3008 class VmaVector
    3009 {
    3010 public:
    3011  typedef T value_type;
    3012 
    3013  VmaVector(const AllocatorT& allocator) :
    3014  m_Allocator(allocator),
    3015  m_pArray(VMA_NULL),
    3016  m_Count(0),
    3017  m_Capacity(0)
    3018  {
    3019  }
    3020 
    3021  VmaVector(size_t count, const AllocatorT& allocator) :
    3022  m_Allocator(allocator),
    3023  m_pArray(count ? (T*)VmaAllocateArray<T>(allocator.m_pCallbacks, count) : VMA_NULL),
    3024  m_Count(count),
    3025  m_Capacity(count)
    3026  {
    3027  }
    3028 
    3029  VmaVector(const VmaVector<T, AllocatorT>& src) :
    3030  m_Allocator(src.m_Allocator),
    3031  m_pArray(src.m_Count ? (T*)VmaAllocateArray<T>(src.m_Allocator.m_pCallbacks, src.m_Count) : VMA_NULL),
    3032  m_Count(src.m_Count),
    3033  m_Capacity(src.m_Count)
    3034  {
    3035  if(m_Count != 0)
    3036  {
    3037  memcpy(m_pArray, src.m_pArray, m_Count * sizeof(T));
    3038  }
    3039  }
    3040 
    3041  ~VmaVector()
    3042  {
    3043  VmaFree(m_Allocator.m_pCallbacks, m_pArray);
    3044  }
    3045 
    3046  VmaVector& operator=(const VmaVector<T, AllocatorT>& rhs)
    3047  {
    3048  if(&rhs != this)
    3049  {
    3050  resize(rhs.m_Count);
    3051  if(m_Count != 0)
    3052  {
    3053  memcpy(m_pArray, rhs.m_pArray, m_Count * sizeof(T));
    3054  }
    3055  }
    3056  return *this;
    3057  }
    3058 
    3059  bool empty() const { return m_Count == 0; }
    3060  size_t size() const { return m_Count; }
    3061  T* data() { return m_pArray; }
    3062  const T* data() const { return m_pArray; }
    3063 
    3064  T& operator[](size_t index)
    3065  {
    3066  VMA_HEAVY_ASSERT(index < m_Count);
    3067  return m_pArray[index];
    3068  }
    3069  const T& operator[](size_t index) const
    3070  {
    3071  VMA_HEAVY_ASSERT(index < m_Count);
    3072  return m_pArray[index];
    3073  }
    3074 
    3075  T& front()
    3076  {
    3077  VMA_HEAVY_ASSERT(m_Count > 0);
    3078  return m_pArray[0];
    3079  }
    3080  const T& front() const
    3081  {
    3082  VMA_HEAVY_ASSERT(m_Count > 0);
    3083  return m_pArray[0];
    3084  }
    3085  T& back()
    3086  {
    3087  VMA_HEAVY_ASSERT(m_Count > 0);
    3088  return m_pArray[m_Count - 1];
    3089  }
    3090  const T& back() const
    3091  {
    3092  VMA_HEAVY_ASSERT(m_Count > 0);
    3093  return m_pArray[m_Count - 1];
    3094  }
    3095 
    3096  void reserve(size_t newCapacity, bool freeMemory = false)
    3097  {
    3098  newCapacity = VMA_MAX(newCapacity, m_Count);
    3099 
    3100  if((newCapacity < m_Capacity) && !freeMemory)
    3101  {
    3102  newCapacity = m_Capacity;
    3103  }
    3104 
    3105  if(newCapacity != m_Capacity)
    3106  {
    3107  T* const newArray = newCapacity ? VmaAllocateArray<T>(m_Allocator.m_pCallbacks, newCapacity) : VMA_NULL;
    3108  if(m_Count != 0)
    3109  {
    3110  memcpy(newArray, m_pArray, m_Count * sizeof(T));
    3111  }
    3112  VmaFree(m_Allocator.m_pCallbacks, m_pArray);
    3113  m_Capacity = newCapacity;
    3114  m_pArray = newArray;
    3115  }
    3116  }
    3117 
    3118  void resize(size_t newCount, bool freeMemory = false)
    3119  {
    3120  size_t newCapacity = m_Capacity;
    3121  if(newCount > m_Capacity)
    3122  {
    3123  newCapacity = VMA_MAX(newCount, VMA_MAX(m_Capacity * 3 / 2, (size_t)8));
    3124  }
    3125  else if(freeMemory)
    3126  {
    3127  newCapacity = newCount;
    3128  }
    3129 
    3130  if(newCapacity != m_Capacity)
    3131  {
    3132  T* const newArray = newCapacity ? VmaAllocateArray<T>(m_Allocator.m_pCallbacks, newCapacity) : VMA_NULL;
    3133  const size_t elementsToCopy = VMA_MIN(m_Count, newCount);
    3134  if(elementsToCopy != 0)
    3135  {
    3136  memcpy(newArray, m_pArray, elementsToCopy * sizeof(T));
    3137  }
    3138  VmaFree(m_Allocator.m_pCallbacks, m_pArray);
    3139  m_Capacity = newCapacity;
    3140  m_pArray = newArray;
    3141  }
    3142 
    3143  m_Count = newCount;
    3144  }
    3145 
    3146  void clear(bool freeMemory = false)
    3147  {
    3148  resize(0, freeMemory);
    3149  }
    3150 
    3151  void insert(size_t index, const T& src)
    3152  {
    3153  VMA_HEAVY_ASSERT(index <= m_Count);
    3154  const size_t oldCount = size();
    3155  resize(oldCount + 1);
    3156  if(index < oldCount)
    3157  {
    3158  memmove(m_pArray + (index + 1), m_pArray + index, (oldCount - index) * sizeof(T));
    3159  }
    3160  m_pArray[index] = src;
    3161  }
    3162 
    3163  void remove(size_t index)
    3164  {
    3165  VMA_HEAVY_ASSERT(index < m_Count);
    3166  const size_t oldCount = size();
    3167  if(index < oldCount - 1)
    3168  {
    3169  memmove(m_pArray + index, m_pArray + (index + 1), (oldCount - index - 1) * sizeof(T));
    3170  }
    3171  resize(oldCount - 1);
    3172  }
    3173 
    3174  void push_back(const T& src)
    3175  {
    3176  const size_t newIndex = size();
    3177  resize(newIndex + 1);
    3178  m_pArray[newIndex] = src;
    3179  }
    3180 
    3181  void pop_back()
    3182  {
    3183  VMA_HEAVY_ASSERT(m_Count > 0);
    3184  resize(size() - 1);
    3185  }
    3186 
    3187  void push_front(const T& src)
    3188  {
    3189  insert(0, src);
    3190  }
    3191 
    3192  void pop_front()
    3193  {
    3194  VMA_HEAVY_ASSERT(m_Count > 0);
    3195  remove(0);
    3196  }
    3197 
    3198  typedef T* iterator;
    3199 
    3200  iterator begin() { return m_pArray; }
    3201  iterator end() { return m_pArray + m_Count; }
    3202 
    3203 private:
    3204  AllocatorT m_Allocator;
    3205  T* m_pArray;
    3206  size_t m_Count;
    3207  size_t m_Capacity;
    3208 };
    3209 
    3210 template<typename T, typename allocatorT>
    3211 static void VmaVectorInsert(VmaVector<T, allocatorT>& vec, size_t index, const T& item)
    3212 {
    3213  vec.insert(index, item);
    3214 }
    3215 
    3216 template<typename T, typename allocatorT>
    3217 static void VmaVectorRemove(VmaVector<T, allocatorT>& vec, size_t index)
    3218 {
    3219  vec.remove(index);
    3220 }
    3221 
    3222 #endif // #if VMA_USE_STL_VECTOR
    3223 
    3224 template<typename CmpLess, typename VectorT>
    3225 size_t VmaVectorInsertSorted(VectorT& vector, const typename VectorT::value_type& value)
    3226 {
    3227  const size_t indexToInsert = VmaBinaryFindFirstNotLess(
    3228  vector.data(),
    3229  vector.data() + vector.size(),
    3230  value,
    3231  CmpLess()) - vector.data();
    3232  VmaVectorInsert(vector, indexToInsert, value);
    3233  return indexToInsert;
    3234 }
    3235 
    3236 template<typename CmpLess, typename VectorT>
    3237 bool VmaVectorRemoveSorted(VectorT& vector, const typename VectorT::value_type& value)
    3238 {
    3239  CmpLess comparator;
    3240  typename VectorT::iterator it = VmaBinaryFindFirstNotLess(
    3241  vector.begin(),
    3242  vector.end(),
    3243  value,
    3244  comparator);
    3245  if((it != vector.end()) && !comparator(*it, value) && !comparator(value, *it))
    3246  {
    3247  size_t indexToRemove = it - vector.begin();
    3248  VmaVectorRemove(vector, indexToRemove);
    3249  return true;
    3250  }
    3251  return false;
    3252 }
    3253 
    3254 template<typename CmpLess, typename VectorT>
    3255 size_t VmaVectorFindSorted(const VectorT& vector, const typename VectorT::value_type& value)
    3256 {
    3257  CmpLess comparator;
    3258  typename VectorT::iterator it = VmaBinaryFindFirstNotLess(
    3259  vector.data(),
    3260  vector.data() + vector.size(),
    3261  value,
    3262  comparator);
    3263  if(it != vector.data() + vector.size() && !comparator(*it, value) && !comparator(value, *it))
    3264  {
    3265  return it - vector.data();
    3266  }
    3267  else
    3268  {
    3269  return vector.size();
    3270  }
    3271 }
    3272 
    3273 ////////////////////////////////////////////////////////////////////////////////
    3274 // class VmaPoolAllocator
    3275 
    3276 /*
    3277 Allocator for objects of type T using a list of arrays (pools) to speed up
    3278 allocation. The number of elements that can be allocated is not bounded, because
    3279 the allocator can create multiple blocks.
    3280 */
    3281 template<typename T>
    3282 class VmaPoolAllocator
    3283 {
    3284  VMA_CLASS_NO_COPY(VmaPoolAllocator)
    3285 public:
    3286  VmaPoolAllocator(const VkAllocationCallbacks* pAllocationCallbacks, size_t itemsPerBlock);
    3287  ~VmaPoolAllocator();
    3288  void Clear();
    3289  T* Alloc();
    3290  void Free(T* ptr);
    3291 
    3292 private:
    3293  union Item
    3294  {
    3295  uint32_t NextFreeIndex;
    3296  T Value;
    3297  };
    3298 
    3299  struct ItemBlock
    3300  {
    3301  Item* pItems;
    3302  uint32_t FirstFreeIndex;
    3303  };
    3304 
    3305  const VkAllocationCallbacks* m_pAllocationCallbacks;
    3306  size_t m_ItemsPerBlock;
    3307  VmaVector< ItemBlock, VmaStlAllocator<ItemBlock> > m_ItemBlocks;
    3308 
    3309  ItemBlock& CreateNewBlock();
    3310 };
    3311 
    3312 template<typename T>
    3313 VmaPoolAllocator<T>::VmaPoolAllocator(const VkAllocationCallbacks* pAllocationCallbacks, size_t itemsPerBlock) :
    3314  m_pAllocationCallbacks(pAllocationCallbacks),
    3315  m_ItemsPerBlock(itemsPerBlock),
    3316  m_ItemBlocks(VmaStlAllocator<ItemBlock>(pAllocationCallbacks))
    3317 {
    3318  VMA_ASSERT(itemsPerBlock > 0);
    3319 }
    3320 
    3321 template<typename T>
    3322 VmaPoolAllocator<T>::~VmaPoolAllocator()
    3323 {
    3324  Clear();
    3325 }
    3326 
    3327 template<typename T>
    3328 void VmaPoolAllocator<T>::Clear()
    3329 {
    3330  for(size_t i = m_ItemBlocks.size(); i--; )
    3331  vma_delete_array(m_pAllocationCallbacks, m_ItemBlocks[i].pItems, m_ItemsPerBlock);
    3332  m_ItemBlocks.clear();
    3333 }
    3334 
    3335 template<typename T>
    3336 T* VmaPoolAllocator<T>::Alloc()
    3337 {
    3338  for(size_t i = m_ItemBlocks.size(); i--; )
    3339  {
    3340  ItemBlock& block = m_ItemBlocks[i];
    3341  // This block has some free items: Use first one.
    3342  if(block.FirstFreeIndex != UINT32_MAX)
    3343  {
    3344  Item* const pItem = &block.pItems[block.FirstFreeIndex];
    3345  block.FirstFreeIndex = pItem->NextFreeIndex;
    3346  return &pItem->Value;
    3347  }
    3348  }
    3349 
    3350  // No block has a free item: Create a new one and use it.
    3351  ItemBlock& newBlock = CreateNewBlock();
    3352  Item* const pItem = &newBlock.pItems[0];
    3353  newBlock.FirstFreeIndex = pItem->NextFreeIndex;
    3354  return &pItem->Value;
    3355 }
    3356 
    3357 template<typename T>
    3358 void VmaPoolAllocator<T>::Free(T* ptr)
    3359 {
    3360  // Search all memory blocks to find ptr.
    3361  for(size_t i = 0; i < m_ItemBlocks.size(); ++i)
    3362  {
    3363  ItemBlock& block = m_ItemBlocks[i];
    3364 
    3365  // Casting to union.
    3366  Item* pItemPtr;
    3367  memcpy(&pItemPtr, &ptr, sizeof(pItemPtr));
    3368 
    3369  // Check if pItemPtr is in address range of this block.
    3370  if((pItemPtr >= block.pItems) && (pItemPtr < block.pItems + m_ItemsPerBlock))
    3371  {
    3372  const uint32_t index = static_cast<uint32_t>(pItemPtr - block.pItems);
    3373  pItemPtr->NextFreeIndex = block.FirstFreeIndex;
    3374  block.FirstFreeIndex = index;
    3375  return;
    3376  }
    3377  }
    3378  VMA_ASSERT(0 && "Pointer doesn't belong to this memory pool.");
    3379 }
    3380 
    3381 template<typename T>
    3382 typename VmaPoolAllocator<T>::ItemBlock& VmaPoolAllocator<T>::CreateNewBlock()
    3383 {
    3384  ItemBlock newBlock = {
    3385  vma_new_array(m_pAllocationCallbacks, Item, m_ItemsPerBlock), 0 };
    3386 
    3387  m_ItemBlocks.push_back(newBlock);
    3388 
    3389  // Setup singly-linked list of all free items in this block.
    3390  for(uint32_t i = 0; i < m_ItemsPerBlock - 1; ++i)
    3391  newBlock.pItems[i].NextFreeIndex = i + 1;
    3392  newBlock.pItems[m_ItemsPerBlock - 1].NextFreeIndex = UINT32_MAX;
    3393  return m_ItemBlocks.back();
    3394 }
    3395 
    3396 ////////////////////////////////////////////////////////////////////////////////
    3397 // class VmaRawList, VmaList
    3398 
    3399 #if VMA_USE_STL_LIST
    3400 
    3401 #define VmaList std::list
    3402 
    3403 #else // #if VMA_USE_STL_LIST
    3404 
    3405 template<typename T>
    3406 struct VmaListItem
    3407 {
    3408  VmaListItem* pPrev;
    3409  VmaListItem* pNext;
    3410  T Value;
    3411 };
    3412 
    3413 // Doubly linked list.
    3414 template<typename T>
    3415 class VmaRawList
    3416 {
    3417  VMA_CLASS_NO_COPY(VmaRawList)
    3418 public:
    3419  typedef VmaListItem<T> ItemType;
    3420 
    3421  VmaRawList(const VkAllocationCallbacks* pAllocationCallbacks);
    3422  ~VmaRawList();
    3423  void Clear();
    3424 
    3425  size_t GetCount() const { return m_Count; }
    3426  bool IsEmpty() const { return m_Count == 0; }
    3427 
    3428  ItemType* Front() { return m_pFront; }
    3429  const ItemType* Front() const { return m_pFront; }
    3430  ItemType* Back() { return m_pBack; }
    3431  const ItemType* Back() const { return m_pBack; }
    3432 
    3433  ItemType* PushBack();
    3434  ItemType* PushFront();
    3435  ItemType* PushBack(const T& value);
    3436  ItemType* PushFront(const T& value);
    3437  void PopBack();
    3438  void PopFront();
    3439 
    3440  // Item can be null - it means PushBack.
    3441  ItemType* InsertBefore(ItemType* pItem);
    3442  // Item can be null - it means PushFront.
    3443  ItemType* InsertAfter(ItemType* pItem);
    3444 
    3445  ItemType* InsertBefore(ItemType* pItem, const T& value);
    3446  ItemType* InsertAfter(ItemType* pItem, const T& value);
    3447 
    3448  void Remove(ItemType* pItem);
    3449 
    3450 private:
    3451  const VkAllocationCallbacks* const m_pAllocationCallbacks;
    3452  VmaPoolAllocator<ItemType> m_ItemAllocator;
    3453  ItemType* m_pFront;
    3454  ItemType* m_pBack;
    3455  size_t m_Count;
    3456 };
    3457 
    3458 template<typename T>
    3459 VmaRawList<T>::VmaRawList(const VkAllocationCallbacks* pAllocationCallbacks) :
    3460  m_pAllocationCallbacks(pAllocationCallbacks),
    3461  m_ItemAllocator(pAllocationCallbacks, 128),
    3462  m_pFront(VMA_NULL),
    3463  m_pBack(VMA_NULL),
    3464  m_Count(0)
    3465 {
    3466 }
    3467 
    3468 template<typename T>
    3469 VmaRawList<T>::~VmaRawList()
    3470 {
    3471  // Intentionally not calling Clear, because that would require unnecessary
    3472  // computation to return all items to m_ItemAllocator as free.
    3473 }
    3474 
    3475 template<typename T>
    3476 void VmaRawList<T>::Clear()
    3477 {
    3478  if(IsEmpty() == false)
    3479  {
    3480  ItemType* pItem = m_pBack;
    3481  while(pItem != VMA_NULL)
    3482  {
    3483  ItemType* const pPrevItem = pItem->pPrev;
    3484  m_ItemAllocator.Free(pItem);
    3485  pItem = pPrevItem;
    3486  }
    3487  m_pFront = VMA_NULL;
    3488  m_pBack = VMA_NULL;
    3489  m_Count = 0;
    3490  }
    3491 }
    3492 
    3493 template<typename T>
    3494 VmaListItem<T>* VmaRawList<T>::PushBack()
    3495 {
    3496  ItemType* const pNewItem = m_ItemAllocator.Alloc();
    3497  pNewItem->pNext = VMA_NULL;
    3498  if(IsEmpty())
    3499  {
    3500  pNewItem->pPrev = VMA_NULL;
    3501  m_pFront = pNewItem;
    3502  m_pBack = pNewItem;
    3503  m_Count = 1;
    3504  }
    3505  else
    3506  {
    3507  pNewItem->pPrev = m_pBack;
    3508  m_pBack->pNext = pNewItem;
    3509  m_pBack = pNewItem;
    3510  ++m_Count;
    3511  }
    3512  return pNewItem;
    3513 }
    3514 
    3515 template<typename T>
    3516 VmaListItem<T>* VmaRawList<T>::PushFront()
    3517 {
    3518  ItemType* const pNewItem = m_ItemAllocator.Alloc();
    3519  pNewItem->pPrev = VMA_NULL;
    3520  if(IsEmpty())
    3521  {
    3522  pNewItem->pNext = VMA_NULL;
    3523  m_pFront = pNewItem;
    3524  m_pBack = pNewItem;
    3525  m_Count = 1;
    3526  }
    3527  else
    3528  {
    3529  pNewItem->pNext = m_pFront;
    3530  m_pFront->pPrev = pNewItem;
    3531  m_pFront = pNewItem;
    3532  ++m_Count;
    3533  }
    3534  return pNewItem;
    3535 }
    3536 
    3537 template<typename T>
    3538 VmaListItem<T>* VmaRawList<T>::PushBack(const T& value)
    3539 {
    3540  ItemType* const pNewItem = PushBack();
    3541  pNewItem->Value = value;
    3542  return pNewItem;
    3543 }
    3544 
    3545 template<typename T>
    3546 VmaListItem<T>* VmaRawList<T>::PushFront(const T& value)
    3547 {
    3548  ItemType* const pNewItem = PushFront();
    3549  pNewItem->Value = value;
    3550  return pNewItem;
    3551 }
    3552 
    3553 template<typename T>
    3554 void VmaRawList<T>::PopBack()
    3555 {
    3556  VMA_HEAVY_ASSERT(m_Count > 0);
    3557  ItemType* const pBackItem = m_pBack;
    3558  ItemType* const pPrevItem = pBackItem->pPrev;
    3559  if(pPrevItem != VMA_NULL)
    3560  {
    3561  pPrevItem->pNext = VMA_NULL;
    3562  }
    3563  m_pBack = pPrevItem;
    3564  m_ItemAllocator.Free(pBackItem);
    3565  --m_Count;
    3566 }
    3567 
    3568 template<typename T>
    3569 void VmaRawList<T>::PopFront()
    3570 {
    3571  VMA_HEAVY_ASSERT(m_Count > 0);
    3572  ItemType* const pFrontItem = m_pFront;
    3573  ItemType* const pNextItem = pFrontItem->pNext;
    3574  if(pNextItem != VMA_NULL)
    3575  {
    3576  pNextItem->pPrev = VMA_NULL;
    3577  }
    3578  m_pFront = pNextItem;
    3579  m_ItemAllocator.Free(pFrontItem);
    3580  --m_Count;
    3581 }
    3582 
    3583 template<typename T>
    3584 void VmaRawList<T>::Remove(ItemType* pItem)
    3585 {
    3586  VMA_HEAVY_ASSERT(pItem != VMA_NULL);
    3587  VMA_HEAVY_ASSERT(m_Count > 0);
    3588 
    3589  if(pItem->pPrev != VMA_NULL)
    3590  {
    3591  pItem->pPrev->pNext = pItem->pNext;
    3592  }
    3593  else
    3594  {
    3595  VMA_HEAVY_ASSERT(m_pFront == pItem);
    3596  m_pFront = pItem->pNext;
    3597  }
    3598 
    3599  if(pItem->pNext != VMA_NULL)
    3600  {
    3601  pItem->pNext->pPrev = pItem->pPrev;
    3602  }
    3603  else
    3604  {
    3605  VMA_HEAVY_ASSERT(m_pBack == pItem);
    3606  m_pBack = pItem->pPrev;
    3607  }
    3608 
    3609  m_ItemAllocator.Free(pItem);
    3610  --m_Count;
    3611 }
    3612 
    3613 template<typename T>
    3614 VmaListItem<T>* VmaRawList<T>::InsertBefore(ItemType* pItem)
    3615 {
    3616  if(pItem != VMA_NULL)
    3617  {
    3618  ItemType* const prevItem = pItem->pPrev;
    3619  ItemType* const newItem = m_ItemAllocator.Alloc();
    3620  newItem->pPrev = prevItem;
    3621  newItem->pNext = pItem;
    3622  pItem->pPrev = newItem;
    3623  if(prevItem != VMA_NULL)
    3624  {
    3625  prevItem->pNext = newItem;
    3626  }
    3627  else
    3628  {
    3629  VMA_HEAVY_ASSERT(m_pFront == pItem);
    3630  m_pFront = newItem;
    3631  }
    3632  ++m_Count;
    3633  return newItem;
    3634  }
    3635  else
    3636  return PushBack();
    3637 }
    3638 
    3639 template<typename T>
    3640 VmaListItem<T>* VmaRawList<T>::InsertAfter(ItemType* pItem)
    3641 {
    3642  if(pItem != VMA_NULL)
    3643  {
    3644  ItemType* const nextItem = pItem->pNext;
    3645  ItemType* const newItem = m_ItemAllocator.Alloc();
    3646  newItem->pNext = nextItem;
    3647  newItem->pPrev = pItem;
    3648  pItem->pNext = newItem;
    3649  if(nextItem != VMA_NULL)
    3650  {
    3651  nextItem->pPrev = newItem;
    3652  }
    3653  else
    3654  {
    3655  VMA_HEAVY_ASSERT(m_pBack == pItem);
    3656  m_pBack = newItem;
    3657  }
    3658  ++m_Count;
    3659  return newItem;
    3660  }
    3661  else
    3662  return PushFront();
    3663 }
    3664 
    3665 template<typename T>
    3666 VmaListItem<T>* VmaRawList<T>::InsertBefore(ItemType* pItem, const T& value)
    3667 {
    3668  ItemType* const newItem = InsertBefore(pItem);
    3669  newItem->Value = value;
    3670  return newItem;
    3671 }
    3672 
    3673 template<typename T>
    3674 VmaListItem<T>* VmaRawList<T>::InsertAfter(ItemType* pItem, const T& value)
    3675 {
    3676  ItemType* const newItem = InsertAfter(pItem);
    3677  newItem->Value = value;
    3678  return newItem;
    3679 }
    3680 
    3681 template<typename T, typename AllocatorT>
    3682 class VmaList
    3683 {
    3684  VMA_CLASS_NO_COPY(VmaList)
    3685 public:
    3686  class iterator
    3687  {
    3688  public:
    3689  iterator() :
    3690  m_pList(VMA_NULL),
    3691  m_pItem(VMA_NULL)
    3692  {
    3693  }
    3694 
    3695  T& operator*() const
    3696  {
    3697  VMA_HEAVY_ASSERT(m_pItem != VMA_NULL);
    3698  return m_pItem->Value;
    3699  }
    3700  T* operator->() const
    3701  {
    3702  VMA_HEAVY_ASSERT(m_pItem != VMA_NULL);
    3703  return &m_pItem->Value;
    3704  }
    3705 
    3706  iterator& operator++()
    3707  {
    3708  VMA_HEAVY_ASSERT(m_pItem != VMA_NULL);
    3709  m_pItem = m_pItem->pNext;
    3710  return *this;
    3711  }
    3712  iterator& operator--()
    3713  {
    3714  if(m_pItem != VMA_NULL)
    3715  {
    3716  m_pItem = m_pItem->pPrev;
    3717  }
    3718  else
    3719  {
    3720  VMA_HEAVY_ASSERT(!m_pList->IsEmpty());
    3721  m_pItem = m_pList->Back();
    3722  }
    3723  return *this;
    3724  }
    3725 
    3726  iterator operator++(int)
    3727  {
    3728  iterator result = *this;
    3729  ++*this;
    3730  return result;
    3731  }
    3732  iterator operator--(int)
    3733  {
    3734  iterator result = *this;
    3735  --*this;
    3736  return result;
    3737  }
    3738 
    3739  bool operator==(const iterator& rhs) const
    3740  {
    3741  VMA_HEAVY_ASSERT(m_pList == rhs.m_pList);
    3742  return m_pItem == rhs.m_pItem;
    3743  }
    3744  bool operator!=(const iterator& rhs) const
    3745  {
    3746  VMA_HEAVY_ASSERT(m_pList == rhs.m_pList);
    3747  return m_pItem != rhs.m_pItem;
    3748  }
    3749 
    3750  private:
    3751  VmaRawList<T>* m_pList;
    3752  VmaListItem<T>* m_pItem;
    3753 
    3754  iterator(VmaRawList<T>* pList, VmaListItem<T>* pItem) :
    3755  m_pList(pList),
    3756  m_pItem(pItem)
    3757  {
    3758  }
    3759 
    3760  friend class VmaList<T, AllocatorT>;
    3761  };
    3762 
    3763  class const_iterator
    3764  {
    3765  public:
    3766  const_iterator() :
    3767  m_pList(VMA_NULL),
    3768  m_pItem(VMA_NULL)
    3769  {
    3770  }
    3771 
    3772  const_iterator(const iterator& src) :
    3773  m_pList(src.m_pList),
    3774  m_pItem(src.m_pItem)
    3775  {
    3776  }
    3777 
    3778  const T& operator*() const
    3779  {
    3780  VMA_HEAVY_ASSERT(m_pItem != VMA_NULL);
    3781  return m_pItem->Value;
    3782  }
    3783  const T* operator->() const
    3784  {
    3785  VMA_HEAVY_ASSERT(m_pItem != VMA_NULL);
    3786  return &m_pItem->Value;
    3787  }
    3788 
    3789  const_iterator& operator++()
    3790  {
    3791  VMA_HEAVY_ASSERT(m_pItem != VMA_NULL);
    3792  m_pItem = m_pItem->pNext;
    3793  return *this;
    3794  }
    3795  const_iterator& operator--()
    3796  {
    3797  if(m_pItem != VMA_NULL)
    3798  {
    3799  m_pItem = m_pItem->pPrev;
    3800  }
    3801  else
    3802  {
    3803  VMA_HEAVY_ASSERT(!m_pList->IsEmpty());
    3804  m_pItem = m_pList->Back();
    3805  }
    3806  return *this;
    3807  }
    3808 
    3809  const_iterator operator++(int)
    3810  {
    3811  const_iterator result = *this;
    3812  ++*this;
    3813  return result;
    3814  }
    3815  const_iterator operator--(int)
    3816  {
    3817  const_iterator result = *this;
    3818  --*this;
    3819  return result;
    3820  }
    3821 
    3822  bool operator==(const const_iterator& rhs) const
    3823  {
    3824  VMA_HEAVY_ASSERT(m_pList == rhs.m_pList);
    3825  return m_pItem == rhs.m_pItem;
    3826  }
    3827  bool operator!=(const const_iterator& rhs) const
    3828  {
    3829  VMA_HEAVY_ASSERT(m_pList == rhs.m_pList);
    3830  return m_pItem != rhs.m_pItem;
    3831  }
    3832 
    3833  private:
    3834  const_iterator(const VmaRawList<T>* pList, const VmaListItem<T>* pItem) :
    3835  m_pList(pList),
    3836  m_pItem(pItem)
    3837  {
    3838  }
    3839 
    3840  const VmaRawList<T>* m_pList;
    3841  const VmaListItem<T>* m_pItem;
    3842 
    3843  friend class VmaList<T, AllocatorT>;
    3844  };
    3845 
    3846  VmaList(const AllocatorT& allocator) : m_RawList(allocator.m_pCallbacks) { }
    3847 
    3848  bool empty() const { return m_RawList.IsEmpty(); }
    3849  size_t size() const { return m_RawList.GetCount(); }
    3850 
    3851  iterator begin() { return iterator(&m_RawList, m_RawList.Front()); }
    3852  iterator end() { return iterator(&m_RawList, VMA_NULL); }
    3853 
    3854  const_iterator cbegin() const { return const_iterator(&m_RawList, m_RawList.Front()); }
    3855  const_iterator cend() const { return const_iterator(&m_RawList, VMA_NULL); }
    3856 
    3857  void clear() { m_RawList.Clear(); }
    3858  void push_back(const T& value) { m_RawList.PushBack(value); }
    3859  void erase(iterator it) { m_RawList.Remove(it.m_pItem); }
    3860  iterator insert(iterator it, const T& value) { return iterator(&m_RawList, m_RawList.InsertBefore(it.m_pItem, value)); }
    3861 
    3862 private:
    3863  VmaRawList<T> m_RawList;
    3864 };
    3865 
    3866 #endif // #if VMA_USE_STL_LIST
    3867 
    3869 // class VmaMap
    3870 
    3871 // Unused in this version.
    3872 #if 0
    3873 
    3874 #if VMA_USE_STL_UNORDERED_MAP
    3875 
    3876 #define VmaPair std::pair
    3877 
    3878 #define VMA_MAP_TYPE(KeyT, ValueT) \
    3879  std::unordered_map< KeyT, ValueT, std::hash<KeyT>, std::equal_to<KeyT>, VmaStlAllocator< std::pair<KeyT, ValueT> > >
    3880 
    3881 #else // #if VMA_USE_STL_UNORDERED_MAP
    3882 
    3883 template<typename T1, typename T2>
    3884 struct VmaPair
    3885 {
    3886  T1 first;
    3887  T2 second;
    3888 
    3889  VmaPair() : first(), second() { }
    3890  VmaPair(const T1& firstSrc, const T2& secondSrc) : first(firstSrc), second(secondSrc) { }
    3891 };
    3892 
    3893 /* Class compatible with a subset of the interface of std::unordered_map.
    3894 KeyT and ValueT must be POD because they are stored in a VmaVector.
    3895 */
    3896 template<typename KeyT, typename ValueT>
    3897 class VmaMap
    3898 {
    3899 public:
    3900  typedef VmaPair<KeyT, ValueT> PairType;
    3901  typedef PairType* iterator;
    3902 
    3903  VmaMap(const VmaStlAllocator<PairType>& allocator) : m_Vector(allocator) { }
    3904 
    3905  iterator begin() { return m_Vector.begin(); }
    3906  iterator end() { return m_Vector.end(); }
    3907 
    3908  void insert(const PairType& pair);
    3909  iterator find(const KeyT& key);
    3910  void erase(iterator it);
    3911 
    3912 private:
    3913  VmaVector< PairType, VmaStlAllocator<PairType> > m_Vector;
    3914 };
    3915 
    3916 #define VMA_MAP_TYPE(KeyT, ValueT) VmaMap<KeyT, ValueT>
    3917 
    3918 template<typename FirstT, typename SecondT>
    3919 struct VmaPairFirstLess
    3920 {
    3921  bool operator()(const VmaPair<FirstT, SecondT>& lhs, const VmaPair<FirstT, SecondT>& rhs) const
    3922  {
    3923  return lhs.first < rhs.first;
    3924  }
    3925  bool operator()(const VmaPair<FirstT, SecondT>& lhs, const FirstT& rhsFirst) const
    3926  {
    3927  return lhs.first < rhsFirst;
    3928  }
    3929 };
    3930 
    3931 template<typename KeyT, typename ValueT>
    3932 void VmaMap<KeyT, ValueT>::insert(const PairType& pair)
    3933 {
    3934  const size_t indexToInsert = VmaBinaryFindFirstNotLess(
    3935  m_Vector.data(),
    3936  m_Vector.data() + m_Vector.size(),
    3937  pair,
    3938  VmaPairFirstLess<KeyT, ValueT>()) - m_Vector.data();
    3939  VmaVectorInsert(m_Vector, indexToInsert, pair);
    3940 }
    3941 
    3942 template<typename KeyT, typename ValueT>
    3943 VmaPair<KeyT, ValueT>* VmaMap<KeyT, ValueT>::find(const KeyT& key)
    3944 {
    3945  PairType* it = VmaBinaryFindFirstNotLess(
    3946  m_Vector.data(),
    3947  m_Vector.data() + m_Vector.size(),
    3948  key,
    3949  VmaPairFirstLess<KeyT, ValueT>());
    3950  if((it != m_Vector.end()) && (it->first == key))
    3951  {
    3952  return it;
    3953  }
    3954  else
    3955  {
    3956  return m_Vector.end();
    3957  }
    3958 }
    3959 
    3960 template<typename KeyT, typename ValueT>
    3961 void VmaMap<KeyT, ValueT>::erase(iterator it)
    3962 {
    3963  VmaVectorRemove(m_Vector, it - m_Vector.begin());
    3964 }
    3965 
    3966 #endif // #if VMA_USE_STL_UNORDERED_MAP
    3967 
    3968 #endif // #if 0
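The disabled `VmaMap` above keeps key-value pairs in a vector sorted by key and binary-searches it for both `insert()` and `find()`. Below is a minimal self-contained sketch of that technique; the names are illustrative and `std::lower_bound` stands in for `VmaBinaryFindFirstNotLess`:

```cpp
#include <algorithm>
#include <cstdint>
#include <utility>
#include <vector>

// Sorted-vector map sketch: contiguous storage, O(log n) lookup via
// binary search, O(n) insert due to element shifting - a good trade-off
// for small maps with few insertions, as in VmaMap's intended use.
struct SortedVectorMap
{
    std::vector<std::pair<uint32_t, uint32_t>> items; // sorted by .first

    void insert(uint32_t key, uint32_t value)
    {
        auto it = std::lower_bound(items.begin(), items.end(), key,
            [](const std::pair<uint32_t, uint32_t>& p, uint32_t k) { return p.first < k; });
        items.insert(it, {key, value});
    }

    // Returns pointer to the value, or nullptr if absent
    // (mirroring find() == end() in the class above).
    const uint32_t* find(uint32_t key) const
    {
        auto it = std::lower_bound(items.begin(), items.end(), key,
            [](const std::pair<uint32_t, uint32_t>& p, uint32_t k) { return p.first < k; });
        return (it != items.end() && it->first == key) ? &it->second : nullptr;
    }
};
```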
    3969 
    3971 
    3972 class VmaDeviceMemoryBlock;
    3973 
    3974 enum VMA_CACHE_OPERATION { VMA_CACHE_FLUSH, VMA_CACHE_INVALIDATE };
    3975 
    3976 struct VmaAllocation_T
    3977 {
    3978  VMA_CLASS_NO_COPY(VmaAllocation_T)
    3979 private:
    3980  static const uint8_t MAP_COUNT_FLAG_PERSISTENT_MAP = 0x80;
    3981 
    3982  enum FLAGS
    3983  {
    3984  FLAG_USER_DATA_STRING = 0x01,
    3985  };
    3986 
    3987 public:
    3988  enum ALLOCATION_TYPE
    3989  {
    3990  ALLOCATION_TYPE_NONE,
    3991  ALLOCATION_TYPE_BLOCK,
    3992  ALLOCATION_TYPE_DEDICATED,
    3993  };
    3994 
    3995  VmaAllocation_T(uint32_t currentFrameIndex, bool userDataString) :
    3996  m_Alignment(1),
    3997  m_Size(0),
    3998  m_pUserData(VMA_NULL),
    3999  m_LastUseFrameIndex(currentFrameIndex),
    4000  m_Type((uint8_t)ALLOCATION_TYPE_NONE),
    4001  m_SuballocationType((uint8_t)VMA_SUBALLOCATION_TYPE_UNKNOWN),
    4002  m_MapCount(0),
    4003  m_Flags(userDataString ? (uint8_t)FLAG_USER_DATA_STRING : 0)
    4004  {
    4005 #if VMA_STATS_STRING_ENABLED
    4006  m_CreationFrameIndex = currentFrameIndex;
    4007  m_BufferImageUsage = 0;
    4008 #endif
    4009  }
    4010 
    4011  ~VmaAllocation_T()
    4012  {
    4013  VMA_ASSERT((m_MapCount & ~MAP_COUNT_FLAG_PERSISTENT_MAP) == 0 && "Allocation was not unmapped before destruction.");
    4014 
    4015  // Check if owned string was freed.
    4016  VMA_ASSERT(m_pUserData == VMA_NULL);
    4017  }
    4018 
    4019  void InitBlockAllocation(
    4020  VmaPool hPool,
    4021  VmaDeviceMemoryBlock* block,
    4022  VkDeviceSize offset,
    4023  VkDeviceSize alignment,
    4024  VkDeviceSize size,
    4025  VmaSuballocationType suballocationType,
    4026  bool mapped,
    4027  bool canBecomeLost)
    4028  {
    4029  VMA_ASSERT(m_Type == ALLOCATION_TYPE_NONE);
    4030  VMA_ASSERT(block != VMA_NULL);
    4031  m_Type = (uint8_t)ALLOCATION_TYPE_BLOCK;
    4032  m_Alignment = alignment;
    4033  m_Size = size;
    4034  m_MapCount = mapped ? MAP_COUNT_FLAG_PERSISTENT_MAP : 0;
    4035  m_SuballocationType = (uint8_t)suballocationType;
    4036  m_BlockAllocation.m_hPool = hPool;
    4037  m_BlockAllocation.m_Block = block;
    4038  m_BlockAllocation.m_Offset = offset;
    4039  m_BlockAllocation.m_CanBecomeLost = canBecomeLost;
    4040  }
    4041 
    4042  void InitLost()
    4043  {
    4044  VMA_ASSERT(m_Type == ALLOCATION_TYPE_NONE);
    4045  VMA_ASSERT(m_LastUseFrameIndex.load() == VMA_FRAME_INDEX_LOST);
    4046  m_Type = (uint8_t)ALLOCATION_TYPE_BLOCK;
    4047  m_BlockAllocation.m_hPool = VK_NULL_HANDLE;
    4048  m_BlockAllocation.m_Block = VMA_NULL;
    4049  m_BlockAllocation.m_Offset = 0;
    4050  m_BlockAllocation.m_CanBecomeLost = true;
    4051  }
    4052 
    4053  void ChangeBlockAllocation(
    4054  VmaAllocator hAllocator,
    4055  VmaDeviceMemoryBlock* block,
    4056  VkDeviceSize offset);
    4057 
    4058  // A non-null pMappedData means the allocation was created with the MAPPED flag.
    4059  void InitDedicatedAllocation(
    4060  uint32_t memoryTypeIndex,
    4061  VkDeviceMemory hMemory,
    4062  VmaSuballocationType suballocationType,
    4063  void* pMappedData,
    4064  VkDeviceSize size)
    4065  {
    4066  VMA_ASSERT(m_Type == ALLOCATION_TYPE_NONE);
    4067  VMA_ASSERT(hMemory != VK_NULL_HANDLE);
    4068  m_Type = (uint8_t)ALLOCATION_TYPE_DEDICATED;
    4069  m_Alignment = 0;
    4070  m_Size = size;
    4071  m_SuballocationType = (uint8_t)suballocationType;
    4072  m_MapCount = (pMappedData != VMA_NULL) ? MAP_COUNT_FLAG_PERSISTENT_MAP : 0;
    4073  m_DedicatedAllocation.m_MemoryTypeIndex = memoryTypeIndex;
    4074  m_DedicatedAllocation.m_hMemory = hMemory;
    4075  m_DedicatedAllocation.m_pMappedData = pMappedData;
    4076  }
    4077 
    4078  ALLOCATION_TYPE GetType() const { return (ALLOCATION_TYPE)m_Type; }
    4079  VkDeviceSize GetAlignment() const { return m_Alignment; }
    4080  VkDeviceSize GetSize() const { return m_Size; }
    4081  bool IsUserDataString() const { return (m_Flags & FLAG_USER_DATA_STRING) != 0; }
    4082  void* GetUserData() const { return m_pUserData; }
    4083  void SetUserData(VmaAllocator hAllocator, void* pUserData);
    4084  VmaSuballocationType GetSuballocationType() const { return (VmaSuballocationType)m_SuballocationType; }
    4085 
    4086  VmaDeviceMemoryBlock* GetBlock() const
    4087  {
    4088  VMA_ASSERT(m_Type == ALLOCATION_TYPE_BLOCK);
    4089  return m_BlockAllocation.m_Block;
    4090  }
    4091  VkDeviceSize GetOffset() const;
    4092  VkDeviceMemory GetMemory() const;
    4093  uint32_t GetMemoryTypeIndex() const;
    4094  bool IsPersistentMap() const { return (m_MapCount & MAP_COUNT_FLAG_PERSISTENT_MAP) != 0; }
    4095  void* GetMappedData() const;
    4096  bool CanBecomeLost() const;
    4097  VmaPool GetPool() const;
    4098 
    4099  uint32_t GetLastUseFrameIndex() const
    4100  {
    4101  return m_LastUseFrameIndex.load();
    4102  }
    4103  bool CompareExchangeLastUseFrameIndex(uint32_t& expected, uint32_t desired)
    4104  {
    4105  return m_LastUseFrameIndex.compare_exchange_weak(expected, desired);
    4106  }
    4107  /*
    4108  - If hAllocation.LastUseFrameIndex + frameInUseCount < allocator.CurrentFrameIndex,
    4109  makes it lost by setting LastUseFrameIndex = VMA_FRAME_INDEX_LOST and returns true.
    4110  - Else, returns false.
    4111 
    4112  If hAllocation is already lost, assert - you should not call it then.
    4113  If hAllocation was not created with CAN_BECOME_LOST_BIT, assert.
    4114  */
    4115  bool MakeLost(uint32_t currentFrameIndex, uint32_t frameInUseCount);
    4116 
    4117  void DedicatedAllocCalcStatsInfo(VmaStatInfo& outInfo)
    4118  {
    4119  VMA_ASSERT(m_Type == ALLOCATION_TYPE_DEDICATED);
    4120  outInfo.blockCount = 1;
    4121  outInfo.allocationCount = 1;
    4122  outInfo.unusedRangeCount = 0;
    4123  outInfo.usedBytes = m_Size;
    4124  outInfo.unusedBytes = 0;
    4125  outInfo.allocationSizeMin = outInfo.allocationSizeMax = m_Size;
    4126  outInfo.unusedRangeSizeMin = UINT64_MAX;
    4127  outInfo.unusedRangeSizeMax = 0;
    4128  }
    4129 
    4130  void BlockAllocMap();
    4131  void BlockAllocUnmap();
    4132  VkResult DedicatedAllocMap(VmaAllocator hAllocator, void** ppData);
    4133  void DedicatedAllocUnmap(VmaAllocator hAllocator);
    4134 
    4135 #if VMA_STATS_STRING_ENABLED
    4136  uint32_t GetCreationFrameIndex() const { return m_CreationFrameIndex; }
    4137  uint32_t GetBufferImageUsage() const { return m_BufferImageUsage; }
    4138 
    4139  void InitBufferImageUsage(uint32_t bufferImageUsage)
    4140  {
    4141  VMA_ASSERT(m_BufferImageUsage == 0);
    4142  m_BufferImageUsage = bufferImageUsage;
    4143  }
    4144 
    4145  void PrintParameters(class VmaJsonWriter& json) const;
    4146 #endif
    4147 
    4148 private:
    4149  VkDeviceSize m_Alignment;
    4150  VkDeviceSize m_Size;
    4151  void* m_pUserData;
    4152  VMA_ATOMIC_UINT32 m_LastUseFrameIndex;
    4153  uint8_t m_Type; // ALLOCATION_TYPE
    4154  uint8_t m_SuballocationType; // VmaSuballocationType
    4155  // Bit 0x80 is set when allocation was created with VMA_ALLOCATION_CREATE_MAPPED_BIT.
    4156  // Bits with mask 0x7F are reference counter for vmaMapMemory()/vmaUnmapMemory().
    4157  uint8_t m_MapCount;
    4158  uint8_t m_Flags; // enum FLAGS
    4159 
    4160  // Allocation out of VmaDeviceMemoryBlock.
    4161  struct BlockAllocation
    4162  {
    4163  VmaPool m_hPool; // Null if belongs to general memory.
    4164  VmaDeviceMemoryBlock* m_Block;
    4165  VkDeviceSize m_Offset;
    4166  bool m_CanBecomeLost;
    4167  };
    4168 
    4169  // Allocation for an object that has its own private VkDeviceMemory.
    4170  struct DedicatedAllocation
    4171  {
    4172  uint32_t m_MemoryTypeIndex;
    4173  VkDeviceMemory m_hMemory;
    4174  void* m_pMappedData; // Not null means memory is mapped.
    4175  };
    4176 
    4177  union
    4178  {
    4179  // Allocation out of VmaDeviceMemoryBlock.
    4180  BlockAllocation m_BlockAllocation;
    4181  // Allocation for an object that has its own private VkDeviceMemory.
    4182  DedicatedAllocation m_DedicatedAllocation;
    4183  };
    4184 
    4185 #if VMA_STATS_STRING_ENABLED
    4186  uint32_t m_CreationFrameIndex;
    4187  uint32_t m_BufferImageUsage; // 0 if unknown.
    4188 #endif
    4189 
    4190  void FreeUserDataString(VmaAllocator hAllocator);
    4191 };
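The `m_MapCount` byte in `VmaAllocation_T` packs two things: bit `0x80` marks an allocation created persistently mapped, and the low 7 bits count `vmaMapMemory()`/`vmaUnmapMemory()` nesting. A sketch of that packing, with hypothetical helper names:

```cpp
#include <cstdint>

// Bit 0x80: allocation created with VMA_ALLOCATION_CREATE_MAPPED_BIT.
// Bits 0x7F: reference counter for explicit map/unmap calls.
const uint8_t MAP_FLAG_PERSISTENT = 0x80;

bool IsPersistentlyMapped(uint8_t mapCount) { return (mapCount & MAP_FLAG_PERSISTENT) != 0; }
uint8_t MapRefCount(uint8_t mapCount) { return mapCount & 0x7F; }
// Callers must check that the 7-bit counter doesn't overflow...
uint8_t AfterMap(uint8_t mapCount) { return mapCount + 1; }
// ...and that it is nonzero before unmapping.
uint8_t AfterUnmap(uint8_t mapCount) { return mapCount - 1; }
```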
    4192 
    4193 /*
    4194 Represents a region of a VmaDeviceMemoryBlock that is either free or
    4195 assigned and returned to the user as allocated memory.
    4196 */
    4197 struct VmaSuballocation
    4198 {
    4199  VkDeviceSize offset;
    4200  VkDeviceSize size;
    4201  VmaAllocation hAllocation;
    4202  VmaSuballocationType type;
    4203 };
    4204 
    4205 typedef VmaList< VmaSuballocation, VmaStlAllocator<VmaSuballocation> > VmaSuballocationList;
    4206 
    4207 // Cost of making one additional allocation lost, expressed as an equivalent number of bytes.
    4208 static const VkDeviceSize VMA_LOST_ALLOCATION_COST = 1048576;
    4209 
    4210 /*
    4211 Parameters of planned allocation inside a VmaDeviceMemoryBlock.
    4212 
    4213 If canMakeOtherLost was false:
    4214 - item points to a FREE suballocation.
    4215 - itemsToMakeLostCount is 0.
    4216 
    4217 If canMakeOtherLost was true:
    4218 - item points to the first of a sequence of suballocations, each either FREE
    4219  or pointing to a VmaAllocation that can become lost.
    4220 - itemsToMakeLostCount is the number of VmaAllocations that need to be made lost for
    4221  the requested allocation to succeed.
    4222 */
    4223 struct VmaAllocationRequest
    4224 {
    4225  VkDeviceSize offset;
    4226  VkDeviceSize sumFreeSize; // Sum size of free items that overlap with proposed allocation.
    4227  VkDeviceSize sumItemSize; // Sum size of items to make lost that overlap with proposed allocation.
    4228  VmaSuballocationList::iterator item;
    4229  size_t itemsToMakeLostCount;
    4230 
    4231  VkDeviceSize CalcCost() const
    4232  {
    4233  return sumItemSize + itemsToMakeLostCount * VMA_LOST_ALLOCATION_COST;
    4234  }
    4235 };
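`VmaAllocationRequest::CalcCost()` charges a flat `VMA_LOST_ALLOCATION_COST` (1 MiB equivalent) per allocation that would be made lost, on top of the bytes those allocations occupy, so a request that evicts fewer allocations compares as cheaper even when the evicted byte counts are similar. A standalone sketch of the same arithmetic:

```cpp
#include <cstddef>
#include <cstdint>

// Same constant as VMA_LOST_ALLOCATION_COST above: 1 MiB per lost allocation.
const uint64_t LOST_ALLOCATION_COST = 1048576;

// Total cost of an allocation request: bytes of overlapping items to make
// lost, plus a fixed penalty per lost allocation.
uint64_t CalcCost(uint64_t sumItemSize, size_t itemsToMakeLostCount)
{
    return sumItemSize + itemsToMakeLostCount * LOST_ALLOCATION_COST;
}
```

With this weighting, evicting one 4 KiB allocation (cost 1052672) beats evicting two empty ones (cost 2097152).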
    4236 
    4237 /*
    4238 Data structure used for bookkeeping of allocations and unused ranges of memory
    4239 in a single VkDeviceMemory block.
    4240 */
    4241 class VmaBlockMetadata
    4242 {
    4243  VMA_CLASS_NO_COPY(VmaBlockMetadata)
    4244 public:
    4245  VmaBlockMetadata(VmaAllocator hAllocator);
    4246  ~VmaBlockMetadata();
    4247  void Init(VkDeviceSize size);
    4248 
    4249  // Validates all data structures inside this object. If not valid, returns false.
    4250  bool Validate() const;
    4251  VkDeviceSize GetSize() const { return m_Size; }
    4252  size_t GetAllocationCount() const { return m_Suballocations.size() - m_FreeCount; }
    4253  VkDeviceSize GetSumFreeSize() const { return m_SumFreeSize; }
    4254  VkDeviceSize GetUnusedRangeSizeMax() const;
    4255  // Returns true if this block is empty - contains only a single free suballocation.
    4256  bool IsEmpty() const;
    4257 
    4258  void CalcAllocationStatInfo(VmaStatInfo& outInfo) const;
    4259  void AddPoolStats(VmaPoolStats& inoutStats) const;
    4260 
    4261 #if VMA_STATS_STRING_ENABLED
    4262  void PrintDetailedMap(class VmaJsonWriter& json) const;
    4263 #endif
    4264 
    4265  // Tries to find a place for suballocation with given parameters inside this block.
    4266  // If succeeded, fills pAllocationRequest and returns true.
    4267  // If failed, returns false.
    4268  bool CreateAllocationRequest(
    4269  uint32_t currentFrameIndex,
    4270  uint32_t frameInUseCount,
    4271  VkDeviceSize bufferImageGranularity,
    4272  VkDeviceSize allocSize,
    4273  VkDeviceSize allocAlignment,
    4274  VmaSuballocationType allocType,
    4275  bool canMakeOtherLost,
    4276  VmaAllocationRequest* pAllocationRequest);
    4277 
    4278  bool MakeRequestedAllocationsLost(
    4279  uint32_t currentFrameIndex,
    4280  uint32_t frameInUseCount,
    4281  VmaAllocationRequest* pAllocationRequest);
    4282 
    4283  uint32_t MakeAllocationsLost(uint32_t currentFrameIndex, uint32_t frameInUseCount);
    4284 
    4285  VkResult CheckCorruption(const void* pBlockData);
    4286 
    4287  // Makes actual allocation based on request. Request must already be checked and valid.
    4288  void Alloc(
    4289  const VmaAllocationRequest& request,
    4290  VmaSuballocationType type,
    4291  VkDeviceSize allocSize,
    4292  VmaAllocation hAllocation);
    4293 
    4294  // Frees suballocation assigned to given memory region.
    4295  void Free(const VmaAllocation allocation);
    4296  void FreeAtOffset(VkDeviceSize offset);
    4297 
    4298 private:
    4299  VkDeviceSize m_Size;
    4300  uint32_t m_FreeCount;
    4301  VkDeviceSize m_SumFreeSize;
    4302  VmaSuballocationList m_Suballocations;
    4303  // Suballocations that are free and have size greater than a certain threshold.
    4304  // Sorted by size, ascending.
    4305  VmaVector< VmaSuballocationList::iterator, VmaStlAllocator< VmaSuballocationList::iterator > > m_FreeSuballocationsBySize;
    4306 
    4307  bool ValidateFreeSuballocationList() const;
    4308 
    4309  // Checks if requested suballocation with given parameters can be placed in given suballocItem.
    4310  // If yes, fills pOffset and returns true. If no, returns false.
    4311  bool CheckAllocation(
    4312  uint32_t currentFrameIndex,
    4313  uint32_t frameInUseCount,
    4314  VkDeviceSize bufferImageGranularity,
    4315  VkDeviceSize allocSize,
    4316  VkDeviceSize allocAlignment,
    4317  VmaSuballocationType allocType,
    4318  VmaSuballocationList::const_iterator suballocItem,
    4319  bool canMakeOtherLost,
    4320  VkDeviceSize* pOffset,
    4321  size_t* itemsToMakeLostCount,
    4322  VkDeviceSize* pSumFreeSize,
    4323  VkDeviceSize* pSumItemSize) const;
    4324  // Given a free suballocation, merges it with the following one, which must also be free.
    4325  void MergeFreeWithNext(VmaSuballocationList::iterator item);
    4326  // Releases given suballocation, making it free.
    4327  // Merges it with adjacent free suballocations if applicable.
    4328  // Returns iterator to new free suballocation at this place.
    4329  VmaSuballocationList::iterator FreeSuballocation(VmaSuballocationList::iterator suballocItem);
    4330  // Given a free suballocation, inserts it into the sorted list
    4331  // m_FreeSuballocationsBySize if it is large enough to qualify.
    4332  void RegisterFreeSuballocation(VmaSuballocationList::iterator item);
    4333  // Given a free suballocation, removes it from the sorted list
    4334  // m_FreeSuballocationsBySize if it was registered there.
    4335  void UnregisterFreeSuballocation(VmaSuballocationList::iterator item);
    4336 };
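`FreeSuballocation()` together with `MergeFreeWithNext()` maintains the invariant that the suballocation list never contains two adjacent free entries: when a range is freed, it is coalesced with any free neighbor. A simplified sketch of that coalescing, using illustrative types rather than VMA's actual structures:

```cpp
#include <cstdint>
#include <iterator>
#include <list>

// Simplified stand-in for a suballocation: an offset/size range that is
// either free or in use.
struct Range { uint64_t offset; uint64_t size; bool free; };

// Marks *it free and merges it with adjacent free ranges.
// Returns an iterator to the (possibly merged) free range.
std::list<Range>::iterator FreeRange(std::list<Range>& l, std::list<Range>::iterator it)
{
    it->free = true;
    // Merge with the following range if it is free.
    auto next = std::next(it);
    if(next != l.end() && next->free)
    {
        it->size += next->size;
        l.erase(next);
    }
    // Merge with the preceding range if it is free.
    if(it != l.begin())
    {
        auto prev = std::prev(it);
        if(prev->free)
        {
            prev->size += it->size;
            l.erase(it);
            it = prev;
        }
    }
    return it;
}
```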
    4337 
    4338 /*
    4339 Represents a single block of device memory (`VkDeviceMemory`) with all the
    4340 data about its regions (aka suballocations, #VmaAllocation), assigned and free.
    4341 
    4342 Thread-safety: This class must be externally synchronized.
    4343 */
    4344 class VmaDeviceMemoryBlock
    4345 {
    4346  VMA_CLASS_NO_COPY(VmaDeviceMemoryBlock)
    4347 public:
    4348  VmaBlockMetadata m_Metadata;
    4349 
    4350  VmaDeviceMemoryBlock(VmaAllocator hAllocator);
    4351 
    4352  ~VmaDeviceMemoryBlock()
    4353  {
    4354  VMA_ASSERT(m_MapCount == 0 && "VkDeviceMemory block is being destroyed while it is still mapped.");
    4355  VMA_ASSERT(m_hMemory == VK_NULL_HANDLE);
    4356  }
    4357 
    4358  // Always call after construction.
    4359  void Init(
    4360  uint32_t newMemoryTypeIndex,
    4361  VkDeviceMemory newMemory,
    4362  VkDeviceSize newSize,
    4363  uint32_t id);
    4364  // Always call before destruction.
    4365  void Destroy(VmaAllocator allocator);
    4366 
    4367  VkDeviceMemory GetDeviceMemory() const { return m_hMemory; }
    4368  uint32_t GetMemoryTypeIndex() const { return m_MemoryTypeIndex; }
    4369  uint32_t GetId() const { return m_Id; }
    4370  void* GetMappedData() const { return m_pMappedData; }
    4371 
    4372  // Validates all data structures inside this object. If not valid, returns false.
    4373  bool Validate() const;
    4374 
    4375  VkResult CheckCorruption(VmaAllocator hAllocator);
    4376 
    4377  // ppData can be null.
    4378  VkResult Map(VmaAllocator hAllocator, uint32_t count, void** ppData);
    4379  void Unmap(VmaAllocator hAllocator, uint32_t count);
    4380 
    4381  VkResult WriteMagicValueAroundAllocation(VmaAllocator hAllocator, VkDeviceSize allocOffset, VkDeviceSize allocSize);
    4382  VkResult ValidateMagicValueAroundAllocation(VmaAllocator hAllocator, VkDeviceSize allocOffset, VkDeviceSize allocSize);
    4383 
    4384  VkResult BindBufferMemory(
    4385  const VmaAllocator hAllocator,
    4386  const VmaAllocation hAllocation,
    4387  VkBuffer hBuffer);
    4388  VkResult BindImageMemory(
    4389  const VmaAllocator hAllocator,
    4390  const VmaAllocation hAllocation,
    4391  VkImage hImage);
    4392 
    4393 private:
    4394  uint32_t m_MemoryTypeIndex;
    4395  uint32_t m_Id;
    4396  VkDeviceMemory m_hMemory;
    4397 
    4398  // Protects access to m_hMemory so it's not used by multiple threads simultaneously, e.g. vkMapMemory, vkBindBufferMemory.
    4399  // Also protects m_MapCount, m_pMappedData.
    4400  VMA_MUTEX m_Mutex;
    4401  uint32_t m_MapCount;
    4402  void* m_pMappedData;
    4403 };
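`WriteMagicValueAroundAllocation()`/`ValidateMagicValueAroundAllocation()` implement margin-based corruption detection: a known 32-bit pattern is written into the debug margins just before and after each allocation, and re-checked later; any mismatch indicates an out-of-bounds write. A minimal sketch of the idea (the magic constant here is illustrative, not necessarily VMA's exact value):

```cpp
#include <cstdint>
#include <cstddef>
#include <cstring>

// Illustrative 32-bit pattern stamped into allocation margins.
const uint32_t MAGIC = 0x7F84E666u;

// Fills a margin region with repeated copies of the magic value.
void WriteMagic(uint8_t* margin, size_t marginSize)
{
    for(size_t i = 0; i + sizeof(uint32_t) <= marginSize; i += sizeof(uint32_t))
        std::memcpy(margin + i, &MAGIC, sizeof(uint32_t));
}

// Returns false if any copy of the magic value was overwritten,
// i.e. something wrote outside its allocation.
bool ValidateMagic(const uint8_t* margin, size_t marginSize)
{
    for(size_t i = 0; i + sizeof(uint32_t) <= marginSize; i += sizeof(uint32_t))
    {
        uint32_t v;
        std::memcpy(&v, margin + i, sizeof(uint32_t));
        if(v != MAGIC)
            return false;
    }
    return true;
}
```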
    4404 
    4405 struct VmaPointerLess
    4406 {
    4407  bool operator()(const void* lhs, const void* rhs) const
    4408  {
    4409  return lhs < rhs;
    4410  }
    4411 };
    4412 
    4413 class VmaDefragmentator;
    4414 
    4415 /*
    4416 Sequence of VmaDeviceMemoryBlock. Represents memory blocks allocated for a specific
    4417 Vulkan memory type.
    4418 
    4419 Synchronized internally with a mutex.
    4420 */
    4421 struct VmaBlockVector
    4422 {
    4423  VMA_CLASS_NO_COPY(VmaBlockVector)
    4424 public:
    4425  VmaBlockVector(
    4426  VmaAllocator hAllocator,
    4427  uint32_t memoryTypeIndex,
    4428  VkDeviceSize preferredBlockSize,
    4429  size_t minBlockCount,
    4430  size_t maxBlockCount,
    4431  VkDeviceSize bufferImageGranularity,
    4432  uint32_t frameInUseCount,
    4433  bool isCustomPool);
    4434  ~VmaBlockVector();
    4435 
    4436  VkResult CreateMinBlocks();
    4437 
    4438  uint32_t GetMemoryTypeIndex() const { return m_MemoryTypeIndex; }
    4439  VkDeviceSize GetPreferredBlockSize() const { return m_PreferredBlockSize; }
    4440  VkDeviceSize GetBufferImageGranularity() const { return m_BufferImageGranularity; }
    4441  uint32_t GetFrameInUseCount() const { return m_FrameInUseCount; }
    4442 
    4443  void GetPoolStats(VmaPoolStats* pStats);
    4444 
    4445  bool IsEmpty() const { return m_Blocks.empty(); }
    4446  bool IsCorruptionDetectionEnabled() const;
    4447 
    4448  VkResult Allocate(
    4449  VmaPool hCurrentPool,
    4450  uint32_t currentFrameIndex,
    4451  VkDeviceSize size,
    4452  VkDeviceSize alignment,
    4453  const VmaAllocationCreateInfo& createInfo,
    4454  VmaSuballocationType suballocType,
    4455  VmaAllocation* pAllocation);
    4456 
    4457  void Free(
    4458  VmaAllocation hAllocation);
    4459 
    4460  // Adds statistics of this BlockVector to pStats.
    4461  void AddStats(VmaStats* pStats);
    4462 
    4463 #if VMA_STATS_STRING_ENABLED
    4464  void PrintDetailedMap(class VmaJsonWriter& json);
    4465 #endif
    4466 
    4467  void MakePoolAllocationsLost(
    4468  uint32_t currentFrameIndex,
    4469  size_t* pLostAllocationCount);
    4470  VkResult CheckCorruption();
    4471 
    4472  VmaDefragmentator* EnsureDefragmentator(
    4473  VmaAllocator hAllocator,
    4474  uint32_t currentFrameIndex);
    4475 
    4476  VkResult Defragment(
    4477  VmaDefragmentationStats* pDefragmentationStats,
    4478  VkDeviceSize& maxBytesToMove,
    4479  uint32_t& maxAllocationsToMove);
    4480 
    4481  void DestroyDefragmentator();
    4482 
    4483 private:
    4484  friend class VmaDefragmentator;
    4485 
    4486  const VmaAllocator m_hAllocator;
    4487  const uint32_t m_MemoryTypeIndex;
    4488  const VkDeviceSize m_PreferredBlockSize;
    4489  const size_t m_MinBlockCount;
    4490  const size_t m_MaxBlockCount;
    4491  const VkDeviceSize m_BufferImageGranularity;
    4492  const uint32_t m_FrameInUseCount;
    4493  const bool m_IsCustomPool;
    4494  VMA_MUTEX m_Mutex;
    4495  // Incrementally sorted by sumFreeSize, ascending.
    4496  VmaVector< VmaDeviceMemoryBlock*, VmaStlAllocator<VmaDeviceMemoryBlock*> > m_Blocks;
    4497  /* There can be at most one block that is completely empty - a
    4498  hysteresis to avoid the pessimistic case of alternating creation and
    4499  destruction of a VkDeviceMemory. */
    4500  bool m_HasEmptyBlock;
    4501  VmaDefragmentator* m_pDefragmentator;
    4502  uint32_t m_NextBlockId;
    4503 
    4504  VkDeviceSize CalcMaxBlockSize() const;
    4505 
    4506  // Finds and removes given block from vector.
    4507  void Remove(VmaDeviceMemoryBlock* pBlock);
    4508 
    4509  // Performs a single step in sorting m_Blocks. They may not be fully sorted
    4510  // after this call.
    4511  void IncrementallySortBlocks();
    4512 
    4513  VkResult CreateBlock(VkDeviceSize blockSize, size_t* pNewBlockIndex);
    4514 };
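`IncrementallySortBlocks()` amortizes sorting: rather than fully re-sorting `m_Blocks` by `sumFreeSize` on every allocation, each call performs at most one swap of the first out-of-order adjacent pair (a single bubble-sort step), so the vector converges toward sorted order over successive calls. Sketched here on plain integers standing in for blocks:

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// One bubble-sort step toward ascending order: swap the first
// out-of-order adjacent pair, then stop. Repeated calls eventually
// yield a fully sorted vector.
void IncrementallySort(std::vector<int>& v)
{
    for(size_t i = 1; i < v.size(); ++i)
    {
        if(v[i - 1] > v[i])
        {
            std::swap(v[i - 1], v[i]);
            return; // at most one swap per call
        }
    }
}
```

The payoff is O(n) worst case per call instead of O(n log n), at the price of the vector being only approximately sorted between calls, which is acceptable for a best-fit heuristic.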
    4515 
    4516 struct VmaPool_T
    4517 {
    4518  VMA_CLASS_NO_COPY(VmaPool_T)
    4519 public:
    4520  VmaBlockVector m_BlockVector;
    4521 
    4522  VmaPool_T(
    4523  VmaAllocator hAllocator,
    4524  const VmaPoolCreateInfo& createInfo);
    4525  ~VmaPool_T();
    4526 
    4527  VmaBlockVector& GetBlockVector() { return m_BlockVector; }
    4528  uint32_t GetId() const { return m_Id; }
    4529  void SetId(uint32_t id) { VMA_ASSERT(m_Id == 0); m_Id = id; }
    4530 
    4531 #if VMA_STATS_STRING_ENABLED
    4532  //void PrintDetailedMap(class VmaStringBuilder& sb);
    4533 #endif
    4534 
    4535 private:
    4536  uint32_t m_Id;
    4537 };
    4538 
    4539 class VmaDefragmentator
    4540 {
    4541  VMA_CLASS_NO_COPY(VmaDefragmentator)
    4542 private:
    4543  const VmaAllocator m_hAllocator;
    4544  VmaBlockVector* const m_pBlockVector;
    4545  uint32_t m_CurrentFrameIndex;
    4546  VkDeviceSize m_BytesMoved;
    4547  uint32_t m_AllocationsMoved;
    4548 
    4549  struct AllocationInfo
    4550  {
    4551  VmaAllocation m_hAllocation;
    4552  VkBool32* m_pChanged;
    4553 
    4554  AllocationInfo() :
    4555  m_hAllocation(VK_NULL_HANDLE),
    4556  m_pChanged(VMA_NULL)
    4557  {
    4558  }
    4559  };
    4560 
    4561  struct AllocationInfoSizeGreater
    4562  {
    4563  bool operator()(const AllocationInfo& lhs, const AllocationInfo& rhs) const
    4564  {
    4565  return lhs.m_hAllocation->GetSize() > rhs.m_hAllocation->GetSize();
    4566  }
    4567  };
    4568 
    4569  // Used between AddAllocation and Defragment.
    4570  VmaVector< AllocationInfo, VmaStlAllocator<AllocationInfo> > m_Allocations;
    4571 
    4572  struct BlockInfo
    4573  {
    4574  VmaDeviceMemoryBlock* m_pBlock;
    4575  bool m_HasNonMovableAllocations;
    4576  VmaVector< AllocationInfo, VmaStlAllocator<AllocationInfo> > m_Allocations;
    4577 
    4578  BlockInfo(const VkAllocationCallbacks* pAllocationCallbacks) :
    4579  m_pBlock(VMA_NULL),
    4580  m_HasNonMovableAllocations(true),
    4581  m_Allocations(pAllocationCallbacks),
    4582  m_pMappedDataForDefragmentation(VMA_NULL)
    4583  {
    4584  }
    4585 
    4586  void CalcHasNonMovableAllocations()
    4587  {
    4588  const size_t blockAllocCount = m_pBlock->m_Metadata.GetAllocationCount();
    4589  const size_t defragmentAllocCount = m_Allocations.size();
    4590  m_HasNonMovableAllocations = blockAllocCount != defragmentAllocCount;
    4591  }
    4592 
    4593  void SortAllocationsBySizeDescecnding()
    4594  {
    4595  VMA_SORT(m_Allocations.begin(), m_Allocations.end(), AllocationInfoSizeGreater());
    4596  }
    4597 
    4598  VkResult EnsureMapping(VmaAllocator hAllocator, void** ppMappedData);
    4599  void Unmap(VmaAllocator hAllocator);
    4600 
    4601  private:
    4602  // Not null if mapped for defragmentation only, not originally mapped.
    4603  void* m_pMappedDataForDefragmentation;
    4604  };
    4605 
    4606  struct BlockPointerLess
    4607  {
    4608  bool operator()(const BlockInfo* pLhsBlockInfo, const VmaDeviceMemoryBlock* pRhsBlock) const
    4609  {
    4610  return pLhsBlockInfo->m_pBlock < pRhsBlock;
    4611  }
    4612  bool operator()(const BlockInfo* pLhsBlockInfo, const BlockInfo* pRhsBlockInfo) const
    4613  {
    4614  return pLhsBlockInfo->m_pBlock < pRhsBlockInfo->m_pBlock;
    4615  }
    4616  };
    4617 
    4618  // 1. Blocks with some non-movable allocations go first.
    4619  // 2. Blocks with smaller sumFreeSize go first.
    4620  struct BlockInfoCompareMoveDestination
    4621  {
    4622  bool operator()(const BlockInfo* pLhsBlockInfo, const BlockInfo* pRhsBlockInfo) const
    4623  {
    4624  if(pLhsBlockInfo->m_HasNonMovableAllocations && !pRhsBlockInfo->m_HasNonMovableAllocations)
    4625  {
    4626  return true;
    4627  }
    4628  if(!pLhsBlockInfo->m_HasNonMovableAllocations && pRhsBlockInfo->m_HasNonMovableAllocations)
    4629  {
    4630  return false;
    4631  }
    4632  if(pLhsBlockInfo->m_pBlock->m_Metadata.GetSumFreeSize() < pRhsBlockInfo->m_pBlock->m_Metadata.GetSumFreeSize())
    4633  {
    4634  return true;
    4635  }
    4636  return false;
    4637  }
    4638  };
    4639 
    4640  typedef VmaVector< BlockInfo*, VmaStlAllocator<BlockInfo*> > BlockInfoVector;
    4641  BlockInfoVector m_Blocks;
    4642 
    4643  VkResult DefragmentRound(
    4644  VkDeviceSize maxBytesToMove,
    4645  uint32_t maxAllocationsToMove);
    4646 
    4647  static bool MoveMakesSense(
    4648  size_t dstBlockIndex, VkDeviceSize dstOffset,
    4649  size_t srcBlockIndex, VkDeviceSize srcOffset);
    4650 
    4651 public:
    4652  VmaDefragmentator(
    4653  VmaAllocator hAllocator,
    4654  VmaBlockVector* pBlockVector,
    4655  uint32_t currentFrameIndex);
    4656 
    4657  ~VmaDefragmentator();
    4658 
    4659  VkDeviceSize GetBytesMoved() const { return m_BytesMoved; }
    4660  uint32_t GetAllocationsMoved() const { return m_AllocationsMoved; }
    4661 
    4662  void AddAllocation(VmaAllocation hAlloc, VkBool32* pChanged);
    4663 
    4664  VkResult Defragment(
    4665  VkDeviceSize maxBytesToMove,
    4666  uint32_t maxAllocationsToMove);
    4667 };
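The `BlockInfoCompareMoveDestination` ordering above can be demonstrated standalone: destination candidates containing non-movable allocations sort first, then ties break by ascending free size, so defragmentation fills nearly-full, anchored blocks before emptier ones. A sketch with a simplified `Info` type (illustrative, not VMA's):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Simplified stand-in for BlockInfo: just the two fields the
// comparator inspects.
struct Info { bool hasNonMovable; uint64_t sumFreeSize; };

// Strict weak ordering equivalent to BlockInfoCompareMoveDestination:
// 1. blocks with non-movable allocations first,
// 2. then smaller sumFreeSize first.
bool MoveDstLess(const Info& lhs, const Info& rhs)
{
    if(lhs.hasNonMovable != rhs.hasNonMovable)
        return lhs.hasNonMovable;
    return lhs.sumFreeSize < rhs.sumFreeSize;
}
```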
    4668 
    4669 // Main allocator object.
    4670 struct VmaAllocator_T
    4671 {
    4672  VMA_CLASS_NO_COPY(VmaAllocator_T)
    4673 public:
    4674  bool m_UseMutex;
    4675  bool m_UseKhrDedicatedAllocation;
    4676  VkDevice m_hDevice;
    4677  bool m_AllocationCallbacksSpecified;
    4678  VkAllocationCallbacks m_AllocationCallbacks;
    4679  VmaDeviceMemoryCallbacks m_DeviceMemoryCallbacks;
    4680 
    4681  // Number of bytes free out of limit, or VK_WHOLE_SIZE if there is no limit for that heap.
    4682  VkDeviceSize m_HeapSizeLimit[VK_MAX_MEMORY_HEAPS];
    4683  VMA_MUTEX m_HeapSizeLimitMutex;
    4684 
    4685  VkPhysicalDeviceProperties m_PhysicalDeviceProperties;
    4686  VkPhysicalDeviceMemoryProperties m_MemProps;
    4687 
    4688  // Default pools.
    4689  VmaBlockVector* m_pBlockVectors[VK_MAX_MEMORY_TYPES];
    4690 
    4691  // Each vector is sorted by memory (handle value).
    4692  typedef VmaVector< VmaAllocation, VmaStlAllocator<VmaAllocation> > AllocationVectorType;
    4693  AllocationVectorType* m_pDedicatedAllocations[VK_MAX_MEMORY_TYPES];
    4694  VMA_MUTEX m_DedicatedAllocationsMutex[VK_MAX_MEMORY_TYPES];
    4695 
    4696  VmaAllocator_T(const VmaAllocatorCreateInfo* pCreateInfo);
    4697  ~VmaAllocator_T();
    4698 
    4699  const VkAllocationCallbacks* GetAllocationCallbacks() const
    4700  {
    4701  return m_AllocationCallbacksSpecified ? &m_AllocationCallbacks : 0;
    4702  }
    4703  const VmaVulkanFunctions& GetVulkanFunctions() const
    4704  {
    4705  return m_VulkanFunctions;
    4706  }
    4707 
    4708  VkDeviceSize GetBufferImageGranularity() const
    4709  {
    4710  return VMA_MAX(
    4711  static_cast<VkDeviceSize>(VMA_DEBUG_MIN_BUFFER_IMAGE_GRANULARITY),
    4712  m_PhysicalDeviceProperties.limits.bufferImageGranularity);
    4713  }
    4714 
    4715  uint32_t GetMemoryHeapCount() const { return m_MemProps.memoryHeapCount; }
    4716  uint32_t GetMemoryTypeCount() const { return m_MemProps.memoryTypeCount; }
    4717 
    4718  uint32_t MemoryTypeIndexToHeapIndex(uint32_t memTypeIndex) const
    4719  {
    4720  VMA_ASSERT(memTypeIndex < m_MemProps.memoryTypeCount);
    4721  return m_MemProps.memoryTypes[memTypeIndex].heapIndex;
    4722  }
    4723  // True when specific memory type is HOST_VISIBLE but not HOST_COHERENT.
    4724  bool IsMemoryTypeNonCoherent(uint32_t memTypeIndex) const
    4725  {
    4726  return (m_MemProps.memoryTypes[memTypeIndex].propertyFlags & (VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT | VK_MEMORY_PROPERTY_HOST_COHERENT_BIT)) ==
    4727  VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
    4728  }
    4729  // Minimum alignment for all allocations in specific memory type.
    4730  VkDeviceSize GetMemoryTypeMinAlignment(uint32_t memTypeIndex) const
    4731  {
    4732  return IsMemoryTypeNonCoherent(memTypeIndex) ?
    4733  VMA_MAX((VkDeviceSize)VMA_DEBUG_ALIGNMENT, m_PhysicalDeviceProperties.limits.nonCoherentAtomSize) :
    4734  (VkDeviceSize)VMA_DEBUG_ALIGNMENT;
    4735  }
    4736 
    4737  bool IsIntegratedGpu() const
    4738  {
    4739  return m_PhysicalDeviceProperties.deviceType == VK_PHYSICAL_DEVICE_TYPE_INTEGRATED_GPU;
    4740  }
    4741 
    4742  void GetBufferMemoryRequirements(
    4743  VkBuffer hBuffer,
    4744  VkMemoryRequirements& memReq,
    4745  bool& requiresDedicatedAllocation,
    4746  bool& prefersDedicatedAllocation) const;
    4747  void GetImageMemoryRequirements(
    4748  VkImage hImage,
    4749  VkMemoryRequirements& memReq,
    4750  bool& requiresDedicatedAllocation,
    4751  bool& prefersDedicatedAllocation) const;
    4752 
    4753  // Main allocation function.
    4754  VkResult AllocateMemory(
    4755  const VkMemoryRequirements& vkMemReq,
    4756  bool requiresDedicatedAllocation,
    4757  bool prefersDedicatedAllocation,
    4758  VkBuffer dedicatedBuffer,
    4759  VkImage dedicatedImage,
    4760  const VmaAllocationCreateInfo& createInfo,
    4761  VmaSuballocationType suballocType,
    4762  VmaAllocation* pAllocation);
    4763 
    4764  // Main deallocation function.
    4765  void FreeMemory(const VmaAllocation allocation);
    4766 
    4767  void CalculateStats(VmaStats* pStats);
    4768 
    4769 #if VMA_STATS_STRING_ENABLED
    4770  void PrintDetailedMap(class VmaJsonWriter& json);
    4771 #endif
    4772 
    4773  VkResult Defragment(
    4774  VmaAllocation* pAllocations,
    4775  size_t allocationCount,
    4776  VkBool32* pAllocationsChanged,
    4777  const VmaDefragmentationInfo* pDefragmentationInfo,
    4778  VmaDefragmentationStats* pDefragmentationStats);
    4779 
    4780  void GetAllocationInfo(VmaAllocation hAllocation, VmaAllocationInfo* pAllocationInfo);
    4781  bool TouchAllocation(VmaAllocation hAllocation);
    4782 
    4783  VkResult CreatePool(const VmaPoolCreateInfo* pCreateInfo, VmaPool* pPool);
    4784  void DestroyPool(VmaPool pool);
    4785  void GetPoolStats(VmaPool pool, VmaPoolStats* pPoolStats);
    4786 
    4787  void SetCurrentFrameIndex(uint32_t frameIndex);
    4788 
    4789  void MakePoolAllocationsLost(
    4790  VmaPool hPool,
    4791  size_t* pLostAllocationCount);
    4792  VkResult CheckPoolCorruption(VmaPool hPool);
    4793  VkResult CheckCorruption(uint32_t memoryTypeBits);
    4794 
    4795  void CreateLostAllocation(VmaAllocation* pAllocation);
    4796 
    4797  VkResult AllocateVulkanMemory(const VkMemoryAllocateInfo* pAllocateInfo, VkDeviceMemory* pMemory);
    4798  void FreeVulkanMemory(uint32_t memoryType, VkDeviceSize size, VkDeviceMemory hMemory);
    4799 
    4800  VkResult Map(VmaAllocation hAllocation, void** ppData);
    4801  void Unmap(VmaAllocation hAllocation);
    4802 
    4803  VkResult BindBufferMemory(VmaAllocation hAllocation, VkBuffer hBuffer);
    4804  VkResult BindImageMemory(VmaAllocation hAllocation, VkImage hImage);
    4805 
    4806  void FlushOrInvalidateAllocation(
    4807  VmaAllocation hAllocation,
    4808  VkDeviceSize offset, VkDeviceSize size,
    4809  VMA_CACHE_OPERATION op);
    4810 
    4811  void FillAllocation(const VmaAllocation hAllocation, uint8_t pattern);
    4812 
    4813 private:
    4814  VkDeviceSize m_PreferredLargeHeapBlockSize;
    4815 
    4816  VkPhysicalDevice m_PhysicalDevice;
    4817  VMA_ATOMIC_UINT32 m_CurrentFrameIndex;
    4818 
    4819  VMA_MUTEX m_PoolsMutex;
    4820  // Protected by m_PoolsMutex. Sorted by pointer value.
    4821  VmaVector<VmaPool, VmaStlAllocator<VmaPool> > m_Pools;
    4822  uint32_t m_NextPoolId;
    4823 
    4824  VmaVulkanFunctions m_VulkanFunctions;
    4825 
    4826  void ImportVulkanFunctions(const VmaVulkanFunctions* pVulkanFunctions);
    4827 
    4828  VkDeviceSize CalcPreferredBlockSize(uint32_t memTypeIndex);
    4829 
    4830  VkResult AllocateMemoryOfType(
    4831  VkDeviceSize size,
    4832  VkDeviceSize alignment,
    4833  bool dedicatedAllocation,
    4834  VkBuffer dedicatedBuffer,
    4835  VkImage dedicatedImage,
    4836  const VmaAllocationCreateInfo& createInfo,
    4837  uint32_t memTypeIndex,
    4838  VmaSuballocationType suballocType,
    4839  VmaAllocation* pAllocation);
    4840 
    4841  // Allocates and registers new VkDeviceMemory specifically for a single allocation.
    4842  VkResult AllocateDedicatedMemory(
    4843  VkDeviceSize size,
    4844  VmaSuballocationType suballocType,
    4845  uint32_t memTypeIndex,
    4846  bool map,
    4847  bool isUserDataString,
    4848  void* pUserData,
    4849  VkBuffer dedicatedBuffer,
    4850  VkImage dedicatedImage,
    4851  VmaAllocation* pAllocation);
    4852 
    4853  // Unregisters the given dedicated allocation and frees its VkDeviceMemory.
    4854  void FreeDedicatedMemory(VmaAllocation allocation);
    4855 };
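IsMemoryTypeNonCoherent above masks the property flags with (HOST_VISIBLE | HOST_COHERENT) and compares the result against HOST_VISIBLE alone, which is true exactly when the type is host-visible but not coherent — the case where GetMemoryTypeMinAlignment rounds up to nonCoherentAtomSize because flush/invalidate operate at that granularity. A header-free sketch of the bit test (constants copied from Vulkan's flag values so no Vulkan headers are needed):

```cpp
#include <cstdint>

// Values mirror VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT and
// VK_MEMORY_PROPERTY_HOST_COHERENT_BIT from the Vulkan headers.
constexpr uint32_t HOST_VISIBLE  = 0x2;
constexpr uint32_t HOST_COHERENT = 0x4;

// Same mask-and-compare as IsMemoryTypeNonCoherent: the type must have
// HOST_VISIBLE set and HOST_COHERENT clear; all other bits are ignored.
inline bool IsNonCoherent(uint32_t propertyFlags)
{
    return (propertyFlags & (HOST_VISIBLE | HOST_COHERENT)) == HOST_VISIBLE;
}
```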
    4856 
    4858 // Memory allocation #2 after VmaAllocator_T definition
    4859 
    4860 static void* VmaMalloc(VmaAllocator hAllocator, size_t size, size_t alignment)
    4861 {
    4862  return VmaMalloc(&hAllocator->m_AllocationCallbacks, size, alignment);
    4863 }
    4864 
    4865 static void VmaFree(VmaAllocator hAllocator, void* ptr)
    4866 {
    4867  VmaFree(&hAllocator->m_AllocationCallbacks, ptr);
    4868 }
    4869 
    4870 template<typename T>
    4871 static T* VmaAllocate(VmaAllocator hAllocator)
    4872 {
    4873  return (T*)VmaMalloc(hAllocator, sizeof(T), VMA_ALIGN_OF(T));
    4874 }
    4875 
    4876 template<typename T>
    4877 static T* VmaAllocateArray(VmaAllocator hAllocator, size_t count)
    4878 {
    4879  return (T*)VmaMalloc(hAllocator, sizeof(T) * count, VMA_ALIGN_OF(T));
    4880 }
    4881 
    4882 template<typename T>
    4883 static void vma_delete(VmaAllocator hAllocator, T* ptr)
    4884 {
    4885  if(ptr != VMA_NULL)
    4886  {
    4887  ptr->~T();
    4888  VmaFree(hAllocator, ptr);
    4889  }
    4890 }
    4891 
    4892 template<typename T>
    4893 static void vma_delete_array(VmaAllocator hAllocator, T* ptr, size_t count)
    4894 {
    4895  if(ptr != VMA_NULL)
    4896  {
    4897  for(size_t i = count; i--; )
    4898  ptr[i].~T();
    4899  VmaFree(hAllocator, ptr);
    4900  }
    4901 }
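vma_delete_array destroys elements with `for(size_t i = count; i--; )`, which visits indices count-1 down to 0: the decrement happens after the zero test, so an unsigned index never wraps and destruction runs in reverse construction order, matching built-in array semantics. A tiny sketch of the idiom (helper name hypothetical):

```cpp
#include <cstddef>
#include <vector>

// The `for(size_t i = count; i--; )` idiom: the condition tests i, then
// decrements, so the body sees count-1 .. 0 and the loop terminates cleanly
// even though size_t is unsigned (an `i >= 0` condition would never be false).
inline std::vector<std::size_t> ReverseIndices(std::size_t count)
{
    std::vector<std::size_t> order;
    for(std::size_t i = count; i--; )
        order.push_back(i);
    return order;
}
```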
    4902 
    4904 // VmaStringBuilder
    4905 
    4906 #if VMA_STATS_STRING_ENABLED
    4907 
    4908 class VmaStringBuilder
    4909 {
    4910 public:
    4911  VmaStringBuilder(VmaAllocator alloc) : m_Data(VmaStlAllocator<char>(alloc->GetAllocationCallbacks())) { }
    4912  size_t GetLength() const { return m_Data.size(); }
    4913  const char* GetData() const { return m_Data.data(); }
    4914 
    4915  void Add(char ch) { m_Data.push_back(ch); }
    4916  void Add(const char* pStr);
    4917  void AddNewLine() { Add('\n'); }
    4918  void AddNumber(uint32_t num);
    4919  void AddNumber(uint64_t num);
    4920  void AddPointer(const void* ptr);
    4921 
    4922 private:
    4923  VmaVector< char, VmaStlAllocator<char> > m_Data;
    4924 };
    4925 
    4926 void VmaStringBuilder::Add(const char* pStr)
    4927 {
    4928  const size_t strLen = strlen(pStr);
    4929  if(strLen > 0)
    4930  {
    4931  const size_t oldCount = m_Data.size();
    4932  m_Data.resize(oldCount + strLen);
    4933  memcpy(m_Data.data() + oldCount, pStr, strLen);
    4934  }
    4935 }
    4936 
    4937 void VmaStringBuilder::AddNumber(uint32_t num)
    4938 {
    4939  char buf[11];
    4940  VmaUint32ToStr(buf, sizeof(buf), num);
    4941  Add(buf);
    4942 }
    4943 
    4944 void VmaStringBuilder::AddNumber(uint64_t num)
    4945 {
    4946  char buf[21];
    4947  VmaUint64ToStr(buf, sizeof(buf), num);
    4948  Add(buf);
    4949 }
    4950 
    4951 void VmaStringBuilder::AddPointer(const void* ptr)
    4952 {
    4953  char buf[21];
    4954  VmaPtrToStr(buf, sizeof(buf), ptr);
    4955  Add(buf);
    4956 }
    4957 
    4958 #endif // #if VMA_STATS_STRING_ENABLED
    4959 
    4961 // VmaJsonWriter
    4962 
    4963 #if VMA_STATS_STRING_ENABLED
    4964 
    4965 class VmaJsonWriter
    4966 {
    4967  VMA_CLASS_NO_COPY(VmaJsonWriter)
    4968 public:
    4969  VmaJsonWriter(const VkAllocationCallbacks* pAllocationCallbacks, VmaStringBuilder& sb);
    4970  ~VmaJsonWriter();
    4971 
    4972  void BeginObject(bool singleLine = false);
    4973  void EndObject();
    4974 
    4975  void BeginArray(bool singleLine = false);
    4976  void EndArray();
    4977 
    4978  void WriteString(const char* pStr);
    4979  void BeginString(const char* pStr = VMA_NULL);
    4980  void ContinueString(const char* pStr);
    4981  void ContinueString(uint32_t n);
    4982  void ContinueString(uint64_t n);
    4983  void ContinueString_Pointer(const void* ptr);
    4984  void EndString(const char* pStr = VMA_NULL);
    4985 
    4986  void WriteNumber(uint32_t n);
    4987  void WriteNumber(uint64_t n);
    4988  void WriteBool(bool b);
    4989  void WriteNull();
    4990 
    4991 private:
    4992  static const char* const INDENT;
    4993 
    4994  enum COLLECTION_TYPE
    4995  {
    4996  COLLECTION_TYPE_OBJECT,
    4997  COLLECTION_TYPE_ARRAY,
    4998  };
    4999  struct StackItem
    5000  {
    5001  COLLECTION_TYPE type;
    5002  uint32_t valueCount;
    5003  bool singleLineMode;
    5004  };
    5005 
    5006  VmaStringBuilder& m_SB;
    5007  VmaVector< StackItem, VmaStlAllocator<StackItem> > m_Stack;
    5008  bool m_InsideString;
    5009 
    5010  void BeginValue(bool isString);
    5011  void WriteIndent(bool oneLess = false);
    5012 };
    5013 
    5014 const char* const VmaJsonWriter::INDENT = " ";
    5015 
    5016 VmaJsonWriter::VmaJsonWriter(const VkAllocationCallbacks* pAllocationCallbacks, VmaStringBuilder& sb) :
    5017  m_SB(sb),
    5018  m_Stack(VmaStlAllocator<StackItem>(pAllocationCallbacks)),
    5019  m_InsideString(false)
    5020 {
    5021 }
    5022 
    5023 VmaJsonWriter::~VmaJsonWriter()
    5024 {
    5025  VMA_ASSERT(!m_InsideString);
    5026  VMA_ASSERT(m_Stack.empty());
    5027 }
    5028 
    5029 void VmaJsonWriter::BeginObject(bool singleLine)
    5030 {
    5031  VMA_ASSERT(!m_InsideString);
    5032 
    5033  BeginValue(false);
    5034  m_SB.Add('{');
    5035 
    5036  StackItem item;
    5037  item.type = COLLECTION_TYPE_OBJECT;
    5038  item.valueCount = 0;
    5039  item.singleLineMode = singleLine;
    5040  m_Stack.push_back(item);
    5041 }
    5042 
    5043 void VmaJsonWriter::EndObject()
    5044 {
    5045  VMA_ASSERT(!m_InsideString);
    5046 
    5047  WriteIndent(true);
    5048  m_SB.Add('}');
    5049 
    5050  VMA_ASSERT(!m_Stack.empty() && m_Stack.back().type == COLLECTION_TYPE_OBJECT);
    5051  m_Stack.pop_back();
    5052 }
    5053 
    5054 void VmaJsonWriter::BeginArray(bool singleLine)
    5055 {
    5056  VMA_ASSERT(!m_InsideString);
    5057 
    5058  BeginValue(false);
    5059  m_SB.Add('[');
    5060 
    5061  StackItem item;
    5062  item.type = COLLECTION_TYPE_ARRAY;
    5063  item.valueCount = 0;
    5064  item.singleLineMode = singleLine;
    5065  m_Stack.push_back(item);
    5066 }
    5067 
    5068 void VmaJsonWriter::EndArray()
    5069 {
    5070  VMA_ASSERT(!m_InsideString);
    5071 
    5072  WriteIndent(true);
    5073  m_SB.Add(']');
    5074 
    5075  VMA_ASSERT(!m_Stack.empty() && m_Stack.back().type == COLLECTION_TYPE_ARRAY);
    5076  m_Stack.pop_back();
    5077 }
    5078 
    5079 void VmaJsonWriter::WriteString(const char* pStr)
    5080 {
    5081  BeginString(pStr);
    5082  EndString();
    5083 }
    5084 
    5085 void VmaJsonWriter::BeginString(const char* pStr)
    5086 {
    5087  VMA_ASSERT(!m_InsideString);
    5088 
    5089  BeginValue(true);
    5090  m_SB.Add('"');
    5091  m_InsideString = true;
    5092  if(pStr != VMA_NULL && pStr[0] != '\0')
    5093  {
    5094  ContinueString(pStr);
    5095  }
    5096 }
    5097 
    5098 void VmaJsonWriter::ContinueString(const char* pStr)
    5099 {
    5100  VMA_ASSERT(m_InsideString);
    5101 
    5102  const size_t strLen = strlen(pStr);
    5103  for(size_t i = 0; i < strLen; ++i)
    5104  {
    5105  char ch = pStr[i];
    5106  if(ch == '\\')
    5107  {
    5108  m_SB.Add("\\\\");
    5109  }
    5110  else if(ch == '"')
    5111  {
    5112  m_SB.Add("\\\"");
    5113  }
    5114  else if(ch >= 32)
    5115  {
    5116  m_SB.Add(ch);
    5117  }
    5118  else switch(ch)
    5119  {
    5120  case '\b':
    5121  m_SB.Add("\\b");
    5122  break;
    5123  case '\f':
    5124  m_SB.Add("\\f");
    5125  break;
    5126  case '\n':
    5127  m_SB.Add("\\n");
    5128  break;
    5129  case '\r':
    5130  m_SB.Add("\\r");
    5131  break;
    5132  case '\t':
    5133  m_SB.Add("\\t");
    5134  break;
    5135  default:
    5136  VMA_ASSERT(0 && "Character not currently supported.");
    5137  break;
    5138  }
    5139  }
    5140 }
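ContinueString above is a minimal JSON string escaper: backslash and double quote get a preceding backslash, the common control characters use their short escapes, and any character of code 32 or above passes through. A standalone sketch of the same rules (function name hypothetical; unlike the writer, which asserts, it passes unsupported control characters through unchanged):

```cpp
#include <string>

// Mirrors the writer's escaping rules: '\\' -> "\\\\", '"' -> "\\\"",
// short escapes for \b \f \n \r \t, everything else copied verbatim.
inline std::string EscapeJson(const std::string& in)
{
    std::string out;
    for(char ch : in)
    {
        switch(ch)
        {
        case '\\': out += "\\\\"; break;
        case '"':  out += "\\\""; break;
        case '\b': out += "\\b";  break;
        case '\f': out += "\\f";  break;
        case '\n': out += "\\n";  break;
        case '\r': out += "\\r";  break;
        case '\t': out += "\\t";  break;
        default:   out += ch;     break;
        }
    }
    return out;
}
```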
    5141 
    5142 void VmaJsonWriter::ContinueString(uint32_t n)
    5143 {
    5144  VMA_ASSERT(m_InsideString);
    5145  m_SB.AddNumber(n);
    5146 }
    5147 
    5148 void VmaJsonWriter::ContinueString(uint64_t n)
    5149 {
    5150  VMA_ASSERT(m_InsideString);
    5151  m_SB.AddNumber(n);
    5152 }
    5153 
    5154 void VmaJsonWriter::ContinueString_Pointer(const void* ptr)
    5155 {
    5156  VMA_ASSERT(m_InsideString);
    5157  m_SB.AddPointer(ptr);
    5158 }
    5159 
    5160 void VmaJsonWriter::EndString(const char* pStr)
    5161 {
    5162  VMA_ASSERT(m_InsideString);
    5163  if(pStr != VMA_NULL && pStr[0] != '\0')
    5164  {
    5165  ContinueString(pStr);
    5166  }
    5167  m_SB.Add('"');
    5168  m_InsideString = false;
    5169 }
    5170 
    5171 void VmaJsonWriter::WriteNumber(uint32_t n)
    5172 {
    5173  VMA_ASSERT(!m_InsideString);
    5174  BeginValue(false);
    5175  m_SB.AddNumber(n);
    5176 }
    5177 
    5178 void VmaJsonWriter::WriteNumber(uint64_t n)
    5179 {
    5180  VMA_ASSERT(!m_InsideString);
    5181  BeginValue(false);
    5182  m_SB.AddNumber(n);
    5183 }
    5184 
    5185 void VmaJsonWriter::WriteBool(bool b)
    5186 {
    5187  VMA_ASSERT(!m_InsideString);
    5188  BeginValue(false);
    5189  m_SB.Add(b ? "true" : "false");
    5190 }
    5191 
    5192 void VmaJsonWriter::WriteNull()
    5193 {
    5194  VMA_ASSERT(!m_InsideString);
    5195  BeginValue(false);
    5196  m_SB.Add("null");
    5197 }
    5198 
    5199 void VmaJsonWriter::BeginValue(bool isString)
    5200 {
    5201  if(!m_Stack.empty())
    5202  {
    5203  StackItem& currItem = m_Stack.back();
    5204  if(currItem.type == COLLECTION_TYPE_OBJECT &&
    5205  currItem.valueCount % 2 == 0)
    5206  {
    5207  VMA_ASSERT(isString);
    5208  }
    5209 
    5210  if(currItem.type == COLLECTION_TYPE_OBJECT &&
    5211  currItem.valueCount % 2 != 0)
    5212  {
    5213  m_SB.Add(": ");
    5214  }
    5215  else if(currItem.valueCount > 0)
    5216  {
    5217  m_SB.Add(", ");
    5218  WriteIndent();
    5219  }
    5220  else
    5221  {
    5222  WriteIndent();
    5223  }
    5224  ++currItem.valueCount;
    5225  }
    5226 }
    5227 
    5228 void VmaJsonWriter::WriteIndent(bool oneLess)
    5229 {
    5230  if(!m_Stack.empty() && !m_Stack.back().singleLineMode)
    5231  {
    5232  m_SB.AddNewLine();
    5233 
    5234  size_t count = m_Stack.size();
    5235  if(count > 0 && oneLess)
    5236  {
    5237  --count;
    5238  }
    5239  for(size_t i = 0; i < count; ++i)
    5240  {
    5241  m_SB.Add(INDENT);
    5242  }
    5243  }
    5244 }
    5245 
    5246 #endif // #if VMA_STATS_STRING_ENABLED
    5247 
    5249 
    5250 void VmaAllocation_T::SetUserData(VmaAllocator hAllocator, void* pUserData)
    5251 {
    5252  if(IsUserDataString())
    5253  {
    5254  VMA_ASSERT(pUserData == VMA_NULL || pUserData != m_pUserData);
    5255 
    5256  FreeUserDataString(hAllocator);
    5257 
    5258  if(pUserData != VMA_NULL)
    5259  {
    5260  const char* const newStrSrc = (char*)pUserData;
    5261  const size_t newStrLen = strlen(newStrSrc);
    5262  char* const newStrDst = vma_new_array(hAllocator, char, newStrLen + 1);
    5263  memcpy(newStrDst, newStrSrc, newStrLen + 1);
    5264  m_pUserData = newStrDst;
    5265  }
    5266  }
    5267  else
    5268  {
    5269  m_pUserData = pUserData;
    5270  }
    5271 }
    5272 
    5273 void VmaAllocation_T::ChangeBlockAllocation(
    5274  VmaAllocator hAllocator,
    5275  VmaDeviceMemoryBlock* block,
    5276  VkDeviceSize offset)
    5277 {
    5278  VMA_ASSERT(block != VMA_NULL);
    5279  VMA_ASSERT(m_Type == ALLOCATION_TYPE_BLOCK);
    5280 
    5281  // Move mapping reference counter from old block to new block.
    5282  if(block != m_BlockAllocation.m_Block)
    5283  {
    5284  uint32_t mapRefCount = m_MapCount & ~MAP_COUNT_FLAG_PERSISTENT_MAP;
    5285  if(IsPersistentMap())
    5286  ++mapRefCount;
    5287  m_BlockAllocation.m_Block->Unmap(hAllocator, mapRefCount);
    5288  block->Map(hAllocator, mapRefCount, VMA_NULL);
    5289  }
    5290 
    5291  m_BlockAllocation.m_Block = block;
    5292  m_BlockAllocation.m_Offset = offset;
    5293 }
    5294 
    5295 VkDeviceSize VmaAllocation_T::GetOffset() const
    5296 {
    5297  switch(m_Type)
    5298  {
    5299  case ALLOCATION_TYPE_BLOCK:
    5300  return m_BlockAllocation.m_Offset;
    5301  case ALLOCATION_TYPE_DEDICATED:
    5302  return 0;
    5303  default:
    5304  VMA_ASSERT(0);
    5305  return 0;
    5306  }
    5307 }
    5308 
    5309 VkDeviceMemory VmaAllocation_T::GetMemory() const
    5310 {
    5311  switch(m_Type)
    5312  {
    5313  case ALLOCATION_TYPE_BLOCK:
    5314  return m_BlockAllocation.m_Block->GetDeviceMemory();
    5315  case ALLOCATION_TYPE_DEDICATED:
    5316  return m_DedicatedAllocation.m_hMemory;
    5317  default:
    5318  VMA_ASSERT(0);
    5319  return VK_NULL_HANDLE;
    5320  }
    5321 }
    5322 
    5323 uint32_t VmaAllocation_T::GetMemoryTypeIndex() const
    5324 {
    5325  switch(m_Type)
    5326  {
    5327  case ALLOCATION_TYPE_BLOCK:
    5328  return m_BlockAllocation.m_Block->GetMemoryTypeIndex();
    5329  case ALLOCATION_TYPE_DEDICATED:
    5330  return m_DedicatedAllocation.m_MemoryTypeIndex;
    5331  default:
    5332  VMA_ASSERT(0);
    5333  return UINT32_MAX;
    5334  }
    5335 }
    5336 
    5337 void* VmaAllocation_T::GetMappedData() const
    5338 {
    5339  switch(m_Type)
    5340  {
    5341  case ALLOCATION_TYPE_BLOCK:
    5342  if(m_MapCount != 0)
    5343  {
    5344  void* pBlockData = m_BlockAllocation.m_Block->GetMappedData();
    5345  VMA_ASSERT(pBlockData != VMA_NULL);
    5346  return (char*)pBlockData + m_BlockAllocation.m_Offset;
    5347  }
    5348  else
    5349  {
    5350  return VMA_NULL;
    5351  }
    5352  break;
    5353  case ALLOCATION_TYPE_DEDICATED:
    5354  VMA_ASSERT((m_DedicatedAllocation.m_pMappedData != VMA_NULL) == (m_MapCount != 0));
    5355  return m_DedicatedAllocation.m_pMappedData;
    5356  default:
    5357  VMA_ASSERT(0);
    5358  return VMA_NULL;
    5359  }
    5360 }
    5361 
    5362 bool VmaAllocation_T::CanBecomeLost() const
    5363 {
    5364  switch(m_Type)
    5365  {
    5366  case ALLOCATION_TYPE_BLOCK:
    5367  return m_BlockAllocation.m_CanBecomeLost;
    5368  case ALLOCATION_TYPE_DEDICATED:
    5369  return false;
    5370  default:
    5371  VMA_ASSERT(0);
    5372  return false;
    5373  }
    5374 }
    5375 
    5376 VmaPool VmaAllocation_T::GetPool() const
    5377 {
    5378  VMA_ASSERT(m_Type == ALLOCATION_TYPE_BLOCK);
    5379  return m_BlockAllocation.m_hPool;
    5380 }
    5381 
    5382 bool VmaAllocation_T::MakeLost(uint32_t currentFrameIndex, uint32_t frameInUseCount)
    5383 {
    5384  VMA_ASSERT(CanBecomeLost());
    5385 
    5386  /*
    5387  Warning: This is a carefully designed algorithm.
    5388  Do not modify unless you really know what you're doing :)
    5389  */
    5390  uint32_t localLastUseFrameIndex = GetLastUseFrameIndex();
    5391  for(;;)
    5392  {
    5393  if(localLastUseFrameIndex == VMA_FRAME_INDEX_LOST)
    5394  {
    5395  VMA_ASSERT(0);
    5396  return false;
    5397  }
    5398  else if(localLastUseFrameIndex + frameInUseCount >= currentFrameIndex)
    5399  {
    5400  return false;
    5401  }
    5402  else // Last use time earlier than current time.
    5403  {
    5404  if(CompareExchangeLastUseFrameIndex(localLastUseFrameIndex, VMA_FRAME_INDEX_LOST))
    5405  {
    5406  // Setting hAllocation.LastUseFrameIndex atomic to VMA_FRAME_INDEX_LOST is enough to mark it as LOST.
    5407  // Calling code just needs to unregister this allocation in owning VmaDeviceMemoryBlock.
    5408  return true;
    5409  }
    5410  }
    5411  }
    5412 }
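MakeLost above is a classic compare-and-swap retry loop: read the last-use frame, bail out if the allocation is already lost or is still within frameInUseCount frames of the current frame, otherwise try to swap in the LOST marker and retry if another thread raced in between. A standalone sketch of the same shape with std::atomic (names hypothetical):

```cpp
#include <atomic>
#include <cstdint>

constexpr uint32_t FRAME_INDEX_LOST = UINT32_MAX;

// Retry until the last-use frame is either too recent (fail) or has been
// atomically replaced with the LOST marker (success).
inline bool TryMakeLost(std::atomic<uint32_t>& lastUseFrame,
                        uint32_t currentFrame, uint32_t frameInUseCount)
{
    uint32_t localLastUse = lastUseFrame.load();
    for(;;)
    {
        if(localLastUse == FRAME_INDEX_LOST)
            return false; // already lost
        if(localLastUse + frameInUseCount >= currentFrame)
            return false; // still potentially in use
        // On failure compare_exchange_weak reloads localLastUse, so the loop
        // re-evaluates the freshly observed value before retrying.
        if(lastUseFrame.compare_exchange_weak(localLastUse, FRAME_INDEX_LOST))
            return true;
    }
}
```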
    5413 
    5414 #if VMA_STATS_STRING_ENABLED
    5415 
    5416 // Names corresponding to values of enum VmaSuballocationType.
    5417 static const char* VMA_SUBALLOCATION_TYPE_NAMES[] = {
    5418  "FREE",
    5419  "UNKNOWN",
    5420  "BUFFER",
    5421  "IMAGE_UNKNOWN",
    5422  "IMAGE_LINEAR",
    5423  "IMAGE_OPTIMAL",
    5424 };
    5425 
    5426 void VmaAllocation_T::PrintParameters(class VmaJsonWriter& json) const
    5427 {
    5428  json.WriteString("Type");
    5429  json.WriteString(VMA_SUBALLOCATION_TYPE_NAMES[m_SuballocationType]);
    5430 
    5431  json.WriteString("Size");
    5432  json.WriteNumber(m_Size);
    5433 
    5434  if(m_pUserData != VMA_NULL)
    5435  {
    5436  json.WriteString("UserData");
    5437  if(IsUserDataString())
    5438  {
    5439  json.WriteString((const char*)m_pUserData);
    5440  }
    5441  else
    5442  {
    5443  json.BeginString();
    5444  json.ContinueString_Pointer(m_pUserData);
    5445  json.EndString();
    5446  }
    5447  }
    5448 
    5449  json.WriteString("CreationFrameIndex");
    5450  json.WriteNumber(m_CreationFrameIndex);
    5451 
    5452  json.WriteString("LastUseFrameIndex");
    5453  json.WriteNumber(GetLastUseFrameIndex());
    5454 
    5455  if(m_BufferImageUsage != 0)
    5456  {
    5457  json.WriteString("Usage");
    5458  json.WriteNumber(m_BufferImageUsage);
    5459  }
    5460 }
    5461 
    5462 #endif
    5463 
    5464 void VmaAllocation_T::FreeUserDataString(VmaAllocator hAllocator)
    5465 {
    5466  VMA_ASSERT(IsUserDataString());
    5467  if(m_pUserData != VMA_NULL)
    5468  {
    5469  char* const oldStr = (char*)m_pUserData;
    5470  const size_t oldStrLen = strlen(oldStr);
    5471  vma_delete_array(hAllocator, oldStr, oldStrLen + 1);
    5472  m_pUserData = VMA_NULL;
    5473  }
    5474 }
    5475 
    5476 void VmaAllocation_T::BlockAllocMap()
    5477 {
    5478  VMA_ASSERT(GetType() == ALLOCATION_TYPE_BLOCK);
    5479 
    5480  if((m_MapCount & ~MAP_COUNT_FLAG_PERSISTENT_MAP) < 0x7F)
    5481  {
    5482  ++m_MapCount;
    5483  }
    5484  else
    5485  {
    5486  VMA_ASSERT(0 && "Allocation mapped too many times simultaneously.");
    5487  }
    5488 }
    5489 
    5490 void VmaAllocation_T::BlockAllocUnmap()
    5491 {
    5492  VMA_ASSERT(GetType() == ALLOCATION_TYPE_BLOCK);
    5493 
    5494  if((m_MapCount & ~MAP_COUNT_FLAG_PERSISTENT_MAP) != 0)
    5495  {
    5496  --m_MapCount;
    5497  }
    5498  else
    5499  {
    5500  VMA_ASSERT(0 && "Unmapping allocation not previously mapped.");
    5501  }
    5502 }
    5503 
    5504 VkResult VmaAllocation_T::DedicatedAllocMap(VmaAllocator hAllocator, void** ppData)
    5505 {
    5506  VMA_ASSERT(GetType() == ALLOCATION_TYPE_DEDICATED);
    5507 
    5508  if(m_MapCount != 0)
    5509  {
    5510  if((m_MapCount & ~MAP_COUNT_FLAG_PERSISTENT_MAP) < 0x7F)
    5511  {
    5512  VMA_ASSERT(m_DedicatedAllocation.m_pMappedData != VMA_NULL);
    5513  *ppData = m_DedicatedAllocation.m_pMappedData;
    5514  ++m_MapCount;
    5515  return VK_SUCCESS;
    5516  }
    5517  else
    5518  {
    5519  VMA_ASSERT(0 && "Dedicated allocation mapped too many times simultaneously.");
    5520  return VK_ERROR_MEMORY_MAP_FAILED;
    5521  }
    5522  }
    5523  else
    5524  {
    5525  VkResult result = (*hAllocator->GetVulkanFunctions().vkMapMemory)(
    5526  hAllocator->m_hDevice,
    5527  m_DedicatedAllocation.m_hMemory,
    5528  0, // offset
    5529  VK_WHOLE_SIZE,
    5530  0, // flags
    5531  ppData);
    5532  if(result == VK_SUCCESS)
    5533  {
    5534  m_DedicatedAllocation.m_pMappedData = *ppData;
    5535  m_MapCount = 1;
    5536  }
    5537  return result;
    5538  }
    5539 }
    5540 
    5541 void VmaAllocation_T::DedicatedAllocUnmap(VmaAllocator hAllocator)
    5542 {
    5543  VMA_ASSERT(GetType() == ALLOCATION_TYPE_DEDICATED);
    5544 
    5545  if((m_MapCount & ~MAP_COUNT_FLAG_PERSISTENT_MAP) != 0)
    5546  {
    5547  --m_MapCount;
    5548  if(m_MapCount == 0)
    5549  {
    5550  m_DedicatedAllocation.m_pMappedData = VMA_NULL;
    5551  (*hAllocator->GetVulkanFunctions().vkUnmapMemory)(
    5552  hAllocator->m_hDevice,
    5553  m_DedicatedAllocation.m_hMemory);
    5554  }
    5555  }
    5556  else
    5557  {
    5558  VMA_ASSERT(0 && "Unmapping dedicated allocation not previously mapped.");
    5559  }
    5560 }
    5561 
    5562 #if VMA_STATS_STRING_ENABLED
    5563 
    5564 static void VmaPrintStatInfo(VmaJsonWriter& json, const VmaStatInfo& stat)
    5565 {
    5566  json.BeginObject();
    5567 
    5568  json.WriteString("Blocks");
    5569  json.WriteNumber(stat.blockCount);
    5570 
    5571  json.WriteString("Allocations");
    5572  json.WriteNumber(stat.allocationCount);
    5573 
    5574  json.WriteString("UnusedRanges");
    5575  json.WriteNumber(stat.unusedRangeCount);
    5576 
    5577  json.WriteString("UsedBytes");
    5578  json.WriteNumber(stat.usedBytes);
    5579 
    5580  json.WriteString("UnusedBytes");
    5581  json.WriteNumber(stat.unusedBytes);
    5582 
    5583  if(stat.allocationCount > 1)
    5584  {
    5585  json.WriteString("AllocationSize");
    5586  json.BeginObject(true);
    5587  json.WriteString("Min");
    5588  json.WriteNumber(stat.allocationSizeMin);
    5589  json.WriteString("Avg");
    5590  json.WriteNumber(stat.allocationSizeAvg);
    5591  json.WriteString("Max");
    5592  json.WriteNumber(stat.allocationSizeMax);
    5593  json.EndObject();
    5594  }
    5595 
    5596  if(stat.unusedRangeCount > 1)
    5597  {
    5598  json.WriteString("UnusedRangeSize");
    5599  json.BeginObject(true);
    5600  json.WriteString("Min");
    5601  json.WriteNumber(stat.unusedRangeSizeMin);
    5602  json.WriteString("Avg");
    5603  json.WriteNumber(stat.unusedRangeSizeAvg);
    5604  json.WriteString("Max");
    5605  json.WriteNumber(stat.unusedRangeSizeMax);
    5606  json.EndObject();
    5607  }
    5608 
    5609  json.EndObject();
    5610 }
    5611 
    5612 #endif // #if VMA_STATS_STRING_ENABLED
    5613 
    5614 struct VmaSuballocationItemSizeLess
    5615 {
    5616  bool operator()(
    5617  const VmaSuballocationList::iterator lhs,
    5618  const VmaSuballocationList::iterator rhs) const
    5619  {
    5620  return lhs->size < rhs->size;
    5621  }
    5622  bool operator()(
    5623  const VmaSuballocationList::iterator lhs,
    5624  VkDeviceSize rhsSize) const
    5625  {
    5626  return lhs->size < rhsSize;
    5627  }
    5628 };
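VmaSuballocationItemSizeLess carries two overloads so one functor can both sort free ranges by size and serve binary searches that compare an element directly against a raw VkDeviceSize, without constructing a dummy suballocation just to search. A minimal sketch of the pattern with std::lower_bound (types and helper hypothetical):

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical stand-in for a free suballocation: only its size matters here.
struct Sub { uint64_t size; };

// Two overloads, as in VmaSuballocationItemSizeLess: element-vs-element for
// sorting, element-vs-size for searching.
struct SizeLess
{
    bool operator()(const Sub& lhs, const Sub& rhs) const { return lhs.size < rhs.size; }
    bool operator()(const Sub& lhs, uint64_t rhsSize) const { return lhs.size < rhsSize; }
};

// Index of the first free range with size >= wanted (vector sorted ascending);
// returns the vector's size when no range is large enough.
inline std::size_t FirstFitIndex(const std::vector<Sub>& sortedBySize, uint64_t wanted)
{
    return static_cast<std::size_t>(
        std::lower_bound(sortedBySize.begin(), sortedBySize.end(), wanted, SizeLess())
        - sortedBySize.begin());
}
```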
    5629 
    5631 // class VmaBlockMetadata
    5632 
    5633 VmaBlockMetadata::VmaBlockMetadata(VmaAllocator hAllocator) :
    5634  m_Size(0),
    5635  m_FreeCount(0),
    5636  m_SumFreeSize(0),
    5637  m_Suballocations(VmaStlAllocator<VmaSuballocation>(hAllocator->GetAllocationCallbacks())),
    5638  m_FreeSuballocationsBySize(VmaStlAllocator<VmaSuballocationList::iterator>(hAllocator->GetAllocationCallbacks()))
    5639 {
    5640 }
    5641 
    5642 VmaBlockMetadata::~VmaBlockMetadata()
    5643 {
    5644 }
    5645 
    5646 void VmaBlockMetadata::Init(VkDeviceSize size)
    5647 {
    5648  m_Size = size;
    5649  m_FreeCount = 1;
    5650  m_SumFreeSize = size;
    5651 
    5652  VmaSuballocation suballoc = {};
    5653  suballoc.offset = 0;
    5654  suballoc.size = size;
    5655  suballoc.type = VMA_SUBALLOCATION_TYPE_FREE;
    5656  suballoc.hAllocation = VK_NULL_HANDLE;
    5657 
    5658  m_Suballocations.push_back(suballoc);
    5659  VmaSuballocationList::iterator suballocItem = m_Suballocations.end();
    5660  --suballocItem;
    5661  m_FreeSuballocationsBySize.push_back(suballocItem);
    5662 }
    5663 
    5664 bool VmaBlockMetadata::Validate() const
    5665 {
    5666  if(m_Suballocations.empty())
    5667  {
    5668  return false;
    5669  }
    5670 
    5671  // Expected offset of new suballocation as calculated from previous ones.
    5672  VkDeviceSize calculatedOffset = 0;
    5673  // Expected number of free suballocations as calculated from traversing their list.
    5674  uint32_t calculatedFreeCount = 0;
    5675  // Expected sum size of free suballocations as calculated from traversing their list.
    5676  VkDeviceSize calculatedSumFreeSize = 0;
    5677  // Expected number of free suballocations that should be registered in
    5678  // m_FreeSuballocationsBySize calculated from traversing their list.
    5679  size_t freeSuballocationsToRegister = 0;
    5680  // True if previously visited suballocation was free.
    5681  bool prevFree = false;
    5682 
    5683  for(VmaSuballocationList::const_iterator suballocItem = m_Suballocations.cbegin();
    5684  suballocItem != m_Suballocations.cend();
    5685  ++suballocItem)
    5686  {
    5687  const VmaSuballocation& subAlloc = *suballocItem;
    5688 
    5689  // Actual offset of this suballocation doesn't match expected one.
    5690  if(subAlloc.offset != calculatedOffset)
    5691  {
    5692  return false;
    5693  }
    5694 
    5695  const bool currFree = (subAlloc.type == VMA_SUBALLOCATION_TYPE_FREE);
    5696  // Two adjacent free suballocations are invalid. They should be merged.
    5697  if(prevFree && currFree)
    5698  {
    5699  return false;
    5700  }
    5701 
    5702  if(currFree != (subAlloc.hAllocation == VK_NULL_HANDLE))
    5703  {
    5704  return false;
    5705  }
    5706 
    5707  if(currFree)
    5708  {
    5709  calculatedSumFreeSize += subAlloc.size;
    5710  ++calculatedFreeCount;
    5711  if(subAlloc.size >= VMA_MIN_FREE_SUBALLOCATION_SIZE_TO_REGISTER)
    5712  {
    5713  ++freeSuballocationsToRegister;
    5714  }
    5715 
    5716  // Margin required between allocations - every free space must be at least that large.
    5717  if(subAlloc.size < VMA_DEBUG_MARGIN)
    5718  {
    5719  return false;
    5720  }
    5721  }
    5722  else
    5723  {
    5724  if(subAlloc.hAllocation->GetOffset() != subAlloc.offset)
    5725  {
    5726  return false;
    5727  }
    5728  if(subAlloc.hAllocation->GetSize() != subAlloc.size)
    5729  {
    5730  return false;
    5731  }
    5732 
    5733  // Margin required between allocations - previous allocation must be free.
    5734  if(VMA_DEBUG_MARGIN > 0 && !prevFree)
    5735  {
    5736  return false;
    5737  }
    5738  }
    5739 
    5740  calculatedOffset += subAlloc.size;
    5741  prevFree = currFree;
    5742  }
    5743 
    5744  // Number of free suballocations registered in m_FreeSuballocationsBySize doesn't
    5745  // match expected one.
    5746  if(m_FreeSuballocationsBySize.size() != freeSuballocationsToRegister)
    5747  {
    5748  return false;
    5749  }
    5750 
    5751  VkDeviceSize lastSize = 0;
    5752  for(size_t i = 0; i < m_FreeSuballocationsBySize.size(); ++i)
    5753  {
    5754  VmaSuballocationList::iterator suballocItem = m_FreeSuballocationsBySize[i];
    5755 
    5756  // Only free suballocations can be registered in m_FreeSuballocationsBySize.
    5757  if(suballocItem->type != VMA_SUBALLOCATION_TYPE_FREE)
    5758  {
    5759  return false;
    5760  }
    5761  // They must be sorted by size ascending.
    5762  if(suballocItem->size < lastSize)
    5763  {
    5764  return false;
    5765  }
    5766 
    5767  lastSize = suballocItem->size;
    5768  }
    5769 
    5770  // Check if totals match calculated values.
    5771  if(!ValidateFreeSuballocationList() ||
    5772  (calculatedOffset != m_Size) ||
    5773  (calculatedSumFreeSize != m_SumFreeSize) ||
    5774  (calculatedFreeCount != m_FreeCount))
    5775  {
    5776  return false;
    5777  }
    5778 
    5779  return true;
    5780 }
    5781 
    5782 VkDeviceSize VmaBlockMetadata::GetUnusedRangeSizeMax() const
    5783 {
    5784  if(!m_FreeSuballocationsBySize.empty())
    5785  {
    5786  return m_FreeSuballocationsBySize.back()->size;
    5787  }
    5788  else
    5789  {
    5790  return 0;
    5791  }
    5792 }
    5793 
    5794 bool VmaBlockMetadata::IsEmpty() const
    5795 {
    5796  return (m_Suballocations.size() == 1) && (m_FreeCount == 1);
    5797 }
    5798 
    5799 void VmaBlockMetadata::CalcAllocationStatInfo(VmaStatInfo& outInfo) const
    5800 {
    5801  outInfo.blockCount = 1;
    5802 
    5803  const uint32_t rangeCount = (uint32_t)m_Suballocations.size();
    5804  outInfo.allocationCount = rangeCount - m_FreeCount;
    5805  outInfo.unusedRangeCount = m_FreeCount;
    5806 
    5807  outInfo.unusedBytes = m_SumFreeSize;
    5808  outInfo.usedBytes = m_Size - outInfo.unusedBytes;
    5809 
    5810  outInfo.allocationSizeMin = UINT64_MAX;
    5811  outInfo.allocationSizeMax = 0;
    5812  outInfo.unusedRangeSizeMin = UINT64_MAX;
    5813  outInfo.unusedRangeSizeMax = 0;
    5814 
    5815  for(VmaSuballocationList::const_iterator suballocItem = m_Suballocations.cbegin();
    5816  suballocItem != m_Suballocations.cend();
    5817  ++suballocItem)
    5818  {
    5819  const VmaSuballocation& suballoc = *suballocItem;
    5820  if(suballoc.type != VMA_SUBALLOCATION_TYPE_FREE)
    5821  {
    5822  outInfo.allocationSizeMin = VMA_MIN(outInfo.allocationSizeMin, suballoc.size);
    5823  outInfo.allocationSizeMax = VMA_MAX(outInfo.allocationSizeMax, suballoc.size);
    5824  }
    5825  else
    5826  {
    5827  outInfo.unusedRangeSizeMin = VMA_MIN(outInfo.unusedRangeSizeMin, suballoc.size);
    5828  outInfo.unusedRangeSizeMax = VMA_MAX(outInfo.unusedRangeSizeMax, suballoc.size);
    5829  }
    5830  }
    5831 }
    5832 
    5833 void VmaBlockMetadata::AddPoolStats(VmaPoolStats& inoutStats) const
    5834 {
    5835  const uint32_t rangeCount = (uint32_t)m_Suballocations.size();
    5836 
    5837  inoutStats.size += m_Size;
    5838  inoutStats.unusedSize += m_SumFreeSize;
    5839  inoutStats.allocationCount += rangeCount - m_FreeCount;
    5840  inoutStats.unusedRangeCount += m_FreeCount;
    5841  inoutStats.unusedRangeSizeMax = VMA_MAX(inoutStats.unusedRangeSizeMax, GetUnusedRangeSizeMax());
    5842 }
    5843 
    5844 #if VMA_STATS_STRING_ENABLED
    5845 
    5846 void VmaBlockMetadata::PrintDetailedMap(class VmaJsonWriter& json) const
    5847 {
    5848  json.BeginObject();
    5849 
    5850  json.WriteString("TotalBytes");
    5851  json.WriteNumber(m_Size);
    5852 
    5853  json.WriteString("UnusedBytes");
    5854  json.WriteNumber(m_SumFreeSize);
    5855 
    5856  json.WriteString("Allocations");
    5857  json.WriteNumber((uint64_t)m_Suballocations.size() - m_FreeCount);
    5858 
    5859  json.WriteString("UnusedRanges");
    5860  json.WriteNumber(m_FreeCount);
    5861 
    5862  json.WriteString("Suballocations");
    5863  json.BeginArray();
    5864  size_t i = 0;
    5865  for(VmaSuballocationList::const_iterator suballocItem = m_Suballocations.cbegin();
    5866  suballocItem != m_Suballocations.cend();
    5867  ++suballocItem, ++i)
    5868  {
    5869  json.BeginObject(true);
    5870 
    5871  json.WriteString("Offset");
    5872  json.WriteNumber(suballocItem->offset);
    5873 
    5874  if(suballocItem->type == VMA_SUBALLOCATION_TYPE_FREE)
    5875  {
    5876  json.WriteString("Type");
    5877  json.WriteString(VMA_SUBALLOCATION_TYPE_NAMES[VMA_SUBALLOCATION_TYPE_FREE]);
    5878 
    5879  json.WriteString("Size");
    5880  json.WriteNumber(suballocItem->size);
    5881  }
    5882  else
    5883  {
    5884  suballocItem->hAllocation->PrintParameters(json);
    5885  }
    5886 
    5887  json.EndObject();
    5888  }
    5889  json.EndArray();
    5890 
    5891  json.EndObject();
    5892 }
    5893 
    5894 #endif // #if VMA_STATS_STRING_ENABLED
    5895 
    5896 /*
    5897 How many suitable free suballocations to analyze before choosing best one.
    5898 - Set to 1 to use First-Fit algorithm - first suitable free suballocation will
    5899  be chosen.
    5900 - Set to UINT32_MAX to use Best-Fit/Worst-Fit algorithm - all suitable free
    5901  suballocations will be analyzed and the best one will be chosen.
    5902 - Any other value is also acceptable.
    5903 */
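The comment above can be made concrete with a minimal sketch (hypothetical names, not the library's actual code): because the free list is kept sorted by size ascending, Best-Fit reduces to a binary search for the first free range large enough to hold the request.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical illustration of Best-Fit on a size-sorted free list.
struct FreeRange { uint64_t offset; uint64_t size; };

// Returns index of the smallest free range with size >= needed,
// or SIZE_MAX if no range is large enough.
size_t FindBestFit(const std::vector<FreeRange>& bySizeAsc, uint64_t needed)
{
    auto it = std::lower_bound(
        bySizeAsc.begin(), bySizeAsc.end(), needed,
        [](const FreeRange& r, uint64_t n) { return r.size < n; });
    return it != bySizeAsc.end() ? size_t(it - bySizeAsc.begin()) : SIZE_MAX;
}
```

First-Fit in block order would instead scan the suballocation list front to back and take the first hit; the sorted-by-size layout above trades that for an O(log n) Best-Fit lookup.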
    5904 //static const uint32_t MAX_SUITABLE_SUBALLOCATIONS_TO_CHECK = 8;
    5905 
    5906 bool VmaBlockMetadata::CreateAllocationRequest(
    5907  uint32_t currentFrameIndex,
    5908  uint32_t frameInUseCount,
    5909  VkDeviceSize bufferImageGranularity,
    5910  VkDeviceSize allocSize,
    5911  VkDeviceSize allocAlignment,
    5912  VmaSuballocationType allocType,
    5913  bool canMakeOtherLost,
    5914  VmaAllocationRequest* pAllocationRequest)
    5915 {
    5916  VMA_ASSERT(allocSize > 0);
    5917  VMA_ASSERT(allocType != VMA_SUBALLOCATION_TYPE_FREE);
    5918  VMA_ASSERT(pAllocationRequest != VMA_NULL);
    5919  VMA_HEAVY_ASSERT(Validate());
    5920 
    5921  // There is not enough total free space in this block to fulfill the request: Early return.
    5922  if(canMakeOtherLost == false && m_SumFreeSize < allocSize + 2 * VMA_DEBUG_MARGIN)
    5923  {
    5924  return false;
    5925  }
    5926 
    5927  // New algorithm, efficiently searching freeSuballocationsBySize.
    5928  const size_t freeSuballocCount = m_FreeSuballocationsBySize.size();
    5929  if(freeSuballocCount > 0)
    5930  {
    5931  if(VMA_BEST_FIT)
    5932  {
    5933  // Find first free suballocation with size not less than allocSize + 2 * VMA_DEBUG_MARGIN.
    5934  VmaSuballocationList::iterator* const it = VmaBinaryFindFirstNotLess(
    5935  m_FreeSuballocationsBySize.data(),
    5936  m_FreeSuballocationsBySize.data() + freeSuballocCount,
    5937  allocSize + 2 * VMA_DEBUG_MARGIN,
    5938  VmaSuballocationItemSizeLess());
    5939  size_t index = it - m_FreeSuballocationsBySize.data();
    5940  for(; index < freeSuballocCount; ++index)
    5941  {
    5942  if(CheckAllocation(
    5943  currentFrameIndex,
    5944  frameInUseCount,
    5945  bufferImageGranularity,
    5946  allocSize,
    5947  allocAlignment,
    5948  allocType,
    5949  m_FreeSuballocationsBySize[index],
    5950  false, // canMakeOtherLost
    5951  &pAllocationRequest->offset,
    5952  &pAllocationRequest->itemsToMakeLostCount,
    5953  &pAllocationRequest->sumFreeSize,
    5954  &pAllocationRequest->sumItemSize))
    5955  {
    5956  pAllocationRequest->item = m_FreeSuballocationsBySize[index];
    5957  return true;
    5958  }
    5959  }
    5960  }
    5961  else
    5962  {
    5963  // Search starting from biggest suballocations.
    5964  for(size_t index = freeSuballocCount; index--; )
    5965  {
    5966  if(CheckAllocation(
    5967  currentFrameIndex,
    5968  frameInUseCount,
    5969  bufferImageGranularity,
    5970  allocSize,
    5971  allocAlignment,
    5972  allocType,
    5973  m_FreeSuballocationsBySize[index],
    5974  false, // canMakeOtherLost
    5975  &pAllocationRequest->offset,
    5976  &pAllocationRequest->itemsToMakeLostCount,
    5977  &pAllocationRequest->sumFreeSize,
    5978  &pAllocationRequest->sumItemSize))
    5979  {
    5980  pAllocationRequest->item = m_FreeSuballocationsBySize[index];
    5981  return true;
    5982  }
    5983  }
    5984  }
    5985  }
    5986 
    5987  if(canMakeOtherLost)
    5988  {
    5989  // Brute-force algorithm. TODO: Come up with something better.
    5990 
    5991  pAllocationRequest->sumFreeSize = VK_WHOLE_SIZE;
    5992  pAllocationRequest->sumItemSize = VK_WHOLE_SIZE;
    5993 
    5994  VmaAllocationRequest tmpAllocRequest = {};
    5995  for(VmaSuballocationList::iterator suballocIt = m_Suballocations.begin();
    5996  suballocIt != m_Suballocations.end();
    5997  ++suballocIt)
    5998  {
    5999  if(suballocIt->type == VMA_SUBALLOCATION_TYPE_FREE ||
    6000  suballocIt->hAllocation->CanBecomeLost())
    6001  {
    6002  if(CheckAllocation(
    6003  currentFrameIndex,
    6004  frameInUseCount,
    6005  bufferImageGranularity,
    6006  allocSize,
    6007  allocAlignment,
    6008  allocType,
    6009  suballocIt,
    6010  canMakeOtherLost,
    6011  &tmpAllocRequest.offset,
    6012  &tmpAllocRequest.itemsToMakeLostCount,
    6013  &tmpAllocRequest.sumFreeSize,
    6014  &tmpAllocRequest.sumItemSize))
    6015  {
    6016  tmpAllocRequest.item = suballocIt;
    6017 
    6018  if(tmpAllocRequest.CalcCost() < pAllocationRequest->CalcCost())
    6019  {
    6020  *pAllocationRequest = tmpAllocRequest;
    6021  }
    6022  }
    6023  }
    6024  }
    6025 
    6026  if(pAllocationRequest->sumItemSize != VK_WHOLE_SIZE)
    6027  {
    6028  return true;
    6029  }
    6030  }
    6031 
    6032  return false;
    6033 }
    6034 
    6035 bool VmaBlockMetadata::MakeRequestedAllocationsLost(
    6036  uint32_t currentFrameIndex,
    6037  uint32_t frameInUseCount,
    6038  VmaAllocationRequest* pAllocationRequest)
    6039 {
    6040  while(pAllocationRequest->itemsToMakeLostCount > 0)
    6041  {
    6042  if(pAllocationRequest->item->type == VMA_SUBALLOCATION_TYPE_FREE)
    6043  {
    6044  ++pAllocationRequest->item;
    6045  }
    6046  VMA_ASSERT(pAllocationRequest->item != m_Suballocations.end());
    6047  VMA_ASSERT(pAllocationRequest->item->hAllocation != VK_NULL_HANDLE);
    6048  VMA_ASSERT(pAllocationRequest->item->hAllocation->CanBecomeLost());
    6049  if(pAllocationRequest->item->hAllocation->MakeLost(currentFrameIndex, frameInUseCount))
    6050  {
    6051  pAllocationRequest->item = FreeSuballocation(pAllocationRequest->item);
    6052  --pAllocationRequest->itemsToMakeLostCount;
    6053  }
    6054  else
    6055  {
    6056  return false;
    6057  }
    6058  }
    6059 
    6060  VMA_HEAVY_ASSERT(Validate());
    6061  VMA_ASSERT(pAllocationRequest->item != m_Suballocations.end());
    6062  VMA_ASSERT(pAllocationRequest->item->type == VMA_SUBALLOCATION_TYPE_FREE);
    6063 
    6064  return true;
    6065 }
    6066 
    6067 uint32_t VmaBlockMetadata::MakeAllocationsLost(uint32_t currentFrameIndex, uint32_t frameInUseCount)
    6068 {
    6069  uint32_t lostAllocationCount = 0;
    6070  for(VmaSuballocationList::iterator it = m_Suballocations.begin();
    6071  it != m_Suballocations.end();
    6072  ++it)
    6073  {
    6074  if(it->type != VMA_SUBALLOCATION_TYPE_FREE &&
    6075  it->hAllocation->CanBecomeLost() &&
    6076  it->hAllocation->MakeLost(currentFrameIndex, frameInUseCount))
    6077  {
    6078  it = FreeSuballocation(it);
    6079  ++lostAllocationCount;
    6080  }
    6081  }
    6082  return lostAllocationCount;
    6083 }
    6084 
    6085 VkResult VmaBlockMetadata::CheckCorruption(const void* pBlockData)
    6086 {
    6087  for(VmaSuballocationList::iterator it = m_Suballocations.begin();
    6088  it != m_Suballocations.end();
    6089  ++it)
    6090  {
    6091  if(it->type != VMA_SUBALLOCATION_TYPE_FREE)
    6092  {
    6093  if(!VmaValidateMagicValue(pBlockData, it->offset - VMA_DEBUG_MARGIN))
    6094  {
    6095  VMA_ASSERT(0 && "MEMORY CORRUPTION DETECTED BEFORE VALIDATED ALLOCATION!");
    6096  return VK_ERROR_VALIDATION_FAILED_EXT;
    6097  }
    6098  if(!VmaValidateMagicValue(pBlockData, it->offset + it->size))
    6099  {
    6100  VMA_ASSERT(0 && "MEMORY CORRUPTION DETECTED AFTER VALIDATED ALLOCATION!");
    6101  return VK_ERROR_VALIDATION_FAILED_EXT;
    6102  }
    6103  }
    6104  }
    6105 
    6106  return VK_SUCCESS;
    6107 }
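The corruption check above relies on magic values written into the debug margins around each allocation. A minimal standalone sketch of that technique (helper names are hypothetical, not the library's API):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical magic pattern; any fixed 4-byte value works for illustration.
static const uint32_t kMagic = 0x7F84E666u;

// Write the magic value into the margin at the given byte offset.
void WriteMagic(void* blockData, size_t atOffset)
{
    std::memcpy(static_cast<char*>(blockData) + atOffset, &kMagic, sizeof(kMagic));
}

// Re-read the margin; any out-of-bounds write into it changes the value.
bool ValidateMagic(const void* blockData, size_t atOffset)
{
    uint32_t v;
    std::memcpy(&v, static_cast<const char*>(blockData) + atOffset, sizeof(v));
    return v == kMagic;
}
```

Validation before the allocation's offset catches underruns from the previous allocation; validation at offset + size catches overruns from this one.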
    6108 
    6109 void VmaBlockMetadata::Alloc(
    6110  const VmaAllocationRequest& request,
    6111  VmaSuballocationType type,
    6112  VkDeviceSize allocSize,
    6113  VmaAllocation hAllocation)
    6114 {
    6115  VMA_ASSERT(request.item != m_Suballocations.end());
    6116  VmaSuballocation& suballoc = *request.item;
    6117  // Given suballocation is a free block.
    6118  VMA_ASSERT(suballoc.type == VMA_SUBALLOCATION_TYPE_FREE);
    6119  // Given offset is inside this suballocation.
    6120  VMA_ASSERT(request.offset >= suballoc.offset);
    6121  const VkDeviceSize paddingBegin = request.offset - suballoc.offset;
    6122  VMA_ASSERT(suballoc.size >= paddingBegin + allocSize);
    6123  const VkDeviceSize paddingEnd = suballoc.size - paddingBegin - allocSize;
    6124 
    6125  // Unregister this free suballocation from m_FreeSuballocationsBySize and update
    6126  // it to become used.
    6127  UnregisterFreeSuballocation(request.item);
    6128 
    6129  suballoc.offset = request.offset;
    6130  suballoc.size = allocSize;
    6131  suballoc.type = type;
    6132  suballoc.hAllocation = hAllocation;
    6133 
    6134  // If there are any free bytes remaining at the end, insert new free suballocation after current one.
    6135  if(paddingEnd)
    6136  {
    6137  VmaSuballocation paddingSuballoc = {};
    6138  paddingSuballoc.offset = request.offset + allocSize;
    6139  paddingSuballoc.size = paddingEnd;
    6140  paddingSuballoc.type = VMA_SUBALLOCATION_TYPE_FREE;
    6141  VmaSuballocationList::iterator next = request.item;
    6142  ++next;
    6143  const VmaSuballocationList::iterator paddingEndItem =
    6144  m_Suballocations.insert(next, paddingSuballoc);
    6145  RegisterFreeSuballocation(paddingEndItem);
    6146  }
    6147 
    6148  // If there are any free bytes remaining at the beginning, insert new free suballocation before current one.
    6149  if(paddingBegin)
    6150  {
    6151  VmaSuballocation paddingSuballoc = {};
    6152  paddingSuballoc.offset = request.offset - paddingBegin;
    6153  paddingSuballoc.size = paddingBegin;
    6154  paddingSuballoc.type = VMA_SUBALLOCATION_TYPE_FREE;
    6155  const VmaSuballocationList::iterator paddingBeginItem =
    6156  m_Suballocations.insert(request.item, paddingSuballoc);
    6157  RegisterFreeSuballocation(paddingBeginItem);
    6158  }
    6159 
    6160  // Update totals.
    6161  m_FreeCount = m_FreeCount - 1;
    6162  if(paddingBegin > 0)
    6163  {
    6164  ++m_FreeCount;
    6165  }
    6166  if(paddingEnd > 0)
    6167  {
    6168  ++m_FreeCount;
    6169  }
    6170  m_SumFreeSize -= allocSize;
    6171 }
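The split performed by Alloc() can be sketched in isolation (a simplified model with hypothetical names, not the library's code): a free range is carved into an optional leading padding, the allocation itself, and an optional trailing padding, where each non-empty padding becomes a new free range.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Simplified model of a suballocation for illustration.
struct Piece { uint64_t offset; uint64_t size; bool free; };

// Split `freeRange` so that [allocOffset, allocOffset + allocSize) becomes
// used; returns the resulting pieces in address order.
std::vector<Piece> SplitFreeRange(Piece freeRange, uint64_t allocOffset, uint64_t allocSize)
{
    std::vector<Piece> out;
    const uint64_t paddingBegin = allocOffset - freeRange.offset;
    const uint64_t paddingEnd = freeRange.size - paddingBegin - allocSize;
    if(paddingBegin > 0)
        out.push_back({freeRange.offset, paddingBegin, true});
    out.push_back({allocOffset, allocSize, false});
    if(paddingEnd > 0)
        out.push_back({allocOffset + allocSize, paddingEnd, true});
    return out;
}
```

This mirrors the bookkeeping above: the free count drops by one for the consumed range and rises by one for each non-empty padding inserted.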
    6172 
    6173 void VmaBlockMetadata::Free(const VmaAllocation allocation)
    6174 {
    6175  for(VmaSuballocationList::iterator suballocItem = m_Suballocations.begin();
    6176  suballocItem != m_Suballocations.end();
    6177  ++suballocItem)
    6178  {
    6179  VmaSuballocation& suballoc = *suballocItem;
    6180  if(suballoc.hAllocation == allocation)
    6181  {
    6182  FreeSuballocation(suballocItem);
    6183  VMA_HEAVY_ASSERT(Validate());
    6184  return;
    6185  }
    6186  }
    6187  VMA_ASSERT(0 && "Not found!");
    6188 }
    6189 
    6190 void VmaBlockMetadata::FreeAtOffset(VkDeviceSize offset)
    6191 {
    6192  for(VmaSuballocationList::iterator suballocItem = m_Suballocations.begin();
    6193  suballocItem != m_Suballocations.end();
    6194  ++suballocItem)
    6195  {
    6196  VmaSuballocation& suballoc = *suballocItem;
    6197  if(suballoc.offset == offset)
    6198  {
    6199  FreeSuballocation(suballocItem);
    6200  return;
    6201  }
    6202  }
    6203  VMA_ASSERT(0 && "Not found!");
    6204 }
    6205 
    6206 bool VmaBlockMetadata::ValidateFreeSuballocationList() const
    6207 {
    6208  VkDeviceSize lastSize = 0;
    6209  for(size_t i = 0, count = m_FreeSuballocationsBySize.size(); i < count; ++i)
    6210  {
    6211  const VmaSuballocationList::iterator it = m_FreeSuballocationsBySize[i];
    6212 
    6213  if(it->type != VMA_SUBALLOCATION_TYPE_FREE)
    6214  {
    6215  VMA_ASSERT(0);
    6216  return false;
    6217  }
    6218  if(it->size < VMA_MIN_FREE_SUBALLOCATION_SIZE_TO_REGISTER)
    6219  {
    6220  VMA_ASSERT(0);
    6221  return false;
    6222  }
    6223  if(it->size < lastSize)
    6224  {
    6225  VMA_ASSERT(0);
    6226  return false;
    6227  }
    6228 
    6229  lastSize = it->size;
    6230  }
    6231  return true;
    6232 }
    6233 
    6234 bool VmaBlockMetadata::CheckAllocation(
    6235  uint32_t currentFrameIndex,
    6236  uint32_t frameInUseCount,
    6237  VkDeviceSize bufferImageGranularity,
    6238  VkDeviceSize allocSize,
    6239  VkDeviceSize allocAlignment,
    6240  VmaSuballocationType allocType,
    6241  VmaSuballocationList::const_iterator suballocItem,
    6242  bool canMakeOtherLost,
    6243  VkDeviceSize* pOffset,
    6244  size_t* itemsToMakeLostCount,
    6245  VkDeviceSize* pSumFreeSize,
    6246  VkDeviceSize* pSumItemSize) const
    6247 {
    6248  VMA_ASSERT(allocSize > 0);
    6249  VMA_ASSERT(allocType != VMA_SUBALLOCATION_TYPE_FREE);
    6250  VMA_ASSERT(suballocItem != m_Suballocations.cend());
    6251  VMA_ASSERT(pOffset != VMA_NULL);
    6252 
    6253  *itemsToMakeLostCount = 0;
    6254  *pSumFreeSize = 0;
    6255  *pSumItemSize = 0;
    6256 
    6257  if(canMakeOtherLost)
    6258  {
    6259  if(suballocItem->type == VMA_SUBALLOCATION_TYPE_FREE)
    6260  {
    6261  *pSumFreeSize = suballocItem->size;
    6262  }
    6263  else
    6264  {
    6265  if(suballocItem->hAllocation->CanBecomeLost() &&
    6266  suballocItem->hAllocation->GetLastUseFrameIndex() + frameInUseCount < currentFrameIndex)
    6267  {
    6268  ++*itemsToMakeLostCount;
    6269  *pSumItemSize = suballocItem->size;
    6270  }
    6271  else
    6272  {
    6273  return false;
    6274  }
    6275  }
    6276 
    6277  // Remaining size is too small for this request: Early return.
    6278  if(m_Size - suballocItem->offset < allocSize)
    6279  {
    6280  return false;
    6281  }
    6282 
    6283  // Start from offset equal to beginning of this suballocation.
    6284  *pOffset = suballocItem->offset;
    6285 
    6286  // Apply VMA_DEBUG_MARGIN at the beginning.
    6287  if(VMA_DEBUG_MARGIN > 0)
    6288  {
    6289  *pOffset += VMA_DEBUG_MARGIN;
    6290  }
    6291 
    6292  // Apply alignment.
    6293  *pOffset = VmaAlignUp(*pOffset, allocAlignment);
    6294 
    6295  // Check previous suballocations for BufferImageGranularity conflicts.
    6296  // Make bigger alignment if necessary.
    6297  if(bufferImageGranularity > 1)
    6298  {
    6299  bool bufferImageGranularityConflict = false;
    6300  VmaSuballocationList::const_iterator prevSuballocItem = suballocItem;
    6301  while(prevSuballocItem != m_Suballocations.cbegin())
    6302  {
    6303  --prevSuballocItem;
    6304  const VmaSuballocation& prevSuballoc = *prevSuballocItem;
    6305  if(VmaBlocksOnSamePage(prevSuballoc.offset, prevSuballoc.size, *pOffset, bufferImageGranularity))
    6306  {
    6307  if(VmaIsBufferImageGranularityConflict(prevSuballoc.type, allocType))
    6308  {
    6309  bufferImageGranularityConflict = true;
    6310  break;
    6311  }
    6312  }
    6313  else
    6314  // Already on previous page.
    6315  break;
    6316  }
    6317  if(bufferImageGranularityConflict)
    6318  {
    6319  *pOffset = VmaAlignUp(*pOffset, bufferImageGranularity);
    6320  }
    6321  }
    6322 
    6323  // Now that we have final *pOffset, check if we are past suballocItem.
    6324  // If yes, return false - this function should be called for another suballocItem as starting point.
    6325  if(*pOffset >= suballocItem->offset + suballocItem->size)
    6326  {
    6327  return false;
    6328  }
    6329 
    6330  // Calculate padding at the beginning based on current offset.
    6331  const VkDeviceSize paddingBegin = *pOffset - suballocItem->offset;
    6332 
    6333  // Calculate required margin at the end.
    6334  const VkDeviceSize requiredEndMargin = VMA_DEBUG_MARGIN;
    6335 
    6336  const VkDeviceSize totalSize = paddingBegin + allocSize + requiredEndMargin;
    6337  // Another early return check.
    6338  if(suballocItem->offset + totalSize > m_Size)
    6339  {
    6340  return false;
    6341  }
    6342 
    6343  // Advance lastSuballocItem until desired size is reached.
    6344  // Update itemsToMakeLostCount.
    6345  VmaSuballocationList::const_iterator lastSuballocItem = suballocItem;
    6346  if(totalSize > suballocItem->size)
    6347  {
    6348  VkDeviceSize remainingSize = totalSize - suballocItem->size;
    6349  while(remainingSize > 0)
    6350  {
    6351  ++lastSuballocItem;
    6352  if(lastSuballocItem == m_Suballocations.cend())
    6353  {
    6354  return false;
    6355  }
    6356  if(lastSuballocItem->type == VMA_SUBALLOCATION_TYPE_FREE)
    6357  {
    6358  *pSumFreeSize += lastSuballocItem->size;
    6359  }
    6360  else
    6361  {
    6362  VMA_ASSERT(lastSuballocItem->hAllocation != VK_NULL_HANDLE);
    6363  if(lastSuballocItem->hAllocation->CanBecomeLost() &&
    6364  lastSuballocItem->hAllocation->GetLastUseFrameIndex() + frameInUseCount < currentFrameIndex)
    6365  {
    6366  ++*itemsToMakeLostCount;
    6367  *pSumItemSize += lastSuballocItem->size;
    6368  }
    6369  else
    6370  {
    6371  return false;
    6372  }
    6373  }
    6374  remainingSize = (lastSuballocItem->size < remainingSize) ?
    6375  remainingSize - lastSuballocItem->size : 0;
    6376  }
    6377  }
    6378 
    6379  // Check next suballocations for BufferImageGranularity conflicts.
    6380  // If conflict exists, we must mark more allocations lost or fail.
    6381  if(bufferImageGranularity > 1)
    6382  {
    6383  VmaSuballocationList::const_iterator nextSuballocItem = lastSuballocItem;
    6384  ++nextSuballocItem;
    6385  while(nextSuballocItem != m_Suballocations.cend())
    6386  {
    6387  const VmaSuballocation& nextSuballoc = *nextSuballocItem;
    6388  if(VmaBlocksOnSamePage(*pOffset, allocSize, nextSuballoc.offset, bufferImageGranularity))
    6389  {
    6390  if(VmaIsBufferImageGranularityConflict(allocType, nextSuballoc.type))
    6391  {
    6392  VMA_ASSERT(nextSuballoc.hAllocation != VK_NULL_HANDLE);
    6393  if(nextSuballoc.hAllocation->CanBecomeLost() &&
    6394  nextSuballoc.hAllocation->GetLastUseFrameIndex() + frameInUseCount < currentFrameIndex)
    6395  {
    6396  ++*itemsToMakeLostCount;
    6397  }
    6398  else
    6399  {
    6400  return false;
    6401  }
    6402  }
    6403  }
    6404  else
    6405  {
    6406  // Already on next page.
    6407  break;
    6408  }
    6409  ++nextSuballocItem;
    6410  }
    6411  }
    6412  }
    6413  else
    6414  {
    6415  const VmaSuballocation& suballoc = *suballocItem;
    6416  VMA_ASSERT(suballoc.type == VMA_SUBALLOCATION_TYPE_FREE);
    6417 
    6418  *pSumFreeSize = suballoc.size;
    6419 
    6420  // Size of this suballocation is too small for this request: Early return.
    6421  if(suballoc.size < allocSize)
    6422  {
    6423  return false;
    6424  }
    6425 
    6426  // Start from offset equal to beginning of this suballocation.
    6427  *pOffset = suballoc.offset;
    6428 
    6429  // Apply VMA_DEBUG_MARGIN at the beginning.
    6430  if(VMA_DEBUG_MARGIN > 0)
    6431  {
    6432  *pOffset += VMA_DEBUG_MARGIN;
    6433  }
    6434 
    6435  // Apply alignment.
    6436  *pOffset = VmaAlignUp(*pOffset, allocAlignment);
    6437 
    6438  // Check previous suballocations for BufferImageGranularity conflicts.
    6439  // Make bigger alignment if necessary.
    6440  if(bufferImageGranularity > 1)
    6441  {
    6442  bool bufferImageGranularityConflict = false;
    6443  VmaSuballocationList::const_iterator prevSuballocItem = suballocItem;
    6444  while(prevSuballocItem != m_Suballocations.cbegin())
    6445  {
    6446  --prevSuballocItem;
    6447  const VmaSuballocation& prevSuballoc = *prevSuballocItem;
    6448  if(VmaBlocksOnSamePage(prevSuballoc.offset, prevSuballoc.size, *pOffset, bufferImageGranularity))
    6449  {
    6450  if(VmaIsBufferImageGranularityConflict(prevSuballoc.type, allocType))
    6451  {
    6452  bufferImageGranularityConflict = true;
    6453  break;
    6454  }
    6455  }
    6456  else
    6457  // Already on previous page.
    6458  break;
    6459  }
    6460  if(bufferImageGranularityConflict)
    6461  {
    6462  *pOffset = VmaAlignUp(*pOffset, bufferImageGranularity);
    6463  }
    6464  }
    6465 
    6466  // Calculate padding at the beginning based on current offset.
    6467  const VkDeviceSize paddingBegin = *pOffset - suballoc.offset;
    6468 
    6469  // Calculate required margin at the end.
    6470  const VkDeviceSize requiredEndMargin = VMA_DEBUG_MARGIN;
    6471 
    6472  // Fail if requested size plus margin before and after is bigger than size of this suballocation.
    6473  if(paddingBegin + allocSize + requiredEndMargin > suballoc.size)
    6474  {
    6475  return false;
    6476  }
    6477 
    6478  // Check next suballocations for BufferImageGranularity conflicts.
    6479  // If conflict exists, allocation cannot be made here.
    6480  if(bufferImageGranularity > 1)
    6481  {
    6482  VmaSuballocationList::const_iterator nextSuballocItem = suballocItem;
    6483  ++nextSuballocItem;
    6484  while(nextSuballocItem != m_Suballocations.cend())
    6485  {
    6486  const VmaSuballocation& nextSuballoc = *nextSuballocItem;
    6487  if(VmaBlocksOnSamePage(*pOffset, allocSize, nextSuballoc.offset, bufferImageGranularity))
    6488  {
    6489  if(VmaIsBufferImageGranularityConflict(allocType, nextSuballoc.type))
    6490  {
    6491  return false;
    6492  }
    6493  }
    6494  else
    6495  {
    6496  // Already on next page.
    6497  break;
    6498  }
    6499  ++nextSuballocItem;
    6500  }
    6501  }
    6502  }
    6503 
    6504  // All tests passed: Success. pOffset is already filled.
    6505  return true;
    6506 }
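The core offset computation in CheckAllocation() can be sketched on its own (hypothetical helper names; granularity conflicts and lost allocations omitted): advance past the debug margin, round up to the required power-of-two alignment, then verify the request plus trailing margin still fits inside the free range.

```cpp
#include <cassert>
#include <cstdint>

// Round v up to a multiple of align; align must be a power of two.
uint64_t AlignUp(uint64_t v, uint64_t align)
{
    return (v + align - 1) & ~(align - 1);
}

// Hypothetical simplification of the fit test above.
bool FitsInFreeRange(uint64_t rangeOffset, uint64_t rangeSize,
                     uint64_t allocSize, uint64_t alignment, uint64_t margin,
                     uint64_t* outOffset)
{
    // Margin first, then alignment, matching the order used above.
    const uint64_t offset = AlignUp(rangeOffset + margin, alignment);
    const uint64_t paddingBegin = offset - rangeOffset;
    if(paddingBegin + allocSize + margin > rangeSize)
    {
        return false;
    }
    *outOffset = offset;
    return true;
}
```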
    6507 
    6508 void VmaBlockMetadata::MergeFreeWithNext(VmaSuballocationList::iterator item)
    6509 {
    6510  VMA_ASSERT(item != m_Suballocations.end());
    6511  VMA_ASSERT(item->type == VMA_SUBALLOCATION_TYPE_FREE);
    6512 
    6513  VmaSuballocationList::iterator nextItem = item;
    6514  ++nextItem;
    6515  VMA_ASSERT(nextItem != m_Suballocations.end());
    6516  VMA_ASSERT(nextItem->type == VMA_SUBALLOCATION_TYPE_FREE);
    6517 
    6518  item->size += nextItem->size;
    6519  --m_FreeCount;
    6520  m_Suballocations.erase(nextItem);
    6521 }
    6522 
    6523 VmaSuballocationList::iterator VmaBlockMetadata::FreeSuballocation(VmaSuballocationList::iterator suballocItem)
    6524 {
    6525  // Change this suballocation to be marked as free.
    6526  VmaSuballocation& suballoc = *suballocItem;
    6527  suballoc.type = VMA_SUBALLOCATION_TYPE_FREE;
    6528  suballoc.hAllocation = VK_NULL_HANDLE;
    6529 
    6530  // Update totals.
    6531  ++m_FreeCount;
    6532  m_SumFreeSize += suballoc.size;
    6533 
    6534  // Merge with previous and/or next suballocation if it's also free.
    6535  bool mergeWithNext = false;
    6536  bool mergeWithPrev = false;
    6537 
    6538  VmaSuballocationList::iterator nextItem = suballocItem;
    6539  ++nextItem;
    6540  if((nextItem != m_Suballocations.end()) && (nextItem->type == VMA_SUBALLOCATION_TYPE_FREE))
    6541  {
    6542  mergeWithNext = true;
    6543  }
    6544 
    6545  VmaSuballocationList::iterator prevItem = suballocItem;
    6546  if(suballocItem != m_Suballocations.begin())
    6547  {
    6548  --prevItem;
    6549  if(prevItem->type == VMA_SUBALLOCATION_TYPE_FREE)
    6550  {
    6551  mergeWithPrev = true;
    6552  }
    6553  }
    6554 
    6555  if(mergeWithNext)
    6556  {
    6557  UnregisterFreeSuballocation(nextItem);
    6558  MergeFreeWithNext(suballocItem);
    6559  }
    6560 
    6561  if(mergeWithPrev)
    6562  {
    6563  UnregisterFreeSuballocation(prevItem);
    6564  MergeFreeWithNext(prevItem);
    6565  RegisterFreeSuballocation(prevItem);
    6566  return prevItem;
    6567  }
    6568  else
    6569  {
    6570  RegisterFreeSuballocation(suballocItem);
    6571  return suballocItem;
    6572  }
    6573 }
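The free-and-coalesce logic above can be illustrated with a minimal sketch on a plain std::list (hypothetical names, no size-sorted index): mark the range free, then merge it with any free neighbor so no two adjacent free ranges remain, which is exactly the invariant Validate() enforces.

```cpp
#include <cassert>
#include <cstdint>
#include <list>

// Simplified model of a suballocation for illustration.
struct Range { uint64_t offset; uint64_t size; bool free; };

// Mark *it free, merge with free neighbors, return iterator to the merged range.
std::list<Range>::iterator FreeAndMerge(std::list<Range>& l, std::list<Range>::iterator it)
{
    it->free = true;
    auto next = std::next(it);
    if(next != l.end() && next->free)   // absorb the following free range
    {
        it->size += next->size;
        l.erase(next);
    }
    if(it != l.begin())
    {
        auto prev = std::prev(it);
        if(prev->free)                  // fold into the preceding free range
        {
            prev->size += it->size;
            l.erase(it);
            return prev;
        }
    }
    return it;
}
```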
    6574 
    6575 void VmaBlockMetadata::RegisterFreeSuballocation(VmaSuballocationList::iterator item)
    6576 {
    6577  VMA_ASSERT(item->type == VMA_SUBALLOCATION_TYPE_FREE);
    6578  VMA_ASSERT(item->size > 0);
    6579 
    6580  // You may want to enable this validation at the beginning or at the end of
    6581  // this function, depending on what you want to check.
    6582  VMA_HEAVY_ASSERT(ValidateFreeSuballocationList());
    6583 
    6584  if(item->size >= VMA_MIN_FREE_SUBALLOCATION_SIZE_TO_REGISTER)
    6585  {
    6586  if(m_FreeSuballocationsBySize.empty())
    6587  {
    6588  m_FreeSuballocationsBySize.push_back(item);
    6589  }
    6590  else
    6591  {
    6592  VmaVectorInsertSorted<VmaSuballocationItemSizeLess>(m_FreeSuballocationsBySize, item);
    6593  }
    6594  }
    6595 
    6596  //VMA_HEAVY_ASSERT(ValidateFreeSuballocationList());
    6597 }
    6598 
    6599 
    6600 void VmaBlockMetadata::UnregisterFreeSuballocation(VmaSuballocationList::iterator item)
    6601 {
    6602  VMA_ASSERT(item->type == VMA_SUBALLOCATION_TYPE_FREE);
    6603  VMA_ASSERT(item->size > 0);
    6604 
    6605  // You may want to enable this validation at the beginning or at the end of
    6606  // this function, depending on what you want to check.
    6607  VMA_HEAVY_ASSERT(ValidateFreeSuballocationList());
    6608 
    6609  if(item->size >= VMA_MIN_FREE_SUBALLOCATION_SIZE_TO_REGISTER)
    6610  {
    6611  VmaSuballocationList::iterator* const it = VmaBinaryFindFirstNotLess(
    6612  m_FreeSuballocationsBySize.data(),
    6613  m_FreeSuballocationsBySize.data() + m_FreeSuballocationsBySize.size(),
    6614  item,
    6615  VmaSuballocationItemSizeLess());
    6616  for(size_t index = it - m_FreeSuballocationsBySize.data();
    6617  index < m_FreeSuballocationsBySize.size();
    6618  ++index)
    6619  {
    6620  if(m_FreeSuballocationsBySize[index] == item)
    6621  {
    6622  VmaVectorRemove(m_FreeSuballocationsBySize, index);
    6623  return;
    6624  }
    6625  VMA_ASSERT((m_FreeSuballocationsBySize[index]->size == item->size) && "Not found.");
    6626  }
    6627  VMA_ASSERT(0 && "Not found.");
    6628  }
    6629 
    6630  //VMA_HEAVY_ASSERT(ValidateFreeSuballocationList());
    6631 }
    6632 
    6634 // class VmaDeviceMemoryBlock
    6635 
    6636 VmaDeviceMemoryBlock::VmaDeviceMemoryBlock(VmaAllocator hAllocator) :
    6637  m_Metadata(hAllocator),
    6638  m_MemoryTypeIndex(UINT32_MAX),
    6639  m_Id(0),
    6640  m_hMemory(VK_NULL_HANDLE),
    6641  m_MapCount(0),
    6642  m_pMappedData(VMA_NULL)
    6643 {
    6644 }
    6645 
    6646 void VmaDeviceMemoryBlock::Init(
    6647  uint32_t newMemoryTypeIndex,
    6648  VkDeviceMemory newMemory,
    6649  VkDeviceSize newSize,
    6650  uint32_t id)
    6651 {
    6652  VMA_ASSERT(m_hMemory == VK_NULL_HANDLE);
    6653 
    6654  m_MemoryTypeIndex = newMemoryTypeIndex;
    6655  m_Id = id;
    6656  m_hMemory = newMemory;
    6657 
    6658  m_Metadata.Init(newSize);
    6659 }
    6660 
    6661 void VmaDeviceMemoryBlock::Destroy(VmaAllocator allocator)
    6662 {
    6663  // This is the most important assert in the entire library.
    6664  // Hitting it means you have some memory leak - unreleased VmaAllocation objects.
    6665  VMA_ASSERT(m_Metadata.IsEmpty() && "Some allocations were not freed before destruction of this memory block!");
    6666 
    6667  VMA_ASSERT(m_hMemory != VK_NULL_HANDLE);
    6668  allocator->FreeVulkanMemory(m_MemoryTypeIndex, m_Metadata.GetSize(), m_hMemory);
    6669  m_hMemory = VK_NULL_HANDLE;
    6670 }
    6671 
    6672 bool VmaDeviceMemoryBlock::Validate() const
    6673 {
    6674  if((m_hMemory == VK_NULL_HANDLE) ||
    6675  (m_Metadata.GetSize() == 0))
    6676  {
    6677  return false;
    6678  }
    6679 
    6680  return m_Metadata.Validate();
    6681 }
    6682 
    6683 VkResult VmaDeviceMemoryBlock::CheckCorruption(VmaAllocator hAllocator)
    6684 {
    6685  void* pData = VMA_NULL;
    6686  VkResult res = Map(hAllocator, 1, &pData);
    6687  if(res != VK_SUCCESS)
    6688  {
    6689  return res;
    6690  }
    6691 
    6692  res = m_Metadata.CheckCorruption(pData);
    6693 
    6694  Unmap(hAllocator, 1);
    6695 
    6696  return res;
    6697 }
    6698 
    6699 VkResult VmaDeviceMemoryBlock::Map(VmaAllocator hAllocator, uint32_t count, void** ppData)
    6700 {
    6701  if(count == 0)
    6702  {
    6703  return VK_SUCCESS;
    6704  }
    6705 
    6706  VmaMutexLock lock(m_Mutex, hAllocator->m_UseMutex);
    6707  if(m_MapCount != 0)
    6708  {
    6709  m_MapCount += count;
    6710  VMA_ASSERT(m_pMappedData != VMA_NULL);
    6711  if(ppData != VMA_NULL)
    6712  {
    6713  *ppData = m_pMappedData;
    6714  }
    6715  return VK_SUCCESS;
    6716  }
    6717  else
    6718  {
    6719  VkResult result = (*hAllocator->GetVulkanFunctions().vkMapMemory)(
    6720  hAllocator->m_hDevice,
    6721  m_hMemory,
    6722  0, // offset
    6723  VK_WHOLE_SIZE,
    6724  0, // flags
    6725  &m_pMappedData);
    6726  if(result == VK_SUCCESS)
    6727  {
    6728  if(ppData != VMA_NULL)
    6729  {
    6730  *ppData = m_pMappedData;
    6731  }
    6732  m_MapCount = count;
    6733  }
    6734  return result;
    6735  }
    6736 }
    6737 
    6738 void VmaDeviceMemoryBlock::Unmap(VmaAllocator hAllocator, uint32_t count)
    6739 {
    6740  if(count == 0)
    6741  {
    6742  return;
    6743  }
    6744 
    6745  VmaMutexLock lock(m_Mutex, hAllocator->m_UseMutex);
    6746  if(m_MapCount >= count)
    6747  {
    6748  m_MapCount -= count;
    6749  if(m_MapCount == 0)
    6750  {
    6751  m_pMappedData = VMA_NULL;
    6752  (*hAllocator->GetVulkanFunctions().vkUnmapMemory)(hAllocator->m_hDevice, m_hMemory);
    6753  }
    6754  }
    6755  else
    6756  {
    6757  VMA_ASSERT(0 && "VkDeviceMemory block is being unmapped while it was not previously mapped.");
    6758  }
    6759 }
    6760 
    6761 VkResult VmaDeviceMemoryBlock::WriteMagicValueAroundAllocation(VmaAllocator hAllocator, VkDeviceSize allocOffset, VkDeviceSize allocSize)
    6762 {
    6763  VMA_ASSERT(VMA_DEBUG_MARGIN > 0 && VMA_DEBUG_MARGIN % 4 == 0 && VMA_DEBUG_DETECT_CORRUPTION);
    6764  VMA_ASSERT(allocOffset >= VMA_DEBUG_MARGIN);
    6765 
    6766  void* pData;
    6767  VkResult res = Map(hAllocator, 1, &pData);
    6768  if(res != VK_SUCCESS)
    6769  {
    6770  return res;
    6771  }
    6772 
    6773  VmaWriteMagicValue(pData, allocOffset - VMA_DEBUG_MARGIN);
    6774  VmaWriteMagicValue(pData, allocOffset + allocSize);
    6775 
    6776  Unmap(hAllocator, 1);
    6777 
    6778  return VK_SUCCESS;
    6779 }
    6780 
    6781 VkResult VmaDeviceMemoryBlock::ValidateMagicValueAroundAllocation(VmaAllocator hAllocator, VkDeviceSize allocOffset, VkDeviceSize allocSize)
    6782 {
    6783  VMA_ASSERT(VMA_DEBUG_MARGIN > 0 && VMA_DEBUG_MARGIN % 4 == 0 && VMA_DEBUG_DETECT_CORRUPTION);
    6784  VMA_ASSERT(allocOffset >= VMA_DEBUG_MARGIN);
    6785 
    6786  void* pData;
    6787  VkResult res = Map(hAllocator, 1, &pData);
    6788  if(res != VK_SUCCESS)
    6789  {
    6790  return res;
    6791  }
    6792 
    6793  if(!VmaValidateMagicValue(pData, allocOffset - VMA_DEBUG_MARGIN))
    6794  {
    6795  VMA_ASSERT(0 && "MEMORY CORRUPTION DETECTED BEFORE FREED ALLOCATION!");
    6796  }
    6797  else if(!VmaValidateMagicValue(pData, allocOffset + allocSize))
    6798  {
    6799  VMA_ASSERT(0 && "MEMORY CORRUPTION DETECTED AFTER FREED ALLOCATION!");
    6800  }
    6801 
    6802  Unmap(hAllocator, 1);
    6803 
    6804  return VK_SUCCESS;
    6805 }
    6806 
    6807 VkResult VmaDeviceMemoryBlock::BindBufferMemory(
    6808  const VmaAllocator hAllocator,
    6809  const VmaAllocation hAllocation,
    6810  VkBuffer hBuffer)
    6811 {
    6812  VMA_ASSERT(hAllocation->GetType() == VmaAllocation_T::ALLOCATION_TYPE_BLOCK &&
    6813  hAllocation->GetBlock() == this);
    6814  // This lock is important so that we don't call vkBind... and/or vkMap... simultaneously on the same VkDeviceMemory from multiple threads.
    6815  VmaMutexLock lock(m_Mutex, hAllocator->m_UseMutex);
    6816  return hAllocator->GetVulkanFunctions().vkBindBufferMemory(
    6817  hAllocator->m_hDevice,
    6818  hBuffer,
    6819  m_hMemory,
    6820  hAllocation->GetOffset());
    6821 }
    6822 
    6823 VkResult VmaDeviceMemoryBlock::BindImageMemory(
    6824  const VmaAllocator hAllocator,
    6825  const VmaAllocation hAllocation,
    6826  VkImage hImage)
    6827 {
    6828  VMA_ASSERT(hAllocation->GetType() == VmaAllocation_T::ALLOCATION_TYPE_BLOCK &&
    6829  hAllocation->GetBlock() == this);
    6830  // This lock is important so that we don't call vkBind... and/or vkMap... simultaneously on the same VkDeviceMemory from multiple threads.
    6831  VmaMutexLock lock(m_Mutex, hAllocator->m_UseMutex);
    6832  return hAllocator->GetVulkanFunctions().vkBindImageMemory(
    6833  hAllocator->m_hDevice,
    6834  hImage,
    6835  m_hMemory,
    6836  hAllocation->GetOffset());
    6837 }
    6838 
    6839 static void InitStatInfo(VmaStatInfo& outInfo)
    6840 {
    6841  memset(&outInfo, 0, sizeof(outInfo));
    6842  outInfo.allocationSizeMin = UINT64_MAX;
    6843  outInfo.unusedRangeSizeMin = UINT64_MAX;
    6844 }
    6845 
    6846 // Adds statistics srcInfo into inoutInfo, like: inoutInfo += srcInfo.
    6847 static void VmaAddStatInfo(VmaStatInfo& inoutInfo, const VmaStatInfo& srcInfo)
    6848 {
    6849  inoutInfo.blockCount += srcInfo.blockCount;
    6850  inoutInfo.allocationCount += srcInfo.allocationCount;
    6851  inoutInfo.unusedRangeCount += srcInfo.unusedRangeCount;
    6852  inoutInfo.usedBytes += srcInfo.usedBytes;
    6853  inoutInfo.unusedBytes += srcInfo.unusedBytes;
    6854  inoutInfo.allocationSizeMin = VMA_MIN(inoutInfo.allocationSizeMin, srcInfo.allocationSizeMin);
    6855  inoutInfo.allocationSizeMax = VMA_MAX(inoutInfo.allocationSizeMax, srcInfo.allocationSizeMax);
    6856  inoutInfo.unusedRangeSizeMin = VMA_MIN(inoutInfo.unusedRangeSizeMin, srcInfo.unusedRangeSizeMin);
    6857  inoutInfo.unusedRangeSizeMax = VMA_MAX(inoutInfo.unusedRangeSizeMax, srcInfo.unusedRangeSizeMax);
    6858 }
    6859 
    6860 static void VmaPostprocessCalcStatInfo(VmaStatInfo& inoutInfo)
    6861 {
    6862  inoutInfo.allocationSizeAvg = (inoutInfo.allocationCount > 0) ?
    6863  VmaRoundDiv<VkDeviceSize>(inoutInfo.usedBytes, inoutInfo.allocationCount) : 0;
    6864  inoutInfo.unusedRangeSizeAvg = (inoutInfo.unusedRangeCount > 0) ?
    6865  VmaRoundDiv<VkDeviceSize>(inoutInfo.unusedBytes, inoutInfo.unusedRangeCount) : 0;
    6866 }
    6867 
    6868 VmaPool_T::VmaPool_T(
    6869  VmaAllocator hAllocator,
    6870  const VmaPoolCreateInfo& createInfo) :
    6871  m_BlockVector(
    6872  hAllocator,
    6873  createInfo.memoryTypeIndex,
    6874  createInfo.blockSize,
    6875  createInfo.minBlockCount,
    6876  createInfo.maxBlockCount,
    6877  (createInfo.flags & VMA_POOL_CREATE_IGNORE_BUFFER_IMAGE_GRANULARITY_BIT) != 0 ? 1 : hAllocator->GetBufferImageGranularity(),
    6878  createInfo.frameInUseCount,
    6879  true), // isCustomPool
    6880  m_Id(0)
    6881 {
    6882 }
    6883 
    6884 VmaPool_T::~VmaPool_T()
    6885 {
    6886 }
    6887 
    6888 #if VMA_STATS_STRING_ENABLED
    6889 
    6890 #endif // #if VMA_STATS_STRING_ENABLED
    6891 
    6892 VmaBlockVector::VmaBlockVector(
    6893  VmaAllocator hAllocator,
    6894  uint32_t memoryTypeIndex,
    6895  VkDeviceSize preferredBlockSize,
    6896  size_t minBlockCount,
    6897  size_t maxBlockCount,
    6898  VkDeviceSize bufferImageGranularity,
    6899  uint32_t frameInUseCount,
    6900  bool isCustomPool) :
    6901  m_hAllocator(hAllocator),
    6902  m_MemoryTypeIndex(memoryTypeIndex),
    6903  m_PreferredBlockSize(preferredBlockSize),
    6904  m_MinBlockCount(minBlockCount),
    6905  m_MaxBlockCount(maxBlockCount),
    6906  m_BufferImageGranularity(bufferImageGranularity),
    6907  m_FrameInUseCount(frameInUseCount),
    6908  m_IsCustomPool(isCustomPool),
    6909  m_Blocks(VmaStlAllocator<VmaDeviceMemoryBlock*>(hAllocator->GetAllocationCallbacks())),
    6910  m_HasEmptyBlock(false),
    6911  m_pDefragmentator(VMA_NULL),
    6912  m_NextBlockId(0)
    6913 {
    6914 }
    6915 
    6916 VmaBlockVector::~VmaBlockVector()
    6917 {
    6918  VMA_ASSERT(m_pDefragmentator == VMA_NULL);
    6919 
    6920  for(size_t i = m_Blocks.size(); i--; )
    6921  {
    6922  m_Blocks[i]->Destroy(m_hAllocator);
    6923  vma_delete(m_hAllocator, m_Blocks[i]);
    6924  }
    6925 }
    6926 
    6927 VkResult VmaBlockVector::CreateMinBlocks()
    6928 {
    6929  for(size_t i = 0; i < m_MinBlockCount; ++i)
    6930  {
    6931  VkResult res = CreateBlock(m_PreferredBlockSize, VMA_NULL);
    6932  if(res != VK_SUCCESS)
    6933  {
    6934  return res;
    6935  }
    6936  }
    6937  return VK_SUCCESS;
    6938 }
    6939 
    6940 void VmaBlockVector::GetPoolStats(VmaPoolStats* pStats)
    6941 {
    6942  pStats->size = 0;
    6943  pStats->unusedSize = 0;
    6944  pStats->allocationCount = 0;
    6945  pStats->unusedRangeCount = 0;
    6946  pStats->unusedRangeSizeMax = 0;
    6947 
    6948  VmaMutexLock lock(m_Mutex, m_hAllocator->m_UseMutex);
    6949 
    6950  for(uint32_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex)
    6951  {
    6952  const VmaDeviceMemoryBlock* const pBlock = m_Blocks[blockIndex];
    6953  VMA_ASSERT(pBlock);
    6954  VMA_HEAVY_ASSERT(pBlock->Validate());
    6955  pBlock->m_Metadata.AddPoolStats(*pStats);
    6956  }
    6957 }
    6958 
    6959 bool VmaBlockVector::IsCorruptionDetectionEnabled() const
    6960 {
    6961  const uint32_t requiredMemFlags = VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT | VK_MEMORY_PROPERTY_HOST_COHERENT_BIT;
    6962  return (VMA_DEBUG_DETECT_CORRUPTION != 0) &&
    6963  (VMA_DEBUG_MARGIN > 0) &&
    6964  (m_hAllocator->m_MemProps.memoryTypes[m_MemoryTypeIndex].propertyFlags & requiredMemFlags) == requiredMemFlags;
    6965 }
    6966 
    6967 static const uint32_t VMA_ALLOCATION_TRY_COUNT = 32;
    6968 
    6969 VkResult VmaBlockVector::Allocate(
    6970  VmaPool hCurrentPool,
    6971  uint32_t currentFrameIndex,
    6972  VkDeviceSize size,
    6973  VkDeviceSize alignment,
    6974  const VmaAllocationCreateInfo& createInfo,
    6975  VmaSuballocationType suballocType,
    6976  VmaAllocation* pAllocation)
    6977 {
    6978  // Early reject: requested allocation size is larger than the maximum block size for this block vector.
    6979  if(size + 2 * VMA_DEBUG_MARGIN > m_PreferredBlockSize)
    6980  {
    6981  return VK_ERROR_OUT_OF_DEVICE_MEMORY;
    6982  }
    6983 
    6984  const bool mapped = (createInfo.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0;
    6985  const bool isUserDataString = (createInfo.flags & VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT) != 0;
    6986 
    6987  VmaMutexLock lock(m_Mutex, m_hAllocator->m_UseMutex);
    6988 
    6989  // 1. Search existing allocations. Try to allocate without making other allocations lost.
    6990  // Forward order in m_Blocks - prefer blocks with smallest amount of free space.
    6991  for(size_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex )
    6992  {
    6993  VmaDeviceMemoryBlock* const pCurrBlock = m_Blocks[blockIndex];
    6994  VMA_ASSERT(pCurrBlock);
    6995  VmaAllocationRequest currRequest = {};
    6996  if(pCurrBlock->m_Metadata.CreateAllocationRequest(
    6997  currentFrameIndex,
    6998  m_FrameInUseCount,
    6999  m_BufferImageGranularity,
    7000  size,
    7001  alignment,
    7002  suballocType,
    7003  false, // canMakeOtherLost
    7004  &currRequest))
    7005  {
    7006  // Allocate from pCurrBlock.
    7007  VMA_ASSERT(currRequest.itemsToMakeLostCount == 0);
    7008 
    7009  if(mapped)
    7010  {
    7011  VkResult res = pCurrBlock->Map(m_hAllocator, 1, VMA_NULL);
    7012  if(res != VK_SUCCESS)
    7013  {
    7014  return res;
    7015  }
    7016  }
    7017 
    7018  // We no longer have an empty block.
    7019  if(pCurrBlock->m_Metadata.IsEmpty())
    7020  {
    7021  m_HasEmptyBlock = false;
    7022  }
    7023 
    7024  *pAllocation = vma_new(m_hAllocator, VmaAllocation_T)(currentFrameIndex, isUserDataString);
    7025  pCurrBlock->m_Metadata.Alloc(currRequest, suballocType, size, *pAllocation);
    7026  (*pAllocation)->InitBlockAllocation(
    7027  hCurrentPool,
    7028  pCurrBlock,
    7029  currRequest.offset,
    7030  alignment,
    7031  size,
    7032  suballocType,
    7033  mapped,
    7034  (createInfo.flags & VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT) != 0);
    7035  VMA_HEAVY_ASSERT(pCurrBlock->Validate());
    7036  VMA_DEBUG_LOG(" Returned from existing allocation #%u", (uint32_t)blockIndex);
    7037  (*pAllocation)->SetUserData(m_hAllocator, createInfo.pUserData);
    7038  if(VMA_DEBUG_INITIALIZE_ALLOCATIONS)
    7039  {
    7040  m_hAllocator->FillAllocation(*pAllocation, VMA_ALLOCATION_FILL_PATTERN_CREATED);
    7041  }
    7042  if(IsCorruptionDetectionEnabled())
    7043  {
    7044  VkResult res = pCurrBlock->WriteMagicValueAroundAllocation(m_hAllocator, currRequest.offset, size);
    7045  VMA_ASSERT(res == VK_SUCCESS && "Couldn't map block memory to write magic value.");
    7046  }
    7047  return VK_SUCCESS;
    7048  }
    7049  }
    7050 
    7051  const bool canCreateNewBlock =
    7052  ((createInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) == 0) &&
    7053  (m_Blocks.size() < m_MaxBlockCount);
    7054 
    7055  // 2. Try to create new block.
    7056  if(canCreateNewBlock)
    7057  {
    7058  // Calculate optimal size for new block.
    7059  VkDeviceSize newBlockSize = m_PreferredBlockSize;
    7060  uint32_t newBlockSizeShift = 0;
    7061  const uint32_t NEW_BLOCK_SIZE_SHIFT_MAX = 3;
    7062 
    7063  // Allocating blocks of other sizes is allowed only in default pools.
    7064  // In custom pools block size is fixed.
    7065  if(m_IsCustomPool == false)
    7066  {
    7067  // Allocate 1/8, 1/4, 1/2 as first blocks.
    7068  const VkDeviceSize maxExistingBlockSize = CalcMaxBlockSize();
    7069  for(uint32_t i = 0; i < NEW_BLOCK_SIZE_SHIFT_MAX; ++i)
    7070  {
    7071  const VkDeviceSize smallerNewBlockSize = newBlockSize / 2;
    7072  if(smallerNewBlockSize > maxExistingBlockSize && smallerNewBlockSize >= size * 2)
    7073  {
    7074  newBlockSize = smallerNewBlockSize;
    7075  ++newBlockSizeShift;
    7076  }
    7077  else
    7078  {
    7079  break;
    7080  }
    7081  }
    7082  }
    7083 
    7084  size_t newBlockIndex = 0;
    7085  VkResult res = CreateBlock(newBlockSize, &newBlockIndex);
    7086  // Allocation of this size failed? Try 1/2, 1/4, 1/8 of m_PreferredBlockSize.
    7087  if(m_IsCustomPool == false)
    7088  {
    7089  while(res < 0 && newBlockSizeShift < NEW_BLOCK_SIZE_SHIFT_MAX)
    7090  {
    7091  const VkDeviceSize smallerNewBlockSize = newBlockSize / 2;
    7092  if(smallerNewBlockSize >= size)
    7093  {
    7094  newBlockSize = smallerNewBlockSize;
    7095  ++newBlockSizeShift;
    7096  res = CreateBlock(newBlockSize, &newBlockIndex);
    7097  }
    7098  else
    7099  {
    7100  break;
    7101  }
    7102  }
    7103  }
    7104 
    7105  if(res == VK_SUCCESS)
    7106  {
    7107  VmaDeviceMemoryBlock* const pBlock = m_Blocks[newBlockIndex];
    7108  VMA_ASSERT(pBlock->m_Metadata.GetSize() >= size);
    7109 
    7110  if(mapped)
    7111  {
    7112  res = pBlock->Map(m_hAllocator, 1, VMA_NULL);
    7113  if(res != VK_SUCCESS)
    7114  {
    7115  return res;
    7116  }
    7117  }
    7118 
    7119  // Allocate from pBlock. Because it is empty, allocRequest can be trivially filled.
    7120  VmaAllocationRequest allocRequest;
    7121  if(pBlock->m_Metadata.CreateAllocationRequest(
    7122  currentFrameIndex,
    7123  m_FrameInUseCount,
    7124  m_BufferImageGranularity,
    7125  size,
    7126  alignment,
    7127  suballocType,
    7128  false, // canMakeOtherLost
    7129  &allocRequest))
    7130  {
    7131  *pAllocation = vma_new(m_hAllocator, VmaAllocation_T)(currentFrameIndex, isUserDataString);
    7132  pBlock->m_Metadata.Alloc(allocRequest, suballocType, size, *pAllocation);
    7133  (*pAllocation)->InitBlockAllocation(
    7134  hCurrentPool,
    7135  pBlock,
    7136  allocRequest.offset,
    7137  alignment,
    7138  size,
    7139  suballocType,
    7140  mapped,
    7141  (createInfo.flags & VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT) != 0);
    7142  VMA_HEAVY_ASSERT(pBlock->Validate());
    7143  VMA_DEBUG_LOG(" Created new allocation Size=%llu", size);
    7144  (*pAllocation)->SetUserData(m_hAllocator, createInfo.pUserData);
    7145  if(VMA_DEBUG_INITIALIZE_ALLOCATIONS)
    7146  {
    7147  m_hAllocator->FillAllocation(*pAllocation, VMA_ALLOCATION_FILL_PATTERN_CREATED);
    7148  }
    7149  if(IsCorruptionDetectionEnabled())
    7150  {
    7151  res = pBlock->WriteMagicValueAroundAllocation(m_hAllocator, allocRequest.offset, size);
    7152  VMA_ASSERT(res == VK_SUCCESS && "Couldn't map block memory to write magic value.");
    7153  }
    7154  return VK_SUCCESS;
    7155  }
    7156  else
    7157  {
    7158  // Allocation from empty block failed, possibly due to VMA_DEBUG_MARGIN or alignment.
    7159  return VK_ERROR_OUT_OF_DEVICE_MEMORY;
    7160  }
    7161  }
    7162  }
    7163 
    7164  const bool canMakeOtherLost = (createInfo.flags & VMA_ALLOCATION_CREATE_CAN_MAKE_OTHER_LOST_BIT) != 0;
    7165 
    7166  // 3. Try to allocate from existing blocks with making other allocations lost.
    7167  if(canMakeOtherLost)
    7168  {
    7169  uint32_t tryIndex = 0;
    7170  for(; tryIndex < VMA_ALLOCATION_TRY_COUNT; ++tryIndex)
    7171  {
    7172  VmaDeviceMemoryBlock* pBestRequestBlock = VMA_NULL;
    7173  VmaAllocationRequest bestRequest = {};
    7174  VkDeviceSize bestRequestCost = VK_WHOLE_SIZE;
    7175 
    7176  // 1. Search existing allocations.
    7177  // Forward order in m_Blocks - prefer blocks with smallest amount of free space.
    7178  for(size_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex )
    7179  {
    7180  VmaDeviceMemoryBlock* const pCurrBlock = m_Blocks[blockIndex];
    7181  VMA_ASSERT(pCurrBlock);
    7182  VmaAllocationRequest currRequest = {};
    7183  if(pCurrBlock->m_Metadata.CreateAllocationRequest(
    7184  currentFrameIndex,
    7185  m_FrameInUseCount,
    7186  m_BufferImageGranularity,
    7187  size,
    7188  alignment,
    7189  suballocType,
    7190  canMakeOtherLost,
    7191  &currRequest))
    7192  {
    7193  const VkDeviceSize currRequestCost = currRequest.CalcCost();
    7194  if(pBestRequestBlock == VMA_NULL ||
    7195  currRequestCost < bestRequestCost)
    7196  {
    7197  pBestRequestBlock = pCurrBlock;
    7198  bestRequest = currRequest;
    7199  bestRequestCost = currRequestCost;
    7200 
    7201  if(bestRequestCost == 0)
    7202  {
    7203  break;
    7204  }
    7205  }
    7206  }
    7207  }
    7208 
    7209  if(pBestRequestBlock != VMA_NULL)
    7210  {
    7211  if(mapped)
    7212  {
    7213  VkResult res = pBestRequestBlock->Map(m_hAllocator, 1, VMA_NULL);
    7214  if(res != VK_SUCCESS)
    7215  {
    7216  return res;
    7217  }
    7218  }
    7219 
    7220  if(pBestRequestBlock->m_Metadata.MakeRequestedAllocationsLost(
    7221  currentFrameIndex,
    7222  m_FrameInUseCount,
    7223  &bestRequest))
    7224  {
    7225  // We no longer have an empty block.
    7226  if(pBestRequestBlock->m_Metadata.IsEmpty())
    7227  {
    7228  m_HasEmptyBlock = false;
    7229  }
    7230  // Allocate from this pBlock.
    7231  *pAllocation = vma_new(m_hAllocator, VmaAllocation_T)(currentFrameIndex, isUserDataString);
    7232  pBestRequestBlock->m_Metadata.Alloc(bestRequest, suballocType, size, *pAllocation);
    7233  (*pAllocation)->InitBlockAllocation(
    7234  hCurrentPool,
    7235  pBestRequestBlock,
    7236  bestRequest.offset,
    7237  alignment,
    7238  size,
    7239  suballocType,
    7240  mapped,
    7241  (createInfo.flags & VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT) != 0);
    7242  VMA_HEAVY_ASSERT(pBestRequestBlock->Validate());
    7243  VMA_DEBUG_LOG(" Returned from existing block");
    7244  (*pAllocation)->SetUserData(m_hAllocator, createInfo.pUserData);
    7245  if(VMA_DEBUG_INITIALIZE_ALLOCATIONS)
    7246  {
    7247  m_hAllocator->FillAllocation(*pAllocation, VMA_ALLOCATION_FILL_PATTERN_CREATED);
    7248  }
    7249  if(IsCorruptionDetectionEnabled())
    7250  {
    7251  VkResult res = pBestRequestBlock->WriteMagicValueAroundAllocation(m_hAllocator, bestRequest.offset, size);
    7252  VMA_ASSERT(res == VK_SUCCESS && "Couldn't map block memory to write magic value.");
    7253  }
    7254  return VK_SUCCESS;
    7255  }
    7256  // else: Some allocations must have been touched while we are here. Next try.
    7257  }
    7258  else
    7259  {
    7260  // Could not find place in any of the blocks - break outer loop.
    7261  break;
    7262  }
    7263  }
    7264  /* Maximum number of tries exceeded - a very unlikely event that can happen when
    7265  many other threads are simultaneously touching allocations, making it impossible
    7266  to mark them as lost at the same time as we try to allocate. */
    7267  if(tryIndex == VMA_ALLOCATION_TRY_COUNT)
    7268  {
    7269  return VK_ERROR_TOO_MANY_OBJECTS;
    7270  }
    7271  }
    7272 
    7273  return VK_ERROR_OUT_OF_DEVICE_MEMORY;
    7274 }
    7275 
    7276 void VmaBlockVector::Free(
    7277  VmaAllocation hAllocation)
    7278 {
    7279  VmaDeviceMemoryBlock* pBlockToDelete = VMA_NULL;
    7280 
    7281  // Scope for lock.
    7282  {
    7283  VmaMutexLock lock(m_Mutex, m_hAllocator->m_UseMutex);
    7284 
    7285  VmaDeviceMemoryBlock* pBlock = hAllocation->GetBlock();
    7286 
    7287  if(IsCorruptionDetectionEnabled())
    7288  {
    7289  VkResult res = pBlock->ValidateMagicValueAroundAllocation(m_hAllocator, hAllocation->GetOffset(), hAllocation->GetSize());
    7290  VMA_ASSERT(res == VK_SUCCESS && "Couldn't map block memory to validate magic value.");
    7291  }
    7292 
    7293  if(hAllocation->IsPersistentMap())
    7294  {
    7295  pBlock->Unmap(m_hAllocator, 1);
    7296  }
    7297 
    7298  pBlock->m_Metadata.Free(hAllocation);
    7299  VMA_HEAVY_ASSERT(pBlock->Validate());
    7300 
    7301  VMA_DEBUG_LOG(" Freed from MemoryTypeIndex=%u", m_MemoryTypeIndex);
    7302 
    7303  // pBlock became empty after this deallocation.
    7304  if(pBlock->m_Metadata.IsEmpty())
    7305  {
    7306  // Already has an empty block. We don't want two, so delete this one.
    7307  if(m_HasEmptyBlock && m_Blocks.size() > m_MinBlockCount)
    7308  {
    7309  pBlockToDelete = pBlock;
    7310  Remove(pBlock);
    7311  }
    7312  // We now have first empty block.
    7313  else
    7314  {
    7315  m_HasEmptyBlock = true;
    7316  }
    7317  }
    7318  // pBlock didn't become empty, but we have another empty block - find and free that one.
    7319  // (This is optional, heuristics.)
    7320  else if(m_HasEmptyBlock)
    7321  {
    7322  VmaDeviceMemoryBlock* pLastBlock = m_Blocks.back();
    7323  if(pLastBlock->m_Metadata.IsEmpty() && m_Blocks.size() > m_MinBlockCount)
    7324  {
    7325  pBlockToDelete = pLastBlock;
    7326  m_Blocks.pop_back();
    7327  m_HasEmptyBlock = false;
    7328  }
    7329  }
    7330 
    7331  IncrementallySortBlocks();
    7332  }
    7333 
    7334  // Destruction of a free block. Deferred until this point, outside of the
    7335  // mutex lock, for performance reasons.
    7336  if(pBlockToDelete != VMA_NULL)
    7337  {
    7338  VMA_DEBUG_LOG(" Deleted empty block");
    7339  pBlockToDelete->Destroy(m_hAllocator);
    7340  vma_delete(m_hAllocator, pBlockToDelete);
    7341  }
    7342 }
    7343 
    7344 VkDeviceSize VmaBlockVector::CalcMaxBlockSize() const
    7345 {
    7346  VkDeviceSize result = 0;
    7347  for(size_t i = m_Blocks.size(); i--; )
    7348  {
    7349  result = VMA_MAX(result, m_Blocks[i]->m_Metadata.GetSize());
    7350  if(result >= m_PreferredBlockSize)
    7351  {
    7352  break;
    7353  }
    7354  }
    7355  return result;
    7356 }
    7357 
    7358 void VmaBlockVector::Remove(VmaDeviceMemoryBlock* pBlock)
    7359 {
    7360  for(uint32_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex)
    7361  {
    7362  if(m_Blocks[blockIndex] == pBlock)
    7363  {
    7364  VmaVectorRemove(m_Blocks, blockIndex);
    7365  return;
    7366  }
    7367  }
    7368  VMA_ASSERT(0);
    7369 }
    7370 
    7371 void VmaBlockVector::IncrementallySortBlocks()
    7372 {
    7373  // Bubble sort only until first swap.
    7374  for(size_t i = 1; i < m_Blocks.size(); ++i)
    7375  {
    7376  if(m_Blocks[i - 1]->m_Metadata.GetSumFreeSize() > m_Blocks[i]->m_Metadata.GetSumFreeSize())
    7377  {
    7378  VMA_SWAP(m_Blocks[i - 1], m_Blocks[i]);
    7379  return;
    7380  }
    7381  }
    7382 }
    7383 
    7384 VkResult VmaBlockVector::CreateBlock(VkDeviceSize blockSize, size_t* pNewBlockIndex)
    7385 {
    7386  VkMemoryAllocateInfo allocInfo = { VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO };
    7387  allocInfo.memoryTypeIndex = m_MemoryTypeIndex;
    7388  allocInfo.allocationSize = blockSize;
    7389  VkDeviceMemory mem = VK_NULL_HANDLE;
    7390  VkResult res = m_hAllocator->AllocateVulkanMemory(&allocInfo, &mem);
    7391  if(res < 0)
    7392  {
    7393  return res;
    7394  }
    7395 
    7396  // New VkDeviceMemory successfully created.
    7397 
    7398  // Create new block object for it.
    7399  VmaDeviceMemoryBlock* const pBlock = vma_new(m_hAllocator, VmaDeviceMemoryBlock)(m_hAllocator);
    7400  pBlock->Init(
    7401  m_MemoryTypeIndex,
    7402  mem,
    7403  allocInfo.allocationSize,
    7404  m_NextBlockId++);
    7405 
    7406  m_Blocks.push_back(pBlock);
    7407  if(pNewBlockIndex != VMA_NULL)
    7408  {
    7409  *pNewBlockIndex = m_Blocks.size() - 1;
    7410  }
    7411 
    7412  return VK_SUCCESS;
    7413 }
    7414 
    7415 #if VMA_STATS_STRING_ENABLED
    7416 
    7417 void VmaBlockVector::PrintDetailedMap(class VmaJsonWriter& json)
    7418 {
    7419  VmaMutexLock lock(m_Mutex, m_hAllocator->m_UseMutex);
    7420 
    7421  json.BeginObject();
    7422 
    7423  if(m_IsCustomPool)
    7424  {
    7425  json.WriteString("MemoryTypeIndex");
    7426  json.WriteNumber(m_MemoryTypeIndex);
    7427 
    7428  json.WriteString("BlockSize");
    7429  json.WriteNumber(m_PreferredBlockSize);
    7430 
    7431  json.WriteString("BlockCount");
    7432  json.BeginObject(true);
    7433  if(m_MinBlockCount > 0)
    7434  {
    7435  json.WriteString("Min");
    7436  json.WriteNumber((uint64_t)m_MinBlockCount);
    7437  }
    7438  if(m_MaxBlockCount < SIZE_MAX)
    7439  {
    7440  json.WriteString("Max");
    7441  json.WriteNumber((uint64_t)m_MaxBlockCount);
    7442  }
    7443  json.WriteString("Cur");
    7444  json.WriteNumber((uint64_t)m_Blocks.size());
    7445  json.EndObject();
    7446 
    7447  if(m_FrameInUseCount > 0)
    7448  {
    7449  json.WriteString("FrameInUseCount");
    7450  json.WriteNumber(m_FrameInUseCount);
    7451  }
    7452  }
    7453  else
    7454  {
    7455  json.WriteString("PreferredBlockSize");
    7456  json.WriteNumber(m_PreferredBlockSize);
    7457  }
    7458 
    7459  json.WriteString("Blocks");
    7460  json.BeginObject();
    7461  for(size_t i = 0; i < m_Blocks.size(); ++i)
    7462  {
    7463  json.BeginString();
    7464  json.ContinueString(m_Blocks[i]->GetId());
    7465  json.EndString();
    7466 
    7467  m_Blocks[i]->m_Metadata.PrintDetailedMap(json);
    7468  }
    7469  json.EndObject();
    7470 
    7471  json.EndObject();
    7472 }
    7473 
    7474 #endif // #if VMA_STATS_STRING_ENABLED
    7475 
    7476 VmaDefragmentator* VmaBlockVector::EnsureDefragmentator(
    7477  VmaAllocator hAllocator,
    7478  uint32_t currentFrameIndex)
    7479 {
    7480  if(m_pDefragmentator == VMA_NULL)
    7481  {
    7482  m_pDefragmentator = vma_new(m_hAllocator, VmaDefragmentator)(
    7483  hAllocator,
    7484  this,
    7485  currentFrameIndex);
    7486  }
    7487 
    7488  return m_pDefragmentator;
    7489 }
    7490 
    7491 VkResult VmaBlockVector::Defragment(
    7492  VmaDefragmentationStats* pDefragmentationStats,
    7493  VkDeviceSize& maxBytesToMove,
    7494  uint32_t& maxAllocationsToMove)
    7495 {
    7496  if(m_pDefragmentator == VMA_NULL)
    7497  {
    7498  return VK_SUCCESS;
    7499  }
    7500 
    7501  VmaMutexLock lock(m_Mutex, m_hAllocator->m_UseMutex);
    7502 
    7503  // Defragment.
    7504  VkResult result = m_pDefragmentator->Defragment(maxBytesToMove, maxAllocationsToMove);
    7505 
    7506  // Accumulate statistics.
    7507  if(pDefragmentationStats != VMA_NULL)
    7508  {
    7509  const VkDeviceSize bytesMoved = m_pDefragmentator->GetBytesMoved();
    7510  const uint32_t allocationsMoved = m_pDefragmentator->GetAllocationsMoved();
    7511  pDefragmentationStats->bytesMoved += bytesMoved;
    7512  pDefragmentationStats->allocationsMoved += allocationsMoved;
    7513  VMA_ASSERT(bytesMoved <= maxBytesToMove);
    7514  VMA_ASSERT(allocationsMoved <= maxAllocationsToMove);
    7515  maxBytesToMove -= bytesMoved;
    7516  maxAllocationsToMove -= allocationsMoved;
    7517  }
    7518 
    7519  // Free empty blocks.
    7520  m_HasEmptyBlock = false;
    7521  for(size_t blockIndex = m_Blocks.size(); blockIndex--; )
    7522  {
    7523  VmaDeviceMemoryBlock* pBlock = m_Blocks[blockIndex];
    7524  if(pBlock->m_Metadata.IsEmpty())
    7525  {
    7526  if(m_Blocks.size() > m_MinBlockCount)
    7527  {
    7528  if(pDefragmentationStats != VMA_NULL)
    7529  {
    7530  ++pDefragmentationStats->deviceMemoryBlocksFreed;
    7531  pDefragmentationStats->bytesFreed += pBlock->m_Metadata.GetSize();
    7532  }
    7533 
    7534  VmaVectorRemove(m_Blocks, blockIndex);
    7535  pBlock->Destroy(m_hAllocator);
    7536  vma_delete(m_hAllocator, pBlock);
    7537  }
    7538  else
    7539  {
    7540  m_HasEmptyBlock = true;
    7541  }
    7542  }
    7543  }
    7544 
    7545  return result;
    7546 }
    7547 
    7548 void VmaBlockVector::DestroyDefragmentator()
    7549 {
    7550  if(m_pDefragmentator != VMA_NULL)
    7551  {
    7552  vma_delete(m_hAllocator, m_pDefragmentator);
    7553  m_pDefragmentator = VMA_NULL;
    7554  }
    7555 }
    7556 
    7557 void VmaBlockVector::MakePoolAllocationsLost(
    7558  uint32_t currentFrameIndex,
    7559  size_t* pLostAllocationCount)
    7560 {
    7561  VmaMutexLock lock(m_Mutex, m_hAllocator->m_UseMutex);
    7562  size_t lostAllocationCount = 0;
    7563  for(uint32_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex)
    7564  {
    7565  VmaDeviceMemoryBlock* const pBlock = m_Blocks[blockIndex];
    7566  VMA_ASSERT(pBlock);
    7567  lostAllocationCount += pBlock->m_Metadata.MakeAllocationsLost(currentFrameIndex, m_FrameInUseCount);
    7568  }
    7569  if(pLostAllocationCount != VMA_NULL)
    7570  {
    7571  *pLostAllocationCount = lostAllocationCount;
    7572  }
    7573 }
    7574 
    7575 VkResult VmaBlockVector::CheckCorruption()
    7576 {
    7577  if(!IsCorruptionDetectionEnabled())
    7578  {
    7579  return VK_ERROR_FEATURE_NOT_PRESENT;
    7580  }
    7581 
    7582  VmaMutexLock lock(m_Mutex, m_hAllocator->m_UseMutex);
    7583  for(uint32_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex)
    7584  {
    7585  VmaDeviceMemoryBlock* const pBlock = m_Blocks[blockIndex];
    7586  VMA_ASSERT(pBlock);
    7587  VkResult res = pBlock->CheckCorruption(m_hAllocator);
    7588  if(res != VK_SUCCESS)
    7589  {
    7590  return res;
    7591  }
    7592  }
    7593  return VK_SUCCESS;
    7594 }
    7595 
    7596 void VmaBlockVector::AddStats(VmaStats* pStats)
    7597 {
    7598  const uint32_t memTypeIndex = m_MemoryTypeIndex;
    7599  const uint32_t memHeapIndex = m_hAllocator->MemoryTypeIndexToHeapIndex(memTypeIndex);
    7600 
    7601  VmaMutexLock lock(m_Mutex, m_hAllocator->m_UseMutex);
    7602 
    7603  for(uint32_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex)
    7604  {
    7605  const VmaDeviceMemoryBlock* const pBlock = m_Blocks[blockIndex];
    7606  VMA_ASSERT(pBlock);
    7607  VMA_HEAVY_ASSERT(pBlock->Validate());
    7608  VmaStatInfo allocationStatInfo;
    7609  pBlock->m_Metadata.CalcAllocationStatInfo(allocationStatInfo);
    7610  VmaAddStatInfo(pStats->total, allocationStatInfo);
    7611  VmaAddStatInfo(pStats->memoryType[memTypeIndex], allocationStatInfo);
    7612  VmaAddStatInfo(pStats->memoryHeap[memHeapIndex], allocationStatInfo);
    7613  }
    7614 }
    7615 
    7616 ////////////////////////////////////////////////////////////////////////////////
    7617 // VmaDefragmentator members definition
    7618 
    7619 VmaDefragmentator::VmaDefragmentator(
    7620  VmaAllocator hAllocator,
    7621  VmaBlockVector* pBlockVector,
    7622  uint32_t currentFrameIndex) :
    7623  m_hAllocator(hAllocator),
    7624  m_pBlockVector(pBlockVector),
    7625  m_CurrentFrameIndex(currentFrameIndex),
    7626  m_BytesMoved(0),
    7627  m_AllocationsMoved(0),
    7628  m_Allocations(VmaStlAllocator<AllocationInfo>(hAllocator->GetAllocationCallbacks())),
    7629  m_Blocks(VmaStlAllocator<BlockInfo*>(hAllocator->GetAllocationCallbacks()))
    7630 {
    7631 }
    7632 
    7633 VmaDefragmentator::~VmaDefragmentator()
    7634 {
    7635  for(size_t i = m_Blocks.size(); i--; )
    7636  {
    7637  vma_delete(m_hAllocator, m_Blocks[i]);
    7638  }
    7639 }
    7640 
    7641 void VmaDefragmentator::AddAllocation(VmaAllocation hAlloc, VkBool32* pChanged)
    7642 {
    7643  AllocationInfo allocInfo;
    7644  allocInfo.m_hAllocation = hAlloc;
    7645  allocInfo.m_pChanged = pChanged;
    7646  m_Allocations.push_back(allocInfo);
    7647 }
    7648 
    7649 VkResult VmaDefragmentator::BlockInfo::EnsureMapping(VmaAllocator hAllocator, void** ppMappedData)
    7650 {
    7651  // It has already been mapped for defragmentation.
    7652  if(m_pMappedDataForDefragmentation)
    7653  {
    7654  *ppMappedData = m_pMappedDataForDefragmentation;
    7655  return VK_SUCCESS;
    7656  }
    7657 
    7658  // It is originally mapped.
    7659  if(m_pBlock->GetMappedData())
    7660  {
    7661  *ppMappedData = m_pBlock->GetMappedData();
    7662  return VK_SUCCESS;
    7663  }
    7664 
    7665  // Map on first usage.
    7666  VkResult res = m_pBlock->Map(hAllocator, 1, &m_pMappedDataForDefragmentation);
    7667  *ppMappedData = m_pMappedDataForDefragmentation;
    7668  return res;
    7669 }
    7670 
    7671 void VmaDefragmentator::BlockInfo::Unmap(VmaAllocator hAllocator)
    7672 {
    7673  if(m_pMappedDataForDefragmentation != VMA_NULL)
    7674  {
    7675  m_pBlock->Unmap(hAllocator, 1);
    7676  }
    7677 }
    7678 
    7679 VkResult VmaDefragmentator::DefragmentRound(
    7680  VkDeviceSize maxBytesToMove,
    7681  uint32_t maxAllocationsToMove)
    7682 {
    7683  if(m_Blocks.empty())
    7684  {
    7685  return VK_SUCCESS;
    7686  }
    7687 
    7688  size_t srcBlockIndex = m_Blocks.size() - 1;
    7689  size_t srcAllocIndex = SIZE_MAX;
    7690  for(;;)
    7691  {
    7692  // 1. Find next allocation to move.
    7693  // 1.1. Start from last to first m_Blocks - they are sorted from most "destination" to most "source".
    7694  // 1.2. Then start from last to first m_Allocations - they are sorted from largest to smallest.
    7695  while(srcAllocIndex >= m_Blocks[srcBlockIndex]->m_Allocations.size())
    7696  {
    7697  if(m_Blocks[srcBlockIndex]->m_Allocations.empty())
    7698  {
    7699  // Finished: no more allocations to process.
    7700  if(srcBlockIndex == 0)
    7701  {
    7702  return VK_SUCCESS;
    7703  }
    7704  else
    7705  {
    7706  --srcBlockIndex;
    7707  srcAllocIndex = SIZE_MAX;
    7708  }
    7709  }
    7710  else
    7711  {
    7712  srcAllocIndex = m_Blocks[srcBlockIndex]->m_Allocations.size() - 1;
    7713  }
    7714  }
    7715 
    7716  BlockInfo* pSrcBlockInfo = m_Blocks[srcBlockIndex];
    7717  AllocationInfo& allocInfo = pSrcBlockInfo->m_Allocations[srcAllocIndex];
    7718 
    7719  const VkDeviceSize size = allocInfo.m_hAllocation->GetSize();
    7720  const VkDeviceSize srcOffset = allocInfo.m_hAllocation->GetOffset();
    7721  const VkDeviceSize alignment = allocInfo.m_hAllocation->GetAlignment();
    7722  const VmaSuballocationType suballocType = allocInfo.m_hAllocation->GetSuballocationType();
    7723 
    7724  // 2. Try to find new place for this allocation in preceding or current block.
    7725  for(size_t dstBlockIndex = 0; dstBlockIndex <= srcBlockIndex; ++dstBlockIndex)
    7726  {
    7727  BlockInfo* pDstBlockInfo = m_Blocks[dstBlockIndex];
    7728  VmaAllocationRequest dstAllocRequest;
    7729  if(pDstBlockInfo->m_pBlock->m_Metadata.CreateAllocationRequest(
    7730  m_CurrentFrameIndex,
    7731  m_pBlockVector->GetFrameInUseCount(),
    7732  m_pBlockVector->GetBufferImageGranularity(),
    7733  size,
    7734  alignment,
    7735  suballocType,
    7736  false, // canMakeOtherLost
    7737  &dstAllocRequest) &&
    7738  MoveMakesSense(
    7739  dstBlockIndex, dstAllocRequest.offset, srcBlockIndex, srcOffset))
    7740  {
    7741  VMA_ASSERT(dstAllocRequest.itemsToMakeLostCount == 0);
    7742 
    7743  // Reached limit on number of allocations or bytes to move.
    7744  if((m_AllocationsMoved + 1 > maxAllocationsToMove) ||
    7745  (m_BytesMoved + size > maxBytesToMove))
    7746  {
    7747  return VK_INCOMPLETE;
    7748  }
    7749 
    7750  void* pDstMappedData = VMA_NULL;
    7751  VkResult res = pDstBlockInfo->EnsureMapping(m_hAllocator, &pDstMappedData);
    7752  if(res != VK_SUCCESS)
    7753  {
    7754  return res;
    7755  }
    7756 
    7757  void* pSrcMappedData = VMA_NULL;
    7758  res = pSrcBlockInfo->EnsureMapping(m_hAllocator, &pSrcMappedData);
    7759  if(res != VK_SUCCESS)
    7760  {
    7761  return res;
    7762  }
    7763 
    7764  // THE PLACE WHERE ACTUAL DATA COPY HAPPENS.
    7765  memcpy(
    7766  reinterpret_cast<char*>(pDstMappedData) + dstAllocRequest.offset,
    7767  reinterpret_cast<char*>(pSrcMappedData) + srcOffset,
    7768  static_cast<size_t>(size));
    7769 
    7770  if(VMA_DEBUG_MARGIN > 0)
    7771  {
    7772  VmaWriteMagicValue(pDstMappedData, dstAllocRequest.offset - VMA_DEBUG_MARGIN);
    7773  VmaWriteMagicValue(pDstMappedData, dstAllocRequest.offset + size);
    7774  }
    7775 
    7776  pDstBlockInfo->m_pBlock->m_Metadata.Alloc(dstAllocRequest, suballocType, size, allocInfo.m_hAllocation);
    7777  pSrcBlockInfo->m_pBlock->m_Metadata.FreeAtOffset(srcOffset);
    7778 
    7779  allocInfo.m_hAllocation->ChangeBlockAllocation(m_hAllocator, pDstBlockInfo->m_pBlock, dstAllocRequest.offset);
    7780 
    7781  if(allocInfo.m_pChanged != VMA_NULL)
    7782  {
    7783  *allocInfo.m_pChanged = VK_TRUE;
    7784  }
    7785 
    7786  ++m_AllocationsMoved;
    7787  m_BytesMoved += size;
    7788 
    7789  VmaVectorRemove(pSrcBlockInfo->m_Allocations, srcAllocIndex);
    7790 
    7791  break;
    7792  }
    7793  }
    7794 
    7795  // If not processed, this allocInfo remains in pSrcBlockInfo->m_Allocations for next round.
    7796 
    7797  if(srcAllocIndex > 0)
    7798  {
    7799  --srcAllocIndex;
    7800  }
    7801  else
    7802  {
    7803  if(srcBlockIndex > 0)
    7804  {
    7805  --srcBlockIndex;
    7806  srcAllocIndex = SIZE_MAX;
    7807  }
    7808  else
    7809  {
    7810  return VK_SUCCESS;
    7811  }
    7812  }
    7813  }
    7814 }
    7815 
    7816 VkResult VmaDefragmentator::Defragment(
    7817  VkDeviceSize maxBytesToMove,
    7818  uint32_t maxAllocationsToMove)
    7819 {
    7820  if(m_Allocations.empty())
    7821  {
    7822  return VK_SUCCESS;
    7823  }
    7824 
    7825  // Create block info for each block.
    7826  const size_t blockCount = m_pBlockVector->m_Blocks.size();
    7827  for(size_t blockIndex = 0; blockIndex < blockCount; ++blockIndex)
    7828  {
    7829  BlockInfo* pBlockInfo = vma_new(m_hAllocator, BlockInfo)(m_hAllocator->GetAllocationCallbacks());
    7830  pBlockInfo->m_pBlock = m_pBlockVector->m_Blocks[blockIndex];
    7831  m_Blocks.push_back(pBlockInfo);
    7832  }
    7833 
    7834  // Sort them by m_pBlock pointer value.
    7835  VMA_SORT(m_Blocks.begin(), m_Blocks.end(), BlockPointerLess());
    7836 
    7837  // Move allocation infos from m_Allocations to the appropriate m_Blocks[i]->m_Allocations.
    7838  for(size_t allocIndex = 0, allocCount = m_Allocations.size(); allocIndex < allocCount; ++allocIndex)
    7839  {
    7840  AllocationInfo& allocInfo = m_Allocations[allocIndex];
    7841  // Now that we are inside VmaBlockVector::m_Mutex, we can make a final check whether this allocation was lost.
    7842  if(allocInfo.m_hAllocation->GetLastUseFrameIndex() != VMA_FRAME_INDEX_LOST)
    7843  {
    7844  VmaDeviceMemoryBlock* pBlock = allocInfo.m_hAllocation->GetBlock();
    7845  BlockInfoVector::iterator it = VmaBinaryFindFirstNotLess(m_Blocks.begin(), m_Blocks.end(), pBlock, BlockPointerLess());
    7846  if(it != m_Blocks.end() && (*it)->m_pBlock == pBlock)
    7847  {
    7848  (*it)->m_Allocations.push_back(allocInfo);
    7849  }
    7850  else
    7851  {
    7852  VMA_ASSERT(0);
    7853  }
    7854  }
    7855  }
    7856  m_Allocations.clear();
    7857 
    7858  for(size_t blockIndex = 0; blockIndex < blockCount; ++blockIndex)
    7859  {
    7860  BlockInfo* pBlockInfo = m_Blocks[blockIndex];
    7861  pBlockInfo->CalcHasNonMovableAllocations();
    7862  pBlockInfo->SortAllocationsBySizeDescecnding();
    7863  }
    7864 
    7865  // Sort m_Blocks this time by the main criterion, from most "destination" to most "source" blocks.
    7866  VMA_SORT(m_Blocks.begin(), m_Blocks.end(), BlockInfoCompareMoveDestination());
    7867 
    7868  // Execute defragmentation rounds (the main part).
    7869  VkResult result = VK_SUCCESS;
    7870  for(size_t round = 0; (round < 2) && (result == VK_SUCCESS); ++round)
    7871  {
    7872  result = DefragmentRound(maxBytesToMove, maxAllocationsToMove);
    7873  }
    7874 
    7875  // Unmap blocks that were mapped for defragmentation.
    7876  for(size_t blockIndex = 0; blockIndex < blockCount; ++blockIndex)
    7877  {
    7878  m_Blocks[blockIndex]->Unmap(m_hAllocator);
    7879  }
    7880 
    7881  return result;
    7882 }
    7883 
    7884 bool VmaDefragmentator::MoveMakesSense(
    7885  size_t dstBlockIndex, VkDeviceSize dstOffset,
    7886  size_t srcBlockIndex, VkDeviceSize srcOffset)
    7887 {
    7888  if(dstBlockIndex < srcBlockIndex)
    7889  {
    7890  return true;
    7891  }
    7892  if(dstBlockIndex > srcBlockIndex)
    7893  {
    7894  return false;
    7895  }
    7896  if(dstOffset < srcOffset)
    7897  {
    7898  return true;
    7899  }
    7900  return false;
    7901 }
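The predicate above is a lexicographic "strictly less" on the pair (blockIndex, offset): a move makes sense only toward an earlier block, or toward an earlier offset within the same block. A minimal standalone sketch of the same ordering (names and types here are illustrative, not VMA's):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <utility>

// Illustrative re-statement of the predicate: moving data makes sense only
// toward an earlier block, or an earlier offset within the same block.
// std::pair's operator< implements exactly this lexicographic comparison.
static bool MoveMakesSense(std::size_t dstBlockIndex, std::uint64_t dstOffset,
                           std::size_t srcBlockIndex, std::uint64_t srcOffset)
{
    return std::make_pair(dstBlockIndex, dstOffset) <
           std::make_pair(srcBlockIndex, srcOffset);
}
```

This is why the defragmentation rounds walk sources from the back of the sorted block list: every accepted move strictly decreases the (block, offset) position of the allocation.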
    7902 
    7903 ////////////////////////////////////////////////////////////////////////////////
    7904 // VmaAllocator_T
    7905 
    7906 VmaAllocator_T::VmaAllocator_T(const VmaAllocatorCreateInfo* pCreateInfo) :
    7907  m_UseMutex((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_EXTERNALLY_SYNCHRONIZED_BIT) == 0),
    7908  m_UseKhrDedicatedAllocation((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT) != 0),
    7909  m_hDevice(pCreateInfo->device),
    7910  m_AllocationCallbacksSpecified(pCreateInfo->pAllocationCallbacks != VMA_NULL),
    7911  m_AllocationCallbacks(pCreateInfo->pAllocationCallbacks ?
    7912  *pCreateInfo->pAllocationCallbacks : VmaEmptyAllocationCallbacks),
    7913  m_PreferredLargeHeapBlockSize(0),
    7914  m_PhysicalDevice(pCreateInfo->physicalDevice),
    7915  m_CurrentFrameIndex(0),
    7916  m_Pools(VmaStlAllocator<VmaPool>(GetAllocationCallbacks())),
    7917  m_NextPoolId(0)
    7918 {
    7919  if(VMA_DEBUG_DETECT_CORRUPTION)
    7920  {
    7921  // Needs to be a multiple of uint32_t size because we are going to write VMA_CORRUPTION_DETECTION_MAGIC_VALUE to it.
    7922  VMA_ASSERT(VMA_DEBUG_MARGIN % sizeof(uint32_t) == 0);
    7923  }
    7924 
    7925  VMA_ASSERT(pCreateInfo->physicalDevice && pCreateInfo->device);
    7926 
    7927 #if !(VMA_DEDICATED_ALLOCATION)
    7928  if((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT) != 0)
    7929  {
    7930  VMA_ASSERT(0 && "VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT set but required extensions are disabled by preprocessor macros.");
    7931  }
    7932 #endif
    7933 
    7934  memset(&m_DeviceMemoryCallbacks, 0, sizeof(m_DeviceMemoryCallbacks));
    7935  memset(&m_PhysicalDeviceProperties, 0, sizeof(m_PhysicalDeviceProperties));
    7936  memset(&m_MemProps, 0, sizeof(m_MemProps));
    7937 
    7938  memset(&m_pBlockVectors, 0, sizeof(m_pBlockVectors));
    7939  memset(&m_pDedicatedAllocations, 0, sizeof(m_pDedicatedAllocations));
    7940 
    7941  for(uint32_t i = 0; i < VK_MAX_MEMORY_HEAPS; ++i)
    7942  {
    7943  m_HeapSizeLimit[i] = VK_WHOLE_SIZE;
    7944  }
    7945 
    7946  if(pCreateInfo->pDeviceMemoryCallbacks != VMA_NULL)
    7947  {
    7948  m_DeviceMemoryCallbacks.pfnAllocate = pCreateInfo->pDeviceMemoryCallbacks->pfnAllocate;
    7949  m_DeviceMemoryCallbacks.pfnFree = pCreateInfo->pDeviceMemoryCallbacks->pfnFree;
    7950  }
    7951 
    7952  ImportVulkanFunctions(pCreateInfo->pVulkanFunctions);
    7953 
    7954  (*m_VulkanFunctions.vkGetPhysicalDeviceProperties)(m_PhysicalDevice, &m_PhysicalDeviceProperties);
    7955  (*m_VulkanFunctions.vkGetPhysicalDeviceMemoryProperties)(m_PhysicalDevice, &m_MemProps);
    7956 
    7957  m_PreferredLargeHeapBlockSize = (pCreateInfo->preferredLargeHeapBlockSize != 0) ?
    7958  pCreateInfo->preferredLargeHeapBlockSize : static_cast<VkDeviceSize>(VMA_DEFAULT_LARGE_HEAP_BLOCK_SIZE);
    7959 
    7960  if(pCreateInfo->pHeapSizeLimit != VMA_NULL)
    7961  {
    7962  for(uint32_t heapIndex = 0; heapIndex < GetMemoryHeapCount(); ++heapIndex)
    7963  {
    7964  const VkDeviceSize limit = pCreateInfo->pHeapSizeLimit[heapIndex];
    7965  if(limit != VK_WHOLE_SIZE)
    7966  {
    7967  m_HeapSizeLimit[heapIndex] = limit;
    7968  if(limit < m_MemProps.memoryHeaps[heapIndex].size)
    7969  {
    7970  m_MemProps.memoryHeaps[heapIndex].size = limit;
    7971  }
    7972  }
    7973  }
    7974  }
    7975 
    7976  for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
    7977  {
    7978  const VkDeviceSize preferredBlockSize = CalcPreferredBlockSize(memTypeIndex);
    7979 
    7980  m_pBlockVectors[memTypeIndex] = vma_new(this, VmaBlockVector)(
    7981  this,
    7982  memTypeIndex,
    7983  preferredBlockSize,
    7984  0,
    7985  SIZE_MAX,
    7986  GetBufferImageGranularity(),
    7987  pCreateInfo->frameInUseCount,
    7988  false); // isCustomPool
    7989  // No need to call m_pBlockVectors[memTypeIndex]->CreateMinBlocks here,
    7990  // because minBlockCount is 0.
    7991  m_pDedicatedAllocations[memTypeIndex] = vma_new(this, AllocationVectorType)(VmaStlAllocator<VmaAllocation>(GetAllocationCallbacks()));
    7992 
    7993  }
    7994 }
    7995 
    7996 VmaAllocator_T::~VmaAllocator_T()
    7997 {
    7998  VMA_ASSERT(m_Pools.empty());
    7999 
    8000  for(size_t i = GetMemoryTypeCount(); i--; )
    8001  {
    8002  vma_delete(this, m_pDedicatedAllocations[i]);
    8003  vma_delete(this, m_pBlockVectors[i]);
    8004  }
    8005 }
    8006 
    8007 void VmaAllocator_T::ImportVulkanFunctions(const VmaVulkanFunctions* pVulkanFunctions)
    8008 {
    8009 #if VMA_STATIC_VULKAN_FUNCTIONS == 1
    8010  m_VulkanFunctions.vkGetPhysicalDeviceProperties = &vkGetPhysicalDeviceProperties;
    8011  m_VulkanFunctions.vkGetPhysicalDeviceMemoryProperties = &vkGetPhysicalDeviceMemoryProperties;
    8012  m_VulkanFunctions.vkAllocateMemory = &vkAllocateMemory;
    8013  m_VulkanFunctions.vkFreeMemory = &vkFreeMemory;
    8014  m_VulkanFunctions.vkMapMemory = &vkMapMemory;
    8015  m_VulkanFunctions.vkUnmapMemory = &vkUnmapMemory;
    8016  m_VulkanFunctions.vkFlushMappedMemoryRanges = &vkFlushMappedMemoryRanges;
    8017  m_VulkanFunctions.vkInvalidateMappedMemoryRanges = &vkInvalidateMappedMemoryRanges;
    8018  m_VulkanFunctions.vkBindBufferMemory = &vkBindBufferMemory;
    8019  m_VulkanFunctions.vkBindImageMemory = &vkBindImageMemory;
    8020  m_VulkanFunctions.vkGetBufferMemoryRequirements = &vkGetBufferMemoryRequirements;
    8021  m_VulkanFunctions.vkGetImageMemoryRequirements = &vkGetImageMemoryRequirements;
    8022  m_VulkanFunctions.vkCreateBuffer = &vkCreateBuffer;
    8023  m_VulkanFunctions.vkDestroyBuffer = &vkDestroyBuffer;
    8024  m_VulkanFunctions.vkCreateImage = &vkCreateImage;
    8025  m_VulkanFunctions.vkDestroyImage = &vkDestroyImage;
    8026 #if VMA_DEDICATED_ALLOCATION
    8027  if(m_UseKhrDedicatedAllocation)
    8028  {
    8029  m_VulkanFunctions.vkGetBufferMemoryRequirements2KHR =
    8030  (PFN_vkGetBufferMemoryRequirements2KHR)vkGetDeviceProcAddr(m_hDevice, "vkGetBufferMemoryRequirements2KHR");
    8031  m_VulkanFunctions.vkGetImageMemoryRequirements2KHR =
    8032  (PFN_vkGetImageMemoryRequirements2KHR)vkGetDeviceProcAddr(m_hDevice, "vkGetImageMemoryRequirements2KHR");
    8033  }
    8034 #endif // #if VMA_DEDICATED_ALLOCATION
    8035 #endif // #if VMA_STATIC_VULKAN_FUNCTIONS == 1
    8036 
    8037 #define VMA_COPY_IF_NOT_NULL(funcName) \
    8038  if(pVulkanFunctions->funcName != VMA_NULL) m_VulkanFunctions.funcName = pVulkanFunctions->funcName;
    8039 
    8040  if(pVulkanFunctions != VMA_NULL)
    8041  {
    8042  VMA_COPY_IF_NOT_NULL(vkGetPhysicalDeviceProperties);
    8043  VMA_COPY_IF_NOT_NULL(vkGetPhysicalDeviceMemoryProperties);
    8044  VMA_COPY_IF_NOT_NULL(vkAllocateMemory);
    8045  VMA_COPY_IF_NOT_NULL(vkFreeMemory);
    8046  VMA_COPY_IF_NOT_NULL(vkMapMemory);
    8047  VMA_COPY_IF_NOT_NULL(vkUnmapMemory);
    8048  VMA_COPY_IF_NOT_NULL(vkFlushMappedMemoryRanges);
    8049  VMA_COPY_IF_NOT_NULL(vkInvalidateMappedMemoryRanges);
    8050  VMA_COPY_IF_NOT_NULL(vkBindBufferMemory);
    8051  VMA_COPY_IF_NOT_NULL(vkBindImageMemory);
    8052  VMA_COPY_IF_NOT_NULL(vkGetBufferMemoryRequirements);
    8053  VMA_COPY_IF_NOT_NULL(vkGetImageMemoryRequirements);
    8054  VMA_COPY_IF_NOT_NULL(vkCreateBuffer);
    8055  VMA_COPY_IF_NOT_NULL(vkDestroyBuffer);
    8056  VMA_COPY_IF_NOT_NULL(vkCreateImage);
    8057  VMA_COPY_IF_NOT_NULL(vkDestroyImage);
    8058 #if VMA_DEDICATED_ALLOCATION
    8059  VMA_COPY_IF_NOT_NULL(vkGetBufferMemoryRequirements2KHR);
    8060  VMA_COPY_IF_NOT_NULL(vkGetImageMemoryRequirements2KHR);
    8061 #endif
    8062  }
    8063 
    8064 #undef VMA_COPY_IF_NOT_NULL
    8065 
    8066  // If these asserts are hit, you must either #define VMA_STATIC_VULKAN_FUNCTIONS 1
    8067  // or pass valid pointers as VmaAllocatorCreateInfo::pVulkanFunctions.
    8068  VMA_ASSERT(m_VulkanFunctions.vkGetPhysicalDeviceProperties != VMA_NULL);
    8069  VMA_ASSERT(m_VulkanFunctions.vkGetPhysicalDeviceMemoryProperties != VMA_NULL);
    8070  VMA_ASSERT(m_VulkanFunctions.vkAllocateMemory != VMA_NULL);
    8071  VMA_ASSERT(m_VulkanFunctions.vkFreeMemory != VMA_NULL);
    8072  VMA_ASSERT(m_VulkanFunctions.vkMapMemory != VMA_NULL);
    8073  VMA_ASSERT(m_VulkanFunctions.vkUnmapMemory != VMA_NULL);
    8074  VMA_ASSERT(m_VulkanFunctions.vkFlushMappedMemoryRanges != VMA_NULL);
    8075  VMA_ASSERT(m_VulkanFunctions.vkInvalidateMappedMemoryRanges != VMA_NULL);
    8076  VMA_ASSERT(m_VulkanFunctions.vkBindBufferMemory != VMA_NULL);
    8077  VMA_ASSERT(m_VulkanFunctions.vkBindImageMemory != VMA_NULL);
    8078  VMA_ASSERT(m_VulkanFunctions.vkGetBufferMemoryRequirements != VMA_NULL);
    8079  VMA_ASSERT(m_VulkanFunctions.vkGetImageMemoryRequirements != VMA_NULL);
    8080  VMA_ASSERT(m_VulkanFunctions.vkCreateBuffer != VMA_NULL);
    8081  VMA_ASSERT(m_VulkanFunctions.vkDestroyBuffer != VMA_NULL);
    8082  VMA_ASSERT(m_VulkanFunctions.vkCreateImage != VMA_NULL);
    8083  VMA_ASSERT(m_VulkanFunctions.vkDestroyImage != VMA_NULL);
    8084 #if VMA_DEDICATED_ALLOCATION
    8085  if(m_UseKhrDedicatedAllocation)
    8086  {
    8087  VMA_ASSERT(m_VulkanFunctions.vkGetBufferMemoryRequirements2KHR != VMA_NULL);
    8088  VMA_ASSERT(m_VulkanFunctions.vkGetImageMemoryRequirements2KHR != VMA_NULL);
    8089  }
    8090 #endif
    8091 }
    8092 
    8093 VkDeviceSize VmaAllocator_T::CalcPreferredBlockSize(uint32_t memTypeIndex)
    8094 {
    8095  const uint32_t heapIndex = MemoryTypeIndexToHeapIndex(memTypeIndex);
    8096  const VkDeviceSize heapSize = m_MemProps.memoryHeaps[heapIndex].size;
    8097  const bool isSmallHeap = heapSize <= VMA_SMALL_HEAP_MAX_SIZE;
    8098  return isSmallHeap ? (heapSize / 8) : m_PreferredLargeHeapBlockSize;
    8099 }
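The heuristic above can be sketched in isolation: small heaps get 1/8 of the heap so a handful of blocks never exhausts them, while large heaps use the fixed preferred block size. The threshold and default below are assumed example values; the real ones come from the configurable macros VMA_SMALL_HEAP_MAX_SIZE and VMA_DEFAULT_LARGE_HEAP_BLOCK_SIZE:

```cpp
#include <cassert>
#include <cstdint>

typedef uint64_t DeviceSize; // stand-in for VkDeviceSize

// Assumed example values - in VMA these are preprocessor macros
// that the user can override.
static const DeviceSize SMALL_HEAP_MAX_SIZE           = 1024ull << 20; // 1 GiB
static const DeviceSize DEFAULT_LARGE_HEAP_BLOCK_SIZE = 256ull  << 20; // 256 MiB

// Mirror of the heuristic: small heaps get 1/8 of the heap,
// large heaps get the fixed preferred block size.
static DeviceSize CalcPreferredBlockSize(DeviceSize heapSize)
{
    const bool isSmallHeap = heapSize <= SMALL_HEAP_MAX_SIZE;
    return isSmallHeap ? (heapSize / 8) : DEFAULT_LARGE_HEAP_BLOCK_SIZE;
}
```

For example, a 512 MiB heap (typical for device-local memory on older integrated GPUs) would get 64 MiB blocks, while an 8 GiB heap would get the flat 256 MiB.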
    8100 
    8101 VkResult VmaAllocator_T::AllocateMemoryOfType(
    8102  VkDeviceSize size,
    8103  VkDeviceSize alignment,
    8104  bool dedicatedAllocation,
    8105  VkBuffer dedicatedBuffer,
    8106  VkImage dedicatedImage,
    8107  const VmaAllocationCreateInfo& createInfo,
    8108  uint32_t memTypeIndex,
    8109  VmaSuballocationType suballocType,
    8110  VmaAllocation* pAllocation)
    8111 {
    8112  VMA_ASSERT(pAllocation != VMA_NULL);
    8113  VMA_DEBUG_LOG(" AllocateMemory: MemoryTypeIndex=%u, Size=%llu", memTypeIndex, size);
    8114 
    8115  VmaAllocationCreateInfo finalCreateInfo = createInfo;
    8116 
    8117  // If memory type is not HOST_VISIBLE, disable MAPPED.
    8118  if((finalCreateInfo.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0 &&
    8119  (m_MemProps.memoryTypes[memTypeIndex].propertyFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) == 0)
    8120  {
    8121  finalCreateInfo.flags &= ~VMA_ALLOCATION_CREATE_MAPPED_BIT;
    8122  }
    8123 
    8124  VmaBlockVector* const blockVector = m_pBlockVectors[memTypeIndex];
    8125  VMA_ASSERT(blockVector);
    8126 
    8127  const VkDeviceSize preferredBlockSize = blockVector->GetPreferredBlockSize();
    8128  bool preferDedicatedMemory =
    8129  VMA_DEBUG_ALWAYS_DEDICATED_MEMORY ||
    8130  dedicatedAllocation ||
    8131  // Heuristic: Allocate dedicated memory if the requested size is greater than half of the preferred block size.
    8132  size > preferredBlockSize / 2;
    8133 
    8134  if(preferDedicatedMemory &&
    8135  (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) == 0 &&
    8136  finalCreateInfo.pool == VK_NULL_HANDLE)
    8137  {
    8138  finalCreateInfo.flags |= VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT;
    8139  }
    8140 
    8141  if((finalCreateInfo.flags & VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT) != 0)
    8142  {
    8143  if((finalCreateInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) != 0)
    8144  {
    8145  return VK_ERROR_OUT_OF_DEVICE_MEMORY;
    8146  }
    8147  else
    8148  {
    8149  return AllocateDedicatedMemory(
    8150  size,
    8151  suballocType,
    8152  memTypeIndex,
    8153  (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0,
    8154  (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT) != 0,
    8155  finalCreateInfo.pUserData,
    8156  dedicatedBuffer,
    8157  dedicatedImage,
    8158  pAllocation);
    8159  }
    8160  }
    8161  else
    8162  {
    8163  VkResult res = blockVector->Allocate(
    8164  VK_NULL_HANDLE, // hCurrentPool
    8165  m_CurrentFrameIndex.load(),
    8166  size,
    8167  alignment,
    8168  finalCreateInfo,
    8169  suballocType,
    8170  pAllocation);
    8171  if(res == VK_SUCCESS)
    8172  {
    8173  return res;
    8174  }
    8175 
    8176  // Try dedicated memory.
    8177  if((finalCreateInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) != 0)
    8178  {
    8179  return VK_ERROR_OUT_OF_DEVICE_MEMORY;
    8180  }
    8181  else
    8182  {
    8183  res = AllocateDedicatedMemory(
    8184  size,
    8185  suballocType,
    8186  memTypeIndex,
    8187  (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0,
    8188  (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT) != 0,
    8189  finalCreateInfo.pUserData,
    8190  dedicatedBuffer,
    8191  dedicatedImage,
    8192  pAllocation);
    8193  if(res == VK_SUCCESS)
    8194  {
    8195  // Succeeded: AllocateDedicatedMemory function already filled pAllocation, nothing more to do here.
    8196  VMA_DEBUG_LOG(" Allocated as DedicatedMemory");
    8197  return VK_SUCCESS;
    8198  }
    8199  else
    8200  {
    8201  // Everything failed: Return error code.
    8202  VMA_DEBUG_LOG(" vkAllocateMemory FAILED");
    8203  return res;
    8204  }
    8205  }
    8206  }
    8207 }
    8208 
    8209 VkResult VmaAllocator_T::AllocateDedicatedMemory(
    8210  VkDeviceSize size,
    8211  VmaSuballocationType suballocType,
    8212  uint32_t memTypeIndex,
    8213  bool map,
    8214  bool isUserDataString,
    8215  void* pUserData,
    8216  VkBuffer dedicatedBuffer,
    8217  VkImage dedicatedImage,
    8218  VmaAllocation* pAllocation)
    8219 {
    8220  VMA_ASSERT(pAllocation);
    8221 
    8222  VkMemoryAllocateInfo allocInfo = { VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO };
    8223  allocInfo.memoryTypeIndex = memTypeIndex;
    8224  allocInfo.allocationSize = size;
    8225 
    8226 #if VMA_DEDICATED_ALLOCATION
    8227  VkMemoryDedicatedAllocateInfoKHR dedicatedAllocInfo = { VK_STRUCTURE_TYPE_MEMORY_DEDICATED_ALLOCATE_INFO_KHR };
    8228  if(m_UseKhrDedicatedAllocation)
    8229  {
    8230  if(dedicatedBuffer != VK_NULL_HANDLE)
    8231  {
    8232  VMA_ASSERT(dedicatedImage == VK_NULL_HANDLE);
    8233  dedicatedAllocInfo.buffer = dedicatedBuffer;
    8234  allocInfo.pNext = &dedicatedAllocInfo;
    8235  }
    8236  else if(dedicatedImage != VK_NULL_HANDLE)
    8237  {
    8238  dedicatedAllocInfo.image = dedicatedImage;
    8239  allocInfo.pNext = &dedicatedAllocInfo;
    8240  }
    8241  }
    8242 #endif // #if VMA_DEDICATED_ALLOCATION
    8243 
    8244  // Allocate VkDeviceMemory.
    8245  VkDeviceMemory hMemory = VK_NULL_HANDLE;
    8246  VkResult res = AllocateVulkanMemory(&allocInfo, &hMemory);
    8247  if(res < 0)
    8248  {
    8249  VMA_DEBUG_LOG(" vkAllocateMemory FAILED");
    8250  return res;
    8251  }
    8252 
    8253  void* pMappedData = VMA_NULL;
    8254  if(map)
    8255  {
    8256  res = (*m_VulkanFunctions.vkMapMemory)(
    8257  m_hDevice,
    8258  hMemory,
    8259  0,
    8260  VK_WHOLE_SIZE,
    8261  0,
    8262  &pMappedData);
    8263  if(res < 0)
    8264  {
    8265  VMA_DEBUG_LOG(" vkMapMemory FAILED");
    8266  FreeVulkanMemory(memTypeIndex, size, hMemory);
    8267  return res;
    8268  }
    8269  }
    8270 
    8271  *pAllocation = vma_new(this, VmaAllocation_T)(m_CurrentFrameIndex.load(), isUserDataString);
    8272  (*pAllocation)->InitDedicatedAllocation(memTypeIndex, hMemory, suballocType, pMappedData, size);
    8273  (*pAllocation)->SetUserData(this, pUserData);
    8274  if(VMA_DEBUG_INITIALIZE_ALLOCATIONS)
    8275  {
    8276  FillAllocation(*pAllocation, VMA_ALLOCATION_FILL_PATTERN_CREATED);
    8277  }
    8278 
    8279  // Register it in m_pDedicatedAllocations.
    8280  {
    8281  VmaMutexLock lock(m_DedicatedAllocationsMutex[memTypeIndex], m_UseMutex);
    8282  AllocationVectorType* pDedicatedAllocations = m_pDedicatedAllocations[memTypeIndex];
    8283  VMA_ASSERT(pDedicatedAllocations);
    8284  VmaVectorInsertSorted<VmaPointerLess>(*pDedicatedAllocations, *pAllocation);
    8285  }
    8286 
    8287  VMA_DEBUG_LOG(" Allocated DedicatedMemory MemoryTypeIndex=#%u", memTypeIndex);
    8288 
    8289  return VK_SUCCESS;
    8290 }
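When VMA_DEBUG_INITIALIZE_ALLOCATIONS is enabled, the FillAllocation call above stamps each newly created (and later each freed) allocation with a recognizable bit pattern, so reads of uninitialized or freed memory show up as that pattern instead of garbage. A minimal standalone sketch of the idea; the pattern bytes here are assumed placeholders, not necessarily the VMA_ALLOCATION_FILL_PATTERN_* values defined elsewhere in the header:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Assumed example patterns - check the VMA_ALLOCATION_FILL_PATTERN_*
// constants in vk_mem_alloc.h for the actual values.
static const uint8_t FILL_PATTERN_CREATED   = 0xDC;
static const uint8_t FILL_PATTERN_DESTROYED = 0xEF;

// Fill a mapped allocation with the given pattern, as FillAllocation does
// on create and destroy when VMA_DEBUG_INITIALIZE_ALLOCATIONS is enabled.
static void FillAllocation(void* pMappedData, size_t size, uint8_t pattern)
{
    memset(pMappedData, pattern, size);
}

// Debugging heuristic: does this region still consist entirely of the
// fill pattern, i.e. was it never written by the application?
static bool LooksUninitialized(const void* pData, size_t size, uint8_t pattern)
{
    const uint8_t* p = static_cast<const uint8_t*>(pData);
    for(size_t i = 0; i < size; ++i)
        if(p[i] != pattern)
            return false;
    return true;
}
```

As with margin validation, this only applies to HOST_VISIBLE memory that can be mapped; the fill happens through the same mapping used by the application.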
    8291 
    8292 void VmaAllocator_T::GetBufferMemoryRequirements(
    8293  VkBuffer hBuffer,
    8294  VkMemoryRequirements& memReq,
    8295  bool& requiresDedicatedAllocation,
    8296  bool& prefersDedicatedAllocation) const
    8297 {
    8298 #if VMA_DEDICATED_ALLOCATION
    8299  if(m_UseKhrDedicatedAllocation)
    8300  {
    8301  VkBufferMemoryRequirementsInfo2KHR memReqInfo = { VK_STRUCTURE_TYPE_BUFFER_MEMORY_REQUIREMENTS_INFO_2_KHR };
    8302  memReqInfo.buffer = hBuffer;
    8303 
    8304  VkMemoryDedicatedRequirementsKHR memDedicatedReq = { VK_STRUCTURE_TYPE_MEMORY_DEDICATED_REQUIREMENTS_KHR };
    8305 
    8306  VkMemoryRequirements2KHR memReq2 = { VK_STRUCTURE_TYPE_MEMORY_REQUIREMENTS_2_KHR };
    8307  memReq2.pNext = &memDedicatedReq;
    8308 
    8309  (*m_VulkanFunctions.vkGetBufferMemoryRequirements2KHR)(m_hDevice, &memReqInfo, &memReq2);
    8310 
    8311  memReq = memReq2.memoryRequirements;
    8312  requiresDedicatedAllocation = (memDedicatedReq.requiresDedicatedAllocation != VK_FALSE);
    8313  prefersDedicatedAllocation = (memDedicatedReq.prefersDedicatedAllocation != VK_FALSE);
    8314  }
    8315  else
    8316 #endif // #if VMA_DEDICATED_ALLOCATION
    8317  {
    8318  (*m_VulkanFunctions.vkGetBufferMemoryRequirements)(m_hDevice, hBuffer, &memReq);
    8319  requiresDedicatedAllocation = false;
    8320  prefersDedicatedAllocation = false;
    8321  }
    8322 }
    8323 
    8324 void VmaAllocator_T::GetImageMemoryRequirements(
    8325  VkImage hImage,
    8326  VkMemoryRequirements& memReq,
    8327  bool& requiresDedicatedAllocation,
    8328  bool& prefersDedicatedAllocation) const
    8329 {
    8330 #if VMA_DEDICATED_ALLOCATION
    8331  if(m_UseKhrDedicatedAllocation)
    8332  {
    8333  VkImageMemoryRequirementsInfo2KHR memReqInfo = { VK_STRUCTURE_TYPE_IMAGE_MEMORY_REQUIREMENTS_INFO_2_KHR };
    8334  memReqInfo.image = hImage;
    8335 
    8336  VkMemoryDedicatedRequirementsKHR memDedicatedReq = { VK_STRUCTURE_TYPE_MEMORY_DEDICATED_REQUIREMENTS_KHR };
    8337 
    8338  VkMemoryRequirements2KHR memReq2 = { VK_STRUCTURE_TYPE_MEMORY_REQUIREMENTS_2_KHR };
    8339  memReq2.pNext = &memDedicatedReq;
    8340 
    8341  (*m_VulkanFunctions.vkGetImageMemoryRequirements2KHR)(m_hDevice, &memReqInfo, &memReq2);
    8342 
    8343  memReq = memReq2.memoryRequirements;
    8344  requiresDedicatedAllocation = (memDedicatedReq.requiresDedicatedAllocation != VK_FALSE);
    8345  prefersDedicatedAllocation = (memDedicatedReq.prefersDedicatedAllocation != VK_FALSE);
    8346  }
    8347  else
    8348 #endif // #if VMA_DEDICATED_ALLOCATION
    8349  {
    8350  (*m_VulkanFunctions.vkGetImageMemoryRequirements)(m_hDevice, hImage, &memReq);
    8351  requiresDedicatedAllocation = false;
    8352  prefersDedicatedAllocation = false;
    8353  }
    8354 }
    8355 
    8356 VkResult VmaAllocator_T::AllocateMemory(
    8357  const VkMemoryRequirements& vkMemReq,
    8358  bool requiresDedicatedAllocation,
    8359  bool prefersDedicatedAllocation,
    8360  VkBuffer dedicatedBuffer,
    8361  VkImage dedicatedImage,
    8362  const VmaAllocationCreateInfo& createInfo,
    8363  VmaSuballocationType suballocType,
    8364  VmaAllocation* pAllocation)
    8365 {
    8366  if((createInfo.flags & VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT) != 0 &&
    8367  (createInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) != 0)
    8368  {
    8369  VMA_ASSERT(0 && "Specifying VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT together with VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT makes no sense.");
    8370  return VK_ERROR_OUT_OF_DEVICE_MEMORY;
    8371  }
    8372  if((createInfo.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0 &&
    8373  (createInfo.flags & VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT) != 0)
    8374  {
    8375  VMA_ASSERT(0 && "Specifying VMA_ALLOCATION_CREATE_MAPPED_BIT together with VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT is invalid.");
    8376  return VK_ERROR_OUT_OF_DEVICE_MEMORY;
    8377  }
    8378  if(requiresDedicatedAllocation)
    8379  {
    8380  if((createInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) != 0)
    8381  {
    8382  VMA_ASSERT(0 && "VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT specified while dedicated allocation is required.");
    8383  return VK_ERROR_OUT_OF_DEVICE_MEMORY;
    8384  }
    8385  if(createInfo.pool != VK_NULL_HANDLE)
    8386  {
    8387  VMA_ASSERT(0 && "Pool specified while dedicated allocation is required.");
    8388  return VK_ERROR_OUT_OF_DEVICE_MEMORY;
    8389  }
    8390  }
    8391  if((createInfo.pool != VK_NULL_HANDLE) &&
    8392  ((createInfo.flags & (VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT)) != 0))
    8393  {
    8394  VMA_ASSERT(0 && "Specifying VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT when pool != null is invalid.");
    8395  return VK_ERROR_OUT_OF_DEVICE_MEMORY;
    8396  }
    8397 
    8398  if(createInfo.pool != VK_NULL_HANDLE)
    8399  {
    8400  const VkDeviceSize alignmentForPool = VMA_MAX(
    8401  vkMemReq.alignment,
    8402  GetMemoryTypeMinAlignment(createInfo.pool->m_BlockVector.GetMemoryTypeIndex()));
    8403  return createInfo.pool->m_BlockVector.Allocate(
    8404  createInfo.pool,
    8405  m_CurrentFrameIndex.load(),
    8406  vkMemReq.size,
    8407  alignmentForPool,
    8408  createInfo,
    8409  suballocType,
    8410  pAllocation);
    8411  }
    8412  else
    8413  {
    8414  // Bit mask of Vulkan memory types acceptable for this allocation.
    8415  uint32_t memoryTypeBits = vkMemReq.memoryTypeBits;
    8416  uint32_t memTypeIndex = UINT32_MAX;
    8417  VkResult res = vmaFindMemoryTypeIndex(this, memoryTypeBits, &createInfo, &memTypeIndex);
    8418  if(res == VK_SUCCESS)
    8419  {
    8420  VkDeviceSize alignmentForMemType = VMA_MAX(
    8421  vkMemReq.alignment,
    8422  GetMemoryTypeMinAlignment(memTypeIndex));
    8423 
    8424  res = AllocateMemoryOfType(
    8425  vkMemReq.size,
    8426  alignmentForMemType,
    8427  requiresDedicatedAllocation || prefersDedicatedAllocation,
    8428  dedicatedBuffer,
    8429  dedicatedImage,
    8430  createInfo,
    8431  memTypeIndex,
    8432  suballocType,
    8433  pAllocation);
    8434  // Succeeded on first try.
    8435  if(res == VK_SUCCESS)
    8436  {
    8437  return res;
    8438  }
    8439  // Allocation from this memory type failed. Try other compatible memory types.
    8440  else
    8441  {
    8442  for(;;)
    8443  {
    8444  // Remove old memTypeIndex from list of possibilities.
    8445  memoryTypeBits &= ~(1u << memTypeIndex);
    8446  // Find alternative memTypeIndex.
    8447  res = vmaFindMemoryTypeIndex(this, memoryTypeBits, &createInfo, &memTypeIndex);
    8448  if(res == VK_SUCCESS)
    8449  {
    8450  alignmentForMemType = VMA_MAX(
    8451  vkMemReq.alignment,
    8452  GetMemoryTypeMinAlignment(memTypeIndex));
    8453 
    8454  res = AllocateMemoryOfType(
    8455  vkMemReq.size,
    8456  alignmentForMemType,
    8457  requiresDedicatedAllocation || prefersDedicatedAllocation,
    8458  dedicatedBuffer,
    8459  dedicatedImage,
    8460  createInfo,
    8461  memTypeIndex,
    8462  suballocType,
    8463  pAllocation);
    8464  // Allocation from this alternative memory type succeeded.
    8465  if(res == VK_SUCCESS)
    8466  {
    8467  return res;
    8468  }
    8469  // else: Allocation from this memory type failed. Try next one - next loop iteration.
    8470  }
    8471  // No other matching memory type index could be found.
    8472  else
    8473  {
    8474  // Not returning res, which is VK_ERROR_FEATURE_NOT_PRESENT, because we already failed to allocate once.
    8475  return VK_ERROR_OUT_OF_DEVICE_MEMORY;
    8476  }
    8477  }
    8478  }
    8479  }
    8480  // Can't find any single memory type matching requirements. res is VK_ERROR_FEATURE_NOT_PRESENT.
    8481  else
    8482  return res;
    8483  }
    8484 }
    8485 
    8486 void VmaAllocator_T::FreeMemory(const VmaAllocation allocation)
    8487 {
    8488  VMA_ASSERT(allocation);
    8489 
    8490  if(allocation->CanBecomeLost() == false ||
    8491  allocation->GetLastUseFrameIndex() != VMA_FRAME_INDEX_LOST)
    8492  {
    8493  if(VMA_DEBUG_INITIALIZE_ALLOCATIONS)
    8494  {
    8495  FillAllocation(allocation, VMA_ALLOCATION_FILL_PATTERN_DESTROYED);
    8496  }
    8497 
    8498  switch(allocation->GetType())
    8499  {
    8500  case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
    8501  {
    8502  VmaBlockVector* pBlockVector = VMA_NULL;
    8503  VmaPool hPool = allocation->GetPool();
    8504  if(hPool != VK_NULL_HANDLE)
    8505  {
    8506  pBlockVector = &hPool->m_BlockVector;
    8507  }
    8508  else
    8509  {
    8510  const uint32_t memTypeIndex = allocation->GetMemoryTypeIndex();
    8511  pBlockVector = m_pBlockVectors[memTypeIndex];
    8512  }
    8513  pBlockVector->Free(allocation);
    8514  }
    8515  break;
    8516  case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
    8517  FreeDedicatedMemory(allocation);
    8518  break;
    8519  default:
    8520  VMA_ASSERT(0);
    8521  }
    8522  }
    8523 
    8524  allocation->SetUserData(this, VMA_NULL);
    8525  vma_delete(this, allocation);
    8526 }
    8527 
    8528 void VmaAllocator_T::CalculateStats(VmaStats* pStats)
    8529 {
    8530  // Initialize.
    8531  InitStatInfo(pStats->total);
    8532  for(size_t i = 0; i < VK_MAX_MEMORY_TYPES; ++i)
    8533  InitStatInfo(pStats->memoryType[i]);
    8534  for(size_t i = 0; i < VK_MAX_MEMORY_HEAPS; ++i)
    8535  InitStatInfo(pStats->memoryHeap[i]);
    8536 
    8537  // Process default pools.
    8538  for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
    8539  {
    8540  VmaBlockVector* const pBlockVector = m_pBlockVectors[memTypeIndex];
    8541  VMA_ASSERT(pBlockVector);
    8542  pBlockVector->AddStats(pStats);
    8543  }
    8544 
    8545  // Process custom pools.
    8546  {
    8547  VmaMutexLock lock(m_PoolsMutex, m_UseMutex);
    8548  for(size_t poolIndex = 0, poolCount = m_Pools.size(); poolIndex < poolCount; ++poolIndex)
    8549  {
    8550  m_Pools[poolIndex]->GetBlockVector().AddStats(pStats);
    8551  }
    8552  }
    8553 
    8554  // Process dedicated allocations.
    8555  for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
    8556  {
    8557  const uint32_t memHeapIndex = MemoryTypeIndexToHeapIndex(memTypeIndex);
    8558  VmaMutexLock dedicatedAllocationsLock(m_DedicatedAllocationsMutex[memTypeIndex], m_UseMutex);
    8559  AllocationVectorType* const pDedicatedAllocVector = m_pDedicatedAllocations[memTypeIndex];
    8560  VMA_ASSERT(pDedicatedAllocVector);
    8561  for(size_t allocIndex = 0, allocCount = pDedicatedAllocVector->size(); allocIndex < allocCount; ++allocIndex)
    8562  {
    8563  VmaStatInfo allocationStatInfo;
    8564  (*pDedicatedAllocVector)[allocIndex]->DedicatedAllocCalcStatsInfo(allocationStatInfo);
    8565  VmaAddStatInfo(pStats->total, allocationStatInfo);
    8566  VmaAddStatInfo(pStats->memoryType[memTypeIndex], allocationStatInfo);
    8567  VmaAddStatInfo(pStats->memoryHeap[memHeapIndex], allocationStatInfo);
    8568  }
    8569  }
    8570 
    8571  // Postprocess.
    8572  VmaPostprocessCalcStatInfo(pStats->total);
    8573  for(size_t i = 0; i < GetMemoryTypeCount(); ++i)
    8574  VmaPostprocessCalcStatInfo(pStats->memoryType[i]);
    8575  for(size_t i = 0; i < GetMemoryHeapCount(); ++i)
    8576  VmaPostprocessCalcStatInfo(pStats->memoryHeap[i]);
    8577 }
    8578 
    8579 static const uint32_t VMA_VENDOR_ID_AMD = 4098;
    8580 
    8581 VkResult VmaAllocator_T::Defragment(
    8582  VmaAllocation* pAllocations,
    8583  size_t allocationCount,
    8584  VkBool32* pAllocationsChanged,
    8585  const VmaDefragmentationInfo* pDefragmentationInfo,
    8586  VmaDefragmentationStats* pDefragmentationStats)
    8587 {
    8588  if(pAllocationsChanged != VMA_NULL)
    8589  {
    8590  memset(pAllocationsChanged, 0, sizeof(*pAllocationsChanged));
    8591  }
    8592  if(pDefragmentationStats != VMA_NULL)
    8593  {
    8594  memset(pDefragmentationStats, 0, sizeof(*pDefragmentationStats));
    8595  }
    8596 
    8597  const uint32_t currentFrameIndex = m_CurrentFrameIndex.load();
    8598 
    8599  VmaMutexLock poolsLock(m_PoolsMutex, m_UseMutex);
    8600 
    8601  const size_t poolCount = m_Pools.size();
    8602 
    8603  // Dispatch pAllocations among defragmentators. Create them in BlockVectors when necessary.
    8604  for(size_t allocIndex = 0; allocIndex < allocationCount; ++allocIndex)
    8605  {
    8606  VmaAllocation hAlloc = pAllocations[allocIndex];
    8607  VMA_ASSERT(hAlloc);
    8608  const uint32_t memTypeIndex = hAlloc->GetMemoryTypeIndex();
    8609  // DedicatedAlloc cannot be defragmented.
    8610  if((hAlloc->GetType() == VmaAllocation_T::ALLOCATION_TYPE_BLOCK) &&
    8611  // Only HOST_VISIBLE memory types can be defragmented.
    8612  ((m_MemProps.memoryTypes[memTypeIndex].propertyFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) != 0) &&
    8613  // Lost allocation cannot be defragmented.
    8614  (hAlloc->GetLastUseFrameIndex() != VMA_FRAME_INDEX_LOST))
    8615  {
    8616  VmaBlockVector* pAllocBlockVector = VMA_NULL;
    8617 
    8618  const VmaPool hAllocPool = hAlloc->GetPool();
    8619  // This allocation belongs to a custom pool.
    8620  if(hAllocPool != VK_NULL_HANDLE)
    8621  {
    8622  pAllocBlockVector = &hAllocPool->GetBlockVector();
    8623  }
    8624  // This allocation belongs to the general pool.
    8625  else
    8626  {
    8627  pAllocBlockVector = m_pBlockVectors[memTypeIndex];
    8628  }
    8629 
    8630  VmaDefragmentator* const pDefragmentator = pAllocBlockVector->EnsureDefragmentator(this, currentFrameIndex);
    8631 
    8632  VkBool32* const pChanged = (pAllocationsChanged != VMA_NULL) ?
    8633  &pAllocationsChanged[allocIndex] : VMA_NULL;
    8634  pDefragmentator->AddAllocation(hAlloc, pChanged);
    8635  }
    8636  }
    8637 
    8638  VkResult result = VK_SUCCESS;
    8639 
    8640  // ======== Main processing.
    8641 
    8642  VkDeviceSize maxBytesToMove = SIZE_MAX;
    8643  uint32_t maxAllocationsToMove = UINT32_MAX;
    8644  if(pDefragmentationInfo != VMA_NULL)
    8645  {
    8646  maxBytesToMove = pDefragmentationInfo->maxBytesToMove;
    8647  maxAllocationsToMove = pDefragmentationInfo->maxAllocationsToMove;
    8648  }
    8649 
    8650  // Process standard memory.
    8651  for(uint32_t memTypeIndex = 0;
    8652  (memTypeIndex < GetMemoryTypeCount()) && (result == VK_SUCCESS);
    8653  ++memTypeIndex)
    8654  {
    8655  // Only HOST_VISIBLE memory types can be defragmented.
    8656  if((m_MemProps.memoryTypes[memTypeIndex].propertyFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) != 0)
    8657  {
    8658  result = m_pBlockVectors[memTypeIndex]->Defragment(
    8659  pDefragmentationStats,
    8660  maxBytesToMove,
    8661  maxAllocationsToMove);
    8662  }
    8663  }
    8664 
    8665  // Process custom pools.
    8666  for(size_t poolIndex = 0; (poolIndex < poolCount) && (result == VK_SUCCESS); ++poolIndex)
    8667  {
    8668  result = m_Pools[poolIndex]->GetBlockVector().Defragment(
    8669  pDefragmentationStats,
    8670  maxBytesToMove,
    8671  maxAllocationsToMove);
    8672  }
    8673 
    8674  // ======== Destroy defragmentators.
    8675 
    8676  // Process custom pools.
    8677  for(size_t poolIndex = poolCount; poolIndex--; )
    8678  {
    8679  m_Pools[poolIndex]->GetBlockVector().DestroyDefragmentator();
    8680  }
    8681 
    8682  // Process standard memory.
    8683  for(uint32_t memTypeIndex = GetMemoryTypeCount(); memTypeIndex--; )
    8684  {
    8685  if((m_MemProps.memoryTypes[memTypeIndex].propertyFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) != 0)
    8686  {
    8687  m_pBlockVectors[memTypeIndex]->DestroyDefragmentator();
    8688  }
    8689  }
    8690 
    8691  return result;
    8692 }
    8693 
    8694 void VmaAllocator_T::GetAllocationInfo(VmaAllocation hAllocation, VmaAllocationInfo* pAllocationInfo)
    8695 {
    8696  if(hAllocation->CanBecomeLost())
    8697  {
    8698  /*
    8699  Warning: This is a carefully designed algorithm.
    8700  Do not modify unless you really know what you're doing :)
    8701  */
    8702  uint32_t localCurrFrameIndex = m_CurrentFrameIndex.load();
    8703  uint32_t localLastUseFrameIndex = hAllocation->GetLastUseFrameIndex();
    8704  for(;;)
    8705  {
    8706  if(localLastUseFrameIndex == VMA_FRAME_INDEX_LOST)
    8707  {
    8708  pAllocationInfo->memoryType = UINT32_MAX;
    8709  pAllocationInfo->deviceMemory = VK_NULL_HANDLE;
    8710  pAllocationInfo->offset = 0;
    8711  pAllocationInfo->size = hAllocation->GetSize();
    8712  pAllocationInfo->pMappedData = VMA_NULL;
    8713  pAllocationInfo->pUserData = hAllocation->GetUserData();
    8714  return;
    8715  }
    8716  else if(localLastUseFrameIndex == localCurrFrameIndex)
    8717  {
    8718  pAllocationInfo->memoryType = hAllocation->GetMemoryTypeIndex();
    8719  pAllocationInfo->deviceMemory = hAllocation->GetMemory();
    8720  pAllocationInfo->offset = hAllocation->GetOffset();
    8721  pAllocationInfo->size = hAllocation->GetSize();
    8722  pAllocationInfo->pMappedData = VMA_NULL;
    8723  pAllocationInfo->pUserData = hAllocation->GetUserData();
    8724  return;
    8725  }
    8726  else // Last use time earlier than current time.
    8727  {
    8728  if(hAllocation->CompareExchangeLastUseFrameIndex(localLastUseFrameIndex, localCurrFrameIndex))
    8729  {
    8730  localLastUseFrameIndex = localCurrFrameIndex;
    8731  }
    8732  }
    8733  }
    8734  }
    8735  else
    8736  {
    8737 #if VMA_STATS_STRING_ENABLED
    8738  uint32_t localCurrFrameIndex = m_CurrentFrameIndex.load();
    8739  uint32_t localLastUseFrameIndex = hAllocation->GetLastUseFrameIndex();
    8740  for(;;)
    8741  {
    8742  VMA_ASSERT(localLastUseFrameIndex != VMA_FRAME_INDEX_LOST);
    8743  if(localLastUseFrameIndex == localCurrFrameIndex)
    8744  {
    8745  break;
    8746  }
    8747  else // Last use time earlier than current time.
    8748  {
    8749  if(hAllocation->CompareExchangeLastUseFrameIndex(localLastUseFrameIndex, localCurrFrameIndex))
    8750  {
    8751  localLastUseFrameIndex = localCurrFrameIndex;
    8752  }
    8753  }
    8754  }
    8755 #endif
    8756 
    8757  pAllocationInfo->memoryType = hAllocation->GetMemoryTypeIndex();
    8758  pAllocationInfo->deviceMemory = hAllocation->GetMemory();
    8759  pAllocationInfo->offset = hAllocation->GetOffset();
    8760  pAllocationInfo->size = hAllocation->GetSize();
    8761  pAllocationInfo->pMappedData = hAllocation->GetMappedData();
    8762  pAllocationInfo->pUserData = hAllocation->GetUserData();
    8763  }
    8764 }
    8765 
    8766 bool VmaAllocator_T::TouchAllocation(VmaAllocation hAllocation)
    8767 {
    8768  // This is a stripped-down version of VmaAllocator_T::GetAllocationInfo.
    8769  if(hAllocation->CanBecomeLost())
    8770  {
    8771  uint32_t localCurrFrameIndex = m_CurrentFrameIndex.load();
    8772  uint32_t localLastUseFrameIndex = hAllocation->GetLastUseFrameIndex();
    8773  for(;;)
    8774  {
    8775  if(localLastUseFrameIndex == VMA_FRAME_INDEX_LOST)
    8776  {
    8777  return false;
    8778  }
    8779  else if(localLastUseFrameIndex == localCurrFrameIndex)
    8780  {
    8781  return true;
    8782  }
    8783  else // Last use time earlier than current time.
    8784  {
    8785  if(hAllocation->CompareExchangeLastUseFrameIndex(localLastUseFrameIndex, localCurrFrameIndex))
    8786  {
    8787  localLastUseFrameIndex = localCurrFrameIndex;
    8788  }
    8789  }
    8790  }
    8791  }
    8792  else
    8793  {
    8794 #if VMA_STATS_STRING_ENABLED
    8795  uint32_t localCurrFrameIndex = m_CurrentFrameIndex.load();
    8796  uint32_t localLastUseFrameIndex = hAllocation->GetLastUseFrameIndex();
    8797  for(;;)
    8798  {
    8799  VMA_ASSERT(localLastUseFrameIndex != VMA_FRAME_INDEX_LOST);
    8800  if(localLastUseFrameIndex == localCurrFrameIndex)
    8801  {
    8802  break;
    8803  }
    8804  else // Last use time earlier than current time.
    8805  {
    8806  if(hAllocation->CompareExchangeLastUseFrameIndex(localLastUseFrameIndex, localCurrFrameIndex))
    8807  {
    8808  localLastUseFrameIndex = localCurrFrameIndex;
    8809  }
    8810  }
    8811  }
    8812 #endif
    8813 
    8814  return true;
    8815  }
    8816 }
    8817 
    8818 VkResult VmaAllocator_T::CreatePool(const VmaPoolCreateInfo* pCreateInfo, VmaPool* pPool)
    8819 {
    8820  VMA_DEBUG_LOG(" CreatePool: MemoryTypeIndex=%u", pCreateInfo->memoryTypeIndex);
    8821 
    8822  VmaPoolCreateInfo newCreateInfo = *pCreateInfo;
    8823 
    8824  if(newCreateInfo.maxBlockCount == 0)
    8825  {
    8826  newCreateInfo.maxBlockCount = SIZE_MAX;
    8827  }
    8828  if(newCreateInfo.blockSize == 0)
    8829  {
    8830  newCreateInfo.blockSize = CalcPreferredBlockSize(newCreateInfo.memoryTypeIndex);
    8831  }
    8832 
    8833  *pPool = vma_new(this, VmaPool_T)(this, newCreateInfo);
    8834 
    8835  VkResult res = (*pPool)->m_BlockVector.CreateMinBlocks();
    8836  if(res != VK_SUCCESS)
    8837  {
    8838  vma_delete(this, *pPool);
    8839  *pPool = VMA_NULL;
    8840  return res;
    8841  }
    8842 
    8843  // Add to m_Pools.
    8844  {
    8845  VmaMutexLock lock(m_PoolsMutex, m_UseMutex);
    8846  (*pPool)->SetId(m_NextPoolId++);
    8847  VmaVectorInsertSorted<VmaPointerLess>(m_Pools, *pPool);
    8848  }
    8849 
    8850  return VK_SUCCESS;
    8851 }
    8852 
    8853 void VmaAllocator_T::DestroyPool(VmaPool pool)
    8854 {
    8855  // Remove from m_Pools.
    8856  {
    8857  VmaMutexLock lock(m_PoolsMutex, m_UseMutex);
    8858  bool success = VmaVectorRemoveSorted<VmaPointerLess>(m_Pools, pool);
    8859  VMA_ASSERT(success && "Pool not found in Allocator.");
    8860  }
    8861 
    8862  vma_delete(this, pool);
    8863 }
    8864 
    8865 void VmaAllocator_T::GetPoolStats(VmaPool pool, VmaPoolStats* pPoolStats)
    8866 {
    8867  pool->m_BlockVector.GetPoolStats(pPoolStats);
    8868 }
    8869 
    8870 void VmaAllocator_T::SetCurrentFrameIndex(uint32_t frameIndex)
    8871 {
    8872  m_CurrentFrameIndex.store(frameIndex);
    8873 }
    8874 
    8875 void VmaAllocator_T::MakePoolAllocationsLost(
    8876  VmaPool hPool,
    8877  size_t* pLostAllocationCount)
    8878 {
    8879  hPool->m_BlockVector.MakePoolAllocationsLost(
    8880  m_CurrentFrameIndex.load(),
    8881  pLostAllocationCount);
    8882 }
    8883 
    8884 VkResult VmaAllocator_T::CheckPoolCorruption(VmaPool hPool)
    8885 {
    8886  return hPool->m_BlockVector.CheckCorruption();
    8887 }
    8888 
    8889 VkResult VmaAllocator_T::CheckCorruption(uint32_t memoryTypeBits)
    8890 {
    8891  VkResult finalRes = VK_ERROR_FEATURE_NOT_PRESENT;
    8892 
    8893  // Process default pools.
    8894  for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
    8895  {
    8896  if(((1u << memTypeIndex) & memoryTypeBits) != 0)
    8897  {
    8898  VmaBlockVector* const pBlockVector = m_pBlockVectors[memTypeIndex];
    8899  VMA_ASSERT(pBlockVector);
    8900  VkResult localRes = pBlockVector->CheckCorruption();
    8901  switch(localRes)
    8902  {
    8903  case VK_ERROR_FEATURE_NOT_PRESENT:
    8904  break;
    8905  case VK_SUCCESS:
    8906  finalRes = VK_SUCCESS;
    8907  break;
    8908  default:
    8909  return localRes;
    8910  }
    8911  }
    8912  }
    8913 
    8914  // Process custom pools.
    8915  {
    8916  VmaMutexLock lock(m_PoolsMutex, m_UseMutex);
    8917  for(size_t poolIndex = 0, poolCount = m_Pools.size(); poolIndex < poolCount; ++poolIndex)
    8918  {
    8919  if(((1u << m_Pools[poolIndex]->GetBlockVector().GetMemoryTypeIndex()) & memoryTypeBits) != 0)
    8920  {
    8921  VkResult localRes = m_Pools[poolIndex]->GetBlockVector().CheckCorruption();
    8922  switch(localRes)
    8923  {
    8924  case VK_ERROR_FEATURE_NOT_PRESENT:
    8925  break;
    8926  case VK_SUCCESS:
    8927  finalRes = VK_SUCCESS;
    8928  break;
    8929  default:
    8930  return localRes;
    8931  }
    8932  }
    8933  }
    8934  }
    8935 
    8936  return finalRes;
    8937 }
    8938 
    8939 void VmaAllocator_T::CreateLostAllocation(VmaAllocation* pAllocation)
    8940 {
    8941  *pAllocation = vma_new(this, VmaAllocation_T)(VMA_FRAME_INDEX_LOST, false);
    8942  (*pAllocation)->InitLost();
    8943 }
    8944 
    8945 VkResult VmaAllocator_T::AllocateVulkanMemory(const VkMemoryAllocateInfo* pAllocateInfo, VkDeviceMemory* pMemory)
    8946 {
    8947  const uint32_t heapIndex = MemoryTypeIndexToHeapIndex(pAllocateInfo->memoryTypeIndex);
    8948 
    8949  VkResult res;
    8950  if(m_HeapSizeLimit[heapIndex] != VK_WHOLE_SIZE)
    8951  {
    8952  VmaMutexLock lock(m_HeapSizeLimitMutex, m_UseMutex);
    8953  if(m_HeapSizeLimit[heapIndex] >= pAllocateInfo->allocationSize)
    8954  {
    8955  res = (*m_VulkanFunctions.vkAllocateMemory)(m_hDevice, pAllocateInfo, GetAllocationCallbacks(), pMemory);
    8956  if(res == VK_SUCCESS)
    8957  {
    8958  m_HeapSizeLimit[heapIndex] -= pAllocateInfo->allocationSize;
    8959  }
    8960  }
    8961  else
    8962  {
    8963  res = VK_ERROR_OUT_OF_DEVICE_MEMORY;
    8964  }
    8965  }
    8966  else
    8967  {
    8968  res = (*m_VulkanFunctions.vkAllocateMemory)(m_hDevice, pAllocateInfo, GetAllocationCallbacks(), pMemory);
    8969  }
    8970 
    8971  if(res == VK_SUCCESS && m_DeviceMemoryCallbacks.pfnAllocate != VMA_NULL)
    8972  {
    8973  (*m_DeviceMemoryCallbacks.pfnAllocate)(this, pAllocateInfo->memoryTypeIndex, *pMemory, pAllocateInfo->allocationSize);
    8974  }
    8975 
    8976  return res;
    8977 }
    8978 
    8979 void VmaAllocator_T::FreeVulkanMemory(uint32_t memoryType, VkDeviceSize size, VkDeviceMemory hMemory)
    8980 {
    8981  if(m_DeviceMemoryCallbacks.pfnFree != VMA_NULL)
    8982  {
    8983  (*m_DeviceMemoryCallbacks.pfnFree)(this, memoryType, hMemory, size);
    8984  }
    8985 
    8986  (*m_VulkanFunctions.vkFreeMemory)(m_hDevice, hMemory, GetAllocationCallbacks());
    8987 
    8988  const uint32_t heapIndex = MemoryTypeIndexToHeapIndex(memoryType);
    8989  if(m_HeapSizeLimit[heapIndex] != VK_WHOLE_SIZE)
    8990  {
    8991  VmaMutexLock lock(m_HeapSizeLimitMutex, m_UseMutex);
    8992  m_HeapSizeLimit[heapIndex] += size;
    8993  }
    8994 }
    8995 
    8996 VkResult VmaAllocator_T::Map(VmaAllocation hAllocation, void** ppData)
    8997 {
    8998  if(hAllocation->CanBecomeLost())
    8999  {
    9000  return VK_ERROR_MEMORY_MAP_FAILED;
    9001  }
    9002 
    9003  switch(hAllocation->GetType())
    9004  {
    9005  case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
    9006  {
    9007  VmaDeviceMemoryBlock* const pBlock = hAllocation->GetBlock();
    9008  char *pBytes = VMA_NULL;
    9009  VkResult res = pBlock->Map(this, 1, (void**)&pBytes);
    9010  if(res == VK_SUCCESS)
    9011  {
    9012  *ppData = pBytes + (ptrdiff_t)hAllocation->GetOffset();
    9013  hAllocation->BlockAllocMap();
    9014  }
    9015  return res;
    9016  }
    9017  case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
    9018  return hAllocation->DedicatedAllocMap(this, ppData);
    9019  default:
    9020  VMA_ASSERT(0);
    9021  return VK_ERROR_MEMORY_MAP_FAILED;
    9022  }
    9023 }
    9024 
    9025 void VmaAllocator_T::Unmap(VmaAllocation hAllocation)
    9026 {
    9027  switch(hAllocation->GetType())
    9028  {
    9029  case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
    9030  {
    9031  VmaDeviceMemoryBlock* const pBlock = hAllocation->GetBlock();
    9032  hAllocation->BlockAllocUnmap();
    9033  pBlock->Unmap(this, 1);
    9034  }
    9035  break;
    9036  case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
    9037  hAllocation->DedicatedAllocUnmap(this);
    9038  break;
    9039  default:
    9040  VMA_ASSERT(0);
    9041  }
    9042 }
    9043 
    9044 VkResult VmaAllocator_T::BindBufferMemory(VmaAllocation hAllocation, VkBuffer hBuffer)
    9045 {
    9046  VkResult res = VK_SUCCESS;
    9047  switch(hAllocation->GetType())
    9048  {
    9049  case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
    9050  res = GetVulkanFunctions().vkBindBufferMemory(
    9051  m_hDevice,
    9052  hBuffer,
    9053  hAllocation->GetMemory(),
    9054  0); //memoryOffset
    9055  break;
    9056  case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
    9057  {
    9058  VmaDeviceMemoryBlock* pBlock = hAllocation->GetBlock();
    9059  VMA_ASSERT(pBlock && "Binding buffer to allocation that doesn't belong to any block. Is the allocation lost?");
    9060  res = pBlock->BindBufferMemory(this, hAllocation, hBuffer);
    9061  break;
    9062  }
    9063  default:
    9064  VMA_ASSERT(0);
    9065  }
    9066  return res;
    9067 }
    9068 
    9069 VkResult VmaAllocator_T::BindImageMemory(VmaAllocation hAllocation, VkImage hImage)
    9070 {
    9071  VkResult res = VK_SUCCESS;
    9072  switch(hAllocation->GetType())
    9073  {
    9074  case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
    9075  res = GetVulkanFunctions().vkBindImageMemory(
    9076  m_hDevice,
    9077  hImage,
    9078  hAllocation->GetMemory(),
    9079  0); //memoryOffset
    9080  break;
    9081  case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
    9082  {
    9083  VmaDeviceMemoryBlock* pBlock = hAllocation->GetBlock();
    9084  VMA_ASSERT(pBlock && "Binding image to allocation that doesn't belong to any block. Is the allocation lost?");
    9085  res = pBlock->BindImageMemory(this, hAllocation, hImage);
    9086  break;
    9087  }
    9088  default:
    9089  VMA_ASSERT(0);
    9090  }
    9091  return res;
    9092 }
    9093 
    9094 void VmaAllocator_T::FlushOrInvalidateAllocation(
    9095  VmaAllocation hAllocation,
    9096  VkDeviceSize offset, VkDeviceSize size,
    9097  VMA_CACHE_OPERATION op)
    9098 {
    9099  const uint32_t memTypeIndex = hAllocation->GetMemoryTypeIndex();
    9100  if(size > 0 && IsMemoryTypeNonCoherent(memTypeIndex))
    9101  {
    9102  const VkDeviceSize allocationSize = hAllocation->GetSize();
    9103  VMA_ASSERT(offset <= allocationSize);
    9104 
    9105  const VkDeviceSize nonCoherentAtomSize = m_PhysicalDeviceProperties.limits.nonCoherentAtomSize;
    9106 
    9107  VkMappedMemoryRange memRange = { VK_STRUCTURE_TYPE_MAPPED_MEMORY_RANGE };
    9108  memRange.memory = hAllocation->GetMemory();
    9109 
    9110  switch(hAllocation->GetType())
    9111  {
    9112  case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
    9113  memRange.offset = VmaAlignDown(offset, nonCoherentAtomSize);
    9114  if(size == VK_WHOLE_SIZE)
    9115  {
    9116  memRange.size = allocationSize - memRange.offset;
    9117  }
    9118  else
    9119  {
    9120  VMA_ASSERT(offset + size <= allocationSize);
    9121  memRange.size = VMA_MIN(
    9122  VmaAlignUp(size + (offset - memRange.offset), nonCoherentAtomSize),
    9123  allocationSize - memRange.offset);
    9124  }
    9125  break;
    9126 
    9127  case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
    9128  {
    9129  // 1. Still within this allocation.
    9130  memRange.offset = VmaAlignDown(offset, nonCoherentAtomSize);
    9131  if(size == VK_WHOLE_SIZE)
    9132  {
    9133  size = allocationSize - offset;
    9134  }
    9135  else
    9136  {
    9137  VMA_ASSERT(offset + size <= allocationSize);
    9138  }
    9139  memRange.size = VmaAlignUp(size + (offset - memRange.offset), nonCoherentAtomSize);
    9140 
    9141  // 2. Adjust to whole block.
    9142  const VkDeviceSize allocationOffset = hAllocation->GetOffset();
    9143  VMA_ASSERT(allocationOffset % nonCoherentAtomSize == 0);
    9144  const VkDeviceSize blockSize = hAllocation->GetBlock()->m_Metadata.GetSize();
    9145  memRange.offset += allocationOffset;
    9146  memRange.size = VMA_MIN(memRange.size, blockSize - memRange.offset);
    9147 
    9148  break;
    9149  }
    9150 
    9151  default:
    9152  VMA_ASSERT(0);
    9153  }
    9154 
    9155  switch(op)
    9156  {
    9157  case VMA_CACHE_FLUSH:
    9158  (*GetVulkanFunctions().vkFlushMappedMemoryRanges)(m_hDevice, 1, &memRange);
    9159  break;
    9160  case VMA_CACHE_INVALIDATE:
    9161  (*GetVulkanFunctions().vkInvalidateMappedMemoryRanges)(m_hDevice, 1, &memRange);
    9162  break;
    9163  default:
    9164  VMA_ASSERT(0);
    9165  }
    9166  }
    9167  // else: Just ignore this call.
    9168 }
    9169 
    9170 void VmaAllocator_T::FreeDedicatedMemory(VmaAllocation allocation)
    9171 {
    9172  VMA_ASSERT(allocation && allocation->GetType() == VmaAllocation_T::ALLOCATION_TYPE_DEDICATED);
    9173 
    9174  const uint32_t memTypeIndex = allocation->GetMemoryTypeIndex();
    9175  {
    9176  VmaMutexLock lock(m_DedicatedAllocationsMutex[memTypeIndex], m_UseMutex);
    9177  AllocationVectorType* const pDedicatedAllocations = m_pDedicatedAllocations[memTypeIndex];
    9178  VMA_ASSERT(pDedicatedAllocations);
    9179  bool success = VmaVectorRemoveSorted<VmaPointerLess>(*pDedicatedAllocations, allocation);
    9180  VMA_ASSERT(success);
    9181  }
    9182 
    9183  VkDeviceMemory hMemory = allocation->GetMemory();
    9184 
    9185  if(allocation->GetMappedData() != VMA_NULL)
    9186  {
    9187  (*m_VulkanFunctions.vkUnmapMemory)(m_hDevice, hMemory);
    9188  }
    9189 
    9190  FreeVulkanMemory(memTypeIndex, allocation->GetSize(), hMemory);
    9191 
    9192  VMA_DEBUG_LOG(" Freed DedicatedMemory MemoryTypeIndex=%u", memTypeIndex);
    9193 }
    9194 
    9195 void VmaAllocator_T::FillAllocation(const VmaAllocation hAllocation, uint8_t pattern)
    9196 {
    9197  if(VMA_DEBUG_INITIALIZE_ALLOCATIONS &&
    9198  !hAllocation->CanBecomeLost() &&
    9199  (m_MemProps.memoryTypes[hAllocation->GetMemoryTypeIndex()].propertyFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) != 0)
    9200  {
    9201  void* pData = VMA_NULL;
    9202  VkResult res = Map(hAllocation, &pData);
    9203  if(res == VK_SUCCESS)
    9204  {
    9205  memset(pData, (int)pattern, (size_t)hAllocation->GetSize());
    9206  FlushOrInvalidateAllocation(hAllocation, 0, VK_WHOLE_SIZE, VMA_CACHE_FLUSH);
    9207  Unmap(hAllocation);
    9208  }
    9209  else
    9210  {
    9211  VMA_ASSERT(0 && "VMA_DEBUG_INITIALIZE_ALLOCATIONS is enabled, but couldn't map memory to fill allocation.");
    9212  }
    9213  }
    9214 }
    9215 
    9216 #if VMA_STATS_STRING_ENABLED
    9217 
    9218 void VmaAllocator_T::PrintDetailedMap(VmaJsonWriter& json)
    9219 {
    9220  bool dedicatedAllocationsStarted = false;
    9221  for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
    9222  {
    9223  VmaMutexLock dedicatedAllocationsLock(m_DedicatedAllocationsMutex[memTypeIndex], m_UseMutex);
    9224  AllocationVectorType* const pDedicatedAllocVector = m_pDedicatedAllocations[memTypeIndex];
    9225  VMA_ASSERT(pDedicatedAllocVector);
    9226  if(pDedicatedAllocVector->empty() == false)
    9227  {
    9228  if(dedicatedAllocationsStarted == false)
    9229  {
    9230  dedicatedAllocationsStarted = true;
    9231  json.WriteString("DedicatedAllocations");
    9232  json.BeginObject();
    9233  }
    9234 
    9235  json.BeginString("Type ");
    9236  json.ContinueString(memTypeIndex);
    9237  json.EndString();
    9238 
    9239  json.BeginArray();
    9240 
    9241  for(size_t i = 0; i < pDedicatedAllocVector->size(); ++i)
    9242  {
    9243  json.BeginObject(true);
    9244  const VmaAllocation hAlloc = (*pDedicatedAllocVector)[i];
    9245  hAlloc->PrintParameters(json);
    9246  json.EndObject();
    9247  }
    9248 
    9249  json.EndArray();
    9250  }
    9251  }
    9252  if(dedicatedAllocationsStarted)
    9253  {
    9254  json.EndObject();
    9255  }
    9256 
    9257  {
    9258  bool allocationsStarted = false;
    9259  for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
    9260  {
    9261  if(m_pBlockVectors[memTypeIndex]->IsEmpty() == false)
    9262  {
    9263  if(allocationsStarted == false)
    9264  {
    9265  allocationsStarted = true;
    9266  json.WriteString("DefaultPools");
    9267  json.BeginObject();
    9268  }
    9269 
    9270  json.BeginString("Type ");
    9271  json.ContinueString(memTypeIndex);
    9272  json.EndString();
    9273 
    9274  m_pBlockVectors[memTypeIndex]->PrintDetailedMap(json);
    9275  }
    9276  }
    9277  if(allocationsStarted)
    9278  {
    9279  json.EndObject();
    9280  }
    9281  }
    9282 
    9283  {
    9284  VmaMutexLock lock(m_PoolsMutex, m_UseMutex);
    9285  const size_t poolCount = m_Pools.size();
    9286  if(poolCount > 0)
    9287  {
    9288  json.WriteString("Pools");
    9289  json.BeginObject();
    9290  for(size_t poolIndex = 0; poolIndex < poolCount; ++poolIndex)
    9291  {
    9292  json.BeginString();
    9293  json.ContinueString(m_Pools[poolIndex]->GetId());
    9294  json.EndString();
    9295 
    9296  m_Pools[poolIndex]->m_BlockVector.PrintDetailedMap(json);
    9297  }
    9298  json.EndObject();
    9299  }
    9300  }
    9301 }
    9302 
    9303 #endif // #if VMA_STATS_STRING_ENABLED
    9304 
    9305 static VkResult AllocateMemoryForImage(
    9306  VmaAllocator allocator,
    9307  VkImage image,
    9308  const VmaAllocationCreateInfo* pAllocationCreateInfo,
    9309  VmaSuballocationType suballocType,
    9310  VmaAllocation* pAllocation)
    9311 {
    9312  VMA_ASSERT(allocator && (image != VK_NULL_HANDLE) && pAllocationCreateInfo && pAllocation);
    9313 
    9314  VkMemoryRequirements vkMemReq = {};
    9315  bool requiresDedicatedAllocation = false;
    9316  bool prefersDedicatedAllocation = false;
    9317  allocator->GetImageMemoryRequirements(image, vkMemReq,
    9318  requiresDedicatedAllocation, prefersDedicatedAllocation);
    9319 
    9320  return allocator->AllocateMemory(
    9321  vkMemReq,
    9322  requiresDedicatedAllocation,
    9323  prefersDedicatedAllocation,
    9324  VK_NULL_HANDLE, // dedicatedBuffer
    9325  image, // dedicatedImage
    9326  *pAllocationCreateInfo,
    9327  suballocType,
    9328  pAllocation);
    9329 }
    9330 
    9331 ////////////////////////////////////////////////////////////////////////////////
    9332 // Public interface
    9333 
    9334 VkResult vmaCreateAllocator(
    9335  const VmaAllocatorCreateInfo* pCreateInfo,
    9336  VmaAllocator* pAllocator)
    9337 {
    9338  VMA_ASSERT(pCreateInfo && pAllocator);
    9339  VMA_DEBUG_LOG("vmaCreateAllocator");
    9340  *pAllocator = vma_new(pCreateInfo->pAllocationCallbacks, VmaAllocator_T)(pCreateInfo);
    9341  return VK_SUCCESS;
    9342 }
    9343 
    9344 void vmaDestroyAllocator(
    9345  VmaAllocator allocator)
    9346 {
    9347  if(allocator != VK_NULL_HANDLE)
    9348  {
    9349  VMA_DEBUG_LOG("vmaDestroyAllocator");
    9350  VkAllocationCallbacks allocationCallbacks = allocator->m_AllocationCallbacks;
    9351  vma_delete(&allocationCallbacks, allocator);
    9352  }
    9353 }
    9354 
    9355 void vmaGetPhysicalDeviceProperties(
    9356  VmaAllocator allocator,
    9357  const VkPhysicalDeviceProperties **ppPhysicalDeviceProperties)
    9358 {
    9359  VMA_ASSERT(allocator && ppPhysicalDeviceProperties);
    9360  *ppPhysicalDeviceProperties = &allocator->m_PhysicalDeviceProperties;
    9361 }
    9362 
    9363 void vmaGetMemoryProperties(
    9364  VmaAllocator allocator,
    9365  const VkPhysicalDeviceMemoryProperties** ppPhysicalDeviceMemoryProperties)
    9366 {
    9367  VMA_ASSERT(allocator && ppPhysicalDeviceMemoryProperties);
    9368  *ppPhysicalDeviceMemoryProperties = &allocator->m_MemProps;
    9369 }
    9370 
    9371 void vmaGetMemoryTypeProperties(
    9372  VmaAllocator allocator,
    9373  uint32_t memoryTypeIndex,
    9374  VkMemoryPropertyFlags* pFlags)
    9375 {
    9376  VMA_ASSERT(allocator && pFlags);
    9377  VMA_ASSERT(memoryTypeIndex < allocator->GetMemoryTypeCount());
    9378  *pFlags = allocator->m_MemProps.memoryTypes[memoryTypeIndex].propertyFlags;
    9379 }
    9380 
    9381 void vmaSetCurrentFrameIndex(
    9382  VmaAllocator allocator,
    9383  uint32_t frameIndex)
    9384 {
    9385  VMA_ASSERT(allocator);
    9386  VMA_ASSERT(frameIndex != VMA_FRAME_INDEX_LOST);
    9387 
    9388  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9389 
    9390  allocator->SetCurrentFrameIndex(frameIndex);
    9391 }
    9392 
    9393 void vmaCalculateStats(
    9394  VmaAllocator allocator,
    9395  VmaStats* pStats)
    9396 {
    9397  VMA_ASSERT(allocator && pStats);
    9398  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9399  allocator->CalculateStats(pStats);
    9400 }
    9401 
    9402 #if VMA_STATS_STRING_ENABLED
    9403 
    9404 void vmaBuildStatsString(
    9405  VmaAllocator allocator,
    9406  char** ppStatsString,
    9407  VkBool32 detailedMap)
    9408 {
    9409  VMA_ASSERT(allocator && ppStatsString);
    9410  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9411 
    9412  VmaStringBuilder sb(allocator);
    9413  {
    9414  VmaJsonWriter json(allocator->GetAllocationCallbacks(), sb);
    9415  json.BeginObject();
    9416 
    9417  VmaStats stats;
    9418  allocator->CalculateStats(&stats);
    9419 
    9420  json.WriteString("Total");
    9421  VmaPrintStatInfo(json, stats.total);
    9422 
    9423  for(uint32_t heapIndex = 0; heapIndex < allocator->GetMemoryHeapCount(); ++heapIndex)
    9424  {
    9425  json.BeginString("Heap ");
    9426  json.ContinueString(heapIndex);
    9427  json.EndString();
    9428  json.BeginObject();
    9429 
    9430  json.WriteString("Size");
    9431  json.WriteNumber(allocator->m_MemProps.memoryHeaps[heapIndex].size);
    9432 
    9433  json.WriteString("Flags");
    9434  json.BeginArray(true);
    9435  if((allocator->m_MemProps.memoryHeaps[heapIndex].flags & VK_MEMORY_HEAP_DEVICE_LOCAL_BIT) != 0)
    9436  {
    9437  json.WriteString("DEVICE_LOCAL");
    9438  }
    9439  json.EndArray();
    9440 
    9441  if(stats.memoryHeap[heapIndex].blockCount > 0)
    9442  {
    9443  json.WriteString("Stats");
    9444  VmaPrintStatInfo(json, stats.memoryHeap[heapIndex]);
    9445  }
    9446 
    9447  for(uint32_t typeIndex = 0; typeIndex < allocator->GetMemoryTypeCount(); ++typeIndex)
    9448  {
    9449  if(allocator->MemoryTypeIndexToHeapIndex(typeIndex) == heapIndex)
    9450  {
    9451  json.BeginString("Type ");
    9452  json.ContinueString(typeIndex);
    9453  json.EndString();
    9454 
    9455  json.BeginObject();
    9456 
    9457  json.WriteString("Flags");
    9458  json.BeginArray(true);
    9459  VkMemoryPropertyFlags flags = allocator->m_MemProps.memoryTypes[typeIndex].propertyFlags;
    9460  if((flags & VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT) != 0)
    9461  {
    9462  json.WriteString("DEVICE_LOCAL");
    9463  }
    9464  if((flags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) != 0)
    9465  {
    9466  json.WriteString("HOST_VISIBLE");
    9467  }
    9468  if((flags & VK_MEMORY_PROPERTY_HOST_COHERENT_BIT) != 0)
    9469  {
    9470  json.WriteString("HOST_COHERENT");
    9471  }
    9472  if((flags & VK_MEMORY_PROPERTY_HOST_CACHED_BIT) != 0)
    9473  {
    9474  json.WriteString("HOST_CACHED");
    9475  }
    9476  if((flags & VK_MEMORY_PROPERTY_LAZILY_ALLOCATED_BIT) != 0)
    9477  {
    9478  json.WriteString("LAZILY_ALLOCATED");
    9479  }
    9480  json.EndArray();
    9481 
    9482  if(stats.memoryType[typeIndex].blockCount > 0)
    9483  {
    9484  json.WriteString("Stats");
    9485  VmaPrintStatInfo(json, stats.memoryType[typeIndex]);
    9486  }
    9487 
    9488  json.EndObject();
    9489  }
    9490  }
    9491 
    9492  json.EndObject();
    9493  }
    9494  if(detailedMap == VK_TRUE)
    9495  {
    9496  allocator->PrintDetailedMap(json);
    9497  }
    9498 
    9499  json.EndObject();
    9500  }
    9501 
    9502  const size_t len = sb.GetLength();
    9503  char* const pChars = vma_new_array(allocator, char, len + 1);
    9504  if(len > 0)
    9505  {
    9506  memcpy(pChars, sb.GetData(), len);
    9507  }
    9508  pChars[len] = '\0';
    9509  *ppStatsString = pChars;
    9510 }
    9511 
    9512 void vmaFreeStatsString(
    9513  VmaAllocator allocator,
    9514  char* pStatsString)
    9515 {
    9516  if(pStatsString != VMA_NULL)
    9517  {
    9518  VMA_ASSERT(allocator);
    9519  size_t len = strlen(pStatsString);
    9520  vma_delete_array(allocator, pStatsString, len + 1);
    9521  }
    9522 }
    9523 
    9524 #endif // #if VMA_STATS_STRING_ENABLED
    9525 
    9526 /*
    9527 This function is not protected by any mutex because it just reads immutable data.
    9528 */
    9529 VkResult vmaFindMemoryTypeIndex(
    9530  VmaAllocator allocator,
    9531  uint32_t memoryTypeBits,
    9532  const VmaAllocationCreateInfo* pAllocationCreateInfo,
    9533  uint32_t* pMemoryTypeIndex)
    9534 {
    9535  VMA_ASSERT(allocator != VK_NULL_HANDLE);
    9536  VMA_ASSERT(pAllocationCreateInfo != VMA_NULL);
    9537  VMA_ASSERT(pMemoryTypeIndex != VMA_NULL);
    9538 
    9539  if(pAllocationCreateInfo->memoryTypeBits != 0)
    9540  {
    9541  memoryTypeBits &= pAllocationCreateInfo->memoryTypeBits;
    9542  }
    9543 
    9544  uint32_t requiredFlags = pAllocationCreateInfo->requiredFlags;
    9545  uint32_t preferredFlags = pAllocationCreateInfo->preferredFlags;
    9546 
    9547  const bool mapped = (pAllocationCreateInfo->flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0;
    9548  if(mapped)
    9549  {
    9550  preferredFlags |= VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
    9551  }
    9552 
    9553  // Convert usage to requiredFlags and preferredFlags.
    9554  switch(pAllocationCreateInfo->usage)
    9555  {
    9556  case VMA_MEMORY_USAGE_UNKNOWN:
    9557  break;
    9558  case VMA_MEMORY_USAGE_GPU_ONLY:
    9559  if(!allocator->IsIntegratedGpu() || (preferredFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) == 0)
    9560  {
    9561  preferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
    9562  }
    9563  break;
    9564  case VMA_MEMORY_USAGE_CPU_ONLY:
    9565  requiredFlags |= VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT | VK_MEMORY_PROPERTY_HOST_COHERENT_BIT;
    9566  break;
    9567  case VMA_MEMORY_USAGE_CPU_TO_GPU:
    9568  requiredFlags |= VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
    9569  if(!allocator->IsIntegratedGpu() || (preferredFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) == 0)
    9570  {
    9571  preferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
    9572  }
    9573  break;
    9574  case VMA_MEMORY_USAGE_GPU_TO_CPU:
    9575  requiredFlags |= VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
    9576  preferredFlags |= VK_MEMORY_PROPERTY_HOST_COHERENT_BIT | VK_MEMORY_PROPERTY_HOST_CACHED_BIT;
    9577  break;
    9578  default:
    9579  break;
    9580  }
    9581 
    9582  *pMemoryTypeIndex = UINT32_MAX;
    9583  uint32_t minCost = UINT32_MAX;
    9584  for(uint32_t memTypeIndex = 0, memTypeBit = 1;
    9585  memTypeIndex < allocator->GetMemoryTypeCount();
    9586  ++memTypeIndex, memTypeBit <<= 1)
    9587  {
    9588  // This memory type is acceptable according to memoryTypeBits bitmask.
    9589  if((memTypeBit & memoryTypeBits) != 0)
    9590  {
    9591  const VkMemoryPropertyFlags currFlags =
    9592  allocator->m_MemProps.memoryTypes[memTypeIndex].propertyFlags;
    9593  // This memory type contains requiredFlags.
    9594  if((requiredFlags & ~currFlags) == 0)
    9595  {
    9596  // Calculate cost as number of bits from preferredFlags not present in this memory type.
    9597  uint32_t currCost = VmaCountBitsSet(preferredFlags & ~currFlags);
    9598  // Remember memory type with lowest cost.
    9599  if(currCost < minCost)
    9600  {
    9601  *pMemoryTypeIndex = memTypeIndex;
    9602  if(currCost == 0)
    9603  {
    9604  return VK_SUCCESS;
    9605  }
    9606  minCost = currCost;
    9607  }
    9608  }
    9609  }
    9610  }
    9611  return (*pMemoryTypeIndex != UINT32_MAX) ? VK_SUCCESS : VK_ERROR_FEATURE_NOT_PRESENT;
    9612 }
    9613 
    9614 VkResult vmaFindMemoryTypeIndexForBufferInfo(
    9615  VmaAllocator allocator,
    9616  const VkBufferCreateInfo* pBufferCreateInfo,
    9617  const VmaAllocationCreateInfo* pAllocationCreateInfo,
    9618  uint32_t* pMemoryTypeIndex)
    9619 {
    9620  VMA_ASSERT(allocator != VK_NULL_HANDLE);
    9621  VMA_ASSERT(pBufferCreateInfo != VMA_NULL);
    9622  VMA_ASSERT(pAllocationCreateInfo != VMA_NULL);
    9623  VMA_ASSERT(pMemoryTypeIndex != VMA_NULL);
    9624 
    9625  const VkDevice hDev = allocator->m_hDevice;
    9626  VkBuffer hBuffer = VK_NULL_HANDLE;
    9627  VkResult res = allocator->GetVulkanFunctions().vkCreateBuffer(
    9628  hDev, pBufferCreateInfo, allocator->GetAllocationCallbacks(), &hBuffer);
    9629  if(res == VK_SUCCESS)
    9630  {
    9631  VkMemoryRequirements memReq = {};
    9632  allocator->GetVulkanFunctions().vkGetBufferMemoryRequirements(
    9633  hDev, hBuffer, &memReq);
    9634 
    9635  res = vmaFindMemoryTypeIndex(
    9636  allocator,
    9637  memReq.memoryTypeBits,
    9638  pAllocationCreateInfo,
    9639  pMemoryTypeIndex);
    9640 
    9641  allocator->GetVulkanFunctions().vkDestroyBuffer(
    9642  hDev, hBuffer, allocator->GetAllocationCallbacks());
    9643  }
    9644  return res;
    9645 }
    9646 
    9647 VkResult vmaFindMemoryTypeIndexForImageInfo(
    9648  VmaAllocator allocator,
    9649  const VkImageCreateInfo* pImageCreateInfo,
    9650  const VmaAllocationCreateInfo* pAllocationCreateInfo,
    9651  uint32_t* pMemoryTypeIndex)
    9652 {
    9653  VMA_ASSERT(allocator != VK_NULL_HANDLE);
    9654  VMA_ASSERT(pImageCreateInfo != VMA_NULL);
    9655  VMA_ASSERT(pAllocationCreateInfo != VMA_NULL);
    9656  VMA_ASSERT(pMemoryTypeIndex != VMA_NULL);
    9657 
    9658  const VkDevice hDev = allocator->m_hDevice;
    9659  VkImage hImage = VK_NULL_HANDLE;
    9660  VkResult res = allocator->GetVulkanFunctions().vkCreateImage(
    9661  hDev, pImageCreateInfo, allocator->GetAllocationCallbacks(), &hImage);
    9662  if(res == VK_SUCCESS)
    9663  {
    9664  VkMemoryRequirements memReq = {};
    9665  allocator->GetVulkanFunctions().vkGetImageMemoryRequirements(
    9666  hDev, hImage, &memReq);
    9667 
    9668  res = vmaFindMemoryTypeIndex(
    9669  allocator,
    9670  memReq.memoryTypeBits,
    9671  pAllocationCreateInfo,
    9672  pMemoryTypeIndex);
    9673 
    9674  allocator->GetVulkanFunctions().vkDestroyImage(
    9675  hDev, hImage, allocator->GetAllocationCallbacks());
    9676  }
    9677  return res;
    9678 }
    9679 
    9680 VkResult vmaCreatePool(
    9681  VmaAllocator allocator,
    9682  const VmaPoolCreateInfo* pCreateInfo,
    9683  VmaPool* pPool)
    9684 {
    9685  VMA_ASSERT(allocator && pCreateInfo && pPool);
    9686 
    9687  VMA_DEBUG_LOG("vmaCreatePool");
    9688 
    9689  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9690 
    9691  return allocator->CreatePool(pCreateInfo, pPool);
    9692 }
    9693 
    9694 void vmaDestroyPool(
    9695  VmaAllocator allocator,
    9696  VmaPool pool)
    9697 {
    9698  VMA_ASSERT(allocator);
    9699 
    9700  if(pool == VK_NULL_HANDLE)
    9701  {
    9702  return;
    9703  }
    9704 
    9705  VMA_DEBUG_LOG("vmaDestroyPool");
    9706 
    9707  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9708 
    9709  allocator->DestroyPool(pool);
    9710 }
    9711 
    9712 void vmaGetPoolStats(
    9713  VmaAllocator allocator,
    9714  VmaPool pool,
    9715  VmaPoolStats* pPoolStats)
    9716 {
    9717  VMA_ASSERT(allocator && pool && pPoolStats);
    9718 
    9719  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9720 
    9721  allocator->GetPoolStats(pool, pPoolStats);
    9722 }
    9723 
    9724 void vmaMakePoolAllocationsLost(
    9725  VmaAllocator allocator,
    9726  VmaPool pool,
    9727  size_t* pLostAllocationCount)
    9728 {
    9729  VMA_ASSERT(allocator && pool);
    9730 
    9731  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9732 
    9733  allocator->MakePoolAllocationsLost(pool, pLostAllocationCount);
    9734 }
    9735 
    9736 VkResult vmaCheckPoolCorruption(VmaAllocator allocator, VmaPool pool)
    9737 {
    9738  VMA_ASSERT(allocator && pool);
    9739 
    9740  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9741 
    9742  VMA_DEBUG_LOG("vmaCheckPoolCorruption");
    9743 
    9744  return allocator->CheckPoolCorruption(pool);
    9745 }
    9746 
    9747 VkResult vmaAllocateMemory(
    9748  VmaAllocator allocator,
    9749  const VkMemoryRequirements* pVkMemoryRequirements,
    9750  const VmaAllocationCreateInfo* pCreateInfo,
    9751  VmaAllocation* pAllocation,
    9752  VmaAllocationInfo* pAllocationInfo)
    9753 {
    9754  VMA_ASSERT(allocator && pVkMemoryRequirements && pCreateInfo && pAllocation);
    9755 
    9756  VMA_DEBUG_LOG("vmaAllocateMemory");
    9757 
    9758  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9759 
    9760  VkResult result = allocator->AllocateMemory(
    9761  *pVkMemoryRequirements,
    9762  false, // requiresDedicatedAllocation
    9763  false, // prefersDedicatedAllocation
    9764  VK_NULL_HANDLE, // dedicatedBuffer
    9765  VK_NULL_HANDLE, // dedicatedImage
    9766  *pCreateInfo,
    9767  VMA_SUBALLOCATION_TYPE_UNKNOWN,
    9768  pAllocation);
    9769 
    9770  if(pAllocationInfo && result == VK_SUCCESS)
    9771  {
    9772  allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
    9773  }
    9774 
    9775  return result;
    9776 }
    9777 
    9778 VkResult vmaAllocateMemoryForBuffer(
    9779  VmaAllocator allocator,
    9780  VkBuffer buffer,
    9781  const VmaAllocationCreateInfo* pCreateInfo,
    9782  VmaAllocation* pAllocation,
    9783  VmaAllocationInfo* pAllocationInfo)
    9784 {
    9785  VMA_ASSERT(allocator && buffer != VK_NULL_HANDLE && pCreateInfo && pAllocation);
    9786 
    9787  VMA_DEBUG_LOG("vmaAllocateMemoryForBuffer");
    9788 
    9789  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9790 
    9791  VkMemoryRequirements vkMemReq = {};
    9792  bool requiresDedicatedAllocation = false;
    9793  bool prefersDedicatedAllocation = false;
    9794  allocator->GetBufferMemoryRequirements(buffer, vkMemReq,
    9795  requiresDedicatedAllocation,
    9796  prefersDedicatedAllocation);
    9797 
    9798  VkResult result = allocator->AllocateMemory(
    9799  vkMemReq,
    9800  requiresDedicatedAllocation,
    9801  prefersDedicatedAllocation,
    9802  buffer, // dedicatedBuffer
    9803  VK_NULL_HANDLE, // dedicatedImage
    9804  *pCreateInfo,
    9805  VMA_SUBALLOCATION_TYPE_BUFFER,
    9806  pAllocation);
    9807 
    9808  if(pAllocationInfo && result == VK_SUCCESS)
    9809  {
    9810  allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
    9811  }
    9812 
    9813  return result;
    9814 }
    9815 
    9816 VkResult vmaAllocateMemoryForImage(
    9817  VmaAllocator allocator,
    9818  VkImage image,
    9819  const VmaAllocationCreateInfo* pCreateInfo,
    9820  VmaAllocation* pAllocation,
    9821  VmaAllocationInfo* pAllocationInfo)
    9822 {
    9823  VMA_ASSERT(allocator && image != VK_NULL_HANDLE && pCreateInfo && pAllocation);
    9824 
    9825  VMA_DEBUG_LOG("vmaAllocateMemoryForImage");
    9826 
    9827  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9828 
    9829  VkResult result = AllocateMemoryForImage(
    9830  allocator,
    9831  image,
    9832  pCreateInfo,
    9833  VMA_SUBALLOCATION_TYPE_IMAGE_UNKNOWN,
    9834  pAllocation);
    9835 
    9836  if(pAllocationInfo && result == VK_SUCCESS)
    9837  {
    9838  allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
    9839  }
    9840 
    9841  return result;
    9842 }
    9843 
    9844 void vmaFreeMemory(
    9845  VmaAllocator allocator,
    9846  VmaAllocation allocation)
    9847 {
    9848  VMA_ASSERT(allocator);
    9849  VMA_DEBUG_LOG("vmaFreeMemory");
    9850  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9851  if(allocation != VK_NULL_HANDLE)
    9852  {
    9853  allocator->FreeMemory(allocation);
    9854  }
    9855 }
    9856 
    9857 void vmaGetAllocationInfo(
    9858  VmaAllocator allocator,
    9859  VmaAllocation allocation,
    9860  VmaAllocationInfo* pAllocationInfo)
    9861 {
    9862  VMA_ASSERT(allocator && allocation && pAllocationInfo);
    9863 
    9864  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9865 
    9866  allocator->GetAllocationInfo(allocation, pAllocationInfo);
    9867 }
    9868 
    9869 VkBool32 vmaTouchAllocation(
    9870  VmaAllocator allocator,
    9871  VmaAllocation allocation)
    9872 {
    9873  VMA_ASSERT(allocator && allocation);
    9874 
    9875  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9876 
    9877  return allocator->TouchAllocation(allocation);
    9878 }
    9879 
    9880 void vmaSetAllocationUserData(
    9881  VmaAllocator allocator,
    9882  VmaAllocation allocation,
    9883  void* pUserData)
    9884 {
    9885  VMA_ASSERT(allocator && allocation);
    9886 
    9887  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9888 
    9889  allocation->SetUserData(allocator, pUserData);
    9890 }
    9891 
    9892 void vmaCreateLostAllocation(
    9893  VmaAllocator allocator,
    9894  VmaAllocation* pAllocation)
    9895 {
    9896  VMA_ASSERT(allocator && pAllocation);
    9897 
    9898  VMA_DEBUG_GLOBAL_MUTEX_LOCK;
    9899 
    9900  allocator->CreateLostAllocation(pAllocation);
    9901 }
    9902 
    9903 VkResult vmaMapMemory(
    9904  VmaAllocator allocator,
    9905  VmaAllocation allocation,
    9906  void** ppData)
    9907 {
    9908  VMA_ASSERT(allocator && allocation && ppData);
    9909 
    9910  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9911 
    9912  return allocator->Map(allocation, ppData);
    9913 }
    9914 
    9915 void vmaUnmapMemory(
    9916  VmaAllocator allocator,
    9917  VmaAllocation allocation)
    9918 {
    9919  VMA_ASSERT(allocator && allocation);
    9920 
    9921  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9922 
    9923  allocator->Unmap(allocation);
    9924 }
    9925 
    9926 void vmaFlushAllocation(VmaAllocator allocator, VmaAllocation allocation, VkDeviceSize offset, VkDeviceSize size)
    9927 {
    9928  VMA_ASSERT(allocator && allocation);
    9929 
    9930  VMA_DEBUG_LOG("vmaFlushAllocation");
    9931 
    9932  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9933 
    9934  allocator->FlushOrInvalidateAllocation(allocation, offset, size, VMA_CACHE_FLUSH);
    9935 }
    9936 
    9937 void vmaInvalidateAllocation(VmaAllocator allocator, VmaAllocation allocation, VkDeviceSize offset, VkDeviceSize size)
    9938 {
    9939  VMA_ASSERT(allocator && allocation);
    9940 
    9941  VMA_DEBUG_LOG("vmaInvalidateAllocation");
    9942 
    9943  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9944 
    9945  allocator->FlushOrInvalidateAllocation(allocation, offset, size, VMA_CACHE_INVALIDATE);
    9946 }
    9947 
    9948 VkResult vmaCheckCorruption(VmaAllocator allocator, uint32_t memoryTypeBits)
    9949 {
    9950  VMA_ASSERT(allocator);
    9951 
    9952  VMA_DEBUG_LOG("vmaCheckCorruption");
    9953 
    9954  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9955 
    9956  return allocator->CheckCorruption(memoryTypeBits);
    9957 }
    9958 
    9959 VkResult vmaDefragment(
    9960  VmaAllocator allocator,
    9961  VmaAllocation* pAllocations,
    9962  size_t allocationCount,
    9963  VkBool32* pAllocationsChanged,
    9964  const VmaDefragmentationInfo *pDefragmentationInfo,
    9965  VmaDefragmentationStats* pDefragmentationStats)
    9966 {
    9967  VMA_ASSERT(allocator && pAllocations);
    9968 
    9969  VMA_DEBUG_LOG("vmaDefragment");
    9970 
    9971  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9972 
    9973  return allocator->Defragment(pAllocations, allocationCount, pAllocationsChanged, pDefragmentationInfo, pDefragmentationStats);
    9974 }
    9975 
    9976 VkResult vmaBindBufferMemory(
    9977  VmaAllocator allocator,
    9978  VmaAllocation allocation,
    9979  VkBuffer buffer)
    9980 {
    9981  VMA_ASSERT(allocator && allocation && buffer);
    9982 
    9983  VMA_DEBUG_LOG("vmaBindBufferMemory");
    9984 
    9985  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    9986 
    9987  return allocator->BindBufferMemory(allocation, buffer);
    9988 }
    9989 
    9990 VkResult vmaBindImageMemory(
    9991  VmaAllocator allocator,
    9992  VmaAllocation allocation,
    9993  VkImage image)
    9994 {
    9995  VMA_ASSERT(allocator && allocation && image);
    9996 
    9997  VMA_DEBUG_LOG("vmaBindImageMemory");
    9998 
    9999  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    10000 
    10001  return allocator->BindImageMemory(allocation, image);
    10002 }
    10003 
    10004 VkResult vmaCreateBuffer(
    10005  VmaAllocator allocator,
    10006  const VkBufferCreateInfo* pBufferCreateInfo,
    10007  const VmaAllocationCreateInfo* pAllocationCreateInfo,
    10008  VkBuffer* pBuffer,
    10009  VmaAllocation* pAllocation,
    10010  VmaAllocationInfo* pAllocationInfo)
    10011 {
    10012  VMA_ASSERT(allocator && pBufferCreateInfo && pAllocationCreateInfo && pBuffer && pAllocation);
    10013 
    10014  VMA_DEBUG_LOG("vmaCreateBuffer");
    10015 
    10016  VMA_DEBUG_GLOBAL_MUTEX_LOCK
    10017 
    10018  *pBuffer = VK_NULL_HANDLE;
    10019  *pAllocation = VK_NULL_HANDLE;
    10020 
    10021  // 1. Create VkBuffer.
    10022  VkResult res = (*allocator->GetVulkanFunctions().vkCreateBuffer)(
    10023  allocator->m_hDevice,
    10024  pBufferCreateInfo,
    10025  allocator->GetAllocationCallbacks(),
    10026  pBuffer);
    10027  if(res >= 0)
    10028  {
    10029  // 2. vkGetBufferMemoryRequirements.
    10030  VkMemoryRequirements vkMemReq = {};
    10031  bool requiresDedicatedAllocation = false;
    10032  bool prefersDedicatedAllocation = false;
    10033  allocator->GetBufferMemoryRequirements(*pBuffer, vkMemReq,
    10034  requiresDedicatedAllocation, prefersDedicatedAllocation);
    10035 
    10036  // Make sure alignment requirements for specific buffer usages reported
    10037  // in Physical Device Properties are included in alignment reported by memory requirements.
    10038  if((pBufferCreateInfo->usage & VK_BUFFER_USAGE_UNIFORM_TEXEL_BUFFER_BIT) != 0)
    10039  {
    10040  VMA_ASSERT(vkMemReq.alignment %
    10041  allocator->m_PhysicalDeviceProperties.limits.minTexelBufferOffsetAlignment == 0);
    10042  }
    10043  if((pBufferCreateInfo->usage & VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT) != 0)
    10044  {
    10045  VMA_ASSERT(vkMemReq.alignment %
    10046  allocator->m_PhysicalDeviceProperties.limits.minUniformBufferOffsetAlignment == 0);
    10047  }
    10048  if((pBufferCreateInfo->usage & VK_BUFFER_USAGE_STORAGE_BUFFER_BIT) != 0)
    10049  {
    10050  VMA_ASSERT(vkMemReq.alignment %
    10051  allocator->m_PhysicalDeviceProperties.limits.minStorageBufferOffsetAlignment == 0);
    10052  }
    10053 
    10054  // 3. Allocate memory using allocator.
    10055  res = allocator->AllocateMemory(
    10056  vkMemReq,
    10057  requiresDedicatedAllocation,
    10058  prefersDedicatedAllocation,
    10059  *pBuffer, // dedicatedBuffer
    10060  VK_NULL_HANDLE, // dedicatedImage
    10061  *pAllocationCreateInfo,
    10062  VMA_SUBALLOCATION_TYPE_BUFFER,
    10063  pAllocation);
    10064  if(res >= 0)
    10065  {
    10066  // 4. Bind buffer with memory.
    10067  res = allocator->BindBufferMemory(*pAllocation, *pBuffer);
    10068  if(res >= 0)
    10069  {
    10070  // All steps succeeded.
    10071  #if VMA_STATS_STRING_ENABLED
    10072  (*pAllocation)->InitBufferImageUsage(pBufferCreateInfo->usage);
    10073  #endif
    10074  if(pAllocationInfo != VMA_NULL)
    10075  {
    10076  allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
    10077  }
    10078  return VK_SUCCESS;
    10079  }
    10080  allocator->FreeMemory(*pAllocation);
    10081  *pAllocation = VK_NULL_HANDLE;
    10082  (*allocator->GetVulkanFunctions().vkDestroyBuffer)(allocator->m_hDevice, *pBuffer, allocator->GetAllocationCallbacks());
    10083  *pBuffer = VK_NULL_HANDLE;
    10084  return res;
    10085  }
    10086  (*allocator->GetVulkanFunctions().vkDestroyBuffer)(allocator->m_hDevice, *pBuffer, allocator->GetAllocationCallbacks());
    10087  *pBuffer = VK_NULL_HANDLE;
    10088  return res;
    10089  }
    10090  return res;
    10091 }
    10092 
void vmaDestroyBuffer(
    VmaAllocator allocator,
    VkBuffer buffer,
    VmaAllocation allocation)
{
    VMA_ASSERT(allocator);
    VMA_DEBUG_LOG("vmaDestroyBuffer");
    VMA_DEBUG_GLOBAL_MUTEX_LOCK
    if(buffer != VK_NULL_HANDLE)
    {
        (*allocator->GetVulkanFunctions().vkDestroyBuffer)(allocator->m_hDevice, buffer, allocator->GetAllocationCallbacks());
    }
    if(allocation != VK_NULL_HANDLE)
    {
        allocator->FreeMemory(allocation);
    }
}

VkResult vmaCreateImage(
    VmaAllocator allocator,
    const VkImageCreateInfo* pImageCreateInfo,
    const VmaAllocationCreateInfo* pAllocationCreateInfo,
    VkImage* pImage,
    VmaAllocation* pAllocation,
    VmaAllocationInfo* pAllocationInfo)
{
    VMA_ASSERT(allocator && pImageCreateInfo && pAllocationCreateInfo && pImage && pAllocation);

    VMA_DEBUG_LOG("vmaCreateImage");

    VMA_DEBUG_GLOBAL_MUTEX_LOCK

    *pImage = VK_NULL_HANDLE;
    *pAllocation = VK_NULL_HANDLE;

    // 1. Create VkImage.
    VkResult res = (*allocator->GetVulkanFunctions().vkCreateImage)(
        allocator->m_hDevice,
        pImageCreateInfo,
        allocator->GetAllocationCallbacks(),
        pImage);
    if(res >= 0)
    {
        VmaSuballocationType suballocType = pImageCreateInfo->tiling == VK_IMAGE_TILING_OPTIMAL ?
            VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL :
            VMA_SUBALLOCATION_TYPE_IMAGE_LINEAR;

        // 2. Allocate memory using allocator.
        res = AllocateMemoryForImage(allocator, *pImage, pAllocationCreateInfo, suballocType, pAllocation);
        if(res >= 0)
        {
            // 3. Bind image with memory.
            res = allocator->BindImageMemory(*pAllocation, *pImage);
            if(res >= 0)
            {
                // All steps succeeded.
                #if VMA_STATS_STRING_ENABLED
                    (*pAllocation)->InitBufferImageUsage(pImageCreateInfo->usage);
                #endif
                if(pAllocationInfo != VMA_NULL)
                {
                    allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
                }
                return VK_SUCCESS;
            }
            allocator->FreeMemory(*pAllocation);
            *pAllocation = VK_NULL_HANDLE;
            (*allocator->GetVulkanFunctions().vkDestroyImage)(allocator->m_hDevice, *pImage, allocator->GetAllocationCallbacks());
            *pImage = VK_NULL_HANDLE;
            return res;
        }
        (*allocator->GetVulkanFunctions().vkDestroyImage)(allocator->m_hDevice, *pImage, allocator->GetAllocationCallbacks());
        *pImage = VK_NULL_HANDLE;
        return res;
    }
    return res;
}

void vmaDestroyImage(
    VmaAllocator allocator,
    VkImage image,
    VmaAllocation allocation)
{
    VMA_ASSERT(allocator);
    VMA_DEBUG_LOG("vmaDestroyImage");
    VMA_DEBUG_GLOBAL_MUTEX_LOCK
    if(image != VK_NULL_HANDLE)
    {
        (*allocator->GetVulkanFunctions().vkDestroyImage)(allocator->m_hDevice, image, allocator->GetAllocationCallbacks());
    }
    if(allocation != VK_NULL_HANDLE)
    {
        allocator->FreeMemory(allocation);
    }
}

#endif // #ifdef VMA_IMPLEMENTATION
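The nested error handling in `vmaCreateImage` above follows a fixed create → allocate → bind sequence in which each failing step unwinds everything acquired so far, in reverse order, before propagating the error code. A minimal, self-contained sketch of that unwinding pattern (all names here are hypothetical stand-ins, not VMA or Vulkan API; the real functions return `VkResult` and operate on device objects):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical VkResult-style status codes.
enum Result { RES_SUCCESS = 0, RES_ERROR = -1 };

// Records the order of operations so the unwinding can be observed.
static std::vector<std::string> g_log;

static Result createImage(bool ok)    { g_log.push_back("createImage");    return ok ? RES_SUCCESS : RES_ERROR; }
static Result allocateMemory(bool ok) { g_log.push_back("allocateMemory"); return ok ? RES_SUCCESS : RES_ERROR; }
static Result bindMemory(bool ok)     { g_log.push_back("bindMemory");     return ok ? RES_SUCCESS : RES_ERROR; }
static void   freeMemory()            { g_log.push_back("freeMemory"); }
static void   destroyImage()          { g_log.push_back("destroyImage"); }

// Mirrors the structure of vmaCreateImage: three acquisition steps,
// each failure path releasing only what was acquired before it.
Result createImageWithMemory(bool createOk, bool allocOk, bool bindOk)
{
    Result res = createImage(createOk);
    if(res >= 0)
    {
        res = allocateMemory(allocOk);
        if(res >= 0)
        {
            res = bindMemory(bindOk);
            if(res >= 0)
            {
                return RES_SUCCESS; // All steps succeeded.
            }
            freeMemory();   // Bind failed: release the allocation...
            destroyImage(); // ...then the image.
            return res;
        }
        destroyImage(); // Allocation failed: only the image exists.
        return res;
    }
    return res; // Creation failed: nothing to unwind.
}
```

On success the caller owns both the image and its allocation and is expected to release them together later, exactly as `vmaDestroyImage` does with its `vkDestroyImage` + `FreeMemory` pair.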
[Remainder of this hunk omitted: regenerated Doxygen tooltip entries for docs/html/vk__mem__alloc_8h_source.html. Each `-`/`+` pair changes only the "Definition: vk_mem_alloc.h:NNNN" line number of a symbol, reflecting the lines added earlier in the header by this commit; no API content changes.]