Over the past decade, heap sprays have become almost synonymous with exploits against web browsers (though they can be used in exploits against many other kinds of software). A great many people have published excellent descriptions of how they work, and produced various implementations with different benefits and drawbacks that fit almost any need. I won't bother describing the basic concept and the various implementations in more detail here. If you'd like to freshen up on the subject, I suggest you read the Wikipedia page on heap spraying and its external links and references.
After having developed my first practical implementation of a heap spray for Microsoft Internet Explorer about ten years ago, I found that the amount of memory needed in some cases was too much for a realistic attack scenario. At that time, not many machines had the multiple Gigabytes of RAM needed to spray to some of the higher addresses I was forced to use for various reasons. Sure, the OS could swap memory to disk, but since disk storage was not very fast, it caused the exploit to slow to a crawl if it did not cause an out-of-memory crash.
Anyway, I needed a new kind of heap spray that did not allocate as much RAM as traditional heap sprays do, so I developed one that uses significantly less. In practice it uses about 33% less in most cases, though in certain situations the savings can be even greater.
To me the best way to describe this kind of heap spray is with an analogy to the game Tetris: what if the rules were reversed and you got points for getting a block to the top of the screen as fast as possible? i.e. rather than filling every row, you would stack blocks on top of one another and leave as many gaps below them as you could. Similarly, this kind of heap spray tries to leave large gaps between the memory blocks it allocates; the memory in between is not immediately relevant to the exploit, so there is no need to fill it.
This gets even better if you allocate alternating large and small blocks and then free the large blocks before allocating an even larger one. For example, if you allocate a total of 10 blocks of alternating 9999 and 1 bytes, then free the larger blocks before allocating 10000 bytes, you will end up having 10005 bytes allocated in 6 blocks, 5 of these containing only 1 byte and one containing 10000 bytes. That last block starts at an address 50000 bytes away from the first block.

After allocating blocks:

    Address       Contents
    0             [1st 9999 byte block]
    9999          [1st 1 byte block]
    10000         [2nd 9999 byte block]
    19999         [2nd 1 byte block]
    .....
    (N-1)*10000   [Nth 9999 byte block]
    N*10000-1     [Nth 1 byte block]

    Total memory usage: N*10000 bytes.

After freeing the large blocks:

    Address       Contents
    0             free
    9999          [1st 1 byte block]
    10000         free
    19999         [2nd 1 byte block]
    .....
    (N-1)*10000   free
    N*10000-1     [Nth 1 byte block]

    Total memory usage: N bytes.

After allocating a 10000 byte block:

    Address       Contents
    0             free
    9999          [1st 1 byte block]
    10000         free
    19999         [2nd 1 byte block]
    .....
    (N-1)*10000   free
    N*10000-1     [Nth 1 byte block]
    N*10000       [10000 byte block]

    Total memory usage: N+10000 bytes, approaching 1/10000th of the original as N grows!
    Address range reached: [N*10000 - (N+1)*10000]
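To make the bookkeeping concrete, here is a small simulation of this pattern. The `ToyHeap` class is a hypothetical, idealized allocator (first-fit reuse of freed holes, sequential growth otherwise); real heap managers behave differently, but the arithmetic of the example carries over:

```python
class ToyHeap:
    """Idealized heap: first-fit reuse of freed holes, otherwise bump the end."""
    def __init__(self):
        self.end = 0     # address just past the highest block ever placed
        self.holes = []  # (address, size) of freed regions
        self.live = {}   # address -> size of allocated blocks

    def alloc(self, size):
        for i, (addr, hole) in enumerate(self.holes):
            if hole >= size:  # first fit: reuse a hole the block fits in
                self.holes[i] = (addr + size, hole - size)
                self.live[addr] = size
                return addr
        addr, self.end = self.end, self.end + size  # no fit: extend the heap
        self.live[addr] = size
        return addr

    def free(self, addr):
        self.holes.append((addr, self.live.pop(addr)))

    def in_use(self):
        return sum(self.live.values())

heap = ToyHeap()
big, keep = [], []
for _ in range(5):                # N = 5 pairs of alternating blocks
    big.append(heap.alloc(9999))  # large block
    keep.append(heap.alloc(1))    # small block that stays allocated
for addr in big:                  # free the large blocks first...
    heap.free(addr)
final = heap.alloc(10000)         # ...then allocate the even larger one

print(final, heap.in_use())       # 50000 10005
```

The 10000 byte request does not fit in any of the 9999 byte holes, so it lands at address 50000 while only 10005 bytes remain allocated.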
One might think at this point that this does not actually allow you to spray to a higher address without first allocating almost the same amount of memory as a traditional heap spray: after all, in our example we did allocate a lot of memory before we got to free most of it. However, this problem goes away once you start "stacking" these kinds of Swiss-cheese heap sprays: first you do this for memory blocks of sizes A and B, where A is large and B is small, up to about half your target address. You then free all the A blocks and do it again for blocks of size C, significantly larger than A. You do this up to the point where, if you were to allocate one more C block, you would cover your target address. At that point, you free all the C blocks before allocating that one final block.

After the step using the A and B blocks, you have allocated about half the memory needed for a normal heap spray up to the target address. You then free almost all the memory you have allocated so far, because the A blocks are much larger than the B blocks. The larger the difference between these two, the closer your total allocated memory goes back to 0 after this step.
During the next step you only need to cover the second half of the distance to your target, so you should not need to allocate more than half of a normal heap spray again. When you are done allocating, you have again used about half the memory of a normal heap spray, but you are now very near to allocating a block at the target address. You again free a large part of the memory you allocated, and the final block you then allocate is nowhere near large enough to push your total beyond half of a normal heap spray. This allows us to spray the heap without ever having all the memory between the start address and the target address allocated at the same time.

Step 1: allocate 999 and 1 byte blocks.

       Address     Contents
    =>       0     [999 byte block]
    =>     999     [1 byte block]

    Total memory usage: 1,000 bytes

Step 2: free the 999 byte block.

       Address     Contents
    =>       0     free
           999     [1 byte block]

    Total memory usage: 1 byte

Step 3: allocate 99,999 and 1,000 byte blocks.

       Address     Contents
             0     free
           999     [1 byte block]
    =>   1,000     [99,999 byte block]
    => 100,999     [1,000 byte block]

    Total memory usage: 101,000 bytes

Step 4: free the 99,999 byte block.

       Address     Contents
             0     free
           999     [1 byte block]
    =>   1,000     free
       100,999     [1,000 byte block]

    Total memory usage: 1,001 bytes

Step 5: allocate a 100,000 byte block.

       Address     Contents
             0     free
           999     [1 byte block]
         1,000     free
       100,999     [1,000 byte block]
    => 101,999     [100,000 byte block]

    Total memory usage: 101,001 bytes
    Address range reached: [101,999 - 201,999]
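These five steps can be replayed in the same kind of idealized first-fit model (again an assumption for illustration, not a real allocator), tracking peak memory use along the way:

```python
class ToyHeap:
    """Idealized heap: first-fit reuse of freed holes, otherwise bump the end."""
    def __init__(self):
        self.end, self.holes, self.live = 0, [], {}

    def alloc(self, size):
        for i, (addr, hole) in enumerate(self.holes):
            if hole >= size:  # first fit: reuse a hole the block fits in
                self.holes[i] = (addr + size, hole - size)
                self.live[addr] = size
                return addr
        addr, self.end = self.end, self.end + size  # no fit: extend the heap
        self.live[addr] = size
        return addr

    def free(self, addr):
        self.holes.append((addr, self.live.pop(addr)))

    def in_use(self):
        return sum(self.live.values())

heap, peak = ToyHeap(), 0

a = heap.alloc(999)         # step 1: 999 and 1 byte blocks
heap.alloc(1)
peak = max(peak, heap.in_use())
heap.free(a)                # step 2: free the 999 byte block
c = heap.alloc(99999)       # step 3: 99,999 and 1,000 byte blocks
heap.alloc(1000)            # (the 999 byte hole at 0 is too small for either)
peak = max(peak, heap.in_use())
heap.free(c)                # step 4: free the 99,999 byte block
final = heap.alloc(100000)  # step 5: the final block, covering the target
peak = max(peak, heap.in_use())

print(final, final + 100000, peak)  # 101999 201999 101001
```

The final block spans [101,999 - 201,999] while memory use never exceeds 101,001 bytes, matching the tables above.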
And there you have it: in the above example, we never allocated more than 101,001 bytes, but the final memory block was allocated at an offset of 101,999 from the start and ends at address 201,999. If you were targeting the address 200,000, you used only about half the amount of RAM a normal heap spray would have required.
The amount of RAM installed in most modern desktop and server systems is not as much of a limit to a successful exploit using a heap spray as it was 10 years ago. However, many phones and Internet-of-Things devices still have limited amounts of RAM. Also, the widespread use of 64-bit applications means that 32-bit integer math mistakes can result in out-of-bounds memory access at offsets several Gigabytes from the actual memory allocation. In such situations, the effectiveness of a heap-spray can still be impacted by available RAM.
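As a hypothetical illustration of such a mistake (the names and numbers here are invented, not taken from any real bug), consider an element offset that is computed in 32-bit arithmetic before being added to a 64-bit pointer:

```python
def to_i32(x):
    """Truncate x to a signed 32-bit integer, as C's `int` would."""
    return ((x & 0xFFFFFFFF) ^ 0x80000000) - 0x80000000

index = 0x6000_0000      # attacker-controlled element index
element_size = 4         # 4 byte elements

correct_offset = index * element_size        # 0x1_8000_0000 = 6 GiB
buggy_offset = to_i32(index * element_size)  # truncated to a signed
                                             # 32-bit value: -2 GiB

print(hex(correct_offset), buggy_offset)
```

Either offset lands gigabytes away from the buffer itself, which is exactly the kind of range a heap spray has to reach, so on such targets the spray's memory footprint still matters.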
As shown above, there are practical ways of implementing a heap spray that uses significantly less memory to reach a given address range than a traditional heap spray would, thus improving the speed and reliability of an exploit that uses it.
This technique does not require anything other than the ability to selectively free memory blocks used during the heap spray. It should therefore be possible to apply it to any existing heap spray implementation, provided it can free memory during the process of spraying the heap.
I've tried to come up with a generic algorithm for getting to any address with the least amount of RAM, but unfortunately it seems there are many factors that complicate things beyond the point at which I can write something generic. For instance, the minimum page size, maximum size of a heap block, other heap allocator quirks, uncertainty about the start address, and desired memory range for the final block are all things that need to be taken into consideration in such an algorithm. I'm sure some math wiz can find the optimal allocation/free algorithm for spraying a memory range up to any given address given a set of variables for these values - do let me know if you've worked it out. All I can say is that I believe the theoretical maximum gain from this technique is using about 50% less memory than a regular heap spray, but proving that is beyond my capabilities given the time I've allotted for such activities.
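For what it's worth, that 50% intuition can be seen in a simplified model that ignores all of the complicating factors named above (page size, block headers, allocator quirks): assume an idealized sequential allocator, let each round cover half of the remaining distance with one large block plus a small "keeper" block, and free the large block at the end of each round. The function and sizes below are my own invention, a sketch rather than the optimal algorithm:

```python
def spray_to(target, final_size, keeper=16):
    """Return (final_addr, peak_bytes) for an idealized stacked spray.

    Each round allocates one large block covering half the remaining
    distance plus a small keeper block, then frees the large block.
    """
    end = in_use = peak = 0
    while target - end > final_size:
        hole = (target - end) // 2  # large block: half the remaining gap
        in_use += hole + keeper     # allocate the hole + keeper blocks
        peak = max(peak, in_use)
        end += hole + keeper
        in_use -= hole              # free the hole, keep the keeper
    in_use += final_size            # final block reaches past `target`
    peak = max(peak, in_use)
    return end, peak

addr, peak = spray_to(200_000, 100_000)
print(addr, peak)  # 100016 100016
```

In this model the first large block already costs half the distance to the target, and each later round costs less, so peak usage sits at roughly target/2 plus the keepers - consistent with the ~50% maximum gain. Whether a real allocator lets you realize that bound is exactly the open question above.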