Buddy memory allocation
The buddy memory allocation technique is a memory allocation algorithm that divides memory into partitions to satisfy a memory request as suitably as possible. It works by repeatedly splitting memory into halves to find a best fit. According to Donald Knuth
, the buddy system was invented in 1963 by Harry Markowitz
, who won the 1990 Nobel Memorial Prize in Economics, and was first described by Kenneth C. Knowlton
 (published 1965). Compared to the more complex memory allocation techniques that some modern operating systems use, buddy memory allocation is relatively easy to implement. It supports limited but efficient splitting and coalescing of memory blocks.

How it works

There are various forms of the buddy system, but binary buddies, in which each block is subdivided into two smaller blocks, are the simplest and most common variety. Every memory block in this system has an order, where the order is an integer ranging from 0 to a specified upper limit. The blocks in each order have sizes proportional to 2^order, so that each block is exactly twice the size of blocks that are one order lower. Power-of-two block sizes make address computation simple, because all buddies are aligned on memory address boundaries that are also powers of two. When a larger block is split, it is divided into two smaller blocks, and each smaller block becomes a unique buddy to the other. A split block can only be merged with its unique buddy block, which then reforms the larger block they were split from.

Starting off, the size of the smallest possible block is determined, i.e. the smallest memory block that can be allocated. In most cases, this size is imposed by the hardware itself when paged memory is used, since pages have a certain minimal size such as 4 kilobytes. However, a lower limit independent of hardware limitations may also be desirable, so that the overhead of storing used and free memory locations is minimized. If no lower limit existed at all, and many programs requested small blocks of memory like 1K or less, the system would waste a lot of space relative to the size of the blocks simply to keep track of which blocks are allocated and unallocated. Typically the lower limit would be small enough to minimize wasted space, but large enough to avoid excessive overhead. The smallest block size is then taken as the size of an order-0 block, so that all higher orders are expressed as power-of-two multiples of this size.
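
For illustration, both directions of this order/size relationship fit in a few lines of C. This is only a sketch: the 4K minimum block and the helper names block_size and size_to_order are assumptions rather than part of any particular system.

  #include <stddef.h>

  #define MIN_BLOCK_SIZE ((size_t)4 * 1024)   /* assumed order-0 block: one 4K page */

  /* A block of a given order is 2^order times the order-0 size. */
  static size_t block_size(unsigned order)
  {
      return MIN_BLOCK_SIZE << order;
  }

  /* Smallest order whose block can hold a request of the given size. */
  static unsigned size_to_order(size_t size)
  {
      unsigned order = 0;
      while (block_size(order) < size)    /* each step doubles the candidate block */
          order++;
      return order;
  }

With these assumptions, a 34K request maps to order 4, i.e. a 64K block.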

The programmer
then has to decide on, or to write code to obtain, the highest possible order that can still fit in the available memory space. Since the total available memory in a given computer system may not be a power-of-two multiple of the minimum block size, the largest block size may not span the entire memory of the system. For instance, if the system had 2000K of physical memory and the individual memory pages were 4K each, the upper limit on the order would be 8, since 2^8 pages (256 pages, or 1024K) is the biggest block that will fit in memory. This makes it impossible to allocate the entire physical memory in a single chunk; the remaining 976K of memory would have to be allocated in smaller blocks.
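
Sticking with the 2000K example, the highest order can also be computed rather than chosen by hand. A minimal sketch under the same assumptions (the name highest_order is illustrative):

  #include <stdio.h>
  #include <stddef.h>

  /* Largest order whose block (min_block << order) still fits in total memory. */
  static unsigned highest_order(size_t total, size_t min_block)
  {
      unsigned order = 0;
      while ((min_block << (order + 1)) <= total)
          order++;
      return order;
  }

  int main(void)
  {
      /* 2000K of memory with 4K pages: 2^8 pages = 1024K is the largest
         power-of-two block that fits, so the highest order is 8. */
      printf("%u\n", highest_order(2000 * 1024, 4 * 1024));   /* prints 8 */
      return 0;
  }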

In practice

The following is an example of what happens when a program makes requests for memory. Let's say in this system, the smallest possible block is 64 kilobytes in size, and the upper limit for the order is 4, which results in a largest possible allocatable block of 2^4 times 64K = 1024K. The following shows a possible state of the system after various memory requests.
(Each row shows the 16 x 64K area from left to right; a block of order n occupies 2^n x 64K. Unlabelled blocks are free; A-D mark blocks allocated to the corresponding program.)

Step  Memory layout (1024K total)
1     2^4
2.1   2^3 | 2^3
2.2   2^2 | 2^2 | 2^3
2.3   2^1 | 2^1 | 2^2 | 2^3
2.4   2^0 | 2^0 | 2^1 | 2^2 | 2^3
2.5   A:2^0 | 2^0 | 2^1 | 2^2 | 2^3
3     A:2^0 | 2^0 | B:2^1 | 2^2 | 2^3
4     A:2^0 | C:2^0 | B:2^1 | 2^2 | 2^3
5.1   A:2^0 | C:2^0 | B:2^1 | 2^1 | 2^1 | 2^3
5.2   A:2^0 | C:2^0 | B:2^1 | D:2^1 | 2^1 | 2^3
6     A:2^0 | C:2^0 | 2^1 | D:2^1 | 2^1 | 2^3
7.1   A:2^0 | C:2^0 | 2^1 | 2^1 | 2^1 | 2^3
7.2   A:2^0 | C:2^0 | 2^1 | 2^2 | 2^3
8     2^0 | C:2^0 | 2^1 | 2^2 | 2^3
9.1   2^0 | 2^0 | 2^1 | 2^2 | 2^3
9.2   2^1 | 2^1 | 2^2 | 2^3
9.3   2^2 | 2^2 | 2^3
9.4   2^3 | 2^3
9.5   2^4


This allocation could have occurred in the following manner:
  1. The initial situation.
  2. Program A requests 34K of memory, which needs an order-0 (64K) block.
    1. No order 0 blocks are available, so an order 4 block is split, creating two order 3 blocks.
    2. Still no order 0 blocks available, so the first order 3 block is split, creating two order 2 blocks.
    3. Still no order 0 blocks available, so the first order 2 block is split, creating two order 1 blocks.
    4. Still no order 0 blocks available, so the first order 1 block is split, creating two order 0 blocks.
    5. Now an order 0 block is available, so it is allocated to A.
  3. Program B requests 66K of memory, which needs an order-1 (128K) block. An order-1 block is available, so it is allocated to B.
  4. Program C requests 35K of memory, which needs an order-0 block. An order-0 block is available, so it is allocated to C.
  5. Program D requests 67K of memory, which needs an order-1 block.
    1. No order 1 blocks are available, so an order 2 block is split, creating two order 1 blocks.
    2. Now an order 1 block is available, so it is allocated to D.
  6. Program B releases its memory, freeing one order 1 block.
  7. Program D releases its memory.
    1. One order 1 block is freed.
    2. Since the buddy block of the newly freed block is also free, the two are merged into one order 2 block.
  8. Program A releases its memory, freeing one order 0 block.
  9. Program C releases its memory.
    1. One order 0 block is freed.
    2. Since the buddy block of the newly freed block is also free, the two are merged into one order 1 block.
    3. Since the buddy block of the newly formed order 1 block is also free, the two are merged into one order 2 block.
    4. Since the buddy block of the newly formed order 2 block is also free, the two are merged into one order 3 block.
    5. Since the buddy block of the newly formed order 3 block is also free, the two are merged into one order 4 block.


As you can see, what happens when a memory request is made is as follows:
  • If memory is to be allocated
  1. Look for a memory slot of a suitable size (the minimal 2^k block that is larger than or equal to the requested memory)
    1. If it is found, it is allocated to the program
    2. If not, it tries to make a suitable memory slot. The system does so by trying the following:
      1. Split a free memory slot larger than the requested memory size in half
      2. If the lower limit is reached, then allocate that amount of memory
      3. Go back to step 1 (look for a memory slot of a suitable size)
      4. Repeat this process until a suitable memory slot is found

  • If memory is to be freed
  1. Free the block of memory
  2. Look at the block's buddy - is it free too?
  3. If it is, combine the two, go back to step 2, and repeat until either the upper limit is reached (all memory is freed) or a buddy that is still in use is encountered (a code sketch of both procedures follows these lists)
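
Taken together, these two procedures map naturally onto a small allocator that keeps one free list per order. The following is a minimal sketch rather than a production implementation: the pool size, the fixed-capacity free lists and the names buddy_alloc and buddy_free are all illustrative assumptions, and blocks are identified by byte offsets into the managed area instead of real pointers.

  #include <stdio.h>
  #include <stddef.h>

  #define MIN_BLOCK  ((size_t)64 * 1024)   /* order-0 block: 64K, as in the example */
  #define MAX_ORDER  4                     /* largest block: 2^4 * 64K = 1024K */
  #define MAX_FREE   16                    /* free-list capacity per order (illustrative) */

  static size_t free_list[MAX_ORDER + 1][MAX_FREE];   /* free block offsets, per order */
  static int    free_count[MAX_ORDER + 1];

  static void push_free(int order, size_t off)
  {
      free_list[order][free_count[order]++] = off;
  }

  /* Remove a specific offset from an order's free list; returns 1 if it was there. */
  static int take_free(int order, size_t off)
  {
      for (int i = 0; i < free_count[order]; i++) {
          if (free_list[order][i] == off) {
              free_list[order][i] = free_list[order][--free_count[order]];
              return 1;
          }
      }
      return 0;
  }

  /* Allocate: find (or make, by splitting) a free block of the requested order. */
  static int buddy_alloc(int order, size_t *off_out)
  {
      int o = order;
      while (o <= MAX_ORDER && free_count[o] == 0)
          o++;                                    /* 1. look for a big enough free block */
      if (o > MAX_ORDER)
          return 0;                               /* nothing large enough is free */
      size_t off = free_list[o][--free_count[o]];
      while (o > order) {                         /* 2. split down to the requested order */
          o--;
          push_free(o, off + (MIN_BLOCK << o));   /* the upper half becomes a free buddy */
      }
      *off_out = off;
      return 1;
  }

  /* Free: release the block, then merge upwards while its buddy is also free. */
  static void buddy_free(size_t off, int order)
  {
      while (order < MAX_ORDER) {
          size_t buddy = off ^ (MIN_BLOCK << order);   /* XOR of offset and block size */
          if (!take_free(order, buddy))
              break;                              /* buddy still in use: stop merging */
          if (buddy < off)
              off = buddy;                        /* merged block starts at the lower offset */
          order++;
      }
      push_free(order, off);
  }

  int main(void)
  {
      push_free(MAX_ORDER, 0);        /* step 1: one free 1024K block at offset 0 */

      size_t a, b, c, d;
      buddy_alloc(0, &a);             /* A: 34K -> order 0 */
      buddy_alloc(1, &b);             /* B: 66K -> order 1 */
      buddy_alloc(0, &c);             /* C: 35K -> order 0 */
      buddy_alloc(1, &d);             /* D: 67K -> order 1 */

      buddy_free(b, 1);               /* steps 6-9: free everything again */
      buddy_free(d, 1);
      buddy_free(a, 0);
      buddy_free(c, 0);

      printf("free order-4 blocks: %d\n", free_count[MAX_ORDER]);   /* prints 1 */
      return 0;
  }

The main function replays the walkthrough above: after A, B, C and D have allocated and then freed their blocks, everything coalesces back into a single order-4 block, as in step 9.5.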

Implementation and efficiency

In comparison to other simpler techniques such as dynamic allocation, the buddy memory system has little external fragmentation, and allows for compaction
of memory with little overhead. The buddy method of freeing memory is fast: the maximal number of coalescing steps required when a block is freed is at most the highest order, i.e. log2 of the ratio between the largest and smallest block sizes. Typically the buddy memory allocation system is implemented with the use of a binary tree
 to represent used or unused split memory blocks. The "buddy" of each block can be found with an exclusive OR of the block's address and the block's size.
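
Concretely, if block positions are taken as offsets from the start of the buddy-managed area (an assumption; with absolute addresses the base of the area has to be subtracted first), the computation is a one-liner:

  #include <stddef.h>

  /* Offset of the buddy of a power-of-two sized block at 'offset'. */
  static size_t buddy_of(size_t offset, size_t block_size)
  {
      return offset ^ block_size;
  }

For example, with 64K blocks the buddy of the block at offset 0 is at 64K and vice versa, while the buddy of a 128K block at offset 256K is at 384K.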

However, there still exists the problem of internal fragmentation — memory wasted because the memory requested is a little larger than a small block, but a lot smaller than a large block. Because of the way the buddy memory allocation technique works, a program that requests 66K of memory would be allocated 128K, which results in a waste of 62K of memory. This problem can be solved by slab allocation
, which may be layered on top of the coarser buddy allocator to provide more fine-grained allocation.
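
The amount wasted on a given request follows directly from rounding up to the next block size; a small illustrative helper, reusing the 64K order-0 block assumed in the example above:

  #include <stdio.h>
  #include <stddef.h>

  #define MIN_BLOCK ((size_t)64 * 1024)

  /* Size of the smallest buddy block that can satisfy a request. */
  static size_t allocated_size(size_t request)
  {
      size_t block = MIN_BLOCK;
      while (block < request)
          block <<= 1;
      return block;
  }

  int main(void)
  {
      size_t req = (size_t)66 * 1024;
      size_t got = allocated_size(req);              /* 66K is rounded up to 128K */
      printf("%zuK wasted\n", (got - req) / 1024);   /* prints "62K wasted" */
      return 0;
  }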

One version of the buddy allocation algorithm was described in detail by Donald Knuth in volume 1 of The Art of Computer Programming
. The Linux kernel
also uses the buddy system, with further modifications to minimise external fragmentation, along with various other allocators to manage the memory within blocks.