In computer science, an in-place algorithm is an algorithm which transforms input using a data structure with a small amount of extra storage space. The input is usually overwritten by the output as the algorithm executes. An algorithm which is not in-place is sometimes called not-in-place or out-of-place.
In-place can have slightly different meanings. In its strictest form, the algorithm can only have a constant amount of extra space, counting everything including function calls and pointers. However, this form is very limited, as simply having an index into a length-n array requires O(log n) bits. More broadly, in-place means that the algorithm does not use extra space for manipulating the input but may require a small, though non-constant, amount of extra space for its operation. Usually, this space is O(log n), though sometimes anything in o(n) is allowed. Note that analyses of space complexity also vary in whether or not to count the index lengths as part of the space used. Often, the space complexity is given in terms of the number of indices or pointers needed, ignoring their length. In this article, we refer to total space complexity (DSPACE), counting pointer lengths. Therefore, the space requirements here have an extra log n factor compared to an analysis that ignores the lengths of indices and pointers.
An algorithm may or may not count the output as part of its space usage. Since in-place algorithms usually overwrite their input with output, no additional space is needed. When writing the output to write-only memory or a stream, it may be more appropriate to consider only the working space of the algorithm. In theoretical applications such as log-space reductions, it is more typical to always ignore output space (in these cases it is more essential that the output is write-only).
Examples
Given an array a of n items, suppose we want an array that holds the same elements in reversed order and to dispose of the original. One seemingly simple way to do this is to create a new array of equal size, fill it with copies from a in the appropriate order and then delete a.
function reverse(a[0..n − 1])
    allocate b[0..n − 1]
    for i from 0 to n − 1
        b[n − 1 − i] := a[i]
    return b
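For concreteness, the pseudocode above might be written as follows in Python (an illustrative sketch, not part of the original article; the function name mirrors the pseudocode):

```python
def reverse(a):
    """Return a reversed copy of a; allocates a second array b of equal size."""
    n = len(a)
    b = [None] * n              # O(n) extra space
    for i in range(n):
        b[n - 1 - i] = a[i]
    return b
```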
Unfortunately, this requires O(n) extra space for having the arrays a and b available simultaneously. Also, allocation and deallocation are often slow operations. Since we no longer need a, we can instead overwrite it with its own reversal using the following in-place algorithm, which needs only a constant number (two) of integers for the auxiliary variables i and tmp, no matter how large the array is.
function reverse_in_place(a[0..n − 1])
    for i from 0 to floor((n − 2)/2)
        tmp := a[i]
        a[i] := a[n − 1 − i]
        a[n − 1 − i] := tmp
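A runnable Python version of the in-place pseudocode might look like this (an illustrative sketch; note that range(n // 2) is equivalent to iterating i from 0 to floor((n − 2)/2)):

```python
def reverse_in_place(a):
    """Reverse list a in place, using only the auxiliary variables i and tmp."""
    n = len(a)
    for i in range(n // 2):     # same bound as i = 0 .. floor((n - 2)/2)
        tmp = a[i]
        a[i] = a[n - 1 - i]
        a[n - 1 - i] = tmp
```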
As another example, a number of sorting algorithms can rearrange arrays into sorted order in-place, including bubble sort, comb sort, selection sort, insertion sort, heapsort and Shell sort. These algorithms require only a few pointers, so their space complexity is O(log n).
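As one sketch of the algorithms just listed, here is an in-place selection sort in Python (illustrative only; any of the listed algorithms would serve equally well):

```python
def selection_sort(a):
    """Sort list a in place; only the index variables i, j and m are used."""
    n = len(a)
    for i in range(n - 1):
        m = i                       # index of the minimum of a[i..n-1]
        for j in range(i + 1, n):
            if a[j] < a[m]:
                m = j
        a[i], a[m] = a[m], a[i]     # swap the minimum into position i
```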
The standard implementation of quicksort operates in-place on the data to be sorted, as it only ever swaps two elements. However, most implementations require O(log n) recursive function calls as part of the divide-and-conquer strategy. Since each recursive call has pointers of size O(log n), the total space required is O(log² n). Usually, quicksort is still considered an in-place algorithm, though it does require more space than the sorting algorithms listed earlier.
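A minimal sketch of such an implementation follows, using the Lomuto partition scheme (one common choice made here for illustration, not necessarily the "standard" one referred to above). The partition step only swaps elements of the array; the recursion supplies the O(log n) stack frames discussed in the text:

```python
def quicksort(a, lo=0, hi=None):
    """In-place quicksort: only swaps elements of a, but the recursion
    uses O(log n) stack frames on typical inputs."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return
    # Lomuto partition around the last element as pivot
    pivot = a[hi]
    i = lo
    for j in range(lo, hi):
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]       # place the pivot at its final index
    quicksort(a, lo, i - 1)
    quicksort(a, i + 1, hi)
```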
Most selection algorithms are also in-place, although some considerably rearrange the input array in the process of finding the final, constant-sized result.
Some text manipulation algorithms such as trim and reverse may be done in-place.
In computational complexity
In computational complexity theory, the strict definition of in-place algorithms includes all algorithms with O(1) space complexity, the class DSPACE(1). This class is very limited; it equals the regular languages.^{[1]} In fact, it does not even include any of the examples listed above.
We usually consider algorithms in L, the class of problems requiring O(log n) additional space, to be in-place. This class is more in line with the practical definition, as it allows numbers of size n as pointers or indices. This expanded definition still excludes quicksort, however, because of its recursive calls.
Identifying the in-place algorithms with L has some interesting implications; for example, it means that there is a (rather complex) in-place algorithm to determine whether a path exists between two nodes in an undirected graph,^{[2]} a problem that requires O(n) extra space using typical algorithms such as depth-first search (a visited bit for each node). This in turn yields in-place algorithms for problems such as determining if a graph is bipartite or testing whether two graphs have the same number of connected components. See SL for more information.
Role of randomness
In many cases, the space requirements of an algorithm can be drastically cut by using a randomized algorithm. For example, say we wish to know if two vertices in a graph of n vertices are in the same connected component of the graph. There is no known simple, deterministic, in-place algorithm to determine this, but if we simply start at one vertex and perform a random walk of about 20n³ steps, the chance that we will stumble across the other vertex, provided that it is in the same component, is very high. Similarly, there are simple randomized in-place algorithms for primality testing such as the Miller–Rabin primality test, and there are also simple in-place randomized factoring algorithms such as Pollard's rho algorithm. See RL and BPL for more discussion of this phenomenon.
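The random-walk idea can be sketched as follows (an illustrative Python version; the function name, the adjacency-list representation and the default step bound of 20n³ are choices made here to match the text, not a standard API). The walk stores only the current vertex and a step counter, and the error is one-sided: a True answer is certain, while a False answer may occasionally be wrong for vertices that are in fact connected:

```python
import random

def probably_connected(adj, s, t, steps=None):
    """Randomized connectivity test: random-walk from s and report
    whether t was ever visited. adj is an adjacency list."""
    n = len(adj)
    if steps is None:
        steps = 20 * n ** 3     # step bound from the text
    v = s
    for _ in range(steps):
        if v == t:
            return True
        if not adj[v]:          # isolated vertex: the walk is stuck
            break
        v = random.choice(adj[v])
    return v == t
```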
In functional programming
Functional programming languages often discourage or do not support explicit in-place algorithms that overwrite data, since this is a type of side effect; instead, they only allow new data to be constructed. However, good functional language compilers will often recognize when an object very similar to an existing one is created and the old one then thrown away, and will optimize this into a simple mutation under the hood.
Note that it is possible in principle to carefully construct in-place algorithms that don't modify data (unless the data is no longer being used), but this is rarely done in practice. See purely functional data structures.
References

^ Maciej Liśkiewicz and Rüdiger Reischuk. The Complexity World below Logarithmic Space. Structure in Complexity Theory Conference, pp. 64–78. 1994. Online: p. 3, Theorem 2.

^ Omer Reingold. Undirected ST-connectivity in Log-Space. Electronic Colloquium on Computational Complexity. No. 94.