In mathematics, a polynomial is an expression consisting of variables and constant coefficients, using only the operations of addition, subtraction, multiplication, and positive integer exponents. An example of a polynomial with a single variable x is x^{2} − 4x + 7.
Every polynomial can be written as a sum of monomials.^{[1]} For example, a polynomial in a single variable can be written as $\sum_{i=0}^n a_i x^i$ for some integer n. The smallest n for which a given nonzero polynomial can be written in this form is called the degree of the polynomial.
A polynomial function is a function which is defined by a polynomial.
According to the Oxford English Dictionary, the term polynomial was created from the earlier word binomial by replacing bi- with the Greek poly, "many", and was first used in the 17th century.^{[2]}
Polynomials appear in a wide variety of areas of mathematics and science. For example, they are used to form polynomial equations, which encode a wide range of problems, from elementary word problems to complicated problems in the sciences; they are used to define polynomial functions, which appear in settings ranging from basic chemistry and physics to economics and social science; they are used in calculus and numerical analysis to approximate other functions. In advanced mathematics, polynomials are used to construct polynomial rings, a central concept in algebra and algebraic geometry.
Definition
A polynomial in a single variable can be written in the form
- $a_n x^n + a_{n-1}x^{n-1} + \dotsb + a_2 x^2 + a_1 x + a_0,$
where the leading coefficient $a_n \ne 0.$
Or, more concisely:
- $\sum_{i=0}^n a_i x^i$
That is, a polynomial can either be zero or can be written as the sum of a finite number of non-zero terms. Each term consists of the product of a constant -- called the coefficient of the term^{[3]} -- and a finite number of variables, raised to non-negative integer powers. The exponent on a variable in a term is called the degree of that variable in that term; the degree of the term is the sum of the degrees of the variables in that term, and the degree of a polynomial is the largest degree of any one term. Since x = x^{1}, the degree of a variable without a written exponent is one. A term with no variables is called a constant term, or just a constant; the degree of the constant term is 0.^{[4]}
For example:
- $-5x^2y$
is a term. The coefficient is −5, the variables are x and y, the degree of x in the term is two, while the degree of y is one. The degree of the entire term is the sum of the degrees of each variable in it, so in this example the degree is 2 + 1 = 3.
Forming a sum of several terms produces a polynomial. For example, the following is a polynomial:
- $\underbrace{\,3x^2}_{\begin{smallmatrix}\mathrm{term}\\\mathrm{1}\end{smallmatrix}} \underbrace{-\,5x}_{\begin{smallmatrix}\mathrm{term}\\\mathrm{2}\end{smallmatrix}} \underbrace{+\,4}_{\begin{smallmatrix}\mathrm{term}\\\mathrm{3}\end{smallmatrix}}.$
It consists of three terms: the first is degree two, the second is degree one, and the third is degree zero.
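The bookkeeping of term and polynomial degrees can be sketched in Python; the representation of a term as a coefficient plus a variable-to-exponent mapping is an assumption of this example, not any standard library API:

```python
# A term is represented as (coefficient, {variable: exponent});
# this encoding is purely illustrative.

def term_degree(term):
    """Degree of a term = sum of the exponents of its variables."""
    _, exponents = term
    return sum(exponents.values())

def poly_degree(terms):
    """Degree of a polynomial = largest degree among its terms."""
    return max(term_degree(t) for t in terms)

# 3x^2 - 5x + 4 from the example above:
poly = [(3, {"x": 2}), (-5, {"x": 1}), (4, {})]
print([term_degree(t) for t in poly])  # [2, 1, 0]
print(poly_degree(poly))               # 2
```

The constant term 4 has an empty exponent mapping, so its degree comes out as 0, matching the convention stated above.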
Polynomials of small degree have been given specific names. A polynomial of degree zero is a constant polynomial or simply a constant. Polynomials of degree one, two or three are respectively linear polynomials, quadratic polynomials and cubic polynomials. For higher degrees the specific names are not commonly used, although quartic polynomial (for degree four) and quintic polynomial (for degree five) are sometimes used. The names for the degrees may be applied to the polynomial or to its terms. For example, in $x^2 + 2x + 1$ the term $2x$ is a linear term in a quadratic polynomial.
The polynomial 0, which may be considered to have no terms at all, is called the zero polynomial. Unlike other constant polynomials, its degree is not zero. Rather, the degree of the zero polynomial is either left explicitly undefined, or defined as negative (either −1 or −∞).^{[5]} These conventions are important when defining Euclidean division of polynomials. The zero polynomial is also unique in that it is the only polynomial having an infinite number of roots. In the case of polynomials in more than one variable, a polynomial is called homogeneous of degree n if all its terms have degree n. For example, $x^3y^2 + 7x^2y^3 - 3x^5$ is homogeneous of degree 5. For more details, see Homogeneous polynomial.
The commutative law of addition can be used to rearrange terms into any preferred order. In polynomials with one variable, the terms are usually ordered according to degree, either in "descending powers of x", with the term of largest degree first, or in "ascending powers of x". The polynomial in the example above is written in descending powers of x. The first term has coefficient 3, variable x, and exponent 2. In the second term, the coefficient is –5. The third term is a constant. Since the degree of a non-zero polynomial is the largest degree of any one term, this polynomial has degree two.^{[6]}
Two terms with the same variables raised to the same powers are called "similar terms" or "like terms", and they can be combined, using the distributive law, into a single term whose coefficient is the sum of the coefficients of the terms that were combined. It may happen that this makes the coefficient 0.^{[7]} Polynomials can be classified by the number of terms with nonzero coefficients, so that a one-term polynomial is called a monomial, a two-term polynomial is called a binomial, and so on. (Some authors use "monomial" to mean "monic monomial".^{[8]})
A polynomial in one variable is called a univariate polynomial, a polynomial in more than one variable is called a multivariate polynomial. These notions refer more to the kind of polynomials one is generally working with than to individual polynomials; for instance when working with univariate polynomials one does not exclude constant polynomials (which may result, for instance, from the subtraction of non-constant polynomials), although strictly speaking constant polynomials do not contain any variables at all. It is possible to further classify multivariate polynomials as bivariate, trivariate, and so on, according to the maximum number of variables allowed. Again, so that the set of objects under consideration be closed under subtraction, a study of trivariate polynomials usually allows bivariate polynomials, and so on. It is common, also, to say simply "polynomials in x, y, and z", listing the variables allowed. In this case, xy is allowed.
The evaluation of a polynomial consists of assigning a value to each variable and carrying out the indicated multiplications and additions. For polynomials in one variable, evaluation is usually more efficient using the Horner scheme:
- $(((\dotsb((a_n x + a_{n-1})x + a_{n-2})x + \dotsb + a_3)x + a_2)x + a_1)x + a_0.$
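A minimal Python sketch of this scheme (an illustrative helper, not a library function):

```python
def horner(coeffs, x):
    """Evaluate a_n x^n + ... + a_1 x + a_0, given coeffs = [a_n, ..., a_0],
    using the nested form above: one multiplication and one addition per coefficient."""
    result = 0
    for a in coeffs:
        result = result * x + a
    return result

# x^2 - 4x + 7 at x = 3: ((1*3 - 4)*3 + 7) = 4
print(horner([1, -4, 7], 3))  # 4
```

Each loop iteration peels off one layer of the nested parentheses, so an n-th degree polynomial is evaluated with n multiplications instead of the roughly n²/2 needed by naive powering.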
Arithmetic of polynomials
Polynomials can be added using the associative law of addition (grouping all their terms together into a single sum), possibly followed by reordering, and combining of like terms.^{[9]}^{[7]} For example, if
- $\begin{align} P &= 3x^2 - 2x + 5xy - 2 \\ Q &= -3x^2 + 3x + 4y^2 + 8 \end{align}$
then
- $P + Q = 3x^2 - 2x + 5xy - 2 - 3x^2 + 3x + 4y^2 + 8$
which can be simplified to
- $P + Q = x + 5xy + 4y^2 + 6$
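The combination of like terms can be sketched in Python; the {monomial: coefficient} dictionary encoding used here is one illustrative choice, not a standard API:

```python
from collections import defaultdict

def add_polys(p, q):
    """Add two polynomials given as {monomial: coefficient} dicts,
    where a monomial is a sorted tuple of (variable, exponent) pairs."""
    result = defaultdict(int)
    for poly in (p, q):
        for mono, coeff in poly.items():
            result[mono] += coeff          # combine like terms
    return {m: c for m, c in result.items() if c != 0}  # drop zero terms

# P = 3x^2 - 2x + 5xy - 2,  Q = -3x^2 + 3x + 4y^2 + 8  (as in the example above)
P = {(("x", 2),): 3, (("x", 1),): -2, (("x", 1), ("y", 1)): 5, (): -2}
Q = {(("x", 2),): -3, (("x", 1),): 3, (("y", 2),): 4, (): 8}
print(add_polys(P, Q) == {(("x", 1),): 1, (("x", 1), ("y", 1)): 5,
                          (("y", 2),): 4, (): 6})  # True
```

Note how the x² terms cancel to a zero coefficient and are dropped, exactly as in the simplification of P + Q above.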
To work out the product of two polynomials into a sum of terms, the distributive law is repeatedly applied, which results in each term of one polynomial being multiplied by every term of the other.^{[7]} For example, if
- $\begin{align} P &= 2x + 3y + 5 \\ Q &= 2x + 5y + xy + 1 \end{align}$
then
- $\begin{array}{rcl} PQ &=& (2x\cdot 2x) + (2x\cdot 5y) + (2x\cdot xy) + (2x\cdot 1) \\ && {}+ (3y\cdot 2x) + (3y\cdot 5y) + (3y\cdot xy) + (3y\cdot 1) \\ && {}+ (5\cdot 2x) + (5\cdot 5y) + (5\cdot xy) + (5\cdot 1) \end{array}$
which can be simplified to
- $PQ = 4x^2 + 21xy + 2x^2y + 12x + 15y^2 + 3xy^2 + 28y + 5$
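The term-by-term expansion above can be mirrored in Python with the same illustrative dictionary representation (a monomial as a sorted tuple of (variable, exponent) pairs; this encoding is an assumption of the example):

```python
from collections import defaultdict

def mul_polys(p, q):
    """Multiply polynomials given as {monomial: coefficient} dicts by applying
    the distributive law: every term of p times every term of q."""
    result = defaultdict(int)
    for m1, c1 in p.items():
        for m2, c2 in q.items():
            exps = dict(m1)                     # merge the exponents of both monomials
            for var, e in m2:
                exps[var] = exps.get(var, 0) + e
            mono = tuple(sorted(exps.items()))
            result[mono] += c1 * c2             # like terms accumulate here
    return {m: c for m, c in result.items() if c != 0}

# P = 2x + 3y + 5,  Q = 2x + 5y + xy + 1  (as in the example above)
P = {(("x", 1),): 2, (("y", 1),): 3, (): 5}
Q = {(("x", 1),): 2, (("y", 1),): 5, (("x", 1), ("y", 1)): 1, (): 1}
PQ = mul_polys(P, Q)
print(PQ[(("x", 2),)], PQ[(("x", 1), ("y", 1))])  # 4 21
```

The three xy contributions (2x·5y, 3y·2x, 5·xy) accumulate into the single coefficient 21, matching the simplified product above.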
Polynomial evaluation can be used to compute the remainder of polynomial division by a polynomial of degree one, since the remainder of the division of f(x) by (x-a) is f(a); see the polynomial remainder theorem. This is more efficient than the usual algorithm of division when the quotient is not needed.
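This shortcut can be sketched directly: by the polynomial remainder theorem, the remainder on division by (x − a) is just the value f(a), which Horner's scheme computes without performing the division (an illustrative helper, not a library function):

```python
def remainder_by_linear(coeffs, a):
    """Remainder of dividing f(x) (coeffs = [a_n, ..., a_0]) by (x - a).
    By the polynomial remainder theorem this equals f(a), computed here
    with Horner's scheme."""
    r = 0
    for c in coeffs:
        r = r * a + c
    return r

# f(x) = x^3 - x divided by (x - 2): remainder is f(2) = 8 - 2 = 6
print(remainder_by_linear([1, 0, -1, 0], 2))  # 6
```

No quotient is ever formed, which is exactly why this is cheaper than long division when only the remainder is needed.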
- A sum of polynomials is a polynomial.^{[4]}
- A product of polynomials is a polynomial.^{[4]}
- A composition of two polynomials is a polynomial, which is obtained by substituting a variable of the first polynomial by the second polynomial.^{[4]}
- The derivative of the polynomial a_{n}x^{n} + a_{n-1}x^{n-1} + ... + a_{2}x^{2} + a_{1}x + a_{0} is the polynomial na_{n}x^{n-1} + (n-1)a_{n-1}x^{n-2} + ... + 2a_{2}x + a_{1}. If the coefficients do not belong to a ring containing the integers (for example, if the coefficients are integers modulo some prime number p), then ka_{k} should be interpreted as the sum of a_{k} with itself, k times. For example, over the integers modulo p, the derivative of the polynomial x^{p}+1 is the polynomial 0.^{[10]}
- If the division by integers is allowed in the set of coefficients, a primitive or antiderivative of the polynomial a_{n}x^{n} + a_{n-1}x^{n-1} + ... + a_{2}x^{2} + a_{1}x + a_{0} is a_{n}x^{n+1}/(n+1) + a_{n-1}x^{n}/n + ... + a_{2}x^{3}/3 + a_{1}x^{2}/2 + a_{0}x +c, where c is an arbitrary constant. Thus x^{2}+1 is a polynomial with integer coefficients whose primitives are not polynomials over the integers. If this polynomial is viewed as a polynomial over the integers modulo 3 it has no primitive at all.
As for the integers, two kinds of divisions are considered for the polynomials. The first is the Euclidean division of polynomials, which generalizes the Euclidean division of the integers. It results in two polynomials, a quotient and a remainder, that are characterized by the following property: given two polynomials a and b such that b ≠ 0, there exists a unique pair of polynomials, q, the quotient, and r, the remainder, such that a = b q + r and degree(r) < degree(b) (here the zero polynomial is taken to have negative degree). By hand as well as with a computer, this division can be computed by the polynomial long division algorithm.^{[11]}
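The long division just described can be sketched in Python; the coefficient-list representation (descending powers) and the function name are illustrative choices, not a standard API:

```python
from fractions import Fraction

def poly_divmod(num, den):
    """Euclidean division of univariate polynomials with rational coefficients.
    Polynomials are coefficient lists in descending powers, e.g. x^2 - 1 -> [1, 0, -1].
    Returns (quotient, remainder) with deg(remainder) < deg(den)."""
    num = [Fraction(c) for c in num]
    den = [Fraction(c) for c in den]
    if len(num) < len(den):
        return [Fraction(0)], num
    quot = []
    rem = num[:]
    for i in range(len(num) - len(den) + 1):
        coeff = rem[i] / den[0]        # divide leading terms, as in long division
        quot.append(coeff)
        for j, d in enumerate(den):
            rem[i + j] -= coeff * d    # subtract coeff * den shifted into place
    return quot, rem[-(len(den) - 1):] if len(den) > 1 else [Fraction(0)]

# (x^2 - 4x + 7) / (x - 2): quotient x - 2, remainder 3
q, r = poly_divmod([1, -4, 7], [1, -2])
print([int(c) for c in q], [int(c) for c in r])  # [1, -2] [3]
```

One can check the defining property a = bq + r here: (x − 2)(x − 2) + 3 = x² − 4x + 7, and the remainder has degree 0 < degree(x − 2) = 1.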
All polynomials with coefficients in a unique factorization domain (for example, the integers or a field) also have a factored form in which the polynomial is written as a product of irreducible polynomials and a constant. This factored form is unique up to the order of the factors and their multiplication by an invertible constant. In the case of the field of complex numbers, the irreducible factors are linear. Over the real numbers, they have degree one or two. Over the integers and the rational numbers the irreducible factors may have any degree.^{[12]} For example, the factored form of
- $5x^3 - 5$
is
- $5(x - 1)\left(x^2 + x + 1\right)$
over the integers and the reals and
- $5(x - 1)\left(x + \frac{1 + i\sqrt{3}}{2}\right)\left(x + \frac{1 - i\sqrt{3}}{2}\right)$
over the complex numbers.
The computation of the factored form, called factorization, is, in general, too difficult to be done by hand. However, there are efficient algorithms (see Polynomial factorization) that are available in most computer algebra systems.
A formal quotient of polynomials, that is, an algebraic fraction where the numerator and denominator are polynomials, is called a "rational expression" or "rational fraction" and is not, in general, a polynomial. Division of a polynomial by a number, however, does yield another polynomial. For example, x^{3}/12 is considered a valid term in a polynomial (and a polynomial by itself) because it is equivalent to (1/12)x^{3} and 1/12 is just a constant. When this expression is used as a term, its coefficient is therefore 1/12. For similar reasons, if complex coefficients are allowed, one may have a single term like (2 + 3i) x^{3}; even though it looks like it should be expanded to two terms, the complex number 2 + 3i is one complex number, and is the coefficient of that term. The expression 1/(x^{2} + 1) is not a polynomial because it includes division by a non-constant polynomial. The expression (5 + y)^{x} is not a polynomial, because it contains a variable used as exponent.
Since subtraction can be replaced by addition of the opposite quantity, and since positive integer exponents can be replaced by repeated multiplication, all polynomials can be constructed from constants and variables using only addition and multiplication.
Polynomial functions
A polynomial function is a function that can be defined by evaluating a polynomial. A function ƒ of one argument is called a polynomial function if it satisfies
- $f(x) = a_n x^n + a_{n-1} x^{n-1} + \cdots + a_2 x^2 + a_1 x + a_0$
for all arguments x, where n is a non-negative integer and a_{0}, a_{1},a_{2}, ..., a_{n} are constant coefficients.
For example, the function ƒ, taking real numbers to real numbers, defined by
- $f(x) = x^3 - x$
is a polynomial function of one argument. Polynomial functions of multiple arguments can also be defined, using polynomials in multiple variables, as in
- $f(x,y) = 2x^3 + 4x^2y + xy^5 + y^2 - 7.$
Another example is the function $f(x)=\cos(2\arccos(x))$ which, although it does not look like a polynomial, is a polynomial function on $[-1,1]$, since for every $x$ in $[-1,1]$ it is true that $f(x)=2x^2-1$ (see Chebyshev polynomials).
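The identity behind this example can be spot-checked numerically in Python:

```python
import math

# Check that cos(2*arccos(x)) agrees with the polynomial 2x^2 - 1
# at several sample points of [-1, 1].
for x in [-1.0, -0.5, 0.0, 0.3, 1.0]:
    lhs = math.cos(2 * math.acos(x))
    rhs = 2 * x * x - 1
    assert abs(lhs - rhs) < 1e-12
print("identity verified on sample points")
```

This is only a numerical spot check, not a proof; the identity itself is the degree-2 case of the defining property of the Chebyshev polynomials.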
Polynomial functions are a class of functions having many important properties. They are all continuous, smooth, entire, computable, etc.
Graphs of polynomial functions
- Polynomial of degree 2: f(x) = x^{2} - x - 2 = (x+1)(x-2)
- Polynomial of degree 3: f(x) = x^{3}/4 + 3x^{2}/4 - 3x/2 - 2 = 1/4 (x+4)(x+1)(x-2)
- Polynomial of degree 4: f(x) = 1/14 (x+4)(x+1)(x-1)(x-3) + 0.5
- Polynomial of degree 5: f(x) = 1/20 (x+4)(x+2)(x+1)(x-1)(x-3) + 2
- Polynomial of degree 6: f(x) = 1/30 (x+3.5)(x+2)(x+1)(x-1)(x-3)(x-4) + 2
- Polynomial of degree 7: f(x) = (x-3)(x-2)(x-1)(x)(x+1)(x+2)(x+3)
A polynomial function in one real variable can be represented by a graph.
- The graph of the zero polynomial
- f(x) = 0
- is the x-axis.
- The graph of a degree 0 polynomial
- f(x) = a_{0}, where a_{0} ≠ 0,
- is a horizontal line with y-intercept a_{0}.
- The graph of a degree 1 polynomial (or linear function)
- f(x) = a_{0} + a_{1}x , where a_{1} ≠ 0,
- is an oblique line with y-intercept a_{0} and slope a_{1}.
- The graph of a degree 2 polynomial
- f(x) = a_{0} + a_{1}x + a_{2}x^{2}, where a_{2} ≠ 0
- is a parabola.
- The graph of a degree 3 polynomial
- f(x) = a_{0} + a_{1}x + a_{2}x^{2} + a_{3}x^{3}, where a_{3} ≠ 0,
- is a cubic curve.
- The graph of any polynomial with degree 2 or greater
- f(x) = a_{0} + a_{1}x + a_{2}x^{2} + ... + a_{n}x^{n} , where a_{n} ≠ 0 and n ≥ 2
- is a continuous non-linear curve.
The graph of a non-constant (univariate) polynomial always tends to infinity when the variable increases indefinitely (in absolute value).
Polynomial graphs are analyzed in calculus using intercepts, slopes, concavity, and end behavior.
Polynomial equations
A polynomial equation, also called an algebraic equation, is an equation of the form^{[13]}
- $a_n x^n + a_{n-1}x^{n-1} + \dotsb + a_2 x^2 + a_1 x + a_0 = 0$
For example,
- $3x^2 + 4x - 5 = 0$
is a polynomial equation.
In the case of a univariate polynomial equation, the variable is considered an unknown, and one seeks to find the possible values for which both members of the equation evaluate to the same value (in general more than one solution may exist). A polynomial equation stands in contrast to a polynomial identity like (x + y)(x − y) = x^{2} − y^{2}, where both members represent the same polynomial in different forms, and as a consequence any evaluation of both members gives a valid equality. This means that a polynomial identity is a polynomial equation for which all possible values of the unknowns are solutions.
In elementary algebra, methods such as the quadratic formula are taught for solving all first degree and second degree polynomial equations in one variable. There are also formulas for the cubic and quartic equations. For higher degrees, the Abel–Ruffini theorem asserts that there can exist no general formula in radicals, so in general only numerical approximations of the roots may be computed (see Root-finding algorithm). The number of solutions cannot exceed the degree, and equals the degree when the complex solutions are counted with their multiplicity. This fact is called the fundamental theorem of algebra.
Solving polynomial equations
Every polynomial P in x corresponds to a function, ƒ(x) = P (where the occurrences of x in P are interpreted as the argument of ƒ), called the polynomial function of P; the equation in x setting f(x) = 0 is the polynomial equation corresponding to P. The solutions of this equation are called the roots of the polynomial; they are the zeroes of the function ƒ (corresponding to the points where the graph of ƒ meets the x-axis). A number a is a root of P if and only if the polynomial x − a (of degree one in x) divides P. It may happen that x − a divides P more than once: if (x − a)^{2} divides P then a is called a multiple root of P, and otherwise a is called a simple root of P. If P is a nonzero polynomial, there is a highest power m such that (x − a)^{m} divides P, which is called the multiplicity of the root a in P. When P is the zero polynomial, the corresponding polynomial equation is trivial, and this case is usually excluded when considering roots: with the above definitions every number would be a root of the zero polynomial, with undefined (or infinite) multiplicity. With this exception made, the number of roots of P, even counted with their respective multiplicities, cannot exceed the degree of P.^{[14]} The relation between the roots of a polynomial and its coefficients is described by Viète's formulas.
Some polynomials, such as x^{2} + 1, do not have any roots among the real numbers. If, however, the set of allowed candidates is expanded to the complex numbers, every non-constant polynomial has at least one root; this is the fundamental theorem of algebra. By successively dividing out factors x − a, one sees that any polynomial with complex coefficients can be written as a constant (its leading coefficient) times a product of such polynomial factors of degree 1; as a consequence, the number of (complex) roots counted with their multiplicities is exactly equal to the degree of the polynomial.
There is a difference between approximating roots and finding exact expressions for roots. Formulas for expressing the roots of polynomials of degree 2 in terms of square roots have been known since ancient times (see quadratic equation), and for polynomials of degree 3 or 4 similar formulas (using cube roots in addition to square roots) were found in the 16th century (see cubic function and quartic function for the formulas and Niccolo Fontana Tartaglia, Lodovico Ferrari, Gerolamo Cardano, and Vieta for historical details). But formulas for degree 5 eluded researchers. In 1824, Niels Henrik Abel proved the striking result that there can be no general (finite) formula, involving only arithmetic operations and radicals, that expresses the roots of a polynomial of degree 5 or greater in terms of its coefficients (see Abel-Ruffini theorem). In 1830, Évariste Galois, studying the permutations of the roots of a polynomial, extended the Abel-Ruffini theorem by showing that, given a polynomial equation, one may decide if it is solvable by radicals, and, if it is, solve it. This result marked the start of Galois theory and Group theory, two important branches of modern mathematics. Galois himself noted that the computations implied by his method were impracticable. Nevertheless, formulas for solvable equations of degrees 5 and 6 have been published (see quintic function and sextic equation).
Numerical approximation of the roots of a polynomial equation in one unknown is easily done on a computer by the Jenkins–Traub method, Laguerre's method, the Durand–Kerner method, or some other root-finding algorithm.^{[15]}
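As a rough illustration of numerical root approximation, a plain Newton iteration can be sketched in pure Python; this is a simple stand-in for the more robust methods named above, and the helper names are illustrative:

```python
def newton_root(coeffs, x0, steps=50):
    """Approximate a real root of a polynomial (coeffs in descending powers)
    by Newton's method: x <- x - f(x)/f'(x)."""
    def eval_poly(cs, x):
        r = 0.0
        for c in cs:           # Horner evaluation
            r = r * x + c
        return r
    n = len(coeffs) - 1
    deriv = [c * (n - i) for i, c in enumerate(coeffs[:-1])]  # f'(x) coefficients
    x = x0
    for _ in range(steps):
        dfx = eval_poly(deriv, x)
        if dfx == 0:           # flat tangent: give up rather than divide by zero
            break
        x -= eval_poly(coeffs, x) / dfx
    return x

# A root of x^3 - x near the starting point 1.5 converges to 1
print(round(newton_root([1, 0, -1, 0], 1.5), 6))  # 1.0
```

Newton's method converges fast near a simple root but can fail near multiple roots or bad starting points, which is why production root finders use the more careful algorithms mentioned above.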
For polynomials in more than one variable the notion of root does not exist, and there are usually infinitely many combinations of values of the variables for which the polynomial function takes the value zero. However, for certain sets of such polynomials it may happen that only finitely many combinations make all the polynomial functions take the value zero.
For a set of polynomial equations in several unknowns, there are algorithms to decide if they have a finite number of complex solutions. If the number of solutions is finite, there are algorithms to compute the solutions. The methods underlying these algorithms are described in the article systems of polynomial equations.
The special case where all the polynomials are of degree one is called a system of linear equations, for which another range of different solution methods exist, including the classical Gaussian elimination.
Polynomials associated to other objects
Calculus
Main article: Calculus with polynomials
The simple structure of polynomial functions makes them quite useful in analyzing more complex functions via polynomial approximations. Important examples in calculus are Taylor's theorem, which roughly states that every differentiable function locally looks like a polynomial function, and the Stone–Weierstrass theorem, which states that every continuous function defined on a compact interval of the real axis can be approximated on the whole interval as closely as desired by a polynomial function.
Calculating derivatives and integrals of polynomial functions is particularly simple. For the polynomial function
- $\sum_{i=0}^n a_i x^i$
the derivative with respect to x is
- $\sum_{i=1}^n a_i i x^{i-1}$
and the indefinite integral is
- $\sum_{i=0}^n \frac{a_i}{i+1} x^{i+1} + c.$
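The two formulas above amount to simple operations on coefficient lists, which can be sketched in Python (the ascending-powers list representation is an illustrative choice, not a standard API):

```python
from fractions import Fraction

def derivative(coeffs):
    """Derivative of sum a_i x^i; coeffs = [a_0, a_1, ..., a_n] (ascending powers)."""
    return [i * a for i, a in enumerate(coeffs)][1:] or [0]

def antiderivative(coeffs, c=0):
    """Indefinite integral of sum a_i x^i, with constant of integration c."""
    return [c] + [Fraction(a, i + 1) for i, a in enumerate(coeffs)]

# p(x) = x^3 - x  ->  p'(x) = 3x^2 - 1
print(derivative([0, -1, 0, 1]))  # [-1, 0, 3]
```

Rational coefficients are used in the antiderivative because, as noted in the list of properties above, dividing by i + 1 can leave the original coefficient ring.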
Abstract algebra
In abstract algebra, one distinguishes between polynomials and polynomial functions. A polynomial f in one variable X over a ring R is defined as a formal expression of the form
- $f = a_n X^n + a_{n-1} X^{n-1} + \cdots + a_1 X^1 + a_0 X^0$
where n is a natural number, the coefficients $a_0,\ldots,a_n$ are elements of R, and X is a formal symbol, whose powers X^{i} are just placeholders for the corresponding coefficients a_{i}, so that the given formal expression is just a way to encode the sequence $(a_0, a_1, \ldots)$, where there is an n such that a_{i} = 0 for all i > n. Two polynomials sharing the same value of n are considered equal if and only if the sequences of their coefficients are equal; furthermore any polynomial is equal to any polynomial with greater value of n obtained from it by adding terms in front whose coefficient is zero. These polynomials can be added by simply adding corresponding coefficients (the rule for extending by terms with zero coefficients can be used to make sure such coefficients exist). Thus each polynomial is actually equal to the sum of the terms used in its formal expression, if such a term a_{i}X^{i} is interpreted as a polynomial that has zero coefficients at all powers of X other than X^{i}. Then, to define multiplication, it suffices by the distributive law to describe the product of any two such terms, which is given by the rule
- $a X^k \; b X^l = ab X^{k+l}.$
Formation of the polynomial ring, together with forming factor rings by factoring out ideals, are important tools for constructing new rings out of known ones. For instance, the ring (in fact, field) of complex numbers can be constructed from the polynomial ring R[X] over the real numbers by factoring out the ideal of multiples of the polynomial X^{2} + 1. Another example is the construction of finite fields, which proceeds similarly, starting out with the field of integers modulo some prime number as the coefficient ring R (see modular arithmetic).
Polynomials are frequently used to encode information about some other object. The characteristic polynomial of a matrix or linear operator contains information about the operator's eigenvalues. The minimal polynomial of an algebraic element records the simplest algebraic relation satisfied by that element. The chromatic polynomial of a graph counts the number of proper colourings of that graph.
The term "polynomial", as an adjective, can also be used for quantities or functions that can be written in polynomial form. For example, in computational complexity theory the phrase polynomial time means that the time it takes to complete an algorithm is bounded by a polynomial function of some variable, such as the size of the input.
Rings of polynomials in a finite number of variables are of fundamental importance in algebraic geometry, which studies the simultaneous zero sets of several such multivariate polynomials. These rings can alternatively be constructed by iterating the construction of univariate polynomials with another ring of polynomials as the coefficient ring: thus the ring R[X,Y] of polynomials in X and Y can be viewed as the ring (R[X])[Y] of polynomials in Y whose coefficients are polynomials in X, or as the ring (R[Y])[X] of polynomials in X whose coefficients are polynomials in Y. These identifications are compatible with the arithmetic operations (they are isomorphisms of rings), but some notions, such as the degree or whether a polynomial is considered monic, may change between these points of view. One can construct rings of polynomials in infinitely many variables, but, since polynomials are (finite) expressions, any individual polynomial can only contain finitely many variables.
A binary polynomial where the second variable takes the form of an exponential function applied to the first variable, for example P(X, e^{X}), may be called an exponential polynomial.
Determining the roots of polynomials, or "solving algebraic equations", is among the oldest problems in mathematics. However, the elegant and practical notation we use today only developed beginning in the 15th century. Before that, equations were written out in words. For example, an algebra problem from the Chinese Arithmetic in Nine Sections, circa 200 BCE, begins "Three sheafs of good crop, two sheafs of mediocre crop, and one sheaf of bad crop are sold for 29 dou." We would write 3x + 2y + z = 29.