Do we really need polynomials (in contrast to polynomial functions)?
In the following I'm going to call

- a polynomial expression an element of a suitable algebraic structure (for example a ring, since it has an addition and a multiplication) that has the form $a_{n}x^{n}+\cdots+a_{1}x+a_{0}$, where $x$ is some fixed element of said structure,
- a polynomial function a function of the form $x\mapsto a_{n}x^{n}+\cdots+a_{1}x+a_{0}$ (where the same algebraic considerations as above apply),
- a polynomial an element of the form $a_{n}X^{n}+\cdots+a_{1}X+a_{0}$ with an indeterminate $X$ (these can be formalized, if we are in a ring, as strings of $0$s and a $1$).
Note that when we are very rigorous/formal, polynomial expressions and polynomials are different things (although in daily life we often use the terms synonymously). Polynomial functions and polynomial expressions are also different from each other, although in this case the relationship is a closer one, since every polynomial expression can be interpreted as a polynomial function evaluated at a certain point (thus "polynomial functions" are something more general than "polynomial expressions").
My question is: Why do we use polynomials? It seems to me that in every piece of mathematics I have encountered so far, one could replace every occurrence of polynomials with polynomial expressions/functions without any major change in the rest of the proof/theorem/definition etc.
The only reasons that I can see to use polynomials are the following two:
1. After one makes precise the idea that one can plug "suitable" elements into polynomials (elements which may lie in a ring containing the ring in which the coefficients live), one can save time in certain settings by handling the "plugging of suitable elements into polynomials" more elegantly. For example, take the Cayley-Hamilton theorem, which in its "polynomial function" version would look like:
Let $A$ be an $n\times n$ matrix over $K$, whose characteristic polynomial (function) is $x\mapsto a_{n}x^{n}+\cdots+a_{1}x+a_{0}$. Then
$$
a_{n}A^{n}+\cdots+a_{1}A+a_{0}I=0,
$$
whereas the "polynomial" version looks more elegant:
Let $A$ be an $ntimes n$ matrix over $K$, whose characteristic
polynomial is $p_{A}in Kleft[Xright]$. Then
$$
p_{A}left(Aright)=0.
$$
2. The only thing that polynomials can "do" that polynomial expressions/functions can't is to be different while the corresponding expressions/functions are the same (i.e., there's a theorem that tells us that the mapping from polynomials to polynomial expressions/functions isn't injective if the field is finite). Maybe this small difference makes a big enough difference to consider polynomials after all, but as I said, I haven't encountered any situation in which this difference could manifest itself.
(I'm guessing that maybe cryptography or higher number theory really needs polynomials and not just polynomial expressions/functions. Since I don't know anything about these subjects, I would be very happy with an example of a theorem (whose content isn't merely technical, as is the case with the theorem above) involving polynomials, where these absolutely cannot be replaced by polynomial expressions/functions. Conversely, I would also be happy with an authoritative statement from someone knowledgeable that we could indeed dispense with polynomials if we wanted to.)
abstract-algebra functions polynomials soft-question finite-fields
In a finite field it's different. There are finitely many functions, but infinitely many polynomial expressions.
– mez
May 13 '13 at 13:26
A general point that is not specific to the issue at hand (already well addressed in great answers): when you have two similar mathematical structures, the question you should ask is not "Can I get rid of Structure A and just use Structure B?", but rather "What are the differences between Structure A and Structure B, and when is it more useful to use one structure instead of the other?"
– Michael Joyce
May 15 '13 at 1:18
@MichaelJoyce Very well said! And with so many great answers a very difficult choice awaits...
– temo
May 19 '13 at 10:46
6 Answers
If the mapping from polynomials to polynomial functions is not injective, then one cannot talk about the coefficients of a polynomial function. Without coefficients, one cannot do Euclidean division. Without Euclidean division, not much of what one does readily with polynomials over a field can be done (defining minimal polynomials of linear operators is one such thing; there are many others). In short, in algebra one works with polynomials, not with polynomial functions.
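To see how little the function viewpoint is needed here, consider the following minimal sketch (Python; a toy illustration, with coefficients assumed to lie in $\mathbf Q$ and lists written highest degree first) of Euclidean division working purely on coefficient sequences:

```python
from fractions import Fraction

def poly_divmod(f, g):
    # Euclidean division f = q*g + r on coefficient lists (highest degree
    # first) over Q; g must have a nonzero leading coefficient. Note that
    # only coefficients are ever touched -- no values of any function.
    f = [Fraction(c) for c in f]
    g = [Fraction(c) for c in g]
    q = [Fraction(0)] * max(len(f) - len(g) + 1, 1)
    while len(f) >= len(g):
        k = len(f) - len(g)              # degree gap
        c = f[0] / g[0]                  # next quotient coefficient
        q[len(q) - 1 - k] = c
        f = [a - c * b for a, b in zip(f, g + [Fraction(0)] * k)][1:]
    return q, f                          # remainder f has degree < deg g

# x^3 - 1 = (x^2 + x + 1)(x - 1) + 0:
print(poly_divmod([1, 0, 0, -1], [1, -1]))
```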
Note that the ring of polynomial functions over a finite field $F$ (which is in fact the ring of all functions $F\to F$) is not an integral domain; it is in fact a rather ugly object from the algebraic point of view (a product ring of $\#F$ copies of $F$).
However, if you want a statement from someone knowledgeable that indeed we could dispense with polynomials, note that Serge Lang in Introduction to Linear Algebra (2) Chapter IX defines polynomials over an arbitrary field $K$ as polynomial functions, and goes on to state a Theorem 1 saying that the mapping from coefficient sequences to polynomials (i.e., functions) is injective (which, as observed in the question, fails for finite fields). The proof of the theorem is conveniently only given for the case where $K$ is the real or complex numbers. I certainly would not want to exclude Serge Lang from the set of knowledgeable people, but it seems that in this particular instance he made a regrettable error where he should have known better.
Added: Leafing through the cited book, I recently found out a detail that might explain what Lang did. I thought I knew without having to look it up what was meant by "a field $K$"; however, in section II.2 of the book it says "Let $K$ be a subset of the complex numbers $\mathbf C$. We shall say that $K$ is a field if it satisfies the following conditions:" (requiring closure under addition, multiplication, negation and inversion of nonzero $x$, and containment of $0,1$). The proof later given of uniqueness of polynomial coefficients is by considering the asymptotic behaviour of the polynomial function for large arguments. The proof is immediately followed by the cryptic sentences "We have used the notion of limit. In Chapter XII, §1 we shall prove the same statement for more general fields, by a method which is completely algebraic." This leaves me wondering which "more general" fields are intended; given the earlier definition one might assume "general subfields of $\mathbf C$", but the proof using limits would appear to do fine for any such field. Anyway I cannot see which proof is being referred to. There is Corollary 3 saying that a polynomial in $K[t]$ of degree $n$ has at most $n$ roots in $K$, but that is not the same statement. More importantly, it assumes that degree has been defined, and it uses Euclidean division; as these notions use the existence of (unique) polynomial coefficients, this would appear to be circular.
(+1) for the well-explained answer and (+100) for the comment about Lang.
– Ittay Weiss
May 13 '13 at 18:26
+1. Lang's proof probably works for any infinite field, doesn't it? Well, it's a silly mistake he made, but considering the number of books he wrote, this happens.
– user88576
Feb 20 '14 at 23:28
@amoreacceptablename No, the proof works (only) for subfields of the complex numbers; see my added paragraph.
– Marc van Leeuwen
Feb 21 '14 at 9:13
Thanks @MarcvanLeeuwen, very kind of you!
– user88576
Feb 22 '14 at 23:57
You write that polynomial functions are "more general" than polynomial expressions. In fact the exact opposite is the case: multiple polynomial expressions may be the same function (on a finite field), so it's polynomial expressions that are more general.
Here are a few ways that polynomial expressions are useful where it could (or would) be awkward to use polynomial functions.
If $F$ is a field of $p^r$ elements, for a prime $p$ and positive integer $r$, then $F$ is the splitting field over $\mathbf{F}_p$ of $x^{p^r} - x$. But for every $a \in F$, $a^{p^r} = a$, so $a^{p^r} - a = 0$. You don't want to say $F$ is the splitting field of, well, "zero". The distinction between the nonzero polynomial $x^{p^r} - x$ in $F[x]$ and the zero polynomial is essential for that.
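A quick numerical sanity check of the special case $r=1$ (a toy Python sketch; the choice $p=7$ is arbitrary):

```python
p = 7  # any prime
# Every a in F_p satisfies a^p = a (Fermat), so x^p - x vanishes identically
# as a function, although its coefficient sequence is plainly nonzero.
assert all((pow(a, p, p) - a) % p == 0 for a in range(p))
```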
In case you think "I don't care about finite fields", well, some properties of sequences of integers are proved by making them into the coefficients of a polynomial and then reducing the polynomial mod p, which places the polynomial into ${mathbf F}_p[x]$, where there are convenient features that are not available in ${mathbf Z}[x]$. One example of this is a slick proof of Lucas's congruence on binomial coefficients. It's crucial for this that we can recover the coefficients of a polynomial in ${mathbf F}_p[x]$ even if its degree is greater than $p$, which is impossible if you think of polynomials in ${mathbf F}_p[x]$ as functions ${mathbf F}_p rightarrow {mathbf F}_p$.
Quotient rings of $A[x]$, where $A$ is a ring, may not look like functions from an elementary point of view. Do you feel it's easier to work with $\mathbf{Q}[x]/(x^5)$ using the viewpoint of polynomials as functions rather than expressions?
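For a taste of how coefficient-bound this quotient ring is, here is a minimal sketch of multiplication in $\mathbf{Q}[x]/(x^5)$ (a toy illustration): one simply drops every term of degree at least $5$, an operation phrased entirely in terms of coefficients:

```python
def mul_mod_x5(f, g):
    # f, g: coefficient lists (lowest degree first) representing classes
    # in Q[x]/(x^5); multiply and discard everything of degree >= 5.
    h = [0] * 5
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            if i + j < 5:
                h[i + j] += a * b
    return h

# (1 + 2x + x^2) * x^3 = x^3 + 2x^4 + x^5, which equals x^3 + 2x^4 mod x^5:
print(mul_mod_x5([1, 2, 1], [0, 0, 0, 1]))  # [0, 0, 0, 1, 2]
```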
Would you like to define the ring of noncommutative polynomials in two variables as functions?
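For contrast, here is a minimal sketch of noncommutative multiplication (a toy illustration in which monomials are represented as words, an encoding assumed just for the demo); $xy$ and $yx$ remain distinct:

```python
from collections import defaultdict

def nc_mul(f, g):
    # f, g: dicts mapping words in {'x','y'} to coefficients; multiplication
    # concatenates words, so 'x'*'y' = 'xy' differs from 'y'*'x' = 'yx'.
    h = defaultdict(int)
    for u, a in f.items():
        for v, b in g.items():
            h[u + v] += a * b
    return dict(h)

x, y = {'x': 1}, {'y': 1}
print(nc_mul(x, y))  # {'xy': 1}
print(nc_mul(y, x))  # {'yx': 1} -- a different element
```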
Generating functions are important objects in mathematics (not just in combinatorics), and they are formal power series rather than numerical series. Most of them have positive radius of convergence (assuming their coefficients are in a ring like $\mathbf{R}$ or $\mathbf{C}$ and thus have a region where it makes sense to substitute a scalar for the variable at all), so they can be treated as functions on the domain of convergence, but it is convenient that the theory of formal power series doesn't really need to rely on the radius of convergence to get off the ground. Sometimes a generating function may even have radius of convergence equal to 0 but still contain useful information that can be legally extracted because formal power series can be developed without needing a radius of convergence. Of course a lot of important intuition about the way formal power series are manipulated is inspired by the properties of classical numerical power series as functions.
Your question mentions three types of polynomial objects: polynomial expressions, polynomial functions, and polynomials as strings of coefficients. I'd say the first and third are really synonymous (the third is really just a rigorous form of the first, not materially different in practice), but they're not the same as polynomial functions if you work over a finite field. If the coefficients form an infinite field, then different polynomial expressions do define different functions on the field.
I have two remarks: 1) I think you misunderstood what I meant by polynomial expressions: Let's say that $x=4$ and that we consider the polynomial expression $2x+3$; then the number $11=2\cdot 4+3$ is a polynomial expression for me. Another example: Let $x$ be some number in $[0,1]$ and consider the polynomial expression $4x^2-1$. Then $4x^2-1$ is again a number, although we don't know which one, since we don't know what $x$ is. But the "nature" of $4x^2-1$ is definitely numeric. In contrast to that, the polynomial $4x^2-1$ is actually the string $(-1,0,4,0,0,\ldots)$. [con...]
– temo
May 13 '13 at 13:15
[...tinuing] So objects of the first and third type are fundamentally different. I think what you understood "polynomial expressions" to be is a nonrigorous way of speaking about strings of coefficients - am I right about that? But of course, none of this makes your very good and instructive answer (!) less readable! 2) Could you please elaborate a bit more on examples 3 and 4? It seems to me that a lot more is hiding there than I'm currently able to see. [another con...
– temo
May 13 '13 at 13:19
...tinuation] Since, for example, I don't know any applications of the ring of noncommutative polynomials in two variables, I can't see why it would be bad to define it as the set of all maps $K^2\rightarrow K$ for some field $K$. I also can't see why it's harder to work in $\mathbf{Q}[x]/(x^5)$ with functions than with polynomials. If you could provide me with a short explanation, I could come back to this later, when I have more background in quotient rings and polynomials in two variables, and make sense of the explanation.
– temo
May 13 '13 at 13:43
A noncommutative polynomial in two variables is not a map $K^2 \rightarrow K$, since $xy^2$, $yxy$, and $y^2x$ are different. You know examples of noncommutative polynomials (well, a quotient ring of the noncommutative polynomials): differential operators with polynomial coefficients. If $D = d/dx$ on smooth functions, and polynomials act on functions as multiplication (so "$x$" means the function $f \mapsto xf$ for smooth $f$), then the product rule $D(xf) = xD(f) + f$ is the same as $Dx = xD + 1$. Polynomial differential operators form a ring $\mathbf{R}[x,D]$ where $Dx = xD + 1$.
– KCd
May 14 '13 at 3:16
Over some fields, the map from polynomials to polynomial functions is not injective; for example, over the finite field $\mathbb{F}_2$, the polynomial function $x^2+x$ is the same as the $0$ function - try plugging in $0$ and $1$. So in this case the (infinite) set of polynomials over $\mathbb{F}_2$ is much larger than the (finite) set of polynomial functions $\mathbb{F}_2\to\mathbb{F}_2$.
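This is easy to check mechanically (a quick Python verification, with arithmetic taken mod $2$):

```python
# x^2 + x vanishes at both elements of F_2, though its coefficient
# sequence (0, 1, 1) is not the zero sequence:
assert all((x * x + x) % 2 == 0 for x in (0, 1))
```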
The same, however, is not true for every field extension of $\mathbb{F}_2$. If there's an element $x$ for which neither $x$ nor $x+1$ is zero, then $x^2+x=x(x+1)$ won't be identically zero. Since additive inverses are unique in fields, such an element $x$ must even exist in every extension of $\mathbb{F}_2$: just pick any $x \neq 0$ which doesn't happen to be $-1$. In algebra, you regularly take a polynomial over a field and look at its behaviour over extension fields, so identifying a polynomial with the function it induces on a particular field clearly doesn't cut it.
– fgp
May 13 '13 at 10:25
Yes, I know that this is the case. This was what I meant when I said that "the mapping of polynomials to polynomial expressions/functions isn't injective". But what good is this fact? The fact alone that this mapping isn't injective surely can't be the only reason we consider polynomials (see the last paragraph of my question, written in boldface).
– temo
May 13 '13 at 11:05
@temo Apologies, I'm not sure now what I read there instead of what you wrote. The other answers expand on the reasons for this a lot more (as I'd hoped would happen). Maybe this answer should even have been a comment, I thought it was borderline when I wrote it.
– mdp
May 13 '13 at 11:54
@MattPressland No problem. (I edited my question slightly after your answer - you can look it up in the revisions - in the hope of making it clearer what I mean for future readers.)
– temo
May 13 '13 at 13:45
Constructing field extensions is a very important operation. This is most conveniently done as follows. Suppose we wish to adjoin to a field $F$ an element $\alpha$ that will satisfy a certain algebraic equation; that is, there is a polynomial $P(X)\in F[X]$ such that we wish $P(\alpha)=0$ (notice that here we switch between the polynomial $P(X)$ and its associated function evaluated at $\alpha$). If $P(X)$ is irreducible, then such a field extension is constructed as $F[X]/(P(X))$. It is crucial for this construction to take $F[X]$ to be the ring of polynomials, not polynomial functions.
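As a concrete instance (a minimal sketch, assuming $F=\mathbf{F}_2$ and the irreducible $P(X)=X^2+X+1$, which yields the field with four elements; the pair encoding is just for illustration):

```python
# Elements of F_2[X]/(X^2+X+1) encoded as pairs (a, b) meaning a + b*X,
# with a, b in {0, 1}; the relation X^2 = X + 1 drives the multiplication.
def add(u, v):
    return (u[0] ^ v[0], u[1] ^ v[1])

def mul(u, v):
    a, b = u
    c, d = v
    # (a + bX)(c + dX) = ac + (ad + bc)X + bd*X^2, and X^2 = X + 1:
    return ((a & c) ^ (b & d), (a & d) ^ (b & c) ^ (b & d))

alpha = (0, 1)            # the class of X, a root of P by construction
print(mul(alpha, alpha))  # (1, 1): alpha^2 = 1 + alpha, i.e. P(alpha) = 0
```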
Another reason is the convenient fact that a ring homomorphism $R[X]\to S$, from the ring of polynomials with coefficients in $R$ to a ring $S$, is the same thing as a homomorphism $R\to S$ together with a choice of an arbitrary element of $S$. Thus, polynomial rings allow for a systematic study of homomorphism + an element. It turns out that many properties of the chosen element are reflected by properties of the homomorphism, which in turn are related to properties of the domain, that is, of polynomials (not polynomial functions).
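Concretely (a sketch under the simplifying assumption that the operations of $S$ are Python's `+` and `*` and that `0` serves as its additive identity), the induced map $R[X]\to S$ is just Horner evaluation of the coefficient list:

```python
def extend(phi, s):
    # The ring map R[X] -> S sending X to s and each r in R to phi(r);
    # coefficient lists are highest degree first (Horner's rule).
    def hom(coeffs):
        acc = 0
        for c in coeffs:
            acc = acc * s + phi(c)
        return acc
    return hom

# Toy check with R = S = Z, phi the identity, X |-> 10:
ev = extend(lambda r: r, 10)
print(ev([3, 2, 1]))  # 3*10^2 + 2*10 + 1 = 321
```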
One more reason is that polynomials are combinatorial beasts. They can be manipulated purely combinatorially. But since they give rise to functions, it means that we can use combinatorics to study functions. For instance, the problem "given functions $f,g$, find functions $a,b$ such that $af+bg=1$" is quite easily solved using the Euclidean algorithm if all these functions are required to be polynomials (and the solution is actually constructive).
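As a concrete instance (an example chosen for illustration, not taken from the answer): one division step gives $x^2+1=(x-1)(x+1)+2$, hence $\frac{1}{2}(x^2+1)-\frac{x-1}{2}(x+1)=1$; the following spot check verifies the identity at a few rational points:

```python
from fractions import Fraction

# Bezout pair for f = x^2 + 1 and g = x + 1 over Q, read off from the
# Euclidean algorithm: (1/2)*f + (-(x-1)/2)*g = 1.
half = Fraction(1, 2)
for x in (Fraction(n, 7) for n in range(-5, 6)):
    assert half * (x * x + 1) - half * (x - 1) * (x + 1) == 1
```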
The last point is not the most convincing; certainly if you can find polynomials such that $af+bg=1$, then you can find (corresponding) polynomial functions as well.
– Marc van Leeuwen
May 13 '13 at 13:02
@MarcvanLeeuwen I did not explain myself well then, as that was precisely my point. I left 'function' unspecified, not even giving domain and codomain, and tried to emphasize that if things are required to be polynomials, then a solution can easily be found using the Euclidean algorithm, which is combinatorial.
– Ittay Weiss
May 13 '13 at 18:23
Here are some answers with somewhat different conclusions. Let us call the form of polynomials with coefficients and indeterminates "abstract" or "algebraic" to distinguish them from polynomial functions.
If you want to work with rational functions, the situation becomes foggy if you do not define everything in terms of abstract polynomials:
- Is the function $f(x) = \frac{1}{x} - \frac{1}{x}$ equal to the zero function?
- Is $\frac{x}{x}$ equal to $1$ for all $x$, or only for $x \neq 0$?
- When composing a chain of linear fractional functions $\frac{ax+b}{cx+d}$, do we exclude more and more points from the domain everywhere that one of the denominators is zero, even when the end result as a rational function has at most one singularity? Over a finite field this can empty the domain of all its points (see the sketch after this list).
- For an infinite field you can get around some of this by treating polynomial functions and rational functions as function-equal if they are equal at infinitely many points, and defining rational functions as written in lowest terms (which breaks the analogy with fractions). Over a finite field, nothing like that can work, because ratios like $p(x)^a/p(x)^b$ are not defined if $p=0$ for all elements of the finite field, and if there were a way to define them based on functions (such as using the lowest-degree representation) it would not depend on the exponents.
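To make the domain-emptying effect tangible, here is a small sketch (Python, arithmetic over $\mathbf F_2$; a toy illustration) for the map $f(x)=\frac{1}{x+1}$: pointwise, $f\circ f$ is defined at no point of $\mathbf F_2$, although the formal composition simplifies to $\frac{x+1}{x}$, which makes perfect sense at $x=1$:

```python
def f(x, p=2):
    # the linear fractional map x -> 1/(x+1) over F_p (p assumed prime)
    d = (x + 1) % p
    if d == 0:
        raise ZeroDivisionError("pole")
    return pow(d, p - 2, p)          # inverse via Fermat's little theorem

def domain(g, p=2):
    ok = []
    for x in range(p):
        try:
            g(x)
            ok.append(x)
        except ZeroDivisionError:
            pass
    return ok

print(domain(f))                     # [0]: f is defined only at x = 0
print(domain(lambda x: f(f(x))))     # []: pointwise, f(f(x)) is defined nowhere,
# yet formally f(f(x)) = (x+1)/x over F_2, which is defined at x = 1.
```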
In the other direction:
- if you want representations of the polynomial to not always be centered at $0$, or to be less coordinate dependent, then you might want "polynomial" to mean a function killed by differential or difference operators of order $n$ for some $n > 0$;
- if polynomials appear via a construction that operates on functions and cannot be performed on the abstract polynomials, then you have no choice but to work with polynomials as functions.
In algebraic uses of polynomial functions, the arithmetic operations, function composition, differentiation and integration are often sufficient and can be done on the abstract form, and a function is demonstrated to be polynomial by exhibiting an abstract form. So the evaluation that converts algebraic representation into a function can always be postponed by doing all operations on the algebraic representation.
I think your answer is, with just two upvotes (of which one is mine), very underrated. Some of your points aren't yet clear to me though: 1) Why does the situation become foggy if we work with rational functions? The maximal domain (if we consider only real numbers) of the functions $f(x)=\frac{1}{x}-\frac{1}{x}$ resp. $g(x)=\frac{x}{x}$ is $\mathbb{R}\setminus\{0\}$, and thus on this set they are equal to $0$ resp. $1$ - no ambiguity here, as far as I can see... [conti-
– temo
May 19 '13 at 10:40
[-nuation] If we define these as abstract polynomials, then it doesn't make sense to ask if $f$ and $g$ are equal to a number, since (set-theoretically) they are strings of numbers; it would only make sense to ask for which numbers $f$ and $g$ can be evaluated. In this case the considerations from above apply. [last continuation]
– temo
May 19 '13 at 10:43
2) I don't understand the first proposition of what you wrote in the fourth bullet: Why can we treat polynomial functions and rational functions over an infinite field as equal if they agree for infinitely many values? What does this have to do with writing rational functions in lowest terms, and why does the analogy with fractions break?
– temo
May 19 '13 at 11:25
For (1), the maximal domain will change depending on the way that $f$ is written, such as $0$ or $\frac{1}{x} - \frac{1}{x}$ or $\frac{1}{x(x-1)} - \frac{1}{x(x-1)}$, which also breaks algebraic laws like $g(x) = g(x) + f(x) - f(x) = g(x)f(x)/f(x)$. You could handle part of this problem by declaring that an equation $A=B$ holds only on the intersection of domains, but this would make chains like $A=B=C=\ldots$ non-reversible, and the domain can become the empty set for rational functions over a finite field. For [-nuation], we ask if $f=g$ as abstract polynomials, not numbers.
– zyx
May 19 '13 at 21:18
For (2), abstract (ratios of) polynomials can be recovered from (ratios of) polynomial functions over an infinite field, and equality at an infinite set of points can serve as the definition of equality, replacing the problems with "maximal domains". But this procedure breaks down for finite fields, as the ratios $p(x)^m / p(x)^n$ demonstrate, for integers $m$ and $n$ and $p(x)=0$ at all points. For fractions there is no requirement to write them in lowest terms, and there might be problems with requiring lowest terms if the coefficients of the polynomials/rational functions are not a UFD.
– zyx
May 19 '13 at 21:24
OK, this is an old question, but I think the following is not covered by the other answers.
The polynomial as defined in point 3 is really a special case of the polynomial expression as defined in point 1. In particular, given the commutative ring the coefficients are from, the “suitable algebraic structure” (a commutative ring) for the polynomial is obtained as follows:
1. All elements of the original ring are also in the extended ring, and the same relations hold between them (that is, the ring of the coefficients is a subring of the ring of polynomials).
2. There's an additional element (the "unknown" you mention) that is not a member of the coefficient ring. This additional element fulfils no relations other than those implied by the axioms of commutative rings and the relations between the original elements.
3. Apart from the elements mentioned above, and those additionally required to be present to make it a ring while fulfilling the condition of point 2, no further elements are added.
In this extended structure (called $R[X]$ if $R$ is the ring of the coefficients and $X$ is the name of the “unknown”), your point 3 polynomial is just the point 1 polynomial expression for the choice $x=X$.
The polynomial function can also be obtained naturally from that construction. First, we find that for any $x\in R$, there's a unique ring homomorphism* $\phi_x\colon R[X]\to R$ that maps $X$ to $x$ and every element of $R$ to itself. Given a polynomial $p\in R[X]$, the polynomial function then is just the function $x\mapsto \phi_x(p)$.
And for completeness, although that point was already stressed in other answers: While the polynomial uniquely determines the polynomial function (by the construction given above), the reverse is not always true.
*) A homomorphism is a function that keeps the algebraic structure intact. In particular, a ring homomorphism is a function $\phi$ with $\phi(0)=0$, $\phi(1)=1$, $\phi(a+b)=\phi(a)+\phi(b)$, $\phi(-a)=-\phi(a)$ and $\phi(ab)=\phi(a)\phi(b)$.
$endgroup$
add a comment |
Your Answer
StackExchange.ifUsing("editor", function () {
return StackExchange.using("mathjaxEditing", function () {
StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix) {
StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
});
});
}, "mathjax-editing");
StackExchange.ready(function() {
var channelOptions = {
tags: "".split(" "),
id: "69"
};
initTagRenderer("".split(" "), "".split(" "), channelOptions);
StackExchange.using("externalEditor", function() {
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled) {
StackExchange.using("snippets", function() {
createEditor();
});
}
else {
createEditor();
}
});
function createEditor() {
StackExchange.prepareEditor({
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: true,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: 10,
bindNavPrevention: true,
postfix: "",
imageUploader: {
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
},
noCode: true, onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
});
}
});
Sign up or log in
StackExchange.ready(function () {
StackExchange.helpers.onClickDraftSave('#login-link');
});
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
StackExchange.ready(
function () {
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmath.stackexchange.com%2fquestions%2f390260%2fdo-we-really-need-polynomials-in-contrast-to-polynomial-functions%23new-answer', 'question_page');
}
);
Post as a guest
Required, but never shown
6 Answers
6
active
oldest
votes
6 Answers
6
active
oldest
votes
active
oldest
votes
active
oldest
votes
$begingroup$
If the mapping from polynomials to polynomial functions is not injective, then one cannot talk about the coefficients of a polynomial function. Without coefficients, one cannot do Euclidean division. Without Euclidean division, not much of what one does readily with polynomials over a field can be done (defining minimal polynomials of linear operators is one such thing; there are many others). In short, in algebra one works with polynomials, not with polynomial functions.
Note that the ring of polynomial functions over a finite field $F$ (which is in fact the ring of all functions $Fto F$) is not an integral domain; it is in fact a rather ugly object from the algebraic point of view (a product ring of $#F$ copies of $F$).
However, if you want a statement from someone knowledgeable that indeed we could dispense of polynomials, note that Serge Lang in Introduction to Linear Algebra (2) Chapter IX defines polynomials over an arbitrary field $K$ as polynomial functions, and goes on stating a Theorem$~1$ that the mapping from coefficient sequences to polynomials (i.e., functions) is injective (which as observed in the question fails for finite fields). The proof of the theorem is conveniently only given for the case where $K$ is the real or complex numbers. I certainly would not want to exclude Serge Lang from the set of knowledgeable people, but it seems that in this particular instance he made a regrettable error where he should have known better.
Added: Leafing though the cited book, I recently found out a detail that might explain what Lang did. I thought I knew without having to look it up what was meant by "a field $K$"; however, in section II.2 of the book it says "Let $K$ be a subset of the complex numbers $mathbf C$. We shall say that $K$ is a field if it satisfies the following conditions:" (requiring closure under addition, multiplication, negation and inversion of nonzero$~x$, and containment of $0,1$). The proof later given of uniqueness of polynomial coefficients is by considering the asymptotic behaviour of the polynomial function for large arguments. The proof is immediately followed by the cryptic sentences "We have used the notion of limit. In Chapter$~$XII, $S1$ we shall prove the same statement for more general fields, by a method which is completely algebraic." This leaves me wondering what "more general" fields are intended; given the earlier definition one might assume "general subfields of $mathbf C$", but the proof using limits would appear to do fine for any such field. Anyway I cannot see which proof is being referred to. There is corollary 3 saying that a polynomial in $K[t]$ of degree $n$ has at most $n$ roots in$~K$, but that is not the same statement. More importantly, it assumes that degree has been defined, and it uses Euclidean division; as these notions use the existence of (unique) polynomial coefficients, this would appear to be circular.
$endgroup$
2
$begingroup$
(+1) for the well-explained answer and (+100) for the comment about Lang.
$endgroup$
– Ittay Weiss
May 13 '13 at 18:26
$begingroup$
+1. Lang's proof probably works for any infinite field, doesn't it? well it's a silly mistake he made, but considering the amount of books he wrote this happens
$endgroup$
– user88576
Feb 20 '14 at 23:28
$begingroup$
@amoreacceptablename No, the proof works (only) for subfields of the complex numbers; see my added paragraph.
$endgroup$
– Marc van Leeuwen
Feb 21 '14 at 9:13
$begingroup$
thanks @MarcvanLeeuwen, very kind of you!
$endgroup$
– user88576
Feb 22 '14 at 23:57
add a comment |
$begingroup$
If the mapping from polynomials to polynomial functions is not injective, then one cannot talk about the coefficients of a polynomial function. Without coefficients, one cannot do Euclidean division. Without Euclidean division, not much of what one does readily with polynomials over a field can be done (defining minimal polynomials of linear operators is one such thing; there are many others). In short, in algebra one works with polynomials, not with polynomial functions.
Note that the ring of polynomial functions over a finite field $F$ (which is in fact the ring of all functions $Fto F$) is not an integral domain; it is in fact a rather ugly object from the algebraic point of view (a product ring of $#F$ copies of $F$).
However, if you want a statement from someone knowledgeable that indeed we could dispense of polynomials, note that Serge Lang in Introduction to Linear Algebra (2) Chapter IX defines polynomials over an arbitrary field $K$ as polynomial functions, and goes on stating a Theorem$~1$ that the mapping from coefficient sequences to polynomials (i.e., functions) is injective (which as observed in the question fails for finite fields). The proof of the theorem is conveniently only given for the case where $K$ is the real or complex numbers. I certainly would not want to exclude Serge Lang from the set of knowledgeable people, but it seems that in this particular instance he made a regrettable error where he should have known better.
Added: Leafing though the cited book, I recently found out a detail that might explain what Lang did. I thought I knew without having to look it up what was meant by "a field $K$"; however, in section II.2 of the book it says "Let $K$ be a subset of the complex numbers $mathbf C$. We shall say that $K$ is a field if it satisfies the following conditions:" (requiring closure under addition, multiplication, negation and inversion of nonzero$~x$, and containment of $0,1$). The proof later given of uniqueness of polynomial coefficients is by considering the asymptotic behaviour of the polynomial function for large arguments. The proof is immediately followed by the cryptic sentences "We have used the notion of limit. In Chapter$~$XII, $S1$ we shall prove the same statement for more general fields, by a method which is completely algebraic." This leaves me wondering what "more general" fields are intended; given the earlier definition one might assume "general subfields of $mathbf C$", but the proof using limits would appear to do fine for any such field. Anyway I cannot see which proof is being referred to. There is corollary 3 saying that a polynomial in $K[t]$ of degree $n$ has at most $n$ roots in$~K$, but that is not the same statement. More importantly, it assumes that degree has been defined, and it uses Euclidean division; as these notions use the existence of (unique) polynomial coefficients, this would appear to be circular.
$endgroup$
2
$begingroup$
(+1) for the well-explained answer and (+100) for the comment about Lang.
$endgroup$
– Ittay Weiss
May 13 '13 at 18:26
$begingroup$
+1. Lang's proof probably works for any infinite field, doesn't it? well it's a silly mistake he made, but considering the amount of books he wrote this happens
$endgroup$
– user88576
Feb 20 '14 at 23:28
$begingroup$
@amoreacceptablename No, the proof works (only) for subfields of the complex numbers; see my added paragraph.
$endgroup$
– Marc van Leeuwen
Feb 21 '14 at 9:13
$begingroup$
thanks @MarcvanLeeuwen, very kind of you!
$endgroup$
– user88576
Feb 22 '14 at 23:57
add a comment |
$begingroup$
If the mapping from polynomials to polynomial functions is not injective, then one cannot talk about the coefficients of a polynomial function. Without coefficients, one cannot do Euclidean division. Without Euclidean division, not much of what one does readily with polynomials over a field can be done (defining minimal polynomials of linear operators is one such thing; there are many others). In short, in algebra one works with polynomials, not with polynomial functions.
Note that the ring of polynomial functions over a finite field $F$ (which is in fact the ring of all functions $Fto F$) is not an integral domain; it is in fact a rather ugly object from the algebraic point of view (a product ring of $#F$ copies of $F$).
However, if you want a statement from someone knowledgeable that indeed we could dispense of polynomials, note that Serge Lang in Introduction to Linear Algebra (2) Chapter IX defines polynomials over an arbitrary field $K$ as polynomial functions, and goes on stating a Theorem$~1$ that the mapping from coefficient sequences to polynomials (i.e., functions) is injective (which as observed in the question fails for finite fields). The proof of the theorem is conveniently only given for the case where $K$ is the real or complex numbers. I certainly would not want to exclude Serge Lang from the set of knowledgeable people, but it seems that in this particular instance he made a regrettable error where he should have known better.
Added: Leafing though the cited book, I recently found out a detail that might explain what Lang did. I thought I knew without having to look it up what was meant by "a field $K$"; however, in section II.2 of the book it says "Let $K$ be a subset of the complex numbers $mathbf C$. We shall say that $K$ is a field if it satisfies the following conditions:" (requiring closure under addition, multiplication, negation and inversion of nonzero$~x$, and containment of $0,1$). The proof later given of uniqueness of polynomial coefficients is by considering the asymptotic behaviour of the polynomial function for large arguments. The proof is immediately followed by the cryptic sentences "We have used the notion of limit. In Chapter$~$XII, $S1$ we shall prove the same statement for more general fields, by a method which is completely algebraic." This leaves me wondering what "more general" fields are intended; given the earlier definition one might assume "general subfields of $mathbf C$", but the proof using limits would appear to do fine for any such field. Anyway I cannot see which proof is being referred to. There is corollary 3 saying that a polynomial in $K[t]$ of degree $n$ has at most $n$ roots in$~K$, but that is not the same statement. More importantly, it assumes that degree has been defined, and it uses Euclidean division; as these notions use the existence of (unique) polynomial coefficients, this would appear to be circular.
$endgroup$
If the mapping from polynomials to polynomial functions is not injective, then one cannot talk about the coefficients of a polynomial function. Without coefficients, one cannot do Euclidean division. Without Euclidean division, not much of what one does readily with polynomials over a field can be done (defining minimal polynomials of linear operators is one such thing; there are many others). In short, in algebra one works with polynomials, not with polynomial functions.
Note that the ring of polynomial functions over a finite field $F$ (which is in fact the ring of all functions $Fto F$) is not an integral domain; it is in fact a rather ugly object from the algebraic point of view (a product ring of $#F$ copies of $F$).
However, if you want a statement from someone knowledgeable that indeed we could dispense of polynomials, note that Serge Lang in Introduction to Linear Algebra (2) Chapter IX defines polynomials over an arbitrary field $K$ as polynomial functions, and goes on stating a Theorem$~1$ that the mapping from coefficient sequences to polynomials (i.e., functions) is injective (which as observed in the question fails for finite fields). The proof of the theorem is conveniently only given for the case where $K$ is the real or complex numbers. I certainly would not want to exclude Serge Lang from the set of knowledgeable people, but it seems that in this particular instance he made a regrettable error where he should have known better.
Added: Leafing though the cited book, I recently found out a detail that might explain what Lang did. I thought I knew without having to look it up what was meant by "a field $K$"; however, in section II.2 of the book it says "Let $K$ be a subset of the complex numbers $mathbf C$. We shall say that $K$ is a field if it satisfies the following conditions:" (requiring closure under addition, multiplication, negation and inversion of nonzero$~x$, and containment of $0,1$). The proof later given of uniqueness of polynomial coefficients is by considering the asymptotic behaviour of the polynomial function for large arguments. The proof is immediately followed by the cryptic sentences "We have used the notion of limit. In Chapter$~$XII, $S1$ we shall prove the same statement for more general fields, by a method which is completely algebraic." This leaves me wondering what "more general" fields are intended; given the earlier definition one might assume "general subfields of $mathbf C$", but the proof using limits would appear to do fine for any such field. Anyway I cannot see which proof is being referred to. There is corollary 3 saying that a polynomial in $K[t]$ of degree $n$ has at most $n$ roots in$~K$, but that is not the same statement. More importantly, it assumes that degree has been defined, and it uses Euclidean division; as these notions use the existence of (unique) polynomial coefficients, this would appear to be circular.
edited Feb 23 '14 at 5:47
answered May 13 '13 at 12:58
Marc van LeeuwenMarc van Leeuwen
87.4k5109224
87.4k5109224
2
$begingroup$
(+1) for the well-explained answer and (+100) for the comment about Lang.
$endgroup$
– Ittay Weiss
May 13 '13 at 18:26
$begingroup$
+1. Lang's proof probably works for any infinite field, doesn't it? well it's a silly mistake he made, but considering the amount of books he wrote this happens
$endgroup$
– user88576
Feb 20 '14 at 23:28
$begingroup$
@amoreacceptablename No, the proof works (only) for subfields of the complex numbers; see my added paragraph.
$endgroup$
– Marc van Leeuwen
Feb 21 '14 at 9:13
$begingroup$
thanks @MarcvanLeeuwen, very kind of you!
$endgroup$
– user88576
Feb 22 '14 at 23:57
add a comment |
2
$begingroup$
(+1) for the well-explained answer and (+100) for the comment about Lang.
$endgroup$
– Ittay Weiss
May 13 '13 at 18:26
$begingroup$
+1. Lang's proof probably works for any infinite field, doesn't it? well it's a silly mistake he made, but considering the amount of books he wrote this happens
$endgroup$
– user88576
Feb 20 '14 at 23:28
$begingroup$
@amoreacceptablename No, the proof works (only) for subfields of the complex numbers; see my added paragraph.
$endgroup$
– Marc van Leeuwen
Feb 21 '14 at 9:13
$begingroup$
thanks @MarcvanLeeuwen, very kind of you!
$endgroup$
– user88576
Feb 22 '14 at 23:57
2
2
$begingroup$
(+1) for the well-explained answer and (+100) for the comment about Lang.
$endgroup$
– Ittay Weiss
May 13 '13 at 18:26
$begingroup$
(+1) for the well-explained answer and (+100) for the comment about Lang.
$endgroup$
– Ittay Weiss
May 13 '13 at 18:26
$begingroup$
+1. Lang's proof probably works for any infinite field, doesn't it? well it's a silly mistake he made, but considering the amount of books he wrote this happens
$endgroup$
– user88576
Feb 20 '14 at 23:28
$begingroup$
+1. Lang's proof probably works for any infinite field, doesn't it? well it's a silly mistake he made, but considering the amount of books he wrote this happens
$endgroup$
– user88576
Feb 20 '14 at 23:28
$begingroup$
@amoreacceptablename No, the proof works (only) for subfields of the complex numbers; see my added paragraph.
$endgroup$
– Marc van Leeuwen
Feb 21 '14 at 9:13
$begingroup$
@amoreacceptablename No, the proof works (only) for subfields of the complex numbers; see my added paragraph.
$endgroup$
– Marc van Leeuwen
Feb 21 '14 at 9:13
$begingroup$
thanks @MarcvanLeeuwen, very kind of you!
$endgroup$
– user88576
Feb 22 '14 at 23:57
$begingroup$
thanks @MarcvanLeeuwen, very kind of you!
$endgroup$
– user88576
Feb 22 '14 at 23:57
add a comment |
$begingroup$
You write that polynomial functions are "more general" than polynomial expressions. In fact the exact opposite is the case: multiple polynomial expressions may be the same function (on a finite field), so it's polynomial expressions that are more general.
Here are a few ways that polynomial expressions are useful where it could (or would) be awkward to use polynomial functions.
If $F$ is a field of $p^r$ elements, for a prime $p$ and positive integer $r$, then $F$ is the splitting field over ${mathbf F}_p$ of $x^{p^r} - x$. But for every $a in F$, $a^{p^r} = a$, so $a^{p^r} - a = 0$. You don't want to say $F$ is the splitting field of, well, "zero". The distinction between the nonzero polynomial $x^{p^r} - x$ in $F[x]$ and the zero polynomial is essential for that.
In case you think "I don't care about finite fields", well, some properties of sequences of integers are proved by making them into the coefficients of a polynomial and then reducing the polynomial mod p, which places the polynomial into ${mathbf F}_p[x]$, where there are convenient features that are not available in ${mathbf Z}[x]$. One example of this is a slick proof of Lucas's congruence on binomial coefficients. It's crucial for this that we can recover the coefficients of a polynomial in ${mathbf F}_p[x]$ even if its degree is greater than $p$, which is impossible if you think of polynomials in ${mathbf F}_p[x]$ as functions ${mathbf F}_p rightarrow {mathbf F}_p$.
Quotient rings of $A[x]$, where $A$ is a ring, may not look like functions from an elementary point of view. Do you feel it's easier to work with ${\mathbf Q}[x]/(x^5)$ using the viewpoint of polynomials as functions rather than expressions?
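As a sketch of how mechanical the expression viewpoint is here: in ${\mathbf Q}[x]/(x^5)$ one simply reduces mod $x^5$, which makes $x$ nilpotent and $1-x$ invertible (neither phenomenon is visible from a function viewpoint); with sympy:

    import sympy as sp
    x = sp.symbols('x')
    inv = 1 + x + x**2 + x**3 + x**4
    # (1 - x) * inv = 1 - x^5, and x^5 is discarded in the quotient:
    print(sp.rem(sp.expand((1 - x) * inv), x**5, x))   # 1
    print(sp.rem(x**5, x**5, x))                       # 0: x is nilpotent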
Would you like to define the ring of noncommutative polynomials in two variables as functions?
Generating functions are important objects in mathematics (not just in combinatorics), and they are formal power series rather than numerical series. Most of them have positive radius of convergence (assuming their coefficients are in a ring like ${\mathbf R}$ or ${\mathbf C}$ and thus have a region where it makes sense to substitute a scalar for the variable at all), so they can be treated as functions on the domain of convergence, but it is convenient that the theory of formal power series doesn't really need to rely on the radius of convergence to get off the ground. Sometimes a generating function may even have radius of convergence equal to $0$ but still contain useful information that can be legally extracted, because formal power series can be developed without needing a radius of convergence. Of course a lot of important intuition about the way formal power series are manipulated is inspired by the properties of classical numerical power series as functions.
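For instance, coefficients of a generating function are extracted purely formally, with no convergence question asked; a sympy sketch with the Fibonacci generating function $x/(1-x-x^2)$:

    import sympy as sp
    x = sp.symbols('x')
    gf = x / (1 - x - x**2)
    # Formal expansion: the coefficient of x^n is the n-th Fibonacci number.
    print(sp.series(gf, x, 0, 8))   # x + x**2 + 2*x**3 + 3*x**4 + 5*x**5 + ...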
Your question mentions 3 types of polynomial objects: polynomial expressions, polynomial functions, and polynomials as strings of coefficients. I'd say the first and third are really synonymous (the third is really just a rigorous form of the first, not materially different in practice), but they're not the same as polynomial functions if you work over a finite field. If the coefficients are an infinite field then different polynomial expressions do define different functions on the field.
$endgroup$
answered May 13 '13 at 11:16, edited Dec 17 '18 at 17:26 – KCd
$begingroup$
I have two remarks: 1) I think you misunderstood what I meant by polynomial expressions: Let's say that $x=4$ and that we consider the polynomial expression $2x+3$; then the number $11=2\cdot 4+3$ is a polynomial expression for me. Another example: Let $x$ be some number from $[0,1]$ and consider the polynomial expression $4x^2-1$. Then $4x^2-1$ is again a number, although we don't know which one, since we don't know what $x$ is. But the "nature" of $4x^2-1$ is definitely numeric. In contrast, the polynomial $4x^2-1$ is actually the string $(-1,0,4,0,0,\ldots)$. [con...]
$endgroup$
– temo
May 13 '13 at 13:15
$begingroup$
[...tinuing] So objects of the first and third type are fundamentally different. I think what you understood "polynomial expressions" to be is a nonrigorous way of speaking about strings of coefficients - am I right about that? But of course, none of this makes your very good and instructive answer (!) less readable!$$ $$2. Could you please elaborate a bit more on examples 3 and 4? It seems to me that a lot more is hiding there than I'm currently able to see. [another con...
$endgroup$
– temo
May 13 '13 at 13:19
$begingroup$
...tinuation] Since, for example, I don't know any applications of the ring of noncommutative polynomials in two variables, I can't see why it would be bad to define it as the set of all maps $K^2 \rightarrow K$ for some field $K$. I also can't see why it's harder to work in $\boldsymbol{Q}[x]/(x^5)$ with functions than with polynomials. If you could provide me with a short explanation, I could come back to this later, when I have more background in quotient rings and polynomials in two variables, and make sense of the explanation.
$endgroup$
– temo
May 13 '13 at 13:43
$begingroup$
A noncommutative polynomial in two variables is not a map $K^2 \rightarrow K$, since $xy^2$, $yxy$, and $y^2x$ are different. You know examples of noncommutative polynomials (well, a quotient ring of the noncommutative polynomials): differential operators with polynomial coefficients. If $D = d/dx$ on smooth functions, and polynomials act on functions as multiplication (so "$x$" means the function $f \mapsto xf$ for smooth $f$), then the product rule $D(xf) = xD(f) + f$ is the same as $Dx = xD + 1$. Polynomial differential operators form a ring ${\mathbf R}[x,D]$ where $Dx = xD + 1$.
$endgroup$
– KCd
May 14 '13 at 3:16
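A quick check of that operator identity with sympy, applying both sides to a generic smooth function $f$ (a sketch only, not a construction of the ring ${\mathbf R}[x,D]$ itself):

    import sympy as sp
    x = sp.symbols('x')
    f = sp.Function('f')(x)
    lhs = sp.diff(x * f, x)        # (Dx)(f): multiply by x, then differentiate
    rhs = x * sp.diff(f, x) + f    # (xD + 1)(f)
    print(sp.simplify(lhs - rhs))  # 0, i.e. Dx = xD + 1 as operators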
$begingroup$
Over some fields, the map from polynomials to polynomial functions is not injective; for example, over the finite field $\mathbb{F}_2$, the polynomial function $x^2+x$ is the same as the $0$ function - try plugging in $0$ and $1$. So in this case the (infinite) set of polynomials over $\mathbb{F}_2$ is much larger than the (finite) set of polynomial functions $\mathbb{F}_2\to\mathbb{F}_2$.
$endgroup$
answered May 13 '13 at 10:20 – mdp
$begingroup$
The same, however, is not true for every field extension of $\mathbb{F}_2$. If there's an element $x$ for which neither $x$ nor $x+1$ is zero, then $x^2+x=x(x+1)$ won't be identically zero. Since additive inverses are unique in fields, such an element $x$ must even exist in every proper extension of $\mathbb{F}_2$: just pick any $x \neq 0$ which doesn't happen to be $-1$. In algebra, you regularly take a polynomial over a field and look at its behaviour over extension fields, so identifying a polynomial with the function it induces on a particular field clearly doesn't cut it.
$endgroup$
– fgp
May 13 '13 at 10:25
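To make that concrete, in the four-element extension $\mathbb{F}_4 = \mathbb{F}_2[a]/(a^2+a+1)$ the function induced by $x^2+x$ is no longer identically zero; a small hand-rolled Python sketch, encoding $c_0 + c_1 a$ as the bit pair $(c_0, c_1)$:

    def add(u, v):                 # addition in F_4 is bitwise XOR
        return (u[0] ^ v[0], u[1] ^ v[1])
    def mul(u, v):                 # use a^2 = a + 1 to fold the a^2 term back in
        c0 = u[0] & v[0]
        c1 = (u[0] & v[1]) ^ (u[1] & v[0])
        c2 = u[1] & v[1]
        return (c0 ^ c2, c1 ^ c2)
    for t in [(0, 0), (1, 0), (0, 1), (1, 1)]:   # 0, 1, a, a+1
        print(t, add(mul(t, t), t))  # t^2 + t is nonzero at t = a and t = a+1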
$begingroup$
Yes, I know that this is the case. This was what I meant when I said that "the mapping of polynomials to polynomial expressions/functions isn't injective". But what good is this fact? The fact alone that this mapping isn't injective surely can't be the only reason we consider polynomials (see the last paragraph of my question, written in boldface).
$endgroup$
– temo
May 13 '13 at 11:05
$begingroup$
@temo Apologies, I'm not sure now what I read there instead of what you wrote. The other answers expand on the reasons for this a lot more (as I'd hoped would happen). Maybe this answer should even have been a comment, I thought it was borderline when I wrote it.
$endgroup$
– mdp
May 13 '13 at 11:54
$begingroup$
@MattPressland No problem. (I edited my question slightly after your answer - you can look it up in the revisions - in the hope of making it clearer what I mean for future readers.)
$endgroup$
– temo
May 13 '13 at 13:45
$begingroup$
Constructing field extensions is a very important operation. This is most conveniently done as follows: Let $F$ be a field. Suppose we wish to adjoin to $F$ an element $\alpha$ that will satisfy a certain algebraic equation. That is, there is a polynomial $P(X)\in F[X]$ such that we wish $P(\alpha)=0$ (notice here we switched between the polynomial $P(X)$ and its associated function evaluated at $\alpha$). If $P(X)$ is irreducible, then such a field extension is constructed as $F[X]/(P(X))$. It is crucial for this construction to take $F[X]$ to be the ring of polynomials, not polynomial functions.
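A sketch of this construction with sympy: arithmetic in $\mathbf{Q}[X]/(X^2-2)$, a copy of $\mathbf{Q}(\sqrt{2})$, is just polynomial arithmetic reduced mod $X^2-2$:

    import sympy as sp
    X = sp.symbols('X')
    P = X**2 - 2                  # irreducible over Q
    u, v = 1 + X, 3 - 2*X         # residues standing for 1+sqrt(2) and 3-2*sqrt(2)
    print(sp.rem(sp.expand(u * v), P, X))   # their product in the quotient field
    print(sp.rem(X * X, P, X))              # 2: the class of X is a square root of 2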
Another reason is the convenient fact that a ring homomorphism $R[X]\to S$, from the ring of polynomials with coefficients in $R$ to a ring $S$, is the same thing as a homomorphism $R\to S$ together with a choice of an arbitrary element of $S$. Thus, polynomial rings allow for a systematic study of "a homomorphism plus an element". It turns out that many properties of the chosen element are reflected by properties of the homomorphism, which in turn are related to properties of the domain, that is, of polynomials (not polynomial functions).
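Concretely: choosing the matrix $A$ below is all it takes to determine the evaluation homomorphism $\mathbf{Q}[x]\to M_2(\mathbf{Q})$ sending $x$ to $A$; a sympy sketch:

    import sympy as sp
    x = sp.symbols('x')
    p = sp.Poly(x**2 - 3*x + 1, x)
    A = sp.Matrix([[1, 2], [0, 1]])
    coeffs = p.all_coeffs()[::-1]   # lowest degree first
    # Evaluate p at A: the homomorphism sends sum c_i x^i to sum c_i A^i.
    print(sum((c * A**i for i, c in enumerate(coeffs)), sp.zeros(2, 2)))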
One more reason is that polynomials are combinatorial beasts. They can be manipulated purely combinatorially. But since they give rise to functions, this means we can use combinatorics to study functions. For instance, the problem "given functions $f,g$, can you find functions $a,b$ such that $af+bg=1$?" is quite easily solved using the Euclidean algorithm if all these functions are required to be polynomials (the solution is actually constructive).
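The last point in action: sympy's gcdex runs the extended Euclidean algorithm on coefficients alone and hands back the Bezout pair:

    import sympy as sp
    x = sp.symbols('x')
    f, g = x**2 + 1, x + 1        # coprime in Q[x]
    a, b, h = sp.gcdex(f, g, x)   # a*f + b*g = h = gcd(f, g) = 1
    print(a, b, h)
    print(sp.expand(a*f + b*g))   # 1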
$endgroup$
answered May 13 '13 at 11:44 – Ittay Weiss
$begingroup$
The last point is not the most convincing; certainly if you can find polynomials such that $af+bg=1$, then you can find (corresponding) polynomial functions as well.
$endgroup$
– Marc van Leeuwen
May 13 '13 at 13:02
$begingroup$
@MarcvanLeeuwen I did not explain myself well then, as that was precisely my point. I left 'function' completely unspecified, not even fixing a domain and codomain, and tried to emphasize that if things are required to be polynomials, then a solution can easily be found using the Euclidean algorithm, which is combinatorial.
$endgroup$
– Ittay Weiss
May 13 '13 at 18:23
$begingroup$
Here are some answers with somewhat different conclusions. Let us call the form of polynomials with coefficients and indeterminates "abstract" or "algebraic" to distinguish them from polynomial functions.
If you want to work with rational functions, the situation becomes foggy if you do not define everything in terms of abstract polynomials:
- Is the function $f(x) = \frac{1}{x} - \frac{1}{x}$ equal to the zero function? (See the sketch after this list.)
- Is $\frac{x}{x}$ equal to $1$ for all $x$, or only for $x \neq 0$?
- When composing a chain of linear fractional functions $\frac{ax+b}{cx+d}$, do we exclude more and more points from the domain wherever one of the denominators is zero, even when the end result as a rational function has at most one singularity? Over a finite field this can empty the domain of all its points.
- For an infinite field you can get around some of this by treating polynomial functions and rational functions as function-equal if they are equal at infinitely many points, and defining rational functions as written in lowest terms (which breaks the analogy with fractions). Over a finite field, nothing like that can work, because ratios like $p(x)^a/p(x)^b$ are not defined if $p=0$ for all elements of the finite field, and if there were a way to define it based on functions (such as using the lowest-degree representation) it would not depend on the exponents.
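The sketch promised in the first bullet: a computer algebra system treats these as abstract rational expressions and cancels formally, with no domain bookkeeping at all (sympy):

    import sympy as sp
    x = sp.symbols('x')
    print(sp.cancel(1/x - 1/x))              # 0: the zero element of Q(x)
    print(sp.cancel((x**2 - 1) / (x - 1)))   # x + 1, with no caveat at x = 1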
In the other direction:
- if you want representations of the polynomial to not always be centered at $0$, or to be less coordinate-dependent, then you might want "polynomial" to mean a function killed by differential or difference operators of order $n$ for some $n > 0$.
- if polynomials appear via a construction that operates on functions and cannot be performed on the abstract polynomials, then you have no choice but to work with polynomials as functions.
In algebraic uses of polynomial functions, the arithmetic operations, function composition, differentiation and integration are often sufficient and can be done on the abstract form, and a function is demonstrated to be polynomial by exhibiting an abstract form. So the evaluation that converts the algebraic representation into a function can always be postponed by doing all operations on the algebraic representation.
$endgroup$
$begingroup$
I think your answer, with just two upvotes (one of which is mine), is very underrated. Some of your points aren't yet clear to me though: $$ $$ 1) Why does the situation become foggy if we work with rational functions? The maximal domain (if we consider only real numbers) of the functions $f(x)=\frac{1}{x}-\frac{1}{x}$ resp. $g(x)=\frac{x}{x}$ is $\mathbb{R}\setminus\{0\}$, and thus on this set they are equal to $0$ resp. $1$ - no ambiguity here, as far as I can see... [conti-
$endgroup$
– temo
May 19 '13 at 10:40
$begingroup$
[-nuation] If we define these as abstract polynomials, then it doesn't make sense to ask if $f$ and $g$ are equal to a number, since (set-theoretically) they are strings of numbers; it would only make sense to ask for which numbers $f$ and $g$ can be evaluated. In this case the considerations from above apply. [last continuation]
$endgroup$
– temo
May 19 '13 at 10:43
$begingroup$
2) I don't understand the first proposition of what you wrote in the fourth bullet: Why can we treat polynomial functions and rational functions over an infinite field as equal if they agree for infinitely many values? What does this have to do with writing rational functions in lowest terms, and why does the analogy with fractions break?
$endgroup$
– temo
May 19 '13 at 11:25
$begingroup$
For (1), the maximal domain will change depending on the way that $f$ is written, such as $0$ or $1/x - 1/x$ or $1/x(x-1) - 1/x(x-1)$, which also breaks algebraic laws like $g(x) = g(x) + f(x) - f(x) = g(x)f(x)/f(x)$. You could handle part of this problem by declaring that an equation $A=B$ holds only on the intersection of domains, but this would make chains like $A=B=C=\ldots$ non-reversible, and the domain can become the empty set for rational functions over a finite field. For [-nuation], we ask if $f=g$ as abstract polynomials, not numbers.
$endgroup$
– zyx
May 19 '13 at 21:18
$begingroup$
For (2), abstract (ratios of) polynomials can be recovered from (ratios of) polynomial functions over an infinite field, and equality at an infinite set of points can serve as the definition of equality, to replace the problems with "maximal domains". But this procedure breaks down for finite fields, as the ratios $p(x)^m / p(x)^n$ demonstrate, for integer $m$ and $n$ and $p(x)=0$ at all points. For fractions there is no requirement to write them in lowest terms, and there might be problems with requiring lowest terms if the coefficients of the polynomials/rational functions are not a UFD.
$endgroup$
– zyx
May 19 '13 at 21:24
|
show 1 more comment
$begingroup$
Here are some answers with somewhat different conclusions. Let us call the form of polynomials with coefficients and indeterminates "abstract" or "algebraic" to distinguish them from polynomial functions.
If you want to work with rational functions, the situation becomes foggy if you do not define everything in terms of abstract polynomials:
- Is the function $f(x) = +frac{1}{x} -frac{1}{x}$ equal to the zero function?
- Is $frac{x}{x}$ equal to $1$ for all $x$, or only for $x neq 0$?
- When composing a chain of linear fractional functions $frac{ax+b}{cx+d}$ do we exclude more and more points from the domain everywhere that one of the denominators is zero, when the end result as a rational function has at most one one singularity? Over a finite field this can empty the domain of all its points.
- For an infinite field you can get around some of this by treating polynomial functions and rational functions as function-equal if they are equal at infinitely many points, and defining rational functions as written in lowest terms (which breaks the analogy with fractions). Over a finite field, nothing like that can work because ratios like ${p(x)^a}/{p(x)^b}$ are not defined if $p=0$ for all elements in the finite field, and if there were a way to define it based on functions (such as using the lowest-degree representation) it would not depend on the exponents.
In the other direction:
- if you want representations of the polynomial to not always be centered at $0$ or to be less coordinate dependent, then you might want polynomial to mean a function killed by differential or difference operators of order $n$ for some $n > 0$.
- where polynomials appear by a construction that operates on functions, and cannot be performed on the abstract polynomials, then you have no choice but to work with polynomials as functions.
In algebraic uses of polynomial functions, the arithmetic operations, function composition, differentiation and integration are often sufficient and can be done on the abstract form, and a function is demonstrated to be polynomial by exhibiting an abstract form. So the evaluation that converts algebraic representation into a function can always be postponed by doing all operations on the algebraic representation.
$endgroup$
$begingroup$
I think your answer is with just two upvotes, of which one is mine, very underrated. Some of your points aren't yet clear to me though: $$ $$ 1) Why does the situation becomes foggy, if we work with rational functions ? The maximal domain (if we work consider only real numbers) of the functions $f(x)=frac{1}{x}-frac{1}{x}$ resp, $g(x)=frac{x}{x}$ is $mathbb{R}setminus{0}$ and thus on this set they are equal to $0$ resp. $1$- no ambiguity here, as far as I can see...[conti-
$endgroup$
– temo
May 19 '13 at 10:40
$begingroup$
[-nuation] I we define these as abstract polynomials, than it doesn't make sense to ask if $f$ and $g$ are equal to a number, since (set-theoretically) they are strings on numbers; it would only make sense to ask for which numbers $f$ and $g$ can be evaluated. In this case the considerations from above apply. [last continuation]
$endgroup$
– temo
May 19 '13 at 10:43
$begingroup$
2) I don't understand the first proposition of what you wrote in the fourth bullet: Why can we treat over an infinite field polynomial functions and rational functions as equals if the agree for infinite many values ? What does this have do to with writing rational functions in lowest term and why does the analogy with fractions break ?
$endgroup$
– temo
May 19 '13 at 11:25
$begingroup$
For (1) the maximal domain will change depending on the way that $f$ is written, such as $0$ or $1/x - 1/x$ or $1/x(x-1) - 1/x(x-1)$, which also breaks algebraic laws like $g(x) = g(x) + f(x) - f(x) = g(x)f(x)/f(x)$. You could handle part of this problem by declaring that an equation $A=B$ holds only on the intersection of domains, but this would make chains like $A=B=C=...$ non-reversible, and the domain can become the empty set for rational functions over a finite field. For [-nuation], we ask if $f=g$ as abstract polynomials, not numbers.
$endgroup$
– zyx
May 19 '13 at 21:18
$begingroup$
For (2), abstract (ratios of) polynomials can be recovered from (ratios of) polynomial functions over an infinite field, and equality at an infinite set of points can serve as the definition of equality to replace the problems with "maximal domains". But this procedure breaks down for finite fields as the ratios $p(x)^m / p(x)^n$ demonstrate, for integer $m$ and $n$ and $p(x)=0$ at all points. For fractions there is no requirement to write them in lowest terms, and there can might be problems with requiring lowest terms if the coefficients of the polynomials/rational functions are not a UFD.
$endgroup$
– zyx
May 19 '13 at 21:24
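To illustrate the recovery claim in the last comment (an added sketch, not part of the original exchange): over an infinite field such as $\mathbb{Q}$, sampling a polynomial function of degree $d$ at any $d+1$ distinct points determines the abstract polynomial, for instance via Lagrange interpolation:

```python
# Sketch: recovering the abstract polynomial from its function over an
# infinite field. Sampling at d+1 distinct points pins down a degree-d
# polynomial; here via Lagrange interpolation with exact rationals.
from fractions import Fraction

def lagrange_coeffs(xs, ys):
    """Coefficients [a0, ..., ad] of the unique polynomial through the points."""
    d = len(xs) - 1
    coeffs = [Fraction(0)] * (d + 1)
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        basis = [Fraction(1)]   # running product of (X - xj) for j != i
        denom = Fraction(1)     # running product of (xi - xj) for j != i
        for j, xj in enumerate(xs):
            if j == i:
                continue
            basis = [Fraction(0)] + basis      # multiply basis by X ...
            for k in range(len(basis) - 1):
                basis[k] -= xj * basis[k + 1]  # ... then subtract xj * (old basis)
            denom *= xi - xj
        for k, b in enumerate(basis):
            coeffs[k] += Fraction(yi) * b / denom
    return coeffs

f = lambda x: 3 * x**2 - x + 2   # pretend we can only sample this function
xs = [0, 1, 2]
# Prints the coefficients 2, -1, 3 (as Fractions), i.e. 2 - X + 3X^2:
print(lagrange_coeffs(xs, [f(x) for x in xs]))
```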
$begingroup$
OK, this is an old question, but I think the following is not covered by the other answers.
The polynomial as defined in point 3 is really a special case of the polynomial expression as defined in point 1. In particular, given the commutative ring the coefficients are from, the “suitable algebraic structure” (a commutative ring) for the polynomial is obtained as follows:
1. All elements of the original ring are also in the extended ring, and the same relations hold between them (that is, the ring of the coefficients is a subring of the ring of polynomials).
2. There's an additional element (the “unknown” you mention) that is not a member of the coefficient ring. This additional element fulfils no relations other than those implied by the axioms of commutative rings and the relations between the original elements.
3. Apart from the elements mentioned above, and those additionally required to be present to make it a ring while fulfilling the condition of point 2, no further elements are added.
In this extended structure (called $R[X]$ if $R$ is the ring of the coefficients and $X$ is the name of the “unknown”), your point 3 polynomial is just the point 1 polynomial expression for the choice $x=X$.
The polynomial function can also be obtained naturally from that construction. First, we find that for any $x\in R$ there is a unique ring homomorphism* $\phi_x\colon R[X]\to R$ that maps $X$ to $x$ and every element of $R$ to itself. Given a polynomial $p\in R[X]$, the polynomial function is then just the function $x\mapsto \phi_x(p)$.
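A small sketch of this evaluation homomorphism (my illustration, with $R=\mathbb{Z}$ assumed and polynomials written as coefficient lists $[a_0, a_1, \dots]$): $\phi_x$ simply substitutes $x$ for $X$, and the homomorphism properties can be checked on examples:

```python
# Sketch: the evaluation homomorphism phi_x : Z[X] -> Z, which substitutes
# a fixed x for the indeterminate X in a coefficient list [a0, a1, ...].

def phi(x, p):
    return sum(a * x**i for i, a in enumerate(p))

# p = 1 + 2X^2, q = 5 + 3X, with hand-computed sum and product:
p, q = [1, 0, 2], [5, 3]
p_plus_q = [6, 3, 2]         # 6 + 3X + 2X^2
p_times_q = [5, 3, 10, 6]    # 5 + 3X + 10X^2 + 6X^3

for x in range(-3, 4):       # homomorphism properties at several points
    assert phi(x, p_plus_q) == phi(x, p) + phi(x, q)
    assert phi(x, p_times_q) == phi(x, p) * phi(x, q)
    assert phi(x, [1]) == 1  # phi_x maps 1 to 1

print(phi(4, p), phi(4, q))  # 33 17
```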
And for completeness, although that point was already stressed in other answers: While the polynomial uniquely determines the polynomial function (by the construction given above), the reverse is not always true.
*) A homomorphism is a function that keeps the algebraic structure intact. In particular, a ring homomorphism is a function $\phi$ with $\phi(0)=0$, $\phi(1)=1$, $\phi(a+b)=\phi(a)+\phi(b)$, $\phi(-a)=-\phi(a)$ and $\phi(ab)=\phi(a)\phi(b)$.
$endgroup$
answered Dec 17 '18 at 20:35 – celtschk
$begingroup$
In a finite field it's different. There are finitely many functions, but infinitely many polynomial expressions.
$endgroup$
– mez
May 13 '13 at 13:26
$begingroup$
A general point that is not specific to the issue at hand (already well addressed in great answers): when you have two similar mathematical structures, the question you should ask is not "Can I get rid of Structure A and just use Structure B?", but rather "What are the differences between Structure A and Structure B, and when is it more useful to use one structure instead of the other?"
$endgroup$
– Michael Joyce
May 15 '13 at 1:18
$begingroup$
@MichaelJoyce Very well said! And with so many great answers a very difficult choice awaits...
$endgroup$
– temo
May 19 '13 at 10:46