A matrix involving distances of $n$ points in $\mathbb{R}^3$











Let $x_1,\ldots,x_n$ be $n$ distinct points in $\mathbb{R}^3$. Consider the $n\times n$ real symmetric matrix $A$ defined by $A_{ij}:=|x_i-x_j|$. I would like to show that



$$\operatorname{Ker}\,A\;\cap\;\{v\in\mathbb{R}^n\,:\,v_1+v_2+\ldots+v_n=0\}=\{0\}.$$



Thank you for any suggestions.










  • Where did this problem come from? Do you have any ideas about how you might solve it?
    – Theo Bendit, Nov 16 at 15:17

  • The question comes as a step of a more general problem in which I have to invert a matrix depending on a parameter. I have no idea how to solve the problem, as it strongly depends on the underlying combinatorial structure.
    – Capublanca, Nov 16 at 15:22

  • Have you got an example of such a matrix that is singular?
    – Theo Bendit, Nov 16 at 15:26

  • For small $n$ the matrix is actually non-singular. For large $n$ I suspect that a one-dimensional kernel can occur, but I do not have explicit examples. Of course it would be a nicer and cleaner result if $A$ turns out to be always non-singular.
    – Capublanca, Nov 16 at 15:30

  • joriki's answer should be relevant.
    – user1551, Nov 16 at 16:50

linear-algebra matrices






asked Nov 16 at 15:10 by Capublanca










2 Answers

Accepted answer (+100 bounty) – David E Speyer (community wiki; edited Nov 23 at 18:34)

$\def\RR{\mathbb{R}}$Numerical experiments suggest that this matrix is negative semidefinite on the plane $\sum v_i=0$. Specifically, I generated 20 sets of 10 points each, drawn uniformly from the unit cube, and this was true every time. I repeated the experiment with points in 2, 4 and 5 dimensions, and the same held.
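
A minimal NumPy sketch of this kind of experiment (illustrative only; the helper functions below are not from the original test):

```python
import numpy as np

def distance_matrix(points):
    """A[i, j] = |x_i - x_j| for the rows x_i of `points`."""
    diff = points[:, None, :] - points[None, :, :]
    return np.linalg.norm(diff, axis=-1)

def max_eigenvalue_on_hyperplane(A):
    """Largest eigenvalue of A restricted to the hyperplane sum(v) = 0."""
    n = A.shape[0]
    # Orthonormal basis whose first column is parallel to the all-ones vector;
    # the remaining columns then span {v : sum(v) = 0}.
    Q, _ = np.linalg.qr(np.column_stack([np.ones(n), np.eye(n)[:, :n - 1]]))
    B = Q[:, 1:]
    return np.linalg.eigvalsh(B.T @ A @ B).max()

rng = np.random.default_rng(0)
for d in (2, 3, 4, 5):
    worst = max(max_eigenvalue_on_hyperplane(distance_matrix(rng.random((10, d))))
                for _ in range(20))
    print(d, worst)   # negative in every trial: A is negative definite on the plane
```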



I am reminded of this answer by Noam Elkies but can't make a precise connection.





Switching this answer to CW to write up darij's proof from the comments. We will show that:




  • If $x_i$ are any $n$ points in $\RR^d$ and $v_i$ are any scalars with $\sum v_i=0$, then $\sum v_i v_j |x_i-x_j| \leq 0$, and


  • If the $x_i$ are distinct and the $v_i$ are not all $0$, we have strict inequality.



The latter shows that the matrix $\left[\,|x_i-x_j|\,\right]$ times the vector $\left[v_i\right]$ is nonzero. We start, however, by proving the former. We turn to the issue of when zero can occur afterwards.



We start with the averaging trick: by rotational and scaling invariance, we can see that there is some positive $c$ such that
$$\int_{|w|=1} \left| w \cdot x \right| \, dw = c\, |x|.$$
So
$$\sum v_i v_j |x_i-x_j| = c^{-1} \int_{|w|=1} \sum v_i v_j \left| w \cdot (x_i-x_j) \right| \, dw,$$
and thus it is sufficient to show $\sum v_i v_j \left| w \cdot (x_i-x_j) \right| \leq 0$ for each fixed vector $w$. Now, $w \cdot (x_i-x_j)$ only depends on the orthogonal projections of $x_i$ and $x_j$ onto the line $\RR w$, so we may (and do) assume all the $x_i$ lie on a line. Our goal is now to show, for any $n$ values $x_i \in \RR$, that $\sum v_i v_j |x_i-x_j| \leq 0$.
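
As a quick sanity check of this proportionality (not part of the argument), one can estimate the spherical average of $|w \cdot x|$ by Monte Carlo and observe that its ratio to $|x|$ is essentially the same constant for every $x$:

```python
import numpy as np

rng = np.random.default_rng(1)

def average_abs_projection(x, samples=200_000):
    """Monte Carlo estimate of the average of |w . x| over unit vectors w."""
    w = rng.normal(size=(samples, x.size))
    w /= np.linalg.norm(w, axis=1, keepdims=True)   # uniform directions on the sphere
    return np.abs(w @ x).mean()

for x in (np.array([1.0, 0.0, 0.0]), np.array([1.0, 2.0, -2.0]), rng.normal(size=3)):
    print(average_abs_projection(x) / np.linalg.norm(x))   # roughly the same constant each time
```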



We have $|z| = \max(z,0) + \max(-z,0)$, so $\sum v_i v_j |x_i-x_j| = 2 \sum v_i v_j \max(x_i-x_j,0)$.
We use the notation $\left[\,\mbox{statement}\,\right]$ to mean $1$ if the statement is true and $0$ if it is false. So
$$\max(x_i-x_j,0) = \int_{t \in \RR} \left[x_j < t < x_i \right] dt$$
and
$$\sum_{i,j} v_i v_j \max(x_i-x_j,0) = \int_{t \in \RR} \sum_{i,j} v_i v_j \left[x_j < t < x_i \right] dt.$$ So it is enough to show that, for any $t$, we have
$$\sum_{x_i < t < x_j} v_i v_j \leq 0 . $$
Let $I = \{ i : x_i < t \}$ and $J = \{ j : x_j > t \}$. (For almost all $t$, none of the $x_i$ equal $t$, so we can disregard the boundary case.) Then
$$\sum_{x_i < t < x_j} v_i v_j = \sum_{i \in I,\, j \in J} v_i v_j = \left( \sum_{i \in I} v_i \right) \left( \sum_{j \in J} v_j \right) = - \left(\sum_{i \in I} v_i \right)^2 \leq 0 .$$
In the final equality, we have finally used the hypothesis $\sum v_k=0$.
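
The chain of equalities above amounts to the identity $\sum_{i,j} v_i v_j\,|x_i-x_j| = -2\int_{\RR} \bigl(\sum_{x_j < t} v_j\bigr)^2\, dt$ whenever $\sum_k v_k = 0$, which is easy to confirm numerically for points on a line (an illustrative NumPy sketch, not from the original post):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
x = np.sort(rng.random(n))                 # points on the line, sorted
v = rng.normal(size=n)
v -= v.mean()                              # enforce sum(v) = 0

quadratic_form = sum(v[i] * v[j] * abs(x[i] - x[j])
                     for i in range(n) for j in range(n))

# -2 * integral of (sum of v_j with x_j < t)^2 dt, evaluated gap by gap:
partial_sums = np.cumsum(v)[:-1]           # value of the partial sum on each gap
gaps = np.diff(x)
closed_form = -2.0 * np.sum(partial_sums**2 * gaps)

print(quadratic_form, closed_form)         # the two agree, and both are <= 0
```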





Now, let's consider what happens for distinct $x_i$. As long as the $x_i$ are distinct, for almost all $w$, the orthogonal projections of the $x_i$ onto $\RR w$ will stay distinct. In order to get $0$, we must get $0$ from all these choices of $w$. Let's say the orthogonal projections are ordered as $x_1 < x_2 < \cdots < x_n$. Once again, we must get $0$ from every choice of $t$. So we must have $\left( \sum_{i=1}^k v_i \right)^2 = 0$ for all $k$, and this means that all $v_i$ are zero.






  • Yes, it is negative semidefinite on the plane given by $\sum_i v_i = 0$. Here is a sketch of a proof (feel free to write it up in detail): ...
    – darij grinberg, Nov 23 at 5:13

  • ... Step 1: Fix $n$ reals $v_1, v_2, \ldots, v_n$ satisfying $\sum_i v_i = 0$. Our goal is to prove that $\sum_{i, j} \left|x_i - x_j\right| v_i v_j \leq 0$ for any $n$ points $x_1, x_2, \ldots, x_n \in \mathbb{R}^m$ for any $m \geq 0$. (There is no reason to restrict oneself to $m = 3$.) By the classical "averaging over the sphere" trick, it suffices to prove this inequality only for $x_1, x_2, \ldots, x_n \in \mathbb{R}^1$, because each $\left|x_i - x_j\right|$ is just a scalar factor times ...
    – darij grinberg, Nov 23 at 5:15

  • ... the average of $w^\perp x_i - w^\perp x_j$ where $w$ ranges over the unit sphere in $\mathbb{R}^m$ (uniformly in Haar measure). (I hope I'm getting this trick correct; if not, replace the unit sphere by $\operatorname{SO}\left(m, \mathbb{R}\right)$ or something similar. I don't really know what I'm talking about when I'm talking integration.) ...
    – darij grinberg, Nov 23 at 5:16

  • ... Step 2: So let's prove this inequality for $m = 1$. I think this was an olympiad problem not too long ago (USAMO?), but I can't find it, so here's the idea of the solution: Let $I$ be a finite interval containing all points $x_1, x_2, \ldots, x_n \in \mathbb{R}$, and let us use the Iverson bracket notation. Then, $\left|x_i-x_j\right| = \int_I \left[t \leq x_i\right] \left[t > x_j\right] dt$ for all $i$ and $j$. Thus, ...
    – darij grinberg, Nov 23 at 5:18

  • ... we have $\sum_{i,j} \left|x_i-x_j\right| v_i v_j = \sum_{i,j} \int_I \left[t \leq x_i\right] \left[t > x_j\right] dt\, v_i v_j = \int_I \left(\sum_i \left[t \leq x_i\right] v_i\right) \left(\sum_j \left[t > x_j\right] v_j\right) dt \leq 0$, because each $t \in I$ satisfies $\sum_j \left[t > x_j\right] v_j = -\sum_i \left[t \leq x_i\right] v_i$ and thus $\left(\sum_i \left[t \leq x_i\right] v_i\right) \left(\sum_j \left[t > x_j\right] v_j\right) = - \left(\sum_i \left[t \leq x_i\right] v_i\right)^2 \leq 0$.
    – darij grinberg, Nov 23 at 5:18




















Answer (score −1) – Sudix, answered Nov 24 at 5:24

So, if I understand it correctly, you want to prove that $Ax=0 \implies x=0$, correct?

So, we simply need to show that $\det(A) \neq 0$.

Let $A= \begin{pmatrix}
a_{11} & a_{12} & a_{13} \\
a_{21} & a_{22} & a_{23} \\
a_{31} & a_{32} & a_{33}
\end{pmatrix}$
where $a_{ij} := d(i,j) := |x_i-x_j|$.

\begin{align}
\det A &= \det
\begin{pmatrix}
a_{11} & a_{12} & a_{13} \\
a_{21} & a_{22} & a_{23} \\
a_{31} & a_{32} & a_{33}
\end{pmatrix}
\\ &= a_{11} a_{22} a_{33} + a_{12} a_{23} a_{31} + a_{13} a_{21} a_{32} - a_{13} a_{22} a_{31} - a_{12} a_{21} a_{33} - a_{11} a_{23} a_{32}
\\ &= 0 + a_{12} a_{23} a_{31} + a_{13} a_{21} a_{32} - 0 - 0 - 0
\\ &= 2 a_{12} a_{13} a_{23} > 0,
\end{align}

since $a_{ii} = d(i,i) = 0$ and $d(i,j) > 0$ for $i \neq j$.
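
For what it's worth, the $3\times 3$ expansion can be checked symbolically; a quick SymPy sketch (illustrative only, not part of the answer):

```python
import sympy as sp

a12, a13, a23 = sp.symbols('a12 a13 a23', positive=True)
# Symmetric matrix with zero diagonal, as above.
A = sp.Matrix([[0,   a12, a13],
               [a12, 0,   a23],
               [a13, a23, 0  ]])
print(sp.expand(A.det()))   # 2*a12*a13*a23
```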






  • Do you know that there exist natural numbers greater than 3?
    – Capublanca, Nov 24 at 13:25

  • Eh, my bad. In the first sentence he went with points in $\mathbb{R}^3$, so I went with that.
    – Sudix, Nov 24 at 16:04

  • Ok, no problem.
    – Capublanca, Nov 24 at 19:12










