Usage of Lagrange Multipliers in multivariable calculus
I just learnt about Lagrange multipliers and am confused about why they are useful. Why can we not just check for critical points by checking whether the gradient vector of the objective function $f$ is $0$? Is it because in higher dimensions the boundary of a set may contain infinitely many points, unlike the one-dimensional case $[a,b]$, where, if there is no critical point in $(a,b)$, the max and min must lie at the two endpoints?
Also, I may be wrong, but the general procedure is to first check for critical points of $g$, the constraint function, on the level set and then look for Lagrange points. If this is correct, could someone explain why we look for critical points of $g$ instead of $f$? It doesn't make sense to look for maximum or minimum points of the constraint function.
multivariable-calculus optimization vector-analysis lagrange-multiplier
asked Dec 10 '18 at 4:23 – Saad
1 Answer
Perhaps it would help to consider a simple example:
Maximize/minimize the function $f(x,y) = 3x + 4y$ subject to the constraint $g(x,y) = x^2 + y^2 = 25$.
We can now address one of your concerns: why can we not just check for critical points by checking whether the gradient vector of the objective function $f$ is $0$? As you can see, the gradient of the objective function is $\nabla f = (3,4)$, which is never zero. By your logic, we should expect no extrema since we have no critical points (you can verify, however, that we obtain a max at $(3,4)$ and a min at $(-3,-4)$).
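To see where those two points come from, here is the Lagrange computation spelled out (routine, but it makes the claim concrete): setting $\nabla f = \lambda \nabla g$ gives
$$(3,4) = \lambda\,(2x,\,2y) \quad\Longrightarrow\quad x = \frac{3}{2\lambda}, \qquad y = \frac{4}{2\lambda} \qquad (\lambda \neq 0).$$
Substituting into $x^2 + y^2 = 25$ yields $\frac{9+16}{4\lambda^2} = 25$, so $\lambda = \pm\frac{1}{2}$, giving the candidates $(3,4)$ with $f = 25$ (the max) and $(-3,-4)$ with $f = -25$ (the min).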
To your second question: it amounts to the fact that points where the gradient of $g$ is zero or undefined are "poorly behaved" points over the domain on which we're optimizing. For instance, consider the example:
Maximize/minimize the function $f(x,y) = y$ subject to the constraint $g(x,y) = x^2 - y^3 = 0$.
Verify that, although there are no Lagrange points, $f$ attains a minimum at $(x,y) = (0,0)$. To see what I mean by "poorly behaved", note that $x^2 - y^3 = 0$ traces out the graph of $y = x^{2/3}$, which fails to be differentiable at $x = 0$.
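Spelling out why the Lagrange condition fails here: $\nabla f = \lambda \nabla g$ reads
$$(0,1) = \lambda\,(2x,\,-3y^2).$$
The second component rules out $\lambda = 0$, so the first forces $x = 0$; the constraint $x^2 = y^3$ then forces $y = 0$, and the second component becomes $1 = 0$, a contradiction. Note also that $\nabla g(0,0) = (0,0)$: the minimizer is exactly a critical point of $g$, which is why such points have to be checked separately.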
answered Dec 10 '18 at 4:54, edited Dec 10 '18 at 5:05 – Omnomnomnom
I don't understand your second point. I can't see how your example shows why we look for critical points of the constraint function instead of the objective function. For my first question, could you explain what exactly goes wrong? Goes wrong as in why the following argument fails: a differentiable function has derivative $0$ at every critical point, and a global extremum is a local extremum (critical point) if it is in the interior.
– Saad, Dec 10 '18 at 5:28
@Saad as I said for the second problem: there are no Lagrange points. There are also no critical points for the objective function, which has gradient $(0,1)$. So, if we want to find the max/min, we have to look at something else. As it turns out, the critical points of $g$ are what we would have missed.
– Omnomnomnom, Dec 10 '18 at 5:32
@Saad as for your argument: I really don't see how it's supposed to apply to the first problem at all. Yes, $f$ has derivative $0$ at every critical point... but in this problem the derivative of $f$ is never $0$ (or undefined). Also, what do you mean by the "interior" in the context of this problem?
– Omnomnomnom, Dec 10 '18 at 5:35
@Saad here's a way to think of it, for the case of a non-empty interior (e.g. if the first problem were $g(x,y) \leq 25$). To begin, we look for critical points of the objective function on the $2$-dimensional interior. Then, we consider the problem of maximizing $f$ on the $1$-dimensional boundary which, as you observed, contains infinitely many points. Compare this to the $1$-dimensional problem where, after looking at the interior, we maximize the objective function on the $0$-dimensional boundary, i.e. the points $a$ and $b$.
– Omnomnomnom, Dec 10 '18 at 5:52
@Saad either your theorem has a more flexible definition of a Lagrange point than I’m used to, or your theorem assumes that the gradient of the constraint function is never zero or undefined over the domain in question.
– Omnomnomnom, Dec 10 '18 at 6:10
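As a concrete illustration of the interior-then-boundary procedure described in the comments above, here is a minimal SymPy sketch for the first problem with the inequality constraint $x^2 + y^2 \leq 25$ (an addition of ours, not part of the original discussion: it assumes the `sympy` package, and the variable names are invented for this example):

```python
import sympy as sp

x, y, lam = sp.symbols('x y lam', real=True)
f = 3*x + 4*y                 # objective
g = x**2 + y**2 - 25          # boundary is g = 0, interior is g < 0

# Stage 1: critical points of f on the 2-dimensional interior.
interior = sp.solve([sp.diff(f, x), sp.diff(f, y)], [x, y], dict=True)
print(interior)               # [] -- grad f = (3, 4) is never zero

# Stage 2: Lagrange points on the 1-dimensional boundary:
# solve grad f = lam * grad g together with g = 0.
lagrange = sp.solve(
    [sp.diff(f, x) - lam * sp.diff(g, x),
     sp.diff(f, y) - lam * sp.diff(g, y),
     g],
    [x, y, lam], dict=True)
for sol in lagrange:
    print(sol, '-> f =', f.subs(sol))
# Expected (order may vary): f = 25 at (3, 4) and f = -25 at (-3, -4).
```

Since the interior stage contributes no candidates, the max and min must both come from the boundary stage, exactly as in the answer above.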