Single scikit-learn/pandas algorithm for data with categorical variable

I am interested in a programming solution to a conceptual question I asked on the Data Science Stack Exchange. Based on the replies there (https://datascience.stackexchange.com/questions/41606/single-machine-learning-algorithm-for-multiple-classes-of-data-one-hot-encoder), there does not seem to be a simple algorithm for this, so I want to know the best way to program it.



How can I use pandas and scikit-learn on the combined data to get the same accuracy as fitting the subsets separately, while still using one machine learning model and one dataframe? Or is splitting the data and training separate models the only way to get optimal accuracy with pandas and scikit-learn?



import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# Dataframe with x1 = 0: linear regression recovers a slope of 1 as expected

df = pd.DataFrame(data=[{'x1': 0, 'x2': 1, 'y': 1},
                        {'x1': 0, 'x2': 2, 'y': 2},
                        {'x1': 0, 'x2': 3, 'y': 3},
                        {'x1': 0, 'x2': 4, 'y': 4}],
                  columns=['x1', 'x2', 'y'])

X = df[['x1', 'x2']]
y = df['y']
reg = LinearRegression().fit(X, y)
print(reg.predict(np.array([[0, 5]])))  # Output is 5 as expected

# Dataframe with x1 = 1: linear regression recovers a slope of 4 as expected

df = pd.DataFrame(data=[{'x1': 1, 'x2': 1, 'y': 4},
                        {'x1': 1, 'x2': 2, 'y': 8},
                        {'x1': 1, 'x2': 3, 'y': 12},
                        {'x1': 1, 'x2': 4, 'y': 16}],
                  columns=['x1', 'x2', 'y'])

X = df[['x1', 'x2']]
y = df['y']
reg = LinearRegression().fit(X, y)
print(reg.predict(np.array([[1, 5]])))  # Output is 20 as expected

# Combine the two dataframes (x1 = 0 and x1 = 1)

df = pd.DataFrame(data=[{'x1': 0, 'x2': 1, 'y': 1},
                        {'x1': 0, 'x2': 2, 'y': 2},
                        {'x1': 0, 'x2': 3, 'y': 3},
                        {'x1': 0, 'x2': 4, 'y': 4},
                        {'x1': 1, 'x2': 1, 'y': 4},
                        {'x1': 1, 'x2': 2, 'y': 8},
                        {'x1': 1, 'x2': 3, 'y': 12},
                        {'x1': 1, 'x2': 4, 'y': 16}],
                  columns=['x1', 'x2', 'y'])

X = df[['x1', 'x2']]
y = df['y']
reg = LinearRegression().fit(X, y)
print(reg.predict(np.array([[0, 5]])))  # Output is 8.75 while the optimal prediction is 5
print(reg.predict(np.array([[1, 5]])))  # Output is 16.25 while the optimal prediction is 20

# One-hot encode x1

df = pd.get_dummies(df, columns=["x1"], prefix=["x1"])
X = df[['x1_0', 'x1_1', 'x2']]
y = df['y']
reg = LinearRegression().fit(X, y)
print(reg.predict(np.array([[1, 0, 5]])))  # Output is 8.75 while the optimal prediction is 5
print(reg.predict(np.array([[0, 1, 5]])))  # Output is 16.25 while the optimal prediction is 20
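For context on why the last step fails: one-hot encoding alone only shifts the intercept per category, while the slope on x2 stays shared across both groups, so the pooled model cannot reproduce the per-group fits. A single LinearRegression can match the split models exactly if the feature matrix also carries an interaction term x1 * x2, giving each category its own slope. A minimal sketch of that idea (the column name x1_x2 is my own choice, not anything scikit-learn requires):

import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.DataFrame(data=[{'x1': 0, 'x2': 1, 'y': 1},
                        {'x1': 0, 'x2': 2, 'y': 2},
                        {'x1': 0, 'x2': 3, 'y': 3},
                        {'x1': 0, 'x2': 4, 'y': 4},
                        {'x1': 1, 'x2': 1, 'y': 4},
                        {'x1': 1, 'x2': 2, 'y': 8},
                        {'x1': 1, 'x2': 3, 'y': 12},
                        {'x1': 1, 'x2': 4, 'y': 16}])

# Interaction column: lets the slope on x2 differ between x1 = 0 and x1 = 1
df['x1_x2'] = df['x1'] * df['x2']

X = df[['x1', 'x2', 'x1_x2']]
y = df['y']
reg = LinearRegression().fit(X, y)

# The third feature of each query point is its own x1 * x2 product
print(reg.predict(np.array([[0, 5, 0]])))  # [5.], matching the x1 = 0 model
print(reg.predict(np.array([[1, 5, 5]])))  # [20.], matching the x1 = 1 model

With more than two categories, the same idea generalizes: multiply each one-hot dummy column by x2 so every category gets its own slope term.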

python pandas scikit-learn

asked Nov 24 '18 at 21:50 by user3631804
  • If you believe that different values of x1 lead to distinct linear models, then splitting is a good way to go; I have done it during investigations. Of course, you then get no coefficient/inference for x1, which may or may not be a problem for you. Another option for a linear model is to examine whether a mixed effects model matches the problem you are solving, but scikit-learn, which you mention as a constraint, does not currently have mixed effects. You could try whether other scikit-learn algorithms (trees) work, but I often investigate split data for trees as well, and there is no inference with trees.

    – Craig
    Nov 25 '18 at 14:09
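
If you do take the splitting route Craig describes, it does not require maintaining separate dataframes by hand: one dataframe plus groupby yields one fitted model per category. A minimal sketch under that assumption (the models dict and its keys are my own naming):

import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.DataFrame(data=[{'x1': 0, 'x2': 1, 'y': 1},
                        {'x1': 0, 'x2': 2, 'y': 2},
                        {'x1': 0, 'x2': 3, 'y': 3},
                        {'x1': 0, 'x2': 4, 'y': 4},
                        {'x1': 1, 'x2': 1, 'y': 4},
                        {'x1': 1, 'x2': 2, 'y': 8},
                        {'x1': 1, 'x2': 3, 'y': 12},
                        {'x1': 1, 'x2': 4, 'y': 16}])

# One LinearRegression per x1 category, all built from the single dataframe
models = {value: LinearRegression().fit(group[['x2']], group['y'])
          for value, group in df.groupby('x1')}

print(models[0].predict(pd.DataFrame({'x2': [5]})))  # [5.], the x1 = 0 fit
print(models[1].predict(pd.DataFrame({'x2': [5]})))  # [20.], the x1 = 1 fit

This keeps a single source dataframe and matches the per-group accuracy, at the cost of one model object per category and, as Craig notes, no pooled coefficient for x1.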