Very large variation in the time taken by a small write to PostgreSQL

I'm new to PostgreSQL, so any hints or tips are appreciated.

My program is very small. It receives data from a socket and inserts each record into a table that has no index, foreign key, constraint, or text field. In fact, the table has only a few numeric fields.
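For reference, this is roughly what my insert loop looks like. It is only a simplified sketch: the table name "samples", its columns, and read_record() are placeholders rather than my real code, but the structure is the same, one INSERT and one commit per record, timed individually.

// Simplified sketch of my receive-and-insert loop (hypothetical names).
#include <chrono>
#include <iostream>
#include <pqxx/pqxx>

struct Record { int a; double b; };          // a few numeric fields only
Record read_record() { return {42, 3.14}; }  // stand-in for the real socket read

int main() {
    pqxx::connection conn{"host=127.0.0.1 dbname=mydb user=myuser"};
    for (;;) {
        Record r = read_record();
        auto t0 = std::chrono::steady_clock::now();

        pqxx::work txn{conn};
        txn.exec("INSERT INTO samples (a, b) VALUES ("
                 + pqxx::to_string(r.a) + ", " + pqxx::to_string(r.b) + ")");
        txn.commit();                        // one commit per record

        auto t1 = std::chrono::steady_clock::now();
        auto us = std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count();
        std::cout << us << " us\n";          // this is the time that varies so widely
    }
}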



The data comes in from the network continuously, at roughly 100 to 1000 records per second. PostgreSQL is not tuned at all; everything runs with the default settings.

The trouble is that the time each insert takes varies over a very wide range: a single insert may take as long as 10 s or as little as 100 µs, with an average of about 900 µs.



I could increase the length of the buffer so that it is not exhausted while waiting for a very slow insert to finish, but I don't think that is the correct solution: no matter how long the buffer is, an even slower insert could still overflow it.
I guess 10 s to write a row into a tiny, plain table is abnormal for PostgreSQL, but I have no idea how to solve it, so I'm asking here in the hope that someone can point out my mistake or suggest some optimizations.
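To make "the buffer" concrete: it is a bounded queue between the socket-reader thread and the DB-writer thread, roughly like the sketch below. The names and the capacity are made up for illustration; the point is that no matter what BUF_CAP is, a single 10 s insert while data arrives at 100 to 1000 records per second can always fill it.

// Sketch of the application-side buffer (hypothetical names and capacity).
#include <condition_variable>
#include <cstddef>
#include <deque>
#include <mutex>

struct Record { int a; double b; };

class BoundedQueue {
    std::deque<Record> q_;
    std::mutex m_;
    std::condition_variable not_full_, not_empty_;
    static constexpr std::size_t BUF_CAP = 10000;  // ~10 s of data at 1000 records/s
public:
    void push(Record r) {                          // called by the socket thread
        std::unique_lock<std::mutex> lk(m_);
        // While one insert takes 10 s, the queue keeps growing; once it is
        // full, the socket thread stalls here and incoming data is delayed.
        not_full_.wait(lk, [this] { return q_.size() < BUF_CAP; });
        q_.push_back(r);
        not_empty_.notify_one();
    }
    Record pop() {                                 // called by the DB-writer thread
        std::unique_lock<std::mutex> lk(m_);
        not_empty_.wait(lk, [this] { return !q_.empty(); });
        Record r = q_.front();
        q_.pop_front();
        not_full_.notify_one();
        return r;
    }
};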



Environment: CentOS 7.5 64-bit, PostgreSQL 10, GCC 8.2.0, and the latest libpqxx.



This is my pg_hba.conf:



# TYPE  DATABASE        USER            ADDRESS                 METHOD

# "local" is for Unix domain socket connections only
local   all             all                                     trust

# IPv4 local connections:
host    all             all             127.0.0.1/32            trust

# IPv4 incoming connections:
host    all             all             all                     password

# IPv6 local connections:
host    all             all             ::1/128                 ident

# Allow replication connections from localhost, by a user with the
# replication privilege.
local   replication     all                                     peer
host    replication     all             127.0.0.1/32            ident
host    replication     all             ::1/128                 ident


and this is my postgresql.conf:



listen_addresses = '*'
max_connections = 100
shared_buffers = 128MB
dynamic_shared_memory_type = posix
log_destination = 'stderr'
logging_collector = on
log_directory = 'log'
log_filename = 'postgresql-%a.log'
log_truncate_on_rotation = on
log_rotation_age = 1d
log_rotation_size = 0
log_line_prefix = '%m [%p] '
log_timezone = 'PRC'
datestyle = 'iso, ymd'
timezone = 'PRC'
lc_messages = 'zh_CN.UTF-8'
lc_monetary = 'zh_CN.UTF-8'
lc_numeric = 'zh_CN.UTF-8'
lc_time = 'zh_CN.UTF-8'
default_text_search_config = 'pg_catalog.simple'


Sorry for my English; I'm Chinese. I appreciate your patience in reading this.
Thanks!

postgresql optimization database-connection libpqxx
asked Nov 24 '18 at 23:26 by Leon
edited Nov 25 '18 at 11:56
  • show your database configurations

    – user7294900
    Nov 25 '18 at 8:25

  • Thanks for your reply. The configs have been added.

    – Leon
    Nov 25 '18 at 11:57

  • How often do you commit? Are there any concurrent transactions that may cause a lock? Try to attach to the backend with strace and see where the time is spent.

    – Laurenz Albe
    Nov 25 '18 at 14:25

  • Sorry for the late reply; I've been busy these days. My program commits a record about every 1 µs to 1 ms. No explicit transaction is needed, because there is only one tiny table being inserted into (not updated). This table has no index/PK/FK/constraint.

    – Leon
    Nov 29 '18 at 3:24