fread and zcat issues

I'm using R 3.4.3 and Ubuntu 18.04.



Due to file size limitations I have to provide a dummy example.



Last night I was doing this and it worked (no updates or changes since then):



library(data.table)
data <- fread("zcat 5gb-file.csv.gz")


Now I am doing this:



library(data.table)
data1 <- fread("zcat 200mb-file.csv.gz")
data2 <- fread("zcat 100mb-file.csv.gz")


And now it returns the error: /dev/shm/file123456 not found.



I have more than 1TB free on /home and ~40GB free on /.



What can be causing this?
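A minimal workaround sketch, assuming that fread() stages the decompressed output of the shell command in a temporary file under the session temp directory (which on this machine appears to point at /dev/shm), is to decompress to a regular file on a disk-backed filesystem first and read that file directly; the /home path below is only a placeholder:

library(data.table)

# Placeholder directory on a disk-backed filesystem with plenty of free space.
tmp_csv <- tempfile(fileext = ".csv", tmpdir = "/home/me/tmp")

# Decompress with zcat into the disk-backed temp file instead of /dev/shm.
system(paste("zcat 200mb-file.csv.gz >", shQuote(tmp_csv)))

# Reading a plain CSV path needs no intermediate decompressed copy.
data1 <- fread(tmp_csv)

# Remove the temporary copy when done.
unlink(tmp_csv)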










r ubuntu data.table

asked Nov 22 '18 at 0:06 by pachamaltese
  • Can you execute the following command at your Linux command prompt and see what size your shm is: df -h | grep shm

    – Katia, Nov 22 '18 at 1:38











  • Thanks @Katia, it returns 3.9 GB while I am trying to zcat a 200 MB file.

    – pachamaltese, Nov 22 '18 at 13:37






  • What are the used and avail values in the output? The output should look something like: tmpfs 127G 17G 111G 13% /dev/shm. The first value (127 GB) in my case is the total size, then 17 GB is used and 111 GB is available. The fact that your input file is 200 MB means that when it is unzipped it will probably use more than 10 times as much RAM. Shm is also used by other processes, so you might not have enough space in shm. Did you try the read.csv() or read_csv() functions? How much RAM do you have on your machine?

    – Katia, Nov 23 '18 at 16:13
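A quick way to confirm where those intermediate files go, assuming the /dev/shm path in the error is R's session temp directory, is to check it from the same R session:

# Where R (and hence fread's temporary file for a shell command) writes temp files.
tempdir()

# TMPDIR, usually set before R starts, controls where tempdir() points.
Sys.getenv("TMPDIR")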






  • One more suggestion: you can try to read just part of the file using the nrows argument to fread() and see if this succeeds for a relatively small number (like nrows=10), and then, if it works, increase the number of records.

    – Katia, Nov 23 '18 at 16:14
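A minimal sketch of that incremental test, reusing the dummy file name from the question (if even nrows = 10 fails, the problem is the intermediate temp file rather than the table size):

library(data.table)

# Try a tiny read first, then scale up only if it succeeds.
peek  <- fread("zcat 200mb-file.csv.gz", nrows = 10)
chunk <- fread("zcat 200mb-file.csv.gz", nrows = 100000)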











  • Thanks a lot again @Katia. I saw that, and in my case the decompressed CSV is around 200 MB (compressed it is 18 MB). I am also using only about 8% of shm, which has 128 GB assigned. I tried read_csv() and it works, but it returns reading errors where fread() is robust. I guess I have to format the computer after this magically happened.

    – pachamaltese, Nov 23 '18 at 17:03