Process model in early UNIX

I heard recently that the process model in very early variants of UNIX was quite a bit different to the fork/exec model used nowadays.



How did it differ from the current state?







unix operating-system






asked Nov 22 '18 at 7:11







user6464




















  • I could have sworn this was asked already but apparently not.

    – Alex Hajnal
    Nov 22 '18 at 7:20






  • I thought it sounded familiar. I mentioned some of this in passing in my answer to Why did Unix use slash as the directory separator? (citing Ritchie's paper as my source). IOW, not a dupe.

    – Alex Hajnal
    Nov 22 '18 at 7:35

1 Answer
If you search for the seminal 1979 paper by Dennis Ritchie, entitled The Evolution of the Unix Time-Sharing System, it covers this (amongst a few other things, such as the incredibly difficult-to-use file-system links, directories only being creatable at boot time, and why the password file has a GECOS field).



First, we'll recap the current model. A process is a unit of execution within UNIX, while a program is a runnable item that lives within a process (while it is running). That distinction is important.



A running process that wants to start a new process calls fork, which gives you two nearly identical processes running the same program (at the same point) where only one existed before.



At that point, one of them (usually the child) may choose to exec a new program to perform some other work; exec replaces the program in the current process with a whole new one.



Should the original process wish to wait until the child exits, it can call wait to do so.
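
To make the modern half of that concrete, here is a minimal sketch of the standard fork/exec/wait pattern in C. It is illustrative only and not taken from the answer itself; /bin/ls is just an arbitrary example command, and error handling is kept to the bare minimum:

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();                /* one process becomes two */

        if (pid == -1) {
            perror("fork");
            return 1;
        }

        if (pid == 0) {
            /* Child: replace this program with a new one. */
            execl("/bin/ls", "ls", "-l", (char *)NULL);
            perror("execl");               /* reached only if exec failed */
            _exit(127);
        }

        /* Parent: wait for the child to exit, much as a shell does. */
        int status;
        waitpid(pid, &status, 0);
        printf("child exited with status %d\n", WEXITSTATUS(status));
        return 0;
    }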





The old model was somewhat similar, but it only ever had a fixed number of processes, one for each of the terminals hooked up to the machine. These processes were created at boot time, so there was no fork. A shell ran in each of these processes, interacting with the user on that terminal.



When the user specified a program to run, the shell would:




  1. Create a link to the file in the current directory (this has to do with the "incredibly difficult-to-use file-system links" mentioned earlier).


  2. Open the file.


  3. Remove the link.


  4. Copy a small bootstrap program to the top of memory and jump to it.


  5. The bootstrap program would read the already-open file in over the current shell code, then jump to the first location of the command (in effect, an exec).


  6. After the command had done its work, it called exit. But this wasn't the exit we know and love nowadays: all it did was reload the shell program into the process, in much the same way as the shell had loaded the command in the first place.



At that point, you would be back in the shell, ready to type in another command. As you may imagine, this had no support for pipelines/filters but, interestingly, it had I/O redirection from a very early stage: all the shell had to do was connect the standard handles to specific files rather than to the terminal device.
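
Purely for illustration (the real early shell was PDP-7 assembly; nothing below is historical code, and the "toysh" prompt is invented), here is a toy C program in the spirit of that old model: it never forks, so running a command replaces the "shell" within the same process. The other half of the round trip, where the command's exit reloaded the shell, has no modern equivalent and is only noted in the comments; redirection in that model amounted to attaching a standard handle to a file instead of the terminal before this point.

    /*
     * Toy illustration only, not historical code: a "shell" with no fork.
     * exec'ing a command replaces the shell itself, as in the old model.
     * In early UNIX the command's exit() would then reload the shell into
     * this same process; here the process simply terminates instead.
     */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        char line[256];

        printf("toysh> ");
        fflush(stdout);
        if (fgets(line, sizeof line, stdin) == NULL)
            return 0;
        line[strcspn(line, "\n")] = '\0';   /* strip the trailing newline */

        /* No fork: the command is loaded over this process's own program. */
        execlp(line, line, (char *)NULL);
        perror("execlp");                   /* reached only if exec failed */
        return 127;
    }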







edited Nov 22 '18 at 7:27









Alex Hajnal

answered Nov 22 '18 at 7:11







user6464




















  • IIRC, though not in the initial release, pipes were trivial to add; trivial as in one person spending an afternoon coding (or something like that).

    – Alex Hajnal
    Nov 22 '18 at 7:22











  • ^^^ Citation needed. If anyone has a primary source to back up my assertion, kindly post below.

    – Alex Hajnal
    Nov 22 '18 at 7:39








  • @Alex, the paper states that: "Some time later, thanks to McIlroy’s persistence, pipes were finally installed in the operating system (a relatively simple job)". Not the bit about it taking an afternoon, and presumably we had to wait until we had a proper fork (to get more than one process per terminal), but it says it was easy.

    – user6464
    Nov 22 '18 at 8:19











  • Yea, I saw that mention in the paper. The bit about it taking an afternoon (or overnight) is I think from an oral history interview.

    – Alex Hajnal
    Nov 22 '18 at 8:28








  • Found it: "The basic redirectability of input-output made it easy to put pipes in when Doug McIlroy finally persuaded Ken Thompson to do it. In one feverish night Ken wrote and installed the pipe system call, added pipes to the shell, and modified several utilities, such as pr and ov … to be usable as filters. The next day saw an unforgettable orgy of one-liners as everybody joined in the excitement of plumbing." A Research UNIX Reader: Annotated Excerpts from the Programmer’s Manual, 1971-1986, M. Douglas McIlroy, p. 9

    – Alex Hajnal
    Nov 22 '18 at 15:16