To understand Linux, you have to go back to the birth of Unix. In 1969, Dennis Ritchie and Ken Thompson, a couple of programmers at Bell Labs, wanted to continue their research into operating systems. Bell Labs had been participating in Multics, a research project with MIT and GE to create an operating system that would provide an information utility. The idea was a lot like what’s now called “cloud computing,” but in the 1960s it was the operating system equivalent of Duke Nukem Forever, with development proceeding slowly. Bell Labs pulled out of the project, leaving Thompson and Ritchie missing the programming environment they had experienced on Multics. They used a Digital Equipment Corporation PDP-7, which was considered old even then, to implement a much simpler system. They called it Unics, a pun on Multics: Multics stood for Multiplexed Information and Computing Service, and since their system was a simpler, “castrated” version, it was dubbed the Uniplexed Information and Computing Service. The name was later shortened to Unix.

Despite its humble beginnings, Unix spread like wildfire within Bell Labs. One major innovation was the ability to send the output of one program to the input of another, letting programmers build applications out of pre-existing programs like LEGO bricks. Another was the decision to rewrite Unix in C, a language created by Ritchie (and later documented in a famous book he wrote with Brian Kernighan). C is a high-level language: any computer with a C compiler can run programs written in it. Previously, operating systems were written in assembly language for one specific computer. Rewriting Unix in C made it a portable operating system, able to run on different computers with very few changes.
Unix spread outside Bell Labs after Thompson and Ritchie published a paper on it in the prestigious computer science journal Communications of the ACM. AT&T, Bell Labs’ parent company, licensed it to universities essentially for free because it was barred from competing in non-telephone markets. One of the universities that got hold of it was UC Berkeley, where programmers quickly started making modifications, since the system came with its source code. Their version was dubbed BSD, the Berkeley Software Distribution, and it added innovations such as TCP/IP networking and various other utilities.
In the meantime, AT&T started enforcing its intellectual property more aggressively. A programmer at MIT’s AI Lab, Richard Stallman, was not pleased with this, and started the GNU project, which stands for “GNU’s Not Unix,” as a free replacement. Stallman explained his reasoning in the GNU Manifesto and recruited programmers to build free (as in speech, as well as in beer) programs that came with source code and explicitly gave permission for anyone to modify and redistribute improved versions. The last piece, which proved difficult, was the kernel, the very heart of the operating system.

Around the same time, a computer science professor named Andrew Tanenbaum wrote a book on operating systems to replace an earlier book by John Lions, which had included the complete source code of an early version of Unix along with commentary. Tanenbaum created a free Unix-like system he called Minix and included it with his book.

One of the many people who used Minix was a Finnish graduate student named Linus Torvalds. He wanted to explore the 386 microprocessor, so he decided to write his own kernel just for fun, modeled on the Unix systems he was accustomed to using, and he announced it on Usenet in late 1991. When his kernel was combined with the GNU tools, the result proved a formidable system, one that could compete with Windows and Mac OS (which is itself now based on Unix).

If you look at the history of Linux, it’s clear that Linus had some pretty big giants to stand on the shoulders of.

Photo credits: Wikipedia, Martin Streicher (Linus Torvalds photo), Sam Williams (Stallman photo)