Why Unix?

Steven Zeil

Last modified: Aug 29, 2019

We can actually interpret the title question in a number of different ways. Do we mean, “why does the CS department use Unix?”, or “why is this course about Unix?”, or “why was Unix invented?”, or even “why does Unix behave the way that it does?”

It’s actually the last of these interpretations that I want to address, although understanding the answer to that question will go a long way toward explaining why the CS department uses Unix in most of its courses and, therefore, the very reason for the existence of this course.

I think that, to really understand a number of the fundamental behaviors of Unix, it helps to consider how Unix differs from an operating system that is probably much more familiar to most of you, Microsoft Windows. Furthermore, to understand the differences between these operating systems, you need to look at the history of computer hardware and system software evolution in effect at the time when each of these operating systems was designed. In particular, I want to focus on three ideas: the evolution of CPUs and process support, the evolution of display technology, and the evolution of networking technology.

Computer historians are fond of pointing out that mainframe computers were huge behemoths, occupying massive rooms, drawing large amounts of electrical power for their operation, and often requiring cooling systems fully as large as the processor itself. For some time, processors continued to be physically large, although the processing power squeezed into that space grew tremendously.

On the early machines, only a single program could be run at any given time. As processors became more powerful, both hardware and system software evolved to permit more than one program to run simultaneously on the same processor. This is called multiprocessing. The initial reason for doing multiprocessing was to allow programs from many different users (programmers) to run at once. This, in turn, is called multiprogramming. At first, it was assumed that a single user had no need for more than one process at a time.

Interactive programs are characterized by long periods of idleness, in which they are awaiting the next input from the user. In an interactive environment, it becomes natural for users to switch attention from one process that is awaiting input or, in some cases, conducting a lengthy calculation, to another process that has become more interesting. For example, someone using a word processor might want to switch over to their calendar to look up an important date before returning to the word processor and typing that date into their document. Fortunately, once you have support for multiprogramming, you have most of what you need for combined multiprogramming and multiprocessing.

In fact, there is a definite advantage to having started with multiprogramming. In a multiprogramming environment, there is a great danger that one programmer’s buggy software could crash and, by rewriting portions of memory or resetting machine parameters, take down not only that programmer’s program but other programs that happened to be running on the machine at the same time. Consequently, multiprogramming systems place a heavy emphasis on security, erecting hardware and software barriers between processes that make it very difficult for one process to affect others in any way. Adding that kind of protection onto an operating system that wasn’t designed for it in the first place is much harder.

The trend toward multiprogramming and multiprocessing persisted, not only across families of mainframe computers, but also across the increasing number of desk-size “minicomputers”. It is in this context that Unix was developed. From the very beginning, Unix was therefore envisioned as an operating system that would provide support for both multiprocessing and multiprogramming.

[Figure: IBM 026 card code]

During the heyday of the mainframe, most data was entered on punch cards and most output went directly to a printer. Most of these systems had a “console” where commands could be entered directly from a keyboard, and output received directly on an electric typewriter-like printer, but such input was slow and inexact. Prior to multiprocessing, it would have been economic folly to tie up an expensive CPU waiting for someone to type commands and read output at merely human speeds. So the system console generally saw use only for booting up the system, running diagnostics when something was going wrong, or issuing commands to the computer center staff (e.g., “Please mount magnetic tape #4107 on drive 2.”).


The advent of multiprocessing and the subsequent rise of interactive computing applications meant that the single system console, hidden away in the computer room where only the computer center staff ever touched it, was replaced with a number of computer terminals accessible to the programmers and data entry staff. An early computer terminal was, basically, a keyboard for input and an electric typewriter for output.

Terminals were not cheap, but their lifetime cost was actually dominated by the amount of paper they consumed. In fairly short order, the typewriter output was replaced by a CRT screen. This opened up new possibilities in output. A CRT screen can be cleared, output to it can be written at different positions on the screen, and portions of the screen can be rewritten without rewriting the entire thing. None of these things can be done when you are printing directly onto a roll of paper.

Terminal manufacturers began to add control sequences: combinations of character codes that, instead of printing directly, would instruct the terminal to shift the location where the next characters would appear, to change to bold-face or underlined characters, to clear the screen, and so on. All of this wizardry was hard-wired – there were no integrated-circuit CPUs that could be embedded into the box and programmed to produce the desired results. Consequently, terminals were quite expensive (the fancier ones costing as much as a typical new car). Different manufacturers selected their control sequences as much based upon what they could wire in easily as upon any desire for uniformity or compatibility. As a result, there were eventually hundreds of models of CRT-based computer terminals, all of which used incompatible sets of control sequences.
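The flavor of these control sequences can still be seen today: one vendor family’s codes (those of DEC’s VT100) were eventually standardized as the ANSI escape sequences honored by virtually all modern terminal emulators. A minimal sketch, assuming a terminal that understands ANSI sequences (each sequence begins with the escape character, octal 033):

```shell
# Clear the entire screen
printf '\033[2J'

# Move the cursor to row 5, column 10 before printing
printf '\033[5;10H'

# Switch character attributes on and off around text
printf '\033[1mbold\033[0m and \033[4munderlined\033[0m text\n'
```

Each sequence is just a short run of ordinary character codes; a terminal that recognizes them acts on them instead of printing them, while an incompatible terminal of the era would have printed gibberish – which is exactly the compatibility problem described above.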

Embedded microprocessors eventually simplified the design of computer terminals considerably (sinking a number of companies along the way that had made their money leasing the older expensive models), and the capabilities of computer terminals began to grow, including the addition of graphics and color. Eventually, PCs became cheap enough that the whole idea of a dedicated box serving merely as a terminal came into question, and the computer terminal now exists as a separate entity only in very special circumstances, although there are periodic attempts to revive the idea (e.g., so-called Internet appliances).

Before there was a World-Wide Web, there was an Internet. The Internet grew out of a deliberate attempt to allow researchers all around the country access to the limited number of highly expensive mainframe CPUs. Internet traffic was originally dominated by telnet, a protocol for issuing text commands to a computer via the Internet, and FTP, a protocol for transferring files from machine to machine via the Internet. Email came along later.

In imitation of (and perhaps in jealousy of) the Internet, UseNet evolved as an anarchic collection of mainframe and minicomputers that each knew a handful of telephone numbers of other UseNet computers and could pass email and news (a.k.a. bulletin board) entries along those connections.

As the idea of long range networking took hold, more and more sites began installing local area networks to enable communication among their own machines.

Unix evolved for minicomputers in an historical context where

- multiprocessing and multiprogramming were the norm, so support for multiple simultaneous processes and users, with protective barriers between them, was expected from the start,
- input and output flowed through text-based terminals, with hundreds of mutually incompatible models in use, and
- machines were increasingly connected to one another over local and long-range networks.

[Figure: Altair 8800 at the Computer History Museum]

When personal computers (PCs) came on the scene, they represented a revolution in terms of both decreased size and decreased cost, but they represented a step backwards in terms of total computing power and in terms of the sophistication of the hardware support for many systems programming activities.

Oddly enough, PC systems seemed to recap the entire history of computing up until that time, though at a somewhat faster pace:

MSDOS was developed in a context where

- the machine was assumed to have a single user running a single program at a time,
- that program had direct, unprotected access to the display and the rest of the hardware, and
- the machine stood alone, with no network connections.

As MSDOS evolved into Windows, it did so in response to changes in the HW/SW context:

- users came to expect graphical displays and to run multiple programs at once,
- PC hardware gained the power and support (e.g., memory protection) needed for true multiprocessing, and
- networking, first local-area and then the Internet, became commonplace on PCs.

Of course, both MS Windows and Unix continued to evolve past their earliest forms, but the contexts in which they have evolved helped establish their fundamental philosophy and continues to influence how they work today.

All this may help to explain why the CS Dept. makes such heavy use of Unix. It’s not that we dislike MS Windows. But if we require a particular software package (say, a compiler) in our classes, under Unix we install it on a few Unix machines and let students run it from remote locations via the Internet. Under Windows, we would have to install it on every CS Dept machine and ask that it be installed on every machine in the ODU laboratories (and at the distance learning sites). And when an updated version of that package comes out, we have to go through the entire process all over again. It’s just far, far easier to maintain a consistent working environment for all students under Unix.

1 All in the *nix Family

Is Unix still relevant? Will you really ever need to use or work in a Unix environment?

Unix is an old operating system. Sometimes its age shows. Some Unix applications seem to use odd choices for interpreting special keys; some interpret mouse clicks in unexpected (to Windows users) ways; some have odd mechanisms for copying and pasting; and sometimes the windows look funny. Take something as simple as copy-and-paste. Windows users know that the keyboard shortcut to paste information is Control-v. This is pretty much universal among Windows programs. But in the Unix emacs editor, instead of “pasting”, you “yank” information with Control-y. Ask an emacs fan why emacs didn’t adopt the same “standard” keystroke as the rest of the world was already using, and you will be given a condescending smile while the fan explains that emacs adopted the Control-y yank long before Microsoft even existed, and that the real question is why the rest of the world agreed to the idea that the letter “v” was a logical choice to stand for “paste”.

Unix is a new operating system. It has been continually changed, updated, evolved, and used as the basis for new “Unix-like” operating systems. In fact, it really makes more sense to think of Unix as a family of operating systems (sometimes denoted as “*nix”). There is an actual standards process that allows an operating system to certify that it is a “real” Unix. But there may be even more machines out there running Unix-like operating systems, in other words, something in the *nix family.

You might well be using a Unix or Unix-like operating system without realizing it:

- Apple’s macOS is built on a Unix foundation (and is itself a certified Unix),
- Android phones and tablets run atop a Linux kernel,
- Linux powers everything from desktops to the bulk of the world’s Internet servers, and
- many routers, smart TVs, and other embedded devices run stripped-down Linux or BSD variants.

So there’s actually a pretty good chance that any computerized system that you lay your hands on that isn’t running Windows will actually be running something in the Unix family. Although some of these hide the operating system deeply enough that you won’t notice it when you use them, if your future profession should involve programming for any of them, you may find yourself working on a *nix system as the closest thing to the platform where your code will eventually run.