A practical guide to Fedora and Red Hat Enterprise Linux, 7th Edition (2014)
Chapter 1. Welcome to Linux
In This Chapter
The History of UNIX and GNU–Linux
The Heritage of Linux: UNIX
Open-Source Software and Licensing
What Is So Good About Linux?
Overview of Linux
The Shell: Command Interpreter and Programming Language
GUIs: Graphical User Interfaces
Choosing an Operating System
After reading this chapter you should be able to:
Discuss the history of UNIX, Linux, and the GNU Project
Explain what is meant by “free software” and list characteristics of the GNU General Public License
List three characteristics that are important to you when selecting an operating system
Describe the intent of the term FOSS/FLOSS; explain how it relates to free and open-source software
Explain what a Linux distribution is and list three popular distributions
List characteristics of Linux and reasons the Linux operating system is so popular
Explain what a desktop environment is and name three desktop environments
An operating system is the low-level software that schedules tasks, allocates storage, and handles the interfaces to peripheral hardware, such as printers, disk drives, the screen, keyboard, and mouse. An operating system has two main parts: the kernel and the system programs. The kernel allocates machine resources—including memory, disk space, and CPU (page 1244) cycles—to all other programs that run on the computer. The system programs include device drivers, libraries, utility programs, shells (command interpreters), configuration scripts and files, application programs, servers, and documentation. They perform higher-level housekeeping tasks, often acting as servers in a client/server relationship. Many of the libraries, servers, and utility programs were written by the GNU Project, which is discussed shortly.
The Linux kernel was developed by Linus Torvalds while he was an undergraduate student at the University of Helsinki. He released version 0.01 in September 1991 from his home in Finland. He used the Internet to make the source code immediately available to others for free.
The new operating system came together through a lot of hard work. Programmers around the world were quick to extend the kernel and develop other tools, adding functionality to match that already found in both BSD UNIX and SVR4 (System V UNIX, release 4) as well as new functionality. The name Linux is a combination of Linus and UNIX.
The Linux operating system, which was developed through the cooperation of numerous people around the world, is a product of the Internet and is a free (FOSS; page 7) operating system. In other words, all the source code is free. You are free to study it, redistribute it, and modify it. As a result, the code is available free of cost—no charge for the software, source, documentation, or support (via newsgroups, mailing lists, and other Internet resources). As the GNU Free Software Definition (www.gnu.org/philosophy/free-sw.html) puts it:
“Free software” is a matter of liberty, not price. To understand the concept, you should think of “free” as in “free speech,” not as in “free beer.”
The History of UNIX and GNU–Linux
This section presents some background on the relationships between UNIX and Linux and between GNU and Linux. Visit www.levenez.com/unix for an extensive history of UNIX.
The Heritage of Linux: UNIX
The UNIX system was developed by researchers who needed a set of modern computing tools to help them with their projects. The system allowed a group of people working together on a project to share selected data and programs while keeping other information private.
Universities and colleges played a major role in furthering the popularity of the UNIX operating system through the “four-year effect.” When the UNIX operating system became widely available in 1975, Bell Labs offered it to educational institutions at nominal cost. The schools, in turn, used it in their computer science programs, ensuring that computer science students became familiar with it. Because UNIX was such an advanced development system, the students became acclimated to a sophisticated programming environment. As these students graduated and went into industry, they expected to work in a similarly advanced environment. As more of them worked their way up the ladder in the commercial world, the UNIX operating system found its way into industry.
Berkeley UNIX (BSD)
In addition to introducing students to the UNIX operating system, the CSRG (Computer Systems Research Group) at the University of California at Berkeley made significant additions and changes to it. In fact, it made so many popular changes that one version of the system is called the BSD (Berkeley Software Distribution) of the UNIX system, or just Berkeley UNIX. The other major version is UNIX System V (SVR4), which descended from versions developed and maintained by AT&T and UNIX System Laboratories.
Fade to 1983
Richard Stallman (www.stallman.org) announced the GNU Project for creating an operating system, both kernel and system programs, and presented the GNU Manifesto, which begins as follows:
GNU, which stands for Gnu’s Not UNIX, is the name for the complete UNIX-compatible software system which I am writing so that I can give it away free to everyone who can use it.
Some years later, Stallman added a footnote to the preceding sentence when he realized that it was creating confusion:
The wording here was careless. The intention was that nobody would have to pay for *permission* to use the GNU system. But the words don’t make this clear, and people often interpret them as saying that copies of GNU should always be distributed at little or no charge. That was never the intent; later on, the manifesto mentions the possibility of companies providing the service of distribution for a profit. Subsequently I have learned to distinguish carefully between “free” in the sense of freedom and “free” in the sense of price. Free software is software that users have the freedom to distribute and change. Some users may obtain copies at no charge, while others pay to obtain copies—and if the funds help support improving the software, so much the better. The important thing is that everyone who has a copy has the freedom to cooperate with others in using it.
In the manifesto, after explaining a little about the project and what has been accomplished so far, Stallman continues:
Why I Must Write GNU
I consider that the golden rule requires that if I like a program I must share it with other people who like it. Software sellers want to divide the users and conquer them, making each user agree not to share with others. I refuse to break solidarity with other users in this way. I cannot in good conscience sign a nondisclosure agreement or a software license agreement. For years I worked within the Artificial Intelligence Lab to resist such tendencies and other inhospitalities, but eventually they had gone too far: I could not remain in an institution where such things are done for me against my will.
So that I can continue to use computers without dishonor, I have decided to put together a sufficient body of free software so that I will be able to get along without any software that is not free. I have resigned from the AI Lab to deny MIT any legal excuse to prevent me from giving GNU away.
Next Scene, 1991
The GNU Project has moved well along toward its goal. Much of the GNU operating system, except for the kernel, is complete. Richard Stallman later writes:
By the early ’90s we had put together the whole system aside from the kernel (and we were also working on a kernel, the GNU Hurd, which runs on top of Mach). Developing this kernel has been a lot harder than we expected, and we are still working on finishing it.
...[M]any believe that once Linus Torvalds finished writing the kernel, his friends looked around for other free software, and for no particular reason most everything necessary to make a UNIX-like system was already available.
What they found was no accident—it was the GNU system. The available free software (see www.gnu.org/philosophy/free-sw.html) added up to a complete system because the GNU Project had been working since 1984 to make one. The GNU Manifesto had set forth the goal of developing a free UNIX-like system, called GNU. The Initial Announcement of the GNU Project also outlines some of the original plans for the GNU system. By the time Linux was written, the [GNU] system was almost finished.
Today the GNU “operating system” runs on top of the FreeBSD (www.freebsd.org) and NetBSD (www.netbsd.org) kernels with complete Linux binary compatibility and on top of Hurd pre-releases and Darwin (developer.apple.com/opensource) without this compatibility.
The Code Is Free
The tradition of free software dates back to the days when UNIX was released to universities at nominal cost, which contributed to its portability and success. This tradition eventually died as UNIX was commercialized and manufacturers came to regard the source code as proprietary, making it effectively unavailable. Another problem with the commercial versions of UNIX related to their complexity. As each manufacturer tuned UNIX for a specific architecture, the operating system became less portable and too unwieldy for teaching and experimentation.
Two professors created their own stripped-down UNIX look-alikes for educational purposes: Doug Comer created XINU, and Andrew Tanenbaum created MINIX. Linus Torvalds created Linux to counteract the shortcomings in MINIX. Every time there was a choice between code simplicity and efficiency/features, Tanenbaum chose simplicity (to make it easy to teach with MINIX), which meant this system lacked many features people wanted. Linux went in the opposite direction.
You can obtain Linux at no cost over the Internet (page 46). You can also obtain the GNU code via the U.S. mail at a modest cost for materials and shipping. You can support the Free Software Foundation (www.fsf.org) by buying the same (GNU) code in higher-priced packages, and you can buy commercially packaged releases of Linux that include installation instructions, software, and support.
Linux and GNU software are distributed under the terms of the GPL (GNU General Public License; www.gnu.org/licenses/licenses.html). The GPL says you have the right to copy, modify, and redistribute the code covered by the agreement. When you redistribute the code, however, you must also distribute the same license with the code, thereby making the code and the license inseparable. If you download source code from the Internet for an accounting program that is under the GPL and then modify that code and redistribute an executable version of the program, you must also distribute the modified source code and the GPL agreement with it. Because this arrangement is the reverse of the way a normal copyright works (it gives rights instead of limiting them), it has been termed a copyleft. (This paragraph is not a legal interpretation of the GPL; it is intended merely to give you an idea of how it works. Refer to the GPL itself when you want to make use of it.)
Linux software, including tools, applications, and systems software, is distributed under one of several licenses. In addition to GPLv2 and GPLv3, many other licenses are in use. See www.gnu.org/licenses/license-list.html#SoftwareLicenses for a partial list.
Linux Is More than a Kernel
Although technically Linux is the name of the kernel, the term Linux has come to mean much more. By itself, a kernel is not of much use, so the Linux kernel is typically packaged with utilities and application programs. Many of the utilities that are distributed with the Linux kernel were written as part of the FSF’s GNU Project (page 3). Although the FSF prefers to refer to the combination of programs as GNU/Linux, most people simply call it Linux. As more people wanted to run Linux, groups formed that packaged the kernel and a collection of utilities from the GNU Project, BSD, and other sources, as well as application programs and servers.
These packages of kernel and programs are distributions of Linux and are referred to simply as distributions or distros. A distribution typically includes word processors, spreadsheets, media players, database applications, and a program to install the distribution. In addition, a distribution includes libraries and utilities from the GNU Project and graphics support from the X Window System.
All distributions are based on the same upstream code, although each might include different applications and tools. Distributions distinguish themselves in the areas of package management and installation tools, policies, community, and support.
First released in 1992, Yggdrasil was the first company to create a CD-ROM–based Linux distribution (it was also a live CD). First released in 1993, Slackware is the oldest distribution of Linux that is still being maintained. Today there are many distributions, including commercially backed distributions such as Fedora/Red Hat Enterprise Linux, Ubuntu, Mandriva, and openSUSE, and community-driven distributions such as Debian, Gentoo, and Mageia.
Embedded and mobile Linux
Some companies have taken the GNU/Linux code, grown and customized it, and moved it to mobile, embedded, and consumer devices. For example, the ubiquitous Linksys WRT54G wireless router, the Cadillac XTS CUE infotainment system, and the Tivo DVR all run Linux. Linux powers many machine control, industrial automation, and medical instrumentation systems. The most widely distributed Linux release is Android, which is designed for smartphones and tablet computers. Visit linuxgizmos.com/category/devices for an up-to-date list.
Open-Source Software and Licensing
On January 22, 1998, Netscape announced it would make the source code for its next Web browser available for free licensing on the Internet. The following month, Netscape created a project named Mozilla to coordinate development of the Mozilla Application Suite. Netscape’s decisions were business decisions. The headline on its press release said it wanted to harness the creative power of thousands of Internet developers (blog.lizardwrangler.com/2008/01/22/january-22-1998-the-beginning-of-mozilla). Netscape wanted to improve its source code by participating in an active community.
Inspired by Netscape’s bold move, a group of people got together the next month and held a strategy session that created the term open source (opensource.org/history). This term provided a label for Netscape’s approach and contrasted with the term free software, which was used as a philosophical and political label rather than a pragmatic one. From that session, Eric Raymond and Bruce Perens founded OSI (Open Source Initiative; opensource.org) in late February 1998 to encourage the use of the new term.
The terms FOSS (free and open-source software) and FLOSS (free/libre/open-source software; the libre denotes freedom) were brought into common use shortly after OSI was formed. Free software, the term coined and supported by the FSF (it is in the name of the organization), suggests the freedoms it gives its users (as in free speech), whereas open-source software suggests the strengths of the peer-to-peer development model. The terms FOSS and FLOSS incorporate both free software and open-source software and can be used without indicating a preference for either term.
The license and the community
The combination of a license that forces you to share software you create and improve (GPL; page 5) and a community that is comfortable sharing software and expects software to be shared builds the best and most reliable software. Netscape had the right idea.
But how can you make money with this setup? Several companies have done quite well in this environment, notably Red Hat. Red Hat sells training, subscriptions to automated software updates, documentation, and support for FOSS. Another way of making money with FOSS is to improve or extend existing software and get paid for your time; the software goes back to the community when you are done. Although it does not follow FOSS ideals, some companies offer a free version of their software and sell a version with more features (e.g., VMware).
Two key words for Linux are “Have Fun!” These words pop up in prompts and documentation. The UNIX—now Linux—culture is steeped in humor that can be seen throughout the system. For example, less is more—GNU has replaced the UNIX paging utility named more with an improved utility named less. The utility to view PostScript documents is named ghostscript, and one of several replacements for the vi editor is named elvis. While machines with Intel processors have “Intel Inside” logos on their outside, some Linux machines sport “Linux Inside” logos. And Torvalds himself has been seen wearing a T-shirt bearing a “Linus Inside” logo.
What Is So Good About Linux?
In recent years Linux has emerged as a powerful and innovative UNIX work-alike. Its popularity has surpassed that of its UNIX predecessors. Although it mimics UNIX in many ways, the Linux operating system departs from UNIX in several significant ways: The Linux kernel is implemented independently of both BSD and System V, the continuing development of Linux is taking place through the combined efforts of many capable individuals throughout the world, and Linux puts the power of UNIX within easy reach of both business and personal computer users. Using the Internet, today’s skilled programmers submit additions and improvements to open-source developers such as Linus Torvalds and the GNU Project.
In 1985, individuals from companies throughout the computer industry joined together to develop the POSIX (Portable Operating System Interface for Computer Environments) standard, which is based largely on the UNIX SVID (System V Interface Definition) and other earlier standardization efforts. These efforts were spurred by the U.S. government, which needed a standard computing environment to minimize its training and procurement costs. Released in 1988, POSIX is a group of IEEE standards that define the API (application programming interface), shell, and utility interfaces for an operating system. Although aimed at UNIX-like systems, the standards can apply to any compatible operating system. Now that these standards have gained acceptance, software developers are able to develop applications that run on all conforming versions of UNIX, Linux, and other operating systems.
A rich selection of applications is available for Linux—both free and commercial—as well as a wide variety of tools: graphical, word processing, networking, security, administration, Web server, and many others. Large software companies have recently seen the benefit in supporting Linux and now have on-staff programmers whose job it is to design and code the Linux kernel, GNU, KDE, and other software that runs on Linux. For example, IBM (www.ibm.com/linux) is a major Linux supporter. Linux conforms increasingly more closely to POSIX standards, and some distributions and parts of others meet this standard. These developments indicate that Linux is becoming mainstream and is respected as an attractive alternative to other popular operating systems.
Another aspect of Linux that appeals to users is the amazing range of peripherals that is supported and the speed with which support for new peripherals emerges. Linux often supports a peripheral or interface card before any company does. Unfortunately some types of peripherals—particularly proprietary graphics cards—lag in their support because the manufacturers do not release specifications or source code for drivers in a timely manner, if at all.
Also important to users is the amount of software that is available—not just source code (which needs to be compiled) but also prebuilt binaries that are easy to install and ready to run. These programs include more than free software. Netscape, for example, was available for Linux from the start and included Java support before it was available from many commercial vendors. Its sibling Mozilla/Thunderbird/Firefox is now a viable browser, mail client, and newsreader, performing many other functions as well.
Linux is not just for Intel-based platforms (which now include Apple computers): It has been ported to and runs on the PowerPC—including older Apple computers (ppclinux)—Compaq’s (née Digital Equipment Corporation) Alpha-based machines, MIPS-based machines, Motorola’s 68K-based machines, various 32- and 64-bit systems, and IBM’s S/390. Linux also runs on multiple-processor machines (SMPs; page 1273) and includes an O(1) scheduler, which dramatically increases scalability on SMP systems.
Linux supports programs, called emulators, that run code intended for other operating systems. By using emulators you can run some DOS, Windows, and Macintosh programs under Linux. For example, Wine (www.winehq.com) is an open-source implementation of the Windows API that runs on top of the X Window System and UNIX/Linux.
A virtual machine (VM or guest) appears to the user and to the software running on it as a complete physical machine. It is, however, one of potentially many such VMs running on a single physical machine. See Chapter 17 for more information.
Linux Is Popular with Hardware Companies and Developers
Two trends in the computer industry set the stage for the growing popularity of UNIX and Linux. First, advances in hardware technology created the need for an operating system that could take advantage of available hardware power. In the mid-1970s, minicomputers began challenging the large mainframe computers because, in many applications, minicomputers could perform the same functions less expensively. More recently, powerful 64-bit processor chips, plentiful and inexpensive memory, and lower-priced hard disk storage have allowed hardware companies to install multiuser operating systems on desktop computers.
Proprietary operating systems
Second, with the cost of hardware continually dropping, hardware manufacturers could no longer afford to develop and support proprietary operating systems. A proprietary operating system is one that is written and owned by the manufacturer of the hardware (for example, DEC/Compaq owns VMS). Today’s manufacturers need a generic operating system they can easily adapt to their machines.
Generic operating systems
A generic operating system is written outside of the company manufacturing the hardware and is sold (UNIX, OS X, Windows) or given (Linux) to the manufacturer. Linux is a generic operating system because it runs on different types of hardware produced by different manufacturers. Of course, if manufacturers can pay only for development and avoid per-unit costs (which they have to pay to Microsoft for each copy of Windows they sell), they are much better off. In turn, software developers need to keep the prices of their products down; they cannot afford to create new versions of their products to run under many different proprietary operating systems. Like hardware manufacturers, software developers need a generic operating system.
Although the UNIX system once met the needs of hardware companies and researchers for a generic operating system, over time it has become more proprietary as manufacturers added support for their own specialized features and introduced new software libraries and utilities. Linux emerged to serve both needs: It is a generic operating system that takes advantage of available hardware power.
Linux Is Portable
A portable operating system is one that can run on many different machines. More than 95 percent of the Linux operating system is written in the C programming language, and programs written in C are portable because C is a higher-level, machine-independent language. (The C compiler itself is written in C.)
Because Linux is portable, it can be adapted (ported) to different machines and can meet special requirements. For example, Linux is used in embedded computers, such as the ones found in cellphones, PDAs, and the cable boxes on top of many TVs. The file structure takes full advantage of large, fast hard disks. Equally important, Linux was originally designed as a multiuser operating system—it was not modified to serve several users as an afterthought. Sharing the computer’s power among many users and giving them the ability to share data and programs are central features of the system.
Because it is adaptable and takes advantage of available hardware, Linux runs on many different microprocessor-based systems as well as mainframes. The popularity of the microprocessor-based hardware drives Linux; these microcomputers are getting faster all the time at about the same price point. This widespread acceptance benefits both users, who do not like having to learn a new operating system for each vendor’s hardware, and system administrators, who like having a consistent software environment.
The advent of a standard operating system has given a boost to the development of the software industry. Now software manufacturers can afford to make one version of a product available on machines from different manufacturers.
The C Programming Language
Ken Thompson wrote the UNIX operating system in 1969 in PDP-7 assembly language. Assembly language is machine dependent: Programs written in assembly language work on only one machine or, at best, on one family of machines. For this reason, the original UNIX operating system could not easily be transported to run on other machines: It was not portable.
To make UNIX portable, Thompson developed the B programming language, a machine-independent language, from the BCPL language. Dennis Ritchie developed the C programming language by modifying B and, with Thompson, rewrote UNIX in C in 1973. Originally, C was touted as a “portable assembler.” The revised operating system could be transported more easily to run on other machines.
That development marked the start of C. Its roots reveal some of the reasons why it is such a powerful tool. C can be used to write machine-independent programs. A programmer who designs a program to be portable can easily move it to any computer that has a C compiler. C is also designed to compile into very efficient code. With the advent of C, a programmer no longer had to resort to assembly language to produce code that would run well (that is, quickly), although carefully hand-tuned assembly can still outperform compiled code in some cases.
C is a good systems language. You can write a compiler or an operating system in C. It is a highly structured but not necessarily a high-level language. C allows a programmer to manipulate bits and bytes, as is necessary when writing an operating system. At the same time, it has high-level constructs that allow for efficient, modular programming.
In the late 1980s, ANSI (the American National Standards Institute) defined a standard version of the C language, commonly referred to as ANSI C or C89 (for the year the standard was published). Ten years later the C99 standard was published. The original version of the language is often referred to as Kernighan & Ritchie (or K&R) C, named for the authors of the book that first described the C language.
Another researcher at Bell Labs, Bjarne Stroustrup, created an object-oriented programming language named C++, which is built on the foundation of C. Because object-oriented programming is desired by many employers today, C++ is preferred over C in many environments. Another language of choice is Objective-C, which was used to write the first Web browser. The GNU Project’s C compiler (gcc) supports C, C++, and Objective-C.
Overview of Linux
The Linux operating system has many unique and powerful features. Like other operating systems, it is a control program for computers. But like UNIX, it is also a well-thought-out family of utility programs (Figure 1-1) and a set of tools that allow users to connect and use these utilities to build systems and applications.
Figure 1-1 A layered view of the Linux operating system
Linux Has a Kernel Programming Interface
The Linux kernel—the heart of the Linux operating system—is responsible for allocating the computer’s resources and scheduling user jobs so each one gets its fair share of system resources, including access to the CPU; peripheral devices, such as hard disk, DVD, and tape storage; and printers. Programs interact with the kernel through system calls, special functions with well-known names. A programmer can use a single system call to interact with many kinds of devices. For example, there is one write() system call, rather than many device-specific ones. When a program issues a write() request, the kernel interprets the context and passes the request to the appropriate device. This flexibility allows old utilities to work with devices that did not exist when the utilities were written. It also makes it possible to move programs to new versions of the operating system without rewriting them (provided the new version recognizes the same system calls).
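This device independence can be sketched in a few lines. The fragment below uses Python’s standard os module (chosen here only for brevity; the kernel interface itself is C) to issue the same write() call against a regular file and against standard output. The scratch file name is illustrative.

```python
import os
import tempfile

# The kernel routes one write() system call to whatever object the file
# descriptor refers to; the calling program does not change.
path = os.path.join(tempfile.mkdtemp(), "demo.txt")  # illustrative scratch file

fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
os.write(fd, b"hello, kernel\n")   # write() to a regular file on disk
os.close(fd)

os.write(1, b"hello, kernel\n")    # the same call, to standard output (fd 1)
```

Because the dispatch happens inside the kernel, the same code works unchanged whether the descriptor refers to a disk file, a terminal, or a pipe.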
Linux Can Support Many Users
Depending on the hardware and the types of tasks the computer performs, a Linux system can support from 1 to more than 1,000 users, each concurrently running a different set of programs. The per-user cost of a computer that can be used by many people at the same time is less than that of a computer that can be used by only a single person at a time. It is less because one person cannot generally take advantage of all the resources a computer has to offer. That is, no one can keep all the printers going constantly, keep all the system memory in use, keep all the disks busy reading and writing, keep the Internet connection in use, and keep all the terminals busy at the same time. By contrast, a multiuser operating system allows many people to use all of the system resources almost simultaneously. The use of costly resources can be maximized, and the cost per user can be minimized; these are the primary objectives of a multiuser operating system.
Linux Can Run Many Tasks
Linux is a fully protected multitasking operating system, allowing each user to run more than one job at a time. Processes can communicate with one another but remain fully protected from one another, just as the kernel remains protected from all processes. You can run several jobs in the background while giving all your attention to the job being displayed on the screen, and you can switch back and forth between jobs. If you are running the X Window System (page 16), you can run different programs in different windows on the same screen and watch all of them. This capability helps users be more productive.
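As a minimal sketch of protected multitasking, the following fragment (Python is used here only as a compact way to call the underlying UNIX interfaces) creates a second process with fork(). Parent and child run concurrently in separate, protected address spaces, and the parent collects the child’s exit status with waitpid(). The exit value 7 is arbitrary.

```python
import os
import time

pid = os.fork()                    # create a second, concurrently running process
if pid == 0:
    # Child: a "background job" with its own protected address space.
    time.sleep(0.1)
    os._exit(7)                    # arbitrary status, reported to the parent

# Parent: free to do other work, then reap the child.
_, status = os.waitpid(pid, 0)
exit_code = os.WEXITSTATUS(status)
print("background job finished with status", exit_code)
```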
Linux Provides a Secure Hierarchical Filesystem
A file is a collection of information, such as text for a memo or report, an accumulation of sales figures, an image, a song, or an executable program. Each file is stored under a unique identifier on a storage device, such as a hard disk. The Linux filesystem provides a structure whereby files are arranged under directories, which are like folders or boxes. Each directory has a name and can hold other files and directories. Directories, in turn, are arranged under other directories and so forth in a treelike organization. This structure helps users keep track of large numbers of files by grouping related files in directories. Each user has one primary directory and as many subdirectories as are required (Figure 1-2).
Figure 1-2 The Linux filesystem structure
With the idea of making life easier for system administrators and software developers, a group got together over the Internet and developed the Linux Filesystem Standard (FSSTND), which has since evolved into the Linux Filesystem Hierarchy Standard (FHS). Before this standard was adopted, key programs were located in different places in different Linux distributions. Today you can sit down at a Linux system and expect to find a given standard program at a consistent location (page 189).
A link allows a given file to be accessed by means of two or more names. The alternative names can be located in the same directory as the original file or in another directory. Links can make the same file appear in several users’ directories, enabling those users to share the file easily. Windows uses the term shortcut in place of link to describe this capability. Macintosh users will be more familiar with the term alias. Under Linux, an alias is different from a link; it is a command macro feature provided by the shell (page 392).
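The ln utility creates these links. Here is a brief sketch; the filenames are made up for the example:

```shell
# Create a file, then give it a second name with a hard link and a
# third name with a symbolic link (all filenames are illustrative).
echo 'quarterly figures' > report.txt
ln report.txt sales.txt        # hard link: two names for one file
ln -s report.txt latest.txt    # symbolic link: a pointer to the name
cat latest.txt                 # displays the contents of report.txt
```

Either name can now be used to read or modify the same data; removing one name does not destroy the file as long as another hard link remains.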
Like most multiuser operating systems, Linux allows users to protect their data from access by other users. It also allows users to share selected data and programs with certain other users by means of a simple but effective protection scheme. This level of security is provided by file access permissions, which limit the users who can read from, write to, or execute a file. Linux also implements ACLs (Access Control Lists), which give users and administrators finer-grained control over file access permissions, and SELinux, which gives users and administrators more control over access control.
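As a small sketch of the permission scheme, the chmod utility sets a file's access permissions; the filename and mode below are illustrative:

```shell
# Allow the owner to read and write the file, group members to read
# it, and everyone else no access at all (mode 640).
echo secret > notes.txt
chmod 640 notes.txt
stat -c %a notes.txt    # prints 640 on systems with GNU stat
```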
The Shell: Command Interpreter and Programming Language
In a textual environment, the shell—the command interpreter—acts as an interface between you and the operating system. When you enter a command at the keyboard, the shell interprets the command and calls the program you want. A number of shells are available for Linux. The four most popular shells are
• The Bourne Again Shell (bash), an enhanced version of the Bourne Shell (one of the original UNIX shells).
• The Debian Almquist Shell (dash; page 329), a smaller version of bash with fewer features. Many startup shell scripts call dash in place of bash to speed the boot process.
• The TC Shell (tcsh), an enhanced version of the C Shell that was developed as part of BSD UNIX.
• The Z Shell (zsh), which incorporates features from a number of shells, including the Korn Shell.
Because different users might prefer different shells, multiuser systems can have several different shells in use at any given time. The choice of shells demonstrates one of the advantages of the Linux operating system: the ability to provide a customized interface for each user.
Besides performing its function of interpreting commands from a keyboard and sending those commands to the operating system, the shell is a high-level programming language. Shell commands can be arranged in a file for later execution (Linux calls these files shell scripts; Windows calls them batch files). This flexibility allows users to perform complex operations with relative ease, often by issuing short commands, or to build with surprisingly little effort elaborate programs that perform highly complex operations.
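As a minimal illustration, the following lines create a one-line shell script, make it executable, and run it; the script name countfiles is invented for the example:

```shell
# Write a tiny shell script that counts the entries in a directory.
cat > countfiles <<'EOF'
#!/bin/bash
# countfiles: report how many entries the named directory holds
ls "$1" | wc -l
EOF
chmod +x countfiles            # make the script executable

# Try it on a directory containing three files.
mkdir -p demo && touch demo/a demo/b demo/c
./countfiles demo              # prints 3
```

Once it is executable and in a directory the shell searches, a script like this runs the same way any other command does.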
Wildcards and ambiguous file references
When you type commands to be processed by the shell, you can construct patterns using characters that have special meanings to the shell. These characters are called wildcard characters. The patterns, called ambiguous file references, are a kind of shorthand: Rather than typing in complete filenames, you can type patterns; the shell expands these patterns into matching filenames. An ambiguous file reference can save you the effort of typing in a long filename or a long series of similar filenames. For example, the shell might expand the pattern mak*tar.gz to make-3.80.tar.gz. Patterns can also be useful when you know only part of a filename or cannot remember the exact spelling of a filename.
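A short demonstration of wildcard expansion, using made-up filenames; echo simply prints whatever the shell expands each pattern to:

```shell
# Create some files, then watch the shell expand patterns into
# the filenames that match them (filenames are illustrative).
touch make-3.80.tar.gz memo1 memo2 notes
echo mak*tar.gz    # * matches any string: prints make-3.80.tar.gz
echo memo?         # ? matches one character: prints memo1 memo2
```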
In conjunction with the Readline Library, the shell performs command, filename, pathname, and variable completion: You type a prefix and press TAB, and the shell lists the items that begin with that prefix or completes the item if the prefix specifies a unique item.
Device-Independent Input and Output
Devices (such as a printer or a terminal) and disk files appear as files to Linux programs. When you give a command to the Linux operating system, you can instruct it to send the output to any one of several devices or files. This diversion is called output redirection.
In a similar manner, a program’s input, which normally comes from a keyboard, can be redirected so that it comes from a disk file instead. Input and output are device independent; that is, they can be redirected to or from any appropriate device.
As an example, the cat utility normally displays the contents of a file on the screen. When you run a cat command, you can easily cause its output to go to a disk file instead of the screen.
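A sketch of both kinds of redirection; the filenames are arbitrary:

```shell
# Output redirection: > sends a command's output to a file
# instead of the screen; >> appends instead of overwriting.
echo 'line one'  > memo
echo 'line two' >> memo
cat memo > memo.copy     # cat's output goes to a file, not the screen

# Input redirection: < makes a command read a file instead of
# the keyboard; here wc counts the lines it reads.
wc -l < memo.copy
```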
One of the most important features of the shell is that users can use it as a programming language. Because the shell is an interpreter, it does not compile programs written for it but rather interprets programs each time they are loaded from the disk. Loading and interpreting programs can be time-consuming.
Many shells, including the Bourne Again Shell, support shell functions that the shell holds in memory so it does not have to read them from the disk each time you execute them. The shell also keeps functions in an internal format so it does not have to spend as much time interpreting them.
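As an illustration, the following defines a small bash function (the name greet is invented for the example); once defined, it runs from memory like a built-in command, with no file read from disk:

```shell
# Define a shell function; the shell stores it in memory in an
# internal format, so calling it involves no disk access.
greet () {
    # print a greeting for each name given as an argument
    for name in "$@"; do
        echo "Hello, $name"
    done
}

greet Alex Jenny    # prints a line for each argument
```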
Job control is a shell feature that allows users to work on several jobs at once, switching back and forth between them as desired. When you start a job, it is frequently run in the foreground so it is connected to the terminal. Using job control, you can move the job you are working with to the background and continue running it there while working on or observing another job in the foreground. If a background job then needs your attention, you can move it to the foreground so it is once again attached to the terminal. The concept of job control originated with BSD UNIX, where it appeared in the C Shell.
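Job control itself is interactive—you suspend a foreground job with CTRL-Z and move it with bg and fg—but the underlying ability to run jobs in the background can be sketched in a script:

```shell
# Start two background jobs, list them, then wait for both to
# finish before continuing (the sleep commands stand in for
# longer-running work).
sleep 1 &
pid1=$!          # $! holds the PID of the most recent background job
sleep 1 &
pid2=$!
jobs             # list the jobs this shell is managing
wait "$pid1" "$pid2"
echo 'both jobs finished'
```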
X Window System
History of X
The X Window System (also called X or X11; www.x.org) was created in 1984 at MIT (the Massachusetts Institute of Technology) by researchers working on a distributed computing project and a campuswide distributed environment called Project Athena. This system was not the first windowing software to run on a UNIX system, but it was the first to become widely available and accepted. In 1985, MIT released X (version 9) to the public for use without a license. Three years later, a group of vendors formed the X Consortium to support the continued development of X, under the leadership of MIT. By 1998, the X Consortium had become part of the Open Group. In 2001, the Open Group released X version 11, release 6.6 (X11R6.6).
The X Window System was inspired by the ideas and features found in earlier proprietary window systems but is written to be portable and flexible. X is designed to run on a workstation, typically attached to a LAN. The designers built X with the network in mind. If you can communicate with a remote computer over a network, running an X application on that computer and sending the results to a local display is straightforward.
Although the X protocol has remained stable for a long time, additions to it in the form of extensions are quite common. One of the most interesting—albeit one that has not yet made its way into production—is the Media Application Server, which aims to provide the same level of network transparency for sound and video that X does for simple windowing applications.
XFree86 and X.org
Many distributions of Linux used the XFree86 X server, which inherited its license from the original MIT X server, through release 4.3. In early 2004, just before the release of XFree86 4.4, the XFree86 license was changed to one that is more restrictive and not compatible with the GPL (page 5). In the wake of this change, a number of distributions abandoned XFree86 and replaced it with an X.org X server that is based on a pre-release version of XFree86 4.4, which predates the change in the XFree86 license. Fedora/RHEL uses the X.org X server, named Xorg; it is functionally equivalent to the one distributed by XFree86 because most of the code is the same. Thus modules designed to work with one server work with the other.
Computer networks are central to the design of X. It is possible to run an application on one computer and display the results on a screen attached to a different computer; the ease with which this can be done distinguishes X from other window systems available today. Thanks to this capability, a scientist can run and manipulate a program on a powerful supercomputer in another building or another country and view the results on a personal workstation or laptop computer. For more information refer to “Remote Computing and Local Displays” on page 460.
GUIs: Graphical User Interfaces
The X Window System provides the foundation for the GUIs available with Linux. Given a terminal or workstation screen that supports X, a user can interact with the computer through multiple windows on the screen, display graphical information, or use special-purpose applications to draw pictures, monitor processes, or preview formatted output. Because X is an across-the-network protocol, it allows a user to open a window on a workstation or computer system that is remote from the CPU generating the window. Conceptually X is very simple. As a consequence, it does not provide some of the more common features found in GUIs, such as the ability to drag windows. The UNIX/Linux philosophy is one of modularity: X relies on a window manager that in turn relies on a desktop manager/environment.
Usually two layers run on top of X: a desktop manager and a window manager. A desktop manager is a picture-oriented user interface that enables you to interact with system programs by manipulating icons instead of typing the corresponding commands to a shell. Fedora/RHEL runs the GNOME desktop manager (www.gnome.org) by default, but X can also run KDE (www.kde.org) and a number of other desktop managers.
A window manager is a program that runs under the desktop manager and allows you to open and close windows, run programs, and set up a mouse so it has different effects depending on how and where you click it. The window manager also gives the screen its personality. Whereas Microsoft Windows allows you to change the color of key elements in a window, a window manager under X allows you to customize the overall look and feel of the screen: You can change the way a window looks and works (by giving it different borders, buttons, and scrollbars), set up virtual desktops, create menus, and more. When you are working from the command line, you can approximate a window manager by using Midnight Commander (mc).
Several popular window managers run under X and Linux. RHEL provides both Metacity (the default under GNOME 2) and kwin (the default under KDE). In addition to KDE, Fedora provides Mutter (the default under GNOME 3). Mutter is short for Metacity Clutter (the graphics library is named Clutter). Other window managers, such as Sawfish and WindowMaker, are also available. Chapter 4 presents information on using a window manager and other components of a GUI.
Unlike a window manager, which has a clearly defined task, a desktop environment (manager) does many things. In general, a desktop environment, such as GNOME or KDE, provides a means of launching applications and utilities, such as a file manager, that work with a window manager.
GNOME and KDE
The KDE Project began in 1996, with the aim of creating a consistent, user-friendly desktop environment for free UNIX-like operating systems. KDE is based on the Qt toolkit made by Trolltech. When KDE development began, the Qt license was not compatible with the GPL (page 5). For this reason the FSF decided to support a different project, GNOME (the GNU Network Object Model Environment). Qt has since been released under the terms of the GPL, eliminating part of the rationale for GNOME’s existence.
GNOME (www.gnome.org) is the default desktop environment for Fedora/RHEL. It provides a simple, coherent user interface that is suitable for corporate use. GNOME uses GTK for drawing widgets. GTK, developed for the GNU Image Manipulation Program (gimp), is written in C, although bindings for C++ and other languages are available.
GNOME does not take much advantage of its component architecture. Instead, it continues to support the traditional UNIX philosophy of relying on many small programs, each of which is good at doing a specific task.
KDE (kde.org) is written in C++ on top of the Qt framework. KDE tries to use existing technology, if it can be reused, but creates its own if nothing else is available or if a superior solution is needed. For example, KDE implemented an HTML rendering engine long before the Mozilla project was born. Similarly, work on KOffice began a long time before StarOffice became the open-source OpenOffice.org (which is now LibreOffice). In contrast, the GNOME office applications are stand-alone programs that originated outside the GNOME Project. KDE’s portability is demonstrated by the use of most of its core components, including Konqueror and KOffice, under Mac OS X.
Since the release of version 2, the GNOME Project has focused on simplifying the user interface, removing options where they are deemed unnecessary, and aiming for a set of default settings that the end user will not wish to change. Fedora 15 introduced GNOME 3, which is radically different from GNOME 2, following the trend toward simpler, more graphical desktops that have more icons and fewer menus. KDE has moved in the opposite direction, emphasizing configurability.
The freedesktop.org group (freedesktop.org), whose members are drawn from the GNOME and KDE Projects, is improving interoperability and aims to produce standards that will allow the two environments to work together. One standard released by freedesktop.org allows applications to use the notification area of either the GNOME or KDE panel without being aware of which desktop environment they are running in.
Other Desktop Environments
In addition to GNOME and KDE, there are other popular desktop environments. See page 118 for information on and installation instructions for the WindowMaker, Xfce, and LXDE desktop environments.
A Large Collection of Useful Utilities
Linux includes a family of several hundred utility programs, often referred to as commands. These utilities perform functions that are universally required by users. And each utility tends to do one thing and do it well. The sort utility, for example, puts lists (or groups of lists) in alphabetical or numerical order and can be used to sort lists by part number, last name, city, ZIP code, telephone number, age, size, cost, and so forth. The sort utility is an important programming tool that is part of the standard Linux system. Other utilities allow users to create, display, print, copy, search, and delete files as well as to edit, format, and typeset text. The man (for manual) and info utilities provide online documentation for Linux.
Pipelines and filters
Linux enables users to establish both pipelines and filters on the command line. A pipeline passes the output of one program to another program as input. A filter is a program that processes a stream of input data to yield a stream of output data. Within a pipeline, a filter processes another program’s output, altering it as a result; the filter’s output then becomes input to the next program.
Pipelines and filters frequently join utilities to perform a specific task. For example, you can use a pipeline to send the output of the sort utility to head (a filter that lists the first ten lines of its input); you can then use another pipeline to send the output of head to a third utility, lpr, that sends the data to a printer. Thus, in one command line, you can use three unrelated utilities together to sort and print part of a file.
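Leaving out the final lpr stage (so nothing is actually sent to a printer), the sort-and-head portion of such a pipeline looks like this; the data file is made up for the example:

```shell
# Build a small unsorted list, then pipe sort's output to head,
# which keeps only the first three lines of its input.
printf 'pear\napple\nplum\nfig\n' > fruit
sort fruit | head -n 3    # prints: apple, fig, pear (one per line)
```

Appending `| lpr` to the same command line would send those three lines to the printer, joining three utilities in one command.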
Linux network support includes many utilities that enable you to access remote systems over a variety of networks. In addition to sending email to users on other systems, you can access files on disks mounted on other computers as if they were located on the local system, make your files available to other systems in a similar manner, copy files back and forth, run programs on remote systems while displaying the results on the local system, and perform many other operations across local area networks (LANs) and wide area networks (WANs), including the Internet.
Layered on top of this network access is a wide range of application programs that extend the computer’s resources around the globe. You can carry on conversations with people throughout the world, gather information on a wide variety of subjects, and download new software over the Internet quickly and reliably. Chapter 8 discusses networks, the Internet, and the Linux network facilities.
On a Linux system the system administrator is frequently the owner and only user of the system. This person has many responsibilities. The first responsibility might be to set up the system, install the software, and possibly edit configuration files. Once the system is up and running, the system administrator is responsible for downloading and installing software (including upgrading the operating system), backing up and restoring files, and managing such system facilities as printers, terminals, servers, and a local network. The system administrator is also responsible for setting up accounts for new users on a multiuser system, bringing the system up and down as needed, monitoring the system, and taking care of any problems that arise.
One of Linux’s most impressive strengths is its rich software development environment. Linux supports compilers and interpreters for many computer languages. Besides C and C++, languages available for Linux include Ada, Fortran, Java, Lisp, Pascal, Perl, and Python. The bison utility generates parsing code that makes it easier to write programs to build compilers (tools that parse files containing structured information). The flex utility generates scanners (code that recognizes lexical patterns in text). The make utility and the GNU Configure and Build System make it easier to manage complex development projects. Source code management systems, such as CVS, simplify version control. Several debuggers, including ups and gdb, can help you track down and repair software defects. The GNU C compiler (gcc) works with the gprof profiling utility to help programmers identify potential bottlenecks in a program’s performance. The C compiler includes options to perform extensive checking of C code, thereby making the code more portable and reducing debugging time.
Choosing an Operating System
If you are reading this book, you are probably at least considering using or installing Linux; you know there is an alternative to Windows (Microsoft) and OS X (Macintosh) systems. This chapter details many of the features that make Linux a great operating system. But how do you decide whether Linux is for you? Following is a list of some of the factors that go into choosing an operating system.
• Look and feel—Can you get comfortable using the GUI? What is the learning curve?
• Cost—How much does the operating system cost? What is included at that cost?
• Ease of use—Is the GUI easy to use?
• Bundled software—What software comes with the operating system and what other software will you have to buy?
• Hardware requirements—How expensive is the hardware needed to run the operating system at a speed you will be comfortable with?
• Bugs—How long does it take to fix a bug, and how often are updates released?
• Security—How secure is the operating system? What do you need to do to make it secure enough to meet your needs? Do you need to purchase security software? Do you need to buy antivirus software?
• Available software—How much software is available for the operating system?
Windows, OS X, and Linux each have their strengths and weaknesses. Frequently, look and feel and ease of use depend on what you are used to. Linux allows you to customize the GUI more than OS X and much more than Windows, so you may be able to customize it so you are more comfortable working with it. If you are changing operating systems, give yourself enough time to get used to the new one. Linux does have a learning curve: It will be challenging but rewarding.
One of the biggest benefits of Linux is that you can fix it yourself; you do not have to wait for Microsoft or Apple to find and fix a bug when you run into a problem.
Linux wins on cost hands down: it is free software. In addition, Linux comes with a lot of bundled software, including LibreOffice, a complete Microsoft-compatible office suite, and much more. And you can download additional software for free. Windows comes with very little bundled software; you need to buy office software and most other software separately. Again, OS X falls somewhere in between.
Windows is frequently called bloated—it takes the latest and greatest hardware to run it at an acceptable speed. OS X runs only on Apple hardware, which is expensive compared with the hardware required to run Windows (PC hardware). Linux runs on PC hardware, but uses the hardware a lot more lightly than does Windows. On a nonserver system the GUI typically uses a lot of the available horsepower of a system. On older hardware you can run one of the lightweight GUIs (page 118) in place of GNOME or KDE and get good performance. Depending on what you are doing, you can get excellent performance using only the CLI on minimal hardware.
Because of community support, Linux gets excellent marks for support: bug fixes are distributed frequently and are easy to apply. Windows support has improved but is nowhere near the level of Linux support. OS X gets mixed reviews. And viruses are almost unknown on Linux, so less security support is needed.
The Linux operating system grew out of the UNIX heritage to become a popular alternative to traditional systems (that is, Windows) available for microcomputer (PC) hardware. UNIX users will find a familiar environment in Linux. Distributions of Linux contain the expected complement of UNIX utilities, contributed by programmers around the world, including the set of tools developed as part of the GNU Project. The Linux community is committed to the continued development of this system. Support for new microcomputer devices and features is added soon after the hardware becomes available, and the tools available on Linux continue to be refined. Given the many commercial software packages available to run on Linux platforms and the many hardware manufacturers offering Linux on their systems, it is clear that the system has evolved well beyond its origin as an undergraduate project to become an operating system of choice for academic, commercial, professional, and personal use.
Linux is packaged in distributions, each of which has an installer and a collection of software packages appropriate to the purpose of the distribution. A distribution typically includes a way to keep its software up-to-date.
Linux and much of the related software is FOSS/FLOSS (Free [Libre] Open Source Software). These terms combine the meanings of free software coined by the FSF (Free Software Foundation) and open-source software, which originated with OSI (Open Source Initiative), and can be used without indicating a preference for either.
1. What is free software? List three characteristics of free software.
2. Why is Linux popular? Why is it popular in academia?
3. What are multiuser systems? Why are they successful?
4. What is Linux? What is the Free Software Foundation/GNU? Which parts of the Linux operating system did each provide? Who else has helped build and refine this operating system?
5. In which language is Linux written? What does the language have to do with the success of Linux?
6. What is a distribution? What does it contain? Name three distributions.
7. What is the difference between the terms free software and open-source software? Who coined each term?
8. What is a utility program?
9. What is a shell? How does it work with the kernel? With the user?
10. How can you use utility programs and a shell to create your own applications?
11. Why is the Linux filesystem referred to as hierarchical?
12. What is the difference between a multiuser and a multitasking system?
13. Give an example of when you would want to use a multitasking system.
14. Approximately how many people wrote Linux? Why is this project unique?
15. What are the key terms of the GNU General Public License?