Using and Administering Linux: Volume 1: Zero to SysAdmin: Getting Started
Ebook, 1,014 pages, 12 hours


About this ebook

Become a Linux sysadmin and expert user of Linux, even with no previous Linux experience, and learn to manage complex systems with ease. Volume 1 of this three-volume training course introduces operating systems in general and Linux in particular. It briefly explores The Linux Philosophy for SysAdmins in preparation for the rest of the course. This book provides you with the tools necessary for mastering user management; installing, updating, and deleting software; and using command-line tools to do performance tuning and basic problem determination.

You’ll begin by creating a virtual network and installing an instance of Fedora – a popular and powerful Linux distribution – on a VirtualBox VM that can be used for all of the experiments on an existing Windows or Linux computer. You’ll then move on to the basics of using the Xfce GUI desktop and the many tools Linux provides for working on the command line, including virtual consoles, various terminal emulators, Bash, and other shells.

Explore data streams and the Linux tools used to manipulate them, learn about the Vim text editor, which is indispensable to advanced Linux users and system administrators, and be introduced to some other text editors. You’ll also see how to install software updates and new software, learn about additional terminal emulators, and pick up some advanced shell skills. Examine the sequence of events that takes place as the computer boots and Linux starts up, configure your shell to personalize it in ways that can seriously enhance your command-line efficiency, and delve into all things file and filesystem.

What You Will Learn

  • Install Fedora Linux and perform basic configuration of the Xfce desktop
  • Access the root user ID, and understand the care that must be taken when working as root
  • Use Bash and other shells in the Linux virtual consoles and terminal emulators
  • Create and modify system configuration files using the Vim text editor
  • Explore administrative tools available to root that enable you to manage users, filesystems, processes, and basic network communications
  • Configure the boot and startup sequences

Who This Book Is For

Anyone who wants to learn Linux as an advanced user and system administrator at the command line while using the GUI desktop to boost productivity.


Language: English
Publisher: Apress
Release date: Dec 10, 2019
ISBN: 9781484250495

    Book preview

    Using and Administering Linux - David Both

    © David Both 2020

    D. Both, Using and Administering Linux: Volume 1, https://doi.org/10.1007/978-1-4842-5049-5_1

    1. Introduction

    David Both¹

    (1) Raleigh, NC, USA

    Objectives

    After reading this chapter, you will be able to

    Define the value proposition of Linux

    Describe at least four attributes that make Linux desirable as an operating system

    Define the meaning of the term free when it is applied to open source software

    State the Linux Truth and its meaning

    Describe how open source software makes the job of the SysAdmin easier

    List some of the traits found in a typical SysAdmin

    Describe the structure of the experiments used throughout this course

    List two types of terminal environments that can be used to access the Linux command line

    About Linux

    The value of any software lies in its usefulness not in its price.

    —Linus Torvalds¹

    The preceding quote from Linus Torvalds, the creator of Linux,² perfectly describes the value proposition of free open source software (FOSS) and particularly Linux. Expensive software that performs poorly or does not meet the needs of the users can in no way be worth any amount of money. On the other hand, free software that meets the needs of the users has great value to those users.

    Most open source software³ falls in the latter category. It is software that millions of people find extremely useful, and that is what gives it such great value. I have personally downloaded and used only one proprietary software application in the more than 20 years that I have been using Linux.

    Linux itself is a complete, open source operating system that is open, flexible, stable, scalable, and secure. Like all operating systems, it provides a bridge between the computer hardware and the application software that runs on it. It also provides tools that can be used by a system administrator (SysAdmin) to monitor and manage the following things:

    1. The functions and features of the operating system itself
    2. Productivity software like word processors; spreadsheets; financial, scientific, industrial, and academic software; and much more
    3. The underlying hardware, for example, temperatures and operational status
    4. Software updates to fix bugs
    5. Upgrades to move from one release level of the operating system to the next higher level

    The tasks that need to be performed by the system administrator are inseparable from the philosophy of the operating system, both in terms of the tools which are available to perform them and the freedom afforded to the SysAdmin in their performance of those tasks. Let’s look very briefly at the origins of both Linux and Windows and explore a bit about how the philosophies of their creators affect the job of a SysAdmin.

    The birth of Windows

    The proprietary DEC VAX/VMS⁴ operating system was designed by developers who subscribed to a closed philosophy. That is, the user should be protected from the internal vagaries of the system⁵ because users are afraid of computers.

    Dave Cutler,⁶ who wrote the DEC VAX/VMS operating system, is also the chief architect of Windows NT, the parent of all current forms of Windows. Cutler was hired away from DEC by Microsoft with the specific intention of having him write Windows NT. As part of his deal with Microsoft, he was allowed to bring many of his top engineers from DEC with him. Therefore, it should be no surprise that the Windows versions of today, however far removed from Windows NT they might be, remain hidden behind this veil of secrecy.

    Black box syndrome

    Let’s look at what proprietary software means to someone trying to fix it. I will use a trivial black box example to represent some hypothetical compiled, proprietary software. This software was written by a hypothetical company that wants to keep the source code a secret so that their alleged trade secrets cannot be stolen.

    As the hypothetical user of this hypothetical proprietary software, I have no knowledge of what happens inside the bit of compiled machine language code to which I have access. Part of that restriction is contractual – notice that I do not say legal – in a license agreement that forbids me from reverse engineering the machine code to produce the source code. The sole function of this hypothetical code is to print no if the number input is 17 or less and to print yes if the input is over 17. This result might be used to determine whether my customer receives a discount on orders of 17 units or more.

    After using this software for a number of weeks, months, or years, everything seems normal until one of my customers complains that they should have received the discount but did not.

    Simple testing of input numbers from 0 to 16 produces the correct output of no. Testing of numbers from 18 and up produces the correct output of yes. Testing of the number 17 results in an incorrect output of no. Why? We have no way of knowing why! The program fails on the edge case of exactly 17. I can surmise that there is an incorrect logical comparison in the code, but I have no way of knowing, and without access to the source code, I can neither verify this nor fix it myself.
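
    To make this failure concrete, here is a minimal sketch, written in Bash purely for illustration since the real product would be compiled and closed, of the kind of logic that could produce this behavior. The strict greater-than comparison is my assumption about the bug; the point is that a one-character mistake like this stays invisible inside the black box.

    #!/bin/bash
    # Hypothetical discount check. The business rule grants the discount
    # on orders of 17 units or more, but the comparison is strictly
    # "greater than," so an order of exactly 17 prints no.
    read -p "Order quantity: " qty
    if [ "$qty" -gt 17 ]    # buggy: should be -ge 17
    then
        echo yes
    else
        echo no
    fi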

    So I report this problem to the vendor from whom I purchased the software. They tell me they will fix it in the next release. When will that be? I ask. In about six months – or so, they reply.

    I must now task one of my workers to check the results of every sale to verify whether the customer should receive the discount. If they should, we assign other people to cut a refund check and send that along with a letter explaining the situation.

    After a few months with no work on a fix from the vendor, I call to try and determine the status of the fix. They tell me that they have decided not to fix the problem because I am the only one having the problem. The translation of this is sorry, you don’t spend enough money with us to warrant us fixing the problem. They also tell me that the new owners, the venture capital company who bought out the company from which I bought the software, will no longer be selling or supporting that software anyway.

    I am left with useless – less than useless – software that will never be fixed and that I cannot fix myself. Neither can anyone else who purchased that software fix it if they ever run into this problem.

    Because it is completely closed and the sealed box in which it exists is impenetrable, proprietary software is unknowable. Windows is like this. Even most Windows support staff have no idea how it works inside. This is why the most common advice to fix Windows problems is to reboot the computer – because it is impossible to reason about a closed, unknowable system of any kind.

    Operating systems like Windows that shield their users from the power they possess were developed starting with the basic assumption that the users are not smart or knowledgeable enough to be trusted with the full power that computers can actually provide. These operating systems are restrictive and have user interfaces – both command line and graphical – which enforce those restrictions by design. These restrictive user interfaces force regular users and SysAdmins alike into an enclosed room with no windows and then slam the door shut and triple lock it. That locked room prevents them from doing many clever things that can be done with Linux.

    The command-line interfaces of such limiting operating systems offer relatively few commands, providing a de facto limit on the possible activities in which anyone might engage. Some users find this a comfort. I do not and, apparently, neither do you, judging from the fact that you are reading this book.

    The birth of Linux

    The short version of this story is that the developers of Unix, led by Ken Thompson⁷ and Dennis Ritchie,⁸ designed Unix to be open and accessible in a way that made sense to them. They created rules, guidelines, and procedural methods and then designed them into the structure of the operating system. That worked well for system developers and that also – partly, at least – worked for SysAdmins (system administrators). That collection of guidance from the originators of the Unix operating system was codified in the excellent book, The Unix Philosophy, by Mike Gancarz, and then later updated by Mr. Gancarz as Linux and the Unix Philosophy.⁹

    Another fine book, The Art of Unix Programming,¹⁰ by Eric S. Raymond, provides the author's philosophical view of programming in a Unix environment. It is also somewhat of a history of the development of Unix as it was experienced and recalled by the author. This book is also available in its entirety at no charge on the Internet.¹¹

    In 1991, in Helsinki, Finland, Linus Torvalds was taking computer science classes using Minix,¹² a tiny variant of Unix that was written by Andrew S. Tanenbaum.¹³ Torvalds was not happy with Minix as it had many deficiencies, at least to him. So he wrote his own operating system and shared that fact and the code on the Internet. This little operating system, which started as a hobby, eventually became known as Linux as a tribute to its creator and was distributed under the GNU GPL 2 open source license.¹⁴

    Wikipedia has a good history of Linux¹⁵ as does Digital Ocean.¹⁶ For a more personal history, read Linus Torvalds’ own book, Just for fun¹⁷.

    The open box

    Let’s imagine the same software as in the previous example but this time written by a company that open sourced it and provides the source code should I want it. The same situation occurs. In this case, I report the problem, and they reply that no one else has had this problem and that they will look into it but don’t expect to fix it soon.

    So I download the source code. I immediately see the problem and write a quick patch for it. I test the patch on some samples of my own customer transactions – in a test environment of course – and find that the results show the problem has been fixed. I submit the patch to them along with my basic test results. They tell me that is cool, insert the patch in their own code base, run it through testing, and determine that the fix works. At that point they add the revised code into the main trunk of their code base, and all is well.
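
    Staying with the hypothetical sketch from the black box discussion, the patch amounts to a one-character change to the comparison, assuming as before that a strict greater-than was used where greater-than-or-equal was intended.

    # Hypothetical one-line patch: grant the discount at exactly 17 units.
    if [ "$qty" -ge 17 ]    # was: [ "$qty" -gt 17 ]
    then
        echo yes
    else
        echo no
    fi

    That is the practical difference the open box makes: the defect that left me issuing refund checks for months in the proprietary case becomes a few minutes of work when the source code is in my hands.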

    Of course, if they get bought out or otherwise become unable or unwilling to maintain the software, the result would be the same. I would still have the open source code, fix it, and make it available to whoever took over the development of the open source product. This scenario has taken place more than once. In one instance, I took over the development of a bit of shell script code from a developer in Latvia who no longer had the time to maintain it and I maintained it for several years.

    In another instance, a large company purchased the software firm behind the StarOffice office suite and open sourced that suite under the name OpenOffice.org. Later, a large computer company purchased OpenOffice.org. The new organization decided they would create their own version of the software starting from the existing code. That turned out to be quite a flop. Most of the developers of the open source version migrated to a new, open organization that maintains the reissued software that is now called LibreOffice. OpenOffice now languishes and has few developers while LibreOffice flourishes.

    One advantage of open source software is that the source code is always available. Any developers can take it over and maintain it. Even if an individual or an organization tries to take it over and make it proprietary, they cannot, and the original code is out there and can be forked into a new but identical product by any developer or group. In the case of LibreOffice, there are thousands of people around the world contributing new code and fixes when they are required.

    Having the source code available is one of the main advantages of open source because anyone with the skills can look at it and fix it, then make that fix available to the rest of the community surrounding that software.

    §§§

    In the context of open source software, the term open means that the source code is freely available for all to see and examine without restriction. Anyone with appropriate skills has legal permission to make changes to the code to enhance its functionality or to fix a bug.

    For the latest release of the Linux kernel, version 4.17, on June 03, 2018, as I write this, over 1,700 developers from a multitude of disparate organizations around the globe contributed 13,500 changes to the kernel code. That does not even consider the changes to other core components of the Linux operating system, such as core utilities, or even major software applications such as LibreOffice, the powerful office suite that I use for writing my books and articles as well as spreadsheets, drawings, presentations, and more. Projects such as LibreOffice have hundreds of their own developers.

    This openness makes it easy for SysAdmins – and everyone else, for that matter – to explore all aspects of the operating system and to fully understand how any or all of it is supposed to work. This means that it is possible to apply one’s full knowledge of Linux to use its powerful and open tools in a methodical reasoning process that can be leveraged for problem solving.

    The Linux Truth

    Unix was not designed to stop its users from doing stupid things, as that would also stop them from doing clever things.

    —Doug Gwyn

    This quote summarizes the overriding truth and the philosophies of both Unix and Linux – that the operating system must trust the user. It is only by extending this full measure of trust that the user can access the full power made possible by the operating system. This truth applies to Linux because of its heritage as a direct descendant of Unix.

    The Linux Truth results in an operating system that places no restrictions or limits on the things that users, particularly the root¹⁸ user, can do. The root user can do anything on a Linux computer. There are no limits of any type on the root user. Although there are a very few administrative speed bumps placed in the path of the root user, root can always remove those slight impediments and do all manner of stupid and clever things.

    Non-root users have a few limits placed on them, but they can still do plenty of clever things as well. The primary limits placed on non-root users are intended to – mostly – prevent them from doing things that interfere with others’ ability to freely use the Linux host. These limits in no way prevent regular users from doing great harm to their own user accounts.
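
    As a small, concrete illustration of that difference, here is the sort of thing you will see on the testvm1 virtual machine created in Chapter 4 (the prompts are examples, not required setup). A regular user is blocked from reading the shadow password file, while root reads the same file without complaint; its contents are not shown here.

    [student@testvm1 ~]$ cat /etc/shadow
    cat: /etc/shadow: Permission denied
    [student@testvm1 ~]$ su -
    Password:
    [root@testvm1 ~]# cat /etc/shadow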

    Even the most experienced users can do stupid things using Linux. My experience has been that recovery from my own not so infrequent stupidity has been made much easier by the open access to the full power of the operating system. I find that most times a few commands can resolve the problem without even a reboot. On a few occasions, I have had to switch to a lower runlevel to fix a problem. I have only very infrequently needed to boot to recovery mode in order to edit a configuration file that I managed to damage so badly it caused serious problems including failure to boot. It takes knowledge of the underlying philosophy, the structure, and the technology of Linux to be able to fully unleash its power, especially when things are broken. Linux just requires a bit of understanding and knowledge on the part of the SysAdmin to fully unlock its potential.

    Knowledge

    Anyone can memorize or learn commands and procedures, but rote memorization is not true knowledge. Without the knowledge of the philosophy and how that is embodied in the elegant structure and implementation of Linux, applying the correct commands as tools to resolve complex problems is not possible. I have seen smart people who had a vast knowledge of Linux be unable to resolve a relatively simple problem because they were unaware of the elegance of the structure beneath the surface.

    As a SysAdmin, part of my responsibility in many of my jobs has been to assist with hiring new employees. I participated in many technical interviews of people who had passed many Microsoft certifications and who had fine resumes. I also participated in many interviews in which we were looking for Linux skills, but very few of those applicants had certifications. This was at a time when Microsoft certifications were the big thing, but it was still the early days of Linux in the data center, and few Linux applicants were yet certified.

    We usually started these interviews with questions designed to determine the limits of the applicant’s knowledge. Then we would get into the more interesting questions, ones that would test their ability to reason through a problem to find a solution. I noticed some very interesting results. Few of the Windows certificate owners could reason their way through the scenarios we presented, while a very large percentage of the applicants with a Linux background were able to do so.

    I think that result was due in part to the fact that obtaining the Windows certificates relied upon memorization rather than actual hands-on experience combined with the fact that Windows is a closed system which prevents SysAdmins from truly understanding how it works. I think that the Linux applicants did so much better because Linux is open on multiple levels and that, as a result, logic and reason can be used to identify and resolve any problem. Any SysAdmin who has been using Linux for some time has had to learn about the architecture of Linux and has had a decent amount of experience with the application of knowledge, logic, and reason to the solution of problems.

    Flexibility

    To me, flexibility means the ability to run on any platform, not just Intel and AMD processors. Scalability is about power, but flexibility is about running on many processor architectures.

    Wikipedia has a list of CPU architectures supported by Linux,¹⁹ and it is a long one. By my automated count, there are over 100 CPU architectures on which Linux is currently known to run. Note that this list changes and CPUs get added and dropped from the list. But the point is well taken that Linux will run on many architectures. Even if your architecture is not currently supported, the source code is available, so with some work Linux can be recompiled to run on nearly any 64-bit system and some 32-bit ones.

    This broad-ranging hardware support means that Linux can run on everything from my Raspberry Pi²⁰ to my television, to vehicle entertainment systems, to cell phones, to DVRs, to the computers on the International Space Station²¹ (ISS), to all 500 of the fastest supercomputers back on Earth,²² and much more. A single operating system can run nearly any computing device from the smallest to the largest from any vendor.

    Stability

    Stability can have multiple meanings when the term is applied to Linux by different people. My own definition of the term as it applies to Linux is that it can run for weeks or months without crashing or causing problems that make me worry I might lose data for any of the critical projects I am working on.

    Today’s Linux easily meets that requirement. I always have several computers running Linux at any given time, and they are all rock solid in this sense. They run without interruption. I have workstations, a server, a firewall, and some that I use for testing, and they all just run.

    This is not to say that Linux never has any problems. Nothing is perfect. Many of those problems have been caused by my own misconfiguration of one or more features, but a few have been caused by problems with some of the software I use. Sometimes a software application will crash, but that is very infrequent and usually related to issues I have had with the KDE desktop.

    If you read my personal technical web site, you know that I have had some problems with the KDE GUI desktop over the years and that it has had two significant periods of instability. In the first of these instances, which was many years ago around the time of Fedora 10, KDE was transitioning from KDE 3 to the KDE Plasma 4 desktop, which offered many interesting features. In this case most of the KDE-specific applications I used had not been fully rewritten for the new desktop environment, so they lacked required functionality or would just crash. During the second, most recent, and still ongoing instance, the desktop just locks up, crashes, or fails to work properly.

    In both of these cases, I was able to use a different desktop to get my work done in a completely stable environment. In the first case, I used the Cinnamon desktop, and in this most recent instance, I am using the LXDE desktop. However, the underlying software, the kernel, and the programs running underneath the surface – they all continued to run without problem. So this is the second layer of stability; if one thing crashes, even the desktop, the underlying stuff continues to run.

    To be fair, KDE is improving, and many of the problems in this round have been resolved. I never did lose any data, but I did lose a bit of time. Although I still like KDE, the LXDE desktop is my current favorite, and I also like the Xfce desktop.

    Scalability

    Scalability is extremely important for any software, particularly for an operating system. Running the same operating system on everything from watches and phones (Android) to laptops, powerful workstations, servers, and even the most powerful supercomputers on the planet can make life much simpler for the network administrator or the IT manager. Linux is the only operating system on the planet today which can provide that level of scalability.

    Since November of 2017, Linux has powered all of the fastest supercomputers in the world.²³ As of this writing in July 2019, one hundred percent – all – of the top 500 supercomputers in the world run Linux of one form or another, and this is expected to continue. There are usually specialized distributions of Linux designed for supercomputers. Linux also powers much smaller devices such as Android phones and Raspberry Pi single board computers. Supercomputers are very fast, and many different calculations can be performed simultaneously. It is, however, very unusual for a single user to have access to the entire resources of a supercomputer. Many users share those resources, each user performing his or her own set of complex calculations.

    Linux can run on any computer from the smallest to the largest and anything in between.

    Security

    We will talk a lot about security as we proceed through these courses. Security is a critical consideration in these days of constant attacks from the Internet. If you think that they are not after you, too, let me tell you that they are. Your computer is under constant attack every hour of every day.

    Most Linux distributions are very secure right from the installation. Many tools are provided to both ensure tight security where it is needed as well as to allow specified access into the computer. For example, you may wish to allow SSH access from a limited number of remote hosts, access to the web server from anywhere in the world, and e-mail to be sent to a Linux host from anywhere. Yet you may also want to block, at least temporarily, access attempts by black hat hackers attempting to force their way in. Other security measures provide your personal files protection from other users on the same host while still allowing mechanisms for you to share files that you choose with others.

    Many of the security mechanisms that we will discuss in these courses were designed and built into Linux right from its inception. The architecture of Linux is designed from the ground up, like Unix, its progenitor, to provide security mechanisms that can protect files and running processes from malicious intervention from both internal and external sources. Linux security is not an add-on feature; it is an integral part of Linux. Because of this, most of our discussions that relate to security will be embedded as an integral part of the text throughout this book. There is a chapter about security, but it is intended to cover those few things not covered elsewhere.

    Freedom

    Freedom has an entirely different meaning when applied to free open source software (FOSS) than it does in most other circumstances. In FOSS, free is the freedom to do what I want with software. It means that I have easy access to the source code and that I can make changes to the code and recompile it if I need or want to.

    Freedom means that I can download a copy of Fedora Linux, or Firefox, or LibreOffice, and install it on as many computers as I want to. It means that I can share that downloaded code by providing copies to my friends or installing it on computers belonging to my customers, both the executables and the sources.

    Freedom also means that we do not need to worry about the license police showing up on our doorsteps and demanding huge sums of money to become compliant. This has happened at some companies that over-installed the number of licenses that they had available for an operating system or office suite. It means that I don’t have to type in a long, long key to unlock the software I have purchased or downloaded.

    Our software rights

    The rights to the freedoms that we have with open source software should be part of the license we receive when we download open source software. The definition for open source software²⁴ is found at the Open Source Initiative web site. This definition describes the freedoms and responsibilities that are part of using open source software.

    The issue is that there are many licenses that claim to be open source. Some are and some are not. In order to be true open source software, the license must meet the requirements specified in this definition. The definition is not a license – it specifies the terms to which any license must conform if the software to which it is attached is to be legally considered open source. If any of the defined terms do not exist in a license, then the software to which it refers is not true open source software.

    All of the software used in this book is open source software.

    I have not included that definition here despite its importance because it is not really the focus of this book. You can go to the web site previously cited, or you can read more about it in my book, The Linux Philosophy for SysAdmins.²⁵ I strongly recommend that you at least go to the web site and read the definition so that you will more fully understand what open source really is and what rights you have.

    I also like the description of Linux at Opensource.com,²⁶ as well as their long list of other open source resources.²⁷

    Longevity

    Longevity – an interesting word. I use it here to help clarify some of the statements that I hear many people make. These statements are usually along the lines of Linux can extend the life of existing hardware, or Keep old hardware out of landfills or unmonitored recycling facilities.

    The idea is that you can use your old computer longer and that by doing that, you lengthen the useful life of the computer and decrease the number of computers you need to purchase in your lifetime. This both reduces demand for new computers and reduces the number of old computers being discarded.

    Linux prevents the planned obsolescence continually enforced by the ongoing requirements for more and faster hardware to support upgrades. It means I do not need to add more RAM or hard drive space just to upgrade to the latest version of the operating system.

    Another aspect of longevity is the open source software that stores data in open and well-documented formats. Documents that I wrote over a decade ago are still readable by current versions of the same software I used then, such as LibreOffice and its predecessors, OpenOffice and, before that, StarOffice. I never need to worry that a software upgrade will relegate my old files to the bit bucket.

    Keep the hardware relevant

    For one example, until it recently died, I had an old Lenovo ThinkPad W500 that I purchased in May of 2006. It was old and clunky and heavy compared to many of today’s laptops, but I liked it a lot, and it was my only laptop. I took it with me on most trips and used it for training. It had enough power in its Intel Core 2 Duo 2.8GHz processor, 8GB of RAM, and 300GB hard drive to support Fedora running a couple of virtual machines and to be the router and firewall between a classroom network and the Internet, to connect to a projector to display my slides, and to demonstrate the use of Linux commands. I used Fedora 28 on it, the very latest at the time. That is pretty amazing considering that this laptop, which I affectionately called vgr, was a bit over 12 years old.

    The ThinkPad died of multiple hardware problems in October of 2018, and I replaced it with a System76²⁸ Oryx Pro with 32GB of RAM, an Intel i7 with 6 cores (12 CPU threads) and 2TB of SSD storage. I expect to get at least a decade of service out of this new laptop.

    And then there is my original EeePC 900 netbook with an Intel Atom CPU at 1.8GHz, 2GB of RAM, and an 8GB SSD. It ran Fedora up through Fedora 28 for ten years before it too started having hardware problems.

    Linux can most definitely keep old hardware useful. I have several old desktop workstations that are still useful with Linux on them. Although none are as old as vgr, I have at least one workstation with an Intel motherboard from 2008, one from 2010, and at least three from 2012.

    Resist malware

    Another reason that I can keep old hardware running longer is that Linux is very resistant to malware infections. It is not completely immune to malware, but none of my systems have ever been infected. Even my laptop which connects to all kinds of wired and wireless networks that I do not control has never been infected.

    Without the massive malware infections that cause most peoples’ computers to slow to an unbearable crawl, my Linux systems – all of them – keep running at top speed. It is this constant slowdown, even after many cleanings at the big box stores or the strip mall computer stores, which causes most people to think that their computers are old and useless. So they throw them away and buy another.

    So if Linux can keep my 12-year-old laptop and other old systems running smoothly, it can surely keep many others running as well.

    Should I be a SysAdmin?

    Since this book is intended to help you become a SysAdmin, it would be useful for you to know whether you might already be one, whether you are aware of that fact or not, or if you exhibit some propensity toward system administration. Let’s look at some of the tasks a SysAdmin may be asked to perform and some of the qualities one might find in a SysAdmin.

    Wikipedia²⁹ defines a system administrator as a person who is responsible for the upkeep, configuration, and reliable operation of computer systems, especially multiuser computers, such as servers. In my experience, this can include computer and network hardware, software, racks and enclosures, computer rooms or space, and much more.

    The typical SysAdmin's job can include a very large number of tasks. In a small business, a SysAdmin may be responsible for doing everything computer related. In larger environments, multiple SysAdmins may share responsibility for all of the tasks required to keep things running. In some cases, you may not even know you are a SysAdmin; your manager may have simply told you to start maintaining one or more computers in your office – that makes you a SysAdmin, like it or not.

    There is also a term, DevOps, which is used to describe the intersection of the formerly separate development and operations organizations. In the past, this has been primarily about closer cooperation between development and operations, and it included teaching SysAdmins to write code. The focus is now shifting to teaching programmers how to perform operational tasks.³⁰ Attending to SysAdmin tasks makes these folks SysAdmins, too, at least for part of the time. While I was working at Cisco, I had a DevOps type of job. Part of the time I wrote code to test Linux appliances, and the rest of the time I was a SysAdmin in the lab where those appliances were tested. It was a very interesting and rewarding time in my career.

    I have created this short list to help you determine whether you might have some of the qualities of a SysAdmin. You know you are a SysAdmin if...

    1. You think this book might be a fun read.
    2. You would rather spend time learning about computers than watch television.
    3. You like to take things apart to see how they work.
    4. Sometimes those things still work when you are required by someone else to reassemble them.
    5. People frequently ask you to help them with their computers.
    6. You know what open source means.
    7. You document everything you do.
    8. You find computers easier to interact with than most humans.
    9. You think the command line might be fun.
    10. You like to be in complete control.
    11. You understand the difference between free as in beer and free as in speech, when applied to software.
    12. You have installed a computer.
    13. You have ever repaired or upgraded your own computer.
    14. You have installed or tried to install Linux.
    15. You have a Raspberry Pi.
    16. You leave the covers off your computer because you replace components frequently.
    17. ...etc...

    You get the idea. I could list a lot more things that might make you a good candidate to be a SysAdmin, but I am sure you can think of plenty more that apply to you. The bottom line here is that you are curious, you like to explore the internal workings of devices, you want to understand how things work – particularly computers, you enjoy helping people, and you would rather be in control of at least some of the technology that we encounter in our daily lives than to let it completely control you.

    About this course

    If you ask me a question about how to perform some task in Linux, I am the Linux guy that explains how Linux works before answering the question – at least that is the impression I give most people. My tendency is to explain how things work, and I think that it is very important for SysAdmins to understand why things work as they do and the architecture and structure of Linux in order to be most effective.

    So I will explain a lot of things in detail as we go through this course. For the most part, it will not be a course in which you will be told to type commands without some reasoning behind it. The preparation in Chapter 4 will also have some explanation but perhaps not so much as the rest of the book. Without these explanations, the use of the commands would be just rote memorization and that is not how most of us SysAdmins learn best.

    UNIX is very simple, it just needs a genius to understand its simplicity.

    —Dennis Ritchie³¹

    The explanations I provide will sometimes include historical references because the history of Unix and Linux is illustrative of why and how Linux is so open and easy to understand. The preceding Ritchie quote also applies to Linux because Linux was designed to be a version of Unix. Yes, Linux is very simple. You just need a little guidance and mentoring to show you how to explore it yourself. That is part of what you will learn in this course.

    Part of the simplicity of Linux is that it is completely open and knowable, and you can access any and all of it in very powerful and revealing ways. This course contains many experiments which are designed to explore the architecture of Linux as well as to introduce you to new commands.

    Why do you think that Windows support – regardless of where you get it – always starts with rebooting the system? Because it is a closed system and closed systems cannot ever be knowable. As a result, the easiest approach to solving problems is to reboot the system rather than to dig into the problem, find the root cause, and fix it.

    About the experiments

    As a hands-on SysAdmin, I like to experiment with the command line in order to learn new commands, new ways to perform tasks, and how Linux works. Most of the experiments I have devised for this book are ones that I have performed in my own explorations with perhaps some minor changes to accommodate their use in a course using virtual machines.

    I use the term experiments because they are intended to be much more than simple lab projects that are designed to be followed blindly with no opportunity for you, the student, to follow your own curiosity and wander far afield. These experiments are designed to be the starting points for your own explorations. This is one reason to use a VM for them, so that production machines will be out of harm’s way and you can safely try things that pique your curiosity. Using virtualization software such as VirtualBox enables us to run a software implementation of standardized hardware. It allows us to run one or more software computers (VMs), in which we can install any operating system, on a single hardware computer. It seems complex, but we will go through creating a virtual network and a virtual machine (VM) in Chapter 4 as we prepare for the experiments.

    All SysAdmins are curious, hands-on people even though we have different ways of learning. I think it is helpful for SysAdmins to have hands-on experience. That is what the experiments are for – to provide an opportunity to go beyond the theoretical and apply the things you learn in a practical way. Although some of the experiments are a bit contrived in order to illustrate a particular point, they are nevertheless valid.

    These enlightening experiments are not tucked away at the end of each chapter, or the book, where they can be easily ignored – they are embedded in the text and are an integral part of the flow of this book. I recommend that you perform the experiments as you proceed through the book.

    The commands and sometimes the results for each experiment will appear in experiment sections as shown in the following. Some experiments need only a single command and so will have only one experiment section. Other experiments may be more complex and so split among two or more experiment sections.

    SAMPLE EXPERIMENT

    This is an example of an experiment. Each experiment will have instructions and code for you to enter and run on your computer.

    Many experiments will have a series of instructions in a prose format like this paragraph. Just follow the instructions and the experiments will work just fine:

    1. Some experiments will have a list of steps to perform.
    2. Step 2.
    3. etc...

    Code that you are to enter for the experiments will look like this.

    This is the end of the experiment.

    Some of these experiments can be performed as a non-root user; that is much safer than doing everything as root. However, you will need to be root for many of these experiments. These experiments are considered safe for use on a VM designated for training, such as the one that you will create in Chapter 4. Regardless of how benign they may seem, you should not perform any of these experiments on a production system, whether physical or virtual.

    There are times when I want to present code that is interesting but which you should not run as part of one of the experiments. For such situations I will place the code and any supporting text in a CODE SAMPLE section as shown in the following.

    CODE SAMPLE

    Code that is intended to illustrate a point but which you should not even think about running on any computer will be contained in a section like this one:

    echo This is sample code which you should never run.

    Warning

    Do not perform the experiments presented in this book on a production system. You should use a virtual machine that is designated for this training.

    What to do if the experiments do not work

    These experiments are intended to be self-contained and not dependent upon any setup, except for the USB thumb drive, or the results of previously performed experiments. Certain Linux utilities and tools must be present, but these should all be available on a standard Fedora Linux workstation installation or any other mainstream general use distribution. Therefore, all of these experiments should just work. We all know how that goes, right? So when something does fail, the first things to do are the obvious.

    Verify that the commands were entered correctly. This is the most common problem I encounter for myself.

    You may see an error message indicating that the command was not found. The Bash shell shows the bad command; in this case I made up badcommand. It then gives a brief description of the problem. This error message is displayed for both missing and misspelled commands. Check the command spelling and syntax multiple times to verify that it is correct:

    [student@testvm1 ~]$ badcommand

    bash: badcommand: command not found...

    Use the man command to view the manual pages (man pages) in order to verify the correct syntax and spelling of commands.

    Ensure that the required command is, in fact, installed. Install it if it is not already installed.
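
    For example, on the Fedora workstation used in this course, checking for and installing a missing command looks something like the following. The lsof package is only an illustration here, and the exact wording of the not found message can vary slightly between shells and versions.

    # Is the command available in this shell?
    [student@testvm1 ~]$ type lsof
    bash: type: lsof: not found
    # Install the package that provides it (requires root privileges).
    [student@testvm1 ~]$ su -c "dnf -y install lsof"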

    For experiments that require you to be logged in as root, ensure that you have done so. There should be only a few of these, but performing them as a non-root user will not work.

    There is not much else that should go wrong – but if you encounter a problem that you cannot make work using these tips, contact me at LinuxGeek46@both.org, and I will do my best to help figure out the problem.

    Terminology

    It is important to clarify a bit of terminology before we proceed. In this course I will refer to computers with multiple terms. A computer is a hardware or virtual machine for computing. A computer is also referred to as a node when connected to a network. A network node can be any type of device including routers, switches, computers, and more. The term host generally refers to a computer that is a node on a network, but I have also encountered it used to refer to an unconnected computer.

    How to access the command line

    All of the modern mainstream Linux distributions provide at least three ways to access the command line. If you use a graphical desktop, most distributions come with multiple terminal emulators from which to choose. I prefer Krusader, Tilix, and especially xfce4-terminal, but you can use any terminal emulator that you like.

    Linux also provides the capability for multiple virtual consoles to allow for multiple logins from a single keyboard and monitor (KVM³²). Virtual consoles can be used on systems that don’t have a GUI desktop, but they can be used even on systems that do have one. Each virtual console is assigned to a function key corresponding to the console number. So vc1 would be assigned to function key F1, and so on. It is easy to switch to and from these sessions. On a physical computer, you can hold down the Ctrl and Alt keys and press F2 to switch to vc2. Then hold down the Ctrl and Alt keys and press F1 to switch to vc1 and the graphical interface.
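
    If you would rather switch consoles from a command line than with the keyboard, the chvt utility from the kbd package, which is normally present on a Fedora installation, can change the active virtual console, and the who command shows which consoles and terminals have active logins. This is just a convenience; the Ctrl-Alt-function-key method described above is all you really need.

    # Switch the active display to virtual console 2 (run as root).
    [root@testvm1 ~]# chvt 2
    # List the users logged in on each console or terminal.
    [root@testvm1 ~]# who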

    The last method to access the command line on a Linux computer is via a remote login. Telnet was common before security became such an issue; today Secure Shell (SSH) is used for remote access instead.
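
    A basic remote login needs nothing more than the ssh client and the name or IP address of the target host. The hostname below matches the virtual machine you will build in Chapter 4 and is only an example.

    # Open a remote command-line session on testvm1 as the student user.
    ssh student@testvm1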

    For some of the experiments, you will need to log in more than once or start multiple terminal sessions in the GUI desktop. We will go into much more detail about terminal emulators, console sessions, and shells as we proceed through this book.

    Chapter summary

    Linux was designed from the very beginning as an open and freely available operating system. Its value lies in the power, reliability, security, and openness that it brings to the marketplace for operating systems and not just in the fact that it can be had for free in monetary terms. Because Linux is open and free in the sense that it can be freely used, shared, and explored, its use has spread into all aspects of our lives.

    The tasks a SysAdmin might be asked to do are many and varied. You may already be doing some of these or at least have some level of curiosity about how Linux works or how to make it work better for you. Most of the experiments encountered in this book must be performed at the command line. The command line can be accessed in multiple ways and with any one or more of several available and acceptable terminal emulators.

    Exercises

    Note that a couple of the following questions are intended to cause you to think about your desire to become a SysAdmin. There are no right answers to these questions, only yours, and you are not required to write them down or to share them. They are simply designed to prompt you to be a bit introspective about yourself and being a SysAdmin:

    1. From where does open source software derive its value?
    2. What are the four defining characteristics of Linux?
    3. As of the time you read this, how many of the world’s top 500 supercomputers use Linux as their operating system?
    4. What does the Linux Truth mean to Linux users and administrators?
    5. What does freedom mean with respect to open source software?
    6. Why do you want to be a SysAdmin?
    7. What makes you think you would be a good SysAdmin?
    8. How would you access the Linux command line if there were no GUI desktop installed on the Linux host?

    Footnotes

    1. Wikipedia, Linus Torvalds, https://en.wikipedia.org/wiki/Linus_Torvalds
    2. Wikipedia, History of Linux, https://en.wikipedia.org/wiki/History_of_Linux
    3. Wikipedia, Open Source Software, https://en.wikipedia.org/wiki/Open-source_software
    4. Renamed to OpenVMS circa late 1991
    5. Gancarz, Mike, Linux and the Unix Philosophy, Digital Press, 2003, 146–148
    6. ITPro Today, Windows NT and VMS: The rest of the Story, www.itprotoday.com/management-mobility/windows-nt-and-vms-rest-story
    7. https://en.wikipedia.org/wiki/Ken_Thompson
    8. https://en.wikipedia.org/wiki/Dennis_Ritchie
    9. Mike Gancarz, Linux and the Unix Philosophy, Digital Press – an imprint of Elsevier Science, 2003, ISBN 1-55558-273-7
    10. Eric S. Raymond, The Art of Unix Programming, Addison-Wesley, September 17, 2003, ISBN 0-13-142901-9
    11. Eric S. Raymond, The Art of Unix Programming, www.catb.org/esr/writings/taoup/html/index.html/
    12. https://en.wikipedia.org/wiki/MINIX
    13. https://en.wikipedia.org/wiki/Andrew_S._Tanenbaum
    14. https://en.wikipedia.org/wiki/GNU_General_Public_License
    15. https://en.wikipedia.org/wiki/History_of_Linux
    16. Juell, Kathleen, A Brief History of Linux, www.digitalocean.com/community/tutorials/brief-history-of-linux
    17. Torvalds, Linus, and Diamond, David, Just for fun: The story of an accidental revolutionary, HarperBusiness, 2001
    18. The root user is the administrator of a Linux host and can do everything and anything. Compared to other operating systems, non-root Linux users also have very few restrictions, but we will see later in this course that there are some limits imposed on them.
    19. Wikipedia, List of Linux-supported computer architectures, https://en.wikipedia.org/wiki/List_of_Linux-supported_computer_architectures
    20. Raspberry Pi web site, www.raspberrypi.org/
    21. ZDNet, The ISS just got its own Linux supercomputer, www.zdnet.com/article/the-iss-just-got-its-own-linux-supercomputer/
    22. Wikipedia, TOP500, https://en.wikipedia.org/wiki/TOP500
    23. Top 500, www.top500.org/statistics/list/
    24. Opensource.org, The Open Source Definition, https://opensource.org/docs/osd
    25. Both, David, The Linux Philosophy for SysAdmins, Apress, 2018, 311–316
    26. Opensource.com, What is Linux?, https://opensource.com/resources/linux
    27. Opensource.com, Resources, https://opensource.com/resources
    28. System76 Home page, https://system76.com/
    29. Wikipedia, System Administrator, https://en.wikipedia.org/wiki/System_administrator
    30. Charity, "Ops: It’s everyone’s job now," https://opensource.com/article/17/7/state-systems-administration
    31. Wikipedia, Dennis Ritchie, https://en.wikipedia.org/wiki/Dennis_Ritchie
    32. Keyboard, Video, Mouse

    © David Both 2020

    D. Both, Using and Administering Linux: Volume 1, https://doi.org/10.1007/978-1-4842-5049-5_2

    2. Introduction to Operating Systems

    David Both¹

    (1) Raleigh, NC, USA

    Objectives

    In this chapter you will learn to

    Describe the functions of the main hardware components of a computer

    List and describe the primary functions of an operating system

    Briefly outline the reasons that prompted Linus Torvalds to create Linux

    Describe how the Linux core utilities support the kernel and together create an operating system

    Choice – Really!

    Every computer requires an operating system. The operating system you use on your computer is at least as important as – or more so than – the hardware you run it on. The operating system (OS) is the software that determines the capabilities and limits of your computer or device. It also defines the personality of your computer.

    The most important single choice you will make concerning your computer is that of the
