What is a sysadmin?
What does the term “sysadmin” or “system administrator” mean to you? What does a sysadmin do? How do you know you are one?
Existential questions like this seldom have concrete answers, and the answers that do present themselves tend to be fluid. They also vary significantly from person to person, depending on each one's work experience.
Are you a sysadmin?
In my book, “The Linux Philosophy for SysAdmins,” I attempt to answer that question. I created this short list to help you determine whether you are a sysadmin. You might just be a sysadmin if:
- You think my books and articles might be a fun read.
- People frequently ask you to help them with their computers.
- You check the servers every morning before you do anything else.
- You write shell scripts to automate even simple tasks.
- You share your shell scripts.
- You license your shell scripts with an open source license.
- You know what open source means.
- You document everything you do.
- You have hacked the wireless router to install Linux software.
- You find computers easier to interact with than most humans.
- You understand :(){ :|:& };: (see the note just after this list)
- You think the command line is fun.
- You like to be in complete control.
- You are root.
- You always say “no” when someone asks whether something can be done or not.
- After discussion and discovering what the person really wants to accomplish, you know it can be done in 30 seconds with a shell script you already have written, but you tell them it will be “a few days” before you can get to it.
- You understand the difference between “free as in beer” and “free as in speech” when applied to software.
- You have installed a computer in a rack enclosure.
- You have replaced the stock CPU cooler with one that dissipates more heat.
- You purchase the parts and build your own computers.
- You use liquid cooling for your CPU.
- You install Linux on everything you can.
- You have a Raspberry Pi connected to your television.
- You use a Raspberry Pi as a firewall for your home network.
- You run your own Email, DHCP, NTP, NFS, DNS, and/or SSH servers.
- You have hacked your home computer to replace the processor with a faster one.
- You have upgraded the BIOS in a computer.
- You leave the covers off your computer because you replace components frequently.
- The router provided by your ISP is in “pass-through” mode.
- You use a Linux computer as a router.
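If that string of punctuation a few items back left you puzzled, it is the classic Bash fork bomb. Here is the same one-liner spread over several lines with comments so you can see what it does; this is for understanding only, and it should never be run on a machine you care about:

    :() {        # define a shell function whose name is ':'
      : | : &    # the body calls itself twice, piped together, in the background
    }
    :            # call it once; the process count doubles until the system chokes

Setting a per-user process limit with ulimit -u is the usual defense, and it is also the only reason a curious sysadmin should try this at all, and then only in a throwaway virtual machine.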
You get the idea. I could list many more things that might make you a sysadmin, but there would be hundreds of items. I am sure you can think of plenty more that apply to you.
That list describes the things a sysadmin does rather than offering an actual definition. But is there really a single definition that fits everyone? My own experience says that sysadmins cannot be easily defined.
My story
I worked for my home state government for about five years, hired to maintain and almost completely rewrite an interlocking set of existing Perl programs. There were about twenty-five of these programs running on a small Intel desktop. They generated CGI web pages based on data stored in flat files, and those web pages were the management interface for the state email system, which encompassed several large, statewide agencies. The email system itself ran on multiple Sun Microsystems servers.
Since this started as a pilot project, only one computer was used for my part of the project. That computer was an old cast-off from another department when it was “gifted” to our project, and it was located on my desktop. It also served as my personal workstation. I don’t recall the exact specifications, but it had an original 32-bit Pentium processor running at 33MHz with 16MB of RAM and two hard drives. It ran an early release of Red Hat Linux.
The code's complexity was making it nearly impossible to maintain these programs, whether to add new functions or to locate and fix bugs. My assigned task was to fix the bugs and add some additional functionality.
As I began trying to make sense of the code, it became clear that my first priority had to be simplifying it. After commenting the code profusely and fixing a few bugs as I went, I began to gather code that had been duplicated in two or more of these programs into Perl libraries. This made fixing problems easier because each fix only had to be made in one place, the library. I also straightened out other code, simplifying the common execution paths.
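The programs were Perl, but the pattern is the same in any language: move the duplicated code into one shared file and have every program pull it in from there. Here is a minimal sketch of that idea in shell terms rather than Perl, with invented file and function names, since the original code is long gone:

    # common.sh -- shared functions collected from the individual scripts,
    # now maintained in exactly one place
    log_msg() {
        # print a timestamped message to stderr
        printf '%s %s\n' "$(date '+%F %T')" "$*" >&2
    }

    die() {
        # log an error and stop
        log_msg "ERROR: $*"
        exit 1
    }

Each of the individual scripts then begins with source /usr/local/lib/common.sh, so a bug fixed in the library is fixed for every caller at once.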
But I was also responsible for maintaining the older systems, both hardware and software. I installed updates, upgraded to a couple of newer releases of Red Hat Linux over the years, and resolved a good number of Linux configuration issues caused by the previous admin's lack of knowledge. I watched the logs and the system performance continually and adjusted the tuning as often as necessary.
I also managed security for this host. I did all of this with only that one computer, no backup, live, online, with customers still using the website. Eventually, we convinced management that we did need to have a second computer that we could use for a test environment. Those were fun times.
Large enterprise
I worked for about five years at Cisco Systems, fairly late in my career. That was a fun job in part because it had two related components.
One part of the job was to write test cases for Linux-based appliances that the company was developing. I used Tcl/Expect, which integrated well with the test infrastructure that was already in place. As a tester, I spent a good part of my time writing code, but I also had to test the test suite itself and then run the actual tests.
The second part of this job, which took about half of my time, was serving as the assistant lab administrator for the several rows of racks that made up our test lab. Equipment of every kind was located there, from 1U Linux servers to huge routers that spanned an entire rack, along with dozens of Linux and Solaris hosts.
My tasks as the lab admin had me ordering new equipment, racking and cabling it when it arrived, and installing Linux on all the Intel boxes. We did not have responsibility for network management in the lab, but the various departments worked together well, and the network group provided us with some automation of their own. We filled out a web form requesting an IP address from the DHCP-managed list of static addresses along with a DNS entry; those two items were configured almost immediately, and we could then proceed with our Linux installations.
To expedite the installation of our special configuration of Red Hat Linux, I wrote a script to automate that process. I wrote about it in an article, "Complete Kickstart," published in Linux Magazine in June 2008; that article is now available at my personal website and at Linux Today.
Using these procedures developed with the other groups, we were able to unpack, rack, cable, and install Linux on six to eight new servers in a day.
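The details are in the Complete Kickstart article; as a rough sketch of the kind of automation involved (the template name, placeholder strings, and paths here are invented for illustration and are not taken from that article), the heart of it is stamping out one kickstart file per server and pointing the installer at it:

    #!/usr/bin/env bash
    # Generate a per-host kickstart file from a template, substituting the
    # hostname and the static IP address assigned by the network group.
    set -euo pipefail

    template=/var/www/html/ks/template.ks    # assumed base kickstart file
    outdir=/var/www/html/ks                  # directory served over HTTP

    hostname=$1        # e.g. testlab42
    ipaddr=$2          # static address from the DHCP-managed list

    sed -e "s/@HOSTNAME@/${hostname}/g" \
        -e "s/@IPADDR@/${ipaddr}/g" \
        "$template" > "${outdir}/${hostname}.ks"

    echo "Boot the server and add to the installer boot line:"
    echo "  ks=http://kickstart.example.com/ks/${hostname}.ks"

With the kickstart file handling partitioning, package selection, and post-install configuration, the hands-on part of each installation shrank to racking, cabling, and a single boot.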
I also created several Lunch-n-Learn training sessions to help bring other employees up to speed on Linux.
Silos
The last W-2 job I ever had as a so-called sysadmin lasted about a year, at an organization that is a fantastic example of the traditional “team” methodology failing when taken to its extreme. The worst part was that it also came with a horrible daily commute.
In this organization—which shall remain nameless—management had created very narrow, very tall silos to contain everything.
There were multiple teams: the Unix team, the application team, the network team, the hardware team, the DNS team, the rack team, the cable team, the power team, and pretty much any other team you can think of. And the procedures were mind-boggling. For example, one of my projects was to install Linux on several servers that were to be used for various aspects of the organization’s website. The first step was to order the servers, but even that request took weeks to work its way through the administrative bureaucracy.
Once the servers were delivered, the Unix team would rack them in the installation lab and install the operating system. We had that part down very nicely. But first, we had to request an IP address, and we could not do that until the servers were delivered because the request required the serial numbers of the servers and the MAC addresses of the NICs. The problem was that each silo had to have a Service Level Agreement (SLA) with every other silo, the response time defined by each SLA was a minimum of two weeks, and no silo ever responded faster than its SLA required. On top of that, we could not get the IP address until we had a rack location assigned in the server room, because IP addresses were assigned by rack and by position in the rack. So we had to send a request for a rack assignment and wait two weeks for that to be provided.
The next step after getting the IP address was to send it to the silo that handled DHCP configuration, and then wait at least another two weeks for the DHCP entry to be set up.
Only when the server’s network configuration data was in place on the DHCP server could we send the request to move the server from our rack to the server room. Another two-week turnaround.
Only after the move request was approved could we send a request to install the computer in the rack. Once the installation was complete, we could send the request to cable the server for network and power. And when that was completed, we could finally send a request to power on the server.
Except for installing the operating system, we could not touch the server. We weren’t even allowed to enter the server room. Ever.
Needless to say, it took months to install each server and get it running and ready for the production teams to take over. I could go on about many more ways in which this place was a functional disaster, but I think you get the idea. Their teams were primarily political fiefdoms protected by impenetrable silos.
Self-employment and consulting
Over the years, I’ve had two companies of my own that I used for consulting while I was between other jobs. This prevented holes in my resume that would raise questions during interviews. I had some good customers during this time and learned a lot.
In this role, I did pretty much everything from planning new installations to training and performing updates. In more than one case, I charged outrageous fees to spend a few minutes resetting the root password on a Linux system. It seems that pointy-haired bosses tend to fire sysadmins without understanding the need for a smooth transition, during which they would have obtained all the passwords.
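For the curious, there is no magic in that password reset. On a typical Red Hat style system with GRUB and dracut, one common approach looks roughly like this; this is a sketch only, and the exact steps vary by distribution and release:

    # Interrupt GRUB at boot, edit the kernel line, and append: rd.break
    # The system then drops to an emergency shell before switching to the real root.
    mount -o remount,rw /sysroot    # the installed root is mounted read-only at /sysroot
    chroot /sysroot                 # work inside the installed system
    passwd root                     # set a new root password
    touch /.autorelabel             # have SELinux relabel the filesystem on the next boot
    exit                            # leave the chroot
    exit                            # resume the boot

Physical access to the console is all it takes, which is also a good argument for locked server rooms and encrypted disks.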
Retirement – sort of
I am now allegedly retired, but I seem to be busier than ever. I write articles and books and have a few Linux computers to support.
Over the years, my home lab has held anywhere from four to twelve computers that belonged to me or one of my companies, and of course, they all run Linux. Currently, I have twelve computers in my home lab, but that includes three on which I am working on various projects for my church. I also manage the computers and network at my church and for a couple of friends whom I have convinced to try Linux.
So I am the sysadmin for most of these computers and provide complete hardware and software support for the others. In my current role, I also provide training for various people.
Conclusion
A sysadmin is many things, sometimes all at the same time.
My experience has run the gamut. I have never had a job that I would consider a “pure” sysadmin job. Each organization has unique needs, and most times, the sysadmin gets “volunteered” to fill roles never mentioned in the interview or job description. Sysadmins tend to have the knowledge and skills to do almost anything necessary, and the people we work with and for generally understand that, so we do get those tough assignments thrust upon us.
I like it.