Archive

Posts Tagged ‘unix’

UNC Chapel Hill Migrating to Microsoft Exchange: a poor choice for freedom

July 10, 2011 9 comments

The University of North Carolina has a long history of supporting software freedom. The University has sponsored ibiblio.org since before I started using the internet, and recently made the very smart move to switch away from the proprietary Blackboard online learning system to Sakai, which is licensed under an Apache-like license. Recently, however, the university has made an unfortunate choice about its email systems. I wrote in my last post about the dichotomy between academic computing and commercial computing, and unfortunately UNC Chapel Hill has chosen commercial computing over academic computing in handling its email. This disappoints me. I contend that their justifications, mostly based on “performance” and “meeting the needs of users,” are hollow. Performance is not the only thing that matters in computer systems. As far as I can tell, the only feature that distinguishes the new system from the existing one is the ability to invade user privacy. Worst of all, the university is sacrificing academic computing ideals, including freedom, and “outsourcing” its email to a commercial interest. That a world-class university like UNC Chapel Hill would trust Microsoft instead of its own talent is really stupid.

Take a look at this list of advantages of the new Microsoft-based email system, offered by the ITS staff at the medical school. Look carefully and notice that the only feature that is really new is the “[a]bility to ‘wipe’ lost/stolen portable devices.” Everything else on that list is available with Cyrus IMAP. In other words, the university prefers a system that allows invasion of privacy. Now, I understand that there is a good security motivation for this feature. But given that it is the only feature Exchange adds over Cyrus IMAP, it seems odd that the university favors the new system precisely because it allows invasion of privacy. Why is that so important? Clearly the new system “meets the needs of users” less than it meets the needs of administrators.

Another feature that doesn’t make sense to me is “Scalable handheld (smart phone) e-mail solution – works with Blackberry, iPhone, Windows Mobile, Android, etc.” This is a little weird: I don’t need to view a webpage to get my email on my desktop, so why would I need to view a webpage to get my email on a smartphone? This demonstrates the most annoying aspect of all the announcements I’ve gotten about the new email system: confusion between, or failure to distinguish, client and server. Many of the justifications for the new email system are made on the basis of clients, but the change the university is making is a change of server. That’s weird because the whole point of standardized protocols like IMAP is that clients can be entirely agnostic to the identity of the server. If the server depends on nonstandard features, that messes things up for clients. Which client I use is my choice, and the server should accommodate it. That’s the “needs of users.” However, I know at least one user who’s having trouble even marking her mail read while connecting her chosen client to the new server.

Features and “performance” are a common justification for using proprietary software. There is a common attitude that “open source is best for making the world a better place, but I need to get my work done and I’ll choose the best tool for the job.” We’ve already ruled out any advantage in terms of “features” of Microsoft Exchange over the Cyrus IMAP daemon. There are other things to consider: the quality of service and the message that the choice of proprietary software sends to students of the university. The quality of service with Microsoft email servers that I’ve experienced is terrible. Again, the biggest problem is the confusion of client and server. I used to work at a large hospital system that used Exchange, and whenever I called the helpdesk, they would refuse to answer any questions about the server until I told them which client I used. In other words, they wouldn’t simply tell me if the server was down because I was using Thunderbird to read my mail. Storing mail and reading mail are two different things. Sending mail and fetching mail are two different things. The only people in charge of an email system should be people who (minimally) understand those facts. Microsoft’s sales tactic, on the other hand, is that their “customers” will save money by hiring less-qualified people. In other words, screw service, screw your users, save your own ass some dough. That’s what UNC Chapel Hill is choosing.

The message this sends to UNC students is that the university cares more about money and less about student lives and intellectual freedom. They’re already raising tuition. The university is effectively making itself another corporate entity. They are in the business not of education, but of being in business, just like any other vacuous corporation. That’s insane. Universities should be bastions of intellectual freedom and they should cultivate and harvest the fruits of that intellectual freedom by providing key infrastructure themselves. They should not seek to emulate the corporate world. I understand that they want to save money, but they should do it by hiring fewer, well-qualified people to staff fewer servers running free software.

I often mention that I’ve been using the internet for almost twenty years. I do this for two reasons, neither of which is to brag or claim seniority. One is to emphasize that before most people found out about the world-wide web, there was an established culture on the internet of scientists, engineers and computer personnel. Universities were the backbone of that community. After the concept of the internet was established by the military, universities carried the torch and led the way in technology. When the military needed a new technology to build up their newer communications network, where did they go? They went to Berkeley. A university had the necessary expertise for what they needed.

The other reason I point out how long I’ve used the internet is a sort of nostalgia. The best way to use the internet was always on university machines, running some form of Unix: BSD, System V, SunOS and more recently GNU/Linux. Universities were always the best places to use computers. Why? Because universities were where the talent grew up, developed and was allowed to be creative. Universities existed outside the stultifying, cost-saving world of corporations.

It seems that now that universities are done giving the baby a bath, they are throwing the baby, the bathwater, the tub, the sink and the baby’s mom out of the window. They might as well kick dad in the balls by only teaching their computer science students how to work for Microsoft. Using Microsoft servers, software and supporting corporate culture (i.e. the culture of Microsoft) doesn’t serve the interests of students, researchers at the university, or society. Universities serve their students by teaching them how to be flexible, creative and constructive members of society. Universities do not help their students by teaching them to be money-hungry, cog-thinking, competitive corporate flunkies. A university can teach all of the above good values by teaching students with free software based on Unix ideals. It can even do so in an inclusive environment that includes Microsoft software. However, teaching students in a university computing environment mainly based on Microsoft software does not teach them creativity or flexibility: it teaches them “you don’t have a right to learn until you are chosen as one of the elite; then you can subjugate people just like we’ve subjugated you.”

Anybody who says “We have to be realistic and teach students to use software X because those are the jobs that are out there” is a corporate tool. People who learn properly at a university can learn to use anything that someone hands them: that’s the point of a college education, to be able to learn, not to know something.

Furthermore, universities do not serve their researchers by running Microsoft software. Researchers at universities are professionals and they need to be treated that way. Microsoft products are just not professional quality. Even if they were, they limit freedom in such a way that they should not be taken seriously by researchers. This goes for all proprietary software, including Mathematica and Matlab, but researchers have choices about what to use in their own research. Unfortunately they often have to use what a university provides when it comes to basic services like email. Universities should provide the best, and Microsoft Exchange is just not the best. If Cyrus IMAP were not the best, they could have chosen Dovecot, Courier, Zimbra or any of the huge number of free software alternatives. If the Cyrus system “… is old, complex, outdated, and does not fully meet the needs of our users,” they can hire dedicated, talented people and make it simple and current so that it meets those needs. Instead they choose Microsoft.

Both of these failures to meet the needs of students and researchers mean that the university is failing society as well. People denigrate the “ivory tower” all the time, but there are chunks that fall from that ivory tower that change society and even make people a lot of money. Let me see if I can think of a few examples: the internet, the world-wide web, science, liberalism, Charles Darwin…

What can you do? Complain. If you have a problem with the new email system, let the university know. They do listen. A list of the relevant managers in charge can be found on the ITS web site. Email them directly. Another alternative is to stop using email. I don’t advise this because UNC has made email an official form of communication. You could probably rig something where they have to contact you by campus mail (forcing you to use email is discriminatory). However, another problem is that I believe email, with all its problems, is still the best form of electronic communication. If you want to ‘e’-anything, you should email it. One thing I know I will do is seriously consider the IT infrastructure at the next university I go to. I’m a graduate student, so my time at UNC is limited. I will have things I will miss and things I certainly won’t.

One more thing: don’t wait until the forced transition if you plan to continue using email. I’m going to transition tomorrow and I’ll let you know how it goes. Thanks for reading.


Should I learn programming? The case for Unix and Emacs in everyday life

June 28, 2011 2 comments

Most people think “programming is for programmers,” and by “programmers” they mean people who earn a living writing software, i.e. “end-user” software: software that people will buy, or that will be used in some big company. However, recently I’ve overheard a lot of talk from people in the business world about what those large companies do, and much of it sounds like it could be done by simple computer programs. The problem is that people don’t learn programming, nor do they learn to think of their problems as amenable to programming. I surmise that for most people, a programming solution doesn’t even enter into their thinking.

At a recent breakfast conversation, my brother told me that at his company most of the problems that come up result from people not thinking of something unless a notification comes up on their computer screens and asks them. Even if they know there’s a problem, they won’t do anything about it if they don’t see it right there in front of their faces. They won’t even get up and walk five feet over to the guy in charge of that problem to ask him. These people and their tasks could be replaced with simple programs. He also told me that the corporation he works for uses none of the operations research or systems design theory that he learned in business school. Everything is left up to guessing at the best solution and sticking with it for years, regardless of how much it costs or what the alternatives are.

I also sat next to some people in the airport who were using Excel and mentioned SAP (which my brother tells me is basically a money-milking machine; the companies who buy it are the cows). One of these people said her current project was “organizing [inaudible] into categories based on characteristics, including horsepower…they weren’t labeled with horsepower.” She was doing it “by hand.” One of my missions, in my last job and in graduate school, has been to intercede whenever I hear the phrase “by hand.” We have computers; “by hand” should be a thing of the past. This young woman apparently didn’t think of her task algorithmically. Why would she, when it’s unlikely any of her education included the word “algorithm”?
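
To make the point concrete, here is a sketch of how a task like that might look as a program rather than “by hand.” The file name, data and category cutoffs are entirely made up, since I have no idea what she was actually organizing:

```shell
# Hypothetical data: machines with a horsepower column.
cat > machines.csv <<'EOF'
name,horsepower
pump-a,40
press-b,150
lathe-c,75
EOF

# awk reads each data row and assigns a category from the horsepower field.
awk -F, 'NR > 1 {
    c = ($2 < 50) ? "small" : (($2 < 100) ? "medium" : "large")
    print $1 "," c
}' machines.csv
# prints:
#   pump-a,small
#   press-b,large
#   lathe-c,medium
```

Three minutes of awk instead of an afternoon of clicking, and the program is still there the next time the spreadsheet changes.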

These patterns highlight the current failings of commercial computing. Commercial computing has one goal: sell computers. For most of the history of computing the focus was on hardware, but now it is mostly software. Commercial computing’s current strategy is to sell people software as if it were hardware and then walk away, wagging a finger when the customer comes back complaining that it doesn’t work. Eric Raymond calls this the “manufacturing delusion.” Programs aren’t truly manufactured because they have zero marginal cost: it costs only as much to make a billion copies of a program as it does to make one copy. Commercial computing focuses on monolithic hardware and software, i.e. programs that try to do everything the user might need, funneling everyone’s work through that one program. That doesn’t work.

Academic computing, on the other hand, has the perspective that if something doesn’t work the way you need it to work, you rewire it, combine it with something else, or build onto it so that it will accomplish a specific task. People literally rewired computers up until twenty-five years ago, when it became cheaper to buy a new machine (if anyone can correct me on that date, please let me know). Similarly for software: if the software you have doesn’t do the job you need, you write the software that does. If you have several programs that decompose the problem, you tie them together into a workflow. If you have a specific problem, even one you will only solve once, and programming it might take you a day while doing it “by hand” would take a week, then you write a program for it. If you ever have to do it again, you already have the program. You might also hit a new problem that is very similar, so you broaden the scope of the previous program. Recently I wrote a script that inserted copyright notices with the proper licenses into a huge number of files: I had to insert the right notice, either for the GPL or All Rights Reserved, based on the content of those files. On the other hand, if you have a program that generally does what you want, e.g. edits text, and you want it to do something specific, you extend that program to do what you need.
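
For the curious, a minimal sketch of what that copyright script might have looked like. The marker text, file names and notice wording here are all invented for illustration; the real script keyed off the actual contents of my files:

```shell
#!/bin/sh
# Hypothetical sketch: prepend a license notice to each file, choosing
# the notice based on a marker found in the file's content.
printf 'MARKER-FREE\nsome code\n' > free.txt
printf 'some code\n' > private.txt

for f in free.txt private.txt; do
    if grep -q 'MARKER-FREE' "$f"; then
        notice='This file is distributed under the GNU GPL.'
    else
        notice='All Rights Reserved.'
    fi
    # Prepend by writing the notice plus the original contents to a
    # temporary file, then moving the temporary file over the original.
    { printf '%s\n' "$notice"; cat "$f"; } > "$f.tmp" && mv "$f.tmp" "$f"
done
```

A day of writing something like this once beats a week of pasting notices by hand, and the loop generalizes to the next batch of files for free.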

Basically I see a dichotomy between solving problems only to the extent that solving them makes certain people a lot of money, versus actually solving problems. If you disagree that this dichotomy exists, let me know and I’ll show you academic computing in action.

The solution for all these problems is teaching people to think algorithmically. Algorithmic thinking is inherent in the use of certain software, and therefore that software should be used to teach algorithmic thinking. Teaching people algorithmic thinking using Excel is fine, but Excel is not free software, and thus cannot be considered “available” to everyone. Teaching these skills in non-computer classes will get the point across that people will be able to apply them in any job. Teaching this to high school students will give them the skills they need to streamline their work: they will be able to do more work, do the work of more people, communicate better and think through problems instead of just sitting there. People will also know when someone else is bullshitting them, trying to sell them something they don’t need. Make no mistake, I’m not saying that teaching programming will get rid of laziness, but it might make laziness a lot harder to tolerate. If you know that you can replace that lazy person with a very small shell script, then where will the lazy people work?

If you teach biology, or any field that is not “computer science,” then I urge you to start teaching your students to handle their problems algorithmically. Teach them programming! I am going to try to create a project to teach this to undergraduate students. I have in mind a Scheme interpreter tied to a graphics engine, or perhaps teaching people using R, since it has graphics included. Arrgh…Scheme is just so much prettier. Teaching them the crucial ideas behind Unix and Emacs will go a long way. Unix thinking is workflow thinking. Unix (which most often these days is actually GNU/Linux) really shines when you take several programs and link them together, each doing its task to accomplish a larger goal. Emacs thinking is extension-oriented thinking. Both are forms of algorithmic thinking.

If you are a scientist, then stop procrastinating and learn a programming language. To be successful you will have to learn how to program a computer for specific tasks at some point in your career. Every scientist I know spends a huge amount of time engaged in programming. Whenever I visit my grad student friends, their shelves and desks are littered with books on Perl, MySQL, Python, R and Ruby. I suggest learning Scheme, but if you have people around you programming in Python, then go for it. I also suggest learning the basics of how to write shell-scripts: a lot of people use Perl when they should use shell-scripts. Learn to use awk, sed and grep and you will be impressed with what you can do. The chapters of Linux in a Nutshell should be enough to get you going. Classic Shell Scripting is an excellent book on the subject. Use Emacs and you’ll get a taste of just how many things “text editing” can be.
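
As a small taste of what those three tools can do together, here is a hypothetical pipeline: grep selects the rows of interest, sed cleans up an inconsistent label, and awk totals a column. The data file is invented for the example:

```shell
# Invented data: species counts, with one inconsistently-labeled row.
cat > sightings.txt <<'EOF'
finch 3
sparrow 5
finch_old 2
warbler 1
EOF

# Select the finch rows, normalize the stray "_old" label, sum the counts.
grep 'finch' sightings.txt \
    | sed 's/_old//' \
    | awk '{ total += $2 } END { print "finch total:", total }'
# prints: finch total: 5
```

Each tool does one small job, and the pipe does the gluing: that is the Unix workflow thinking described above.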

Every profession today is highly data-oriented. Anybody hoping to gain employment in any profession will benefit from this sort of learning. Whether people go into farming, business, science or anything else, it will help them succeed, for several reasons. There are the obvious benefits of getting more work done, but there are also social reasons we should teach people algorithmic thinking. The biggest social reason is that teaching algorithmic thinking removes the divide between “customers” and “programmers.” This is why it bothers me to hear “open source” commentators constantly referring to “enterprise” and “customers” and “consumers.” If a farmer graduates from high school knowing that he can program his way to more efficient land use, then he will not be at the mercy of someone who wants to sell him a box containing the supposed secret. Again, you can teach algorithmic thinking using Excel, but then you already have the divide between Microsoft and the user. With free software that divide just doesn’t exist.

Is Android Open?

October 25, 2010 2 comments

Last week Steve Jobs raised the question “Is Android open?” What was particularly funny was that he was using the word “open” in a sense that most people don’t use or hear these days. That’s why it was so funny to see Andy Rubin’s response: they were talking about fundamentally different things. What a lot of people have forgotten is that the word “open” was seriously redefined in 1998 by the people who coined “open source.”

Before open source, “open” meant compatibility. “Open computing” was a selling point of the workstation market that said “if you compile your [usually C] code on a DEC workstation, you can send it to your friend who uses a Sun workstation, and it will work. Fantastic! Buy our hardware!” Modern Macintosh computers are the descendants not of Mac OS 9 but of NeXT computers, which were Steve Jobs’ workstation computers.

I have a hard time believing that Steve Jobs is really dumb enough to not understand that the conventional use of “open” has changed. I think he’s deliberately sowing confusion. He can say “Apple is all about open” and be totally correct in his own usage of the term, but he also gains a slight moral legitimacy in the eyes of people who have heard that “open software” is “good.” This is another reason I don’t like using the term “open source.” It confuses people. And it gives completely immoral people like Steve Jobs a certain amount of moral leeway. This is America, and he has a right to be completely immoral to make a profit, but you shouldn’t help him.

Distributed version control: I get it, but I don’t need to

September 4, 2010 4 comments

Over the past few months I’ve been working with new tools to enable me to work on several machines and keep the same important data, such as configurations and bookmarks. The impetus for this was that I’ve started to use a laptop: I never wanted one, but then somebody on the TriLUG mailing list offered one for sale for $20, and I just bought it. It’s really useful when I’m watching the kids or when travelling: I don’t have to use someone’s antiquated, slow Windows computer just because I’m visiting them. Also my wife needs to be on our main machine after we put the kids to bed. Short of acting out my fantasy and installing a server with LTSP terminals throughout the house, the laptop lets me keep working. Unfortunately the usual routine of setting up my shell, Emacs, Firefox, email and my Org Mode agenda files seemed so laborious that I realized “there’s potential for automation here.”

Sync Tools

To tackle email, I switched from MH to using IMAP. For Firefox I started using Weave. I was using Delicious for a long time, but Delicious is not free software so I decided I didn’t trust it. Weave is as free as Firefox, so when I heard about it I decided to go for it and it’s worked really well. I rarely used the social aspect of Delicious and mostly used it for portable bookmarks.

For the other two areas, Emacs and Org Mode files, the solution was less clear. I had tried using ssh urls with tramp to have my agenda on multiple machines; then I saw Carsten Dominik’s Tech Talk about Org Mode, where he described using git to manage his Org files on multiple computers. For config files (shell and Emacs) I had tried using rsync to mirror them on different machines, using a Makefile with rsync commands. However, the different needs of different machines would always screw things up. Then I remembered that version control might be the right tool. I had tried that before with Subversion (SVN), my main version-control system (VCS), but things had not gone much better than with rsync. Then I thought perhaps a distributed version-control system (DVCS) would make more sense.

Using DVCS

My first impetus for using something other than Subversion was the discovery that one project per repository makes the most sense; that way I can make branches and not worry about confusing anybody. So I have a repository for my webpages, and a repository for my biggest project. That works well. However, I also started working on a book (i.e. it became a book) and it really didn’t fit in either repository. I came down to a choice between adding another Subversion repository, with all the Apache setup, or using something else that would be more convenient. Although setting up Apache is not hard after the third or fourth time you’ve done it, I still felt it was unnecessary. I knew I would be the only person working on this, so something I could configure to use ssh made the most sense.

This is the most compelling argument for distributed version control: it’s easy to set up a repository and it’s easy to configure access for local system users. With Mercurial (similar for git and bzr), you just do

joel@chondestes: ~/tmp > hg init repo
joel@chondestes: ~/tmp > cd repo
joel@chondestes: ~/tmp/repo > touch myself
joel@chondestes: ~/tmp/repo > hg add myself
joel@chondestes: ~/tmp/repo > hg status
A myself

That to me is really compelling. Setting up a Subversion repository is also pretty easy, depending on where you want to put it, but configuring access from the internet is not as simple. I can access the above-initialized Mercurial repository just by giving an ssh url as the argument to its clone or pull commands, e.g. hg clone ssh://joel@chondestes/tmp/repo.

Another thing that is really good about DVCS systems (all that I’ve tried) is that they’re really good at merging. They do not have a monopoly on this, however; Subversion has been good at merging for as long as I’ve been using it. Again, I may be different from the people writing the manuals in that I don’t work on large projects with lots of contributors.

For some reason, however, the biggest advantage of distributed version control touted by its proponents is that you can make commits when you don’t have internet access. Wow! That is huge. Oh wait, that’s sarcasm. I am in this situation pretty often working on my Subversion projects and it really doesn’t bother me. If I’ve come to a point where I should make a commit and I don’t have network access, I can do one of two things: make a ChangeLog comment and keep working, or stop working. I always have other things I can do. Seriously, I don’t see this as an advantage, especially since if what you want is to update another repository, you have to push your changes anyway, and that requires network access. Committing changes to my Org files would be useless unless I could push them to the computer I know I’ll be sitting at in the morning.

Another ridiculously inflated advantage proponents mention is that you don’t have to worry about breaking the build when you commit, because you make your commits to your local repository. I have already spent another blog posting on this concept, but again this is not a distinct advantage. I commit things that are broken when they’re broken already, but not if I’m adding a new feature. If you want to commit something on a new feature that might screw other things up, the proper way to do it with centralized version control is to make a branch, e.g. svn copy ^/trunk ^/branches/new-feature -m "Branch for the new feature". It seems like some people don’t think this is possible with Subversion, but I’ve been doing it since the beginning. Not only is it possible, it’s recommended.

There are two more big problems I have with distributed version control. First: it’s a fad. I don’t mean it’s something overblown and bound to die out like MC Hammer. However, it seems like everyone is switching to it and citing the same bad reasons. That to me is a warning sign. The rest of us who know how to use Subversion will just keep going, doing it the right way and reading the manual.

My second big problem is one that people who like DVCS seem to love: the “fork happens” idea. Forking is when some kind of split in the development philosophy or goals between project members leads to factionalism. The most famous example is the creation of XEmacs. The author of Mercurial: The Definitive Guide uses socially-oriented rhetoric (thank you, Eric S. Raymond) to justify distributed version control. He says, basically, that forking is natural and we’ve all been in denial by using centralized version control. Using DVCS, on the other hand, brings us out of our comfort zone into some new promised land where we all have our own fork.

This argument doesn’t really hold up. As others have pointed out, the idea that you’re always working on your own fork is kinda ridiculous. Unless you’re happy to just keep your own version, your contribution to the larger piece of software will always have to be transmitted to someone else. Why would you develop software in a vacuum?

The same Definitive Guide author says that some people use centralized version control because it gives them an illusion of authority, knowing that people won’t be off somewhere putting out the project’s code as their own. While that’s possible, it certainly goes against the tone of the Subversion book, which encourages administrators to let developers develop their own branches freely. And again, even if you don’t have the illusion of authority over a project, you’re going to have to express authority at some point and decide whose changes to merge into the mainline. Now who’s delusional? People don’t want to download some dude’s fork; they want to download from a central point, where all the developers and users meet to discuss and code, and decide which changes make the software the best.

Which DVCS?

My previous experience with distributed version control was using git to maintain my webpages. There was so much about it that didn’t make sense that I decided its creator was not using the Rule of Least Surprise. He claims to have never used CVS for Linux, so I can understand him not using CVS-like commands. However, the creators of Mercurial and Bazaar seem to have noticed that a lot of people who aren’t Linus Torvalds have also been writing software over the past few years: these two DVC systems do use syntax that is mostly familiar to a habitual Subversion user like me.

I got pretty psyched about Bazaar and read a lot about it over the past few weeks. However, despite the claims made on the bzr website, bzr is really slow. I was just playing around with it by cloning a repository on the same machine (again, that is a selling point) and no matter what I did, it was deathly slow with just a few text files and minor changes. I’m not the only one who thinks so: Emacs maintainer Stefan Monnier recently wrote to Emacs-devel to say that some minor pushes had taken over 30 minutes. I liked the bzr interface, but considering that hg has pretty much the same interface and is way faster, I decided to stick with hg for my Org files. [Update: I am using bzr for a project on LaunchPad.net] The only remaining task is to figure out how to maintain forks of my configuration files on different machines. I think I have just been ignoring very basic rules of usage for merging, so that should not be hard.

Conclusions?

My conclusion is that using Mercurial is a good idea for my needs, and perhaps I can make it work for my configuration files as well. However, it is not the panacea its proponents advertise, nor do we necessarily need to rethink version control completely. Version control is good for many things, and making bold statements like “forking is fundamental” is really uncalled for. Those sorts of conclusions are specific to the particular needs of the developers involved, not necessarily to mine. I’m not going to convert any of my major SVN projects to bzr as I originally intended, because as I see it, DVCS does not offer any major advantages over centralized version control. Maybe it does for Linux kernel developers, Firefox developers, etc. For me it’s not a major improvement and it’s not going to help me work any better. I’m going to keep using Subversion and Mercurial, and we’ll see what happens.

Simpleio now on LaunchPad!

September 3, 2010 2 comments

I’ve developed an input-output library for scientific applications that is now available on Launchpad. Take a look, download it, play with it, see what you can do with it.

As of now the only documentation is the source code, but I am working on a manual. Simpleio lets you write simple configuration files in which you can define variables, give them ranges, and supply input values for calculations. You can also use its writing capabilities to write large buffers of output at the desired time.

XML: I don’t get it yet

August 11, 2010 3 comments

I’m grappling with understanding what XML is really good for. I suppose that if I wanted to make an RSS reader or something like that, I would understand it right away. Mostly I’m trying to understand not what it’s good for in general, but how it could help me. I understand that XML is for portable document (broadly defined) exchange over the web, but it’s the meaning of “document” and “exchange” that requires illumination.

I have a few big obstacles. For one, documents describing XML are only about syntax, which for me is the really really really really really really really really really really really really simple part. I mean, how much of a brain does it take to understand well-formedness and hierarchy? What’s hard to understand is poor-formedness and non-hierarchy (i.e. HTML). I reiterate that I come from a Unix background, where it’s impossible to think of things in a non-hierarchical manner, although some bad habits are creeping into the community from outside.

The next big obstacle is that many documents describing XML say that its big advantage is storing data in plain text files. My reaction is always “What idiot stores data in an opaque binary format that requires a special program to read it?” Again, coming from a Unix background, this idea is hardly revolutionary. So it seems to me that XML is a way of bringing a bit of simplicity out of the Unix world, with just enough annoying complexity to satisfy people who insist on complexity. If it were too simple, they wouldn’t recognize it. So we have a potentially simple thing — data stored in plain text — in a completely verbose form: XML.

That verbosity brings to mind the common complaint about XML, which is that it’s really Lisp syntax with angle brackets and a bunch of stuff inside them. Consequently Lisp is a natural way to deal with XML. Again, this is bringing the ideas of one programming community (the Lisp community) to another (web programming).
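To make that correspondence concrete, here is a toy converter of my own (a sketch, not any standard tool): strip the angle brackets from an XML tree and what remains is essentially an s-expression.

```python
import xml.etree.ElementTree as ET

def to_sexp(elem):
    """Render an ElementTree element as a Lisp-style s-expression."""
    parts = [elem.tag]
    # Include text content, if any, as a quoted string.
    if elem.text and elem.text.strip():
        parts.append('"%s"' % elem.text.strip())
    # Recurse into child elements in document order.
    parts.extend(to_sexp(child) for child in elem)
    return "(" + " ".join(parts) + ")"

doc = ET.fromstring("<book><title>SICP</title><year>1985</year></book>")
print(to_sexp(doc))  # → (book (title "SICP") (year "1985"))
```

The hierarchy is identical in both notations; only the delimiters differ, which is why Lisp programmers feel at home manipulating XML.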

I have thought of a few things I could do with XML, but most of all I don’t know why I feel the urge to learn more about it. One would be a graphing program that uses a web server and a browser to view results as SVG. That might be cool. Again, however, I have this voice inside that says “just tie together the right tools and automate them with a Makefile!”
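For what it’s worth, a first stab at that SVG idea needs nothing beyond the standard library, since SVG is itself just XML. This is a hypothetical sketch of such a graphing program, not anything polished:

```python
import xml.etree.ElementTree as ET

def bar_chart(data, width=200, bar_height=20):
    """Emit a minimal SVG bar chart as a string; data is a list of (label, value)."""
    svg = ET.Element("svg", xmlns="http://www.w3.org/2000/svg",
                     width=str(width), height=str(bar_height * len(data)))
    peak = max(value for _, value in data)
    for i, (label, value) in enumerate(data):
        # One rect per data point, scaled to the widest bar.
        ET.SubElement(svg, "rect", x="0", y=str(i * bar_height),
                      width=str(int(width * value / peak)),
                      height=str(bar_height - 2), fill="steelblue")
    return ET.tostring(svg, encoding="unicode")

print(bar_chart([("apples", 3), ("oranges", 7)]))
```

Point a browser at the output and it renders as a chart; no web server strictly required, though one would make it interactive.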

Please share your thoughts.

Categories: Technology, Web

Are sysadmins failing Free Software?

Today I attempted to run a simulation on the university’s Beowulf cluster, and a simple shell script failed with an error saying that mktemp was not present. Really? I have this experience all the time: I go from my Fedora system to the university’s RHEL system and find that my workstation is a heck of a lot easier to use, because on the cluster a package is missing, or there’s some security measure or other sysadmin preference that prevents me from using the system the way I should be able to use it.
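As an aside, the dependency on mktemp(1) could have been sidestepped entirely. For instance, Python’s standard tempfile module creates scratch files safely; a sketch of that workaround (assuming, of course, that Python is installed on the cluster):

```python
import os
import tempfile

def make_scratch_file(prefix="sim-"):
    """Safely create a scratch file without shelling out to mktemp(1)."""
    fd, path = tempfile.mkstemp(prefix=prefix, suffix=".dat")
    with os.fdopen(fd, "w") as f:
        f.write("scratch data\n")
    return path

path = make_scratch_file()
print(path)       # unique, race-free temporary file
os.remove(path)   # clean up when done
```

But the point stands: I shouldn’t have to work around a missing core utility in the first place.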

To me this is merely annoying, but I think the Free Software movement needs to take notice of it. Here’s the real problem: people view “Linux” (their name for it, not mine) as a workhorse, something that no one would willingly use unless they need “performance.” A lot of people talk about what a pain it is to use “Linux,” while I think it is a joy to use, and a pain to use anything else. Perhaps the reason they think so is that they’ve only had contact with systems run by administrators who keep everything too tightly locked down. I put “Linux” in quotation marks because the experience these people have is so drastically different from the stable, capable GNU system I use for all my computing.

My first exhibit was the lack of mktemp on that thoroughly outdated RHEL system; my second is that whenever I’m around system administrators, they talk about having problems that I have never had administering GNU systems on home and office workstations. Last time I was in such a situation I said as much to somebody, and she was really nice about it: she said “We might just be doing everything wrong.” While I understand she was just being tactful (more than average, even), I highly doubt that I have all the answers. Nothing I have done beyond a default install was any harder to figure out than reading the manual. But I’m not going to RTFM a room full of sysadmins, am I?

The problem I see for the Free Software movement is that sysadmins are not motivated to promote free software, at least not most of the time. On top of that, their jobs are certainly not easy and they don’t hold most of the power. They have the sort of “spit on your french fries” power that unfortunately makes them defend their decisions ruthlessly, even when they’re shown to be wrong.

This becomes a problem for me advocating free software on a personal and minor public level because if I have a room full of scientists, most of them have used “Linux” but very few of them even know it can have a GUI. Even I didn’t know that until 5 years ago! Furthermore if any of them have tried using GNU/Linux and have faced continued frustration because the machine they’re working on doesn’t work, I’m not going to convince them of anything.

I’m not saying that sysadmins ought to put free software advocacy first; what I’m saying is that pretty often they make these machines like old Unix machines instead of GNU machines — machines with no arbitrary limits, portable tools, and an emphasis on enabling users to be active participants. If they could focus some of their energy on that, and their bosses would let them, then we’d all have an easier time telling people about the benefits of free software.
