I have often become confused, angry or cynical over the past few years when seeing self-professed “open source users” with Macs on their desks, or using R under Windows. I once had a discussion with a Linux user group about which laptop to buy: when many had said my laptop was “under-powered” I pressed them and found out that they meant it would have been slow running Windows. Contributors to help forums and on IRC have often assumed that my machines dual-boot Windows and GNU/Linux: “Can you see the partition when you boot into Windows?” I have also seen the insistence, or mere suggestion, of calling the operating system I’m using “GNU/Linux,” instead of Linux, dismissed as “zealotry,” or “mere semantics.” I became angry because I assumed that everyone in these situations had heard of the values of freedom embodied by the GNU project and had rejected them as unimportant. How could freedom possibly be unimportant? What could be more important to Americans, other than money?
There was another possibility that I only considered for a few seconds at a time, but it's now becoming clear that this possibility is the more plausible one: these people have never heard of the GNU Project, or the Four Freedoms, or Richard Stallman. They have never heard of the true benefits of software freedom, the dangers of proprietary software, or the full breadth of freedom that is possible. If they have heard of it, perhaps they dismissed it without believing it was possible: perhaps software freedom is, to most people, an urban legend. This seems strange to me, since I came to free software by reading about it on Wikipedia and gnu.org, and my interest was primarily motivated by (a) freedom and (b) the possibility of having a Unix-like system to work on. The fact that it was free to download and install merely removed the barriers to enacting those freedoms.
The barrier to my own belief that people have just never heard of freedom is that it seems to me that all systems (in fact all things) are imperfect. We all know how imperfect Windows is, and I got annoyed as hell using a Mac, so as much as its devotees attest to its perfection, it's not perfect for everybody. However, people complain the most about the imperfections of Linux[sic]. Perhaps this is because they can: if they complain, someone will eventually do something about it. With Windows and MacIntyre, you have to get fifty million corporate employees to complain, whereas with free operating systems, you can be just one guy and raise a huge stink about how the buttons on the top of the windows are arranged all wrong (of course, the other advantage is that somebody can explain to you how that's your fault). Despite the lowered barriers to complaints, I always had the feeling that people were complaining because they felt GNU/Linux just wasn't "professional" or "slick," because it's not purveyed by a huge corporation. Therefore they complain about all kinds of things that really aren't important to me.
Nevertheless, you still get people promoting the hell out of Linux[sic]. I could never understand why. Take NixiePixel for example, a YouTube personality who promotes primarily Ubuntu and Linux Mint. I really thank her for doing so, because whether she likes it or not, she’s promoting freedom: better that people have it and not know it than not have it at all. However, she never says why she’s promoting these alternatives. Why is it better to use Ubuntu than Windows, particularly if there aren’t the same games available for it? She even has a new series called OSAlt where she discusses and rates “open source” alternatives to non-free programs. Again the question is why? Is “open source” inherently better for users somehow? I suppose in some ways it is, but how?
This is so puzzling because for me, without freedom, everything comes down to your personal choices. No computer operating system, no anything, is going to work well, or even comfortably, for everybody. Life just doesn't work that way: nothing "just works." So why promote one alternative over another? Freedom is the only motivator to use GNU/Linux that stands that test. Freedom leads to a lot of nice by-products, but freedom is the prime mover. Some users may not have a choice of what to use; they may have to use a proprietary system at work, and not have time to learn to use something else at home. Additionally, some advocates like NixiePixel may be unwilling to embrace a campaign for freedom, because considerations of freedom are intensely personal at the same time as "political," and the possibility of insulting people is pretty high. There is also a lot of angry, cynical behavior in the open source and free software worlds. That's bound to happen whenever a community is composed of human beings instead of marketing personnel.
This is why it’s so crucial to let people know about their freedom at every possible opportunity, i.e. every time you mention the system. I know that “GNU/Linux” is a mouthful, but it’s too easy for people to hear about “Linux” and not know there’s anything special about it except that nerds like it. I myself had heard of “Linux” for years before I knew that it was free of charge, much less free-as-in-freedom (FAIF). There’s too much possibility that people will hear of “Linux” and just think it is another operating system. Or, they may get sucked into using non-free software by the “nerd-allure” of it.
Take Android for example: Android is a Linux system, but it only took me a few minutes of using my dad’s Samsung phone to see that Android is not a freedom-respecting system. None of the values of the free software movement were respected in its interface or its operation. There weren’t even subsidiary values (those by-products I mentioned), like organization, clarity and standards. There was an avenue for spam and advertising that was pretty well-lubricated, but the only reason I saw for using the Linux kernel was that it’s adaptable to many devices. After playing Angry Birds for a few minutes, it became clear to me why it’s important to call the system I’m using now GNU/Linux: it’s accurate, and it promotes a mission that is in line with my values. As often as I can inform people of their possibility for freedom in technology, I will do my best.
For more on these issues, you can read the GNU/Linux FAQ.
This weekend I’ve made trips to two events that really got me thinking about who we should promote free software to. The first stop was the Durham Farmer’s Market, and the second was a benefit concert for a cooperative preschool in Chapel Hill. I have been thinking for a long time about the “organic food crowd,” particularly because I’m a biology graduate student, and most of my fellow graduate students buy organic food or shop at farmer’s markets. They seem to have values in common with me, yet few of them use free operating systems. A lot of my fellow graduate students know about certain free software, like Firefox, R and Python. However, mostly they use Window$ or McIntyre operating systems.
I really think somebody needs to get the idea of free operating systems to people at the Durham Farmer’s Market, Whole Foods and events like the concert I just attended. Obviously that could be me, and I could just go and talk to the vendors at the farmer’s market. That would be easy. There are a few problems, chief among them the assumptions I’m making. I automatically assume that these people who I seem to have a lot in common with are very different from me. I assume that they are making their decisions from a fashion-inspired reflex. I think I feel this way because I have come to my own values my own way, and not because of fashion. However, I know my conclusion is not justified. I don’t actually have good data about the “organic food people,” and probably at least ten percent of them do indeed use free software. Probably more than ninety percent of them at least know about Firefox, even if they don’t know what’s actually good about “open source.” I do know that pretty often I see cars like the one I saw driving back from Chapel Hill: bumper stickers saying “When words fail, music sings,” alongside an Apple sticker.
The other problem is what, exactly, I would say to them. Would I recommend a particular distro? Would I recommend that they read GNU Philosophy? Would I recommend that they learn about the issues on Wikipedia? These were all helpful things for me. However, it's best to get across the ethical essence of the idea by simply giving people a persuasive argument. That almost always gets people's attention, but you need to give them at least a first step to get going. Another good first step is to recommend the film Revolution OS, but that's starting to seem a bit dated. Perhaps it's time for another documentary, like Patent Absurdity.
The third challenge is to remember that promoting freedom is not a race to get the most users. People in the software press seem to always be concerned about numbers, about "desktop share," and about "killer apps." That's really not the point. The point is to demonstrate that ethical motivation is enough to create a working operating system. In other words, the GNU/Linux operating system was created for freedom and fun; it was not created for money. Often the first thing people tell me when I give them my persuasive argument is "but programmers have to make money!" as if money were the only reason that anybody ever does anything. The point of free software (and Wikipedia) is to show skeptics that there are people who have different values.
Ultimately, I believe that ethical motivation will prevail and, one way or another, whether they know it or not, people will end up using ethics-promoting software. It doesn't matter how many Windows users we "convert," or how many Mac users we tell the truth to about the software they're using. It doesn't matter that we "conquer the world" or anything like that. What matters is that those of us who care about our freedom now do what we can to continue to improve our ability to live our lives without using ethics-compromising software. The more we can do that, the better demonstration we make to people who finally decide that they want to make the effort to preserve their freedoms. We will do our best, and others will see it and make their decision.
The University of North Carolina has a long history of supporting software freedom. The University has sponsored ibiblio.org since before I started using the internet, and recently made the very smart move to switch away from the proprietary Blackboard online learning system to Sakai, which is licensed under an Apache-like license. Recently, however, the university has made an unfortunate choice about its email systems. I wrote in my last post about the dichotomy between academic computing and commercial computing, and unfortunately UNC Chapel Hill has chosen commercial computing over academic computing in handling its email systems. This disappoints me. I contend that their justifications, mostly based on "performance" and "meeting the needs of users," are hollow. Performance is not the only thing that is important in computer systems. As far as I can tell, the only feature that distinguishes the new system from the existing system is the ability to invade user privacy. Worst of all, the university is sacrificing academic computing ideals, including freedom, and "outsourcing" its email to a commercial interest. The fact that a world-class university like UNC Chapel Hill would trust Microsoft instead of using their own talent is really stupid.
Take a look at this list of advantages of the new Microsoft-based email system, offered by the ITS staff at the medical school. Look carefully and notice that the only feature that is really new is the “[a]bility to ‘wipe’ lost/stolen portable devices.” Everything else on that list is available with Cyrus IMAP. In other words, the university prefers a system that allows invasion of privacy. Now, I understand that there is a good security motivation for this feature. However, when considering that this is the only new feature of Exchange over Cyrus IMAP, it seems odd that the university is favoring a new system that does allow invasion of privacy. Why is that so important? Clearly the new system does not “meet the needs of users,” as much as it meets the needs of administrators.
Another feature that doesn't make sense to me is "Scalable handheld (smart phone) e-mail solution – works with Blackberry, iPhone, Windows Mobile, Android, etc." This is a little weird: I don't need to view a webpage to get my email on my desktop, so why would I need to view a webpage to get my email on a smartphone? This demonstrates the most annoying aspect of all the announcements I've gotten about the new email system: confusion between, or failure to distinguish, client and server. Many of the justifications for the new email system are made on the basis of clients, but the change the university is making is a change of server. That's odd, because the whole point of standardized protocols like IMAP is that clients can be entirely agnostic to the identity of the server. If the server chooses to depend on nonstandard features, that messes things up for clients. Which client I use is my choice, and the server should accommodate it. That's the "needs of users." However, I know at least one user who's having trouble even marking her mail read while connecting her chosen client to the new server.
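The client-agnosticism of IMAP is easy to demonstrate: the whole conversation is standardized text, so you can speak it without any mail client at all. Here's a rough sketch (the hostname and credentials are placeholders, and exact server responses vary):

```shell
# Talk RFC 3501 IMAP directly over TLS, no mail client involved.
# Cyrus, Dovecot, and even Exchange's IMAP gateway all answer these
# same commands -- which is why "which client do you use?" should be
# irrelevant to whether the server is up.
imap_unseen() {
    host="$1" user="$2" pass="$3"
    # Pipeline three standard IMAP commands; the short sleep gives
    # the server time to answer before stdin closes.
    { printf 'a1 LOGIN %s %s\r\na2 STATUS INBOX (UNSEEN)\r\na3 LOGOUT\r\n' \
        "$user" "$pass"; sleep 2; } |
      openssl s_client -quiet -connect "$host:993" 2>/dev/null |
      grep 'UNSEEN'
}

# Usage (placeholder host and credentials):
#   imap_unseen mail.example.edu onyen secret
# The server answers with a standard STATUS response naming the
# unseen-message count, whatever software happens to be running it.
```

Storing mail, fetching mail, and reading mail stay cleanly separated, exactly as the protocol intends.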
Features and “performance” are a common justification for using proprietary software. There is a common attitude that “open source is best for making the world a better place, but I need to get my work done and I’ll choose the best tool for the job.” We’ve already ruled out any advantages in terms of “features” of Microsoft Exchange over the Cyrus IMAP daemon. There are other things to consider: the quality of service and the message that the choice of proprietary software sends to students of the university. The quality of service with Microsoft Email servers that I’ve experienced is terrible. Again, the biggest problem is the confusion of client and server. I used to work at a large hospital system that used Exchange, and whenever I called the helpdesk, they would refuse to answer any questions about the server until I told them which client I used. In other words, they wouldn’t simply tell me if the server was down because I was using Thunderbird to read my mail. Storing mail and reading mail are two different things. Sending mail and fetching mail are two different things. The only people in charge of an email system should be people who (minimally) understand those facts. Microsoft’s sales tactic, on the other hand, is that their “customers” will save money by hiring less qualified people. In other words, screw service, screw your users, save your own ass some dough. That’s what UNC Chapel Hill is choosing.
The message this sends to UNC students is that the university cares more about money and less about student lives and intellectual freedom. They’re already raising tuition. The university is effectively making itself another corporate entity. They are in the business not of education, but of being in business, just like any other vacuous corporation. That’s insane. Universities should be bastions of intellectual freedom and they should cultivate and harvest the fruits of that intellectual freedom by providing key infrastructure themselves. They should not seek to emulate the corporate world. I understand that they want to save money, but they should do it by hiring fewer, well-qualified people to staff fewer servers running free software.
I often mention that I've been using the internet for almost twenty years. I do this for two reasons, neither of which is to brag or claim seniority. One is to emphasize that before most people found out about the world-wide web, there was an established culture on the internet of scientists, engineers and computer personnel. Universities were the backbone of that community. After the concept of the internet was established by the military, universities carried the torch and led the way in technology. When the military needed a new technology to build up their newer communications network, where did they go? They went to Berkeley. A university had the necessary expertise for what they needed.
The other reason I point out how long I’ve used the internet is a sort of nostalgia. The best way to use the internet was always on university machines, running some form of Unix: BSD, System V, SunOS and more recently GNU/Linux. Universities were always the best places to use computers. Why? Because universities were where the talent grew up, developed and was allowed to be creative. Universities existed outside the stultifying, cost-saving world of corporations.
It seems that now that universities are done giving the baby a bath, they are throwing the baby, the bathwater, the tub, the sink and the baby's mom out of the window. They might as well kick dad in the balls by only teaching their computer science students how to work for Microsoft. Using Microsoft servers and software, and supporting corporate culture (i.e. the culture of Microsoft), doesn't serve the interests of students, researchers at the university, or society. Universities serve their students by teaching them how to be flexible, creative and constructive members of society. Universities do not help their students by teaching them to be money-hungry, cog-thinking, competitive corporate flunkies. A university can teach all of the above good values by teaching students with free software based on Unix ideals. It can even do so in an environment that includes Microsoft software. However, teaching students in a university computing environment based mainly on Microsoft software does not teach them creativity or flexibility: it teaches them "you don't have a right to learn until you are chosen as one of the elite; then you can subjugate people just like we've subjugated you."
Anybody who says “We have to be realistic and teach students to use software X because those are the jobs that are out there” is a corporate tool. People who learn properly at a university can learn to use anything that someone hands them: that’s the point of a college education, to be able to learn, not to know something.
Furthermore, universities do not serve their researchers by running Microsoft software. Researchers at universities are professionals and they need to be treated that way. Microsoft products are just not professional quality. Even if they were, they limit freedom in such a way that they should not be taken seriously by researchers. This goes for all proprietary software, including Mathematica and Matlab, but researchers have a choice of what to use in their own research. Unfortunately they often have to use what a university will provide for them when it comes to basic services like email. Universities should provide the best, and Microsoft Exchange is just not the best. If Cyrus IMAP were not the best, they could have chosen Dovecot, Courier, Zimbra or any of the huge number of free software alternatives. If the Cyrus system "… is old, complex, outdated, and does not fully meet the needs of our users," they can hire dedicated, talented people and make it simple and current so that it meets the needs of users. Instead they choose Microsoft.
Both of these failures to meet the needs of students and researchers mean that the university is failing society as well. People denigrate the “ivory tower” all the time, but there are chunks that fall from that ivory tower that change society and even make people a lot of money. Let me see if I can think of a few examples: the internet, the world-wide web, science, liberalism, Charles Darwin…
What can you do? Complain. If you have a problem with the new email system, let the university know. They do listen. A list of the relevant managers can be found on the ITS web site. Email them directly. Another alternative is to stop using email altogether. I don't advise this, because UNC has made email an official form of communication. You could probably rig something where they have to contact you by campus mail (forcing you to use email is discriminatory). The other problem is that email, with all its problems, is I believe the best form of electronic communication. If you want to 'e'-anything, you should email it. One thing I know I will do is seriously consider the IT infrastructure at the next university I go to. I'm a graduate student, so my time at UNC is limited. I will have things I will miss, and things I certainly won't.
One more thing: don’t wait until the forced transition if you plan to continue using email. I’m going to transition tomorrow and I’ll let you know how it goes. Thanks for reading.
Last weekend I thought I just couldn’t wait any longer for Gnome-shell and Gnome 3, so I tried to install Fedora Rawhide on my laptop. After trying to yum update the system, a weird thing happened and the machine turned off spontaneously. Then when I rebooted the Live USB I’d made with Unetbootin, a mysterious mixture of Gnome 3 and Gnome 2 became the default desktop (I am not joking, there was a Gnome 2 panel sitting right on top of Gnome-shell). This was too weird. I decided to just give that up until Tuesday when I knew the Fedora Alpha was coming out.
However, I also knew about a relatively new distro called #! (Crunchbang) that I really wanted to try. The only thing holding me back was that it's based on Debian. I have had seriously bad times with Ubuntu in the past, and my few attempts at installing Debian had not gone well (the first time, the machine completely froze up as soon as I opened Synaptic). Despite my difficulties with Ubuntu and Debian, I've always acknowledged that Debian has a lot going for it, and Crunchbang's "philosophy" certainly agreed with me. This laptop is not "underpowered," but sometimes I've felt like Gnome is a bit of overkill; the only reason I use Gnome is that my two main applications (or "shells") are Emacs and Firefox, both GTK applications. Much to my delight, I found that Crunchbang has an Xfce version.
I thought I'd give it a try. I downloaded the .iso for the Xfce 4.4 version, made a bootable USB with Unetbootin (no CD drive) and cranked it up. I selected an encrypted system this time, another reason I wanted to reinstall; the university is pretending to enforce rules about keeping student information private, so I'd like to be able to tell them my laptop is encrypted. The only surprise was that the installer took about two hours to erase my partitions before it started the actual install. Once that was done, however, the installation was really nice. The installer asks which features you want to install (Java, web server, development tools like version control systems and the autotools) — this was already much nicer than most other distros I've tried — all in a text interface running in a terminal emulator. This is nice because I'd rather install that stuff before I need it, while the installer knows which packages those are. In Fedora I can install package groups, but it's just much nicer to take care of it at install time.
Just about everything that I need on a daily basis works really well with Crunchbang. The laptop speakers work, surprisingly, without monkeying around with anything. I had to install the wireless drivers from the Realtek website again, but the next morning when I booted the machine, the wireless card worked without my having to copy any firmware or anything. Nice!
There were two interesting surprises. The first: the default web browser is Chromium. I really like Firefox, so I installed Iceweasel and have had no problems; I really need Firefox because I use Zotero. The other surprise: YouTube works really well using Gnash. Of course, it's not perfect; in fact, the Backsliders froze the whole machine when I tried to put it on 1080p.
Remarkably, I've had no problems yet with package management. Of the three or four times I've installed Ubuntu, it was only a matter of time before something got really screwed up in the normal course of updating the system. It was pathetic. Once I got, of all things, a stale file handle, and there was nothing I could do to update the system (it would have required a fresh install to fix). Nothing's broken yet. I'll give it a little time, but I'm taking the blame off Debian and putting it squarely on Ubuntu.
Probably the nicest thing about Crunchbang is that it lives up to its advertising. I get annoyed when the promo for a software project says their main application or distro is "light and fast"; everybody says their thing is "light and fast." Crunchbang's web site merely says Crunchbang offers
a great blend of speed, style and substance. Using the nimble Openbox window manager, it is highly customisable and provides a modern, full-featured GNU/Linux system without sacrificing performance. (Home Page)
This also means it’s the antithesis of Ubuntu: I can’t do anything with Ubuntu without it sending a notification my way saying “You can’t do that,” or “Wouldn’t you rather do this?” If I wanted that I would use a Mac.
What I haven’t done
As amazing as the fact that I've done all my daily work in Crunchbang for the past week (only turning on my home workstation to use as a DVD player) is what I haven't done. I have not changed the theme, I have not changed the wallpaper, I have not changed the … anything. And these are things that I compulsively tinker with, so that's saying a lot.
If you've been hesitating to try Crunchbang for any of the reasons I was (mistaking it for Ubuntu, or just not needing to try another distro), I encourage you to try it. It has already changed my mind about swearing off Debian-based distros. I am loving it.
I have long held disdain for laptops. I didn't need one for most of my life; I use paper for the things that undergraduates at my university seem to require laptops for. I have also often thought of laptops as conspicuous consumption, a "thneed" that people express insanely irrational desire for. My wife and I once toured a local preschool and came to a room where some children were using a computer. Our tour guide said in a sad voice, "Yes, they're all desktops now, but we should be getting laptops really soon!" Why? Why is a laptop automatically a better computer than the one they already had?
About a year-and-a-half ago, however, I bought a small, used IBM Thinkpad from a member of TriLUG for $20. This machine ran mostly well for a long time, and I found it convenient in a lot of situations, the biggest being the ability to move quickly from one part of the house to another while supervising my children.
That machine, of course, had a limited lifespan. I was working on something when Emacs told me my /home partition was read-only. Further analysis revealed the hard drive was failing. I replaced this machine with a free ($0) machine from a friend, and although I appreciated the gift, that machine was worth about what I paid for it, so I thought of buying a new computer (for the first time in a while).
Choosing the X100e
The biggest choice to make was whether to go down the netbook road. Netbooks have come a long way since I bought my first one for my wife, one of the first EeePCs that had a number of design flaws that were corrected right after we bought it. Most “netbooks” now have 10 inch screens, and bigger keyboards. However, all the netbooks I could find in stores looked really flimsy compared to one that wasn’t called a netbook, but was still really compact: the X100e.
The X100e comes in two flavors, the single core MV-40 ($400) and a dual-core (“Elite”) version for only $100 more. For a while I was thinking that I would need the dual-core version and it was worth the price. Then I saw a video review where the reviewer plays 1080p video on the single core version and it runs beautifully. This was something that I didn’t plan on doing, so I thought I’d save myself the hundred bucks. I had a minor hesitation when I read a review that said “Don’t buy these netbooks, buy a real laptop.” I don’t want to ruin it for you, but I totally disagree with that guy: this “netbook processor” I bought runs incredibly fast.
I ordered direct from Lenovo. Here’s what came out of the box:
And here are the hilarious instructions for how to turn on the machine:
I had previously created a boot USB stick with Unetbootin. This is so much easier than it was even two years ago: I must say it was amazing. I totally didn’t expect it to work (I love free software but I’m realistic about these things, and I don’t mind a little trouble to get what I want).
I plugged in the USB stick, hit the power button and then carefully watched for the right F-key to press so I could boot from USB. I didn’t find it in time, and saw the dreaded “Starting POS…” on the screen. I hit the power button HARD! and started over again. This time I pressed F12 out of instinct, and the machine presented me with a menu: it already recognized that the USB stick was bootable. I chose that option and saw this:
That was my first “WOW!” As I said, I didn’t really expect it to work. I hit ENTER again and:
Write changes to disk? You bet yer ass! Installation took less than 10 minutes. The next order of business was to cover up that disgusting little sticker:
That’s better! And look: the X100e is so thin that it fits inside a neoprene envelope!
Before buying this machine I knew that it had a wireless NIC that is not supported by recent versions of Linux. Based on another blog post, I downloaded the Realtek drivers but couldn't get the build process to go much further. This was befuddling: make returned an error saying "/lib/modules/$(uname -r)/build" didn't exist, when in fact it did. Installing the kernel-headers package didn't help. Finally nirik on #fedora set me straight: I installed kernel-devel and the build proceeded through 'make' and 'make install.'
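For anyone hitting the same wall, here's a rough sketch of the diagnosis (Fedora package names; your driver tarball's Makefile may differ):

```shell
# Out-of-tree kernel modules compile against the running kernel's
# build tree. The path in the error message looks like a literal
# directory, but $(uname -r) expands to the running kernel version.
KDIR="/lib/modules/$(uname -r)/build"

if [ -e "$KDIR" ]; then
    echo "kernel build tree present: $KDIR"
else
    # On Fedora, kernel-headers alone is not enough; the build tree
    # that out-of-tree modules need comes from kernel-devel:
    #   sudo yum install kernel-devel
    echo "missing $KDIR: install kernel-devel, not just kernel-headers"
fi

# With kernel-devel installed, the driver's own build then works:
#   make && sudo make install
```

The confusing part is that the directory can appear to exist while being an empty or dangling symlink for your exact kernel version, which is why the error message seems to lie.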
Free Software Advocacy When Buying a New Machine
I recently read a perspective that buying a "Windows 7 computer" and replacing its OS with GNU/Linux actually hurts our cause. I disagree with the author of that statement for two reasons. One is that wiping out Windows 7 on this machine means that I'm getting out there with a machine that people think needs Windows to run, and showing them, at the coffee shop, at the playground, at the library, in the classroom, that GNU/Linux supports every piece of hardware on this brand new machine, even though most manufacturers don't make it a selling point. Think of how important that is when there are still people saying that when you switch to Linux[sic] you should know that hardware support is virtually nonexistent. That's bullsh*t, but people won't know it's bullsh*t if we all use machines cobbled together from spare parts.
The other reason is that when the difference is between a $400 machine that I can touch and a $1400 machine that I can’t, I’m going to choose the former. Hating the hardware and loving the software on it will not make a good pitch to people who ask about the software: “Yeah I bought this from a free software-oriented OEM without being able to see it, so it runs okay, but I hate the keyboard.” I think it’s far more important to get out there with GNU/Linux and show people how well it works, even if it’s against the manufacturer’s desires (and even if those desires are the result of coercion and harassment from Microsoft). I also just have to put a ceiling on how much I’m willing to spend; one of the only reasons I decided to buy a laptop is how inexpensive they’ve become.
This may be rationalization, but I don't believe I'm paying Microsoft very much by buying this computer: I believe Microsoft makes more money from the added-on crapware (i.e. Office) that people get when they order their machines than they make from Windows. Windows is how they hook people, how they make the world believe that they are necessary. They turn a profit from selling Office to people for $400 with a new machine, chosen by people who basically say "Well, we need this [Office] stuff to do anything with the computer, and we're already paying $600, so…" I think it's just fine to buy a computer with just Windows, and never buy another piece of Microsoft crapware. I have no need to do that when I'm running a better operating system, even if I had to install it myself (did I mention how easy that was?).
A few weeks ago I migrated two major projects to distributed version control systems (DVCS), leaving only one project in Subversion, the one hosted on Savannah. As you can read in my prior posts, I had resisted switching over to DVCS. Recently, however, I’ve come to understand the benefits propounded by DVCS adherents, and I’ve found that DVCS has more features than most tutorials let on.
Why Did I Resist?
I resisted DVCS so strongly for a few reasons:
- Most arguments for DVCS I encountered were actually anti-Subversion arguments, many of them based on incorrect information about Subversion and CVS
- Much of what I read sounded like knee-jerk trendiness: it sounded like people were doing it just because Linus Torvalds says Subversion is stupid
- I had an important project (my dissertation!) in Subversion, managed with Trac. I didn’t want to lose all that history by doing a crappy conversion.
When the anti-Subversion arguments didn’t hold up, I ignored them. I thought maybe my working conditions were just different, or other people just weren’t reading the manual. Those are still possibilities, but the harder thing to examine was my second reason for dismissal: I assumed that anyone who said these things was a total newbie who had just been told that DVCS was better. I’ve written before about how proponents of object-oriented programming often just sound inexperienced with programming. I figured the same was true of DVCS proponents.
However, two things happened that really changed my mind. The first was realizing that the most annoying thing about somebody questioning my decisions is the feeling that they think my decision is poorly considered, when in fact it was deliberate, careful and took me weeks of preparation. It’s very easy to take that attitude with people online: when I don’t hear or see people, I don’t have that mirror held up to me. It’s very easy to just brush something off and say that the other person “just isn’t thinking about it.” Realizing how much it pisses me off when people take that attitude with me, I’ve thought a little more about how I judge people’s attitudes online.
Many experienced hackers have switched
The second thing was realizing that people whose opinions I know I can value, people who definitely have done their homework, have switched major projects to DVCS. Emacs, my favorite piece of software, which I am using right now to write this, is kept in Bazaar now. I know the people who made that decision did their homework, not going by knee-jerk reaction, and certainly not just copying Linus Torvalds. Bazaar is also part of the GNU Project.
What about my revisions?
svn2bzr answered my third concern. svn2bzr is a featureful-enough tool that will create Bazaar branches or repositories from SVN repository dumps. It’s really freakin’ easy to create whatever configuration you want:
> python ~/.bazaar/plugins/svn2bzr/svn2bzr.py --prefix=subdir svndump newrepo
This will create a new Bazaar repository in the directory `newrepo’ that contains all the revisions in the subdirectory `subdir’ of the svn repository. This is where Bazaar’s concept of a repository differs from Subversion’s.
In a Bazaar repository you can have many branches beneath the repository in the filesystem, and you import a branch by branching into a subdirectory. I didn’t get this for a few weeks, so let me give you an example. Suppose I have a branch called `branch’ located at `~/Public/src/branch’ and a repository called `repo’:
> cd repo
> bzr branch ~/Public/src/branch here
That creates a branch within the repository called `here’. Now I can create other branches, merge them, etc. The only tricky thing about getting my revisions into a place where Trac could use them was that I needed a repository hosted on HTTP. Then I used the TracBzr plugin to add the repository to Trac. I realized that changeset links are only used in Trac tickets, and since I had so few of those referencing current revisions, changes in the revision numbers wouldn’t matter that much.
Features of DVCS
I heard many, many anti-Subversion arguments and some really bogus arguments for DVCS. People have said “you can’t merge,” “you can’t make branches,” “Subversion causes brain damage” and on and on. The bogus pro-arguments I heard were that you can commit without a network connection, that “forking is fundamental,” and that DVCS is “modern.” Answering these is simple: committing without a network connection is not a big deal. Updating without a network connection, on the other hand, is impossible with Subversion, and it’s a situation I’ve found myself in more often, especially working with a laptop instead of just two workstations. This is where DVCS was nice: updating is a bigger problem than committing.
As to “you can’t merge” and “you can’t make branches,” we all know that’s baloney. However, what you can do much better with DVCS systems like git and Bazaar is edit directory structure and rename files. This is a huge advantage of DVCS systems. Bazaar, for instance, keeps track of all renames and copies in its history. Subversion, on the other hand, implements a rename as a DELETE operation plus an ADD operation. Not so smooth: a good way to get something better than CVS, but not the best.
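To make the rename point concrete, here is a minimal sketch using git (the directory and file names are made up for illustration, and it assumes git is installed). The rename is recorded as a single operation, and `git log --follow` traces the file’s history straight across it:

```shell
# Hypothetical demo of first-class rename handling in a DVCS (git here).
mkdir rename-demo && cd rename-demo
git init -q
git config user.email "demo@example.com"   # local identity so commits work anywhere
git config user.name "Demo"
echo "some analysis code" > analysis.R
git add analysis.R
git commit -q -m "add analysis script"
git mv analysis.R cleaning.R               # one rename operation, not DELETE + ADD
git commit -q -m "rename to reflect purpose"
git log --oneline --follow cleaning.R      # shows both commits, across the rename
```

Subversion can follow copy history too, but the rename itself is stored as a delete paired with an add, which is what makes restructuring a tree feel clumsy there.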
Furthermore, DVCS systems are very good at merging. That doesn’t mean you can’t merge with Subversion — I’ve been doing that for years. However, merging between two branches in Bazaar is much simpler than merging in Subversion. I don’t have to read the help when I’m merging with Bazaar; merging with Subversion is not hard, but it’s not as simple. Simplicity is the name of the game, baby.
A Stupid Git Realization
I had tried using git before and didn’t enjoy it. I’m glad to say I was just using it wrong. I had tried using it to manage my webpages, but whenever I pushed my local changes to my remote webpage tree on UNC’s servers, I would get confusing messages about the push not updating the remote working tree. It didn’t really make sense to me, and I wasn’t interested in trying git again; hence using Bazaar for some new projects.
I had a weird realization one night: I was working with the git tree of Guile, and someone on IRC had told me that the latest git source had a known problem. I didn’t want to go get the tarball for Guile 1.9-13, so I thought “Wait, I have the git tree, so I should be able to generate whatever release version I want. How do I do that?”
> git tag -l
> git checkout release_1-9-13
and there I had it. Wow! That is cool.
I also followed a simple tutorial to get my webpages working with a hook that would update the local tree (the one served as my homepage) every time.
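The tutorial’s trick can be sketched like this (all paths here are hypothetical, and it assumes git is installed): a bare repository receives the pushes, and its post-receive hook checks the pushed commit out into the directory the web server actually serves.

```shell
# Sketch of push-to-deploy: a bare repository plus a post-receive hook.
# Paths are made up for illustration.
mkdir -p "$HOME/deploy-demo" && cd "$HOME/deploy-demo"
git init -q --bare www.git        # the repository I push to
mkdir -p served                   # the tree the web server would read

# The hook runs inside www.git after every push and checks the
# master branch out into the served directory.
cat > www.git/hooks/post-receive <<'EOF'
#!/bin/sh
GIT_WORK_TREE="$HOME/deploy-demo/served" git checkout -f master
EOF
chmod +x www.git/hooks/post-receive
```

After that, a plain `git push` from any clone updates the served tree with no manual copy step, which is exactly the “check out to it every time I push” idea.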
It seems a simple idea: make a repository in a different directory, and check out to it every time I push to that repository. Why hadn’t that occurred to me before? Conversion from SVN to git was insanely simple:
> sudo yum install git-svn
> git svn clone http://path/to/repo webgit
I think I’m done with Subversion. DVCS, at least git and Bazaar, can do a hell of a lot and I really like their features. I wouldn’t mind using Subversion for an existing project, but I’m not going to start any new projects with it. I’m also going to take it easy on people who disagree with me online: I’ve seen that at least some of them were speaking from the same position I hope to speak from.
Someone on Stack Overflow disagreed with me about using centralized version control for a solo project. Of course, I predicted this; as I said in my last post, DVCS is a fad and it will have many converts who support it in a knee-jerk fashion. The person who disagreed may just be misinformed, or not thinking about this hard enough. I may not have made clear in my last post, however, why centralized version control makes more sense for a solo developer. I admit it seems paradoxical that someone would use centralized version control for a solo project, but as I’ll show, it does make more sense (and I’m not the only one who thinks so).
Consider this: if you are a solo developer working on the same machine all the time (e.g. a laptop), a DVCS repository in your current working directory and a Subversion repository in your home directory are practically the same. The repository is always online and you can always make commits. This is what makes the “commit while offline” argument of DVCS proponents so weak: in this situation you can do that with Subversion too, and most of the time you won’t need to anyway.
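Here is what that looks like concretely (hypothetical paths; it assumes the Subversion command-line tools, including svnadmin and svnlook, are installed). The “server” is just a directory in my home directory, reached through a file:// URL, so commits never touch the network:

```shell
# Hypothetical solo setup: a Subversion repository in the home directory.
svnadmin create "$HOME/solo-repo"                  # the always-available "server"
svn -q checkout "file://$HOME/solo-repo" "$HOME/solo-wc"
cd "$HOME/solo-wc"
echo "print('hello')" > script.py
svn -q add script.py
svn -q commit -m "first commit"                    # no network connection involved
```

The working copy behaves exactly as if it were talking to a remote server, except the repository is a local path that is always reachable.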
Now consider that if you are a solo developer working on multiple machines, DVCS only adds an extra step to your development. I always work on my big ol’ workstation during the day. If I were using git or hg for my most-frequently-worked-on projects, I would need to remember to push my changes from one repository to the other. With Subversion I just don’t have that step. All other steps are identical between the two workflows.
Throughout the arguments above, I am only one developer, and this is the crux of the idea. I think people often confuse the situation of multiple computers with that of multiple people. Above, I argued that with one computer there is no difference between using file URLs and using a DVCS repository in the current working directory. However, there would be a difference if there were multiple developers: then you have to configure access to a single machine (either for cloning or checking out) for multiple users, and this is when things get more complex.
The “D” in DVCS stands for “Different People.” What I mean is that the “forking is fundamental” argument posed by DVCS proponents doesn’t apply when there’s a single developer or author working on a project. If by myself I want to create branches and work on different features, that is perfectly easy to do with Subversion, and so is merging. What distributed version control is for is different people maintaining different features and seeing how they work together. I completely reject the idea that centralized version control is obsolete, and I will keep recommending it to solo developers.