
Posts Tagged ‘bazaar’

Distributed version control: I get it, but I don’t need to

September 4, 2010

Over the past few months I’ve been working with new tools that let me work on several machines while keeping the same important data, such as configurations and bookmarks. The impetus was that I’ve started using a laptop: I never wanted one, but then somebody on the TriLUG mailing list offered one for sale for $20, and I just bought it. It’s really useful when I’m watching the kids or travelling: I don’t have to use someone’s antiquated, slow Windows computer just because I’m visiting them. Also, my wife needs to be on our main machine after we put the kids to bed, so short of acting out my fantasy of installing a server with LTSP terminals throughout the house, the laptop lets me keep working. Unfortunately the usual routine of setting up my shell, Emacs, Firefox, email and my Org Mode agenda files seemed so laborious that I realized “there’s potential for automation here.”

Sync Tools

To tackle email, I switched from MH to IMAP. For Firefox I started using Weave. I had used Delicious for a long time, but Delicious is not free software, so I decided I didn’t trust it. Weave is as free as Firefox, so when I heard about it I decided to go for it, and it has worked really well. I rarely used the social aspect of Delicious and mostly used it for portable bookmarks.

For the other two areas, Emacs and Org Mode files, the solution was less clear. I had tried using ssh URLs with TRAMP to have my agenda on multiple machines; then I saw Carsten Dominik’s Tech Talk about Org Mode, in which he described using git to manage his Org files on multiple computers. For config files (shell and Emacs) I had tried mirroring them on different machines with a Makefile full of rsync commands. However, the different needs of different machines would always screw things up. Then I remembered that version control might be the right tool. I had tried that before with Subversion (SVN), my main version-control system (VCS), but things had not gone much better than with rsync. Then I thought perhaps a distributed version-control system (DVCS) would make more sense.
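The targets in that Makefile did nothing more than run rsync commands like these (the hostname “laptop” and the file list here are placeholders, not my exact setup):

joel@chondestes: ~ > rsync -av ~/.bashrc ~/.emacs laptop:
joel@chondestes: ~ > rsync -av ~/org/ laptop:org/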

Using DVCS

My first impetus for using something other than Subversion was that I’d discovered that having one project per repository makes the most sense; that way I can make branches and not worry about confusing anybody. So I have a repository for my webpages and a repository for my biggest project. That works well. However, I also started working on what turned into a book, and it really didn’t fit in either repository. It came down to a choice between adding another Subversion repository, with all the Apache setup that entails, and using something else that would be more convenient. Although setting up Apache is not hard after the third or fourth time you’ve done it, it still felt unnecessary. I knew I would be the only person working on this, so something I could configure to use ssh made the most sense.

This is the most compelling argument for distributed version control: it’s easy to set up a repository and it’s easy to configure access for local system users. With Mercurial (similar for git and bzr), you just do

joel@chondestes: ~/tmp > hg init repo
joel@chondestes: ~/tmp > cd repo
joel@chondestes: ~/tmp/repo > touch myself
joel@chondestes: ~/tmp/repo > hg add myself
joel@chondestes: ~/tmp/repo > hg status
A myself

That to me is really compelling. Setting up a Subversion repository is also pretty easy, depending on where you want to put it, but configuring access from the internet is not as simple. I can access the Mercurial repository initialized above just by passing an ssh URL to the clone or pull commands.
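From another machine it would look something like this (the laptop prompt is hypothetical; the path is relative to my home directory on the desktop):

joel@laptop: ~ > hg clone ssh://joel@chondestes/tmp/repo
joel@laptop: ~ > cd repo
joel@laptop: ~/repo > hg pull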

Another thing that is really good about DVCS systems (all that I’ve tried) is merging. They do not have a monopoly on this, however; Subversion has been good at merging for as long as I’ve been using it. Again, I may differ from the people writing the manuals in that I don’t work on large projects with lots of contributors.

For some reason, however, the biggest advantage of distributed version control touted by its proponents is that you can make commits when you don’t have internet access. Wow! That is huge. Oh wait, that’s sarcasm. I am in this situation pretty often working on my Subversion projects and it really doesn’t bother me. If I’ve reached a point where I should make a commit and I don’t have network access, I can do one of two things: make a ChangeLog comment and keep working, or stop working. I always have other things I can do. Seriously, I don’t see this as an advantage, especially since if what you want is to update another repository, you have to push your changes anyway, and that requires network access. Committing changes to my Org files would be useless unless I could push them to the computer I know I’ll be sitting at in the morning.
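Just to spell out what I mean (the hostnames and paths here are made up): the commit works offline, but the step that actually matters to me doesn’t.

joel@laptop: ~/org > hg commit -m "Update agenda"        # works on the train
joel@laptop: ~/org > hg push ssh://joel@chondestes/org   # still needs the network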

Another ridiculously inflated advantage proponents mention is that you don’t have to worry about breaking the build when you commit, because you make your commits to your local repository. I have already spent a whole blog post on this concept, but again this is not a distinct advantage. I commit things that are broken when they’re broken already, but not when I’m adding a new feature. If you want to commit something on a new feature that might screw other things up, the proper way to do it with centralized version control is to make a branch. Some people seem to think this isn’t possible with Subversion, but I’ve been doing it since the beginning. Not only is it possible, it’s recommended.
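Making that branch takes only a couple of commands; something like this (the repository URL and branch name are placeholders):

joel@chondestes: ~/project > svn copy http://example.org/svn/project/trunk \
      http://example.org/svn/project/branches/new-feature \
      -m "Branch for the new feature"
joel@chondestes: ~/project > svn switch http://example.org/svn/project/branches/new-feature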

There are two more big problems I have with distributed version control. First: it’s a fad. I don’t mean it’s overblown and bound to die out like MC Hammer. However, it seems like everyone is switching to it and citing the same bad reasons, and that to me is a warning sign. The rest of us who know how to use Subversion will just keep going, doing it the right way and reading the manual.

My second big problem is an idea that DVCS fans seem to love: “fork happens.” Forking is when some kind of split in development philosophy or goals between project members leads to factionalism; the most famous example is the creation of XEmacs. The author of Mercurial: The Definitive Guide uses socially-oriented rhetoric (thank you, Eric S. Raymond) to justify distributed version control. He says, basically, that forking is natural and that we’ve all been in denial by using centralized version control. Using DVCS, on the other hand, brings us out of our comfort zone into some new promised land where we all have our own fork.

This argument doesn’t really hold up. As others have pointed out, the idea that you’re always working on your own fork is kinda ridiculous. Unless you’re happy to just keep your own version, your contribution to the larger piece of software will always have to be transmitted to someone else. Why would you develop software in a vacuum?

The same author says that some people use centralized version control because it gives them an illusion of authority, knowing that people won’t go off and put out the code as someone else’s project. While that’s possible, it certainly goes against the tone of the Subversion book, which encourages administrators to let developers develop their own branches freely. And again, even if you don’t have the illusion of authority over a project, you’re going to have to exercise authority at some point and decide whose changes to merge into the mainline. Now who’s delusional? People don’t want to download some dude’s fork; they want to download from a central point, where developers and users meet to discuss and code, and decide which changes make the software the best.

Which DVCS?

My previous experience with distributed version control was using git to maintain my webpages. There was so much about it that didn’t make sense that I decided its creator was not following the Rule of Least Surprise. He claims never to have used CVS for Linux, so I can understand him not using CVS-like commands. However, the creators of Mercurial and Bazaar seem to have noticed that a lot of people who aren’t Linus Torvalds have also been writing software over the past few years: these two DVCSes use syntax that is mostly familiar to a habitual Subversion user like me.

I got pretty psyched about Bazaar and read a lot about it over the past few weeks. However, despite the claims made on the bzr website, bzr is really slow. No matter what I do, it’s really slow. I was just playing around with it by cloning a repository on the same machine (again, that is a selling point) and it was deathly slow even with just a few text files and minor changes. I’m not the only one who thinks so: Emacs maintainer Stefan Monnier recently wrote to emacs-devel to say that some minor pushes had taken over 30 minutes. I liked the bzr interface, but considering that hg has pretty much the same interface and is way faster, I decided to stick with hg for my Org files. [Update: I am using bzr for a project on Launchpad.net] The only remaining task is to figure out how to maintain forks of my configuration files on different machines. I think I have just been ignoring very basic rules of usage for merging, so that should not be hard.
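For the record, the playing around was nothing exotic; just local branching of a small repository of text files, something like this (the directory names are made up):

joel@chondestes: ~ > bzr branch notes notes-copy
joel@chondestes: ~ > cd notes-copy
joel@chondestes: ~/notes-copy > bzr pull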

Conclusions?

My conclusion is that using Mercurial is a good idea for my needs, and perhaps I can make it work for my configuration files again. However, it is not the panacea its proponents advertise, nor do we necessarily need to rethink version control completely. Version control is good for many things, and making bold statements like “forking is fundamental” is really uncalled for. Those sorts of conclusions are specific to the particular needs of the developers involved and don’t necessarily apply to me. I’m not going to convert any of my major SVN projects to bzr as I originally intended, because as I see it DVCS does not offer any major advantages over centralized version control. Maybe it does for Linux kernel developers, Firefox developers, etc. For me it’s not a major improvement and it’s not going to help me work any better. I’m going to keep using Subversion and Mercurial, and we’ll see what happens.

Subversion Causes Brain Damage (according to Joel Spolsky)

June 22, 2010

I just read part of a Mercurial tutorial by Joel Spolsky and I’ve been thinking about it since. Some people just don’t get it. In this case Spolsky doesn’t seem to get what Subversion is for. He claims that people who use Subversion are “brain-damaged” for the following reason:

Subversion team members often go days or weeks without checking anything in. In Subversion teams, newbies are terrified of checking any code in, for fear of breaking the build, or pissing off Mike, the senior developer, or whatever. Mike once got so angry about a checkin that broke the build that he stormed into an intern’s cubicle and swept off all the contents of his desk and shouted, “This is your last day!” (It wasn’t, but the poor intern practically wet his pants.)

Okay, so his point is made, but the problem is that he and these people obviously haven’t read The Subversion Book. What he’s describing is people checking in code that they haven’t tested, which is really stupid. What the book says to do is always test before you commit, and if you’re trying something radical, make a private branch (which is bloody easy with Subversion) and do your experimenting there. Then you merge that branch into a working copy of the trunk, test the changes, and commit if it works. The book generally encourages boldness in the name of collaboration; its authors certainly discourage the despotic “lead developer Mike” depicted above. If you find out someone’s committed a bug, the mature thing to do is revert the change, then go to the person (or send him an email) and say “Please test before committing.”
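In command form, the workflow the book describes looks roughly like this (this assumes Subversion 1.6’s “^/” shorthand for repository-relative URLs, and the branch name and test command are made up):

joel@chondestes: ~/project > svn copy ^/trunk ^/branches/experiment -m "Private branch for a radical idea"
joel@chondestes: ~/project > svn switch ^/branches/experiment
  ... hack, test, and commit freely on the branch ...
joel@chondestes: ~/project > svn switch ^/trunk
joel@chondestes: ~/project > svn merge --reintegrate ^/branches/experiment
joel@chondestes: ~/project > make check     # or whatever test suite applies
joel@chondestes: ~/project > svn commit -m "Merge the experiment branch into trunk"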

I’ll admit that I’ve not worked at the same level of collaboration as Mr. Spolsky, but I have read the (friendly) manual. Version control is all about the ability to track changes, make branches, and do thorough testing. Most of all it’s about maintaining the entire history of your project so you can undo big mistakes. I’ll also say that my attempts so far to demonstrate Subversion to people have resulted in them simply not getting it. However, what they didn’t get was version control, not centralized version control. The situation would have been no different if I’d shown them Mercurial.

Furthermore, every Subversion (or git, or bzr, or hg, or CVS) repository I’ve ever gotten code from has a big disclaimer that says “This might be broken, don’t expect it to work.” Even Emacs has that, and it has always worked. I guess this sort of thing is expected in the free software community, whereas it is part of a huge scheme of denial in the non-free software community (if you can call it a community).

The other thing that Spolsky doesn’t seem to get is Unix. What Subversion does is model a Unix filesystem hierarchy and record changes to that hierarchy. One of the beautiful things about Unix is that the filesystem hierarchy is significant, i.e. it means something. Subversion knows that, and it does a damn good job of exploiting the beauty of it. Those of us who can tell that Unix is The Right Thing are always going to have a hard time explaining how simply beautiful it is. Subversion has obviously spread beyond the “worse is better” community, and people out there often just don’t get it.

A little background

If you’re one of my non-programming friends or you’ve never used version control, or are thinking about it: Version control is a scheme for recording changes to a set of files. For example you can keep track of changes to documents, program code, or basic directory structures. If someone makes a change to a program and introduces a bug, you can undo it, in a way.

For a long time the only game in town was a set of programs called CVS (the Concurrent Versions System). CVS was cool in its time but had a number of problems, so instead of improving it beyond all recognition, a team of developers created a beefed-up, slightly more sophisticated replacement called Subversion.

With CVS and Subversion, you download (“check out”) code from a centralized repository (a database system on a server somewhere) that everybody checks their code into and out of. The repository is a centralized place for development. This model does have some shortcomings, though none that are completely damning; it’s still good for many types of projects.

As an alternative, the Linux kernel developers and a few others developed distributed version control systems, where each developer has a local “repository” where he or she checks in changes to code. To communicate changes to others, or get theirs, you “push” and “pull” changes, respectively. This has a number of advantages, but again it’s not a panacea. It’s good for some projects, whereas Subversion is better for others.
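To make the difference concrete, here is roughly what each workflow looks like on the command line (the server names and URLs are placeholders):

joel@chondestes: ~ > svn checkout http://example.org/svn/project/trunk project
joel@chondestes: ~/project > svn commit -m "Fix a bug"   # goes straight to the shared repository

joel@chondestes: ~ > hg clone ssh://example.org/project
joel@chondestes: ~/project > hg commit -m "Fix a bug"    # recorded only in my local repository
joel@chondestes: ~/project > hg push                     # now everyone else can get it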

For the kind of programming I do, Subversion is my choice. Also for my homepage, I use Subversion because the directory structure of my webpages is very important. However, for my configuration files and a writing project that I expect is years away from publication, I use Mercurial.

The unfortunate thing about distributed version control is that it’s a fad, and therefore everybody’s raving about it and dissing the perfectly good system of centralized version control. I think Mercurial and git are better than a fad, but I don’t think they solve every problem the way some people seem to think they do; those people have started referring to decentralized systems as “modern.”

About the author

Joel Adamson is a graduate student in evolutionary biology; he uses Subversion and Mercurial for managing his projects.
