Archive

Archive for the ‘Programming’ Category

What no one has told me about programming

November 15, 2011 Leave a comment

I had another interesting breakthrough yesterday in how I think about programming, or rather about creating applications using programming. Over the past couple of years of building an application that seemed simple at the outset (a simple number-crunching program!), I've learned that the really hard parts of programming are not the parts people typically write about. The really challenging parts are obvious once you hit them, yet nobody talks about them, perhaps because they are so challenging. This brings me to another question: how should I handle those really hard parts?
Yesterday I toyed again with the idea of learning C++, and decided against it yet again. I'd heard that C++ has a number of tools that are good for numerical programs, like vectors, and I recently heard of a new matrix library called Eigen. However, I've avoided C++ because I still believe object-oriented programming is one of those bad habits people pick up in programming classes, and C++ doesn't seem to offer any advantages over C. C is good; I mean C is The Right Thing.

There was still something else nagging me, however. The books are full of trivial examples that you could do easily with a pocket calculator, and those examples don't match up with the way the language is designed. Even books on Haskell don't seem to say as much as they should about what's really important in designing an application. That was the realization: what you're supposed to do with a programming language is build an application. The "program" part (that is, the algorithm) is really immaterial. After exploring many programming languages, I have found that, with few exceptions, they barely differ in what they offer for implementing algorithms. You can write the same algorithm in almost any language and have it perform pretty well on most hardware these days. So what's missing? What are all the manuals full of? Why does every programming language have a preoccupation with strings?

Let me use an analogy. I play the banjo: I took over a year of lessons, read tons of books, and have probably spent over 3,000 hours playing and practicing. Even after getting into a really good band, having great people to jam with, and practicing really well, there was still something about playing that was terribly difficult. I just kept saying "I don't know what to play," or "I can't make the notes fit there!" After I started graduate school and my second son was born, I needed to shift back to listening and, if playing at all, playing a quieter instrument. So I started doing things on a guitar that I'd never done with my banjo: playing scales, picking out melodies, and listening very carefully to my favorite guitar players. Listening to Steve Stevens, Jerry Garcia, David Gilmour and Kurt Cobain, I noticed something: these guys don't play notes, they play phrases.

Why had absolutely no one mentioned playing phrases to me? Was I not listening? Did no one just say "Melodies, counter-melodies, rhythms, etc., i.e. music (dude!) is composed of phrases. You can construct phrases in many ways, but the key is punctuation"? When I learned to play the banjo, I learned the punctuation marks (licks). I learned how to move my fingers, and I learned chord formations. But I never learned that the fundamental thing about music is phrasing. After I figured this out, my brother told me how a famous drummer sat him down at a workshop, pointed his finger and said "One thing is important: phrasing." Luckily this was when my brother was fifteen. Since I'm not a pro like him, I can understand why I didn't get that opportunity, but still, come on! This is hugely important. Why did nobody mention it?

And why has nobody mentioned, in any programming book that I've ever found, that the crucial thing — the hard thing — about designing a program is the user interface? There are books about user interfaces, certainly, but they are concerned with the superficialities of user interfaces, like what color the frame should be. Who cares? The difficult part is deciding how your program should interact with its user. Eric Raymond does spend a whole chapter on this, but he doesn't start with it. I'd like to read a book that starts with "You can figure out all that stuff about your algorithms: you have the equations, you have the data structures, you know what it's going to do; spend time thinking about how a user would get it to do that well."

So my realization yesterday is that the reason the C standard library is full of string functions, the reason Lisp programmers are so concerned with reading program text, and the reason there are so many programming languages and libraries and plugins is that the really hard part is between the user and the algorithm. My inclination is to say that the simplest interface is best. The simplest interface would be "stick something in and see what comes out." That's called Unix. Even in Unix you can't just do that: you have to mediate somehow between the algorithm, which lives in a world of numbers, and the user, who lives in a world of text. This is easiest on Unix, but it's still not easy.
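To make that concrete, here is a minimal sketch of the kind of interface I mean (a toy illustration, not any particular program of mine): a Unix-style filter that reads numbers from standard input, runs them through the "algorithm," and writes numbers back to standard output as text.

#include <stdio.h>

int main (void)
{
  double x;

  /* The algorithm lives in a world of numbers; scanf and printf do
     the mediating with the user's world of text. */
  while (scanf ("%lf", &x) == 1)
    printf ("%g\n", x * x);

  return 0;
}

Hook something like that into a pipeline or a Makefile and it composes with everything else on the system; the hard part is still deciding what should go in and what should come out.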

There are other schools of thought: your user interface should be a pane full of buttons and pretty colors to dazzle your user into thinking they're doing something useful, or a monolithic shell that does everything with the computer. I don't really buy either of those, because I know how to use stream editing and Make to tie things together. However, sometimes I need a program that I don't have to re-run all the time. I would like something in between: something where I can run a simulation, look at the results, then tweak it a little and run it again, then set it up in batch mode to produce a huge pile of results that I can analyze. There's no reason that all has to be in one huge program; it could be several, but the point is that the algorithm contained in there would be the same for all those steps. There are environments like this, such as R, Octave and Scilab. However, I don't like programming in any of their languages. Maybe I can come to like it, since they make the hard parts easy.

So the question I should ask with my next program is "How do I write a language for running a simulation?"


Should I learn programming? The case for Unix and Emacs in everyday life

June 28, 2011 2 comments

Most people think “programming is for programmers,” and by “programmers” they mean people who earn a living writing software, i.e. “end-user” software: software that people will buy, or that will be used in some big company. However, recently I’ve overheard a lot of talk from people in the business world about what those large companies do, and much of it sounds like it could be done by simple computer programs. The problem is that people don’t learn programming, nor do they learn to think of their problems as amenable to programming. I surmise that for most people, a programming solution doesn’t even enter into their thinking.

At a recent breakfast conversation, my brother told me that at his company most of the problems that come up result from people not thinking of something unless a notification pops up on their computer screens and asks them. Even if they know there's a problem, they won't do anything about it if they don't see it right there in front of their faces. They won't even get up and walk five feet over to the guy in charge of that problem to ask him. These people and their tasks could be replaced with simple programs. He also told me that the corporation he works for uses none of the operations research or systems design theory that he learned in business school. Everything is just left up to guessing at the best solution and sticking with it for years, regardless of how much it costs or what the alternatives are.

I also sat next to some people in the airport who were using Excel and mentioned SAP (which my brother tells me is basically a money-milking machine; the companies who buy it are the cows). One of these people said her current project was "organizing [inaudible] into categories based on characteristics, including horsepower…they weren't labeled with horsepower." She was doing it "by hand." One of my missions in my last job, and now in graduate school, has been to intercede whenever I hear the phrase "by hand." We have computers. "By hand" should be a thing of the past. This young woman apparently didn't think of her task algorithmically. Why would she, when it's unlikely any of her education included the word "algorithm"?

These patterns highlight the current failings of commercial computing. Commercial computing has one goal: sell computers. For most of the history of computing this meant selling hardware, but now people mostly see it as selling software. Commercial computing's current goal is to sell people software as if it were hardware and then walk away, wagging its finger when the customer comes back complaining that it doesn't work. Eric Raymond calls this the "manufacturing delusion." Programs aren't truly manufactured because they have essentially zero marginal cost (it costs about as much to make a billion copies of a program as it does to make one copy). Commercial computing also focuses on monolithic hardware and software, i.e. programs that try to do everything the user might need, and on funneling everyone's work through that one program. That doesn't work.

Academic computing, on the other hand, has the perspective that if something doesn't work the way you need it to, you rewire it, combine it with something else, or build onto it so that it will accomplish a specific task. People literally rewired computers up until about twenty-five years ago, when it became cheaper to buy a new machine (if anyone can correct me on that date, please let me know). Similarly for software: if the software you have doesn't do the job you need, you write the software that does. If you have several programs that decompose the problem, you tie them together into a workflow. If you have a specific problem, even one that you will only do once, and it might take you one day to program — potentially saving you a week of "by hand" — then you write a program for it. Then if you ever have to do it again, you already have a program. You might also have a new problem that is very similar, so you broaden the scope of the previous program. Recently, for example, I wrote a script that inserted copyright notices with the proper licenses into a huge number of files: I had to insert the right notice, either for the GPL or All Rights Reserved, based on the content of those files. On the other hand, if you have a program that generally does what you want, e.g. edits text, and you want it to do something specific, you extend that program to do what you need.

Basically I see a dichotomy between two kinds of thinking: one says certain people should make money, and problems should be solved only to the extent that solving them makes those people a lot of money; the other says we should actually solve problems. If you disagree that this dichotomy exists, let me know and I'll show you academic computing in action.

The solution to all these problems is teaching people to think algorithmically. Algorithmic thinking is inherent in the use of certain software, and therefore that software should be used to teach algorithmic thinking. Teaching people algorithmic thinking using Excel is fine, but Excel is not free software, and thus should not be considered "available" to anyone. Teaching these skills in non-computer classes will get the point across that people will be able to apply them in any job. Teaching this to high school students will give them the skills they need to streamline their work: they will be able to do more work, do the work of more people, communicate better and think through problems instead of just sitting there. People will also know when someone else is bullshitting them, trying to sell them something they don't need. Make no mistake, I'm not saying that teaching programming will get rid of laziness, but it might make laziness a lot harder to tolerate. If you know you can replace that lazy person with a very small shell script, then where will the lazy people work?

If you teach biology, or any field that is not “computer science,” then I urge you to start teaching your students to handle their problems algorithmically. Teach them programming! I am going to try to create a project to teach this to undergraduate students. I have in mind a Scheme interpreter tied to a graphics engine, or perhaps teaching people using R, since it has graphics included. Arrgh…Scheme is just so much prettier. Teaching them the crucial ideas behind Unix and Emacs will go a long way. Unix thinking is workflow thinking. Unix (which most often these days is actually GNU/Linux) really shines when you take several programs and link them together, each doing its task to accomplish a larger goal. Emacs thinking is extension-oriented thinking. Both are forms of algorithmic thinking.

If you are a scientist, then stop procrastinating and learn a programming language. To be successful you will have to learn how to program a computer for specific tasks at some point in your career. Every scientist I know spends a huge amount of time programming. Whenever I visit my grad student friends, their shelves and desks are littered with books on Perl, MySQL, Python, R and Ruby. I suggest learning Scheme, but if you have people around you programming in Python, then go for it. I also suggest learning the basics of writing shell scripts: a lot of people use Perl when they should use shell scripts. Learn to use awk, sed and grep and you will be impressed with what you can do. The chapters of Linux in a Nutshell should be enough to get you going, and Classic Shell Scripting is an excellent book on the subject. Use Emacs and you'll get a taste of just how many things "text editing" can be.

Every profession today is highly data-oriented. Anybody hoping to gain employment in any profession will benefit from this sort of learning. Whether people go into farming, business, science or anything else, it will help them succeed, for several reasons. There are the obvious benefits of getting more work done, but there are also social reasons we should teach people algorithmic thinking. The biggest social reason is that teaching algorithmic thinking removes the divide between "customers" and "programmers." This is why it bothers me to hear "open source" commentators constantly referring to "enterprise" and "customers" and "consumers." If a farmer graduates from high school knowing that he can program his way to more efficient land use, then he will not be at the mercy of someone who wants to sell him a box containing the supposed secret. Again, you can teach algorithmic thinking using Excel, but then you already have the divide between Microsoft and the user. With free software that divide just doesn't exist.

Did Richard Stallman Invent the eBook?

February 2, 2011 4 comments

Lately you've heard me say that my feelings toward laptops have changed. Since getting my new laptop, some of my feelings toward reading have changed as well. I love paper, the look of printed letters, and well-typeset text on the page. That won't change. However, I noticed that most of the texts I read (journal articles) I can read in online versions without missing much of the content. I've started exclusively reading current articles either online or in PDF form on my laptop, and I'm glad to be conserving paper.

One thing that hasn't changed — one kind of text I've always read on my computer — is GNU manuals. GNU manuals are written in an ingenious format called Texinfo, which enables the author to produce appropriate output for several different ways of reading: PDF, HTML and the online info format, most easily read in Emacs. If you're running GNU/Linux, you will find tons of manuals in this format by typing "info" into a terminal. Within Emacs, type "F1 i" (that's press and release F1, then press and release 'i'). Either way you should get a menu of topics, each covered by its own info manual.

Since deciding on Sunday that my programming goal should be better programming, rather than learning a new language, I started reading about advanced topics in Unix/GNU programming: processes, pipes, IPC, etc. I was thinking "Man, I need to get that classic book Advanced Programming in the Unix Environment." Unfortunately that book is HUGE; I wouldn't carry it around with me, as reducing back strain is currently high on my agenda. It also dates from 1992 (around the time I first used Unix), and some things have changed since then. Most of what the book covers has not changed, but most texts show their age in one way or another. Most Unix texts from that era look like casualties of the Unix wars, with more than half their content explaining incompatibilities between different versions of Unix and the pitfalls of writing portable programs.

So of course I went for (what I thought was) the next best thing, something I already had and could carry around with me at no extra weight: the GNU C Library Manual (in the info menu, type "m", then enter "libc" and hit Enter). I had been reading about the basics of IPC and processes off and on for a while, and there were things that I just didn't get about them. I get them now, having read about them in the Libc manual. For example, I didn't understand that a child process and its parent process receive different return values from fork(); the Libc manual spells this out so clearly I wonder why I didn't think of it before. I didn't get how the child process and parent process's distinct code portions were triggered, but that was only because I hadn't read the f'ing manual.
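Here's a minimal sketch of the point the manual makes (my own toy example, not the manual's): fork() returns twice, once in each process, and the return value is how each process finds out which one it is.

#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main (void)
{
  pid_t pid = fork ();

  if (pid == 0)
    printf ("child: fork returned 0\n");            /* child's code path */
  else if (pid > 0)
    printf ("parent: fork returned the child's pid, %ld\n",
            (long) pid);                            /* parent's code path */
  else
    perror ("fork");                                /* fork failed */

  return 0;
}

Both branches sit in the same program text; the return value is what triggers one or the other.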

These manuals don't read like terse manpages; they read like manuals that you would actually want to read. The Libc manual and the Emacs manual both repeatedly surprise me. Emacs users often joke about learning new "features" of Emacs that have actually existed for decades. Whenever I am frustrated with Emacs in some way, I'll usually find a workaround, and then months later I'll be reading the manual for some unrelated reason and find a solution to my problem. It was right there the whole time! You can imagine how empowering reading these manuals is.

The weird thing is that although I've repeatedly had this experience with GNU manuals, they aren't the first thing I go to. I need to change that habit. We often treat reference materials as though we shouldn't sit and read them: we browse through them until we find what we need and then put them away. That's what manpages are for. GNU manuals are different. They actually tell you what's going on and what to do, and they are great for beginning programmers. I'm not going to waste my time going to the library; I'm going to read the Libc manual.

eBooks, rms and DRM

Recently ebooks outsold paperbacks on Amazon.com. People may be treating this as the final sign that the death of paper is coming, but I, for one, don't: Amazon has been set up as an ebook store from the very beginning, i.e. they're on the friggin' web, so it's obvious they would try to compete by delivering their content as quickly and conveniently as possible. I've always seen it as a goal of theirs, although I think back in the 90s most of us thought ebooks would just be webpages, rather than something you'd actually carry around; i.e. we thought they would be different enough from regular books to combat the problems of regular books.

Amazon, however, has a different idea: they and their competitors would like you to think of ebooks as the same as regular books, just lighter and easier to pay for. Their ridiculous idea of "e-lending" is so stupidly backward that I laughed out loud when I heard of it:

They have managed to recreate, in the palm of a reader's hand, the thrill of tracking down a call number deep in the library stacks only to find its spot occupied by empty space. With a clever arrangement of bytes, they have enabled users to experience the equivalent of being without their books while their friends' dogs chew on them. Maybe if we're lucky, next they'll implement the feature that allows two electronic pages to be stuck together as if by gum, or that translates coffee spilled on the screen into equivalent damage to the digital pages. — John Sullivan, "Lending: A solved problem"

They've done this with DRM, or "Digital Restrictions Management." Its practitioners call it "Digital Rights Management," which I think is sinister enough: do you want your rights digitally managed? They've managed to make ebooks just as problematic as paper books, and why?

The question of their motives becomes much clearer when we consider that Richard Stallman not only created great free books about computing, like the Emacs manual and the GNU Libc manual, he also helped create the best ebook reader out there (info), all with the goal of facilitating user freedom. The choice is yours: do you want ebooks to be as inconvenient as regular books? Or would you rather have convenient, indexed, hyperlinked text written by people who care about you and your freedom? The choice is clear to me.

Some Further Reading

The history is about as interesting as the books themselves. Some people think that ebooks (or the concept) are new, just as they do with tablet computers and touch screens. Both touch screens and "ebooks" are about as old as computing itself. If you're skeptical about that, think of how simple an idea it is: many, many books that you can carry around in your pocket at no additional weight. "Hey, let's use computers" is a pretty simple solution. Computers were almost built for the task. The only new idea is making ebooks as inconvenient as paper books. I'm reminded of Douglas Adams' explanation that if a hitch-hiker wanted to carry a paper copy of The Hitchhiker's Guide to the Galaxy (a text that bears a strange resemblance to Wikipedia), he would have to carry several enormous buildings with him.

Categories: Freedom, Programming, Tools

Which Programming Language Should I Learn Next?

January 30, 2011 10 comments

Fairly often I see people asking in online communities "which programming language should I learn next?" I have asked this question often recently. I want to learn something new; I always enjoy learning new things, and I seek them out on a moment-to-moment basis. It's what I do. I'm a scientist, and my current occupation (what I put on forms) is "student," but I think of myself as a student in a much more holistic way. Programming is a large part of what I do: in a way, it's also been my "occupation" for at least the past five years. I programmed in Stata for my job before I came to graduate school, and now I use C, Scheme, bash, Emacs Lisp and a few other "languages" every day.

I feel like I reached a plateau of sorts a couple of years ago, after studying languages at a rate of about two per month for at least two years. By that I mean I studied Emacs Lisp and Python for a month, then things seemed to shift to Scheme and R, or Perl and Common Lisp, for the next month. I think I intensely studied about ten languages over three years, including various Unix shells and a few specialty languages (like Mathematica: yuck!). There are still a whole bunch of languages that I would say I'm conversant in, and some that I use as a fairly essential part of my work and might use better if I knew them better, like TeX. As my graduate school research picked up, however, I settled on C and Scheme as my main languages.

I found this plateau somewhat dismaying: as I said, I always want to learn new things, and there seem to be really cool languages out there that I could learn. For about two years I've been casually reading about, and doing minor coding in, the ML family and Haskell. However, in each case I've found that there are reasons I shouldn't bother. Here are my conclusions:

  1. My needs as a programmer are different from the vast majority of people who put the title of this posting into Google
  2. Most programs people want to write are quite different from the ones that I want to write
  3. I really like the Unix workflow

Other Programmers Learn For Jobs

The most common answer I see to “What programming language…?” is “You should know C, C++, Lisp, Python, Javascript,…so that when you go to your interview…” That’s when I stop reading. The authors assume (and often rightly so) that I’m asking because I’m looking for a job. I’m not, as it turns out, but I’m sure a lot of other people are. I’m a scientist, I have a job (in a way), and I wouldn’t wish a corporate job on my worst enemy. As I said, I had a job that had a large programming component, but the interviewers didn’t really care that I didn’t have experience specifically with that language (Stata). What they cared about was whether I was good at learning. In my mind that should always be more important and I will always hold these people in high esteem for doing things that way. I remember the person who became my closest colleague interviewing me and saying “Stata’s pretty easy, it has a very simple syntax, you should be able to pick it up pretty fast.” He was right.

In my discussion of object-oriented programming I quite often got the comment that "You need to know object-oriented programming because it controls complexity, and is therefore essential in corporate programming environments, so if you want a job…" End of discussion. Don't believe the hype. If you want such a job, then by all means, learn Java. If you're more like me, and you realize that programming is not the hardest part of most jobs, then focus on those other parts, and get good at using whichever programming paradigm is best suited to the task at hand. Don't obsess about which programming paradigm is most suited to having people fire you easily.

Other Programmers Write Monolithic, Interactive Programs

The programming task I most often face is numerical analysis, the oldest programming task in the universe — the one that pre-dates computers. I conclude that the source of my confusion with many programming texts and their explanations is that other programmers are interested in (or at least authors are trying to interest them in) designing large, monolithic, interactive programs. In my mind there are only a few good examples of such programs, and they are already written: Emacs, the shell, window managers and file managers, and a web browser (which is really a noninteractive program dressed up as an interactive one). I'm not going to write one of those. It seems to me that most people learning Haskell, for example, are coming from writing monolithic programs in C++ or Java, and probably on Microsoft Windows.

What's particularly funny to me about this is that this split goes back a couple of decades to the "Worse is better" controversy of the early nineties. Unix's detractors generally believed in writing monolithic programs, and their favorite development environments were eclipsed by Unix and the hardware it came with (workstations). I guess Microsoft and Apple were able to steer people away from Unix once again; now people come to Unix-like systems from environments where they are used to building these monolithic programs, and they never find out that there is another way to use computers. I started using Unix when I was thirteen: I guess this means I'm old. I'd rather be an old Unix user than a young anything.

There are a few other reasons I’m not writing such big programs: an interactive environment for numerical operations only makes sense up to a point. It’s great for experimenting. However, even in Stata I ended up writing scripts, in a programmatic style, and executing them in batch mode, carefully writing the important results to output, and saving graphs as files. Either those programs have been written and are awesome, or I don’t need monolithic, interactive programs to do the things I’m doing. I have a different perspective on how people should use computers.

Unix Philosophy Works For Me

I often read that the Unix philosophy comes down to "Do one thing and do it well." Other people seem to want to start a program, work with just that program for a long time, and then do something else using a different huge, monolithic program. I think that's a waste of time. It sounds extremely limiting, especially when I have a whole bunch of tools available to integrate my work into a common whole. I often read the derisive aphorism "When all you've got is a hammer, everything starts to look like a nail." That remark may be wise in other contexts, but it takes on the opposite meaning when you're talking about Unix tools. Yes, when you have Make, everything starts to look like targets and dependencies. When you have sed and awk, everything becomes text processing.

Consequently, all I need to make me happy is an editor. I use Emacs, which becomes a whole "working environment," but I could get by using vi with shell access (however much it hurts me to say that). Everything becomes editing when you have the idea that to use a computer is to write programs, and you know which tools can glue those programs together. Then all you need is a single command (e.g. "make") to get the ball rolling. Given this perspective, learning new languages just becomes a matter of fitting them into an existing workflow. I generally think of programs as "input-output," and it's okay if that input is a program, but it shouldn't try to be its own environment and supersede Unix.

The language that fits in best with the Unix philosophy and GNU tools is C. Not only does C fit in; the GNU system is built around it, including a huge number of tools that make using C really, really easy. Automake, Autoconf and the other autotools mean that all I have to do is write a little program, write a little Makefile.am, use autoscan and a few other things, and "make" builds me a program. Writing programs for simple number-crunching also means that most of the problems people associate with C are not my problems. I don't have memory leaks in my programs; they just don't happen. Therefore I don't really need to care about having a language with garbage collection. Everybody's screaming about making programs faster with parallel execution, but that's for web servers, databases that get called by web servers, and other things that I'm not writing. C is extremely fast for number crunching, and I can run parallel jobs using "make -j" or GNU Parallel. C is just fine.

Am I the only one out there interested in using something other than Fortran for number-crunching? Probably yes, but I can use C. I don’t need Haskell. I like the mathematical cleanliness of Haskell, but that doesn’t matter when I already know a functional language (Scheme), can already write bloody-fast numerical algorithms in C, and can run parallel jobs with Make. I read a lot of stuff about writing parallel programs and other features of supposedly “modern” languages, but they are almost always things important for writing web servers or GUIs, things that I’m not doing.

Languages

I’m still tempted to learn certain languages: here’s a run-down of why.

C++

C++ is still tantalizing because so many people know it. In addition, it seems to have a very mature set of standard libraries. However, especially when I hear people say things like "Many C++ programmers write in C, but just don't know it," it seems still more unnecessary. C++ has a large community and GNU development tools, and it seems like I'd have to change very little about how I do my work in order to learn it. All I would have to learn is the language.

D

D is an interesting language because it includes well-implemented features of some other language platforms, like garbage collection. D seems basically like C with some added-on features and the ability to extend its programming paradigms. I haven't taken the steps to see what kind of development tools are available for D, so I haven't given it a full evaluation yet. Unfortunately, it doesn't seem to have a large enough user community to fit fully in with GNU yet, which is a critical factor.

Haskell

The big thing Haskell has going for it is that Fedora has a full complement of development tools to form a Haskell environment. Haskell has just as huge a network of tools as Lisp (or close enough to look that way), so that would make it easy to get going. I think the problems with Haskell are that it seems too difficult to get going with, it seeks to be its own environment (i.e. it doesn't fit in with my working environment), it seems suited to doing things other than what I would do with it, and I don't need to learn it. I would really like to learn it, but all these things just add up to me saying "I don't have the time." That's fewer words than all that other stuff I said.

Javascript

I keep thinking I should learn Javascript. I feel like if I know Emacs Lisp, and I use Firefox as much as I use Emacs, I should know a scripting language for Firefox or be able to add the features that I need. However, all the learning materials for Javascript seem job-focused, aimed at doing things I wouldn't be interested in.

What Makes a Language Usable or Worth Learning?

This is a common question I see people discuss: most often I've seen it in the "Common Lisp vs. Scheme" discussions that crop up in Lisp forums. The question there seems directed at why Common Lisp has been so much more popular than Scheme. That's a dubious premise, seeing that many people learn Scheme in college CS classes; at least that's my impression (as I said, I've never taken such a class). The real premise of the question is "Why does Common Lisp have so many libraries, whereas Scheme makes you recreate format?" Paul Graham's creation of Arc was driven by this contention: people say "If you want to actually get work done, use Common Lisp," but Scheme is so cool, right? I have come to a different question: "How does this language fit into my workflow?" This was also a critical part of choosing a Scheme implementation. There are tons of them, but they are all designed for slightly different purposes, or they are someone's proof-of-concept compiler. Guile is a great example of the reasons I would put time into learning to use a particular language.

I find the relevant factors in choosing to spend time with a language are (a) fitting in with the Unix workflow/mindset, (b) a good community (hopefully aligned with GNU), (c) libraries, utilities and functions that have the features I expect, (d) development and workflow tools, and (e) good learning materials. Some languages or implementations have all of these qualities, and some have only a few. The best is obviously C, which has all of them. Guile is the best Scheme implementation because it has all of them too. Guile even integrates with C; I think my next big project will be C with Guile fully integrated. Python has a great community, but it's quite distinct from the GNU community, the community I prefer. I'm less likely to find a fellow Unix-minded friend in the Python community. Haskell has good support on Fedora, but I haven't found a good text for learning it. Pike looks thoroughly Unixy in its syntax, but its development materials, and even its interactive environment, are not available in Fedora repositories. I've found the tools that work for me, and I suppose the best thing is to learn how to use them better.
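As a small taste of that C-plus-Guile idea, here is a minimal sketch of embedding Guile in a C program with libguile (a toy example, not my actual project): the C side hosts the interpreter, hands it a string of Scheme, and pulls the result back into C.

#include <stdio.h>
#include <libguile.h>

int main (void)
{
  /* Turn this C program into a Guile host. */
  scm_init_guile ();

  /* Evaluate some Scheme and convert the result to a C int. */
  SCM result = scm_c_eval_string ("(apply + (map (lambda (x) (* x x)) '(1 2 3 4)))");
  printf ("Scheme says: %d\n", scm_to_int (result));

  return 0;
}

Compile it with something like "gcc guile-test.c $(pkg-config --cflags --libs guile-2.0)", adjusting the version to whatever your distribution ships.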

Categories: Programming

Simpleio on Launchpad

September 15, 2010 Leave a comment

Yes, I have repeatedly decried distributed version control, but now I am hosting a library project called Simpleio on Launchpad.net:

Simpleio provides an easy-to-use I/O API for batch scientific applications.

Simpleio seeks to provide a simple interface for configuration files and writing results. Project goals are to keep the interface clean and simple.

Contributors can help by finding and fixing bugs. Examples of usage are greatly appreciated!

I have made a snapshot available for download. We still need to write a manual and there are several open bugs, so jump in and help! Of course Simpleio is Free Software, released under GPLv3.

Simpleio now on Launchpad!

September 3, 2010 2 comments

I’ve developed an input-output library for scientific applications that is now available on Launchpad. Take a look, download it, play with it, see what you can do with it.

As of now the only documentation is the source code, but I am working on a manual. Simpleio lets you write simple configuration files in which you can specify ranges for variables, define variables, and supply input values for calculations. You can also use its writing capabilities to write large buffers of output at the desired time.

Programmer Smart, C Smarter

July 16, 2010 2 comments

For the second or third time in the course of a very large programming project I'm working on, I have discovered that a big runtime problem I was having was due not to program or computational complexity but to something very basically stupid that I did. I've been programming a population genetic simulation that is, by its nature and rationale, very complex. One of the biggest problems in the history of population genetics is that programming multilocus (and in some cases, multi-allele) problems results in hugely costly use of memory: with n biallelic loci there are 2^n (haploid) genotypes to keep track of, so the number of variables grows exponentially. Programs that seek to model systems with non-free recombination (recombination rates other than 0.5) must hold a huge number of variables.

Part of this problem I've solved by deploying haploid. However, the whole point of programming that algorithm was so that I could code even more complex things, like populations with age structure, the project I'm talking about now. But in trying to get this program to run I've had a string of problems that looked like the program was running out of resources. I have been using OpenMP to speed up the computations, but it seemed that I was running into either a race condition or some bizarre side-effect of parallelism where threads were waiting for threads that were otherwise blocked. In other words, I'd run my program, on my quad-core workstation or on the university's big cluster, and it would just stop after a while, despite the processors continuing to be totally occupied. A few times I had to actually turn my machine off; setting OMP_NUM_THREADS didn't seem to be helping either. I was really puzzled.
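For readers who haven't seen OpenMP, here is a minimal sketch of the kind of parallel loop I mean (the array name and sizes are made up for illustration; this is not code from my simulation): you mark a for loop with a pragma, the iterations get divided among threads, and the reduction clause keeps the threads from racing on the shared sum.

#include <stdio.h>
#include <omp.h>

#define NGENO 64            /* e.g. 2^6 haploid genotypes for 6 loci */

int main (void)
{
  double freq[NGENO];
  double total = 0.0;
  int i;

  for (i = 0; i < NGENO; i++)
    freq[i] = 1.0 / NGENO;

  /* Each thread sums part of the array; OpenMP combines the
     partial sums into `total' when the loop ends. */
#pragma omp parallel for reduction(+:total)
  for (i = 0; i < NGENO; i++)
    total += freq[i];

  printf ("total frequency = %g (up to %d threads)\n",
          total, omp_get_max_threads ());
  return 0;
}

Compile with "gcc -fopenmp"; OMP_NUM_THREADS controls how many threads the loop actually gets.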

Then I met with my advisor, and she suggested that instead of running a huge number of initial values, I should just be using one initial value (0.1) for one of my variables. I did this and got the same hang-up behavior from the program. By this point it was totally irritating me, especially because my much-wiser advisor (she's my sensei) looked at me with furrowed brow and said "It shouldn't be taking that long."

So I ran the program in gdb (the GNU Debugger). This had screwed me before: it turned out gdb had a bug, and I had spent all day chasing what was not a bug. When I ran the program it looked like the pointer I passed to a function was not the pointer evaluated when gdb entered the function. Then I realized that not only had gcc failed to produce good debugging code, I had also given the program incorrect input values.

In this case I came upon something else weird: a check that should have stopped the iteration instead evaluated to false. I was really confused until I looked at the types of the variables:

#include <errno.h>   /* ERANGE comes from here */

_Bool keep_going_p;

keep_going_p = ... ; /* set the value */

if (keep_going_p == ERANGE)
  raise hell;
else
  return library_book;

See the problem? ERANGE is an integer. Booleans are stored as integers (i.e. you can assign an int to a _Bool), but they only hold true or false: 0 is false and anything else becomes true, i.e. 1. So even though keep_going_p was in fact assigned ERANGE, it only stores a 1, and the comparison can never succeed. I changed keep_going_p to an int and the program now completes (with up to 6 loci, that's 64 genotypes to iterate over, for up to a billion iterations on four processors) in less than five minutes.

C Respects the Programmer’s Intellect

So one explanation for all this freedom to make stupid mistakes is that you just have to be a really smart programmer to use a language like C. C is supposed to be good for creating fast programs that run at the system level, possibly in very resource-sparse environments (like a PDP-11). In other words, C favors the creation of good compilers. The typical example is that arrays can run over their boundaries without provoking a compiler error, resulting only in (sometimes) hard-to-pin-down runtime errors. In other words, you won't know that you programmed the wrong number of elements in the array until you run the program and it returns garbage. That can take a long time, considering that the part of the program containing the bug might not be used until a few years after the program is released to the public.
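For instance, the following compiles without a peep, even though the loop writes one element past the end of the array (a deliberately broken toy example, obviously not from any real project):

#include <stdio.h>

int main (void)
{
  int squares[4];
  int i;

  /* The loop runs i = 0..4, but squares only has elements 0..3.
     Writing squares[4] is undefined behavior that the compiler
     happily accepts and that may only bite at run time, much later. */
  for (i = 0; i <= 4; i++)
    squares[i] = i * i;

  printf ("%d\n", squares[3]);
  return 0;
}

Whether it crashes, silently corrupts a neighboring variable, or appears to work depends entirely on what happens to sit next to the array in memory.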

However, what that means is that you have to know how the compiler deals with things inside the computer; you have to think of the memory of the computer the way the compiler deals with it. This is not a restriction but a liberation, because it forces you to think in terms of how the computer actually works. Not as tightly as assembly language, but still pretty darn close. Thinking that way pushes me toward better algorithms. In other words: I like it.

On top of this, the attitude of C compiler writers fits in with the rest of the Unix philosophy: computer programs should be written for users who see themselves fundamentally as programmers. Those users are essentially people who know as much about the machine as the programmer, and it is completely immoral for the programmer to program in a way that insults the user by presuming the user to be stupid and unsophisticated.

Inevitably, when someone who's an adherent of a different philosophy hears me talking about one of these stupid mistakes, they say I should be using Java, C# or some other garbage-collected or dynamically-typed language. However, that disregards the nature of the stupid mistake, and disregards a basic problem with programming in general: I could have made the same mistake in any language with a type system. If I had done that in Lisp, or Java, or whatever, the compiler still would not have caught it. There's a difference between dynamically-typed languages and strongly-typed languages. Almost every programming language is strongly typed, and that has nothing to do with when those types are decided (compile time versus run time). And none of the stupid mistakes I have made have involved dynamic allocation, so garbage collection wouldn't matter either. People often say we should all be using interpreted languages like Matlab, which I find objectionable on both philosophical (programming) grounds and moral grounds.

There’s just no beating the right tool for the job, which in the case of complex simulations that need to be portable and freely distributable, is C.

Categories: Programming, Research, Technology