Sadly, many students come to college knowing only the minimum they need to pass certain exams, and that does not reflect genuine interest. Most discussions I’ve had about instruction tend to end up with the conclusion that our teaching style would be totally different if we didn’t have to trick people into getting interested in classes they are taking. Today I’m asking the question: what are all those students doing there in the first place? If you are a student, you need to ask yourself if you’re in the right place. You might be in the wrong major. You might be in the wrong university. And any university might not be the right place for you at this time in your life.
I’d like to explore the problem of majoring in science from two perspectives, that of students and that of instructors. This is not really a how-to or algorithm for choosing a major. However, if you are a student, there are some things I think you should think about before going to college, or before declaring a major. These are problems that go beyond any individual student, and they are symptomatic of wider societal issues. If you are an instructor, hopefully we can begin a dialogue about instruction style and advice to students. As an instructor I’ve seen that advice based on competitive social values sometimes gives students harmful ideas about why they are in college and how to get the most out of it.
I find it interesting to see what students blame for their lack of success in particular majors. In Talking about Leaving, anthropologists Elaine Seymour and Nancy Hewitt relate the narrative of a young woman in a basic electrical engineering class. She expressed anxiety about how her (mostly male) peers seemed way ahead of her in a basic lab. Once she had constructed the beginning parts of a circuit, her (male) TA came over, said “Looks good, just wire it up,” and walked away. She, of course, didn’t know what he was talking about, and she changed majors. She blamed this on how her male classmates had been working on cars in the garage with their dads for the past decade. Since she didn’t have that experience, on account of being female, in her view she just couldn’t keep up. I want to be careful about something here: the authors of the study did not blame this episode on gender disparities, but they did ask researchers to pay attention to perceptions of gender disparities.
My question about this narrative is “If you know that you need a decade of experience messing around with hobby electronics to be successful as an electrical engineering major, and you know you don’t have that experience, why major in electrical engineering?” My basic suggestion is don’t major in something you know nothing about. The issue is experience. What’s troubling to me is that people who have no experience in a particular field would, despite knowing that they need that experience to succeed, choose to do it anyway. Who would encourage that kind of thinking, and what would they gain from encouraging people to do things they can’t succeed at?
My suggestion for how to maximize your learning if you are a student, and reduce the problem of uninterested students if you are an instructor, is for each student to be an interested student. This might sound like something that you don’t choose. “I’m either one of those smart people at the front asking questions all the time or I’m not” might seem reasonable. However, I ask you to consider that you did (at one stage) choose to be in that classroom. You chose a major, field of study or a particular track. If you’re not one of those interested people at the front, then why not choose a different place to be?
For students, I suggest choosing a major from things you already have experience with. Preferably this would be experience outside of classrooms, perhaps even entirely outside of classrooms. Almost everybody has something that actually interests them, and it’s not always biology or engineering. Do you like to cook? Have you ridden horses? Have you decorated a room? Those are probably things you would be really satisfied studying. My first suggestion is that if it’s not entirely obvious, then write down a list of things you’ve done in your life that you found interesting. Not just stuff you’ve read about, but stuff you’ve actually done: real projects, real challenges that you had to stick with. Find the thing on the list that you already have studied, and then study that on a higher level at a university. Of course, it has to be something that can be studied at a university, and that narrows the choices. There are alternatives to going to college.
If you really haven’t spent time with a hobby of any kind, then there are two alternatives I suggest that allow you to become one of those people at the front of the class. The first is to go to a different kind of university where you can get experience doing something really interesting. Small universities allow students to get hands-on and get started with one-on-one instruction, somewhat in an apprenticeship fashion. I did research at a big university, but I started doing research with the same collaborators in middle school, not after I got to college. If you don’t know what you’re going to do, but you can think of what you would like and it’s something amenable to college, small colleges offer a way for you to get started.
The other suggestion I have if you have limited experience is to avoid college and get experience. Don’t go to college. Get a job at a bakery and learn one-on-one from somebody who is already an expert. I’m not suggesting that you beg your parents for money and go backpacking across Europe. I’m suggesting you get a job. Like music? Start hanging out at a recording studio. No recording studios in your area? Move to Nashville (or Austin, or maybe Portland). You might know somebody who’s a music major and seems to be well-connected. If you dig, my guess is you’ll find that’s how he started, except he started when he was fourteen, not after getting a bachelor’s degree.
Let me give you two examples of people who followed the latter approach. The first is my brother Michael. He could have gone to college. But after high school he moved to Arizona, and then to Italy to work in professional cycling. He started his cycling career when he was fourteen, and at nineteen Europe was the place to take the next step. After a while of seeing the professional cycling world, he decided it wasn’t for him. Coincidentally, he really loved Italian culture and speaking Italian, so he started teaching English. After he did that professionally for a while, he went to work in marketing for his friends’ father. Later he wanted to return to the US and go to college. As I remember it (correct me, bro, if I’m wrong), the only reason he ended up going was that he saw not having a bachelor’s degree as hampering his chances of promotion at a large corporation in the United States. By the time he went to college, he already had (at least) three years of experience in marketing, and had traveled the world doing it as a professional. He finished business school in three years while his wife worked as an Italian instructor at the same university.
The second example is my friend Meagan Chandler who still hasn’t gone to college. I say “still hasn’t” because every now and then she mentions that she might want to transition to a profession where a college degree would actually be valuable. However, she’s been working as an artist, musician, dancer, music and dance teacher for over fifteen years. I would say she’s successful, not because she’s made a ton of money doing it (she hasn’t), but because she has been intentionally living that way, doing what she knows and really cares about. She knows, however, that she’s gotten experience doing other things in the meantime, and some of those things might benefit from a college education. There’s nothing wrong with going to college when you’re forty. During graduate school, it’s people like her that I’ve really gained admiration for.
When people choose a particular path without considering preparation, they pay a price. There is something that every student has spent time preparing for. I often hear instructors talking about “unpreparedness” as if it’s a problem in itself that needs fixing: let’s throw more education at people so that they’re “ready” for college. That doesn’t help people who are bored because they’re doing something that they don’t care about. This also assumes that people really fundamentally need to go to college, and that they will all benefit in some substantial way. Everyone has spent time on something that they really love, and not all of those things are helped by higher education. Some of those things are helped more by hands-on training, finding the right mentor, and just plain years of experience. College will only get you that in certain fields that value certain kinds of intelligence. It’s not for everyone.
I often hear that we need to change our teaching style, re-work the curriculum or take other measures to prevent losing science majors. But I’d like to ask if it is really a loss to lose people from a major that they don’t want. Who does it benefit to have more science majors? The most common appeal to the tragedy of losing science majors cites political calls for another one million scientists by a certain date. None of these arguments make an appeal to personal satisfaction for students or instructors. They all rely on someone’s economic and political goals, or that greed is good (more science = more money = more gooder). I don’t want any students out there to be doing something that they don’t want to just so the USA can beat Finland in science.
So to answer the question in the title: major in science if you already have experience in science. If you’re still in high school and you’re reading this, don’t go out of your way to get experience doing something you don’t want just so you can meet my criteria. Get experience doing what you want to do right now and carry on with that. If that’s something that will benefit from higher education, then go to college. Really question whether the skills that you can get from college will help you become a better chef, or horse trainer or artist.
Recently I went to a planetarium show with my kids. The show was an interesting blend of digital animation and puppet theatre, staged by a local puppet theatre company and animators from Morehead Planetarium and the University of North Carolina. A girl gets lost in the woods, meets a magical old lady, and returns the sacred fire to a dragon who brings spring back. Basically it’s the story of the winter solstice and the changing of the seasons. My kids and I really enjoyed it. The production was high quality, the storytelling was fun, and I think the kids learned something from it.
Then after the show and the credits, a planetarium staff member addressed the audience and said she was going to show us some basic astronomy: a few constellations and the path of the sun at the solstices and equinoxes. Then she said “Of course, it’s not dragons that change the seasons, it’s the tilt of the earth. Sorry to disappoint you.” What she said was perfectly reasonable, but I would like to question the motives of saying that to a room full of five-year-old kids. A colleague of mine, who is also the father of a five-year-old boy, said “Yeah, and there’s no Easter Bunny either!” I don’t think the planetarium staff member was mean-spirited about what she said, but I have some ideas about where she’s coming from in making such a remark, and I think it raises some questions about the intellectual climate of science at this time. Incidentally, just guessing by her age, I think this person knew a lot more about astronomy than she knew about kids, so again, this isn’t personally about her, but about the intellectual environment that we create when we insist on militant scientific positivism in all areas of life.
The question I raise is whether everything needs to be science. Are there kinds of knowing, learning and being in the world that are served better by other enterprises? Does science really have to dominate everything we do? Science is great. It’s not just my job; I really love it, and yes I’d probably be a lot worse off without vaccines and blah blah blah. None of that is at issue. The question is whether that disqualifies the rest of human endeavors. Are other kinds of thinking allowed? Without someone sneaking up behind you and saying “Well, actually the temperature differential between points A and B leads to variation in pressure that…”
Here’s another illustration of the problem. I have read, a few times with my kids, the book A Child’s Introduction to the Night Sky by Michael Driscoll. This is an excellent book about astronomy and I recommend it whether you have kids or not. Children’s books are really great to read: they are packed with information, presented in easy-to-remember ways, and they have all the basic background. I always feel like I’m missing something when I read age-appropriate (I won’t say “adult”) books on technical topics. I especially like this book because in addition to telling kids the typical stuff about the solar system, it tells kids what they can see with a telescope or their own eyes. In other words, it teaches kids how to collect their own data. That’s how I aim to teach science, so I really love seeing it in a children’s book. This book might even be where I got the idea.
The topic comes up late in the book about the history of astronomy, the zodiac and astrology. The author makes the claim that priests and fortune tellers were just as interested in the stars as “early astronomers.” He fails to mention that these were the same people. The occupation “scientist” is a fairly recent invention, and so is the distinction between astrologers and astronomers. Even a paragon of empiricism such as Isaac Newton was a far out mystic by today’s standards (and Wikipedia says he’d be considered a heretic by the standards of his day). The author seems to go to a lot of trouble to make sure kids know that there are scientists and non-scientists. In his defense, he tells the Greek myths the constellations are based on, and explains where the zodiac signs come from and how they are associated with astrology without judging astrology harshly. I believe the author’s motives are totally beneficent. But again I ask why do we tell this sort of thing to kids? What goal does it serve? Who does it serve?
Perhaps we can re-examine those motives and see if they really check out. I’ll use myself as an example. What motivated me to tell The Truth to people for a long time was that I thought people would be happier if they knew The Truth (i.e. my version of it). I thought that if people could accept science then they would see the wonder of the natural world, have an idea of where they came from, and all the things I was excited about. This sounds weird, but I really wanted to help people. I thought “this is my way of helping people, this is my role, this is my purpose.” Unfortunately the way it came out was cussing out a room full of Christians and telling them that the speaker was lying to the audience (he was, by the way). So, as much as I wanted to help people, it came out simply as rude and inconsiderate. When people wouldn’t listen, I would just shrug and say “Well, if they want to live their lives as morons, I guess I can’t stop them.” Looking back, I see now that this isn’t that different from saying “Well, you’re the one who’s going to Hell.”
So what message do we send when we say things like “Well, actually it’s not dragons”? My concern is that we are telling kids that it’s not okay to have an imagination. Now put yourself in my shoes, trying to teach science to people with no imagination. What I’m thinking is that insistence on science as the One True Way can dull people’s imaginations just as much as a fundamentalist religion. When we fail to see the value of other ways of thinking, we could be tying kids down to only one set of mental habits, limiting their flexibility. I think the scariest thing about hearing people say stuff like this is that it reminds me of myself in middle school and high school, when I refused to see the value of anything other than science.
My ninth-grade English teacher is going to love this: myths have their own value. What is the value of the Santa Claus myth? It teaches kids about giving, but not in a didactic “You better do this” kind of way. It also teaches them that it’s nice to receive gifts. There is someone who will just give you something because that’s his job. That’s just what he does. The Easter Bunny? That teaches kids about the changing seasons, about how life comes from somewhere, and that spring and changing seasons are something to celebrate. Telling these stories also teaches kids the value of storytelling. As kids get older, big brothers tell these stories to little brothers (that’s how it works in my family, anyway) and the cycle starts all over again. Gee, maybe there’s a story about the world being full of cycles? Kids get the idea that you can learn from these stories, and that playing and pretending that they’re real is a great way to learn about the world. They’re also just fun, and there’s plenty of value in that. Not everything has to be science, and not everything has to be about money, or values, or even learning. As long as you’re not hurting somebody, fun is a perfectly good reason to do something.
The biggest problem I see with thinking everything needs to be science is that we will fail to see the value of other modes of thinking. Jerry Coyne, Sam Harris and others seem to think that religions and mythology are “failed science.” This seems true as long as everything is trying to be science. Maybe not. Perhaps the goal of telling stories is not to get at what an empiricist thinks is The Truth. I ask if it’s at all conceivable to you, as a scientist reading this, that myths about natural phenomena are actually about the course of human lives, about how people change, and about valuable lessons in how to get along with people (like how if you keep transforming yourself into animals and raping virgins your wife might get a little peeved). Perhaps there’s value in learning how to live with people and there’s something called wisdom that it’s hard to get through studying science. Myths could serve this purpose, but not if we tell the story and then dismiss it by saying “Well, I’m glad we know better now, thanks to modern science! What a bunch of baloney!”
One final question (not bloody likely) is what are we left with if we don’t bother to think in terms of anything other than science? What do we have if we dismiss every story and myth as just plain wrong? Seems to me like we’re left with a bunch of seventh-grade boys. All we have left is “Well, technically…” and “I heard there’s this virus that can eat your brain” and “Nuh-uh” and “Yuh-huh” and…
Don’t we all remember how stupid that was?
I’ve just finished reading portions of Rupert Sheldrake’s The Science Delusion. The title is an obvious allusion to Richard Dawkins’ The God Delusion, so you can guess that Sheldrake’s thesis is that scientists have great faith in their craft, elevating it to the level of producing what I call Truth. The problem, Sheldrake points out, is that modern science is based on adhering to a dogmatic assumption that the universe is a machine. He points out that this is a fairly new idea, and worst of all for supposedly empirical science, there is absolutely no evidence for it. It’s a belief. It’s a myth. I’d like to leave aside the readability and scholarship for a proper review (perhaps elsewhere), but here I’ll deal with the real philosophical problem this presents.
Sheldrake points out that the mechanistic worldview, that is seeing the universe and everything in it as a machine, was a fairly radical idea in the sixteenth through eighteenth centuries when it was proposed by a minority of scientists and natural philosophers. David Hume dismissed it completely. The universe and its inhabitants were seen as something organic, i.e. something that grows, by most ordinary and learned people. However, the material success of Newton’s Laws and (Sheldrake doesn’t mention!) the Industrial Revolution, and continuing into the computer age, has helped convince most people that they are robots inhabiting a giant clock. This is bad for science, as dogmatism stifles creativity and ideas that could be either helpful for science (like Sheldrake’s own theories of morphic resonance) or helpful to the general population (like “alternative medicine”) are dismissed since they don’t fit in to the mechanistic, materialistic worldview of science.
As an example, many scientists dismiss acupuncture as incapable of anything but a placebo effect, since its “mechanism of action” is not known; therefore it’s a money-making tool for charlatans and shouldn’t be used to try to heal people. Sheldrake points out that’s not a valid criticism since the effect on the health of the patient is the same regardless of the mechanism of action, even if it’s just a placebo. Scientists and materialist physicians, on the other hand, will support many drugs whose mechanisms are poorly understood, simply because they are produced by chemistry. As someone who’s seen the inside of pharmaceutical research, Sheldrake is dead on: we don’t know much more about methylphenidate than we know about acupuncture. The mechanism of action of many psychiatric drugs is completely unknown and that doesn’t stop doctors and scientists from having total faith in them.
Although Sheldrake makes his point somewhat clearly, I’m not sure it’s the biggest problem with the mechanistic worldview and dogmatism in science. The problem I see is not within science, but in how the general public is persuaded to see science as Truth. Just witness how scientific graphics are used in TV commercials to sell running shoes: it’s very convincing even when there’s no actual science behind it. This means that scientists do a very good job of convincing people that science is the only route to Truth, or at least that science is the most pragmatic method of achieving their goals. Either way, people see science as infallible, and they swallow the idea that the current mechanistic worldview of science is It. The big problem, as I see it, is that people are encouraged to deny their own experiences in favor of the findings of science, which are inextricably linked to the dogmatic assumptions of the mechanistic worldview.
I’ll give you an example. Let’s pretend, just for the sake of discussion, that I suffer from terrible migraines up to three times a month that keep me from going to work or enjoying and taking care of my family. Totally hypothetical (not). Let’s also pretend that I’ve been to lots of doctors, been prescribed all kinds of drugs, vitamins, diets and exercise based on “evidence.” I’m still getting headaches. None of this stuff has helped to my satisfaction. I’ve had improvements, and I’m slowly learning to live with it, but the best most doctors have to offer me is “try this, there was a study done…” Science is slow. It’s way too slow to help me with this problem. I’ve been having these headaches for thirteen years and the science has not improved much in that time. The best a headache specialist could offer me was to take large doses of vitamins that were identified to help people with mitochondrial disorders, in a study done over forty years ago. The mechanistic worldview, encouraging me to see my body as a set of pumps and electronic circuits mounted on an armature of primitive calcite crystals tells me to see more doctors until I find the one who’s read the right peer-reviewed study. Why should I deny my own experience in favor of peer-review? No thanks. You bet your ass I’m going to try Chinese medicine before I’m going to wait for science to catch up to what I need in my life. I do science, I know how slow it is, even for the fast people.
My biggest problem with the book is this: scientists play the game of “Who’s right?” I used to believe that being factually correct was the most important thing in life. Most of the scientists I know also believe this and they don’t just apply it to their work. They apply it in all realms of their being, particularly because our language and culture is set up for it. People like to be right. Many see life as a competition. Unfortunately, Sheldrake is also playing this game. He spends most of the book promoting his own scientific theories of morphic resonance and other ideas about psychic phenomena. I see this as more of the problem. We don’t need more science or better science. We need to see science for what it is: a way of learning. When we ask for more science, we are reinforcing the attitudes that lead to the problem in the first place. This is particularly evident in how we teach science.
When we teach science, we play the same game by teaching not methods, but findings. Most often those findings are actually models and metaphors, not experiences. For example, right now I’m helping to teach genetics and molecular biology. Most of the course material is not experimental procedures as it could be, but models of the function of biological molecules. The biggest one is the model of protein synthesis, where DNA is transcribed into RNA, which is translated into polypeptides. This is not anyone’s direct experience. This is a story (you could even call it a myth, due to the dogmatism it attracts) that is supported by clever experiments. Nicholas Maxwell points out that we could come up with a huge number of alternative myths that would also be supported by the same experiments, but that’s not how science works. Science seizes upon the first kinda-plausible idea and runs with it until it runs out of steam. The “findings” or “facts” that are found to support this story are wrapped up in it: we never would have done those experiments and found them to support the story if we didn’t have the story in the first place. When we teach science, we don’t teach method, we teach the mechanistic worldview, which is a myth. I often remind my colleagues that most of science is made up. Surprisingly a lot of them take no issue with that assertion, just as I don’t. The problem comes when we present it as something that’s Right, and don’t present people with the alternative of trusting their own experience. If we were honest about the nature of science, then people would see science as one fun way of learning, rather than The Way of Learning.
Unfortunately we encourage intellectual terrorism (“Who’s right?”) by refusing to be honest with people about the nature of our ideas. Sheldrake points this out, but quickly gets caught up in the same game by proposing alternatives. We don’t need more science, we just need to be honest about what science is. This is Sheldrake’s main point, but he primarily focuses on the danger of it to science, proving that he is, after all, a scientist. I am a lot less skeptical about my overall experience than I used to be. However, I’m still just as skeptical about scientific matters because science is a particular way of doing things and it’s intensely limited. I happen to think the prevailing theories of science are just fine. Swallowing them whole as the key to understanding your own direct experience is not just fine.
My overall point is that I don’t think the abuse of mechanistic metaphors is as big a problem for science as it is for regular people (scientists included). I’m surprised how often people who have a problem with science, e.g. adherents of “alternative” medicine, are doggedly scientific. In other words, I often encounter people raising gripes against “science” whose first response is to propound an alternative scientific theory, i.e. to do more science. I’m also surprised how often I hear people explain their personal experiences (mostly bizarre, inexplicable ones) in terms of science: people usually invoke quantum mechanics because it’s the weirdest scientific thing they’ve heard of. It’s almost like they feel they need to defend their own experiences. That’s sad. Personal experience is not a competition, nor is it subject to peer-review. This just shows how deeply science-as-truth is ingrained in our culture. This probably has to do with the Puritan origins of our country; to understand that I’m reading Paul Feyerabend.
Tony the Mechanic is a character that I really loved on Seinfeld. Tony believes that Jerry’s Real Problem is how he takes care of his car. This, of course, puts Tony in a position of power.
Public institutions act a lot like Tony: “Come here and you will get what you need.” I see universities especially telling people “as long as you come here, pay your money and give it a real effort, you will be okay.” Unfortunately universities go quite a bit further than that: even as young as Kindergarten, children are being told that to be “good” they have to go to college and try to get into medical school. Only then will they be able to get all the things they really need in life, like a house, three cars and a big huge TV with an Xbox attached. And when people tell their friends and relatives they want to do things differently they get “I don’t understand you.”
I’ve written a bit more about Tony in my teaching philosophy.
There should be no doubt to anyone in the sciences that there is a “gender gap” in the sciences: there are fewer female professors than male professors in most scientific disciplines. The degree varies across disciplines (I’ve found it strongest in physics and weakest in math, psychology and biology), but it’s always there and in the same direction. A recent study shows that the problem is related to women’s perceptions of operating stereotypes in their colleagues: when women perceive that they will be judged as inferior, they often behave in a way that reinforces the stereotype. This reminds me of a now-classic study that had young Asian women read articles about their identity, either as female or as Asian, and then take a math quiz. When they read about Asian identity, they scored super-high; when they read about female identity, they scored low.
This is a common topic of discussion around my lab, since there are many female graduate students and professors in biology, and we hear all the time of measures to get girls interested in science, increase career advancement, and otherwise make working better for female scientists. The overall goal is to increase the number of women in science. However, I’m a little concerned that people don’t pause and ask what’s really going on, or ask why it’s happening. For example, the lead of the NPR article on the recent psychological study poses this problem:
Over the years, educators, recruiters and government authorities have bemoaned the gender gap and warned that it can have dire consequences for American competitiveness and continued technological dominance.
Really? That’s the problem? We’re not keeping up with Finland? The reason we need to keep more women in scientific professorships is so that the Japanese won’t be smarter than us? Not only does that sound kinda hostile to everyone who isn’t American (which is quite a few people), but it paints a nice, simplistic picture over the real problem.
Perhaps the real problem is exactly what the quotation points out: our ridiculously competitive society. Maybe more women than men figure out earlier on that the goal of their lives shouldn’t be helping America crush Iceland. A big problem in science is that most scientists believe that the number one goal in life is to be factually correct about everything. Perhaps more women than men figure out that there are other things that are more important: things like compassion, kindness and generosity. Is anyone doing research on that gender disparity? Is anyone running a program to recruit men into kindness rather than insane competitiveness? No one has tried to recruit me.
Lately I’ve been reading a lot about teaching methods, and here’s one I think we should try in the biological sciences: wiki-based learning. The goal of a semester in evolution/population biology would be to produce a professional-quality free textbook. This would be the equivalent of a ‘wiki-sprint’ in which each student would be responsible for a topic. Students who are particularly active can take on the role of editor. No exams; all grading is based on participation and quality of wiki-work. The intervention of the instructional staff should be minimal. All incentives will be social or based on quality of work.
This incorporates project-based learning and “scientific teaching.” Students are asked to produce something whose quality they can all see. They can check and edit each other’s work, and those who vandalize will be visible to everybody, not just to the instructors.
Students should be encouraged to incorporate other free materials. The quality of the material will always be subject to review. They will learn about copyright, authors’ rights and the re-use of material (which is a fundamental part of science). It will be very difficult for students to receive a good grade by wholesale-copying other wikis (for example, those of previous semesters), since they wouldn’t be actively participating. Copying material from previous semesters and then improving it, however, is exactly the point: students will need to understand the material in order to improve it.
This would not be a special class. This would be the standard course in evolution, probably a required class. What would be the role of the instructors? TAs would show people how to use wiki software (easy!) and advise them on topics. The professor’s job would be to identify good topics for articles/chapters. The overall goal is synthesis: students can learn terminology and concepts on their own, but the project is where they synthesize them. The instructor’s role in that synthesis is mainly advising students on good topics and philosophical directions, coordinating between students, and rewarding the most active with the title of editor. The professor could also steer students toward good primary literature and advise them on scientific writing style. “Lectures” would consist mainly of reviewing articles and critiquing them as a class.
What do you think? Something similar has been done at CMU in electrical engineering. Does anyone have the means for trying this out?
I had another interesting breakthrough yesterday in how I think about programming, or rather about creating applications using programming. I’ve learned, over the past couple of years of building an application that seemed simple at the outset (a simple number-crunching program!), that the really hard parts of programming are not the parts people typically write about. The hardest parts are obvious, yet nobody talks about them, perhaps because they are so challenging. This brings me to another question: how should I handle those really hard parts?
Yesterday I toyed again with the idea of learning C++, and decided against it yet again. I’d heard that C++ has a number of tools that are good for numerical programs, like vectors, and I recently heard of a new matrix library called Eigen. However, I’ve avoided C++ because I still believe object-oriented programming is one of those bad habits people pick up in programming classes, and C++ didn’t seem to offer any advantages over C. C is good; I mean, C is The Right Thing.
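To make the comparison concrete: here is the kind of numeric kernel in question, sketched in plain C. (In C++ with Eigen the same thing would be roughly a one-liner, something like `a.dot(b)` on vector objects; the point is that the algorithm itself is identical either way.)

```c
#include <stddef.h>

/* Plain-C dot product: the sort of numeric kernel a number-crunching
   program is built from.  No objects required; the algorithm is just
   a loop over two arrays. */
double dot(const double *a, const double *b, size_t n)
{
    double sum = 0.0;
    for (size_t i = 0; i < n; i++)
        sum += a[i] * b[i];
    return sum;
}
```

Whether this loop is spelled in C, C++, or Haskell changes very little about what the program computes.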
There was still something else nagging me, however. The books full of trivial examples you could do easily with a pocket calculator don’t match up with the way the languages are designed. Even books on Haskell don’t seem to say as much as they should about what’s really important in designing an application. That was the realization: what you’re supposed to do with a programming language is build an application. The “program” part (that is, the algorithm) is almost immaterial. After exploring many programming languages, I have found that, with few exceptions, they differ very little in what they offer for implementing algorithms. You can write the same algorithm in almost any language and have it perform well on most hardware these days. So what’s missing? What are all the manuals full of? Why does every programming language have a preoccupation with strings?
Let me use an analogy. I play the banjo: I took over a year of lessons, read tons of books, and have probably spent over 3,000 hours playing and practicing. Even after getting into a really good band, having great people to jam with, and practicing well, there was still something about playing that was very difficult. I kept saying “I don’t know what to play,” or “I can’t make the notes fit there!” After I started graduate school and my second son was born, I needed to shift back to listening, and when I did play, to a quieter instrument. So I started doing things on a guitar that I’d never done with my banjo: playing scales, picking out melodies, and listening very carefully to my favorite guitar players. Listening to Steve Stevens, Jerry Garcia, David Gilmour and Kurt Cobain, I noticed something: these guys don’t play notes, they play phrases.
Why had absolutely no one mentioned playing phrases to me? Was I not listening? Did no one just say “Melodies, counter-melodies, rhythms, etc., i.e. music (dude!) is composed of phrases. You can construct phrases in many ways, but the key is punctuation”? When I learned to play the banjo, I learned the punctuation marks (licks). I learned how to move my fingers, and I learned chord formations. But I never learned that the fundamental thing about music is phrasing. After I figured this out, my brother told me how a famous drummer sat him down at a workshop, pointed a finger at him and said “One thing is important: phrasing.” Luckily, this was when my brother was fifteen. Since I’m not a pro like him, I can understand why I didn’t get that opportunity, but still, come on! This is hugely important. Why did nobody mention it?
And why has nobody mentioned, in any programming book that I’ve ever found, that the crucial thing — the hard thing — about designing a program is the user interface? There are books about user interfaces, certainly, but they are concerned with superficialities, like what color the frame should be. Who cares? The difficult part is deciding how your program should interact with its user. Eric Raymond does spend a whole chapter on this, but he doesn’t start with it. I’d like to read a book that starts with “You can figure out all that stuff about your algorithms: you have the equations, you have the data structures, you know what it’s going to do; spend time thinking about how a user would get it to do that well.”
So my realization yesterday is that the reason the C standard library is full of string functions, the reason Lisp programmers are so concerned with reading program text, and the reason there are so many programming languages and libraries and plugins is that the really hard part is between the user and the algorithm. My inclination is to say that the simplest interface is best. The simplest interface would be “stick something in and see what comes out.” That’s called Unix. Yet even on Unix you can’t just do that: you have to mediate somehow between the algorithm, which lives in a world of numbers, and the user, who lives in a world of text. This is easiest on Unix, but it’s still not easy.
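That mediation can be shown in miniature. The sketch below (the function name is mine, not from any library) is the core of a classic Unix filter: the algorithm — a running mean — lives in the world of numbers, while `fscanf` translates the user's text into numbers on the way in. A `main()` that called `mean_of_stream(stdin, &m)` and printed `m` would complete the filter.

```c
#include <stdio.h>

/* Read whitespace-separated numbers from a text stream and compute
   their mean.  fscanf does the text-to-number mediation; the
   arithmetic never sees a string.  Returns 0 on success, -1 if the
   stream contained no numbers. */
int mean_of_stream(FILE *in, double *mean)
{
    double x, sum = 0.0;
    long n = 0;
    while (fscanf(in, "%lf", &x) == 1) {
        sum += x;
        n++;
    }
    if (n == 0)
        return -1;      /* nothing to average */
    *mean = sum / n;
    return 0;
}
```

Even this tiny example needs a decision about interface: what counts as a number, what to do with empty input, where the answer goes. None of that is the algorithm.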
There are other schools of thought: your user interface should be a pane full of buttons and pretty colors to dazzle your user into thinking they’re doing something useful, or a monolithic shell that does everything with the computer. I don’t really buy either of those, because I know how to use stream editing and Make to tie things together. However, sometimes I need a program that I don’t have to re-run all the time. I would like something in between: something where I can run a simulation, look at the results, then tweak it a little and run it again, then set it up in batch mode to produce a huge pile of results that I can analyze. There’s no reason all of that has to be in one huge program; it could be several. The point is that the algorithm inside would be the same at every step. There are languages like this, such as R, Octave and Scilab. However, I don’t like programming in any of their languages. Maybe I can come to like it, since they make the hard parts easy.
The approach I should take with my next program is to ask: “How do I write a language for running a simulation?”
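As a minimal sketch of what that might mean (everything here — the command names, the toy model — is made up for illustration): a "language" can start as nothing more than a dispatcher over a few commands acting on simulation state. A real tool would read lines from stdin for interactive use, or from a file for batch use, and feed each line to the same function.

```c
#include <stdio.h>

/* Toy simulation state: one variable x that grows by a factor of
   `rate` each step.  The model is deliberately trivial; the point is
   the command interface around it. */
struct sim {
    double x;     /* state */
    double rate;  /* growth factor per step */
};

/* Interpret one line of the tiny command language:
     set x <v>     set the state
     set rate <v>  set the growth factor
     run <n>       advance the simulation n steps
   Returns 0 on success, -1 on an unrecognized command. */
int sim_command(struct sim *s, const char *line)
{
    double v;
    long steps;
    if (sscanf(line, "set rate %lf", &v) == 1) {
        s->rate = v;
        return 0;
    }
    if (sscanf(line, "set x %lf", &v) == 1) {
        s->x = v;
        return 0;
    }
    if (sscanf(line, "run %ld", &steps) == 1) {
        for (long i = 0; i < steps; i++)
            s->x *= s->rate;
        return 0;
    }
    return -1;  /* unknown command */
}
```

The same `sim_command` serves the tweak-and-rerun loop and the overnight batch job; only the thing feeding it lines changes. That is the in-between interface the previous paragraph asks for.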