Thursday, March 25, 2010

Computer Graphics

As part of my course on computer graphics this semester, the class has been writing a ray tracer. The details of ray tracing aren't really worth going into; instead I'd rather share a picture (more technically, a rendering) that is the result of my work.


If you are particularly learned, you'll recognize this figure as the Mandelbrot set. In case you didn't recognize it, at least you will in the future! This version in particular is really an abuse of the ray tracing engine we've developed; typically much more efficient methods are used to generate such an image. However, using the ray tracer lets me generate images that simply couldn't be made with the more traditional methods. For instance, this rendering uses a reflection model to add an additional layer of the delicious recursiveness that characterizes fractals. And though you could do the same with the traditional code, having full control of the renderer (rather than a ready-made rendering program) lets me color the actual Mandelbrot set, which is almost always left black:
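For anyone curious how the figure itself is computed, the membership test is tiny; here's a minimal escape-time sketch in Python (illustrative only, not the ray tracer's actual code):

```python
# Escape-time membership test for the Mandelbrot set (a minimal sketch,
# not the ray tracer's code). A point c belongs to the set if the
# iteration z -> z^2 + c never escapes; in practice we cap the iterations.
def mandelbrot_iterations(c, max_iter=100):
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:        # once |z| > 2 the orbit provably escapes
            return n          # the escape count drives exterior coloring
    return max_iter           # never escaped: treat as inside (usually black)

print(mandelbrot_iterations(0 + 0j))   # 100 -> inside the set
print(mandelbrot_iterations(1 + 0j))   # 2 -> escapes almost immediately
```

Coloring the interior, as in the rendering above, just means assigning something other than black to the points that survive all the iterations.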



For the sake of completeness, here's a more traditional ray traced image, one that shows off a good variety of the capabilities a ray tracing engine built in a single undergraduate semester can have:


As you can tell, my cylinder code still has some issues that need to be resolved.

Tuesday, March 23, 2010

By the Numbers

I presume most people recognize that there is a vague connection between statistics and probability, but, having taken a course in probability theory, I'd be willing to bet the farm that very few people realize the full breadth of intimacy between the two. I say that because, despite having studied both, I'd count myself among the naive. From the outset probability is simply difficult, and often counter-intuitive. Not only does probability proceed in ways contrary to our intuition, it does so in an amazingly tricky way. Maybe it's a function of how easy it starts out: given a typical six-sided die, most everyone knows that the chance of guessing which number comes up is one in six. Easy enough: you pick one side out of six, so the probability is 1/6. The common understanding of probability stops there, for the simple reason that any situation even marginally more complicated becomes remarkably more logically and mathematically sophisticated. Suppose I'm flipping a coin and you're guessing the results. For some reason you're having terrible luck and you've guessed wrong 10 times in a row. What's the probability that you guess the next flip wrong as well? Think about it for a minute, and when you've logically arrived at what must certainly be the answer, highlight the following space for the answer:  1/2

Next, try to logically deduce the probability of guessing incorrectly for 10 coin flips in a row. Answer:   1/1024

It only gets worse from there, to the extent that I'm really not confident I could present the correct answers myself! Even admitting that, I can't help but try for one more. Assume that 4 out of 5 people prefer Crelm toothpaste. What's the probability that, from a selection of 5 people, exactly 4 of them prefer Crelm? Answer (I think): 256/625
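If you'd rather check these answers than trust me, the arithmetic is only a few lines of Python; the coin answers follow from independence, and the Crelm case is a binomial probability:

```python
# Checking the answers above with exact fractions.
from fractions import Fraction
from math import comb

# The coin has no memory: ten past misses don't change the next flip.
p_next_wrong = Fraction(1, 2)
print(p_next_wrong)                     # 1/2

# Ten wrong guesses in a row: (1/2)^10.
p_ten_wrong = Fraction(1, 2) ** 10
print(p_ten_wrong)                      # 1/1024

# Exactly 4 of 5 sampled people prefer Crelm, given that 4 in 5 do overall:
# C(5,4) * (4/5)^4 * (1/5)^1
p = Fraction(4, 5)
p_four_of_five = comb(5, 4) * p**4 * (1 - p)
print(p_four_of_five)                   # 256/625
```

Using Fraction instead of floats keeps the results exact, which is handy when the whole point is a clean closed-form answer.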

The important notion here is that a probability says something both nebulous and concrete about reality. If a truly random die is thrown 6 million times, in all likelihood each number will have come up about 1 million times. If 4 out of 5 people really do prefer Crelm, then the chance that a randomly selected person prefers Crelm is 4/5, or 80%. As much as we all like to think that the statistics don't apply to us (because we're special), if the statistics are accurate there's no way to escape them. Most of the time this is a banal statement, as when referring to whether or not you prefer Crelm--either way it's not exactly a big deal. But then... there are the other statistics. "Around 50% of US marriages end in divorce" can be a pretty hard pill to swallow for a couple walking down the aisle. I have reason to believe the proportion of couples who, while exchanging vows, figure they'll end up in the successful half of that statistic is much higher than 50%--clearly if they thought it wasn't going to last they'd probably not be entering the commitment in the first place. Similarly, doubting the success of the marriage from the outset probably isn't going to increase the chance of a favorable outcome. What's left is an awkward position; objectively, maybe the best one can say is that at least the odds aren't as bad as they could be, better than any casino game. However, marriage is a particularly special case for a number of reasons, the primary one being a shift in locus of control, which is applicable to all interpersonal relationships; though a bit less severe, anyone who's been dismayed by the lack of a second date (etc.) knows the score. To be fair, the actual divorce rate changes based on many factors, where 50% is just the overall rate. The lowest divorce rates are found in each of the following categories: first marriage, atheist or agnostic, age 30 or older, residing in the Northeast, and no cohabitation prior to marriage.

Uncontrollable statistics naturally lead to other, more personally manageable probabilities. For instance, 28% of car accidents in the US happen while at least one of the drivers is using a cell phone. This is the part where I reiterate: we love to think we're special and that the statistics don't apply to us, but it just doesn't work that way. We are all special, I'm fully on board with that, but that doesn't grant any of us statistical immunity. Using a cell phone while driving (even with a hands-free headset) substantially increases the chance that you will be in a car accident, which could result in your death or, arguably worse, the death of others, along with manslaughter charges and the lifelong burden of knowing that you've killed someone. It's very simple: while the car is in gear, your phone doesn't exist. There are absolutely no excuses.

Friday, March 19, 2010

Technology II: State of an Art

For today's exercise, please read the following passage and give the question at the end a sincere and thoughtful rumination. Once you feel you've thoughtfully ruminated enough, watch the video.

Imagine a modern machine, one that could be called a robot, consisting of a three-fingered hand mounted at the end of an arm with a range of motion similar to our own, plus a single camera. Given the present state of technology, which any sensible person would describe as "quite advanced," what might this arm be capable of?




It is astounding, yes, no less should have been expected, but there is something a bit backwards about it. Traditionally, machines are constructed and used because they can do some task vastly better than we can. Naturally the machine's form and means of manipulation don't resemble ours in the slightest; otherwise we'd probably not have needed it in the first place. A good number of years ago, enchanted by the ideas of Isaac Asimov, I had a strong interest in androids--humanoid robots. But even before I knew the beginning of the true technical challenges behind building an android, I realized something: a person desiring to make a passable humanoid machine would save themselves a lot of effort and greatly increase their probability of success by doing so the old-fashioned way, that is, by seeking a viable mating partner and letting nature run its course. At the time the thought was conceived mostly as a joke, and though it's still humorous, it's also quite sensible--practically speaking, I think we have more than enough roughly human-shaped objects with adequately human-like capabilities. Nonetheless it is almost certain that many will continue attempts to build an android, and it's far from difficult to imagine that one day a result could be described as nothing other than successful. However, one thing will remain true even then, even when androids exceed our capabilities: the human form can't do everything. No matter how dexterous or sophisticated, our fat-fingered mechanical offspring won't be able to manipulate the atoms of a molecule unaided; even less technically, these two-handed automatons will have just as much trouble as we do doing the work of three hands. This will be a small victory for three-handed people, as they will get to remain not-yet-obsolete longer than the rest of us, at least for the few moments it takes to add one more hand to the robot.
All silliness aside (well, ok, just most of it), there's clearly a huge number of tasks which won't benefit from the superhuman but still human capacity of these imaginary androids. This is a significant relief, since it means we aren't stuck waiting for these super androids to come along (which is probably not too far off in years, even if it feels quite a ways off relative to the current rate of progress). In summary, I've stated in a very roundabout way that we are free to keep augmenting our own similarly limited mechanics just as we have since the invention of the first tool; we can use our already inconceivably sophisticated body of technology to extend and enhance our capabilities. Case in point: the da Vinci surgical robot. Surgeons are essentially required to have superior motor control, as even the slightest irregular movement could result in a fatality. However, no matter how talented the person holding the knife with intent to open you up, there is a fundamental biological limit to the accuracy they are capable of. Rather than just hoping their home life isn't distracting them and that their cup of coffee wasn't abnormally strong that morning, the da Vinci confers peace of mind with a laundry list of features specifically designed to maximize precision by counteracting the inherent imprecision of human hands. There are over 700 in use worldwide, and though it is only approved for a limited number of procedures, that number is expected to keep increasing as rapidly as it has been. It's already on its second version, and I think it's a safe bet that further enhancements will be rapidly forthcoming. Of course, the proof is in the numbers, and the numbers are unambiguous--given the choice between traditional and robot-assisted surgery, choose the latter! Here's a video of it peeling a grape on live television:


In conclusion, I'm compelled to once again say the same thing I've said previously: over the past few decades in particular we've been developing foundational technologies. Because each of these has such vast potential for application, the first and most obvious few applications took hold and found success. Since we are focused on a multitude of things wholly different from the vastness of yet-unrealized and incredible possibilities that these technologies enable, it is natural to unconsciously assume that what we see is more or less the extent of what technology can offer, but this tacit assumption is, in my opinion, absolute rubbish. In particular, the most overlooked and underutilized technology is cheap and powerful microprocessors; everyone knows that desktop processors keep getting more powerful without getting more expensive, but the bit of interest is that the processors of yesteryear continue to get smaller and cheaper. This fact in itself isn't unacknowledged; there's a well-known meme that a common calculator found in a high school today has more processing power than the guidance computer that flew the Apollo astronauts to the moon and back. The overlooked bit is that that little processor can do an awful lot more than help with algebra homework. Like what? Well, I have a video demonstration of one such device, but before you watch it consider that the processor in the device shown is essentially as powerful as a 1986 state-of-the-art desktop that cost $6500 (the Compaq Deskpro 386), can be had for around $3, and is smaller than a dime. The whole device could probably be made wholesale for under $10.

Thursday, March 11, 2010

Electronic Music

Vitalic is a musician who constructs and delivers frequent variations in air pressure in a manner with a rather more contemporary lineage than what I've shared previously.

Thursday, March 4, 2010

Ramachandran on the encephalon

Vilayanur Ramachandran is a neurologist. What's more, he has very keen insight and a particularly effective ability to communicate. Given that the nervous system (including the brain) rapidly becomes a profound topic of consideration, a person like Ramachandran can really make 20 minutes intriguing. Well, it was 23 actually, but I'm willing to wager nobody in the room wanted him to stop.

Sunday, February 14, 2010

Observations

First, one of the most notable characteristics of life is activity in the 4th dimension. Observe:


Another interesting fact is that computers also exhibit activity in the 4th dimension:

and

Almost entirely unrelated to the entries prior, here is a coffee cup as a measure of progress towards some certain success, followed by a painting and a portrait:



Saturday, February 13, 2010

Technology

I've heard that some people think technology isn't really progressing at an amazing rate. I think they're crazy. I don't think I've shared this yet; it's an example of the state of technology:


Frankly, I think we have so much technology at our fingertips that we have hardly even begun to scratch the surface of what it's capable of. On top of that, better technology is hitting the scene faster than anyone can keep up with. I certainly think that we are in a technological singularity, and that Kurzweil's condition (strong artificial intelligence) is satisfied by our own intelligence as augmented by the Internet. It's a subtle, almost secret form of artificial intelligence that, from what I gather, no one has yet realized the significance of. With the power of the Internet, a person, so willing, may learn practically anything, and at record speed--no digging through card catalogs or driving to the library necessary. Suppose you want to learn engineering but can't afford school? No problem: one of the best engineering schools in the country, MIT, has put all course materials for the first four introductory engineering courses online (lectures, notes, assignments, labs, etc.) for free, available to anyone with an Internet connection. You'd probably want more than an introduction, so it's a good thing they've made freely available most of their curriculum, including that from other programs. You don't get any certificate, but does that really matter? I'm certain that a degree without an education is worthless, and that an education is no less valid for being uncertified. This isn't a new fact; it's just easier to get an uncertified education now than it has been in the past. Take Dean Kamen, the man behind this amazing prosthetic arm and many other similarly astounding creations--he didn't even earn an undergraduate degree, though he now has around 7 honorary doctorates. The real point is that now other potential Kamens are easily able to obtain the resources necessary for their talent to reach fruition.
An important addition: I think most people have more potential than is generally realized; if this is true, then we should expect a significant increase in technological progression. The question I'll leave for you to answer is this: have we seen a significant increase in technological progression since the Internet became widely available?

Tuesday, February 9, 2010

Arbitrary archetypical emotion

Describing an emotion such as disappointment is an undertaking well served by describing situations which would cause this emotion, that is by conveying an archetypal scenario. Thus what follows.

My math professor had mentioned multiple times in class that an upcoming assignment could be done with spreadsheets or with a computer program. Given that I'm supposed to be a computer programmer, I figured that despite the bit of extra work involved I was either obliged or compelled to write the computer program. Hoping to be not merely a person who could write a computer program but instead perhaps someone who could (possibly, maybe) be described as having a talent for writing them, I spent rather more time and effort than reason would suggest to construct a beautiful program, replete with color graphs and readable formatting. I went to some length to make it work with just a single double click, dissatisfied with the alternative prospect of explaining to my instructor how to enter a command into the shell, much less how to add java to the PATH. I added an equation parser so that input like sin(e^(ln(y-sqrt(x-y)))) could be interpreted correctly, a scrollable output window for tables of values, and 2 windows for 12 full graphs, labeled unambiguously with their associated equations. I even tested it on three different platforms to make certain it would Just Work (note: it won't run on OS X, which lacks the latest JVM--it Doesn't Just Work there). So it was with pride and satisfaction that I sent it off along with the source code, wondering, wishing I could see the reaction.
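The actual program was written in Java; just to illustrate the equation-parsing idea, here's a toy sketch in Python that leans on the standard library's ast module. Everything here (the function names, the allowed-symbols table) is my own invention, not the original code:

```python
# A toy illustration of the equation-parsing idea (the real program was in
# Java; this is my own sketch, not its code). Formulas are parsed with the
# standard library's ast module and evaluated with only a handful of math
# functions and the variables x, y in scope.
import ast
import math

ALLOWED = {name: getattr(math, name) for name in ("sin", "cos", "sqrt", "exp")}
ALLOWED.update(e=math.e, pi=math.pi, ln=math.log)

def evaluate(formula, **variables):
    # Accept "^" for exponentiation by translating it to Python's "**".
    tree = ast.parse(formula.replace("^", "**"), mode="eval")
    code = compile(tree, "<formula>", "eval")
    # Empty __builtins__ keeps the evaluation restricted to ALLOWED names.
    return eval(code, {"__builtins__": {}}, {**ALLOWED, **variables})

# With x=5, y=4: sqrt(x-y)=1, ln(y-1)=ln 3, e^(ln 3)=3, so this is sin(3).
print(evaluate("sin(e^(ln(y-sqrt(x-y))))", x=5.0, y=4.0))
```

A real equation parser (like the one the program needed) would be a hand-written tokenizer and recursive-descent parser; borrowing the host language's own parser is just the quickest way to show the shape of the problem.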

A day later, I received the response. It was a request, for me to bring in a paper copy of my results; my professor, for whatever reason, didn't want to run my program.


That vignette could be used to describe disappointment, but with a catch: it would best illustrate the emotion only if the emotion was stated beforehand. If I had said that I was about to describe anger, frustration, or even success and satisfaction, would the story have illustrated that emotion any less? It is easy to see that a person could be angry in such a situation, frustrated too. But I could just as easily have felt flattered, were I to think that I had created a program so sophisticated it was to be handled with caution. If any of these clearly distinct emotions could have occurred, then circumstantial emotion is itself in some ways ambiguous (I hope that's not news). The ambiguity of emotion is something easily used to our advantage; when you feel positive emotions you feel better, and you're physically healthier. That being the case, why not exploit the ambiguity? In writing that program I learned a lot; I improved my life regardless of who sees or approves of it. I could feel disappointed, or I could feel a strong sense of satisfaction in my accomplishment. Given those options the choice isn't very difficult! The power is in the fact that there is a choice, that you have a say in how you feel. You can spend all your time looking for reasons to be sad or you can spend all your time looking for reasons to be happy--either way you shouldn't be surprised by the results.

Saturday, February 6, 2010

One down

One assignment done, 6 or 7 to go. Ok, just an excuse to share some Rachels - Last Things Last.



Technically the piano is a percussion instrument, but I don't think that's why I like it so much. I like harpsichords a lot as well, which utilize plucking instead of striking to elicit those wonderful resonant frequencies. The logical conclusion then is that I like instruments roughly shaped like pianos. On a related note, Wikipedia says "The word piano is a shortened form of the word pianoforte, which is derived from the original Italian name for the instrument, clavicembalo [or gravicembalo] col piano e forte (literally harpsichord with soft and loud)."

Demonstration of feasibility

I'm a bit too busy to give the normal glyphic flood, but as follow-up to my plea for autonomous vehicles this demanded mention. The Center for Automotive Research at Stanford (CARS) is planning to send an autonomous Audi up Pikes Peak at race speeds. Pikes Peak is a mountain road used as a rally stage, with surfaces varying from packed dirt to loose gravel. An autonomous car has actually finished the course before, but "only" at an average of 25 mph. There is good reason to suspect the Stanford team will succeed, as they won DARPA's Grand Challenge and took 2nd place in the Urban Challenge; their car can at least drive 120 mph across the salt flats. I cannot wait to hear the results!

Saturday, January 23, 2010

Light and reflections

I was randomly trying to get a good photo of my iris when I noticed something interesting. One of my attempted variations was bouncing the flash off a nearby wall while looking at said wall.



It's not a good photo of the iris at all, but something interesting was going on in my pupil...



It almost looks like an image...



Hm, that looks familiar.




It's a horizontally mirrored image of the wall I was using to reflect my flash. Neat.

Wednesday, January 20, 2010

Max Richter

Max Richter is a contemporary classically trained composer and musician. By my judgment he is absolutely a musical prodigy, certainly amongst the most gifted musical artists of this time. His pieces can often be characterized as avant-garde as he blends modern and classical elements, breaking conventions to enable the forging of some extraordinary, transcendental aural experience. Interestingly his Wikipedia page mentions him "commissioning and performing works by Arvo Pärt, Brian Eno, Philip Glass, Julia Wolfe and Steve Reich" early in his career, which happen to be a few of my favorite artists. Predictably enough, his music has had a rather profound impact on my life several times over several albums. Nonetheless, I know enough to understand music is as subjective as it gets, to the extent that the reactions of others can be rather hard to predict. Well, there's only one way to find out; here's some evidence for my conjecture:



The thing is, like many great musicians, he doesn't just write songs, he writes albums. Thus, as great as this song is alone on this page, it is but a shadow of what it becomes when played as part of the whole. I recommend purchasing all his albums, if only to ensure he's funded and motivated to continue producing lots of music for as long as possible.

Saturday, January 16, 2010

2 photos and a fabrication








Time keeps on slippin'

I like to think about really mind bending things, but even more than that I like to think about real mind bending things. For instance, I'm not sure I'll ever be comfortable in my understanding of the fact that the further we look into space, the further back in time we look. This is a legitimately crazy thing--it means that given a powerful enough telescope we could watch the creation of our own universe. As impossible as that sounds, it's more or less correct; in fact, we have telescopes powerful enough to look inconceivably deep into spacetime, and we have actually captured the early aftermath of the birth of our universe. This afterglow is known as the cosmic microwave background radiation, and its discovery earned a Nobel prize. No matter which way we look, the furthest we can see is the CMBR; it surrounds us. This is interesting because the best theory of the origins of the universe, the Big Bang theory, posits that the universe started in an extremely dense bit of space, expanding from there. But how much sense does it make that no matter which way we look we end up looking at that little bit of space? In one sense, the CMBR is a sphere that surrounds the universe. In another sense the universe surrounds it, as the universe grew from it. Either way it doesn't make any sense.

Not everything that throws us for a loop needs to be on a universal scale. Recently I read an article from NewScientist titled "Timewarp: How your brain creates the fourth dimension," which I found to be nothing short of profound. As you might gather from the admittedly bad title, the article is about our perception of time, which is something I had (surprisingly) never really considered before. Early on there is a sentence that nonchalantly flicks off a few words; "Time... is much weirder than we think it is."

Invitingly audacious, isn't it?

The whole article is well worth it, as it shows some ingenious experimental methodology and keen insight regarding something as potentially slippery as temporal perception. One thing that stood out was research apparently showing that a click track at 5 clicks per second (300 beats/minute) for 10 seconds improved performance in basic arithmetic, memorizing words or hitting a specific key on a computer keyboard by 10 to 20%.

There has been research done that shows binaural beats at certain frequencies can entrain certain brainwave frequencies, and it has been suggested that this phenomenon could possibly be used to enhance the performance of the brain. I wonder if binaural beats are somehow related to rapid beats... I know that because of the time delay with binaural beats it sounds as though there are twice as many beats as usual. Anyway, it will be interesting to see where all that research goes.

Wednesday, January 13, 2010

Intelligent Transportation Systems

In 2004 it took an estimated 6,400 megajoules to build a typical computer, including a 17" CRT. This works out to 1,778 kilowatt-hours (kWh), or about two months of the average US household's electricity consumption. Based on the 2009 US average industrial rate of $0.07 per kWh, and assuming that only electricity was used at 100% efficiency, $124.46 of the cost of the computer went to energy alone. I reckon this would represent somewhere around 10% of the total cost.

A gallon of gasoline holds about 1.32x10^8 joules, or 132 MJ (equivalently, about 36.6 kWh), meaning the computer would require about 48.5 gallons of gasoline to build. Using a rough average of current prices, $2.70 per gallon, that's about $131 worth of gasoline.

Alternatively, the current average residential rate for electricity is $0.12 per kWh, meaning the energy to build the computer would cost $213.36. Notice that a mere 5-cent change in the cost of a kWh nearly doubles the end cost of the energy, which would most likely be reflected in the purchase price. It's important to recognize that energy and the cost thereof, whether from gasoline, electricity, or beyond, is embedded in all facets of our modern lives. In other words, if the price of gasoline goes up, the price of everything goes up. Of course most of our electricity is generated from coal and natural gas, so the price of gasoline doesn't seem directly related to building a computer. Unfortunately that's rather short sighted, as gasoline is required to move the computer parts to and fro, not to mention to transport the coal to the power plant to begin with.
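Laid out explicitly, the arithmetic behind those dollar figures looks like this (all numbers are the ones quoted above):

```python
# Back-of-the-envelope energy cost of building the computer.
# All figures are the ones quoted in the post.
BUILD_ENERGY_MJ = 6400
MJ_PER_KWH = 3.6                          # 1 kWh = 3.6 MJ

build_kwh = round(BUILD_ENERGY_MJ / MJ_PER_KWH)
print(build_kwh)                          # 1778 kWh

industrial = 0.07                         # 2009 US average industrial rate, $/kWh
residential = 0.12                        # average residential rate, $/kWh
print(round(build_kwh * industrial, 2))   # 124.46
print(round(build_kwh * residential, 2))  # 213.36
```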

Clearly if the cost of both gasoline and electricity were to rise by a nickel, we should naturally expect everything we buy to become quite a bit more expensive. It isn't difficult to see that this in turn would most likely have dire economic consequences. This is why there's so much buzz about energy, it should be obvious that the extreme consequences of demand outstripping energy supply readily justify extreme evasive efforts.

Imagine it was discovered that a meteor sufficient to absolutely obliterate Earth was headed straight towards us with a 100% probability of collision. Our dependence on gasoline is kind of like that. Buying a Prius would be like building a large bomb shelter: it would show that you probably realize there's some kind of problem, but that you nonetheless have absolutely no understanding of its magnitude. Do you know how much energy it takes to turn bits of iron buried in the Earth into a shiny new Prius? According to an average figure per car (not the Prius specifically) given by Toyota, around 22,519 kWh, or 22.5 megawatt-hours--the rough equivalent of 615 gallons of gas. This means that driving a Prius 31,000 miles uses about the same amount of energy as building the thing to begin with! 22.5 MWh would power the average house for over two years; it's quite a lot of energy.

One implication is that by buying a used car instead of a new Prius you avoid the use of the equivalent of those 615 gallons of gas--buying a used 15 mpg beast and driving it 9,000 miles uses less gas than a new Prius with 0 miles on it, making the beast the more sustainable and conscientious choice up to that point. I realize I always pick on the Prius, but I don't mean to be too disheartening; the Prius is one of the better options available, even if I think it's not as extreme as it should be. Anybody who buys a new Prius out of legitimate environmental concern is now obligated to drive that car into the ground.
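Here's the break-even arithmetic spelled out. The 50 mpg figure for the Prius is my assumption for the sketch; the other numbers are the ones above:

```python
# Break-even arithmetic for the Prius's embodied energy.
# The 50 mpg figure is an assumption; the others are quoted in the post.
EMBODIED_KWH = 22_519            # Toyota's average manufacturing energy per car
KWH_PER_GALLON = 36.6            # energy content of a gallon of gasoline
PRIUS_MPG = 50                   # assumed fuel economy

embodied_gallons = EMBODIED_KWH / KWH_PER_GALLON
print(round(embodied_gallons))             # 615 gallons

breakeven_miles = embodied_gallons * PRIUS_MPG
print(round(breakeven_miles, -3))          # ~31000 miles

# A used 15 mpg "beast" driven 9,000 miles burns only 600 gallons,
# still less than the 615 gallons embodied in a brand-new Prius:
print(9_000 / 15)                          # 600.0
```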

New cars aside, one Mythbusters experiment showed a 39% increase in fuel efficiency from drafting a big rig--driving 10 feet behind it. Assuming every car could always draft in such a manner and increase efficiency by 39%, well then the yearly consumption of gas would decrease by a monumental 39%. That's a big assumption, but there's one way it could be realized, and that's with autopilot.

We can't all draft all the time because it is very dangerous to drive at almost any speed 10 feet behind anything, and the reason is simply biological: it takes a measurable and substantive amount of time for information to traverse the nervous system, a phenomenon commonly referred to as reaction time. When you see brake lights, the light must trigger action potentials in your retina, which travel into the brain. Once processed, another signal is sent down the looong path (compared to microscopic neural cells, inconceivably long) to your foot, telling it to press the brake pedal. If a truck moving 60 mph slams on the brakes with you 10 ft behind, that reaction time is simply way too slow and it's game over. On the other hand, with autopilot brake lights aren't even necessary; the computer in each vehicle would be in constant communication with the cars in front and behind, so the vehicles could be 10 ft apart, 2 ft apart, even physically connected like a train without any problem. I imagine the optimum arrangement would be a physical connection, for a number of reasons. Of course, if all drivers were computers, the brakes themselves would hardly be needed, especially on the freeway. If you know the status of every car around you--their planned movements, power characteristics and beyond--then wasteful braking could often be replaced by simply letting air resistance slow the car as appropriate, perhaps to allow a car to enter the train, itself a task much easier for computers than humans.
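To put rough numbers on why 10 feet is hopeless for a human but workable for a computer (the reaction times here are illustrative assumptions, not measurements):

```python
# Distance covered before braking even begins, at a given speed and
# reaction time (the reaction times below are illustrative assumptions).
MPH_TO_FTPS = 5280 / 3600        # 1 mph = ~1.467 ft/s

def reaction_distance_ft(speed_mph, reaction_s):
    return speed_mph * MPH_TO_FTPS * reaction_s

# A human needs on the order of 1.5 s to see brake lights and react;
# a car-to-car radio message plus actuation might take ~0.02 s.
print(round(reaction_distance_ft(60, 1.5)))      # 132 ft, far more than 10 ft
print(round(reaction_distance_ft(60, 0.02), 1))  # 1.8 ft
```

At 60 mph the car covers 88 feet every second, so the gap a human needs is set almost entirely by their nervous system, not by the brakes.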

It isn't hard to envision that traffic lights would disappear into irrelevance as well; indeed, I doubt it would make much sense to sit at an intersection when precise control and rapid, traffic-omniscient computer communications would allow cars of all headings to pass through synchronously. Sure, it will take a while to get used to constantly missing that other car by inches, but abandoning the familiar start/stop/wait process will yield tremendous fuel savings, as it is the most inefficient part of driving and the reason the distinction between city mpg and highway mpg exists. Accordingly, the pace of society will see a new and considerable boost, as not only does the time between locations diminish, but we are also free to spend that time doing something other than driving. Not only will we get places fast, but ambulances, police, and fire trucks will be able to reach their destinations in the minimum possible time. If that weren't enough, we should all expect to be noticeably wealthier as our expenditure on gas shrinks, car insurance disappears, and, as mentioned above, practically everything drops in price along with energy cost. It's such a win for everyone it feels like cheating, but all of that is just the start.

The first thing people say when they hear of computers or robots driving cars is "but that sounds So dangerous, it would never be safe enough, I would never trust it!" Well, the bleakness of reality readily illustrates the absurdity of such a thought. Think of it this way: the autopilot system could have 5 million accidents a year and that would still be a huge improvement over humans driving cars! There were around 6.4 million car accidents in 2005. 100 people could die every single day in a computer driven car and it would still be safer, because 115 people are dying every day in the current system. One hundred and fifteen people sure seems like a lot, doesn't it? Well, consider that 3,303 people died in car accidents in the month of September, 2001. That month is and always will be bitterly remembered solely for the terrorist attacks that felled the Twin Towers, acts that meant the death of 2,819 people. There is no doubt that 9/11 was a tragedy, but so was the loss of 3,303 people to car accidents. Death by car accident and death by terrorist attack are fundamentally similar in that the victims of either are generally neither expecting nor deserving of the outcome--incidence is practically random. Just because the first figure elicits strong memories and the second is unfamiliar doesn't make the former any more tragic! Personally I'm inclined to think that every person is more or less equally valuable (namely, invaluable) and thus that each person's death is equally tragic. That being the case, the 2,819 terrorism-related deaths on 9/11 are quantitatively about 85.347% as tragic as those due to car accidents in that same month. Alternatively, if we were to assume that only the death of a relative or dear friend qualified as measurably tragic, then the majority of people would see that 2,819 random strangers and 3,303 random strangers are pretty close to each other, and we might expect to estimate their relative tragedy as similarly near.
Objectivity aside, you would have to be colder than cold to somehow consider 3,303 lives lost any less tragic than 2,819 lives lost, regardless of the details. These were all people who could have been you or I, yearning to be alive just like you and I: husbands, daughters, mothers, brothers... neighbors, friends, and mentors; they were real people!

So it's established: there were two significant tragedies in the US in September 2001. Now what? Perspective: the 3,303 road deaths in September were actually fewer than in each of the two months before and after, which makes five tragic months in a row. If you figure that anything over 2,000 deaths is sufficient to be labeled tragic, every single month in 2001 was a tragedy considering car accidents alone... 37,862 people died. Every single year from 1994 to 2008 was a tragic year, with an average of 37,500 road deaths per year. 1994 is the earliest data I have, but I'm willing to bet the numbers don't improve much by going back further. Over the 15 years from 1994 through 2008, 562,712 people died in car accidents. If instead of happening over those years it had happened in one day, that day would be about 199.6 times as tragic as 9/11--like the events of 9/11 replayed 199.6 times in a single day. 562,712 is 2.5 times the total number of people who died from the atomic bombings of Hiroshima and Nagasaki combined. Know that those weren't the most devastating attacks, though--the strategic firebombing of Japanese cities killed around 500,000 people, inconceivable yet still fewer.

Cars driven by people are as deadly as, if not deadlier than, world wars.

Then how dangerous would it be? Because an intelligent transportation system would need to be implemented everywhere at once--a massive project--breadth and depth of testing at every stage is a certainty. To start, there have been decades of dedicated research on this specific issue, and the state of the art is amazing (see DARPA's grand and urban challenges). Given the talent inevitably attracted to exceptional challenges (as top engineers were to NASA), a category for which this certainly qualifies, I presume each issue arising throughout development would be deftly handled. Finally, I would expect some qualified organization to be intimately involved, dictating the requirements and governing the development to ensure safety and reliability, much as the FAA does with all things aerial. For an autopilot system made properly as such, I predict fewer than a hundred accidents per year from the very start, and probably no deaths. With such a system the probability of dying in a car accident would drop from frighteningly high to somewhere below that of being struck by lightning. The current estimated yearly cost of car accidents is over $230 billion, so... cha-ching! There's an extra $229.98 billion floating around. Nonetheless we would expect the system to improve over time, transforming cars from the most dangerous form of transport into the safest.

Optimally, the typical commuter car should be prepared for the transition by being made small and ultra-light, with aerodynamics engineered in terms of chains of cars. The majority of cars should seat one passenger, since most often a car carries only one person and every empty seat means wasted energy. With standardized interfaces and characteristics, other vehicle forms would fill the need for cargo haulers, high-capacity vehicles, and so forth. Ideally vehicles would be public property, eliminating the need for a family to own multiple vehicles for commuting and family outings, but realistically this is the US and people want to own the things they use. Regardless, thanks to reduced complexity and an altogether more efficient vehicle design, coupled with the energy-efficiency savings, a family could afford to own a number of vehicles that nonetheless add up to a fraction of the energy and materials cost of the present steel monstrosities, and maybe even fit in the same amount of storage space. Alternatively, a sufficiently large platform could allow for modular passenger compartments; though the platform size would be less than optimal for single passengers, needing only one drivetrain would decrease materials consumption. The subsequent implication is that modular drivetrains could be used instead of modular passenger compartments.

The aforementioned efforts combined would make for an increase in efficiency so marvelous that domestic oil production would actually be sufficient for the first time since the 70's, when it peaked. Since we're making a whole new concept of car, it would make sense to complete the metamorphosis: ditch internal combustion for electric, pave the road with solar cells, and oil becomes practically irrelevant for the first time since the second industrial revolution. Rather than carry around the really heavy main batteries, leave them stationary and build contact strips into the road so that cars can zip around like full-scale slot cars. Relatively lightweight backup batteries would still be carried so that in case of main power failure the vehicle could still maneuver and communicate safely. With the sum of these modifications, we should expect our busiest roads to give the impression of having lost much of their normal traffic--in reality, the same road may carry even more traffic, only seeming to carry less because more cars fit in less space for less time. Each intersection would know about every car planning to traverse it from the earliest possible moment, and would assign each car a set of parameters to use for traversal, including possible alternate plans. Each car would then communicate with every other car assigned to the intersection around the same time to verify that everything works out--a sanity check independent of the intersection. For example, suppose two chains of several cars each plan to travel east and north through the same intersection at the same time. The intersection may dictate that both chains enter moving 80 mph, timed fractions of a second apart. The chains verify together and find that they will pass within 6 inches of each other, but that this is an acceptable margin given the wind conditions and other factors.
The plan is confirmed with the intersection and each car passes through, deviating a few hundredths of an inch from its predictions--these deviations would then be incorporated back into the prediction model, which is distributed across the whole network. Now suppose four very long chains traveling in all four directions are approaching the same intersection. This time the intersection would probably dictate that the lead cars split and accelerate through such that at any moment there are 4, possibly 8, cars in the intersection, each one missing the others by a hair. Eventually it is expected that the traffic network would maximize the efficiency of the whole system in unexpected ways. Maybe previously busy intersections will be used as though there were no crossing, or all but a handful of wide, long, straight thoroughfares will fall into relative disuse.
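To make the scheduling idea concrete, here's a toy sketch in Python of the kind of sanity check two chains might run against their assigned plans. Every name and number here is illustrative, not any real intelligent-transportation protocol, and it checks only whether the chains' occupancy windows overlap in time--the lateral "miss by inches" geometry is left out entirely.

```python
# Toy sanity check: do two chains' assigned plans put them in the
# intersection at the same time? (Illustrative numbers throughout.)

FEET_PER_SEC_PER_MPH = 5280 / 3600  # 1 mph = ~1.467 ft/s

def crossing_window(entry_time_s, speed_mph, intersection_width_ft=60.0):
    """Return the (start, end) interval during which a chain's lead car
    occupies a hypothetical 60-foot-wide intersection."""
    speed_fps = speed_mph * FEET_PER_SEC_PER_MPH
    return (entry_time_s, entry_time_s + intersection_width_ft / speed_fps)

def conflicts(plan_a, plan_b):
    """Flag a conflict if the two occupancy windows overlap in time."""
    a_start, a_end = crossing_window(*plan_a)
    b_start, b_end = crossing_window(*plan_b)
    return a_start < b_end and b_start < a_end

# Two chains assigned to the same intersection, fractions of a second apart:
east = (0.0, 80)   # enters at t = 0.0 s, moving 80 mph
north = (0.6, 80)  # enters 0.6 s later
print(conflicts(east, north))  # at 80 mph the crossing takes ~0.5 s
```

At 80 mph a car covers the 60-foot crossing in about half a second, so a 0.6-second stagger leaves the windows disjoint; shrink the stagger and the check flags a conflict, at which point a real system would fall back to one of the alternate plans.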

I am a driving enthusiast; I really love driving. Many days it seems my highest aspiration is to do laps around Laguna Seca in some kind of ultra-performance four-wheeled vehicle. But despite my pleasure in driving, there is no way I can call the present system workable. It's extremely dangerous, terribly slow, woefully inefficient, and absurdly expensive. The truth is that we have the technology to automate the roads; people have been working on it for decades, and the resulting systems have proven reliable even in novel situations where many humans might otherwise fail. It might not be perfect, but it's much better, and as I've shown, we're so terrifically awful at driving that that's not saying much. The transition is ready to happen, and when it finally does our world will simply become safer, faster, better, and wealthier. The only downside is that it can't be done overnight.



Once we finish automating our roads, what's the next revolutionary development? A space elevator. More on that some time.


some sources:

"Energy Intensity of Computer Manufacturing" by Eric Williams, United Nations University
  http://www.scribd.com/doc/4183/Energy-Intensity-of-Computer-Manufacturing
"How much does electricity cost? What is a kilowatt-hour?"
  http://michaelbluejay.com/electricity/cost.html
"How much electricity do computers use?"
  http://michaelbluejay.com/electricity/computers.html
Energy Content of Fuels (in Joules), other useful tables
  http://physics.syr.edu/courses/modules/ENERGY/ENERGY_POLICY/tables.html
"Weekly U.S. Retail Gasoline Prices, Regular Grade"
  http://www.eia.doe.gov/oil_gas/petroleum/data_publications/wrgp/mogas_home_page.html
"Average Retail Price of Electricity to Ultimate Customers by End-Use Sector, by State"
  http://www.eia.doe.gov/cneaf/electricity/epm/table5_6_a.html
"Energy to build a car?"
  http://www.cleanmpg.com/forums/showthread.php?t=18240
"Most and Least Fuel Efficient Cars "
  http://www.fueleconomy.gov/FEG/bestworst.shtml
National Highway Traffic Safety Administration's Fatality Analysis Reporting System
  http://www-fars.nhtsa.dot.gov/Main/index.aspx
Wikipedia - "Intelligent Transportation System"
  http://en.wikipedia.org/wiki/Intelligent_transportation_system
U.S. Dept. of Transportation - "Intelligent Transportation Systems Benefits and Costs, 2003 Update"
  http://ntl.bts.gov/lib/jpodocs/repts_te/13772.html#4.0

Wednesday, December 2, 2009

A bend

Release 3 wouldn't be much of a follow-up without a few milestones, and milestones (of a sort) I did make. This time the stretch became a bend, which doesn't make sense but for the fact that thereafter follows the break--fortunately I anticipate it to be more of the winter kind than the psychotic kind. The 17 straight hours of desk work seemed pretty significant at the time, but today, upon leaving my obligations (met and otherwise) at the classroom door, I realized that 17 hours was a number unwittingly exceeded. Admittedly, it took a while to push aside enough fuzz to narrow down the two endpoints in time, and longer still to find the difference within that silly base-twelve chronological institution. Fancy dictum and questionable conflations aside, this time I worked for around 21 hours straight. The distinction, though, is probably less than significant, since the prior 17 was constrained by such a late start--this time I only managed to begin my Olympic marathon of stationary feats a few hours earlier. For the better too, as there was much to be done! It started rough, as my digital tablet suddenly refused to comply with my digital scribing needs. Its absence might have meant more time available, time not lost to enjoying or perfecting my production--but time was lost nonetheless trying to get it to work as it has and should (by all reasonable expectations), and that was time lost without even the end result of a better picture, mind. Eventually reason won over and I resigned myself to the mouse for my work. While we're here, let me opine that the mouse was a brilliant introduction to the world of computers 25 years ago--today it's just a sad and unavoidable display of our deficiency in interacting with computers. Even worse, people with money enough to move markets have been convinced of the absurd notion that touch interfaces are superior.
That's the point where I might have written "but I digress," but didn't; to clarify, I did digress, briefly (and continue to do so), but did not make the matter explicit in the sentence prior--I was saving it for this sentence. Moving on, I decided to embrace the uniformity that electronic mice encourage by creating a virtual milieu of very uniform structures for my next game. For this strange place I envisioned a purple sky and square hills covered in blue grass, but attempts at both proved unsatisfactory. Instead I went for square hills covered in arbitrary textures. The result certainly was strange.
With my patience presently waning just as it was then, I moved on to a mildly meaningful but voluminous task (thusly giving a questionable but satisfying notion of productivity), partially akin to sorting books, for my group's body of computer code. It was with irony then that I moved on to some real work of debugging, for which several hours of careful reading returned very little productivity. At some point near there, others in my group awoke (as normal people are expected to do) and began to make their own contributions to our code, starting with the goal of the illusion of a completed project, with any progress thereafter for good measure. As the only person able to produce visual elements (or the only person foolish enough to readily volunteer), I then spent most of my time drawing (or mousing) graphics. Of course we had way too much to do, and even in a rush I have a hard time producing insufficient material (unless that's my intent from the start, but lack of time isn't sufficient to incite acceptable intent, for whatever reason), so I spent a lot of time thinking that I should really just leave it, whatever it was, how it was, but that it really needed to be fixed, and so on. Perhaps the highlight of the bend was another event which surpassed its image in the stretch: once again, I managed to do quite an amazing thing in the final moments of the ordeal--I made an entire game, from drawing to coding, in about half an hour. In the stretch it was much the same in an hour, but for whatever reason so much more thrilling. If anything, back then my modifications to the code were rapid, foolish dashes of adventure into the unknown, whereas now my understanding of the code was such that my programming amounted to copypasta with a few lines changed to get new images.
Nonetheless the result was a game that functioned to some degree (don't try anything other than moving left and right), and it met the final requirement we naively set for ourselves long ago of having 6 total games. In reality there were two types of games with slight variations between them, but otherwise obviously the same. Though our result was a little rough, it could be cleaned up in a day or two, so it wasn't that bad. While only having two gameplay mechanics is less entertaining in the long run, I think it was sufficient for our purposes.

Tuesday, November 24, 2009

Could be a good deal...

Yesterday I got an email regarding the Department of Energy computational science graduate fellowship. Normally I'd probably ignore such an email, but for whatever reason I took a gander. Inside I found some words (as one might expect), but these were the ones in particular that caught my eye:

Benefits of the Fellowship:

  • $32,400 yearly stipend
  • Payment of all tuition and fees
  • Workstation purchase assistance
  • Yearly conferences
  • $1,000 yearly academic allowance
  • 12-week research practicum
  • Renewable up to four years
Sounds pretty decent, I thought, but what's the catch? This kind of dough doesn't usually come without some strings (or steel cables) attached, so what is it--a lifetime of indentured servitude? Well, the conferences are required, but they're all expenses paid on top of an extra stipend for attendance, so it's more like a mandatory paid vacation. Same for the research practicum, in which you are required to use massive DoE supercomputers for whatever you want. Notice that when they say "workstation purchase assistance," they mean they will only match the money you put up for whatever high-performance computer you want. In addition, whatever school you attend has to agree not to have you working as a TA or research assistant for more than one semester. Finally, the only non-academic requirement is that you agree to consider job offers from the DoE or its contractors.

As far as I'm concerned, this whole program is the best idea anyone has ever had! The thought of being paid to go to grad school makes me very, very happy. I managed to find the applicant statistics for last year, and it turns out that about 1 in 20 people who applied got in. Assuming equal probability, those odds aren't bad at all; however, that's probably not a fair assumption to make. With benefits like these, it's easy to be motivated to do better in school so that my probability of selection might improve--hence the current time and my working on homework (well, OK, blogging, but motivated or not everyone needs a break now and again). I'm actually fairly confident in my grad school prospects, mainly because of my undergraduate research. This is my second semester of such, and apparently my research advisor likes me enough to propose advising me next semester as well despite being on sabbatical (so that we'll be prepared to "hit the ground running in the summer"). I can't express how grateful I am to have found such a good fit and generally exceptional person to work with... though that statement does do a pretty good job of at least indicating the magnitude of my gratitude. As it stands, it seems I will graduate with 6 semesters of research experience, which, combined with being the student administrator for the CS department's Linux server and a double major, ought to more than make up for some of my less-than-optimal grades. Nonetheless, better grades certainly aren't going to hurt, so back to the books!

Friday, November 20, 2009

HD isn't always HD

For years now HD has been a magic word, and for just as long I've found humor in its use, when not shaking my head at the naivete involved. Everybody knows that you have two options with HD: 720 or 1080. Clearly 1080 is the better option, because it's a bigger number... right? Well, yes and no. These numbers represent the vertical resolution, the number of rows of pixels from top to bottom of the screen. Of course, vertical resolution is only half the picture; for whatever reason the horizontal resolution is left implicit: 720 has a full resolution of 1280x720, 1080 has 1920x1080. In terms of resolution, yes, 1080 is better, but this is a really restricted and possibly misleading analysis. In terms of actual clarity, a vastly more important measure is PPI (pixels per inch). Imagine for instance that the big man on the block has a 60" 1080 HD LCD screen; in his own little world he is really special for having such a ginormous TV with such crystal clarity. But in reality, his neighbor's 20" 1080 HD LCD screen looks much clearer, and the reason is simple: both TVs have exactly the same number of pixels, which means that to fill the extra space the 60" has pixels that are 3 times as wide (with 9 times the area), making them much easier to distinguish from the same distance and making the contrasting areas of the image look blocky and jagged. To further illustrate, imagine another neighbor has a sad little 5" 1080 HD LCD screen--in truth, he is the one to envy! The clarity of such a screen would be astounding: with 440.6 pixels per inch it could draw letters and numbers 1/100th of an inch tall, just about twice the width (diameter) of an average human hair. On the other hand, Mr. Big Man only has 36.7 PPI; the smallest letter his TV could draw would be 1/7th of an inch tall, close to the width (diameter) of a pencil eraser!
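The PPI figures above fall out of simple geometry: the diagonal length in pixels divided by the diagonal length in inches. A quick check in Python, using the screen sizes from the example:

```python
import math

def ppi(width_px, height_px, diagonal_in):
    """Pixels per inch: the diagonal pixel count over the diagonal in inches."""
    return math.hypot(width_px, height_px) / diagonal_in

# The three neighbors' 1080 HD screens:
for size in (5, 20, 60):
    print(f'{size}-inch 1080p: {ppi(1920, 1080, size):.1f} ppi')
```

The 5" screen comes out to 440.6 ppi and the 60" to 36.7 ppi, matching the numbers above: same pixels, twelve times the diagonal, one-twelfth the density.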

The reason I say HD isn't HD is that, as a computer user, I'm accustomed to HHD (higher than high-def)--you probably are too, you just weren't aware of it. Suppose we ignore all this (very relevant) pixel density stuff and focus solely on resolution; the average computer monitor has been capable of better than 720 for a long time. 1280x1024 is the most common computer resolution, and it has 142% the resolution of 720. It is only 63% of 1080, but 1280x1024 is rapidly going out the window--in fact, you can now get a new 22" LCD computer monitor with greater than 1080 HD resolution for $200. This discovery recently surprised me; it seems like a great bargain. I'm a big fan of 30" 2560x1600 monitors, but unfortunately they are tremendously expensive, so in my idle pondering and interest in value metrics I ended up deriving the very simple math to get the numbers above, and a few more that relate to 30" monitors. In short, a 2048x1152 screen has only 57.6% the resolution of a 30", but can be bought for 15-20% of the price. Likewise, if you really want to match the 30" experience, a 24" 2048x1152 monitor will have about the same PPI... and any smaller size with the same resolution will have an even smoother image (higher definition) than the 30". With this perspective it's no longer a great bargain, but an amazing deal.
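For the curious, the resolution percentages above are just ratios of total pixel counts; here's the very simple math, sketched in Python:

```python
def pixel_ratio(a, b):
    """Total pixels of resolution a as a fraction of resolution b."""
    return (a[0] * a[1]) / (b[0] * b[1])

print(round(pixel_ratio((1280, 1024), (1280, 720)), 2))   # vs 720
print(round(pixel_ratio((1280, 1024), (1920, 1080)), 2))  # vs 1080
print(round(pixel_ratio((2048, 1152), (2560, 1600)), 3))  # vs a 30" panel
```

That yields the 142%, 63%, and 57.6% figures quoted above.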

Wednesday, November 18, 2009

need... more... ram... MORE RAM!

Depending on who you're talking to and how much they like to debate semantics, data mining and machine learning are essentially the same thing. The important point is that incredible amounts of data are needed in order to mine golden nuggets of useful information. The Netflix dataset used for the Netflix Prize, for instance, is well over 4GB; to make matters worse, unless one writes a lot of skillfully efficient code, putting it into a data structure takes quite a bit more space. On top of that, raw data isn't very useful unless you also have room to fit whatever models you're trying to construct. Despite my relatively short presence and shorter sentience on this Earth, I remember a time when 4GB was a huge capacity for a hard disk. Of course, these days 4GB will fit on the increasingly outdated optical DVD format... 4GB can even fit on a plastic sliver of flash memory less substantial than a humble dime. In a time of terabyte hard drives costing less than a trip to the grocery store, 4GB seems laughably diminutive. However, there's a significant issue here! Most people know that a computer has several types of memory: RAM and a hard drive (there are more, to be covered momentarily). Why are there two types of memory--why not just use a hard drive? The answer is simple: getting data from a hard drive takes 100,000 times longer than from RAM! While a specially built computer could run with only a hard drive, it would be so unbelievably slow that nobody in their right mind would ever use it. For a good number of tasks, like listening to music and looking at pictures, a hard drive works just fine. The reason is that these things are just read--once the data has been read and used, say the sound it represents has been sent to the speakers, it can be thrown away.
However, the more important, invisible bits of data that allow a computer to run are most often handled very differently: once read and processed, the results are stored so that they might be used later. Imagine, for instance, that you have a counter (counters are extremely common in computers and programming) that counts the number of mouse clicks. If the processor were to take the stored counter, add one, and throw the result away, then the next time it read the counter it would get the number the counter started at. If you have a program that displays some message when you click 10 times, the message will never get displayed. Obviously, in order for your program to work, the CPU must be able to remember how many clicks have happened, so it must both read and write. For this simple example, the time it takes to read/write from a hard drive is fine--even the fastest human clicker is inconceivably slow compared to the inner workings of a computer. However, if this count is something that is read/written millions of times a second, the time it takes to access the hard drive becomes an incredible bottleneck. In fact, this kind of situation is extremely common (hard drives are the biggest bottleneck in a computer), hence why we have RAM; computers simply need a place that can be accessed very quickly in order to work fast enough for us not to prefer watching grass grow. For a bit of extra credit, let me point out that as far as the CPU is concerned, even RAM is dreadfully slow. See why after the jump.
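The click-counter scenario can be sketched in a few lines of Python--a toy illustration of read-modify-write, not how an operating system actually tracks clicks:

```python
# Toy model of the click counter: the incremented value must be written
# back to storage, or the program can never reach its threshold.

def run_clicks(n_clicks, write_back=True):
    count = 0
    for _ in range(n_clicks):
        result = count + 1     # read the counter and add one
        if write_back:
            count = result     # store the result for next time
        if count >= 10:
            return "message displayed"
    return "no message"

print(run_clicks(10))                     # write-back: threshold reached
print(run_clicks(10, write_back=False))   # result thrown away: never fires
```

With the write-back, ten clicks reach the threshold; without it, the counter reads as zero forever--exactly the failure described above, and exactly why the storage holding such values needs to be fast.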