Saturday, January 16, 2010

2 photos and a fabrication
Time keeps on slippin'

I like to think about really mind-bending things, but even more than that I like to think about real mind-bending things. For instance, I'm not sure I'll ever be comfortable in my understanding of the fact that the further we look into space, the further back into time we look. This is a legitimately crazy thing--it means that given a powerful enough telescope we could watch the creation of our own universe. As impossible as that sounds, it's more or less correct; in fact, we have telescopes powerful enough to look inconceivably deep into spacetime, and we have actually captured the direct aftermath of the birth of our universe. This aftermath is known as the cosmic microwave background radiation, and its discovery earned a Nobel prize. No matter which way we look, the furthest we can see is the CMBR; it surrounds us. This is interesting because the best theory of the origins of the universe, the Big Bang theory, posits that the universe started in an extremely dense bit of space and expanded from there. But how much sense does it make that no matter which way we look, we end up looking at that little bit of space? In one sense, the CMBR is a sphere that surrounds the universe. In another sense the universe surrounds it, as the universe grew from it. Either way it doesn't make any sense.

Not everything that throws us for a loop needs to be on a universal scale. Recently I read an article from NewScientist titled "Timewarp: How your brain creates the fourth dimension," which I found to be nothing short of profound. As you might gather from the admittedly bad title, the article is about our perception of time, which is something I had (surprisingly) never really considered before. Early on there is a sentence that nonchalantly flicks off a few words: "Time... is much weirder than we think it is."

Invitingly audacious, isn't it?

The whole article is well worth reading, as it shows some ingenious experimental methodology and keen insight regarding something as potentially slippery as temporal perception. One thing that stood out was research apparently showing that listening to a click track at 5 clicks per second (300 beats/minute) for 10 seconds improved performance in basic arithmetic, memorizing words, or hitting a specific key on a computer keyboard by 10 to 20%.

Research has shown that binaural beats at certain frequencies can entrain certain brainwave frequencies, and it has been suggested that this phenomenon could possibly be used to enhance the performance of the brain. I wonder if binaural beats are somehow related to rapid beats... I know that because of the time delay with binaural beats, it sounds as though there are twice as many beats as usual. Anyway, it will be interesting to see where all that research goes.

Wednesday, January 13, 2010

Intelligent Transportation Systems

In 2004 it took an estimated 6400 megajoules (MJ) to build a typical computer, including a 17" CRT. This works out to 1778 kilowatt-hours (kWh), or about two months of the average US household's electricity consumption. Based on the 2009 US average industrial rate of $0.07 per kWh, and assuming that only electricity was used and at 100% efficiency, $124.46 of the cost of the computer went to energy alone. I reckon this would represent somewhere around 10% of the total cost.

A gallon of gasoline has about 1.3x10^8 joules, call it 138 MJ, meaning the computer would require about 46.4 gallons of gasoline to build. Using a rough average of current prices, $2.70 per gallon, this means about $125 worth of gasoline. Note that 1 gallon of gasoline holds about 36.6 kWh.

Alternatively, the current average residential rate for electricity is $0.12 per kWh, meaning the energy cost to build the computer would be $213.36. Notice that a mere change of 5 cents in the cost of a kWh nearly doubles the end cost of the energy, which would most likely be reflected in the purchase price. It's important to recognize that energy and its cost, whether from gasoline, electricity, or beyond, are embedded in all facets of our modern lives. In other words, if the price of gasoline goes up, the price of everything goes up. Of course most of our electricity is generated from coal and natural gas, so the price of gasoline doesn't seem directly related to building a computer. Unfortunately that's rather short-sighted, as gasoline is required in order to move the computer parts to and fro, not to mention to transport the coal to the power plant to begin with.
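The arithmetic above reduces to a few lines; all figures are this post's estimates (6400 MJ embodied energy, 2009 rates, 138 MJ per gallon), not authoritative data, and rounding at different steps shifts the cents slightly.

```python
# Sanity-checking the embodied-energy arithmetic above.
# All figures are the post's estimates, not authoritative data.
MJ_PER_KWH = 3.6

embodied_mj = 6400.0                     # MJ to build a PC + 17" CRT (2004)
embodied_kwh = embodied_mj / MJ_PER_KWH  # ~1778 kWh

industrial_cost = embodied_kwh * 0.07    # 2009 US industrial rate, $/kWh
residential_cost = embodied_kwh * 0.12   # US residential rate, $/kWh

gallons_equiv = embodied_mj / 138        # ~138 MJ per gallon of gasoline
gasoline_cost = gallons_equiv * 2.70     # at ~$2.70/gallon

print(round(embodied_kwh))       # 1778
print(round(industrial_cost))    # 124
print(round(residential_cost))   # 213
print(round(gallons_equiv, 1))   # 46.4
print(round(gasoline_cost))      # 125
```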

Clearly, if the cost of both gasoline and electricity were to rise by a nickel, we should naturally expect everything we buy to become quite a bit more expensive. It isn't difficult to see that this in turn would most likely have dire economic consequences. This is why there's so much buzz about energy: the extreme consequences of demand outstripping energy supply readily justify extreme evasive efforts.

Imagine it were discovered that a meteor sufficient to absolutely obliterate Earth was headed straight towards us with a 100% probability of collision. Our dependence on gasoline is kind of like that. Buying a Prius would be like building a large bomb shelter: it would show that you probably realize there's some kind of problem, but that you nonetheless have absolutely no understanding of its magnitude. Do you know how much energy it takes to turn bits of iron buried in the Earth into a shiny new Prius? According to an average figure per car (not the Prius specifically) given by Toyota, around 22,519 kWh, or 22.5 megawatt-hours (MWh)--the rough equivalent of 615 gallons of gas. This means that driving a Prius 31,000 miles uses about the same amount of energy as building the thing to begin with! 22.5 MWh would power the average house for over 2 years; it's quite a lot of energy.
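The break-even math can be sketched out quickly; the 22,519 kWh figure and 36.6 kWh/gallon are from this post, while the 50 mpg Prius figure is my own rough assumption.

```python
# The Prius break-even math from the post; 50 mpg is a rough assumption.
KWH_PER_GALLON = 36.6

embodied_kwh = 22519                            # Toyota's per-car average
gallons_equiv = embodied_kwh / KWH_PER_GALLON   # ~615 gallons
prius_mpg = 50                                  # rough assumption
breakeven_miles = gallons_equiv * prius_mpg     # ~31,000 miles

# The used-beast comparison: 9,000 miles at 15 mpg
beast_gallons = 9000 / 15                       # 600 gallons, still under 615

print(round(gallons_equiv))     # 615
print(round(breakeven_miles))   # 30764
print(round(beast_gallons))     # 600
```

The beast burns 600 gallons over those 9,000 miles, which is still less than the 615-gallon energy equivalent sunk into manufacturing the new Prius before it has moved an inch.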

One implication is that by buying a used car instead of a Prius you are avoiding the use of the equivalent of 615 gallons of gas--buying a used 15 mpg beast and driving it 9,000 miles uses less gas than a new Prius with 0 miles on it, making the beast more sustainable and conscientious up to that point. I realize I always pick on the Prius, but I don't mean to be too disheartening; the Prius is one of the better options available, even if I think it's not as extreme as it should be. Anybody who buys a new Prius out of legitimate environmental concern is now obligated to drive that car into the ground.

New cars aside, one Mythbusters experiment showed a 39% increase in fuel efficiency from drafting a big rig--driving 10 feet behind it. Assuming every car could always draft in such a manner and increase efficiency by 39%, well then the yearly consumption of gas would decrease by a monumental 39%. That's a big assumption, but there's one way it could be realized, and that's with autopilot.

We can't all draft all the time because driving 10 feet behind anything at almost any speed is very dangerous, and the reason is simply biological: it takes a measurable and substantive amount of time for information to traverse the nervous system, a phenomenon commonly referred to as reaction time. When you see brake lights, the light must trigger action potentials in your retina, which travel into the brain. Once the signal is processed, another is sent down the looong path (compared to microscopic neural cells, inconceivably long) to your foot, telling it to press the brake pedal. If a truck moving 60 mph slams on the brakes with you 10 ft behind, that reaction time is simply way too slow and it's game over. With autopilot, on the other hand, brake lights aren't even necessary: the computer in each vehicle would be in constant communication with the cars in front and behind; the vehicles could be 10 ft apart, 2 ft apart, even physically connected like a train without any problem. I imagine the optimum arrangement would be a physical connection, for a number of reasons. Of course, if all drivers were computers, the brakes themselves would hardly be needed, especially on the freeway. If every car knows the status of every car around it--planned movements, power characteristics and beyond--then deceleration could be handled by otherwise-wasted air friction as appropriate, perhaps to allow a car to enter the train, which is itself a task much easier for computers than humans.
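To put a number on why a 10 ft gap is hopeless for a human: at highway speed you cover a lot of ground before your foot ever moves. The 0.25 s reaction time below is my illustrative assumption, not a measured figure.

```python
# Distance covered during driver reaction time at highway speed.
FT_PER_MILE = 5280
S_PER_HOUR = 3600

speed_mph = 60
speed_fps = speed_mph * FT_PER_MILE / S_PER_HOUR   # 88 ft/s
reaction_s = 0.25   # illustrative assumption for brake reaction time

closed_ft = speed_fps * reaction_s
print(speed_fps)    # 88.0
print(closed_ft)    # 22.0 -- more than twice a 10 ft following gap
```

Before the brain-to-foot round trip even completes, the car has closed more than double the entire following distance.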

It isn't hard to envision traffic lights fading into irrelevance as well; indeed, I doubt it would make much sense to sit at an intersection when precise control and rapid, traffic-omniscient computer communication would allow cars of all headings to pass through synchronously. Sure, it will take a while to get used to constantly missing that other car by inches, but abandoning the familiar start/stop/wait process will yield tremendous fuel savings, as it is the most inefficient part of driving and the reason the distinction between city mpg and highway mpg exists. Accordingly, the pace of society will see a new and considerable boost as not only the time between locations diminishes, but we are also freed to spend that time doing something other than driving. Not only will we get places fast; ambulances, police, and fire trucks will be able to reach their destinations in the minimum possible time. If that weren't enough, we should expect to all be noticeably wealthier as our expenditure on gas shrinks, car insurance disappears, and, as mentioned above, practically everything drops in price along with energy costs. It's such a win for everyone it feels like cheating, but all of that is just the start.

The first thing people say when they hear of computers or robots driving cars is "but that sounds so dangerous, it would never be safe enough, I would never trust it!" Well, the bleakness of reality readily illustrates the absurdity of such a thought. Think of it this way: the autopilot system could have 5 million accidents a year and that would still be a huge improvement over humans driving cars--there were around 6.4 million car accidents in 2005. 100 people could die every single day in computer-driven cars and it would still be safer, because 115 people are dying every day in the current system. One hundred and fifteen people sure seems like a lot, doesn't it? Well, consider that 3,303 people died in car accidents in the month of September, 2001. That month is and always will be bitterly remembered solely for the terrorist attacks that felled the Twin Towers, acts that meant the death of 2,819 people. There is no doubt that 9/11 was a tragedy, but so were 3,303 car-accident deaths. Death by car accident and death by terrorist attack are fundamentally similar in that the victims of either are generally neither expecting nor deserving of the outcome--incidence is practically random. Just because the first figure elicits strong memories and the second is unfamiliar doesn't make the former any more tragic! Personally I'm inclined to think that every person is more or less equally valuable (namely, invaluable) and thus that each person's death is equally tragic. That being the case, the 2,819 terrorism-related deaths on 9/11 are quantitatively about 85.347% as tragic as those due to car accidents in that same month. Alternatively, if we were to assume that only the death of a relative or dear friend qualified as measurably tragic, then the majority of people would see that 2,819 random strangers and 3,303 random strangers are pretty close to each other, and we might expect to estimate their relative tragedy as similarly near.
Objectivity aside, you would have to be colder than cold to consider 3,303 lives lost any less tragic than 2,819 lives lost, regardless of the details. These were all people who could have been you or me, yearning to be alive just like you and me: husbands, daughters, mothers, brothers... neighbors, friends and mentors; they were real people!

So it's established: there were two significant tragedies in the US in September, 2001. Now what? Perspective: the 3,303 road deaths in September were actually fewer than in each of the two months before and after, which makes five tragic months in a row. If you figure that anything over 2,000 deaths is sufficient to be labeled tragic, every single month in 2001 was a tragedy considering car accidents alone... 37,862 people died. Every single year from 1994 to 2008 was a tragic year, with an average of 37,500 road deaths per year. 1994 is the earliest data I have, but I'm willing to bet the numbers don't improve much going back further. Over the 15 years spanning 1994 through 2008, 562,712 people died in car accidents. If instead of happening over 15 years it happened in one day, that day would be about 199.6 times as tragic as 9/11--like the events of 9/11 replayed 199.6 times in one day. 562,712 is 2.5 times the total number of people who died from the atomic bombings of Hiroshima and Nagasaki combined. Know that those weren't the most devastating, though--the strategic firebombing of Japanese cities killed around 500,000 people, inconceivable yet still fewer.
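The comparisons above reduce to simple arithmetic on the figures cited in this post (the daily figure of 115 in the earlier paragraph comes from a single recent year's higher total, while the 15-year average works out a bit lower):

```python
# Checking the arithmetic against the post's FARS-derived figures.
total_1994_2008 = 562712   # US road deaths, 1994 through 2008
years = 15

yearly_avg = total_1994_2008 / years   # ~37,514 per year
daily_avg = yearly_avg / 365           # ~103 per day

sept_2001_road = 3303      # road deaths, September 2001
sept_2001_attacks = 2819   # 9/11 deaths

print(round(yearly_avg))                               # 37514
print(round(daily_avg))                                # 103
print(round(sept_2001_attacks / sept_2001_road, 5))    # 0.85347
print(round(total_1994_2008 / sept_2001_attacks, 1))   # 199.6
```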

Cars driven by people are as deadly as world wars, if not deadlier.

Then how dangerous would it be? Because an intelligent transportation system would need to be implemented everywhere all at once--a massive project--breadth and depth of testing at all stages is a certainty. To start, there have been decades of dedicated research on this specific issue, and the state of the art is amazing (see DARPA's Grand and Urban Challenges). Given the talent inevitably attracted to exceptional challenges (as top engineers were to NASA), a category for which this certainly qualifies, I presume each issue arising throughout development would be deftly handled. Finally, I would expect some qualified organization to be intimately involved, dictating the requirements and governing the development to ensure safety and reliability, much as the FAA does with all things aerial. For an autopilot system made properly as thus, I predict fewer than a hundred accidents per year from the very start, and probably no deaths. With such a system the probability of dying in a car accident would go from frighteningly high to somewhere below that of being struck by lightning. The current estimated yearly cost of car accidents is over $230 billion, so... cha-ching! There's an extra $229.98 billion floating around. Nonetheless we should expect the system to improve over time, transforming cars from the most dangerous form of transport into the safest.

Optimally, the typical commuter car should be prepared for the transition by being made small and ultra-light, with aerodynamics engineered in terms of chains of cars. The majority of cars should seat one passenger, since most often a car carries only one person, and empty seats mean wasted energy. With standardized interfaces and characteristics, other vehicle forms would fulfill the need for cargo haulers, high-capacity vehicles, and so forth. Ideally vehicles would be public property, eliminating the need for a family to have multiple vehicles for commuting and family outings, but realistically this is the US and people want to own the things they use. Regardless, thanks to the reduced complexity and altogether more efficient vehicle design, coupled with energy-efficiency savings, a family could afford to own a number of vehicles which nonetheless add up to a fraction of the energy and materials cost of the present steel monstrosities, and which might even fit in the same amount of storage space. Alternatively, a sufficiently large platform could allow for modular passenger compartments; though the platform size would be less than optimal for single passengers, needing only one drivetrain would decrease materials consumption. The natural follow-on is that modular drivetrains could be used instead of modular passenger compartments.

The aforementioned efforts combined would make for an increase in efficiency so marvelous that domestic oil production would actually be sufficient for the first time since the 70's, when it peaked. Since we're making a whole new concept of car, it would make sense to complete the metamorphosis: ditch internal combustion for electric drive, pave the road with solar cells, and oil becomes practically irrelevant for the first time since the second industrial revolution. Rather than carry around the really heavy main batteries, leave them stationary and build contact strips into the road so that cars can zip around like full-scale slot cars. Relatively lightweight backup batteries would still be carried so that in the case of main power failure the vehicle could still maneuver and communicate safely. With the sum of these modifications, we should expect our busiest roads to give the impression of losing much of their normal traffic--in reality, the same road may carry even more traffic, only seeming less because more cars fit in less space for less time. Each intersection would know about every car planning to traverse it from the earliest possible moment, and would assign each car a set of parameters to use for traversal, including possible alternate plans. Each car would then communicate with every other car assigned to the intersection around the same time to verify that everything works out--a sanity check independent of the intersection. For example, suppose two chains of several cars each plan to travel east and north through the same intersection at the same time. The intersection may dictate that both chains enter the intersection moving 80 mph, the first at 5:00:00 and the other at 5:00:04. The chains verify together and find that they will pass within 6 inches of each other, but that this is an acceptable margin given the wind conditions and other factors.
The plan is confirmed with the intersection and each car passes through, deviating a few hundredths of an inch from its predictions--deviations which are then incorporated back into the prediction model distributed across the whole network. Now suppose four very long chains, travelling in all four directions, are approaching the same intersection. This time the intersection would probably dictate that the lead cars split and accelerate through such that at any moment there are 4, possibly 8, cars in the intersection, each one missing the others by a hair. Eventually it is expected that the traffic network will maximize the efficiency of the whole system in unexpected ways. Maybe previously busy intersections will be used as though there were no crossing at all, or all but a handful of wide, long, straight thoroughfares will fall into relative disuse.
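The reserve-then-verify idea can be sketched as a toy model. Every name and number here is an illustrative assumption of mine, not a real ITS protocol: chains hold time-slot reservations for the intersection's conflict point, and independently check that crossing reservations don't overlap.

```python
# Toy model of the intersection scheduling described above.
# All names and numbers are illustrative assumptions, not a real protocol.
from dataclasses import dataclass

@dataclass
class Reservation:
    chain_id: str
    entry_time_s: float   # when the lead car reaches the conflict point
    occupy_s: float       # how long the chain occupies the conflict point
    heading: str          # "N", "E", "S", "W"

def chains_clear(a: Reservation, b: Reservation, margin_s: float = 0.05) -> bool:
    """Sanity check run by the chains themselves, independent of the
    intersection: crossing paths must not occupy the conflict point at
    overlapping times, plus a safety margin."""
    if a.heading == b.heading:
        return True   # same direction: no crossing conflict
    a_end = a.entry_time_s + a.occupy_s
    b_end = b.entry_time_s + b.occupy_s
    return a_end + margin_s <= b.entry_time_s or b_end + margin_s <= a.entry_time_s

east = Reservation("chain-E", entry_time_s=0.0, occupy_s=0.8, heading="E")
north = Reservation("chain-N", entry_time_s=1.0, occupy_s=0.8, heading="N")
print(chains_clear(east, north))   # True: north arrives after east clears
```

A real system would verify whole trajectories in space and time rather than a single conflict point, but the shape of the check--every pair of participants independently confirming the plan the intersection handed out--is the same.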

I am a driving enthusiast; I really love driving. Many days it seems my highest aspiration is to do laps around Laguna Seca in some kind of ultra-performance four-wheeled vehicle. But despite my pleasure in driving, there is no way I can call the present system workable. It's extremely dangerous, terribly slow, woefully inefficient, and absurdly expensive. The truth is that we have the technology to automate the roads; people have been working on it for decades, and the resulting systems have proven reliable even in novel situations where many humans might otherwise fail. It might not be perfect, but it's much better, and as I've shown we're so terrifically awful at driving that that's not saying much. The transition is ready to happen, and when it finally does our world will simply become safer, faster, better, and wealthier. The only downside is that it can't be done overnight.



Once we finish automating our roads, what's the next revolutionary development? A space elevator. More on that some time.


some sources:

"Energy Intensity of Computer Manufacturing" by Eric Williams, United Nations University
  http://www.scribd.com/doc/4183/Energy-Intensity-of-Computer-Manufacturing
"How much does electricity cost? What is a kilowatt-hour?"
  http://michaelbluejay.com/electricity/cost.html
"How much electricity do computers use?"
  http://michaelbluejay.com/electricity/computers.html
Energy Content of Fuels (in Joules), other useful tables
  http://physics.syr.edu/courses/modules/ENERGY/ENERGY_POLICY/tables.html
"Weekly U.S. Retail Gasoline Prices, Regular Grade"
  http://www.eia.doe.gov/oil_gas/petroleum/data_publications/wrgp/mogas_home_page.html
"Average Retail Price of Electricity to Ultimate Customers by End-Use Sector, by State"
  http://www.eia.doe.gov/cneaf/electricity/epm/table5_6_a.html
"Energy to build a car?"
  http://www.cleanmpg.com/forums/showthread.php?t=18240
"Most and Least Fuel Efficient Cars "
  http://www.fueleconomy.gov/FEG/bestworst.shtml
National Highway Traffic Safety Administration's Fatality Analysis Reporting System
  http://www-fars.nhtsa.dot.gov/Main/index.aspx
Wikipedia - "Intelligent Transportation System"
  http://en.wikipedia.org/wiki/Intelligent_transportation_system
U.S. Dept. of Transportation - "Intelligent Transportation Systems Benefits and Costs, 2003 Update"
  http://ntl.bts.gov/lib/jpodocs/repts_te/13772.html#4.0

Wednesday, December 30, 2009

Wednesday, December 2, 2009

A bend

Release 3 wouldn't be much of a follow up without a few milestones, and milestones (of a sort) I did make. This time the stretch became a bend, which doesn't make sense but for the fact that thereafter follows the break--fortunately I anticipate it to be more of the winter kind than the psychotic kind. The 17 straight hours of desk work seemed pretty significant at the time, but today, upon leaving my obligations (met and otherwise) at the classroom door, I realized that 17 hours was a number unwittingly exceeded. Admittedly, it took a while to push aside enough fuzz to narrow down two sides of time, longer still to then find the difference within that silly base-twelve chronological institution. Fancy dictum and questionable conflations aside, this time I worked for around 21 hours straight. The distinction, though, is probably less than significant, since the prior 17 was constrained by having such a late start--this time I only managed to begin my Olympic marathon of stationary feats a few hours earlier. For the better too, as there was much to be done! It started rough, as my digital tablet suddenly refused to comply with my digital scribing needs. Its absence might have meant more time available, time not lost to enjoying or perfecting my production; instead, time was lost anyway trying to get it to work as it has and should (by all reasonable expectations), and lost without even the end result of a better picture, mind. Eventually reason won over and I resigned myself to the mouse for my work. While we're here, let me opine that the mouse was a brilliant introduction to the world of computers 25 years ago--today it's just a sad and unavoidable display of our deficiency in interacting with computers. Even worse, people with money to move markets have been convinced of the absurd notion that touch interfaces are superior.
That's the point where I might have written "but I digress," but didn't; to clarify, I did digress, briefly (and continue to do so), but did not make the matter explicit in the sentence prior--I was saving it for this sentence. Moving on, I decided to embrace the uniformity that electronic mice encourage by creating a virtual milieu of very uniform structures for my next game. For this strange place I envisioned a purple sky and square hills covered in blue grass, but attempts at both proved unsatisfactory. Instead I went for square hills covered in arbitrary textures. The result certainly was strange.
With my patience presently waning, just as it was then, I moved on to a mildly meaningful but voluminous task (thusly giving a questionable but satisfying notion of productivity), partially akin to sorting books, for my group's body of computer code. It was with irony, then, that I moved on to some real work of debugging, for which several hours of careful reading returned very little productivity. At some point near there, others in my group awoke (as normal people are expected to do) and began to make their own contributions to our code, starting with the goal of the illusion of a completed project, with any progress thereafter for good measure. As the only person able to produce visual elements (or the only person foolish enough to readily volunteer), I then spent most of my time drawing (or mousing) graphics. Of course we had way too much to do, and even in a rush I have a hard time producing insufficient material (unless that's my intent from the start, but lack of time isn't sufficient to incite acceptable intent, for whatever reason), so I spent a lot of time thinking that I should really just leave it, whatever it was, how it was, but that it really needed to be fixed, and so on. Perhaps the highlight of the bend was another event which surpassed its image in the stretch: once again, I managed to do quite an amazing thing in the final moments of the ordeal--I made an entire game, from drawing to coding, in about half an hour. In the stretch it was much the same in an hour, but for whatever reason so much more thrilling. If anything, back then my modifications to the code were rapid, foolish dashes of adventure into the unknown, whereas now my understanding of the code was such that my programming amounted to copypasta with a few lines changed to get new images.
Nonetheless the result was a game that functioned to some degree (don't try anything other than moving left and right), but it met the final requirement we naively set for ourselves long ago of having 6 total games. In reality there were two types of games with slight variations between them, but otherwise obviously the same. Though our result was a little rough, it could be cleaned up in a day or two, so it wasn't that bad. While only having two gameplay mechanics is less entertaining in the long run, I think it was sufficient for our purposes.

Tuesday, November 24, 2009

Could be a good deal...

Yesterday I got an email regarding the Department of Energy computational science graduate fellowship. Normally I'd probably ignore such an email, but for whatever reason I took a gander. Inside I found some words (as one might expect), but these were the ones in particular that caught my eye:

Benefits of the Fellowship:

  • $32,400 yearly stipend
  • Payment of all tuition and fees
  • Workstation purchase assistance
  • Yearly conferences
  • $1,000 yearly academic allowance
  • 12-week research practicum
  • Renewable up to four years

Sounds pretty decent, I thought, but what's the catch? This kind of dough doesn't usually come without some strings (or steel cables) attached, so what is it, a lifetime of indentured servitude? Well, the conferences are required, but they're all expenses paid on top of an extra stipend for attendance, so it's more like a mandatory paid vacation. Same for the research practicum, in which you are required to use massive DoE supercomputers for whatever you want. Notice that when they say "workstation purchase assistance," they mean that they will only match the money you put up for whatever high-performance computer you want. In addition, whatever school you attend has to agree not to have you working as a TA or research assistant for more than one semester. Finally, the only non-academic requirement is that you agree to consider job offers from the DoE or its contractors.

As far as I'm concerned, this whole program is the best idea anyone has ever had! The thought of being paid to go to grad school makes me very, very happy. I managed to find the applicant statistics for last year, and it turns out that about 1 in 20 people who applied got in. Assuming equal probability, those odds aren't bad at all; however, that's probably not a fair assumption to make. With benefits like these, it's easy to be motivated to do better in school so that my probability of selection might improve--hence the current time and my working on homework (well, ok, blogging, but motivated or not, everyone needs a break now and again). I'm actually fairly confident in my grad school prospects, mainly because of my undergraduate research. This is my second semester of such, and apparently my research advisor likes me enough to propose advising me next semester as well, despite being on sabbatical (so that we'll be prepared to "hit the ground running in the summer"). I can't express how grateful I am to have found such a good fit and generally exceptional person to work with... though that statement does do a pretty good job of at least indicating the magnitude of my gratitude. As it stands, it seems that I will be graduating with 6 semesters of research experience, which, combined with being the student administrator for the CS department's Linux server and a double major, ought to more than make up for some of my less than optimal grades. Nonetheless, better grades certainly aren't going to hurt, so back to the books!

Friday, November 20, 2009

HD isn't always HD

For years now HD has been a magic word, and for just as long I've found humor in its use, when not shaking my head at the naiveté involved. Everybody knows that you have two options with HD, 720 or 1080. Clearly 1080 is the better option, because it's a bigger number... right? Well, yes and no. These numbers represent the vertical resolution, or number of rows of pixels from top to bottom of the screen. Of course vertical resolution is only half the picture; for whatever reason the horizontal resolution is left implicit: 720 has a full resolution of 1280x720, 1080 has 1920x1080. In terms of resolution, yes, 1080 is better, but this is a really restricted and possibly misleading analysis. In terms of actual clarity, a vastly more important measure is PPI (pixels per inch). Imagine for instance that the big man on the block has a 60" 1080 HD LCD screen; in his own little world he is really special for having such a ginormous TV with such crystal clarity. But in reality, his neighbor's 20" 1080 HD LCD screen looks much clearer, and the reason is simple: both TVs have the exact same number of pixels, which means that to fill the extra space the 60" has pixels that are 3 times as wide (with 9 times the area), making them much easier to distinguish from the same distance and making the contrasting areas of the image look blocky and jagged. To further illustrate, imagine another neighbor has a sad little 5" 1080 HD LCD screen--in truth, he is the one to envy! The clarity of such a screen would be astounding: with 440.6 pixels per inch it could draw letters and numbers 1/100th of an inch tall, just about twice the width (diameter) of an average human hair. On the other hand, Mr. big man only has 36.7 PPI; the smallest letter his TV could draw would be 1/7th of an inch tall, close to the width (diameter) of a pencil eraser!
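All the PPI figures above come from one formula: the diagonal resolution in pixels divided by the diagonal size in inches.

```python
import math

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixels per inch from a screen's resolution and diagonal size."""
    return math.hypot(width_px, height_px) / diagonal_in

print(round(ppi(1920, 1080, 60), 1))   # 36.7  -- the 60 inch set
print(round(ppi(1920, 1080, 20), 1))   # 110.1 -- the 20 inch set
print(round(ppi(1920, 1080, 5), 1))    # 440.6 -- the 5 inch screen
```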

The reason I say HD isn't HD is that as a computer user, I'm accustomed to HHD (higher than high-def)--you probably are too, you just weren't aware of it. Suppose we ignore all this (very relevant) pixel density stuff and focus solely on resolution; the average computer monitor has been capable of resolution better than 720 for a long time. 1280x1024 is the most common computer resolution, and it has 142% the resolution of 720. It is only 63% of 1080, but 1280x1024 is rapidly going out the window--in fact, you can now get a new 22" LCD computer monitor with greater than 1080 HD resolution for $200. Discovering this recently surprised me; it seems like a great bargain. I'm a big fan of 30" 2560x1600 monitors, but unfortunately they are tremendously expensive, so in my idle pondering and interest in value metrics I ended up deriving the very simple math behind the numbers above and a few more that relate to 30" monitors. In short, a 2048x1152 screen has only 57.6% the resolution of a 30", but can be bought for 15-20% of the price. Likewise, if you really want to match the 30" experience, a 24" 2048x1152 monitor will have about the same PPI... and any smaller size with the same resolution will have an even smoother image (or higher definition) than the 30". With this perspective it's no longer a great bargain, but an amazing deal.
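The percentages above are just ratios of total pixel counts:

```python
def pixel_count(w: int, h: int) -> int:
    return w * h

sxga   = pixel_count(1280, 1024)   # the classic monitor resolution
hd720  = pixel_count(1280, 720)
hd1080 = pixel_count(1920, 1080)
qwxga  = pixel_count(2048, 1152)   # the $200 22" monitor
wqxga  = pixel_count(2560, 1600)   # the 30" monitor

print(round(sxga / hd720, 2))    # 1.42 -- "142% the resolution of 720"
print(round(sxga / hd1080, 2))   # 0.63 -- "only 63% of 1080"
print(round(qwxga / wqxga, 3))   # 0.576 -- "57.6% the resolution of a 30-inch"
```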

Wednesday, November 18, 2009

need... more... ram... MORE RAM!

Depending on who you're talking to and how much they like to debate semantics, data mining and machine learning are essentially the same thing. The important point is that incredible amounts of data are needed in order to mine golden nuggets of useful information. The Netflix dataset used for the Netflix Prize, for instance, is well over 4GB; to make matters worse, unless one writes a lot of skillfully efficient code, putting it into a data structure takes quite a bit more space. On top of that, raw data isn't very useful unless you also have room to fit whatever models you're trying to construct. Despite my relatively short presence and shorter sentience on this Earth, I remember a time when 4GB was a huge capacity for a hard disk. Of course, these days 4GB will fit on the increasingly outdated optical DVD format... 4GB can even fit on a plastic sliver of flash memory less substantial than a humble dime. In a time of terabyte hard drives costing less than a trip to the grocery store, 4GB seems laughably diminutive. However, there's a significant issue here! Most people know that a computer has several types of memory: RAM and a hard drive (there are more, to be covered momentarily). Why are there two types of memory--why not just use a hard drive? The answer is simple: getting data from a hard drive takes 100,000 times longer than getting it from RAM! While a specially built computer could run with only a hard drive, it would be so unbelievably slow that nobody in their right mind would ever use it. For a good number of tasks, like listening to music and looking at pictures, a hard drive works just fine. The reason is that these things are just read--once the data has been read and used, say the sound it represents has been sent to the speakers, it can be thrown away.
However, the more important, invisible bits of data that allow a computer to run are most often handled very differently: once read and processed, the results are stored so that they might be used later. Imagine, for instance, that you have a counter (which are extremely common in computers and programming) that counts the number of mouse clicks. If the processor were to take the stored counter, add one, and throw the result away, then the next time it read the counter it would get the number the counter started at. If you have a program that displays some message when you click 10 times, the message will never get displayed. Obviously, in order for your program to work, the CPU must be able to remember how many clicks have happened, so it must read and write. For this simple example, the time it takes to read/write from a hard drive is ok--even the fastest human clicker is inconceivably slow compared to the inner workings of a computer. However, if this count is something that is read/written millions of times a second, the time it takes to access the hard drive becomes an incredible bottleneck. In fact, this kind of situation is extremely common (hard drives are the biggest bottleneck in a computer), hence why we have RAM: computers simply need a place that can be accessed very quickly in order to work fast enough for us not to prefer watching grass grow. For a bit of extra credit, let me point out that as far as the CPU is concerned, even RAM is dreadfully slow. See why after the jump.
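The click-counter example boils down to a read-modify-write cycle: the updated value has to be stored back, or it's lost. A minimal Python sketch (the 10-click message is the hypothetical program from the paragraph):

```python
class ClickCounter:
    """Counts mouse clicks; the count must be written back or it's lost."""
    def __init__(self, threshold=10):
        self.count = 0          # stored state, both read and written
        self.threshold = threshold

    def click(self):
        value = self.count      # read the stored counter
        value += 1              # modify it
        self.count = value      # write it back -- skip this step and the
                                # counter stays at 0 forever
        if self.count == self.threshold:
            print("You clicked 10 times!")

counter = ClickCounter()
for _ in range(10):
    counter.click()             # prints the message on the 10th click
```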

Thursday, November 5, 2009

The end of scrubbing tubs

As a college-aged male bachelor, I know a thing or two about bathrooms that have gone uncleaned for longer than many people might think possible. To make matters worse, I use a humble castile soap that produces scum with unabashed vigor. Last week, by the combined effort of many unknowable forces, I decided to clean the bathroom, and in the process made a fantastic, incredible, revolutionary discovery (this time not involving microfiber cloths)! I started with the typical futile effort, spraying everything with potent cleaning chemicals and scrubbing, scrubbing, scrubbing with a typical plastic bristle brush. Before long it was clear that all my effort was adding up to nothing. I decided to pull out the big guns and began to seek a green scouring pad. I found one, and was thumbing it when I saw in the same drawer a box of those new-fangled magic "eraser sponges" (also known as melamine foam) that I had picked up on a whim at the dollar store. Feeling my sense of adventure kick in (and not knowing what else I might ever use them for), I decided to grab a magic sponge and give it a try. The results were unbelievable. The soap scum practically rolled off every surface after just a pass or three; I had the whole shower cleaned and sparkling in under 5 minutes! Never before in my life has a shower taken less than an hour, at least, to clean--thus my excitement over this finding. From now on I needn't fear, nor need to clear a day, so that I can clean the shower; maybe, just maybe, it'll get cleaned more often now... no commitment there though. I checked the box of the eraser sponge afterward, and it does actually suggest using it for cleaning the shower, which means someone, somewhere out there actually knew about this beforehand. This is hard to believe; I would have figured the news would spread like word of a Gmail outage (wildfire in this age is the new grass growing, amirite?).
I don't know if they've advertised these for this purpose, since my exposure to commercials is essentially nonexistent, but I'm led to believe that even if they did it would pass unnoticed, and the reason is simple: from what I remember, bathroom cleaning products are depicted in an unconvincing way. I remember them showing what looked like an evenly disgusting tile wall, oddly aesthetic in the precision of its filth, which becomes pristine after just one pass of a sponge. Yeah, sure, everyone believes that. Even with the magic sponge there's some work involved, but I think 30 seconds of someone cleaning a real bathroom in real time with real results would be an amazing commercial. It'd say "Hey, look, this actually works. We're not trying to trick you with magical cartoon scrubbing bubbles." But then again, this is marketing we're talking about, which leads to a strong movie recommendation: How to Get Ahead in Advertising.

More than you ever wanted to know about soap after the jump!

Wednesday, November 4, 2009

A stretch

I am now well and wholly exhausted. Today saw Release 2 for software engineering; yesterday saw the most recent time I got out of bed. Far from a complaint, there's something about staying awake for 36 hours or more that strongly appeals to me. I enjoy the feeling near the end of it, a sort of lightness of being. I enjoy the solitude of the earliest morning/latest night, that short period in which this little city is blanketed by an expansive silence--and oh, those precious moments you can hear the sound of snow falling, a performance so minuscule that only amongst the stillest movements can an audience find itself. Further, with some associative tendrils linking them all, there is a special joy in getting into bed after having not gotten into bed for some atypical stretch. It is almost as if, as time goes on, perception contracts to a focal point of particular lucidity--a larger than normal indefinite fog lying within the perimeter that typically defines the periphery--invoking some new order of perception, in many ways desirable, if only for its provocation of unique insight.

I really enjoy the amazing amount I can accomplish over that time... I have some sense stronger than naught that I'm not really able to hit my stride until 12 or more hours after awakening. Actually, I recall clearly that despite finally managing to awaken early enough for class yesterday, I was in a first-class mental fog nearly all day--it wasn't until around 11 PM that the urgency of the upcoming deadline and the hopeless mounds of work (even worse, the mounds hadn't yet been assembled; by then there was merely a postulation that some mother-lode waited for a few rocks to be overturned) translated into some motivation to begin. To be certain, the thought of a good chance to pace my new furniture played some part, but before long, with some pride of performance and an awkwardly intangible form of irony, the chair disappeared and I became absolutely consumed with work. I worked for 18 and a half hours, from 11 PM to 5:30 PM, and the only thing that stopped me then was the necessity of attending class; even so, I was 15 minutes late for my inability to find a timely conclusion. I wish I could explain exactly why this was the case, but I'm afraid it's the type of situation that even another knowledgeable programmer would have a difficult time understanding. Suffice it to say, through the final pressing hour I managed to do what I would have previously considered impossible from a number of perspectives. That is to say I experienced some minor miracle, greater than merely maintaining consciousness through somnolence: programming a computer to move monkeys and boxes about a screen with bounded futility.

Actually, it was a lot of fun. Throughout the hours I happily dabbled slightly deeper into otherwise foreign but interesting arts: graphic design, audio production, not to mention the manufacture of code with a functional, visual interface.

Tuesday, November 3, 2009

Herman Miller Embody - first reactions

I think talking about "my chair" is banal and way too close to boastful, the egotism inherent therein being a personality characteristic I try particularly hard to avoid (synonyms of boastful read like a list of things I doubt many people aim for as a characterization: arrogant, conceited, pompous, pretentious, etc). With that in mind, I think "the chair" in general is fascinating, and am grateful that I have the opportunity to experience it in real life; though I was hesitant to write this up, perhaps my thoughts will lend insight to someone else.

With all that said, I have to admit, few things will sound more pretentious than what I'm about to say anyway.

First, the Embody has an arresting aesthetic; its form is absolutely captivating, actually inspiring to look at. I don't mean inspiring in the generic, feel-good way--I mean that when I look at it there is a surge of creative, unique thoughts and a sense that the majority of objects we encounter each day are needlessly bland, expressionless, devoid of notability--altogether invisible and uninspiring. You might look at a picture of the Embody and think 'I don't get it, looks like a moderately interesting chair,' and with that I'd agree: a picture of the Embody displays a moderately interesting chair. However, as I've mentioned before, the 2-dimensional projection of a 3-dimensional object loses an incredible amount of information; the chair IRL is a whole different story.

Of course, function is a critical element of design... personally, I think form is just the whipped cream topping of any design. To be sure, a desk isn't much of a desk if you can't use it as a working surface, no matter how beautiful it is--Michelangelo's David is not a desk. On the other hand, as long as you can use it as a working surface, no matter how nauseatingly ugly it is, it'll work as a desk. Thus, it is a good thing that the Embody has function covered. But there are levels of functionality, and true to the reason I chose this particular chair, it seems to have function covered to an exceptional degree; not only can you sit in it, sitting in it is a pleasure. I haven't had the chance to sit in it for one of my 12-hour-straight coding jams yet, and that's the true test, so the full extent of its functionality remains to be seen. As it is, though, it's a pleasure to sit in; it feels something like sitting on a bed. This isn't altogether surprising, as the seat has a system of suspension much like a mattress. It's a wonderful idea, and one that really surprises me for its obviousness, yet absence from every other office chair I know of. The back also has an interesting suspension system, one with less give.

One thing that I love most about the Embody is that despite looking and sounding very complicated, despite a long design process with many prototypes, it is actually surprisingly simple, especially the suspension systems. The composition, strength, elasticity, and formation of the various plastics used is probably fairly involved, but in the end, the shape and intersection of all of them is very natural and efficient. Certainly this was a design objective (easier said than done), but the result is powerful; the Embody looks like an exoskeleton, an extension of the body, and what could be better than a solution provided by nature?

I don't think it's the be-all, end-all of office furniture, but then again I have a strong bias against conclusive permanence, so no chair will ever fulfill that criterion (except possibly an infinitely adjustable "indefinite chair" formed in real time by nanobots). In case you couldn't tell, so far I love it. The only shortcoming I've thought of, an insignificant and unimportant one, is that it doesn't have a headrest. There's no obvious reason why they chose to omit such a thing, but I'm certain it wasn't simple oversight--clearly there was no oversight involved in the design of this chair. The only reason I can think of is that they figured the inclusion of a headrest would provoke a change in the implied posture, with a possibly detrimental impact on ergonomic functionality. Well, I'm sure there is a reason; I'd like to know what it is. My only other complaint is that it took a long 8 weeks to get here.

Sunday, November 1, 2009

A Pictorial Vindication

I have received word that, given the existence of infinite permutations of parallel universes, some people have been considering my recent furniture upgrade as unnecessary. However, in light of what I'm about to show you, I believe any transdimensional rumors will be sufficiently concluded as having no factual foundation. I present my current chair (with bonus Pickles action):


If one happened to feel gifted with a finely tuned, elite aesthetic sense, and that by seeing this image this sense has been dismantled and disfigured, transformed irreparably into an unidentifiable blob of goo, allow me to comfort them: had their sense been incapable of handling this fundamentally evocative display of Form, it wasn't worth a tarnished penny anyway and nothing of value was lost. I suppose they wouldn't see the profound symbolism in the bits of open-cell gray foam that with clinging tenacity remain on the rough plane of misshapen, splinter-spitting plywood, refugees of a forceful division of useless padding from malformed foundation. Nor would they see, I imagine, the ongoing dialog between the thoughtlessly intrusive, precariously balanced seat, mismatched and standing as a contradiction to the firm stability of the back, which with a posture slightly less than vertical seems to mourn the loss of its intended counterpart. Of course it all goes so much deeper, more than I could fit into any number of blog posts.

I will surrender, yes, that I assembled this marvel in a fit of vanity, that I ought to have focused from the start on a more balanced, less perfect design. However, I assure you that the striking, fluid beauty of this piece is complemented by an enhanced functionality; namely, this artifactual collage doesn't make the lower half of my body go numb as its unglorified predecessor did. Frankly, I do admit some amount of guilt for purchasing a replacement for a chair that is otherwise the paragon of design, but I think that it is better served encased in glass as an enduring testament of what can be accomplished with concerted effort and a healthy dose of luck.

In case it sounds as though I've gone completely bonkers, the truth is yes, I have--but only temporarily. Such is the result of staying up all night doing probability homework.

Friday, October 23, 2009

Software Engineering

For my Software Engineering course we are spending the whole semester developing educational software in small groups. Each group was given the choice between math- or spelling-oriented software for 1st to 2nd graders. Our group chose math, since it is inherently easier to deal with numbers than words in programs. The course calls for 3 releases, or waypoints at which we demonstrate our software; since they are in place of what would typically be exams, our software is supposed to meet criteria we determined for ourselves at the beginning of the semester. Thus far there has been one release, and two of the three groups had very visually limited programs, focusing instead on the backend. My group, however, was the complete opposite. This was not a mistake, either. If you know me well enough (or have read enough of my blog), you'll know that I am very critical when it comes to design, and this project is no exception. Initially I was hesitant to work in a group, as I've never handled group dynamics as well as I should, and indeed at the start of the semester it seemed as though the influence of the group was resulting in nothing short of chaos. However, when it came down to assigning the tasks, I ended up with about 90% or more of the work. While that meant a lot of chair time, it also meant that I got to design the foundation of the project without any interference, so I decided to take it as a positive thing.

Until the demonstrations for the first release came, I honestly hadn't even considered the difference between focusing on the back and front end, but in retrospect I think that's because all the back end in the world isn't going to make a first grader want to play your game! Minutes prior to our group's presentation, I jotted down a few ideas about my design objectives. What I came up with was that calling what we were making a game failed to represent the magnitude of our undertaking; what we were doing (in theory) was taking part in the earliest exposure and formation of the foundation of children's experience with mathematics. Framed in this way, it is easy to see that designing our software as well as possible is critical. What we want to do is create substantive, positive, memorable experiences involving math, in such a way that in the future these children might not run in fear from math (as many of us do now) but instead view it as an exciting and fun thing. Thus my aim in constructing the front end was to craft sensational experiences... I got a few laughs when I said that, but my guess is that people misunderstood: by sensational, I mean of the senses. Given that we are limited to merely 2 of the 5 senses, it is extremely important to emphasize those senses, while being mindful that the experience doesn't get so chaotic that it is overwhelming and stressful, as this would subvert our purpose.

I was part of one of the very first generations to have computers available in elementary school; Jordan and I were reminiscing the other day about playing on those green-screen computers with the floppy disks that were actually floppy. Thus, my design objectives drew heavily on my own vivid memories of using computers to play games in school. For instance, the music in SimCity 2000, first experienced at school, had a profound effect on my entire life. I still remember receiving SimCity from my grandparents as a Christmas gift some time shortly thereafter; obviously it was cause for substantial excitement, given that I can remember the event despite it having happened more than half my life ago. Accordingly I made certain to pick suitable music (light ambient) and sound effects... and ours was the only group to have any sound at all. Can you imagine, completely ignoring half of the senses you're given to captivate first graders?? Anyway, there's no good way to put our game online yet (there might be nearer the end of the project), so here are some screenshots:

Not only did I handle designing the game mechanic, choosing the music, and actually programming the game, but I also did all the art. With the exception of the menu screen image (from wikicommons) and a few standard fonts, I drew everything, even the animated monkey. I probably don't have a future in professional graphic design, sure, but I'm quite proud of what I've done--particularly given that I did essentially everything.

The reason I'm bringing up my homework is that release 2 is coming up quickly--we basically only get 2 weeks to put it together, and I set the bar high enough for the first release that I've got a lot of work to do! Fortunately the work was divided up a bit better this time, so I'm mostly in charge of the graphics. Nonetheless, it's 3:45 AM and I've been drawing for about 12 hours straight, so it's time for a break. Yes, I'm really wishing my chair were here... and now you can see where all my complaining about design and chairs came from; you try sitting in the same thoughtlessly designed chair for 12 hours and see if you don't get a little grumpy! I think that waiting for this chair has probably been the most painful anticipation I've experienced, literally and figuratively.

Moving on, I wanted to mention that I've been spending so much time in GIMP (the GNU Image Manipulation Program; GNU stands for GNU's Not Unix) that I've actually been getting much better images out of it. I've learned a few basic techniques from some well-written GIMP tutorials that have made a huge difference. While I haven't used anything from it for my own purposes, the results in this tutorial on drawing your very own planet starting from just a blank canvas are stunning, especially since the process isn't very complicated. I've also learned how to use a few of the tools better and found some other less-than-obvious features that have helped me get results I'm very happy with:

There's always room for improvement, but I'm excited to see how it all comes together!

Tuesday, October 20, 2009

Economic Doom & Gloom?

I've decided that if I've put the time and effort into writing something moderately interesting, I may as well post it here. Recently I received an email from an acquaintance suggesting with urgency and confidence that, because of the current debt-financed federal deficit and the associated cost of interest, the US economy will probably collapse within the next decade. What a terrifying prospect! For my own peace of mind, and with some hope of bringing another perspective to the grim proposal, I decided to do my own analysis.

First, a few items of business. I'm not an economist by a long shot; the extent of my economic training is an elective course titled Economics as a Social Science, in which I got a C. Next, as clarification, the federal deficit and the federal debt are two distinct things. The deficit refers to the difference between expected yearly income and expenditure in the government budget; it's usually what's in the news, and it's typically "in the red"--otherwise we'd know it better as the federal surplus. The federal debt, on the other hand, is the total accumulated debt, which has been in the red for the majority of the history of the US. According to the US Debt Clock, our national debt is currently very near 13 trillion dollars. This is an unintelligibly large number, obviously, but it's all relative. One of the more useful perspectives on this otherwise ambiguously huge figure is as a percentage of Gross Domestic Product (GDP).
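As a toy illustration of the debt-to-GDP perspective, here's the one-line arithmetic in Python; note the GDP figure is my own assumed ballpark for 2009, purely for illustration:

```python
debt = 13e12    # national debt cited above (~$13 trillion)
gdp = 14.4e12   # assumed ballpark for 2009 US GDP -- illustrative only
print(f"Debt-to-GDP: {100 * debt / gdp:.0f}%")  # roughly 90%
```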

Find the relevant bit of the email and my response after the jump...

Pumpkin Carving!

It's that time of year again, when a good number of North Americans prepare for all sorts of bizarre and hilariously entertaining rituals surrounding the end of October. The extent of my Halloween celebration usually matches that for all other holidays, as in not doing anything, but I have a soft spot for carving pumpkins. For reasons unknown to me, pumpkins seem to be my optimal artistic medium. Honestly I'd prefer a more lucrative medium, but I suppose I'll take what I can get. Despite the joy carving pumpkins brings me, lately I've been forgetting to do it... I can say for certain that since 2005 I've carved two pumpkins, and the only reason I remember is that both of them were memorable. Ok, actually now I'm remembering that technically that's not true, because I carved two in 2005, but they were virtually identical. The concept I went for with those two was some mix of Golem and an angler fish. I wasn't completely thrilled with the outcome (I thought it could've been much better), but it nonetheless won a fairly informal competition on campus, in addition to making it into the yearly selection (for 2006, inexplicably) on ExtremePumpkins.com, probably one of very few sites dedicated to pumpkin carving. Honestly there are people out there who are much better than I am at carving pumpkins, but that's ok with me--I do it because I enjoy it. I don't care if the result is the best or worst pumpkin ever. Enough talk, here's some walk:

Moving on, my latest pumpkin was done last year, and I'm a bit more pleased with how it turned out. One of the things I had in mind was the effect it would have glowing, which came out just as I had hoped.

I confess, in order to get it to light up like that I had to use more than a tealight, but getting the walls thin enough for the tiny bit of light a tealight puts out to show through would have compromised the structural integrity. Also, I'd like to point out that while it wouldn't fool even a bad neuroanatomist, I did do my best to represent the major sulci (fissures). A true representation was out of the question, as a simple matter of time! There are way too many sulci in the brain, at least enough in my own to convince me that there are better ways to while away a few hours. Perhaps if I had used a dremel or other power tool, but this was done entirely with good old manual knives. It's not looking like I'm going to have a stab (pun!) at a pumpkin this year, but who knows, maybe some time will present itself.

Saturday, October 17, 2009

Reproducing Foods

Every so often I find some irresistibly scrumptious item available exclusively at some restaurant nearby. Recently this happened with the "mocha blender" at Einstein Bros. Bagels, and I found myself more or less addicted. Such a habit can become expensive quickly, so I sought to reproduce it at home. The last time this happened was with a smoothie from Jamba Juice, which was easy to reproduce almost exactly given that they put all the ingredients together right in front of you. This one, however, was going to be a bit of a challenge, because the ingredients as put together in view consisted of ice, Hershey's chocolate syrup, and some liquid poured from a generic carton. My less-than-trained gustatory instinct pointed to most of the desirability coming from the texture, unusually velvety for a smoothie--closer to a milkshake, which it definitely isn't. The obvious next step was to seek nutritional information, which I found after a quick google. Unfortunately, due to poor PDF formatting, some portion of the ingredients for the liquid of interest, "cappuccino base," was cut off, but my suspicions were nonetheless confirmed, as several thickening agents were visible: carrageenan, guar gum, and locust bean gum. The use of whey protein probably also plays an important part in the final experience; otherwise it seems to be sweeteners, stabilizers, and the ubiquitous, impossibly ambiguous "natural flavors." As far as these flavors go, I don't think they have much if anything to do with espresso.

Given that I don't have easy access to any of these commercial thickeners, I had to improvise with powdered sugar for its corn starch content. Here's what I've come up with so far, texture-wise it seems pretty close:

8-10 oz. whole milk
Equal part or more ice (a lot).
Espresso to taste
Hershey's chocolate syrup to taste (probably about 2 Tbsp)
1/2 scoop whey protein
2 Tbsp powdered sugar
1 Tbsp granular sugar

This is the result of only my second attempt, so there are probably improvements to be made. I think the most promising avenue is the addition of some salt to drop the freezing point of the mixture. The flavor is still way off, but the only thing I can think of to make it closer is removing the espresso altogether.

Thursday, October 15, 2009

Thoughts regarding the poor sense of probability

As a homework assignment for my probability class, we were asked to respond with our thoughts concerning the following TED talk.

In the interest of availability, persistence, and my efforts not going to waste (on the off chance someone reads my blog at some point), I've reproduced my responses after the jump with minor edits so they might make sense outside of the discussion amongst classmates.

Saturday, October 10, 2009

Nissan Succumbs to Logic

Making the rounds on the web is a new Nissan Land Glider concept vehicle. My opinion is that this represents the first indication of a correct step towards a sustainable near-range vehicular platform from a major automobile manufacturer. Included with all the sites discussing it are a few pictures and the following video (which has a very interesting choice of music with what I'm quite certain is the avant-ambient work of Steve Roach):

Get more after the jump!

Thursday, October 8, 2009

Telescopes in Space

At first the idea of a telescope floating around in space sounds absurd, but any marginally knowledgeable astronomer can profess that it's a fantastic idea. Astronomy at the most fundamental level is the study of space--everything and anything that's not Earth--and it's one of the oldest realms of intrigue known to humankind; it was popular long before the scientific method wandered onto the scene, despite being very much a scientific pursuit. On one hand, that space is an old interest isn't surprising--anyone who has turned their sight to the sky on a clear, dark night knows exactly why. A gaze into what might as well be the infinite unknown, the act itself as simple as a glance at our own hands, has a way of inspiring speechless profundity in even the most uninterested amongst us. On the other hand, our primal fascination with space is surprising given its distance: it is simply far removed from our experience, and to the naked eye relatively bland in its expansive empty darkness, excepting the occasional tiny point of light. I find it interesting that this practical void drew fascination more readily than the exceptionally vibrant and astonishing diversity of phenomena on Earth, which we can easily approach and examine. I suppose it's another case of obscene acclimation leading to an almost humorous misplacement of gratitude (or the frog in slowly heated water, though I'm not a fan of the literal part of the notion when put that way). Nonetheless, space is a fascinating place, especially when explored with our modern, technologically augmented senses--the subject of this post.

As it turns out, Earth is a lousy place from which to explore everything that's not Earth. The telescope, primary instrument of astronomers, is often incapacitated by the humble cloud, and it is increasingly difficult to find a spot where light pollution (light from the ground that obfuscates the much fainter light from billions of miles away) isn't a problem. But even on the highest, most remote mountain on the clearest night, a telescope on Earth is substantially limited by a variety of factors--and thus the idea of a telescope in space. Space telescopes were proposed by at least the 1920s; the first, Hubble, was funded in the '70s but took about twenty years to get into space, launching in 1990. Of course, 20 years from paper to space is ok by me, given that it's a hulking monstrosity, nearly 25,000 lbs of technical wizardry. It might have launched as early as 1986 were it not for the Challenger disaster, which put Hubble into cold storage--to the tune of $6 million a month, not your everyday storage unit. Nonetheless, the time investment seems to have paid off, as the Hubble is very near entering its 20th year of functionality.

Despite the nearly 20 years of development, shortly after launch the images Hubble transmitted revealed a serious problem: their quality was far below expectations, to the point that the telescope performed little better than its ground-based counterparts. Before long it was discovered that the main mirror had been shaped incorrectly. A telescope depends almost wholly on the precise shape of its main mirror, and the precision of Hubble's is astounding: it was perhaps the most precisely figured mirror ever made, deviating from its intended curve by no more than about 10 nanometers. In other words, the shape was off by at most a length roughly 40 times shorter than the shortest wavelength of visible light (violet, at about 400 nm). For some perspective, an ordinary light microscope cannot resolve features much smaller than the wavelength of visible light, no matter how powerful its lenses: light simply diffracts around details that small rather than bouncing cleanly off them and back into our eyes. So given a mirror so amazingly precise, how could it possibly have been so bad? Well, the mirror was very precisely manufactured to the wrong shape, leaving it with a classic optical flaw known as spherical aberration!
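The "40 times shorter" claim is easy enough to check; this quick sketch uses only the two numbers stated above:

```python
# Back-of-the-envelope check of the numbers in the paragraph above.
mirror_tolerance_nm = 10     # max deviation of the mirror from its intended curve
violet_wavelength_nm = 400   # shortest wavelength of visible light

ratio = violet_wavelength_nm / mirror_tolerance_nm
print(ratio)  # 40.0 -- the tolerance is 40x smaller than violet light's wavelength
```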

Here's a question: how do you fix a 2.4 m (nearly 8 ft) diameter mirror that took 5 years to manufacture, stuck in the middle of a technological marvel hurtling through space at 17,000 mph? Two backup mirrors had been made, but replacement wasn't an option. Fortunately, Hubble had a saving grace, a unique design choice: it was built so that it could be serviced by astronauts. After extensive analysis of the problem, a surprising solution was conceived: new instruments, something like the chip in any digital camera, would be specifically designed to be flawed in a way that was the exact anti-flaw of the mirror, thus cancelling out its effects! It reminds me very much of doing the same thing to both sides of an equation in math; you can do whatever you want, as long as you do it to both sides (note that this isn't always true). This story is one I find informative and inspiring; I hope you can find similar value in it. I also recommend taking a look at the Hubble Space Telescope page on Wikipedia, as there's a lot more generally interesting material there. Surprisingly, Hubble is just one of around 100 space observatories past, present, and future: roughly 45 have been terminated, roughly 15 are planned, and this year alone stands to see the launch of 8 new observatories!
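The anti-flaw idea can be sketched with a toy calculation (a made-up multiplicative "flaw", nothing like real optics): if a distortion is known precisely, a second stage built as its exact inverse cancels it out.

```python
# Toy sketch of the "anti-flaw" idea -- NOT real optics, just an analogy.
# If a distortion is known precisely, a second stage built as its exact
# inverse cancels it, much as the corrective instruments were designed
# around the mirror's measured error.

flaw = 1.07              # hypothetical, precisely measured distortion factor
anti_flaw = 1.0 / flaw   # corrective stage: deliberately "flawed" the opposite way

def flawed_mirror(signal: float) -> float:
    """The mirror distorts everything it reflects by the same known factor."""
    return signal * flaw

def corrective_instrument(signal: float) -> float:
    """The new instrument applies the inverse distortion before recording."""
    return signal * anti_flaw

original = 42.0
recorded = corrective_instrument(flawed_mirror(original))
print(abs(recorded - original) < 1e-9)  # True: the two flaws cancel
```

The key requirement, in the toy version as in the real one, is that the original flaw be characterized exactly; the fix only works because the mirror's error, however wrong, was wrong in a very precise and well-measured way.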

Saturday, September 26, 2009

Chairs and Design

As a computer "power user," aka computer geek, I spend somewhere near 95% of my waking time at home sitting at my desk. For a few years I had a pretty good chair, snagged for free from an unused office. However, with the extreme use it received, it gradually and literally fell to pieces. Though I did my best to keep it in functioning order, which near the end of its life involved holding it together with rope, it finally gave up the ghost when bolts irreparably sheared and welds failed. I moved to a backup chair that had primarily been used by Pickles (who wasn't so happy about me stealing her chair), but as time has passed the chair has proven wholly unfit for sitting, often feeling more like a torture device than furniture.

For one reason or another, functionality and office chairs are in most cases two unrelated concepts; I have been looking for a new chair for years but never found anything close to adequate. As far as I can tell, the design of office chairs starts and stops at the notion that there is some sort of surface with dimensions such that any person so inclined could do something resembling "sitting," with the result that any rudimentary design meeting this limited criterion can pass as a chair. Anyone who sits as much as I do can attest that much more is involved in what qualifies as a chair... anything less is simply a surface that can be sat upon (regardless of whether sitting upon it is a good idea). Considering the depraved state of computer furniture thus described, for the longest time my intention was to design and construct my own chair, just as I did my desk. However, before long I realized that manufacturing a chair is far less feasible for an individual than building a desk, and I resigned myself to waiting for a better solution to present itself, which worked until my marginally adequate chair decommissioned itself.

With every passing moment in my backup chair it became clearer that the need for a new chair was desperate; it is never a good sign when your legs fall asleep while sitting in a normal position, nor when they go numb while your butt constantly hurts. At such a point even an end table starts to seem like a superior alternative. Fortunately my waiting appeared to pay off, as a solution presented itself: the Herman Miller Embody, fairly recently introduced as successor to the famed and prestigious Aeron (which was nonetheless eliminated from consideration in my previous seating quest). At first glance the price completely banished any desire of purchasing it; as with all Herman Miller furniture, it cost somewhere near an arm plus half a leg. Nonetheless, every moment spent in a chair unfit for sitting humans (despite being suitable for cats) had me realizing with increasing urgency that an arm and half a leg is cheaper than everything waist down. Still hesitant, I did a bit more research, which proved it to be a viable choice: a 12-year warranty (!!), reports of it being the most comfortable chair people had ever sat in, and finally, a site that for one reason or another had $300 in options available for free. And thus, it was settled.

I really hate spending money (which is not to say I don't enjoy the results!), so this was a difficult thing to do. However, there are a few notions that even a frugal person must keep in mind. First and foremost is the idea that, despite a high entry price, the purchase in question can often prove a far better value over time. Of course this takes research, because there are an incredible number of nauseatingly overpriced products, especially relative to their quality. In this case the 12-year warranty easily dispelled all fears of poor quality. Second, particular emphasis must be placed on products that will receive substantial use; only the most foolish professional house-framing carpenter would buy a hammer out of the dollar bin! To me as a programmer, a chair is just like that hammer: a tool necessary for getting work done with maximal efficiency, which in turn maximizes value. With only these two things in mind the purchase is easily justifiable, but there is another critical point that seals the deal: health. Just as a poorly designed pneumatic nail gun can be the death of a carpenter, a poorly designed chair can quickly harm the health of someone who sits for extended periods; this is why wheelchair cushions are highly specialized, and why bedridden folks must be treated with care (otherwise they develop bed sores).

In my experience, when all the research is done and the intended purchase is well thought out, even a frugal person can spend a chunk of change without feeling buyer's remorse. Indeed, I have never felt an ounce of regret after buying the pricier items I own; when I do feel regret after a purchase, it is always over the cheaper items I failed to adequately contemplate. Anyway, enough blabbing, eye candy after the jump (yes, there's more).