Wednesday, December 2, 2009

A bend

Release 3 wouldn't be much of a follow-up without a few milestones, and milestones (of a sort) I did make. This time the stretch became a bend, which doesn't make sense but for the fact that thereafter follows the break--fortunately I anticipate it to be more of the winter kind than the psychotic kind. The 17 straight hours of desk work seemed pretty significant at the time, but today, upon leaving my obligations (met and otherwise) at the classroom door, I realized that 17 hours was a number unwittingly exceeded. Admittedly, it took a while to push aside enough fuzz to narrow down two sides of time, longer still to then find the difference within that silly base-twelve chronological institution. Fancy dictum and questionable conflations aside, this time I worked for around 21 hours straight. The distinction, though, is probably less than significant, since the prior 17 was constrained by having such a late start--this time I only managed to begin my Olympic marathon of stationary feats a few hours earlier. For the better too, as there was much to be done! It started rough, as my digital tablet suddenly refused to comply with my digital scribing needs. Its absence might have meant more time available, not lost to enjoying or perfecting my production, but time was lost nonetheless trying to get it to work as it had and, by all reasonable expectations, should--and this was time lost without the end result of a better picture, mind. Eventually reason won over and I resigned myself to the mouse for my work. While we're here, let me opine that the mouse was a brilliant introduction to the world of computers 25 years ago--today it's just a sad and unavoidable display of our deficiency in interacting with computers. Even worse, people with money to move markets have been convinced of the absurd notion that touch interfaces are superior. That's the point where I might have written "but I digress," but didn't; to clarify, I did digress, briefly (and continue to do so), but did not make the matter explicit in the sentence prior--I was saving it for this sentence.

Moving on, I decided to embrace the uniformity to which electronic mice are conducive by creating a virtual milieu of very uniform structures for my next game. For this strange place I envisioned a purple sky and square hills covered in blue grass, but attempts at both proved unsatisfactory. Instead I went for square hills covered in arbitrary textures. The result certainly was strange.
With my patience presently waning just as it was then, I moved on to a mildly meaningful but voluminous task (thusly giving a questionable but satisfying notion of productivity) partially akin to sorting books for my group's body of computer code. It was with irony then that I moved on to some real work of debugging, for which several hours of careful reading gave in return very little productivity. At some point near there, others in my group awoke (as normal people are expected to do) and began to make their own contributions to our code, starting with the goal of the illusion of a completed project, with any progress thereafter for good measure. As the only person able to produce visual elements (or the only person foolish enough to readily volunteer), I then spent most of my time drawing (or mousing) graphics. Of course we had way too much to do, and even in a rush I have a hard time producing insufficient material (unless that's my intent from the start, but lack of time isn't sufficient to incite acceptable intent for whatever reason), so I spent a lot of time thinking that I should really just leave it, whatever it was, how it was, but that it really needed to be fixed, and so on.

Perhaps the highlight of the bend was another event which surpassed its image in the stretch: once again, I managed to do quite an amazing thing in the final moments of the ordeal--I made an entire game, from drawing to coding, in about a half hour. In the stretch it was much the same in an hour, but for whatever reason so much more thrilling. If anything, back then my modifications to the code were rapid foolish dashes of adventure into the unknown, whereas now my understanding of the code was such that my programming amounted to copypasta with a few lines changed to get new images. Nonetheless the result was a game that functioned to some degree (don't try anything other than moving left and right), but it met the final requirement we naively set for ourselves long ago of having 6 total games. In reality there were two types of games with slight variations between them, but otherwise obviously the same. Though our result was a little rough, it could be cleaned up in a day or two, so it wasn't that bad. While only having two gameplay mechanics is less entertaining in the long run, I think it was sufficient for our purposes.

Tuesday, November 24, 2009

Could be a good deal...

Yesterday I got an email regarding the Department of Energy computational science graduate fellowship. Normally I'd probably ignore such an email, but for whatever reason I took a gander. Inside I found some words (as one might expect), but these were the ones in particular that caught my eye:

Benefits of the Fellowship:

  • $32,400 yearly stipend
  • Payment of all tuition and fees
  • Workstation purchase assistance
  • Yearly conferences
  • $1,000 yearly academic allowance
  • 12-week research practicum
  • Renewable up to four years

Sounds pretty decent, I thought, but what's the catch? This kind of dough doesn't usually come without some strings (or steel cables) attached, so what is it, a lifetime of indentured servitude? Well, the conferences are required, but they're all-expenses-paid on top of an extra stipend for attendance, so it's more like a mandatory paid vacation. Same for the research practicum, in which you are required to use massive DoE supercomputers for whatever you want. Notice that when they say "workstation purchase assistance," they mean that they will only match the money you put up for whatever high-performance computer you want. In addition, whatever school you attend has to agree not to have you working as a TA or research assistant for more than one semester. Finally, the only non-academic requirement is that you agree to consider job offers from the DoE or its contractors.

As far as I'm concerned, this whole program is the best idea anyone has ever had! The thought of being paid to go to grad school makes me very, very happy. I managed to find the applicant statistics for last year, and it turns out that about 1 in 20 people who applied got in. Assuming equal probability, those odds aren't bad at all; however, that's probably not a fair assumption to make. With benefits like these, it's easy to be motivated to do better in school so that my probability of selection might improve, hence the current time and my working on homework (well, ok, blogging, but motivated or not everyone needs a break now and again). I'm actually fairly confident in my grad school prospects, mainly because of my undergraduate research. This is my second semester of such, and apparently my research advisor likes me enough to propose advising me next semester as well despite being on sabbatical (so that we'll be prepared to "hit the ground running in the summer"). I can't express how grateful I am to have found such a good fit and generally exceptional person to work with... though that statement does do a pretty good job of at least indicating the magnitude of my gratitude. As it stands, it seems that I will be graduating with 6 semesters of research experience, which, combined with being the student administrator for the CS department's Linux server and a double major, ought to more than make up for some of my less-than-optimal grades. Nonetheless, better grades certainly aren't going to hurt, so back to the books!

Friday, November 20, 2009

HD isn't always HD

For years now HD has been a magic word, and for just as long I've found humor in its use, when not shaking my head at the naiveté involved. Everybody knows that you have two options with HD: 720 or 1080. Clearly 1080 is the better option, because it's a bigger number... right? Well, yes and no. These numbers represent the vertical resolution, or number of rows of pixels from top to bottom of the screen. Of course, vertical resolution is only half the picture; for whatever reason the horizontal resolution is implicit: 720 has a full resolution of 1280x720, 1080 has 1920x1080. In terms of resolution, yes, 1080 is better, but this is a really restricted and possibly misleading analysis. In terms of actual clarity, a vastly more important measure is PPI (pixels per inch). Imagine for instance that the big man on the block has a 60" 1080 HD LCD screen; in his own little world he is really special for having such a ginormous TV with such crystal clarity. But in reality, his neighbor's 20" 1080 HD LCD screen looks much clearer, and the reason is simple: both TVs have the exact same number of pixels, which means that to fill the extra space the 60" has pixels that are 3 times as big (with 9 times the area), making them much easier to distinguish from the same distance and making the contrasting areas of the image look blocky and jagged. To further illustrate, imagine another neighbor has a sad little 5" 1080 HD LCD screen--in truth, he is the one to envy! The clarity of such a screen would be astounding: with 440.6 pixels per inch it could draw letters and numbers 1/100th of an inch tall, just about twice the width (diameter) of an average human hair. On the other hand, Mr. big man only has 36.7 PPI; the smallest letter his TV could draw would be 1/7th of an inch tall, close to the width (diameter) of a pencil eraser!
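
For the curious, these PPI figures come from simple geometry: the diagonal pixel count divided by the diagonal size in inches. Here's my own back-of-the-envelope sketch in Python that reproduces the numbers above:

```python
import math

def ppi(width_px, height_px, diagonal_in):
    # Pixels per inch = diagonal length in pixels / diagonal length in inches
    return math.hypot(width_px, height_px) / diagonal_in

# The three hypothetical neighbors, all with 1920x1080 panels:
for size in (60, 20, 5):
    print(f'{size}" 1080 HD screen: {ppi(1920, 1080, size):.1f} PPI')
# 60" -> 36.7 PPI, 20" -> 110.1 PPI, 5" -> 440.6 PPI
```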

The reason I say HD isn't HD is that as a computer user, I'm accustomed to HHD (higher than high-def)--you probably are too, you just weren't aware of it. Suppose we ignore all this (very relevant) pixel density stuff and focus solely on resolution; the average computer monitor has been capable of resolution better than 720 for a long time. 1280x1024 is the most common computer resolution, and it has 142% the resolution of 720. It is only 63% of 1080, but 1280x1024 is rapidly going out the window--in fact, you can now get a new 22" LCD computer monitor with greater-than-1080 HD resolution for $200. Discovering this recently surprised me; that seems like a great bargain. I'm a big fan of 30" 2560x1600 monitors, but unfortunately they are tremendously expensive, so in my idle pondering and interest in value metrics I ended up deriving the very simple math to get the numbers above and a few more that relate to 30" monitors. In short, a 2048x1152 screen has only 57.6% the resolution of a 30", but can be bought for 15-20% of the price. Likewise, if you really want to match the 30" experience, a 24" 2048x1152 monitor will have about the same PPI... and any smaller size with the same resolution will have an even smoother image (or higher definition) than the 30". With this perspective it's no longer a great bargain, but an amazing deal.
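
The percentages above are just ratios of total pixel counts; here's the same arithmetic spelled out (again my own sketch, with the standard names for these resolutions):

```python
def pixels(w, h):
    return w * h

hd720  = pixels(1280, 720)    # 921,600
hd1080 = pixels(1920, 1080)   # 2,073,600
sxga   = pixels(1280, 1024)   # 1,310,720 -- the common monitor resolution
qwxga  = pixels(2048, 1152)   # 2,359,296 -- the $200 22" monitor
wqxga  = pixels(2560, 1600)   # 4,096,000 -- the 30" monitor

print(f"{sxga / hd720:.0%}")   # 142% of 720
print(f"{sxga / hd1080:.0%}")  # 63% of 1080
print(f"{qwxga / wqxga:.1%}")  # 57.6% of the 30"
```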

Wednesday, November 18, 2009

need... more... ram... MORE RAM!

Depending on who you're talking to and how much they like to debate semantics, data mining and machine learning are essentially the same thing. The important point is that incredible amounts of data are needed in order to mine golden nuggets of useful information. The Netflix dataset used for the Netflix Prize, for instance, is well over 4GB; to make matters worse, unless one writes a lot of skillfully efficient code, putting this into a data structure takes quite a bit more space. On top of that, raw data isn't very useful unless you have room to fit whatever models you're trying to construct in addition. Despite my relatively short presence and shorter sentience on this Earth, I remember a time when 4GB was a huge capacity for a hard disk. Of course, these days 4GB will fit on the increasingly outdated optical DVD format... 4GB can even fit on a plastic sliver of flash memory less substantial than a humble dime. In a time of terabyte hard drives costing less than a trip to the grocery store, 4GB seems laughably diminutive.

However, there's a significant issue here! Most people know that a computer has several types of memory: RAM and a hard drive (there are more, to be covered momentarily). Why are there two types of memory, why not just use a hard drive? The answer is simple: getting data from a hard drive takes 100,000 times longer than from RAM! While a specially built computer could run with only a hard drive, it would be so unbelievably slow that nobody in their right mind would ever use it. For a good number of tasks, like listening to music and looking at pictures, a hard drive works just fine. The reason is that these things are just read--once they've been read and used, say the sound the data represents has been sent to the speakers, the data can be thrown away. However, the more important, invisible bits of data that allow a computer to run are most often handled very differently: once read and processed, the results are stored so that they might be used later.

Imagine, for instance, that you have a counter (counters are extremely common in computers and programming) that counts the number of mouse clicks. If the processor were to take the stored counter, add one, and throw the result away, then the next time the processor read the counter it would get the number that the counter started at. If you have a program that displays some message when you click 10 times, the message will never get displayed. Obviously, in order for your program to work, the CPU must be able to remember how many clicks have happened, so it must read and write. For this simple example, the time it takes to read/write from a hard drive is ok--even the fastest human clicker is inconceivably slow compared to the inner workings of a computer. However, if this count is something that is read/written millions of times a second, the time it takes to access the hard drive will be an incredible bottleneck. In fact, this kind of situation is extremely common in computers (hard drives are the biggest bottleneck in a computer), hence why we have RAM; computers simply need a place that can be accessed very quickly in order to work fast enough for us not to prefer watching grass grow. For a bit of extra credit, let me point out that as far as the CPU is concerned, even RAM is dreadfully slow. See why after the jump.
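
To make the click counter concrete, here's a toy sketch in Python (purely illustrative; a counter like this lives in fast memory precisely because of the read/write traffic described above):

```python
clicks = 0  # the stored counter

def on_click():
    """Correct version: read the counter, add one, and write the result back."""
    global clicks
    clicks = clicks + 1
    if clicks == 10:
        print("You clicked 10 times!")

def broken_on_click():
    """Broken version: read the counter and add one, but throw the result away.
    The stored value never changes, so the message would never appear."""
    _ = clicks + 1

for _ in range(10):
    on_click()  # prints the message on the 10th call
```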

Thursday, November 5, 2009

The end of scrubbing tubs

As a college-aged male bachelor, I know a thing or two about bathrooms that have gone uncleaned for longer than many people might think possible. To make matters worse, I use a humble castile soap that produces scum with unabashed vigor. Last week, by the combined effort of many unknowable forces, I decided to clean the bathroom, and in the process made a fantastic, incredible, revolutionary discovery (this time not involving microfiber cloths)! I started with the typical futile effort, spraying everything with potent cleaning chemicals and scrubbing, scrubbing, scrubbing with a typical plastic-bristle brush. Before long it was clear that all my effort was adding up to nothing. I decided to pull out the big guns, and began to seek a green scouring pad. I found one, and was thumbing it when I saw in the same drawer a box of those new-fangled magic "eraser sponges" (also known as melamine foam) that I had picked up on a whim at the dollar store. Feeling my sense of adventure kick in (and not knowing what else I might ever use them for), I decided to grab a magic sponge and give it a try. The results were unbelievable. The soap scum literally rolled off every surface after just a pass or three; I had the whole shower cleaned and sparkling in under 5 minutes! Never before in my life has cleaning a shower taken less than an hour, at least, and thus my excitement over this finding. From now on I needn't fear nor need to clear a day so that I can clean the shower; maybe, just maybe, it'll get cleaned more often now... no commitment there though. I checked the box of the eraser sponge afterward, and it does actually suggest using it for cleaning the shower, which means someone, somewhere out there actually knew about this beforehand. This is hard to believe; I would have figured the news would spread like word of a Gmail outage (wildfire in this age is the new grass growing, amirite?). I don't know if they've advertised these for this purpose, since my exposure to commercials is essentially nonexistent, but I'm led to believe that even if they did it would pass unnoticed, and the reason is simple: from memory, bathroom cleaning products are depicted in an ineffective way. I remember them showing what looked like an evenly disgusting tile wall, oddly aesthetic in the precision of its filth, which becomes pristine after just one pass of a sponge. Yeah, sure, everyone believes that. Even with the magic sponge there's some work involved, but I think 30 seconds of someone cleaning a real bathroom in real time with real results would be an amazing commercial. It'd say "Hey, look, this actually works. We're not trying to trick you using magical cartoon scrubbing bubbles." But then again, this is marketing we're talking about, which leads to a strong movie recommendation: How to Get Ahead in Advertising.

More than you ever wanted to know about soap after the jump!

Wednesday, November 4, 2009

A stretch

I am now full and well wholly exhausted. Today saw Release 2 for software engineering; yesterday saw the most recent time I got out of bed. Far from complaint, there's something about staying awake for 36 hours or more that strongly appeals to me. I enjoy the feeling near the end of it, a sort of lightness of being. I enjoy the solitude of the earliest morning/latest night, that short period in which this little city is blanketed by an expansive silence--and oh, those precious moments you can hear the sound of snow falling, a performance so minuscule that only amongst the stillest movements can an audience find itself. Further, with some associative tendrils linking them all, there is special joy in getting into bed after having not gotten into bed for some atypical stretch. It is almost as if, as time goes on, the general average distribution contracts to a focal point of particular lucidity, a larger-than-normal indefinite fog lying within the perimeter that typically defines the periphery, which invokes some new order of perception, in many ways desirable, if only for its provocation of unique insight.

I really enjoy the amazing amount I can accomplish over that time... I have some sense stronger than naught that I'm not really able to hit my stride until 12 or more hours after awakening. Actually, I recall clearly that despite finally managing to awaken early enough for class yesterday, I was in a Franciscan-class mental fog nearly all day--it wasn't until around 11 PM that the urgency of the upcoming deadline and hopeless mounds of work (even worse, the mounds hadn't yet been assembled; by then there was merely a postulation that some mother lode waited for a few rocks to be overturned) translated into some motivation to begin working on it. To be certain, the thought of a good chance to put my new furniture through its paces played some part, but before long, with some pride of performance and an awkwardly intangible form of irony, the chair disappeared and I became absolutely consumed with work. I worked for 17 hours, from 11 PM to 5:30 PM, and the only thing that stopped me then was the necessity of attending class; even so, I was 15 minutes late owing to my inability to find a timely conclusion. I wish I could explain exactly why this was the case, but I'm afraid it's the type of situation that even another knowledgeable programmer would have a difficult time understanding. Suffice it to say, through the final pressing hour I managed to do what I would have previously considered impossible from a number of perspectives. That is to say, I experienced some minor miracle, nonetheless greater than maintaining consciousness through somnolence, by programming a computer to move monkeys and boxes about a screen with bounded futility.

Actually, it was a lot of fun. Throughout the hours I happily dabbled slightly deeper into otherwise foreign but interesting arts: graphic design, audio production, not to mention the manufacture of code that has a functional, visual interface.

Tuesday, November 3, 2009

Herman Miller Embody - first reactions

I think talking about "my chair" is banal and way too close to boastful, the egotism inherent therein being a personality characteristic I try particularly hard to avoid (synonyms of boastful read like a list of things I doubt many people aim for as a characterization: arrogant, conceited, pompous, pretentious, etc). With that in mind, I think "the chair" in general is fascinating, and am grateful that I have the opportunity to experience it in real life; though I was hesitant to write this up, perhaps my thoughts will lend insight to someone else.

With all that said, I have to admit, few things will sound more pretentious than what I'm about to say anyway.

First, the Embody has an arresting aesthetic; its form is absolutely captivating, actually inspiring to look at. I don't mean inspiring in the generic, feel-good way; I mean when I look at it there is a surge of creative, unique thoughts and a sense that the majority of objects we encounter each day are needlessly bland, expressionless, devoid of notability--altogether invisible and uninspiring. You might look at a picture of the Embody and think 'I don't get it, looks like a moderately interesting chair,' and with that I'd agree: a picture of the Embody displays a moderately interesting chair. However, as I've mentioned before, the 2-dimensional projection of a 3-dimensional object loses an incredible amount of information; the chair IRL is a whole different story.

Of course, function is a critical element of design... personally, I think form is just the whipped cream topping of any design. To be sure, a desk isn't much of a desk if you can't use it as a working surface, no matter how beautiful it is--Michelangelo's David is not a desk. On the other hand, as long as you can use it as a working surface, no matter how nauseatingly ugly it is, it'll work as a desk. Thus, it is a good thing that the Embody has function covered. But there are levels of functionality, and true to the reason I chose this particular chair, it seems to have function covered to an exceptional degree; not only can you sit in it, sitting in it is a pleasure. I haven't had the chance to sit in it for one of my 12-hour-straight coding jams yet, and that's the true test, so the full extent of its functionality remains to be seen. As it is though, it's a pleasure to sit in; it feels something like sitting on a bed. This isn't altogether surprising, as the seat has a system of suspension much like a mattress. This is a wonderful idea, and one whose obviousness really surprises me given its absence from every other office chair I know of. The back also has an interesting suspension system, one with less give.

One thing that I love most about the Embody is that despite looking and sounding very complicated, despite a long design process with many prototypes, it is actually surprisingly simple, especially the suspension systems. The composition, strength, elasticity, and formation of the various plastics used are probably fairly involved, but in the end, the shape and intersection of all of them is very natural and efficient. Certainly this was a design objective (easier said than done), but the result is powerful; the Embody looks like an exoskeleton, an extension of the body, and what could be better than a solution provided by nature?

I don't think it's the be-all, end-all of office furniture, but then again I have a strong bias against conclusive permanence, so no chair will ever fulfill that criterion (except possibly an infinitely adjustable "indefinite chair" formed in real time by nanobots). In case you couldn't tell, so far I love it. The only shortcoming I've thought of, an insignificant and unimportant one, is that it doesn't have a headrest. There's no obvious reason why they might choose to omit such a thing, but I'm certain that it wasn't a simple fact of oversight--clearly there was no oversight involved in the design of this chair. The only reason I can think of is that they figured the inclusion of a headrest would provoke a change in the implied posture, which might have a detrimental impact on ergonomic functionality. Well, I'm sure there is a reason; I'd like to know what it is. My only other complaint is that it took a long 8 weeks to get here.

Sunday, November 1, 2009

A Pictorial Vindication

I have received word that, given the existence of infinite permutations of parallel universes, some people have been considering my recent furniture upgrade as unnecessary. However, in light of what I'm about to show you, I believe any transdimensional rumors will be sufficiently concluded as having no factual foundation. I present my current chair (with bonus Pickles action):

If one happened to feel gifted with a finely tuned, elite aesthetic sense, and by seeing this image that sense has been dismantled and disfigured, transformed irreparably into an unidentifiable blob of goo, allow me to comfort them: had their sense been incapable of handling this fundamentally evocative display of Form, it wasn't worth a tarnished penny anyway and nothing of value was lost. I suppose they wouldn't see the profound symbolism in the bits of open-cell gray foam that with clinging tenacity remain on the rough plane of misshapen, splinter-spitting plywood, refugees of a forceful division of useless padding from malformed foundation. Nor would they see, I imagine, the ongoing dialog between the thoughtlessly intrusive, precariously balanced seat, mismatched and standing as a contradiction to the firm stability of the back, which with a posture slightly less than vertical seems to mourn the loss of its intended counterpart. Of course it all goes so much deeper, more than I could fit into any number of blog posts.

I will surrender, yes, that I assembled this marvel in a fit of vanity, that I ought to have focused from the start on a more balanced, less perfect design. However, I assure you that the striking, fluid beauty of this piece is complemented by an enhanced functionality; namely, this artifactual collage doesn't make the lower half of my body go numb as its unglorified predecessor did. Frankly, I do admit some amount of guilt for purchasing a replacement for a chair that is otherwise the paragon of design, but I think that it is better served encased in glass as an enduring testament of what can be accomplished with concerted effort and a healthy dose of luck.

In case it sounds as though I've gone completely bonkers, the truth is yes, I have--but only temporarily. Such is the result of staying up all night doing probability homework.

Friday, October 23, 2009

Software Engineering

For my Software Engineering course we are spending the whole semester developing educational software in small groups. Each group was given the choice between math- or spelling-oriented software for 1st to 2nd graders. Our group chose math, since it is inherently easier to deal with numbers than words in programs. The course calls for 3 releases, or waypoints at which we demonstrate our software; since these are in place of what would typically be exams, our software is supposed to meet criteria we determined for ourselves at the beginning of the semester. Thus far there has been one release, and two of the three groups had very visually limited programs, instead focusing on the backend. My group, however, was the complete opposite. This was not a mistake, either. If you know me well enough (or have read enough of my blog), you'll know that I am very critical when it comes to design, and this project is no exception. Initially I was hesitant to work in a group, as I've never handled group dynamics as well as I should, and indeed at the start of the semester it seemed as though the influence of the group was resulting in nothing short of chaos. However, when it came down to assigning the tasks, I ended up with about 90% or more of the work. While that meant a lot of chairtime, it also meant that I got to design the foundation of the project without any interference, so I decided to take it as a positive thing.

Until the demonstrations for the first release came, I honestly hadn't even considered the difference between focusing on the back and front end, but in retrospect I think that's because all the backend in the world isn't going to make a first grader want to play your game! Minutes prior to our group's presentation, I jotted down a few ideas about my design objectives. What I came up with was that calling what we were making a game failed to represent the magnitude of our undertaking; what we were doing (in theory) was taking part in the earliest exposure and formation of the foundation of children's experience with mathematics. Framed in this way, it is easy to see that designing our software as well as possible is critical. What we want to do is create substantive, positive, memorable experiences involving math, in such a way that in the future these children might not run in fear from math (as many of us do now) but instead view it as an exciting and fun thing. Thus my aim in constructing the front end was to craft sensational experiences... I got a few laughs when I said that, but my guess is because people misunderstood: by sensational, I mean of the senses. Given that we are limited to merely 2 of 5 senses, it is extremely important to emphasize those senses, yet remain mindful that the experience doesn't get so chaotic that it is overwhelming and stressful, as this would subvert our purpose.

I was part of one of the very first generations to have computers available in elementary school; Jordan and I were reminiscing the other day about playing on those green-screen computers with the floppy disks that were actually floppy. Thus, my design objectives drew heavily on my own vivid memories of using computers to play games in school. For instance, the music in SimCity 2000, first experienced at school, had a profound effect on my entire life. I still remember receiving SimCity from my grandparents as a Christmas gift some time shortly thereafter; obviously it was cause for substantial excitement, given that I can remember this event despite it having happened more than half my life ago. Accordingly I made certain to pick suitable music (light ambient) and sound effects... and ours was the only group to have any sound at all. Can you imagine, completely ignoring half of the senses you're given to captivate first graders?? Anyway, there's no good way to put our game online yet (there might be nearer the end of the project), so here are some screenshots:

Not only did I handle designing the game mechanic, choosing the music, and actually programming the game, but I also did all the art. With the exception of the menu screen image (from Wikimedia Commons) and a few standard fonts, I drew everything, even the animated monkey. I probably don't have a future in professional graphic design, sure, but I'm quite proud of what I've done--particularly given that I did essentially everything.

The reason I'm bringing up my homework is that release 2 is coming up quickly: we basically only get 2 weeks to put it together, and I set the bar high enough for the first release that I've got a lot of work to do! Fortunately the work was divided up a bit better this time, so I'm mostly in charge of the graphics. Nonetheless, it's 3:45 AM and I've been drawing for about 12 hours straight, so it was time for a break. Yes, I'm really wishing my chair were here... and now you can see where all my complaining about design and chairs came from; you try sitting in the same thoughtlessly designed chair for 12 hours and see if you don't get a little grumpy! I think that waiting for this chair has probably been the most painful anticipation I've experienced, literally and figuratively.

Moving on, I wanted to mention that I've been spending so much time in GIMP (the GNU Image Manipulation Program; GNU stands for "GNU's Not Unix") that I've actually been getting much better images out of it. I've learned a few basic techniques from some well-written GIMP tutorials that have made a huge difference. While I haven't used anything from it for my own purposes, the results in this tutorial on drawing your very own planet, starting with just a blank canvas, are stunning, especially since the process isn't very complicated. I've also learned how to use a few of the tools better and found some other less-than-obvious features that have helped me get some results I'm very happy with:

There's always room for improvement, but I'm excited to see how it all comes together!

Tuesday, October 20, 2009

Economic Doom & Gloom?

I've decided that if I've put the time and effort into writing something moderately interesting, I may as well post it here. Recently I received an email from an acquaintance that suggested with urgency and confidence that, because of the current debt-financed federal deficit and the associated cost of interest, the US economy will probably collapse within the next decade. What a terrifying prospect! For my own peace of mind and some hope of bringing another perspective to the grim proposal, I decided to do my own analysis.

First, a few items of business. I'm not an economist by a long shot. The extent of my economic training is an elective course titled Economics as a Social Science, in which I got a C. Next, as clarification, the federal deficit and the federal debt are two distinct things: the deficit refers to the difference between expected yearly income and expenditure in the government budget, which is usually what's in the news and is typically "in the red"--otherwise we'd know it better as the federal surplus. The federal debt, on the other hand, is the total accumulated debt, which has been in the red for the majority of the history of the US. According to the US Debt Clock, our national debt is currently very near 13 trillion dollars. This is an unintelligibly large number, obviously, but it's all relative. One of the more useful perspectives on this otherwise ambiguously huge figure is as a percentage of Gross Domestic Product (GDP).
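
As a quick worked example of that perspective (the GDP figure here is my own rough ballpark for 2009, not something from the email):

```python
debt = 13e12    # ~13 trillion dollars, per the US Debt Clock figure above
gdp  = 14.4e12  # rough 2009 US nominal GDP -- my assumption, for illustration

print(f"Debt is roughly {debt / gdp:.0%} of GDP")  # roughly 90%
```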

Find the relevant bit of the email and my response after the jump...

Pumpkin Carving!

It's that time of year again, when a good number of North Americans prepare for all sorts of bizarre and hilariously entertaining rituals surrounding the end of October. The extent of my Halloween celebration usually matches that for all other holidays, as in not doing anything, but I have a soft spot for carving pumpkins. For reasons unknown to me, pumpkins seem to be my optimal artistic medium. Honestly I'd prefer it be a more lucrative medium, but I suppose I'll take what I can get. Despite the joy carving pumpkins brings me, lately I've been forgetting to do it... I can actually say for certain that since 2005 I've carved two pumpkins, and the only reason I remember that is because both of them were memorable. Ok, actually now I'm remembering that technically that's not true, because I carved two in 2005, but both of them were virtually identical. The concept I went for with those two was some mix of Golem and an angler fish. I wasn't completely thrilled with the outcome (I thought it could've been much better), but it nonetheless won a fairly informal competition on campus, in addition to making it into the yearly selection (for 2006, inexplicably) on ExtremePumpkins.com, probably one of very few sites dedicated to pumpkin carving. Honestly there are people out there who are much better than I at carving pumpkins, but that's ok with me--I do it because I enjoy it. I don't care if the result is the best or worst pumpkin ever. Enough talk, here's some walk:

Moving on, my latest pumpkin was done last year, and I'm a bit more pleased with how it turned out. One of the things I had in mind was the effect it would have while glowing, which came out just as I had hoped.

I confess, in order to get it to light up like that I had to use more than a tealight, but getting the walls thin enough for the tiny bit of light a tealight puts out to show through would have compromised the structural integrity. Also, I'd like to point out that while it wouldn't fool even a bad neuroanatomist, I did do my best to represent the major sulci (fissures). A true representation was out of the question, as a simple matter of time! There are way too many sulci in the brain, at least enough in my own to convince me that there are better ways to while away a few hours. Perhaps if I had used a Dremel or other power tool, but this was done entirely with good old manual knives. It's not looking like I'm going to have a stab (pun!) at a pumpkin this year, but who knows, maybe some time will present itself.

Saturday, October 17, 2009

Reproducing Foods

Every so often I find some irresistibly scrumptious item available exclusively at some restaurant nearby. Recently this happened with the "mocha blender" at Einstein Bros. Bagels, and I found myself more or less addicted. Such a habit can become expensive quickly, so I sought to reproduce it at home. The last time this happened was with a smoothie from Jamba Juice, which was easy to reproduce almost exactly given that they put all the ingredients together right in front of you. However, this was going to be a bit of a challenge, because the ingredients as put together in view consisted of ice, Hershey's chocolate syrup, and some liquid poured from a generic carton. My less-than-trained gustatory instinct pointed to most of the desirability being from the texture, unusually velvety for a smoothie--closer to a milkshake, which it definitely isn't. The obvious next step was to seek nutritional information, which I found after a quick google. Unfortunately, due to poor PDF formatting, some portion of the ingredients for the liquid of interest, "cappuccino base," was cut off, but my suspicions were nonetheless confirmed, as there were several thickening agents visible: carrageenan, guar gum, and locust bean gum. The use of whey protein probably also plays an important part in the final experience; otherwise it seems to be sweeteners, stabilizers, and the ubiquitous, impossibly ambiguous "natural flavors." As far as these flavors go, I don't think they have much if anything to do with espresso.

Given that I don't have easy access to any of these commercial thickeners, I had to improvise with powdered sugar for its cornstarch content. Here's what I've come up with so far; texture-wise it seems pretty close:

8-10 oz. whole milk
Equal part or more ice (a lot)
Espresso to taste
Hershey's chocolate syrup to taste (probably about 2 Tbsp)
1/2 scoop whey protein
2 Tbsp powdered sugar
1 Tbsp granular sugar

This is the result of only my second attempt, so there are probably improvements that can be made. I think the most promising avenue is the addition of some salt to drop the freezing point of the mixture. The flavor is still way off, but the only thing I can think of to make it closer is just removing the espresso altogether.

Thursday, October 15, 2009

Thoughts regarding the poor sense of probability

As a homework assignment for my probability class, we were asked to respond with our thoughts concerning the following TED talk.

In the interest of availability, persistence, and my efforts not going to waste (on the off chance someone reads my blog at some point), I've reproduced my responses after the jump with minor edits so they might make sense outside of the discussion amongst classmates.

Saturday, October 10, 2009

Nissan Succumbs to Logic

Making the rounds on the web is a new Nissan Land Glider concept vehicle. My opinion is that this represents the first indication of a correct step towards a sustainable near-range vehicular platform from a major automobile manufacturer. Included with all the sites discussing it are a few pictures and the following video (which has a very interesting choice of music with what I'm quite certain is the avant-ambient work of Steve Roach):

Get more after the jump!

Thursday, October 8, 2009

Telescopes in Space

At first the idea of a telescope floating around in space is absurd, but any marginally knowledgeable astronomer can profess that it's a fantastic idea. Astronomy at the most fundamental level is the study of space, everything and anything that's not Earth, and it's one of the oldest realms of intrigue known to humankind; it was popular long before the scientific method wandered onto the scene, despite being very much a scientific pursuit. On one hand, that space is an old interest isn't surprising--anyone who has turned their sight to the sky on a clear, dark night knows exactly why. A gaze into what might as well be the infinite unknown, the act itself as simple as a glance at our own hands, has a way of inspiring speechless profundity in even the most uninterested amongst us. On the other hand, our primal fascination with space is surprising for its distance, simply far removed from our experience and altogether relatively bland to the naked eye for its expansive empty darkness, excepting the occasional tiny point of light. I find it interesting that this practical void drew fascination more readily than the exceptionally vibrant and astonishing diversity of phenomena on Earth, which we can easily approach and examine. I suppose it's another case of obscene acclimation leading to an almost humorous misplacement of gratitude (or the frog in slowly heated water, though I'm not a fan of the literal part of the notion when put that way). Nonetheless, space is a fascinating place, especially when explored with our modern technologically augmented senses, the subject of this post.

As it turns out, Earth is a lousy place from which to explore everything that's not Earth. The telescope, primary instrument of astronomers, is often incapacitated by the humble cloud, and it is increasingly difficult to find a spot where light pollution (that light from the ground which obfuscates the much fainter light from billions of miles away) isn't a problem. But even on the highest, most remote mountain on the clearest night, a telescope on Earth is substantially limited by a variety of factors, and thus the idea for a telescope in space. Space telescopes were proposed by at least the 1920s; Hubble was funded in the '70s but took about twenty years to get into space, launching in 1990. Of course, 20 years from paper to space is ok by me, given that it's a hulking monstrosity, nearly 25,000 lbs of technical wizardry. It might have launched as early as 1986 if it weren't for the Challenger disaster, which put Hubble in cold storage--to the tune of $6 million a month, not your everyday storage unit. Nonetheless, the time investment seems to have paid off, as the Hubble is very near entering its 20th year of functionality.

Despite the nearly 20 years of development, shortly after launch the images Hubble was transmitting indicated a serious issue, with quality far less than expected, to the extent that it performed similarly to ground telescopes. Before long it was discovered that the main mirror was shaped incorrectly. Telescopes depend almost wholly upon the precise shape of the main mirror, and the precision of Hubble's is astounding--it was perhaps the most precisely manufactured mirror ever made, with a deviation from the intended curve never more than 10 nanometers. In other words, the shape was at most off by a length about 40 times shorter than the shortest wavelength of visible light (the color violet, at 400 nm). To give you some kind of perspective, nothing skinnier than about 400 nm can be seen with our eyes, no matter how powerful a microscope you can find: the problem is that visible light can't hit something under 400 nm, which means it can't bounce back and into our eyes. So given a mirror so amazingly precise, how could it possibly have been so bad? Well, the mirror was very precisely manufactured to the wrong shape!

Here's a question: how do you fix a ~7ft diameter mirror that took 5 years to manufacture, stuck in the middle of a technological marvel which is hurtling through space at 17,000 mph?? There were two backup mirrors made, but replacement wasn't an option. Fortunately, the Hubble had a strength, a unique design choice: it was built so that it could be serviced by astronauts. After extensive analysis of the problem, a surprising solution was conceived--new sensor instruments, something like the chip in any digital camera, would be specifically designed to be flawed in a way that would be the anti-flaw of the mirror, thus cancelling out the effects! It reminds me very much of doing the same thing to both sides of an equation in math; you can do whatever you want, as long as you do it to both sides (note that this isn't always true). This story is one that I find informative and inspiring; I hope you can find similar value in it. I also recommend taking a look at the Hubble Space Telescope page on Wikipedia, as there's a lot more generally interesting stuff to know. Surprisingly, the Hubble is just one of around 100 space observatories past, present, and future. ~45 of them have been terminated, ~15 are planned for the future, and this year alone stands to see the launch of 8 new observatories!
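
The "anti-flaw" idea is fun enough to sketch. Here's a deliberately simplified 1-D toy in Python--real corrective optics act on the wavefront, not by subtracting pixel values, so treat this purely as an illustration of two known, opposite flaws cancelling:

```python
import numpy as np

x = np.linspace(0, 2 * np.pi, 100)
true_image  = np.sin(x)                   # what a perfect mirror would deliver
mirror_flaw = 0.3 * (x / x.max()) ** 2    # hypothetical aberration, known exactly
observed    = true_image + mirror_flaw    # what the flawed mirror delivers
corrected   = observed - mirror_flaw      # new instrument applies the anti-flaw

assert np.allclose(corrected, true_image)  # the flaw and anti-flaw cancel out
```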

Saturday, September 26, 2009

Chairs and Design

As a computer "power user" aka computer geek, I spend somewhere near 95% of my time at home (and awake) sitting at my desk. For a few years I had a pretty good chair that was snagged for free from an unused office. However, with the extreme use it received, it gradually and literally fell to pieces. Though I did my best to keep it in functioning order, which near the end of its life involved keeping it together with rope, it finally gave up the ghost when bolts irreparably sheared and welds failed. I moved to a backup chair that was primarily used by Pickles (who wasn't so happy about me stealing her chair), but as time has progressed the chair has proven wholly unfit for sitting, often feeling more like a torture device than furniture.

For one reason or another, functionality and office chairs are in most cases two unrelated concepts; I have been looking for a new chair for years but never found anything close to adequate. As far as I can tell, the design of office chairs starts and stops at the notion that there is some sort of surface with dimensions such that any person so inclined could do something resembling "sitting," with the result that any rudimentary design meeting this limited criterion can pass as a chair. Any person who sits as much as I do can profess that much more is involved in what can qualify as a chair... anything less is simply a surface which can be sat upon (regardless of whether sitting upon it is a good idea). Considering the deplorable state of computer furniture thusly described, for the longest time my intention was to design and construct my own chair, just as I did my desk. However, before long I realized that the manufacture of a chair was a problem much less feasible for an individual than that of a desk, and I resigned myself to waiting for a better solution to present itself, which worked until my marginally adequate chair decommissioned itself.

With every passing moment in my backup chair it became clearer that the need for a new chair was desperate; it is never a good sign when your legs fall asleep sitting in a normal position, nor when they go numb while one's butt just constantly hurts. At such a point even an end table starts to seem like a superior alternative. Fortunately my waiting appeared to pay off, as a solution presented itself: the Herman Miller Embody, fairly recently introduced as successor to the famed and prestigious Aeron (which was nonetheless eliminated from consideration in my previous seating quest). At first glance the price completely banished any desire of purchasing it--as with all Herman Miller furniture, it cost somewhere near an arm + half a leg. Nonetheless, moments passing in a chair unfit for sitting humans (despite being suitable for cats) had me realizing with increasing urgency that an arm and half a leg was cheaper than everything waist down. Still hesitant, a bit more research proved it to be a viable choice: a 12-year warranty (!!), reports of it being the most comfortable chair ever sat in, and finally, a site that for one reason or another had $300 in options available for free. And thus, it was settled.

I really hate spending money (which is not to say I don't enjoy the results!) and so this was a difficult thing to do. However, there are a few notions that even a frugal person need keep in mind. First and foremost is the idea that often, despite a high entry price, the purchase in question can prove to be a far better value over time. Of course this takes research, because there are an incredible number of nauseatingly overpriced products, especially relative to quality. In this case the 12-year warranty easily dispelled all fears of poor quality. Second, particular emphasis must be placed on purchasing products which will receive substantial use; only the most foolish professional house-framing carpenter would buy a hammer out of the dollar bin! To me as a programmer, a chair is just like a hammer: it's a tool necessary for getting work done with maximal efficiency, which in turn maximizes value. With only these two things in mind, the purchase is easily justifiable, but there is another critical point which seals the deal: health. Just as a poorly designed pneumatic nailing gun can be the death of a carpenter, a poorly designed chair can quickly harm the health of someone who sits for extended periods--this is why wheelchair cushions are very specialized, and why bedridden folks must be treated with care (otherwise they will get bedsores).

In my experience, when all the research is done and the intended purchase thought out well enough, even a frugal person can spend a chunk of change without feeling purchase remorse. Indeed, I have never felt an ounce of regret after buying the pricier items I own; when I do feel regret after a purchase, it is always for the cheaper items that I failed to adequately contemplate. Anyway, enough blabbing, eye candy after the jump (yes, there's more).

Friday, September 25, 2009

Project 10^100

About a year ago Google started up Project 10^100, which invited people to submit ideas they thought would change the world. The intention was that a few of the best ideas would be put up for a vote, where one or several winning ideas may or may not be put into motion through funding and initial management by Google. Note: I used the ambiguous qualifier to reflect the verbiage of the site, which says "Your vote for one of these ideas will help our advisory board choose up to 5 projects to fund," thus somewhat resembling the electoral college in terms of feel-good vote theater. Snark aside, I genuinely support the idea of this project regardless of the chosen process.

Moving on, the window for submissions closed quickly as over 150,000 entries flooded the digital suggestion box, and now, with every one supposedly having been read, the voting has opened. The project somewhat defies our typical expectations in that it's not really about ego; that is, there is no winning person or prizes for winning people. The notion is that what really matters are the ideas, and that the people behind the ideas should be perfectly satisfied that their world-changing idea is getting attention. Likewise, judging a winning person would probably be very challenging, as there is little doubt in my mind that each of the ideas up for voting was distilled from a consensus amongst many submissions. Humility therefore firmly established, it appears that the idea I (and certainly many others) submitted has made it into those selected for voting.

Here's an excerpt from my submission:

Unfortunately the sister project of Wikipedia, Wikiversity, has had a difficult time getting off the ground. This is attributable to several factors, most significantly the lack of contribution which itself is likely resultant from the expertise required in the knowledge of the topics as well as in the arrangement of the information in a manner conducive to learning. Thus in the spirit of Knol, portions are written by community members to be periodically approved by volunteers who are acknowledged experts in the field. International schools from elementary forward would be able to use crowd crafted expert approved materials for free, as well as individuals desiring to educate themselves in any topic. It may even be possible to establish an accredited university online using performance tests based on these materials, providing a degree for the bare minimum cost. Education is generally presumed to be a profoundly positive thing, thus indirectly the issues resolved by universally available education are multitudinous. More directly however the result of education in an individual's life are tools for empowerment and progress, which itself may eventually benefit all mankind were they to become the next Gandhi or Gödel.

Under the idea titled "Make educational content available online for free," these are the suggestions shown as having contributed to it:

1. Collaborate with top schools around the world to make their lectures freely accessible online
2. Create an online educational platform that provides free training and education as part of a worldwide, officially accepted degree
3. Provide free online lectures and textbooks for every subject and grade level
4. Facilitate information exchange among students around the world, including cross-country "study groups" on specific topics

Honestly I think that all of this is inevitable, and in fact much of it has already happened. Years ago MIT kick-started what would become the OpenCourseWare consortium by making course materials available for free, and since then a large number of other institutions have joined. Though limited to postsecondary materials, I'm certain it will expand soon. Point 4 is pretty well taken care of by various message boards and forums online; I often find help through questions already asked and answered on these sites with a quick search. Point 2 is the most technically challenging one, and that is only because accreditation is done through outdated organizations operating in their own interests. My opinion on this matter is something like the inverse of point 2--rather than seeking accreditation, seek to dismantle the accreditation organizations. It seems to me rather clear that the qualifications and abilities of an individual cannot even begin to be known simply because they have a degree from an accredited institution. Nonetheless, I doubt accreditation is going away, and I have heard of some organizations working towards minimum-cost accredited degrees. On that note I ought to mention that affordable education is already available--the tuition for foreigners at the Universidad Nacional Autónoma de México (UNAM) is a couple hundred US dollars, which despite being an incredible bargain is still more than native Mexicans pay, at less than a hundred dollars. For anyone who might find themselves prejudiced against Mexico for whatever reason, I ought to also mention that UNAM is a world-class university that has produced a number of Nobel laureates, participates in cutting-edge research, and is one of the largest universities in the world, with satellite campuses all over the globe and nearly 306,000 students.

Of course all the criticism I have offered is quite ironic given that I apparently contributed to the idea... but I suppose all I can say is that a lot has changed in the past year, especially my own thought patterns. Fortunately the collective consciousness has my folly covered in this situation, as there were enough people thinking then as I am now to have also gotten a spot titled "Drive innovation in public transportation." Intriguingly, a number of the "suggestions" for this spot match very closely with what I would have said myself (emphasizing ultralight vehicles, preferably power-assisted pedal bicycles, and minimizing injury while maximizing efficiency via autonomous transports). This makes me think one thing more than any other: what are the people who suggested those things a year ago thinking about now?? Apparently I am a year behind others in getting to the thoughts I'm having now; it'd be really nice and interesting to be able to jump ahead another year's worth of thinking!

Anyway, I highly recommend taking a look at the ideas. There are a total of 16 big ideas representing a good cut of what the collective mind is thinking for the future, some of which at the very least will probably pique your interest, or maybe even move you to action.

Wednesday, September 23, 2009

Old cars are not safer!

Here's another common misconception ready to be exposed: old cars aren't safer. By "old cars" people typically mean the late 60's and prior, and the thinking goes that because they are heavier, they are safer. This is absolutely false. Weight is not what matters in vehicle safety; what really matters is the ability of the frame to absorb impact while maintaining the structural integrity of the passenger space. The physics of a vehicle collision is very simple: by spreading an impact over more time and distance, the force of the impact is minimized. This matches our intuitive understanding--just imagine dropping an egg from 10 feet. If the egg hits concrete, it breaks: it goes from speed to stopped almost instantaneously. If the egg hits 5 feet of padding, it will be fine: it slowly goes from speed to stopped over time and distance as the padding absorbs the egg's energy. The "padding" in a car accident is mainly of one form, the crumpling of steel. Of course this is all moot if the passenger cabin is compromised, as our soft, fragile bodies are no match for hard things moving at high speeds, and that's the rub; no matter how much energy is absorbed, if the engine block ends up in the driver's seat or the vehicle explodes, there is little hope of walking away from the accident. Excluding air bags, seat belts, and other obvious safety features, modern cars still have the advantage because they are designed to crumple up to the cabin, which is in turn designed to be as rigid as possible. As far as I know, older cars weren't designed with energy absorption in mind, and that makes them a double-edged sword: if the car doesn't crumple at all, no energy is absorbed and it's like an egg hitting concrete; if the car does crumple, it will most likely continue to crumple well past the engine bay and into the cabin, rendering all that energy absorption for naught.
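To put rough numbers on what crumpling buys you, here's a minimal back-of-the-envelope sketch in Python; the vehicle mass and stopping distances are assumptions for illustration, not crash-test data:

```python
# Work-energy theorem: the kinetic energy (1/2 m v^2) must be absorbed
# over the stopping distance d, so the average force is KE / d.
# All numbers are illustrative assumptions, not measured crash data.

m = 1500.0          # vehicle mass in kg (assumed)
v = 40 * 0.44704    # 40 mph in m/s
kinetic_energy = 0.5 * m * v**2

for label, d in [("rigid frame, ~5 cm of give", 0.05),
                 ("crumple zone, ~60 cm of give", 0.60)]:
    force = kinetic_energy / d  # average force over the stopping distance
    print(f"{label}: average force ~ {force / 1000:,.0f} kN")

# The crumple zone spreads the same energy over 12x the distance,
# cutting the average force on the structure (and occupants) by 12x.
```

And now, for the demonstration: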



The cars collided while each was moving at around 40 mph. As is usual for the Internet, a number of people (I think it's safe to assume they are classic car enthusiasts) have stepped forward challenging the veracity of the video, suggesting that the chosen car was not representative. As in any scientific pursuit, contentions are often valid and desired, and the responsible scientist will acknowledge, explore, and respond to any valid concerns. I found this on Consumerist, where amongst the comments arose criticism (from user Nighthawke) as follows:

1. The appearance of reddish dust that may indicate structure-compromising rust
2. The lack of seat belts in the test vehicle (they were available as a dealer option)
3. The sacrifice of frame integrity for the aesthetics of the curved front pillar specific to Bel Airs
4. The frontal offset test is unfair because the skinny engine didn't have the opportunity to absorb energy

The responses are easy, as only the first point is really valid. The IIHS, which conducted the test, assured that the "rust" was just accumulated dirt and that the car appeared structurally sound. For the remaining three: the optional seat belts were lap belts only, and almost certainly wouldn't have made a modicum of difference; perhaps the Bel Air has uncommonly poor structural integrity (I'm not sure, but I know of other classic cars with the same pillar shape), but the whole point is to show that collision safety design has improved tremendously, and no modern American car has performed anywhere near as badly as this Bel Air; and last but not least, life isn't fair, and the frontal offset test is one of a scant few standard tests that all cars undergo. It's an important test, too, given how common this type of accident is; James Dean died in a frontal offset collision. Frontal offsets have a particular propensity to cause extensive damage: the energy of the collision is focused on a smaller portion of the vehicle, thus causing more damage. In fact, in terms of energy absorption, a direct, in-line, "head to head" collision is safer! Clearly our intuition begins to fail us at this point, our instinct even more so; two people destined for a head-on collision will swerve, unfortunately magnifying the danger of the impact by reducing the contact area of the collision. Nonetheless, this idea of spreading a force over increased surface area is one that is often understood (or at least utilized) by people using snowshoes. The same principle is what allows people to lie on a bed of nails.
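To put numbers on the snowshoe and bed-of-nails principle (pressure is just force divided by area), here's a quick sketch; the body weight and nail-tip size are assumptions for illustration:

```python
# Pressure = force / area: the same force is harmless when spread out.

weight = 800.0         # newtons, roughly an 80 kg person (assumed)
nail_tip_area = 1e-6   # about 1 mm^2 per nail tip, in m^2 (assumed)

for nails in (1, 1000):
    pressure = weight / (nails * nail_tip_area)  # pascals
    print(f"{nails:4d} nail(s): {pressure / 1e6:8.1f} MPa")

# One nail concentrates ~800 MPa and pierces skin; a bed of 1000 nails
# spreads the same weight over 1000x the area, to ~0.8 MPa, which skin
# tolerates easily.
```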

At least we have some kind of standards! Check out other poorly faring vehicles here and here. In closing, I want to point out that heavy modern cars aren't safer either--in fact many large vehicles (trucks, SUVs) fare worse in passenger protection than smaller vehicles, for a few fairly obvious reasons. It's also a matter of perspective: presume large vehicles are safer for the occupants--what about the people in any smaller car that may be hit? You'll probably walk away from your Suburban with a few scratches, but how will you feel about having possibly killed several or all of the people in that Yaris? The truth is, large vehicles aren't safer; they're more dangerous for everyone. The only reason huge cars can be viewed as safe is that there are other huge cars out there, and that's an unsustainable and foolish perspective--keep it going and before long we're all driving monster trucks. Unfortunately even that won't help, just as our huge SUVs haven't helped, because more and more people will be injured in single-vehicle rollovers.

Certainly cars have gotten a lot safer, but as long as they are being driven by people, they will never be safe enough.

Saturday, September 19, 2009

Information, a perspective

Fair warning: I'm about to talk about math. However, I don't think you need to know or even like math to enjoy this. Suppose I were to tell you that the following images were both of the same thing. Would you believe me?






Unless you know multivariable functions or are pretty slick, you probably think I'm crazy. However, I can assure you that these are simply two different perspectives of the exact same shape; the only thing that has changed from one to the next is the place from which you are looking at it. If you're a skeptic (and I hope you are), you still don't believe me. Fair enough, but look at the animation after the jump and you don't have to believe me--you will see it with your own eyes.
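If you'd like to reproduce the effect yourself, here's a minimal sketch (assuming numpy and matplotlib) that draws a single surface z = f(x, y) from two different camera positions; the particular surface is an arbitrary stand-in, since the point is that only the viewpoint changes:

```python
import numpy as np
import matplotlib.pyplot as plt

# One surface; the two panels differ only in where the camera sits.
x = np.linspace(-3, 3, 200)
y = np.linspace(-3, 3, 200)
X, Y = np.meshgrid(x, y)
Z = np.sin(X) * np.cos(Y)  # any z = f(x, y) works here

fig = plt.figure(figsize=(10, 4))
for i, (elev, azim) in enumerate([(90, -90), (30, -60)], start=1):
    ax = fig.add_subplot(1, 2, i, projection="3d")
    ax.plot_surface(X, Y, Z, cmap="viridis")
    ax.view_init(elev=elev, azim=azim)  # move the camera, nothing else
    ax.set_title(f"elev={elev}, azim={azim}")
plt.show()
```

Viewed from straight overhead (elev=90) the surface reads as a flat pattern; tilt the camera down to elev=30 and the hills and valleys appear--same shape, different perspective.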

Saturday, September 12, 2009

IQ Scores: Junk

Intelligence is very difficult to define, if it's even possible to define at all. Historically, something called an IQ test has been considered the way to quantify it, or at least put it on some kind of standard scale. In fact there are many different IQ tests, with fairly substantial deviation in approach. I think IQ tests might be one area where we see that tradition is not adequate justification for continued use. Interestingly, there are ways to quantitatively explore an abstract idea like IQ scores, and that is through associated statistics. For instance, wouldn't it be interesting to consider IQ score as it correlates to salary? It is at least interesting enough for someone to have done the data collection, from the same people, nearly every year since 1979 (the National Longitudinal Survey of Youth, funded by the Bureau of Labor Statistics; information found here)... and the results? Smart most definitely does not mean rich! In fact, people with high IQ scores exhibit higher than average rates of fiscal stupidity. With this in mind, IQ scores can be thought of as failing to consider a rather essential and applied form of intelligence for proper functioning in this modern era: personal finance. Ultimately an IQ score can only tell somebody how good they are at taking IQ tests.

There are certainly many instances in which IQ tests fail to adequately describe something easily considered intelligence. The most common example is the phenomenon of savant syndrome, where the affected people have such unique and powerful capabilities that they almost seem intellectually super-human: recalling the phone number and address of any random person they've read in a phone book, instantly performing arithmetic on large numbers, and on and on. One of the most famous savants, local to SLC, is Kim Peek, the basis for the character in the film Rain Man. Kim is able to read books at a rate of ~10 seconds per page--effectively faster still, given his ability to read two pages in parallel, one with each eye. If that weren't convincing enough, he can recall each of the ~12,000 books he has read, and his amazing abilities extend beyond reading and recollection. Despite all this, Kim has an IQ of 73. Clearly IQ fails to capture something that would most certainly be called intelligence. Further, I think this failure is much more ubiquitous than the special case of savants; from personal experience I can say with certainty that I've met many, many people who would appear unexceptional to an IQ test but who, I can attest, have a special (and meaningful) type of intelligence.

It might seem that I'm battling IQ like a spurned testee--indeed we would expect a person who considers themselves intelligent to express their dissatisfaction with their score. Thus it may be moderately surprising that I'm battling IQ despite having gotten a favorable score, though the reason is simple: I don't think IQ scores do any good for anyone. In fact, I think it very well may be to everyone's detriment to put any reliance on such an ambiguous and not altogether indicative thing such as an IQ score. It's a sword that cuts all ways too; it would be erroneous to think that because one has a higher than average score they are somehow superior or more likely to be successful in life... if anything, a person who finds themselves to have a high IQ should recognize a statistical disadvantage and start paying closer attention to their finances! Likewise, people with very average scores shouldn't feel limited--plenty of people with average IQs have been billionaires, CEOs, athletes, world-renowned musicians, and presidents.

In short, don't let anyone tell you what you are and aren't capable of. We are all amazing, and when it comes to nearly 7 billion unique people, there is little hope of a meaningful, broad quantification, and a great chance of spectacular, rare, and unforeseen abilities arising.

Saturday, August 29, 2009

How Dangerous is the Road?

Using data from 2005, as supplied by the National Safety Council, we can add up all the deaths related to normal road travel--that is, excluding categories such as 3-wheeled vehicles, ATVs, construction equipment, trains and so forth, but including pedestrian and bicycle deaths, since the vast majority of those are caused by collisions with motor vehicles. Including the very ambiguous "unspecified transportation-related" category, the result is 45,180. Since the total number of external-injury deaths (which excludes health-related mortality such as cancer and heart disease, but oddly includes suicide) is 176,406, we can subtract to get the number of deaths unrelated to driving: 176,406 - 45,180 = 131,226. To get the percentage of external-injury deaths related to cars, we divide the category by the total: 45,180 / 176,406 = .2561. Thus 25.61%, or about one quarter, of the people who died from external injuries in 2005 did so because of car accidents.

However, if you choose not to consider suicide an external injury, the percentage jumps to 31.43%, or nearly one third.
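For the arithmetic-inclined, here are the same numbers in a few lines of Python; the suicide count isn't stated above, so it's back-calculated here from the 31.43% figure:

```python
# 2005 National Safety Council figures, as used above.
road_deaths = 45_180               # road-travel deaths, incl. pedestrians/cyclists
external_injury_deaths = 176_406   # all external-injury deaths, incl. suicide
suicides = 32_658                  # assumption: inferred from the 31.43% result

print(external_injury_deaths - road_deaths)               # 131,226 deaths unrelated to driving
print(road_deaths / external_injury_deaths)               # 0.2561 -> 25.61%
print(road_deaths / (external_injury_deaths - suicides))  # 0.3143 -> 31.43%
```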

Using statistical projections from Carnegie Mellon, we can (somewhat sloppily) extrapolate these results across all causes and by age group into the next year. We take the number of deaths in the "accidental" category and multiply by .3143, which is acceptable since this data doesn't count suicide among accidents. Then we divide the result by the sum of all causes for each age group (a sketch of this computation follows the table), and come up with:

Age      % projected to die from motor vehicle accidents

5-9        12.59 %
10-19      14.75 %
20-29      11.98 %
30-39       7.17 %
40-49       3.86 %
50-59       1.66 %
60-69       0.71 %
70-79       0.56 %
80+         0.57 %
_________________
All         1.32 %
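Here is the sketch promised above of how each row is produced; the per-age-group inputs are placeholders, not the actual Carnegie Mellon data:

```python
# Share of accidental deaths attributable to motor vehicles, from the
# 2005 calculation above (suicide excluded).
ROAD_SHARE = 0.3143

# Hypothetical inputs: age group -> (accidental deaths, deaths from all causes).
projections = {"10-19": (40_000, 85_000)}  # placeholder numbers only

for age, (accidental, all_causes) in projections.items():
    motor_vehicle = accidental * ROAD_SHARE  # projected motor-vehicle deaths
    print(f"{age}: {100 * motor_vehicle / all_causes:.2f} % projected")
```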

These figures aren't very precise, but the general idea holds: cars are really dangerous. Studies have shown that talking on a cell phone while driving increases the chance of a car accident by 400%, and my guess would be that there has been a substantial increase in driving-while-talking since 2005, suggesting a similar increase in vehicular accidents. Nonetheless, using these figures we can see that people under the age of 40 are generally more likely to die from a simple car accident than anything else. It's time to face the facts: humans are not equipped to react appropriately to everyday driving conditions. From the neurophysiological perspective we simply can't react fast enough; the time it takes a neural impulse to travel from eyes to feet is substantial enough to be measurable with a normal watch. If you want to test this, get 5 or 10 people holding hands. The game is to have one person squeeze their neighbor's hand, who then squeezes the next person's, and so on. Another person measures how long it takes the "pulse" to travel from beginning to end; that time divided by the number of people is the average time it takes to propagate a real neural impulse from one hand to the other. If you instead run the pulse from foot to hand (assuming you can find enough willing people), you get the maximal neural distance, and it takes measurably longer. When moving at 40 mph, milliseconds make a difference, and this reaction time doesn't even account for the distractions of the extremely fast-paced modern era, nor the number of unpredictable obstacles (other people on cell phones) we must track to be safe, which often exceeds the number of things we can be conscious of at any moment. Add blind turns, unskilled drivers, and thousands of other impediments into the mix, and there is no uncertainty in the result: people should not be allowed to drive.
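To see why milliseconds matter at 40 mph, consider how far the car travels before you even begin to brake; the reaction times below are assumed, spanning alert to distracted:

```python
speed = 40 * 0.44704  # 40 mph in m/s (~17.9 m/s)

for reaction_s in (0.2, 0.5, 1.0):  # assumed reaction times, in seconds
    distance = speed * reaction_s   # distance covered before any braking
    print(f"{reaction_s:.1f} s reaction: {distance:5.1f} m traveled blind")

# Even an alert 0.2 s reaction means ~3.6 m (nearly a car length) of travel
# before the brakes are touched; a 1 s distraction means ~18 m.
```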

We have the technology for automated vehicles, it is very doable, and now more than ever we have both the need and the chance to make this a reality. Combined with the fresh and tenable (enough to get $100,000 in DOT funding for development) idea of solar panels embedded in roads, we could save many lives, increase the efficiency of travel and energy use, banish automotive and power plant pollution, etc. etc. Where is the downside?

Am I the only one thinking this through??

Friday, August 28, 2009

Energy? Let's Keep it Real.

When I was in elementary school my dream was to make a perpetual motion machine, the kind of thing commonly referred to these days as an "over-unity" device. I'm all for other people trying, but I no longer feel the need to waste my time on it. Of course, there are loads and loads of people lacking a strong scientific background trying to come up with these devices, and as such it is useful to know a bit of the science so we aren't so easily deluded into believing their claims. Personally I'm a fan of innovative approaches and of casting doubt on commonly held assumptions, but there are definite limits to this concept--at some point, you are just wasting time trying to come up with results that have long been known (and which were discovered by geniuses who got lucky, something unlikely to happen again for any naive experimentalist).

Conservation of energy is the first law of thermodynamics and fundamental to every physical science; it has shown up in every one of millions of experiments and is about as established as a theory gets. Even more, the theory makes a lot of sense and is descriptive to the extent that it has encapsulated and explained every experimental observation yet made. Of course, science hinges on the precision of explanation, which means that experimentation against common conception--a theoretical dictum such as thermodynamics--is not a threat but rather either 1. a chance to show that common conception is accurate, or 2. a chance to show it is incorrect and must be changed to accommodate new observations. Quality observation is very difficult, and it is easy to make measurement mistakes that lead to erroneous results, as was the case with the famed events surrounding cold fusion. The scientific community realizes this difficulty and thus relies on a largely unofficial system called peer review: when you submit your results to a reputable journal, a small group of individuals, including some in the field of concern, reviews the document for possible experimental errors. Rather than publishing it outright, the expectation is that you receive your paper back with questions and concerns, to which you must respond or else forgo publication in that journal. This process is often not enjoyed by scientists, but I'd say on the whole it is accepted as important, when not disheartening. With that background, it makes sense that there was controversy over cold fusion: the researchers went to the popular media, which lacks both peer review and the knowledge to vet the material. As it became clearer what such a technology would mean for the world, disappointment grew as other scrambling scientists failed to reproduce the results. Here we see the importance of proper scientific procedure, and the reason pseudosciences always get air-time on the local news but never space in reputable journals. This is also why anything related to emerging science in mass media should be taken with serious skepticism (though if I take my science pants off, I'd also argue that all mass media content should be avoided at all cost).

With perpetual motion acknowledged as a non-starter, the closest we're going to get to "free energy" in the real world and foreseeable future is nuclear power. That's not to say nuclear power is anything less than enough; the process converts mass directly into energy, and there is an incredible amount of energy stored in mass. Einstein's famous and very well proven equation shows this clearly: energy = mass * (speed of light)^2, aka E = m*c^2 (the c is thought to stand for celeritas, Latin for speed or swiftness). Thus even the slightest amount of mass stores an amount of energy proportional to the speed of light squared, which is an incredible amount; the Fat Man bomb dropped on Nagasaki released the energy of just ~1 gram of mass (about half the mass of a US dime) being converted into energy.

This is actually really easy to calculate with the help of Google's calculator, since Google is just awesome like that. The Wikipedia article says Fat Man released about 88 terajoules of energy. Since we know E and c, rearrange to solve for mass:

E = m c^2
to
E / c^2 = m

and simply google "88 terajoules/(c^2)" (or clicky here).
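If Google is ever down, the same calculation takes a few lines of Python; the 73 kg body mass is an assumption thrown in for the Skinny Man comparison below:

```python
c = 299_792_458.0   # speed of light in m/s
E_fat_man = 88e12   # Fat Man's yield in joules, per the Wikipedia figure

mass_converted = E_fat_man / c**2                # m = E / c^2
print(f"{mass_converted * 1000:.2f} g")          # ~0.98 g of mass became energy

body_mass = 73.0                                 # kg, assumed "rather skinny" mass
print(f"{body_mass * c**2 / E_fat_man:,.0f}x")   # ~74,600x Fat Man's energy
```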

So the whole bomb weighed over 4,000,000 grams... had all of that mass been turned into energy, there would probably no longer be a place called Earth. Likewise, my body mass (and I am a rather Skinny Man) converted directly into energy would release about 74,000 times more energy than Fat Man. One could guess, then, that the next great energy discovery will be how to turn some more stable mass (as in, not plutonium) into energy, which is the idea behind cold fusion: coaxing ordinary hydrogen isotopes to fuse and release nuclear energy without the extreme conditions of a star. Cold fusion is generally considered impossible, but some researchers continue to look into it.

Electromagnetism was discovered in the early 1800's, so it's pretty safe to say that any secret way to get free energy with magnets and electricity would've been figured out by now, particularly given that we have explored electromagnetism (EM) at the most fundamental (quantum) level. EM is one of the four fundamental forces in physics: strong nuclear, weak nuclear, EM, and gravity. Since we're on a physics roll: connecting these four forces into a single theory (referred to as unification) is the holy grail of physics research today, and the person who figures it out will probably become the most famous scientist in history. String theory (actually theories) is an untestable proposal for the unified theory; since its variants are untestable, they aren't considered scientific, and thus aren't viable candidates until tests are developed.

Back to nuclear power: recently a story in the local paper had our new governor, Gary Herbert, saying much about the role of nuclear power in future infrastructure. This all stems from the current Energy Secretary, Steven Chu, pushing for nuclear reactors as the future energy source for the USA. It has been suggested that the US has wasted the past 30 years by not developing energy infrastructure based on nuclear power, and this is true. Nuclear power is the cleanest, most sustainable, and most efficient way to get power. Likewise, there is a lot of opinion that the explosive growth seen in China over the past few decades has been fueled by nuclear power, and it isn't difficult to see that without this kind of powerful generation technology, growth couldn't have kept such a pace.

There have been plans put forth for miniature reactors, termed "neighborhood nuclear reactors" or "nuclear batteries." Residing in a 10-foot cube of heavily reinforced concrete, such a reactor can provide power for 20,000 homes for 10 years. Divided evenly among 10,000 households, the projected cost for a decade of electricity is $250. Does a backyard reactor sound like a bad idea? Absolutely not. Even if a group were able to secretly reach the buried cube, they would need to penetrate several feet of reinforced concrete. Assuming they were able to do that, they would need to do it many times, as each reactor contains only a diminutive amount of nuclear material. They would be better off just buying some on the Internet, which anyone can do (I used to have a bookmark for an online store with plutonium available for purchase, but alas, no longer). Assuming these would-be idiot terrorists had secured enough nuclear material, they would then need the resources of a nation to refine it into something weapons-grade, not to mention build the necessary detonation device. Thus, there is no risk of terrorism aided by nuclear batteries, QED.

What about catastrophic failure, as in Chernobyl or Three Mile Island (the event that stopped us building reactors)? Not possible. First, nuclear power technology has come a long way since the 1970's, just like everything else. Second, the mini-reactors are closed systems with no moving parts; there is no way for them to catastrophically fail. Third, there isn't enough radioactive material in them to do much damage in the impossible case that they did.

Quick digression: radiation gets a bad rap because of a few common misconceptions, so it's re-education time! Everything you see is radiating! Color is simply electromagnetic radiation in the range of frequencies we happen to be able to see... in other words, light is radiation. Heat is also radiated, which is why something "glows red hot." In fact, there is a whole construct called the electromagnetic spectrum, on which all frequencies of radiation are described; on this spectrum reside visible light, microwaves, radio waves, gamma rays, X-rays, and so on. Thus there is an important distinction to be made between types of radiation, and it's very simple: there is ionizing radiation, and there is non-ionizing radiation. Things like light and radio waves are non-ionizing, which means there is no risk of cellular damage. You can think of it in terms of light: light can't penetrate a piece of paper (otherwise paper would be transparent), much less your skin, and neither can many other forms of radiation. On the other hand, there are very powerful forms of radiation that can ionize. These compact rays of energy are so powerful and concentrated that they literally knock atoms out of molecular bonds, and that is a bad thing for us cellular, molecular creatures. A small dose of ionizing radiation will probably not have major effects, which is why it is considered OK to have an X-ray done every once in a while. A large dose of ionizing radiation will completely disrupt the cellular processes that allow a living thing to live, and is thus able to cause extremely fast death. However, ionizing radiation isn't necessarily a negative thing--Carl Sagan postulates in Cosmos that the occasional ray that manages to penetrate the ozone layer may have been critical in evolution, by knocking apart random pieces of DNA with occasionally beneficial side effects. By analogy, we might imagine a lucky hominid named Peter Parker getting hit by an interstellar ray in such a way that he gains super-human, spider-like abilities, making him an exceptionally viable reproductive candidate (all the ladies know Spider-Man is hawt). Thus evolution could depend on cosmic rays for random mutation, with similar albeit far more subtle results. Amazingly, simple forms of life have been found that can repair cellular damage due to radiation. For one, this opens the possibility of anti-radiation medications, but it also means that were humans to wipe out most life on Earth in a global nuclear war, with the ozone layer completely destroyed, other forms of life would carry on despite the heavily irradiated environment, ionizing and otherwise.
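To make the ionizing/non-ionizing divide concrete, photon energy E = h*f tells the story; the ~10 eV threshold used here is a round-number assumption for when molecular damage becomes possible:

```python
h = 6.626e-34   # Planck's constant, J*s
eV = 1.602e-19  # joules per electronvolt
IONIZATION_THRESHOLD_eV = 10.0  # assumed rough threshold for molecular damage

for name, freq_hz in [("FM radio", 1e8), ("visible light", 5e14),
                      ("X-ray", 1e18), ("gamma ray", 1e20)]:
    energy_eV = h * freq_hz / eV  # energy of a single photon, in eV
    kind = "ionizing" if energy_eV > IONIZATION_THRESHOLD_eV else "non-ionizing"
    print(f"{name:14s} {energy_eV:10.2e} eV -> {kind}")

# Radio and visible light fall far below the threshold; X-rays and gamma
# rays exceed it by orders of magnitude, which is why dose matters there.
```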

Nonetheless, nuclear power is the only viable energy source for the very near future. And it can't happen soon enough when you consider the amount of pollution from coal and fossil-fuel power plants... which is so extensive that nearly every body of water is severely contaminated with mercury, a dangerous neurotoxin. In case you don't know the connection: coal-fired power plants are by far the greatest source of mercury emissions, with about 50 tons released into the air each year according to EPA estimates from 2000. I would much rather have spent nuclear material buried in my literal backyard than be breathing mercury. Let's get the ball rolling, folks!