Wednesday, April 21, 2010

The Secret to Weight Loss

This is a "common misconceptions" post that I've been meaning to do for a long time, and thanks to a recent article in The New York Times I finally have a good reason. More on that later.

Everybody knows weight loss is a big deal; the fact is obvious from the astounding range of products and services with weighty promises (lose 30 pounds in 30 days!!!), and the advertisements assault us constantly, from every possible angle. Given that the majority of Americans are considered overweight in a culture that reserves its highest regard for the exact opposite build, it's really no surprise that weight loss is big business. The real surprise is just how successful such ventures are when practically all of them make outrageous claims and just as many (if not more) are wholly ineffective. The truth is that, with few exceptions, commercial weight loss products are simply fraudulent--they are designed to take your money, not to help you lose weight.

I know the secret to losing weight, and I'm willing to share it... for free! It is very simple, and not simple in the subtly very complicated way, just simple. Ready?

How to lose weight:  Eat less.

It's a matter of physics. Imagine an extreme case where a person doesn't eat or drink anything; by the laws of nature, it is impossible for that person to gain weight. It would be just like setting a scale in a sealed room: it would be very silly to think the scale might at any point suddenly measure more weight than it has all along. Humans are magnificently, extraordinarily, incomprehensibly complex systems, but that doesn't exempt us from the laws of physics. Unless more stuff is added to a body, that body will either maintain or lose weight. In case it isn't obvious, let me remind you that abstaining from all consumption for longer than a little while is a bad idea--remember, the rule is to eat less, not to eat nothing.

Let's explore the physics in slightly more detail. The main reason we eat is to supply our body with energy; our bodies need fuel to keep the magic alive, just like a car needs gas to move. Clearly it would be a bad setup if the energy we consume couldn't be stored; like a car without a gas tank, we wouldn't get very far. There are a variety of ways the human body can store energy, but the presently relevant one is best known as fat. Call me crazy, but next time you see that extra bit of flab, try being grateful--if it weren't for that "unsightly" bit of excess, a few missed meals would result in death. I don't know about you, but I'd rather have a less than optimal social image than be dead.

So fat is stored energy, but what's this energy? Is there any way to quantify it so that its consumption might be regulated? In fact, yes, there is! The energy in food is measured in Calories, where a food-label Calorie is actually a kilocalorie, or 1,000 calories. A calorie is a unit of energy, just like an hour is a unit of time. If you eat 2,000 Calories in a day and only use half of them, the rest will be stored, some portion of them as fat--it's as simple as that! If you are gaining weight and it's not because you're building muscle mass, you are taking in more energy than you're using. Here's the Eureka moment!

How to lose weight (revised): Eat fewer calories than you use.
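The revised rule is just arithmetic, so it can be sketched in a few lines of code. Note the 3,500 Calories per pound of body fat figure below is a common rule-of-thumb approximation, not something from this post:

```python
# A minimal energy-balance sketch. The ~3,500 Calories per pound of
# body fat figure is a common approximation, not a precise constant.
CALORIES_PER_POUND_FAT = 3500

def weekly_weight_change(daily_intake, daily_expenditure):
    """Estimated pounds gained (+) or lost (-) per week."""
    daily_surplus = daily_intake - daily_expenditure
    return 7 * daily_surplus / CALORIES_PER_POUND_FAT

# Eating 500 Calories/day less than you use -> about 1 lb lost per week.
print(round(weekly_weight_change(1500, 2000), 2))  # -1.0
```

In other words, a modest, consistent deficit does the job; no product required.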

But wait--what about fatty foods, exercise, and metabolism? Don't these play a major role in weight loss? Let's look at each of them.

Fatty Foods
One of the strongest diet-related misconceptions around is that eating foods with excess fat--saturated, unsaturated, or otherwise--will lead to increased body fat. This isn't true: food fat doesn't automatically turn into body fat. Perhaps this misconception arose because lipid nutrients and adipose tissue are both known colloquially as fat, but the notion that consumed lipids transform directly into adipose tissue is as silly as the notion that eating brain will make a person smarter. Anybody can eat pure fat every day and lose weight, because the amount of fat in a food doesn't matter for weight management; what matters is the amount of Calories in the food and how much food (ergo how many Calories) is consumed. It's true that fat, with 9 Calories per gram, has a higher energy density than protein and carbohydrates, which have 4 Calories per gram, but for the purposes of weight loss this is moot--all Calories in a food, regardless of the source, are accounted for by the "Calories" figure on every nutrition label. Predictably there's a fair degree of complexity in how effectively food energy is captured, but the given number of Calories represents the maximum; if you closely regulate energy intake, you will find there are no magical foods that cause body fat. Often, however, energy intake is far from regulated, far even from monitored, and it is very easy to underestimate how many Calories are eaten in a day. One case deserves special mention: high-fructose corn syrup (HFCS), the modern sweetener du jour, has been shown in a recent Princeton study to lead to more weight gain in rats than equal amounts of cane sugar. The theory I've heard is that HFCS is far more easily digested than cane sugar, and since digestion itself requires energy, HFCS yields more net energy than an equal amount of sugar.
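The energy densities above are all you need to reconstruct the Calorie figure on a nutrition label. A quick sketch (the example snack is made up for illustration):

```python
# Calories on a nutrition label come from the macronutrients at fixed
# energy densities: 9 Calories/gram for fat, 4 for protein and carbs.
def label_calories(fat_g, protein_g, carb_g):
    return 9 * fat_g + 4 * (protein_g + carb_g)

# A hypothetical snack with 10 g fat, 3 g protein, 20 g carbohydrates:
print(label_calories(10, 3, 20))  # 182
```

Notice that fat contributes more per gram, but the total is still just a sum; there's nothing special about where the Calories come from.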

Exercise
When people think weight loss, they usually think exercise. It's always a point of contention when I say it, but exercise does very little to hasten weight loss. The reason is that the body burns a lot of energy no matter what it's doing; for most people, exercise causes only a marginal increase in energy consumption over that already high baseline. Remember the NYTimes article I mentioned? Here's a quote from it:
“In general, exercise by itself is pretty useless for weight loss,” says Eric Ravussin, a professor at the Pennington Biomedical Research Center in Baton Rouge, La., and an expert on weight loss.
The exception here is athletes, who require many more calories than everybody else. This is because athletes' bodies burn energy at a higher rate--in other words, they have a higher basal metabolic rate. For those of us who aren't professionally physically fit, the connection between exercise and weight loss isn't anywhere near as clear cut. For more information on this topic I recommend reading the aforementioned NYTimes article, "Weighing the Evidence on Exercise." Beyond weight loss, keep in mind that frequent aerobic exercise is universally acknowledged as a critical component of cardiovascular health.

Metabolism
One of my pet peeves, if you can call it that, is when people disseminate false information. We live in an age when almost the full knowledge of Earth is accessible on demand, so the reasoning goes that it's time we stop defaulting to wild speculation and just google it. Of course I have nothing against wild speculation; my displeasure arises when the speculation is presented as fact. I'm bringing this up because it's relevant to the topic at hand: metabolism. Everybody has heard the word--it's used all the time, especially in regard to weight management--but what does it mean? What is metabolism? For all the mention it gets, you'd think everyone would be familiar with what exactly is being referred to. If you visit the Wikipedia page for metabolism, you might find that the subject is rather complicated; the summary refers to cellular respiration, metabolic pathways, and the carboxylic acids of the citric acid cycle. That doesn't sound like weight loss! Metabolism is something of a shotgun term that refers to the chemistry of life. The basal metabolic rate is a bit more specific: it is the amount of energy an organism expends while at rest and in a post-absorptive state. Since the basal metabolic rate is roughly energy expenditure, it should indicate how many Calories are needed to manage weight, and indeed it does. Interestingly enough, metabolic rate is strongly correlated with lean muscle mass, and roughly the same figure has been arrived at across people: about 16 Calories per pound of lean mass per day. This means an estimate of how many Calories you need each day can be found by multiplying your lean mass by 16. It also points to what other studies have shown: the best known way to increase the basal metabolic rate is to increase lean muscle mass.
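The 16 Calories per pound of lean mass rule turns into a one-line estimator. A sketch (the example person's weight and body fat percentage are made up, and this is of course only a rough estimate):

```python
def daily_calorie_estimate(weight_lb, body_fat_fraction):
    """Rough daily Calorie needs from lean mass, using the
    ~16 Calories per pound of lean mass per day figure."""
    lean_mass = weight_lb * (1 - body_fat_fraction)
    return 16 * lean_mass

# A hypothetical 180 lb person at 20% body fat has 144 lb of lean mass:
print(daily_calorie_estimate(180, 0.20))  # 2304.0
```

Eat fewer Calories than that estimate and, per the physics above, stored energy has to make up the difference.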

Just one final note: losing more than a pound or two a week is neither healthy nor permanent.

Sunday, April 18, 2010

Nexus One, an Android

I've had an iPhone since shortly after they were first released, nearly three years now. For the most part, I've enjoyed it. These days, particularly when it comes to electronic devices, three years is a really long time; as such, it's almost difficult to recall why the iPhone had the hype it had. One thing to recall is that the app store, which is now probably the most attractive and well known feature of the phone, didn't exist when the phone first came out. The reason the iPhone was viewed as revolutionary (and it was) is that it was the first cell phone to give what could be called functional access to the Internet, where most websites were available on a mobile phone without any modifications. Clearly the Internet has revolutionized society; its movement from being available only on home computers to being available almost anywhere with cell reception has been similarly transformative.

The availability of the whole content of the Internet, many terabytes of information, on a diminutive device feeling like a polished stone, is practically inconceivable to me. But the notion is one conceived many times over in the science fiction canon. The most obvious example I know of is the device which shares the name of the book in which it resides: The Hitchhiker's Guide to the Galaxy. In his remarkable series Cosmos, Carl Sagan repeatedly fantasizes about perusing the fundamentally similar, fictional Encyclopaedia Galactica, a compendium of all the knowledge gathered throughout the existence of an intergalactic species. Both of these bits of media originate around 1978, a time when something like the iPhone and the Internet must have seemed far out by any reckoning; it is apparent that at least two forward thinking people saw such a device as a product of civilizations living on a galactic scale.

From 1978 the iPhone must have seemed a long way off, considering the primitive original Apple Macintosh didn't even hit the market until January of 1984, though development started in 1979. The Macintosh had an 8 MHz processor, 128 KB of RAM, and a 9" 512x342 monochrome display. Fast forward 23.5 years, and though our progress in intergalactic exploration hadn't much changed from naught, our computers had made unexpected advances! The original iPhone has a 412 MHz processor, 128 MB of RAM, and a 3.5" 320x480 18-bit color display--it's roughly 52 times faster, has 1,000 times more memory, and a far superior display. It fits in a pocket and can run all day without needing a charge; it can replace books, newspapers, televisions, and the list goes on beyond any reasonable expectation.

Three years later, the revolution of Internet on a phone has taken place, and with the giant leap taken, all that remains is incremental improvement: the Nexus One. This past December there was a buzz about the web as rumors of a Google phone spread. The buzz persisted for a little while and then mysteriously subsided. The Google phone arrived almost as if it had been a secret all along, almost as if it remained a secret--from what I've read, the sales of the device aren't remotely as impressive as those of the iPhone. But what it lacks in popular perception, it makes up for in specs: 1 GHz processor, 512 MB RAM, 3.7" 800x480 display--about twice an iPhone. Having just recently mentioned that GHz isn't a very important measure, I'd be foolish to regard that as a concrete measure of performance; it isn't, but the Nexus One noticeably outperforms the iPhone in every respect. Interestingly enough, the Nexus One matches or exceeds the recently released iPad in almost every spec except screen resolution--it's truly a remarkable device.

One of the things about today's cell phones, also called smart phones or super phones, is that they're actually powerful little computers masquerading as phones. The iPhone does a very good job of hiding the power under its hood, and this is very much one of the reasons I chose to go with a Nexus One over another iPhone; the Nexus One has only a thin veil hiding the fact that it's a computer running a version of Linux. In order to write an application for the iPhone, one needs to pay Apple about $100 to apply for the opportunity. If Apple chooses to accept you, there are a number of steps to follow, including authorizing a particular device, associating it with a particular machine, writing particular code, and accepting a very hefty agreement which includes conditions such as not displaying your device in public and the right of Apple to take ownership of your code without notification or recompense. The $100 only covers one year--every year requires another $100 to continue participation. I did go through this process at some point, but I didn't get as far as getting code onto a device before my membership expired; after that, I gave up. The Nexus One is a different story: anyone can write anything and put it on their phone at any time, for free. The first day I had my new phone, I had a custom application uploaded to it. The second day, I gained root access, installed a custom bootloader, and flashed a modified version of the Android operating system known as CyanogenMod; in other words, I now own my phone.

The subject of science fiction is relevant for one last note: the name Nexus One comes from the most advanced android in a story called "Do Androids Dream of Electric Sheep?" better known as "Blade Runner," by Philip K. Dick.

Nothing says "I'm a geek and I know what I'm doing" like a command line:


One thing that really stands out about the Nexus One versus the iPhone is the much higher resolution display (click to see a version large enough to tell the difference, also note that some aliasing in the form of red, blue, and green banding may appear depending on your monitor):



Here's a side by side comparison:



There are still a few things I like about one more than the other, but the power of the Nexus One is that I can change nearly everything as I see fit--the same most certainly cannot be said for the iPhone.

Monday, April 12, 2010

Music for a Season

I can't speak for the daylight hours just passed, thanks to a fittingly peculiar sleep cycle (which I happen to appreciate despite, or perhaps for, its general incongruity), but early this morning I emerged from indoors to find the weather teetering upon perfection. It was the type of occasion that calls for the composition of a remarkably accessible, evocative, and timeless piece of music; fortunately for me, given my lack of musical training and the otherwise considerable difficulty of composing such a sound, someone has already taken and decidedly owned the feat. Thus it is with endless gratitude to, and for the inspiration of, Antonio Vivaldi, paragon of baroque composition, that I present music fit for this season: the third movement of "Spring," from The Four Seasons:

Wednesday, April 7, 2010

SparkFun SEN09423 integration issues

Anyone seeking to use SparkFun's SEN09423 breakout board for the LPY530AL as a position sensor should be advised that the two 4.7 µF capacitors (C1 and C2 on the schematic) used for the high-pass filter need to be removed and the contacts bridged. This image shows which tiny bits are of concern; however, note that it seems the resistors indicated therein do not need to be removed. This information comes thanks to a few people who know what they're doing (which excludes myself), as discussed on the SparkFun forums here and here. From what I gather this may be an issue with numerous (all?) SparkFun breakouts including ST rate gyros; the two threads alone implicate boards containing the LPR530AL or LPY530AL, including the IMU 6DOF Razor. This is a particularly odd case because Inertial Measurement Units are mostly used for dead reckoning, and the inclusion of these caps will effectively frustrate anyone with such an intent. As far as removing them, good luck! Here's my own picture of how gigantic these caps are:


I found the best luck (given a fine-tip soldering iron) with adding a little solder to one side so that solder wick can get most of it. Then just heat up the other side and push gently. The first one I removed took the contact pad with it; if that happens to you, you may or may not be high and dry. I managed to salvage the situation by drawing between the appropriate areas with a pencil. In case you weren't aware, graphite is conductive--clearly this is a handy bit of information on occasion.

For a slightly more general audience, here's some interesting information. The capacitors pictured are about 0.065 inches wide, or 1.66 mm; the skinny dimension of the penny pictured is about 1.52 mm. I said these capacitors are gigantic, and relatively speaking this is true! Relative to molecules, light rays, and subatomic particles, sure, but also relative to the vast majority of capacitors out there. We will get to how in a minute, but first a brief overview. The electronic components most of us are used to seeing are the ones attached to those (usually) green boards also known as circuit boards, like this one:



These days most circuit boards we encounter are printed circuit boards, or PCBs, called such because the production process resembles printing to varying degrees. The principal elements of a PCB are, put simply, fiberglass, copper or another conductive metal, and solder mask. The fiberglass makes up the board-ness, the copper is akin to wiring for conducting electricity amongst the components, and the solder mask, the colored part, is a coating that solder doesn't stick to, in place so that connections aren't made accidentally by wandering solder. Not too long ago, I thought the PCB was made of silicon; after all, electronics are associated with silicon, and from a naive perspective the shiny green board looks like something that might be called silicon. But if that's not it, where's the silicon? In an IC, of course! These days almost all the action of an electronic device happens in an Integrated Circuit, which looks something like this:


Inside that chunk of plastic there's a wafer of silicon, which could contain anywhere from hundreds to billions of electronic components. Wouldn't it be nice if there were a window that showed the silicon? Like this one?



Instead of discrete components like the capacitors I shared above, these components are formed by depositing and patterning (very precisely) successive layers of various materials in a process called photolithography, resulting in something like a miniature PCB. The CPU is the biggest, most complicated IC in the box that is your computer (unless you have a very fancy video card), and because of this it looks different than all the others. For one, you can't even see it; it's hidden underneath a big heatsink, which is there to help get rid of all the electrical energy that turns into heat in the CPU (the process is conceptually similar to heat generated from friction). CPUs generate so much heat that one would burn itself to a crisp almost instantly without a heatsink. But even if you remove the heatsink (after you've turned off the computer), modern processors have another metal plate which hides another sealed package that finally contains the silicon. Here we're finally at the land of magic: as of now, April 2010, Intel has a 32 nm manufacturing process, which means that the typical component width is less than 32 nm. This also means that the 1.66 mm wide capacitor above is about 52,000 times wider than a single component on a 2010 Intel CPU--relatively gigantic indeed. Granted, most things we know are relatively gigantic compared to 32 nm, particularly since that's quite a bit smaller than the shortest wavelength of visible light--violet, at about 400 nm. Reality check: we're making electrical components so small that a ray of visible light can't even hit them, so small that even the most powerful optical microscope couldn't see them, way smaller than the average bacterium. Really!? Apparently that's not enough: industry projections have us with 11 nm chips in 2022, which would make each component about as wide as 55 carbon atoms. Interestingly, the first time a single carbon atom was photographed (after a manner) was September 2009.
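The roughly 52,000-times-wider claim is just the ratio of the two lengths mentioned above, which is easy to check:

```python
# Checking the size comparison: a 1.66 mm capacitor versus a 32 nm
# process feature, both expressed in meters.
cap_width_m = 1.66e-3   # 1.66 mm capacitor width
feature_m = 32e-9       # 32 nm process feature width

ratio = cap_width_m / feature_m
print(round(ratio))  # 51875, i.e. roughly 52,000 times wider
```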
Of course, there are certain problems that what we know as computers--that is, Turing-class machines--can't practically solve: problems that could be described in a hundred or so lines of computer code but would take a computer the size of the universe longer than the universe is expected to exist to solve. Not content to take limitations as they're handed to us, work is well under way on a different class of computer: the quantum computer. Quantum computers are very different in that they can take certain very specific problems, like the ones I just mentioned, and solve them dramatically faster. I don't know enough about quantum computation to judge whether they'll ever reach the ubiquity our Turing machines have, but I can say one thing for certain: there's not much certainty in the future! Intel will probably plug ahead and reach 11 nm in 2022, but the real question is: will that even be relevant? I'm willing to bet not; it almost seems like sitting in 2002 and projecting that by 2012 our CPUs will run at 11 GHz--as it turns out, GHz aren't all that important. Take a top of the line 3.8 GHz Pentium 4 from 2004 and I assure you a 1.8 GHz chip from today will outperform it. Maybe the state of the art in 2022 will be a 100 MHz chip with a million cores--only time will tell.

Thursday, March 25, 2010

Computer Graphics

As part of my course on computer graphics this semester, the class has been writing a ray tracer. The details of ray tracing aren't really worth going into; instead I'd rather share a picture (more technically, a rendering) that is the result of my work.


If you are particularly learned, you'll recognize this figure as the Mandelbrot set. In case you didn't recognize it, at least you will in the future! This version in particular is really an abuse of the ray tracing engine we've developed; typically other, much more efficient means are used to generate such an image. However, using the ray tracer lets me generate images that simply couldn't be made with the more traditional methods. For instance, this rendering uses a reflection model to add an additional layer of the delicious recursiveness that characterizes fractals. And while you could do the same with the more traditional code, having the code itself, versus a ready-made rendering program, allows me to color the actual Mandelbrot set, which is almost always left black:



For the sake of completeness, here's a more traditional ray traced image that shows off a good variety of the capabilities a ray tracing engine built in a single undergraduate semester can have:


As you can tell, my cylinder code still has some issues that need to be resolved.
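Incidentally, the test that decides whether a point belongs to the Mandelbrot set is only a few lines: iterate z → z² + c and check whether it stays bounded. Here's a minimal sketch (the 100-iteration cutoff and escape radius of 2 are the standard choices, not specifics of my ray tracer):

```python
def in_mandelbrot(c, max_iter=100):
    """Return True if the complex point c appears to be in the
    Mandelbrot set: iterate z -> z*z + c and check boundedness."""
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:   # escaped: definitely not in the set
            return False
    return True          # never escaped within max_iter: assume member

print(in_mandelbrot(0))  # True: the origin never escapes
print(in_mandelbrot(1))  # False: 0, 1, 2, 5, 26, ... blows up
```

A renderer just runs this test (or a colored variant counting the escape iteration) once per pixel over a grid of complex points.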

Tuesday, March 23, 2010

By the Numbers

I presume most people recognize that there is a vague connection between statistics and probability, but, having taken a course in probability theory, I'd be willing to bet the farm that very few people realize the full breadth of intimacy between the two. I say this in particular because, despite having studied both, I'd count myself as one amongst the naive. From the outset probability is simply difficult, and often counter-intuitive. Not only does probability proceed in ways contrary to our intuition, it does so in an amazingly tricky way! Maybe it's a function of how easy it starts out: given a typical six sided die, most everyone knows that the chance of guessing which number comes up is one in six. Easy enough: you pick one side out of a total of 6, so the probability is 1/6. The common understanding of probability stops there, for the simple reason that any situation even marginally more complicated becomes remarkably more logically and mathematically sophisticated. Suppose I'm flipping a coin and you're guessing the results. For some reason you're having terrible luck and you've guessed wrong 10 times in a row; what's the probability that you guess the next flip wrong as well? Think about it for a minute, and when you've logically arrived at what must certainly be the answer, highlight the following space for the answer:  1/2

Next, try to logically deduce the probability of guessing incorrectly for 10 coin flips in a row. Answer:   1/1024

It only gets so much worse from there, to the extent that I'm really not confident I could present the correct answers myself! Even so, I can't help but try one more. Assume that 4 out of 5 people prefer Crelm toothpaste. What's the probability that, from a selection of 5 random people, exactly 4 of them prefer Crelm? Answer (I think): 256/625
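For the skeptical, the last two answers can be checked with exact rational arithmetic: the ten-wrong-guesses probability is (1/2)^10, and the Crelm question is a binomial probability, C(5,4) · (4/5)^4 · (1/5):

```python
from fractions import Fraction
from math import comb

# Ten wrong coin-flip guesses in a row:
p_ten_wrong = Fraction(1, 2) ** 10
print(p_ten_wrong)  # 1/1024

# Exactly 4 of 5 randomly chosen people prefer Crelm, given p = 4/5:
p = Fraction(4, 5)
p_four_of_five = comb(5, 4) * p**4 * (1 - p)
print(p_four_of_five)  # 256/625
```

So the answer I hedged on above does check out.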

The important notion here is that a probability says something both nebulous and concrete about reality. If a truly random die is thrown 6 million times, in all likelihood each number will have come up about 1 million times. If 4 out of 5 people really do prefer Crelm, then the chance that a randomly selected person prefers Crelm is 4/5, or 80%. As much as we all like to think the statistics don't apply to us (because we're special), if the statistics are accurate there's no way to escape them. Most of the time this is a banal statement, as when referring to whether or not you prefer Crelm--either way it's not exactly a big deal. But then... there are the other statistics. "Around 50% of US marriages end in divorce" can be a pretty hard pill to swallow for a couple walking down the aisle. I have reason to believe the number of couples who'd figure they'll end up on the successful half of that statistic while exchanging vows is much higher than 50%--clearly, if they thought it wasn't going to last, they probably wouldn't be entering the commitment in the first place. Similarly, doubting the success of the marriage from the outset probably isn't going to increase the chance of a favorable outcome. What's left is an awkward position; objectively, maybe the best one can think is that at least the odds aren't as bad as they could be--better than any casino game. However, marriage is a particularly special case for a number of reasons, the primary one being a shift in locus of control, something applicable to all interpersonal relationships; though a bit less severe, anyone who's been dismayed by the lack of a second date (etc.) knows the score. To be fair, the actual divorce rate changes based on many factors; 50% is just the overall rate. The lowest divorce rates are found in each of the following categories: first marriage, atheist or agnostic, age 30 or older, residing in the Northeast, and no cohabitation prior to marriage.

Uncontrollable statistics naturally lead to other more personally manageable probabilities. For instance, 28% of car accidents in the US happen while at least one of the drivers is using a cell phone. This is the part where I reiterate: we love to think we're special and that the statistics don't apply to us, but it just doesn't work that way. We are all special, I'm fully on board with that, but that doesn't grant any of us statistical immunity. Using a cell phone while driving (even with a hands-free headset) substantially increases the chance that you will be in a car accident, which could result in your death, or, arguably worse, the death of another/others with the accrual of manslaughter charges and the lifelong burden of knowing that you've killed someone. It's very simple: while the car is in gear, your phone doesn't exist. There are absolutely no excuses.

Friday, March 19, 2010

Technology II: State of an Art

For today's exercise, please read the following passage and give the question at the end a sincere and thoughtful rumination. Once you feel you've thoughtfully ruminated enough, watch the video.

Imagine a modern machine, one that could be called a robot, that consists of a three fingered hand mounted at the end of an arm with a range of motion similar to our own, and a single camera. Given the present state of technology, which any sensible person would describe as "quite advanced," what might this arm be capable of?




It is astounding, yes, and no less should have been expected, but there is something a bit backwards about it. Traditionally machines are constructed and used because they can do some certain task vastly better than we are able to. Naturally the machine's form and means of manipulation don't resemble ours in the slightest; otherwise we'd probably not have needed it in the first place. A good number of years ago, enchanted by the ideas of Isaac Asimov, I had a strong interest in androids--humanoid robots. But even before I knew the beginning of the true technical challenges behind building an android, I realized something: a person desiring to make a passable humanoid machine would save themselves a lot of effort and greatly increase their probability of success by doing so the old fashioned way, that is, by seeking a viable mating partner and letting nature run its course. At the time the thought was conceived mostly as a joke, and though it's still humorous, it's also quite sensible--practically speaking, I think we have more than enough roughly human shaped objects with adequately human like capabilities. Nonetheless it is almost certain that many will continue attempts to build an android, and it's far from difficult to imagine that one day a result could be described as nothing other than successful. However, one thing will remain true even then, even when androids exceed our capabilities: the human form can't do everything. No matter how dexterous or sophisticated, our fat fingered mechanical offspring won't be able to manipulate the atoms of a molecule unaided; less technically, these two handed automatons will have just as much trouble as we do doing the work of three hands. This will be a small victory for three handed people, as they will get to remain not yet obsolete longer than the rest of us--at least for the few moments it takes to add one more hand to the robot.
All silliness aside (well, OK, just most of it), there's clearly a huge number of tasks which won't benefit from the superhuman but still human capacity of these imaginary androids unaugmented. This represents a significant relief, since we aren't stuck waiting for these super androids to come along (which nonetheless probably isn't too far off, though given the rate of technological progress, relatively probably quite a ways off). In summary, I've basically stated in a very roundabout way that we are free to continue augmenting our own similarly limited mechanics, the same as we have since the invention of the first tool; we can use our already inconceivably sophisticated body of technology to extend and enhance our capabilities. Case in point: the da Vinci surgical robot. Surgeons are essentially required to have superior motor control, as even the slightest irregular movement could result in a fatality. However, no matter how talented the person holding the knife with intent to open you up, there is a fundamental biological limit to the accuracy they are capable of. Rather than just hoping their home life isn't distracting them and that their cup of coffee wasn't abnormally strong that morning, the da Vinci confers peace of mind with a laundry list of features specifically designed to maximize precision by counteracting the inherent imprecision of human hands. There are over 700 in use worldwide, and though the system is only approved for a limited number of procedures, that number is expected to continue increasing as rapidly as it has been. While it's already on its second version, I think it's a safe bet that further enhancements will be rapidly forthcoming. Of course, the proof is in the numbers, and the numbers are unambiguous--given the choice between traditional and robot assisted surgery, choose the latter! Here's a video of it peeling a grape on live television:


In conclusion, I'm compelled to once again say the same thing I've said previously: over the past few decades in particular we've been developing foundational technologies. Because each of these has such vast potential for application, the first and most obvious few applications took hold and found success. Since we are focused on a multitude of things wholly different from the vastness of yet unrealized and incredible possibilities that these technologies enable, it is natural to unconsciously assume that what we see is more or less the extent of what technology can offer, but this tacit assumption is, in my opinion, absolute rubbish. In particular the most overlooked and underutilized technology is cheap and powerful microprocessors; everyone knows that desktop processors keep getting more powerful without getting more expensive, but the bit of interest is that the processors of yesteryear continue to get smaller and cheaper. This fact in itself isn't unacknowledged; there's a well known meme that a common calculator found in a high school today has more processing power than the guidance computer that delivered the Apollo astronauts to the moon and back. The overlooked bit is that that little processor can do an awful lot more than help with algebra homework. Like what? Well, I have a video demonstration of one such device, but before you watch it consider that the processor in the device shown is essentially as powerful as a 1986 state of the art desktop that cost $6500 (the Compaq Deskpro 386), can be had for around $3, and is smaller than a dime. The whole device could probably be made wholesale for under $10.

Thursday, March 11, 2010

Electronic Music

Vitalic is a musician that constructs and delivers frequent variations in air pressure in a manner with a rather more contemporary lineage than what I've shared prior.

Thursday, March 4, 2010

Ramachandran on the encephalon

Vilayanur Ramachandran is a neurologist. What's more, he has very keen insight and a particularly effective ability to communicate. Given that the nervous system (including the brain) quickly becomes a profound topic of consideration, a person like Ramachandran can really make 20 minutes intriguing. Well, it was 23 actually, but I'm willing to wager nobody in the room wanted him to stop.

Sunday, February 14, 2010

Observations

First, one of the most notable characteristics of life is activity in the 4th dimension. Observe:


Another interesting fact is that computers also exhibit activity in the 4th dimension:

and

Almost entirely unrelated to the entries prior, here is a coffee cup as a measure of progress towards some certain success, followed by a painting and a portrait:



Saturday, February 13, 2010

Technology

I've heard that some people think technology isn't really progressing at an amazing rate. I think they're crazy. I don't think I've shared this yet, it's an example of the state of technology:


Frankly, I think we have so much technology at our fingertips that we have hardly even begun to scratch the surface of what it's capable of. On top of that, better technology is hitting the scene faster than anyone can keep up with. I certainly think that we are in a technological singularity, and that Kurzweil's condition (strong artificial intelligence) is satisfied by our own intelligence as augmented by the Internet. It's a subtle, almost secret form of artificial intelligence that, from what I gather, no one has yet realized the significance of. With the power of the Internet, a person, so willing, may learn practically anything, and at record speed--no digging through card catalogs or driving to the library necessary. Suppose you want to learn engineering but can't afford school? No problem: one of the best engineering schools in the country, MIT, has put all course materials for the first four introductory engineering courses online (lectures, notes, assignments, labs, etc) for free, available to anyone with an Internet connection. You'd probably want more than an introduction, so it's a good thing they've made freely available most of their curriculum, including that from other programs. You don't get any certificate, but does that really matter? I'm certain that a degree is worthless without an education, and that an education is no less valid for being uncertified. This isn't a new fact; it's just easier to get an uncertified education now than it has been in the past. Take Dean Kamen, the man behind this amazing prosthetic arm and many other similarly astounding creations--he didn't earn even an undergraduate degree, though he now has around 7 honorary doctorates. The real point is that now other potential Kamens are easily able to obtain the resources necessary for their talent to reach fruition.
An important addition is that I think most people have more potential than is generally realized; if this is true, then we should expect a significant increase in technological progression. The question I'll leave for you to answer is this: have we seen a significant increase in technological progression since the Internet became widely available?

Tuesday, February 9, 2010

Arbitrary archetypical emotion

Describing an emotion such as disappointment is an undertaking well served by describing situations which would cause this emotion, that is by conveying an archetypal scenario. Thus what follows.

My math professor had mentioned multiple times in class that an upcoming assignment could be done by way of spreadsheets or with a computer program. Given that I'm supposed to be a computer programmer, I figured that despite the bit of extra work involved I was either obliged or compelled to write the computer program. Hoping to be not merely a person who could write a computer program but instead perhaps someone who could (possibly, maybe) be described as having a talent for writing these programs, I spent rather more time and effort than reason would suggest to construct a beautiful program, replete with color graphs and readable formatting. I went to some length to make it work with just a single double click, dissatisfied with the alternative prospect of explaining to my instructor how to enter a command into the shell, much less how to add java to the PATH. I added an equation parser so that input like sin(e^(ln(y-sqrt(x-y)))) could be interpreted correctly, a scrollable output window for tables of values, and 2 windows for 12 full graphs, labeled unambiguously with their associated equations. I even tested it on three different platforms to make certain it would Just Work (note: it won't run on OS X, which doesn't have the latest JVM--Doesn't Just Work). So it was with pride and satisfaction that I sent it off along with the source code, wondering, wishing I could see the reaction.
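The equation parser was the fun part. Here's a minimal sketch of the idea in Python rather than the original Java; the function whitelist, variable handling, and use of `**` for exponentiation are my own choices for illustration, not details of the actual program:

```python
import ast
import math

# Whitelists are illustrative: only these names and functions may appear.
ALLOWED_FUNCS = {"sin": math.sin, "cos": math.cos, "sqrt": math.sqrt,
                 "ln": math.log, "log": math.log}
ALLOWED_NAMES = {"e": math.e, "pi": math.pi}

def evaluate(expr, **variables):
    """Safely evaluate an arithmetic expression such as
    "sin(sqrt(x) + y**2)", permitting only whitelisted names,
    whitelisted functions, and basic arithmetic operators."""
    tree = ast.parse(expr, mode="eval")

    def ev(node):
        if isinstance(node, ast.Expression):
            return ev(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.Name):
            if node.id in variables:
                return variables[node.id]
            if node.id in ALLOWED_NAMES:
                return ALLOWED_NAMES[node.id]
            raise ValueError(f"unknown name: {node.id}")
        if isinstance(node, ast.BinOp):
            ops = {ast.Add: lambda a, b: a + b, ast.Sub: lambda a, b: a - b,
                   ast.Mult: lambda a, b: a * b, ast.Div: lambda a, b: a / b,
                   ast.Pow: lambda a, b: a ** b}
            return ops[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
            return -ev(node.operand)
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in ALLOWED_FUNCS:
                return ALLOWED_FUNCS[node.func.id](*[ev(a) for a in node.args])
            raise ValueError(f"unknown function: {node.func.id}")
        raise ValueError(f"disallowed syntax: {type(node).__name__}")

    return ev(tree)

print(evaluate("sin(sqrt(x) + y**2)", x=4.0, y=0.0))  # sin(2.0) ≈ 0.909
```

Walking the parsed syntax tree with a whitelist avoids the classic trap of handing user input straight to `eval`, which would happily execute arbitrary code.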

A day later, I received the response. It was a request, for me to bring in a paper copy of my results; my professor, for whatever reason, didn't want to run my program.


That vignette could be used to describe disappointment, but with a catch: it would best illustrate the emotion only if the emotion was stated beforehand. If I had said that I was about to describe anger, frustration, or even success and satisfaction, would the story have illustrated the emotion any less? It is easy to see that a person could be angry in such a situation, frustrated too. But I could have just as easily felt flattered, were I to think that I had created a program so sophisticated it was to be handled with caution. If any of these clearly distinct emotions could have occurred, then circumstantial emotion is itself in some ways ambiguous (I hope that's not news). The ambiguity of emotion is something easily used to our advantage; it's a fact that when you feel positive emotions, you feel better, you're physically healthier. That being the case, why not exploit the ambiguity of emotion? In writing that program I learned a lot, I improved my life regardless of who sees or approves of it. I could feel disappointed, or I could feel a strong sense of self satisfaction in my accomplishment. Given those options the choice isn't very difficult! The power is in the fact that there is a choice, that you have a say in how you feel. You can spend all your time looking for reasons to be sad or you can spend all your time looking for reasons to be happy--either way you shouldn't be surprised by the results.

Saturday, February 6, 2010

One down

One assignment done, 6 or 7 to go. Ok, just an excuse to share some Rachels - Last Things Last.



Technically the piano is a percussion instrument, but I don't think that's why I like it so much. I like harpsichords a lot as well, which utilize plucking instead of striking to elicit those wonderful resonant frequencies. The logical conclusion then is that I like instruments roughly shaped like pianos. On a related note, Wikipedia says "The word piano is a shortened form of the word pianoforte, which is derived from the original Italian name for the instrument, clavicembalo [or gravicembalo] col piano e forte (literally harpsichord with soft and loud)."

Demonstration of feasibility

I'm a bit too busy to give the normal glyphic flood, but as proof following my plea for autonomous vehicles this demanded mention. The Center for Automotive Research at Stanford (CARS) is planning to send an autonomous Audi up Pikes Peak at race speeds. Pikes Peak is a mountain road used as a rally stage, with surfaces varying from packed dirt to loose gravel. Actually, an autonomous car has finished the course previously, but "only" at an average 25 mph. There is some reason to suspect the Stanford team will succeed in their intent, as they won DARPA's Grand Challenge and took 2nd place in the Urban Challenge; their car can at least drive 120 mph across the salt flats. I cannot wait to hear the results!

Saturday, January 23, 2010

Light and reflections

I was randomly trying to get a good photo of my iris when I noticed something interesting. One of my attempted variations was bouncing the flash off a nearby wall while looking at said wall.



It's not a good photo of the iris at all, but something interesting was going on in my pupil...



It almost looks like an image...



Hm, that looks familiar.




It's a horizontally mirrored image of the wall I was using to reflect my flash. Neat.

Wednesday, January 20, 2010

Max Richter

Max Richter is a contemporary classically trained composer and musician. By my judgment he is absolutely a musical prodigy, certainly amongst the most gifted musical artists of this time. His pieces can often be characterized as avant-garde as he blends modern and classical elements, breaking conventions to enable the forging of some extraordinary, transcendental aural experience. Interestingly his Wikipedia page mentions him "commissioning and performing works by Arvo Pärt, Brian Eno, Philip Glass, Julia Wolfe and Steve Reich" early in his career, which happen to be a few of my favorite artists. Predictably enough, his music has had a rather profound impact on my life several times over several albums. Nonetheless, I know enough to understand music is as subjective as it gets, to the extent that the reactions of others can be rather hard to predict. Well, there's only one way to find out; here's some evidence for my conjecture:



The thing is, like many great musicians, he doesn't just write songs, he writes albums. Thus, as great as this song is alone on this page, it is but a mere shadow of itself when played as part of the whole. I recommend purchasing all his albums, if only to ensure he's funded and motivated to continue producing lots of music for as long as possible.

Saturday, January 16, 2010

2 photos and a fabrication








Time keeps on slippin'

I like to think about really mind bending things, but even more than that I like to think about real mind bending things. For instance, I'm not sure I'll ever be comfortable in my understanding of the fact that the further we look into space the further back into time we look. This is a legitimately crazy thing--it means that given a powerful enough telescope we could watch the creation of our own universe. As impossible as that sounds, it's more or less correct; in fact, we have telescopes powerful enough to look inconceivably deep into spacetime, and we have actually captured the direct aftermath of the birth of our universe. This aftermath is known as the cosmic microwave background radiation, and its discovery earned a Nobel prize. No matter which way we look, the furthest we can see is the CMBR, it surrounds us. This is interesting because the best theory of the origins of the universe, the Big Bang theory, posits that the universe started in an extremely dense bit of space, expanding from there. But how much sense does it make that no matter which way we look we end up looking at that little bit of space? In one sense, the CMBR is a sphere that surrounds the universe. In another sense the universe surrounds it, as the universe grew from it. Either way it doesn't make any sense.

Not everything that throws us for a loop needs to be on a universal scale. Recently I read an article from NewScientist titled "Timewarp: How your brain creates the fourth dimension," which I found to be nothing short of profound. As you might gather from the admittedly bad title, the article is about our perception of time, which is something I had (surprisingly) never really considered before. Early on there is a sentence that nonchalantly flicks off a few words; "Time... is much weirder than we think it is."

Invitingly audacious, isn't it?

The whole article is well worth it, as it shows some ingenious experimental methodology and keen insight regarding something as potentially slippery as temporal perception. One thing that stood out was research apparently showing that a click track at 5 clicks per second (300 beats/minute) for 10 seconds improved performance in basic arithmetic, memorizing words or hitting a specific key on a computer keyboard by 10 to 20%.

There has been research done that shows binaural beats at certain frequencies can entrain certain brainwave frequencies, and it has been suggested that this phenomenon could possibly be used to enhance the performance of the brain. I wonder if binaural beats are somehow related to rapid beats... I know that because of the time delay with binaural beats it sounds as though there are twice as many beats as usual. Anyway, it will be interesting to see where all that research goes.

Wednesday, January 13, 2010

Intelligent Transportation Systems

In 2004 it took an estimated 6400 megajoules to build a typical computer, including a 17" CRT. This works out to 1778 kilowatt-hours (kWh), or about the average consumption of a US house for two months. Based on the 2009 US average industrial rate of $0.07 per kWh, assuming that only electricity was used and at 100% efficiency, $124.46 of the cost of the computer went to energy alone. I reckon this would represent somewhere around 10% of the total cost.

A gallon of gasoline has about 1.3x10^8 joules, or roughly 132 MJ (equivalently, about 36.6 kWh), meaning the computer would require about 49 gallons of gasoline to build. Using a rough average of current prices, $2.70 per gallon, this means about $131 worth of gasoline.
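For the skeptical, the unit conversions above are easy to check. All input figures come from the text; the cent-level results wobble slightly with rounding:

```python
# Sanity-checking the energy arithmetic for building a computer.
MJ_PER_KWH = 3.6                  # 1 kWh = 3.6 MJ, by definition

embodied_mj = 6400                # energy to build the computer (MJ)
embodied_kwh = embodied_mj / MJ_PER_KWH
print(round(embodied_kwh))        # 1778 kWh

industrial = 0.07                 # 2009 US industrial rate, $/kWh
residential = 0.12                # US residential rate, $/kWh
print(round(embodied_kwh * industrial, 2))   # ~124 dollars
print(round(embodied_kwh * residential, 2))  # ~213 dollars

kwh_per_gallon = 36.6             # energy content of a gallon of gasoline
gallons = embodied_kwh / kwh_per_gallon
print(round(gallons, 1))          # ~48.6 gallons
print(round(gallons * 2.70, 2))   # ~131 dollars at $2.70/gal
```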

Alternatively, the current average residential rate for electricity is $0.12 per kWh, at which rate the energy to build the computer would cost $213.36. Notice that a mere change of 5 cents in the cost of a kWh nearly doubles the end cost of the energy, which would most likely be reflected in the purchase price. It's important to recognize that energy and the cost thereof--from gasoline, electricity, or beyond--is extant in all facets of our modern lives. In other words, if the price of gasoline goes up, the price of everything goes up. Of course most of our electricity is generated from coal and natural gas, so the price of gasoline doesn't seem directly related to building a computer. Unfortunately that's rather short sighted, as gasoline is required to move the computer parts to and fro, not to mention to transport the coal to the power plant to begin with.

Clearly if the cost of both gasoline and electricity were to rise by a nickel, we should naturally expect everything we buy to become quite a bit more expensive. It isn't difficult to see that this in turn would most likely have dire economic consequences. This is why there's so much buzz about energy, it should be obvious that the extreme consequences of demand outstripping energy supply readily justify extreme evasive efforts.

Imagine it was discovered that a meteor sufficient to absolutely obliterate Earth was headed straight towards us with a 100% probability of collision. Our dependence on gasoline is kind of like that. Buying a Prius would be like building a large bomb shelter: it would show that you probably realize there's some kind of problem, but that you nonetheless have absolutely no understanding of its magnitude. Do you know how much energy it takes to turn bits of iron buried in the Earth into a shiny new Prius? According to an average figure per car (not the Prius specifically) given by Toyota, around 22,519 kWh, or 22.5 megawatt-hours (MWh)--the rough equivalent of 615 gallons of gas. This means that driving a Prius 31,000 miles uses about the same amount of energy as building the thing to begin with! 22.5 MWh would power the average house for over 2 years; it's quite a lot of energy.
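Put differently, the break-even point where driving overtakes manufacturing is a two-line estimate. The 50 mpg fuel economy is my assumption for the Prius, not a figure from Toyota:

```python
# Embodied energy of the car, expressed first as gallons of gas, then miles.
embodied_kwh = 22519              # Toyota's average per-car figure (kWh)
kwh_per_gallon = 36.6             # energy content of a gallon of gasoline
embodied_gallons = embodied_kwh / kwh_per_gallon
print(round(embodied_gallons))    # 615 gallons

mpg = 50                          # assumed Prius fuel economy
print(round(embodied_gallons * mpg))  # roughly 31,000 miles to "drive off" the build
```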

One implication is that by buying a used car instead of a Prius you are preventing the use of the equivalent 615 gallons of gas--buying a used 15 mpg beast and driving it 9,000 miles uses less gas than a new Prius with 0 miles on it, making the beast more sustainable and conscientious up to that point. I realize I always pick on the Prius but I don't mean to be too disheartening, the Prius is one of the better options available, even if I think it's not as extreme as it should be. Anybody who buys a new Prius with legitimate environmental concern is now obligated to drive that car into the ground.

New cars aside, one Mythbusters experiment showed a 39% increase in fuel efficiency from drafting a big rig--driving 10 feet behind it. Assuming every car could always draft in such a manner, yearly gas consumption would decrease by a monumental 28% (a 39% boost in mpg means burning only 1/1.39, or about 72%, as much fuel for the same miles). That's a big assumption, but there's one way it could be realized, and that's with autopilot.

We can't all draft all the time because it is very dangerous to drive at almost any speed 10 feet behind anything, and the reason is simply biological: it takes a measurable and substantive amount of time for information to traverse the nervous system, a phenomenon commonly referred to as reaction time. When you see brake lights, the light must trigger action potentials in your retina, which travel into the brain. Once processed, another signal is sent down the looong path (compared to microscopic neural cells, inconceivably long) to your foot, telling it to press the brake pedal. If a truck moving 60 mph slams on the brakes with you 10 ft behind, that reaction time is simply way too slow and it's game over. On the other hand, with autopilot brake lights aren't even necessary: the computer in each vehicle would be in constant communication with the cars in front and behind; the vehicles could be 10 ft apart, 2 ft apart, even physically connected like a train without any problem. I imagine the optimum arrangement would be a physical connection, for a number of reasons. Of course, if all drivers were computers, the brakes themselves would hardly be needed, especially on the freeway. If you know the status of every car around you--their planned movements, power characteristics and beyond--then air friction, rather than wasteful braking, could be used to decelerate as appropriate, perhaps to allow a car to enter the train, which is itself a task much easier for computers than humans.
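To put numbers on it: even a quick quarter-second reaction time eats the entire 10 foot gap before your foot ever reaches the pedal. The 0.25 s figure is a generous assumption; typical reaction times are longer:

```python
# Distance covered during human reaction time at highway speed.
FT_PER_MILE, S_PER_HOUR = 5280, 3600

speed_fps = 60 * FT_PER_MILE / S_PER_HOUR   # 60 mph = 88 ft/s
reaction_s = 0.25                           # assumed (fast) reaction time

print(speed_fps * reaction_s)               # 22.0 ft -- more than twice the gap
```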

It isn't hard to envision that traffic lights would disappear into irrelevance as well; indeed I doubt it would make much sense to sit at an intersection when precise control and rapid, traffic-omniscient computer communications would allow cars of all headings to pass through synchronously. Sure, it will take a while to get used to constantly missing that other car by inches, but abandoning the familiar start/stop/wait process will give tremendous fuel savings, as it is the most inefficient part of driving and the reason the distinction between city mpg and highway mpg exists. Accordingly, the pace of society will see a new and considerable boost as not only the time between locations diminishes, but we are also freed to spend that time doing something other than driving. Not only will we get places fast, ambulances, police, and fire trucks will be able to reach their destinations in the minimum possible time. If that weren't enough, we should expect that we will all be noticeably wealthier as our expenditure on gas shrinks, car insurance disappears, and, as mentioned above, practically everything drops in price along with energy cost. It's such a win for everyone it feels like cheating, but all of that is just the start.

The first thing people say when they hear of computers or robots driving cars is "but that sounds So dangerous, it would never be safe enough, I would never trust it!" Well, the bleakness of reality readily illustrates the absurdity of such a thought. Think of it this way: the autopilot system could have 5 million accidents a year and that would still be a huge improvement over humans driving cars! There were around 6.4 million car accidents in 2005. 100 people could die every single day in computer driven cars and it would still be safer, because 115 people are dying every day in the current system. One hundred and fifteen people sure seems like a lot, doesn't it? Well, consider that 3,303 people died in car accidents in the month of September, 2001. That month is and always will be bitterly remembered solely for the terrorist attacks that felled the Twin Towers, acts that meant the death of 2,819 people. There is no doubt that 9/11 was a tragedy, but so were 3,303 car accident deaths. Death by car accident and death by terrorist attack are fundamentally similar in that the victims of either are generally neither expecting nor deserving of the outcome--incidence is practically random. Just because the first figure elicits strong memories and the second is unfamiliar doesn't make the former any more tragic! Personally I'm inclined to think that every person is more or less equally valuable (namely, invaluable) and thus that each person's death is equally tragic. That being the case, the 2,819 terrorism related deaths on 9/11 are quantitatively about 85.347% as tragic as the car accident deaths in that same month. Alternatively, if we were to assume that only the death of a relative or dear friend qualified as measurably tragic, then most people would see that 2,819 random strangers and 3,303 random strangers are pretty close to each other, and we might expect to estimate their relative tragedy as similarly near.
Objectivity aside, you would have to be colder than cold to somehow consider 3,303 lives lost any less tragic than 2,819 lives lost, regardless of the details; these were all people who could have been you or I, yearning to be alive just like you and I: husbands, daughters, mothers, brothers... neighbors, friends and mentors; they were real people!

So it's established: there were two significant tragedies in the US in September, 2001. Now what? Perspective: the 3,303 car accident deaths in September were actually fewer than in any of the two months before and the two months after, which makes 5 tragic months in a row. If you figure that anything over 2,000 deaths is sufficient to be labeled tragic, every single month in 2001 was a tragedy considering car accidents alone... 37,862 people died. Every single year from 1994 to 2008 has been a tragic year, with an average of 37,500 car accident deaths per year. 1994 is the earliest data I have, but I'm willing to bet the numbers don't improve much by going back further. Over the 15 years from 1994 through 2008, 562,712 people died in car accidents. If instead of happening over 15 years it happened in one day, that day would be about 199.6 times as tragic as 9/11, like the events of 9/11 replayed 199.6 times in one day. 562,712 is 2.5 times the total number of people that died from the atomic bombings of Hiroshima and Nagasaki combined. Know that those weren't the most devastating attacks, though--the strategic firebombing of Japanese cities killed around 500,000 people, inconceivable yet still fewer.
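The comparisons above are nothing more than division, using the figures already given. The combined Hiroshima/Nagasaki toll is an assumed round number; estimates vary widely:

```python
# All figures are from the text above.
sept_car_deaths = 3303        # US car accident deaths, September 2001
sept_attack_deaths = 2819     # deaths from the 9/11 attacks
print(round(100 * sept_attack_deaths / sept_car_deaths, 3))  # 85.347 (%)

car_deaths_94_08 = 562712     # US car accident deaths, 1994-2008
print(round(car_deaths_94_08 / sept_attack_deaths, 1))       # 199.6 (x 9/11)

bombings = 225000             # assumed combined Hiroshima/Nagasaki toll
print(round(car_deaths_94_08 / bombings, 1))                 # 2.5 (x bombings)
```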

Cars driven by people are as deadly as world wars, if not deadlier.

Then how dangerous would it be? Because an intelligent transportation system would need to be implemented everywhere more or less at once, it would be a massive project, and breadth and depth of testing at every stage is a certainty. To start, there have been decades of dedicated research on this specific issue, and the state of affairs is amazing (see DARPA's grand and urban challenges). Given the talent inevitably attracted to exceptional challenges (such as top engineers to NASA), a category for which this certainly qualifies, I presume each issue arising throughout development would be deftly handled. Finally, I would expect that some qualified organization would be intimately involved, dictating the requirements and governing the development to ensure safety and reliability, much as the FAA does with all things aerial. For an autopilot system made properly as thus, I predict fewer than a hundred accidents per year from the very start, and probably no deaths. With such a system the probability of dying in a car accident would go from frighteningly high to somewhere below that of being struck by lightning. The current estimated yearly cost of car accidents is over $230 billion, so... cha-ching! There's an extra $229.98 billion floating around. Nonetheless we would expect the system to improve over time, transforming cars from the most dangerous form of transport to the safest.

Optimally, the typical commuter car should be prepared for the transition by being made small and ultra-light, with aerodynamics engineered in terms of chains of cars. The majority of cars should seat one passenger, since most often a car carries only one person and any empty seat means wasted energy. With standardized interfacing and characteristics, other vehicle forms would fulfill the need for cargo haulers, high capacity vehicles, and so forth. Ideally vehicles would be public property, eliminating the need for a family to have multiple vehicles for commuting and family outings, but realistically this is the US and people want to own the things they use. Regardless, thanks to the reduced complexity and altogether more efficient vehicle design, coupled with the energy efficiency savings, a family could afford to own a number of vehicles which nonetheless add up to a fraction of the energy and materials cost of the present steel monstrosities, and which could perhaps even be stored in the same amount of space. Alternatively, a sufficiently large platform could allow for modular passenger compartments; though the platform size would be less than optimal for single passengers, needing only one drivetrain would decrease materials consumption. A subsequent implication is that modular drivetrains could be used instead of modular passenger compartments.

The aforementioned efforts combined would make for an increase in efficiency so marvelous that domestic oil production would actually be sufficient for the first time since the 70's, when it peaked. Since we're making a whole new concept of car, it would make sense to complete the metamorphosis: ditch internal combustion for electric, pave the road with solar cells, and oil becomes practically irrelevant for the first time since the second industrial revolution. Rather than carry around the really heavy main batteries, leave them stationary and build contact strips into the road so that cars can zip around like full scale slot cars. The relatively lightweight backup batteries would still be carried so that in the case of main power failure the vehicle could still maneuver and communicate safely. With the sum of these modifications, we should expect our busiest roads to give the impression of losing much of the normal traffic--in reality, the same road may have even more traffic, only seeming less because more cars fit in less space for less time. Each intersection would know about every car planning to traverse it from the earliest possible moment, and would assign each car a set of parameters which it is to use for traversal, including possible alternate plans. Each car would then communicate with every other car assigned to the intersection around the same time to verify that everything works out, a sanity check independent of the intersection. For example, two chains of several cars each plan to travel east and north through the same intersection at the same time. The intersection may dictate that both chains enter the intersection moving 80 mph, one a few milliseconds behind the other. The chains verify together and find that they will pass within 6 inches of each other, but that this is an acceptable margin given the wind conditions and other factors.
The plan is confirmed with the intersection and each car passes through, deviating a few hundredths of an inch from their predictions--these deviations would then be incorporated back into the prediction model which is distributed across the whole network. Suppose four very long chains travelling in every direction are approaching the same intersection. This time the intersection would probably dictate that the lead cars split and accelerate through such that at any moment there are 4, possibly 8 cars in the intersection, each one missing the other by a hair. Eventually it is expected that the traffic network will maximize efficiency of the whole system in unexpected ways. Maybe previously busy intersections will be used as though there were no crossing, or all but a handful of wide, long, straight thoroughfares will fall into relative disuse.
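That car-to-car sanity check is easy to sketch. Here's a toy model with all numbers invented for illustration: two chains approach perpendicularly at 80 mph, each scheduled to cross the intersection center at a given instant, and we brute-force search for their closest approach. Note how inch-scale clearances at these speeds imply millisecond-scale timing:

```python
import math

def min_separation(t_a, v_a, t_b, v_b, window=10.0, dt=0.001):
    """Chain A travels east, chain B north; each passes the intersection
    center (0, 0) at its scheduled time t (seconds), moving at v ft/s.
    Returns the closest approach found by stepping through time."""
    best = float("inf")
    steps = int(window / dt)
    for i in range(steps):
        t = i * dt
        ax = v_a * (t - t_a)      # A's east-west position (ft)
        by = v_b * (t - t_b)      # B's north-south position (ft)
        best = min(best, math.hypot(ax, by))
    return best

fps = 80 * 5280 / 3600            # 80 mph in ft/s (about 117.3)
gap = min_separation(t_a=5.000, v_a=fps, t_b=5.006, v_b=fps)
print(round(gap, 2))              # ~0.5 ft, i.e. about 6 inches
```

A real system would solve for the closest approach analytically and fold in vehicle dimensions, wind, and model uncertainty, but the shape of the check is the same: predict trajectories, compute minimum separation, compare against a margin.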

I am a driving enthusiast; I really love driving. Many days it seems my highest aspiration is to do laps around Laguna Seca in some kind of ultra performance four wheeled vehicle. But despite my pleasure in driving there is no way I can call the present system workable. It's extremely dangerous, terribly slow, woefully inefficient, and absurdly expensive. The truth is that we have the technology to automate the roads; people have been working on it for decades, and the resulting systems have proven reliable even in novel situations where many humans might otherwise fail. It might not be perfect, but it's much better, and as I've shown we're so terrifically awful at driving that that's not saying much. The transition is ready to happen, and when it finally does our world will simply become safer, faster, better, and wealthier. The only downside is that it can't be done overnight.



Once we finish automating our roads, what's the next revolutionary development? A space elevator. More on that some time.


some sources:

"Energy Intensity of Computer Manufacturing" by Eric Williams, United Nations University
  http://www.scribd.com/doc/4183/Energy-Intensity-of-Computer-Manufacturing
"How much does electricity cost? What is a kilowatt-hour?"
  http://michaelbluejay.com/electricity/cost.html
"How much electricity do computers use?"
  http://michaelbluejay.com/electricity/computers.html
Energy Content of Fuels (in Joules), other useful tables
  http://physics.syr.edu/courses/modules/ENERGY/ENERGY_POLICY/tables.html
"Weekly U.S. Retail Gasoline Prices, Regular Grade"
  http://www.eia.doe.gov/oil_gas/petroleum/data_publications/wrgp/mogas_home_page.html
"Average Retail Price of Electricity to Ultimate Customers by End-Use Sector, by State"
  http://www.eia.doe.gov/cneaf/electricity/epm/table5_6_a.html
"Energy to build a car?"
  http://www.cleanmpg.com/forums/showthread.php?t=18240
"Most and Least Fuel Efficient Cars "
  http://www.fueleconomy.gov/FEG/bestworst.shtml
National Highway Traffic Safety Administration's Fatality Analysis Reporting System
  http://www-fars.nhtsa.dot.gov/Main/index.aspx
Wikipedia - "Intelligent Transportation System"
  http://en.wikipedia.org/wiki/Intelligent_transportation_system
U.S. Dept. of Transportation - "Intelligent Transportation Systems Benefits and Costs, 2003 Update"
  http://ntl.bts.gov/lib/jpodocs/repts_te/13772.html#4.0

Wednesday, December 30, 2009