Tuesday, March 17, 2009

iPhone 3.0: Finally right?

It used to be said that you should never buy a Microsoft product until it reached version 3.0. I think this held true for Windows, but I'm not sure it did for many of their other products (Bob, anyone?). Apple finally announced the feature set of their version 3.0 iPhone software today. Instead of going through the features, I thought I'd opine on what sorts of applications, paired with accessories, could now be useful. This article inspired me to write. Here are my thoughts:

Workout Monitoring: The combination of a Bluetooth heart rate monitor (one is coming, see here) and Bluetooth accelerometers (preferably six-axis, of this size, for both wrists and ankles). All sensors should transmit to the iPhone and should be rechargeable. These sensors, along with some intuitive workout-logging software, would be killer. See my earlier ramblings on the topic.

Receipt Scanner: How cool would it be if you could clip on a stubby little scanner and scan store receipts directly into your iPhone? You would never lose a receipt and could manage and print them from a web-based companion application.

Laser Measurement: Clip a simple laser measurement tool into the bottom of the iPhone and quickly take room measurements while you sketch a map of the room on the screen. Combine with the iPhone's accelerometers and have the software draw the full map for you.

Pool/Hot Tub Chemistry Testing: Yes, water is not a good friend of the iPhone. But, using the accessory port, make a tool that you dip into your pool or hot tub to read all of the relevant metrics: chlorine level, pH, temperature, etc. The application should keep track of your measurements over time (and allow you to track the pool and hot tub separately).

"My Life in Words": Track every conversation you have. Record (on a VOX basis), all day long. Recognize those with whom you are having the conversations with and combine with voice recognition software to capture in easily text-readable form. Sync everyday with your computer to limit the data requirements.

Whole-house Intercom: Make each iPhone in the house capable of contacting any computer or VoIP-enabled intercom and let loose (as in, talk to anybody on the defined network).

Bluetooth/WiFi Kitchen Scale: Totally unnecessary, but it would be pretty cool to have a kitchen scale that connected to your iPhone, walked you through a recipe and let you know when you've put enough of each ingredient in the bowl.

Wednesday, March 4, 2009

Some Further Thoughts on TED

So, I've learned a bit more about TED and I owe their designers an apology. They do provide easy access to both MTUs' data; I just had to go down the drop-down box a bit further to find the separate export options. It just turns out that, for me, one MTU averages about 4 kWh/hr and the other about 0.4 kWh/hr, making the split between the two not that interesting.

If we pull out the other activities going on, we can isolate the heat pump as a 9 kWh/hr load (wow!). This is the single largest and most notable load in my house and one that needs to be addressed next year.

Also, I learned a bit more about TED in that the Footprints software actually exposes its current information over HTTP. If you go to http://localhost:9090/DashboardData in a browser on the computer that is running Footprints, you'll get the XML below. If you give that computer a static IP address (reasonably easy to set up on a modern router/wireless access point using the computer's MAC address), you can then pull the information from any computer on your network.

Unfortunately, despite all of the wondrous information (including separate real-time readings from the two different MTUs), they left out the current time. Huh? They included lots of random daily accumulations of data, but not the current timestamp. Odd choice - I'm not quite sure why they would see fit to include the month and year, but not the full timestamp.

This seems well suited to a WAMP setup to capture the data in a much more flexible fashion (though one that's much more likely to fail at some point).

See the full XML below:

<dashboarddata>
<vrmsnowdsp>121.4</vrmsnowdsp>
<daysleftinbillingcycle>28</daysleftinbillingcycle>
<presentspendingperhour>0.00</presentspendingperhour>
<currentrate>0.0000</currentrate>
<lovrmstdy>117.3</lovrmstdy>
<stlovtimtdy>09:20</stlovtimtdy>
<hivrmstdy>124.3</hivrmstdy>
<sthivtimtdy>13:36</sthivtimtdy>
<lovrmsmtd>117.3</lovrmsmtd>
<hivrmsmtd>125.4</hivrmsmtd>
<kwpeaktdy>16.650</kwpeaktdy>
<dlrpeaktdy>0.00</dlrpeaktdy>
<kwpeakmtd>18.390</kwpeakmtd>
<dlrpeakmtd>0.00</dlrpeakmtd>
<watttdysum>3967344</watttdysum>
<kwhmtdcnt>4074.000</kwhmtdcnt>
<lovdaymtd>99</lovdaymtd>
<hivdaymtd>65</hivdaymtd>
<dlrnow>0.00</dlrnow>
<dlrtdy>0.00</dlrtdy>
<dlrmtd>0.00</dlrmtd>
<dlrproj>0.00</dlrproj>
<dlravg>0.00</dlravg>
<kwnow>2.350</kwnow>
<kwtdy>66.1</kwtdy>
<kwproj>3266</kwproj>
<kwmtd>298</kwmtd>
<kwavg>149.1</kwavg>
<co2now>3.65</co2now>
<co2tdy>102.49</co2tdy>
<co2mtd>462.11</co2mtd>
<co2proj>5062.30</co2proj>
<co2avg>231.06</co2avg>
<ledstatus>GREEN</ledstatus>
<buzzerstatus>OFF</buzzerstatus>
<pastmonthlydata>
<monthhistoricaldata>
<month>3</month>
<year>2009</year>
<dlr>0</dlr>
<kwh>26</kwh>
</monthhistoricaldata>
</pastmonthlydata>
<isdualmtu>True</isdualmtu>
<mtu1wattsnow>1.690</mtu1wattsnow>
<mtu2wattsnow>0.650</mtu2wattsnow>
<mtu1co2now>2.62</mtu1co2now>
<mtu2co2now>1.00</mtu2co2now>
<mtu1dlrnow>0.00</mtu1dlrnow>
<mtu2dlrnow>0.00</mtu2dlrnow>
<mtu1vrmsnow>121.4</mtu1vrmsnow>
<mtu2vrmsnow>121.4</mtu2vrmsnow>
<demandusage>8.524</demandusage>
<demandcharge>0.00</demandcharge>
<energycharge>0.00</energycharge>
</dashboarddata>
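As a simpler starting point than a full WAMP stack, here is a minimal R sketch (using the XML package; the log file name and the idea of stamping each reading locally are my own additions) that grabs the current readings from that URL and appends them to a CSV file:

library(XML)

url <- "http://localhost:9090/DashboardData"

# fetch the XML and flatten it into a named list
xml_txt <- paste(readLines(url), collapse = "")
dash <- xmlToList(xmlParse(xml_txt, asText = TRUE))

# TED omits the current time, so stamp the reading locally
reading <- data.frame(time  = Sys.time(),
                      kw    = as.numeric(dash$kwnow),
                      volts = as.numeric(dash$vrmsnowdsp),
                      kw1   = as.numeric(dash$mtu1wattsnow),
                      kw2   = as.numeric(dash$mtu2wattsnow))

# append to a running log (run this on a schedule, e.g., once per second)
write.table(reading, "ted_log.csv", sep = ",", append = TRUE,
            col.names = FALSE, row.names = FALSE)

Run on a schedule, that alone would recover the per-MTU detail that the Footprints log throws away.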

Tuesday, March 3, 2009

TED: Voltage Sag

Not sure if it's meaningful, but there is a pretty strong voltage sag when my house draws a lot of power from the Dominion system:



The R^2 of the relationship is 58% and the t-statistic for the KW variable is 218 - I'd consider this significant. This graph was generated in 'R' using the following commands:

plot(ted$KW, ted$VRMS, xlim=c(0,20), ylim=c(117,123), pch=3)
lines(ted$KW, lm(ted$VRMS ~ ted$KW)$fitted.values, col="red")
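For the record, those fit statistics come straight out of R's standard linear-model summary (a quick sketch, assuming the same 'ted' data frame as above):

fit <- lm(VRMS ~ KW, data = ted)
summary(fit)   # reports R-squared and the t-statistic on the KW coefficient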


In any event, the voltage range appears to be well within the current standards (if Wikipedia is correct):
In the United States and Canada, national standards specify that the nominal voltage at the source should be 120 V and allow a range of 114 to 126 V (-5% to +5%). Historically 110, 115 and 117 volts have been used at different times and places in North America. Main power is sometimes spoken of as "one-ten"; however, 120 is the nominal voltage.

Some Thoughts on TED

Thoughts about the Gadget:

Too Much Juice: TED delivers what it promises, but in a surprisingly non-green way. I say this because the RDU (the "Receiving Data Unit" - the little white box with the LCD display) does not cache any of the data that it collects. Therefore, if you want to be able to analyze anything, you have to keep your computer on and logging data. As cheap as flash storage is these days, and as compact as these files are, why not throw a couple of GB in there and let the user download to their computer once per week (or live, if they choose)? TED brags that it only consumes 6 watts. Sure it does, but when you add in my computer, you get a number roughly 10x that. So, if the TED designers didn't want to pay for the flash memory, fine, include a USB port that is thumb-drive compatible and keep logging to that drive until it runs out of space.
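To put rough numbers on the storage ask (a back-of-the-envelope sketch in R; the 50 bytes per reading is just my guess at the size of a logged timestamp/kW/voltage record):

bytes_per_record <- 50                                # guessed size of one one-second reading
mb_per_day <- bytes_per_record * 24 * 60 * 60 / 2^20
mb_per_day                                            # ~4.1 MB per day of logging
2 * 1024 / mb_per_day                                 # ~500 days of logging on a 2 GB chip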

Weak Software: The Footprints software is pretty much useless at this point. It installed just fine and connected to the RDU (the most important aspect) and began logging the information. Unfortunately, now I just get a blank white screen with the same symbol my browser gives me when it can't find an image. Impressive. And, if I turn it off to troubleshoot, it will lose data. Great.

Stupidly Aggregating Data: The TED 1002 model that I have gets two separate signals from the two separate circuit breaker panels that the MTUs are located in. Why does the log only include one total consumption number? The whole point of this device is to measure usage so that it can be curbed or eliminated. TED loses valuable segmentation on where the load is coming from when it is available for free. An astoundingly bad design choice.

Thoughts about the Data:
My average electricity usage over the displayed time is 5.01 kWh/hr - pretty high. That would mean (if I assume a similar average profile over the course of a month) that I could expect an electric bill of $328.01. This is built up from $0.088/kWh (I believe the current Dominion average rate) x 744 hours in a 31-day month x 5.01 kWh/hr.
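The same arithmetic in R:

0.088 * 744 * 5.01   # rate ($/kWh) x hours in a 31-day month x average load = $328.01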

Last month (actually January 12 - February 9), I used a total of 7616 kWh, which I find astounding. That equates to roughly a 10.5 kWh/hr average usage rate. Based on what I've seen so far, that basically means that the heat pump (using resistive heat) upstairs ran continuously for the entire month. The heat pump must go!

Further Extensions:
It looks like there has been some hacking around the TED database. This could allow me to set up a process by which the tick data is uploaded in real time to a local MySQL server and the data made available for jpgraph display and analysis in 'R'. Sounds like a good project for the kids. . .
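If that works out, the analysis side in R might look something like this (just a sketch - the database, table and column names here are invented):

library(RMySQL)

# connect to the hypothetical local database holding the tick data
con <- dbConnect(MySQL(), dbname = "ted", host = "localhost",
                 user = "ted", password = "secret")

# pull the last day of readings and plot the load
ticks <- dbGetQuery(con, "SELECT stamp, kw FROM ticks
                          WHERE stamp > NOW() - INTERVAL 1 DAY")
plot(as.POSIXct(ticks$stamp), ticks$kw, type = "l", col = "red",
     xlab = "Time", ylab = "Usage (kW)")

dbDisconnect(con)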

First TED Reading

Here is a quick view of my first overnight energy readings from TED:



This was generated in 'R' using the following commands:

# Read in data
ted <- read.csv('User/markb/Desktop/Recent_TED.csv')

# Format so TIMESTAMP can be used with POSIXct
ted$time <- as.character( strptime(ted$TIMESTAMP, "%m/%d/%Y %I:%M:%S %p", tz="") )

# Plot the data
plot(as.POSIXct(ted$time),ted$KW, type="l", bty="L", col="red", ylab="Usage (kW)", xlab="Time (each point is one second)", ylim=c(0,20), main="Electricity Usage Patterns")
abline(h=0)

# show baseline usage
abline(h=mean(ted$KW[ted$KW < 2]), col="gray")

# add voltage information
par(new=TRUE)
plot(as.POSIXct(ted$time), ted$VRMS, axes=FALSE, bty='L', ylab='', xlab='', type="l", col="gray", ylim=c(100,125))
axis(4)
mtext("Voltage (VRMS, in gray)",4)
abline(h=120, col="gray")

# go to bed
abline(v=as.POSIXct("2009-03-02 22:00:00"), lty=2)
# wake up
abline(v=as.POSIXct("2009-03-03 06:00:00"), lty=2)

Sunday, March 1, 2009

TED: The Energy Detective

The last two months we've gotten some pretty outrageous electricity bills. My wife's first inclination was that the furnace upstairs was not a propane-fired furnace, but rather a heat pump. It would appear that she is correct.

However, that wasn't enough for me. I wanted to see where we were using our electricity over the course of a longer period of time. To that end, I purchased TED: The Energy Detective. It came on Friday; today I installed it.

Here's the content of the package (not showing the software CD):


The contents of the brown box are just more of the same as what is laid out on the table: two more round clips (called Current Transformers, or CTs) and the Measuring Transmitting Unit (or MTU). The MTU is in a plastic housing which itself doesn't feel too cheap, but the connector (for the two CTs) is horrible. I had issues getting both MTUs to connect - it took way too long and way too much fiddling around to get it done. In addition, the second MTU's plastic connector was loose on one side of the MTU. Doesn't give one a whole lot of confidence in the build quality of TED.

Installation:
To install TED, you have to open up your circuit breaker panel and potentially expose yourself to all sorts of dangerous shock hazards. Anything I write about here is not instructional in nature and none of it should be relied on to do this yourself.

First, let's look at the electricity setup of my house:

There are two parallel 200 AMP circuit breakers next to each other. The loads that they serve are obviously unique and here is how they are split up:


The first step in dealing with the breakers is to take off the front panel (there were four screws to do this). You may want to throw the breaker first - I chose not to because I wanted to do a before and after to ensure my multi-meter was working and that I wouldn't fry myself. Specifically, after removing the cover, I checked the voltage from the screw on the topmost circuit breaker to the neutral bar. It read 110 volts. After I flipped the main circuit breaker (which took a surprising amount of effort), it measured 0 volts. After I took the cover off, this is what I saw:

IMHO, a pretty nicely organized box. At that point, there were several things to do: clip the CTs around the two main wires coming into the box, like so:

The next step is to connect the power for the MTU. This is done by connecting the black and white wires. For the white wire, you loosen one of the neutral bar screws, push the wire tip in (it is pre-stripped for you) and tighten the screw back down. Ideally, you would install another 15 amp circuit breaker and connect the black wire to it. In my case, I loosened the screw on an existing 15 amp breaker and added the black tip. At that point, I turned the breaker back on to make sure the MTU was functional. The little green LED started flashing, which I assumed meant that I was in business.


I went back upstairs to check out the TED Receiving Display Unit (RDU) to see if it could read the signal. The instructions told me that because I had the 1002 model (for 200 amp service), I had to follow the instructions that came with the second MTU. Those were a bit obnoxious (lots of references to going back and repeating steps 3 & 4 - really, is ink so expensive that you couldn't print out each step?). I was quickly able to verify that the RDU was reading the signal from the first MTU.

I headed back downstairs to install the second MTU on the second circuit breaker panel. After doing that, I added the second MTU's signal to the RDU and I was in business reading both signals. It was pretty obvious that it was, in fact, reading both: the baseline power consumption on one was only 1.2 kWh/hr, while when I added the second, it jumped up to ~5 kWh/hr.

At this point, I am in data-collection mode. I would like to wait until I have enough data to start doing some interesting analysis. I bought the TED Footprints software with the unit, which is absurdly Windows-only. I'll report later on how well it works in identifying usage (primarily opportunities to reduce usage). Already, we've seen the instantaneous usage move from 2.4 kWh/hr to 11 kWh/hr. Looks like we may have room to reduce some usage.

Sunday, February 22, 2009

Geotagging in iPhoto '09

So today I was able to figure out how to merge the GPS track from my Garmin Forerunner 405 watch with my photo stream. Not hard, it just required some poking around on the internet for the right software and some trial and error in the process. I'm sure that there is an easier way, but this is the way that I made it work. So here's what I did:

1) Take photos; record GPS track.
2) Download GPS track to Garmin Training Center
3) Export TCX file to EveryTrail and create a trip
4) Download the GPX file from EveryTrail
5) Load new GPX file into GPSPhotoLinker
6) Connect camera to Mac, but hold off on importing photos to iPhoto
7) Drop pictures that are still on the camera to GPSPhotoLinker (Note: it turns out my camera was off by an hour; GPSPhotoLinker had an easy way of changing everything by an hour and then matching up with the GPS track.)
8) Manually link and save GPS coordinates of pictures.
9) Import pictures into iPhoto
10) Enjoy "Places" - it all worked.

A few notes:
- This would be much easier if Garmin allowed you to easily control the kind of file that you were exporting from their software. So far, I haven't found any options that would let me do so, though without the heart rate information, the PC version spat out a GPX file directly.
- iPhoto should be doing this directly: hello, anybody home at Apple? I thought they were all about making things easier. GPX is pretty universal.
- This is too cumbersome to worry about past photos, but I will definitely be adding this information to future streams. I will also be wearing the GPS watch on other photo expeditions (around DC or any other places that I'm taking pictures).

Tuesday, February 17, 2009

"Faces" are good - but I want people

Just got the new iLife '09 package from Apple. It's a fine package and I do like the addition of "places" and "faces" in iPhoto. I was inspired to write this post by some limitations that I see in the latter, but decided that I should really take the discussion to the perhaps absurd, but complete, view of the meta-data that I would like.

Let's take stock of how we are doing with meta-data for photos. Here's my list of desired meta-data and my view of where we are currently (in order of easiest to address and most useful to hardest and most obscure):

1) When: this was the first and easiest issue for people to both understand and deal with - it's even available in the file properties as the creation date. It has, almost by default, become the most common and useful way of organizing large photo libraries. The "granddaddy" of all meta-data for your pictures.

2) Picture Orientation: this one is dealt with in Exif data, but incredibly, many point-and-shoot cameras still do not provide this information. My DSLRs do just fine - it's not a big deal, but it sure is an annoyance to rotate all your pictures when you import. This also represents the category of "data lost". As in, the information was available for "free" if the hardware had the wherewithal to capture it.

3) Why (event): iPhoto introduced this a few versions ago. It seems to be a useful way of grouping photos that span a medium stretch of time (e.g., "John & Debs Wedding Weekend"), but a bad way of trying to implement tags. Overall, reasonably useful, but I believe implemented in a very proprietary manner.

4) Where: Geotagging - good stuff, but not quite ubiquitous. Would like to see some super easy to use software to merge trail information with a photostream (I've read that it exists, but haven't been able to find it). This should be built in to the likes of iPhoto.

5) Who's in the picture: "Faces" (in iPhoto) are great, but I have tons of pictures that are full of other body parts and I would like a fast way of tagging all of my historic and new pictures with that information. There seem to be plenty of ways to use pattern recognition (not just of faces) to make this super easy. For instance, when I take several pictures of people just moments apart, it is probably a reasonable assumption that they have the same clothes on (not true for all of us, but for most of us). Why not use this information (and that of the human form) along with the genius facial-recognition software to make this a "Who", not just a "Face"?

6) How (exposure, etc.): Nicely dealt with (for the most part) with Exif data and potentially XMP (if anybody other than Adobe ever supports it). Mostly us photo geeks like this stuff - but eventually most people might care.

7) Who's taking the picture: This comes from a strong bias in the photostream at my house. As in, I'm almost always the one taking the picture, not the one in the picture. It's not that important, but come on, let's collect this information!

8) Orientation (i.e., elevation and compass direction): Location is good, but location plus camera elevation and camera direction completely define the source point of the picture. How cool would it be to see a virtual panorama on g-maps, built on the fly by the mob?

9) Atmospheric Conditions: I think it'd be useful to capture all of the atmospheric conditions (weather information) that were present when the picture was taken. Much more of an issue for outside photography, but the first layer of information would be whether the picture was taken inside or outside. Then, why not store temperature, humidity, cloud cover, precipitation, wind speed, etc. Most of that information could be merged at a later time from outside sources based on location and timestamps.

10) Light augmentation: This is one for the photography geek in me. I'd like to be able to track all manner of light augmentation/modification: flashes, filters and reflectors. Useful information, though most of the usefulness would be in the professional/studio setting. Exif also deals with this to a limited extent, but it is not generally captured well.

11) Spoken commentary: How cool would it be if you could tag your photos with a few comments about what was going on? Entry should be facilitated by the camera, but there will need to be a universal way of linking the audio file to the picture file for this to be effective. Speech recognition would eventually catch up for regular folks and would give us a treasure trove of useful information in our photo archives.

Finally, I believe that all of this information should be able to be stored in the photo files themselves in an open format. I'm not too happy with the fact that I'm a hostage to all of the tags that I've spent time creating in iPhoto. Faces and Places just make this dependency even deeper. My only hope is that someday there is a way out (somebody figures out how to write an export routine).

Tuesday, March 4, 2008

Looks Like I'm a Step Behind

See this news blurb about Nike & Life Fitness getting together to allow users to track their cardio experience. Certainly not the full set of data that people would want to be tracking, but it's getting interesting. Also, it appears that these disparate companies can work together.

Saturday, February 23, 2008

Fitness Data Ecosystem (Take 2)

A BIG Problem
I've been thinking a bit more about my thoughts on the Fitness Data Ecosystem (FDE) and one of the key problems with the concept is that it requires multiple companies, without necessarily aligned interests, to work together to make it happen. It is, quite problematically, a system.

A Solution?
Perhaps a much more contained solution would consist of the following three items:
Wrist Watch or System Controller: This is the nerve center of the system and, unsurprisingly, looks and behaves just like a watch. It also receives signaling from the other devices and, in the best case, allows some user input. A much more flexible device, such as an iPhone, would be an ideal but clunky solution. User input will be described later.
Wrist and Ankle Accelerometer Bracelets: Simple and light-weight devices that measure acceleration and transmit that information to the system controller. If there is a Wrist Watch Controller, it would need an accelerometer, too.
Wireless Heart Rate Monitor: Nothing special, straps to the chest and transmits a heartrate signal to the System Controller. It, too, would have an accelerometer to track "core" motions.

Details
Basically, the FDE devices described above could keep track of all movements and physiological reactions (at least heart rate, which hopefully is a good proxy for everything you'd like to be measuring). The FDE System Controller would not have to process the data, just store the information for later processing. If the information is of high quality, the post-processing could literally map your progress through the gym and determine each exercise done. It could match your heart rate with the pace and patterns of your movements. What it wouldn't know, however, is how much weight you lifted for each exercise. This is where the System Controller and its interface become important.
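As a flavor of what that post-processing could look like, here is a toy R sketch that counts repetitions from a (simulated) wrist-accelerometer trace by looking for threshold crossings - real signals and real detection logic would obviously be messier:

# simulate 30 seconds of one-axis acceleration at 50 Hz: ten slow "reps" plus noise
t     <- seq(0, 30, by = 0.02)
accel <- sin(2 * pi * t / 3) + rnorm(length(t), sd = 0.2)

# smooth the trace, then count upward threshold crossings as completed reps
smoothed <- filter(accel, rep(1/25, 25), sides = 2)   # half-second moving average
above    <- !is.na(smoothed) & smoothed > 0.5
reps     <- sum(diff(above) == 1)
reps                                                  # roughly 10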

System Controller Interface
You want the FDE SC to do two things for you: let you know what part of your workout you are supposed to be doing and track what you did. Here, the big bright display of your iPhone would be great. It would not only tell you that you were supposed to do 20 reps of preacher curls, but also that you should begin with 25-pound dumbbells and work up to 30, 45 and 50 for your next three sets. If you ate your Wheaties that day and want to crank it up a notch, the interface should let you quickly adjust the expected routine to what you actually did (a slider that goes in the increments of the machines or the weight set you have available).

I think that due to the above interface requirements, a watch-sized device would be ill-suited to the task. I vote for the iPhone - only a few more weeks and we'll have an SDK that can make this all a reality (if all of the wireless devices are running on Bluetooth).

Will People Use It?
I think some would be intrigued by the notion that they can easily track and monitor their workouts. Others will find the ankle and wrist bands an unsightly and geeky addition to their otherwise fashionable workout attire. In the end, it will be a subset of people, but I think large enough to be of interest to a company such as Nike or Adidas.

Side Benefits?
Why not wear these things all day long? They seem like they could serve as great low-cost whole-body monitors for medical examination - you could just add whatever other specific device (like a blood pressure monitor) you might need and transmit its signal via Bluetooth, too. It seems to me that you should go to your next physical after having emailed your doctor a week's worth of body monitoring information. The interaction could be much more productive: you need to exercise more (not likely to be the feedback for somebody who would wear such a device regularly, but perhaps the doctor's office mails you one a week before your appointment); you need to cool down more slowly in your workouts; focus on moving in the middle of the day (you sit still too long at your desk); why not walk a few extra blocks in the morning instead of getting off at the closest subway stop; your resting heart rate looks good (or too high, or too low).

You may just want to wear it all day long because you want to have a better idea how many calories you are burning in the course of a day/week and how you should adjust your diet accordingly.

Conclusion, for now
A lot of this sounds like something from the MIT Media Lab (and perhaps it has all been done before), but I don't necessarily think it's outlandish to propose. But, despite Mr. Kent's prodding, I don't think I'm about to leave my current line of work to make it happen.

Thursday, February 21, 2008

Asynchronous Conversations

Idea:
Enable people to carry on spoken conversations in an asynchronous fashion.

Details:
The concept isn't too far out there - really it's similar to email. As in, I write you an email and at some point in the future (not virtually instantaneously) you read it and potentially respond to it. Why not do the same thing with the spoken word?

I believe the scenario would look like this:
1) I have a number of topics that I want to talk to you about. I make a voice recording of topic one (i.e., I speak into my iPhone to the aSynch application). After the first topic area, I tap "end topic" and then begin talking about the next one.
2) I synchronize my conversation with an asynchronous conversation website (ACW). While doing so, I add a text tag to describe any new topics that I've brought up. Perhaps it's a new service from Skype (although it doesn't seem to fit their mold).
3) My friend then synchronizes to the ACW, and my portion of the conversation is downloaded to his device (perhaps a Blackberry, but in any event, the only requirement would be that it was "standards compliant").
4) My friend looks at the topic areas (remember, all have been tagged) and arranges them in the order s/he wants to listen to them (or leaves them in the default order). S/he also decides how to listen to the topics (her/his previous comments, then my responses; or just my responses).
5) My friend listens to my side of the conversation and at the end of each topic, they are prompted to add their response. Alternatively, they can break into the conversation and respond with something; useful if you are afraid that you'll forget about a specific point that you wanted to make.
6) The cycle of synchronizing and responding continues.

Interface
I think that this idea, while intriguing, is a real challenge from an interface standpoint. It will have to be very easy and intuitive to listen and respond. Ideally, it would be as intuitive as having a face-to-face conversation. It can't be that, but it can strive to be as close as possible.

Will it Work?
With the caveat of the above paragraph, I think that it absolutely could work, but there are a whole host of challenges:
1) Hardware: you don't want a dedicated piece of hardware to make this work; it has to be integrated into existing devices. Since smartphones already do something very similar, they seem like the obvious devices to build the software around.
2) [More to Come]

Sunday, February 17, 2008

LED Lighting: In the (LED) Mood

Concept
Shift the lighting in your room/house to match your mood.

Background
LED lighting, though still prohibitively expensive for most of us, will become an increasingly viable alternative to incandescent and even compact fluorescent bulbs. Advantages will include both longer life and higher efficiency than we currently have with incandescent bulbs. But LEDs will also allow greater control over the mood of the room. Here, I propose a simple control system for in-room lighting that is compatible with your existing home wiring.

Details
The LED Concept Lighting (LCL) consists of two modules: the switch and the "bulb". The switch will be backwards compatible with existing wall switches and really only require two wires in: hot and ground. The bulb will screw into the standard set of sockets used today for lighting, such as the S100 "Edison" socket or its European equivalent, the E27. See here and here for some ideas of the types of bulbs that are currently available. Current bulbs come in a "fixed" color or frequency; they cannot be adjusted after they've been manufactured. Some have flirted with these ideas, such as this project, though I'm not sure they have expressed all of the thoughts I'm about to express here (forgive me if you've done so, and I just missed it).

What will be unique about this system is the user's ability to change the light characteristics with a simple wall switch. This is far more than just a "dimmer switch" - it is a "mood switch". The switch will probably look much like the dimmer switches you'll find that fit into a standard switch plate (i.e., the switch itself is narrowed a bit and there is an up/down slider on the side that governs the intensity of the light). The coolness is added with two extra controls: the three-bar slider (TBS) and the Temperature Dial (TD). The controls don't necessarily coexist well with each other, so there would need to be a small red LED above the control that is currently setting the light output. The switch would directly control all of the lights on the given circuit. High-frequency data bursts could be put out on the circuit to set the parameters - similar to, but much less complex than, HomePlug, although using that standard might keep costs down.

The TBS is what it sounds like: basically a way for the user to directly control the Red, Green and Blue components of the light output. If you want the room to be "hot" and all red, just slide down the green and blue sliders and crank up the red. The possibilities are (almost) endless - to the degree of variation that each color element will allow. I'm guessing that 256 levels per channel should be more than sufficient for customized coloration.

The TD is really an alternate way of adjusting the light that will be more intuitive for some people. It will, as the name suggests, set the color temperature, and it will have Kelvin markings around the edge of the dial. A range of 3000 K to 5500 K should probably be sufficient for most people.

The addition of the TBS and TD will require a unique faceplate to be used, but I think that as long as it follows standard conventions for everything else, it will not be a significant problem for backward compatibility.

One Step Beyond . . .
So all of that is cool, really cool actually. But it could be even cooler. How? By going One Step Beyond for this Gadget to the point where you add customizable programming for each lightbulb. Too hard? I don't think so. Here's how you do it:

The Bulb:
The bulb would need to be capable of "listening" to an outside control source and adjusting the output of the three LEDs accordingly. The listening could take place over the power input (similar to the HomePlug idea floated above). The key for this to work, though, is that each bulb has to filter out instructions for other bulbs and only act on its own. Thus each light will effectively need its own ID or MAC address. For our purposes, instead of "Media Access Control", we need Lighting Access Control, so for fun we'll describe the address/ID as a LAC address.

The Controller:
Generally, I think that to have anything of even moderate complication, you are going to need a computer-based programming platform, so everything that follows will make that assumption. The controller process consists of the following four functions: mapping the room, writing the lighting program, transferring it to a dedicated controller and, finally, running the program.

Mapping the room: As a first step, you'd want to map out each light's location in the room and probably associate a LAC address with each point on the map. Probably not too hard? I don't think so, but there could be a complication if the orientation of the LED lightbulb made a difference in the final program. There, the mapping may require the user to (a) take note of the LAC and (b) screw in the LED lightbulb and take note of its orientation with respect to the room. Orientation could be controlled by the outer ring of LEDs lighting up as different colors and the user picking the best match for one of the walls. This could even work on chandeliers and such. If the system took off, you could even allow people (or manufacturers) to share models of common light fixtures, so they wouldn't have to remap/design the lights themselves.

Lighting Program: The computer platform would give you, as the "light architect", total freedom to set the mood of your room. Your mood could be static, animated or responsive to its environment. Static moods don't necessarily mean boring: you could differentially color a room and come up with cool patterns that play well with the furniture and other surroundings there. An animated room could be simple or complex. The complexity could be stepped up another level through the use of "Short Throw Wide-Angle Opaque Glass", as I will describe below. A rolling or pulsating pattern might look pretty cool, as would many "screensaver"-like options.
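To make "animated" a little more concrete, here is a toy sketch in R of what a lighting program might boil down to: a function from a time step to a table of per-bulb RGB values keyed by LAC address (everything here - the addresses, the positions, the rolling-pulse pattern - is invented for illustration):

# four bulbs mapped across one wall, each with its (made-up) LAC address
bulbs <- data.frame(lac = c("LAC-01", "LAC-02", "LAC-03", "LAC-04"),
                    x   = c(0, 1, 2, 3))

# a "rolling red pulse": brightness depends on where each bulb sits in the wave
frame_at <- function(t) {
  phase <- ((bulbs$x - t) %% 4) / 4
  data.frame(lac   = bulbs$lac,
             red   = round(255 * (1 - phase)),
             green = 0,
             blue  = 0)
}

frame_at(0)   # the frame the controller would broadcast (via HomePlug/ZigBee) at step 0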

Transferring to Controller: After the lights have been mapped and the program written, it's time to transfer it to the device that will control the actual bulbs. The technology used to send the control signals will dictate some of the aspects of the controller. If a wireless standard like ZigBee is used, the controller could be anything from a handheld remote to your laptop with a ZigBee USB fob sitting on the side of it. If HomePlug is used instead, the controller would have to be connected in some fashion to the same circuit as the LED lights that you were looking to control.

Running the Program: Running the program could be a matter of pressing play on your laptop (perhaps the animation is tied into the iTunes visualizer on the music that you are streaming through your house for your party). Alternately, it could be pressing button 1 on your handheld remote when you walk in the door, and the room begins pulsating in the very cool way that you've spent hours programming.

"Short Throw Wide-Angle Opaque Glass"
OK, so the LED bulbs cost about $150 each. That's a lot of money! I have a room in my basement with 15 recessed lights. Even if I'm a bit crazy and I spend $2,250 to put LED bulbs in each of those lights, I am probably not going to add another couple hundred lights to make a contiguous "canvas" for me to develop some really, really cool lighting program.

Instead, what I propose is to mount a 2' x 2' piece of opaque glass below each recessed light that will spread the light from the LED out about 10x from the "wide angle" the bulb would produce. The opaque glass would hang about 4 inches from the ceiling and would be supported by four fixed corner pegs or screws going into the drywall of the ceiling. Between the opaque glass panel and the LED bulb, there would be an innovative lens that spreads the light evenly out to the full area of the glass. I'm not quite done ensuring that the physics work here, but I'm sure that somebody smarter than me could tackle this in no time.

The effect of adding the "Short Throw Wide-Angle Opaque Glass" is that you'd have an almost contiguous surface that you could do some really cool things with, such as display pictures and more complex graphics. Here, the orientation of the bulbs really does matter, and you'd begin to talk pretty quickly about how many "pixels" each bulb could represent or display. Initial lights ought to be capable of 128 pixels, and densities could be increased in the bulbs without the glass needing upgrading (as long as it was in increments of 4x).

Who Should Do This?
GE and Philips come to mind immediately. They certainly own the lighting market in the U.S.

In fact Philips appears to have the beginnings of a system in place, but unfortunately looks to be only in the press release stage. Products available today look simplistic, such as the LED Color Changing Party & Deco Bulb. Other, similarly simplistic LED light items can be found here.

Smaller companies, such as ChannelBrite, might have just the right combination of innovation and business skill to make this happen, at least at a small scale.

Will it Happen?
Probably, but it will take a while. Unfortunately, politicians are fixated on compact fluorescents as being the "mandated" wave of the future. Instead of looking for more efficient outcomes, the government has decided it will pick technology winners. This is not a case where the government should be mandating method, only outcome (efficiency). OK, I'm off my soapbox now.

Tuesday, February 5, 2008

Time Out: The MacBook Air, It Just (Doesn't) Work(s)

My wife has been happily computing for the last three years on her G4 iBook. It has been getting long in the tooth and she just started classes this semester at GMU. It was time for a new computer, one that was light but did not need to be exceedingly powerful.

The rumored light-weight MacBook seemed like a good fit. The keynote presentation at MacWorld by the man in the black turtleneck sealed the deal. I ordered the MacBook Air (MBA) that day and it finally arrived last Friday. So far so good.

Wireless Migration Assistant
The problems pretty much started immediately. The Mac has a great little intro video (Leopard, actually) and then it goes into a configuration mode that allows you to migrate your information from an old Mac. Well, as instructed, I popped the DVD into Christa's old iBook, loaded the Wireless Migration Assistant (WMA) and proceeded to launch the transfer. All went as expected until the estimated time for a completed transfer began growing, and growing, and growing.

Eventually, the old iBook gave some cryptic error message. Basically, it said that the transfer had failed for an unknown reason. The new MBA did not fail out, but the estimated completion time kept growing. No keystroke combination was successful in stopping or interrupting the process. The only thing to do: press and hold the power button until it shut down. And I tried again several times to the same effect. All the while with no messaging to tell me what went wrong. Eventually, I gave up and went to bed.

The next morning (Saturday, now), I gave it a few more shots. I connected the iBook to an ethernet connection to try to eliminate flaky wireless on its part as being the source of the problem. No luck. I eventually concluded that it must be bad wireless on the MBA, and called my local Apple store to see if they had the USB ethernet dongle in stock yet. They did - I then planned to go that evening. Then I went crazy.

Crazy, as in the "doing the same thing and expecting different results" kind of crazy. And there is a reason that people go crazy: it sometimes pays off. To be clear: I gave it another go, doing nothing differently, and for whatever reason, it finally worked.

After completing the process on the MBA, all was good. All was good except for there being no printer. [What's up with Leopard not transferring printers, or even remembering them when you do the upgrade from Tiger?]

Remote Disk
One of the compromises, so to speak, that I made when trying to get the WMA to work was skipping my wife's application directory. This means that I had to install iWork '08 again. My only choice to do this: Remote Disk.

Things began just fine here. I loaded the software on my MacPro and enabled sharing. I inserted the install DVD and the problems began.

While the MBA instantly found the disk and the install package, it didn't like installing from the remote disk. So much so that I repeatedly got the error message that the install DVD was bad and that I should contact the manufacturer to get a new one. [Hello, Apple? I understand that the install program is what was generating these messages, but didn't you test this thing? Provide some context for the user?] Trust me, the disk wasn't bad.

Again, craziness ensued. Lots of it. Eventually it did pay off and the installation completed. I can't explain why: I did nothing differently on the 10th try than on the earlier nine.

It Just Doesn't Work
Don't get me wrong, I'm still probably too much of an Apple fanboy. But that is why it really bugs me when their tools don't do what they say they do. Specifically, there were a number of things that really bugged me about the whole MBA setup experience:
- Lack of useful error messages and feedback. None. No message was at all useful.
- Processes still "working" when they obviously were not. This was perhaps the most annoying aspect. Just admit it already when you're broken.
- The migration process could be much more flexible. The MBA does have a USB port and indeed many people have large external USB drives. Why not give the flexibility of exporting to a USB-drive and then importing via that drive with the MBA?
- Finally, I am annoyed that the processes finally did work. It wasn't as if I was doing something (obviously) wrong. Just keep trying and it might work. Isn't that the whole reason I stopped using Windows?

Monday, February 4, 2008

Time Out(s)

So I decided that from time to time, I will also talk about gadgets that I currently own and how they do or do not live up to their promise. I was inspired this weekend to do so - and a follow-up post will explain why.

Sunday, February 3, 2008

Fitness Data Ecosystem

Of the few posts that I've made, I'm probably at greatest risk of just not being aware of products that might already exist to address this. I've looked - but haven't seen - so if you are aware of what I speak of - please let me know. But here goes . . .

The Beginning
My wife was kind enough to buy me a Nike triax c3 (sorry, no direct link due to an absurd use of Flash) for Christmas, which is a watch and heart-rate monitor. I had been looking into buying one and she was aware of my research. The watch/monitor combination is nice and works just fine (though the first one she bought did not work at all). It was a very modest price at Costco, I think about $35.

So it's great, I can look at my watch and see what my heart rate is. But unfortunately, it stops there. There must be something more and there is, but I will argue later that it isn't enough or at the very least, it falls short of the potential with current technology.

The Ideal
The ideal Fitness Data Ecosystem (FDE) would do three things well: define a standard data format for workout information and human physiology (perhaps with a cute name like FitXML), spur a new set of data capture and transfer devices and finally, spur the creation of a whole range of programs for people to evaluate and track their workouts and progress. Let's look at each of them in turn.

FitXML
Yes, I know, I Googled it too. It apparently does exist, but it is only used in a proprietary and horribly obscure program, Personal Fitness. So, for all practical purposes, it does not exist.

Contents of FitXML
I think it should have the following components:
- Any human body characteristics one could think up (height, weight, waist size, bicep, neck size, body fat, etc., etc.)
- Workout routine information (name, type of exercise, repetitions, weights, other gear used or to be used, etc.)
- Heart rate & other real time body stats
- Running/biking mileage, pace, GPS or pre-programmed waypoints (see Nike's very cool Google Maps mashup where you can just click your route).
- Songs played (or to be played)
- More? (let me know if something obvious is missing)
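Just to make it concrete, a hypothetical FitXML fragment might look something like this (every element and attribute name here is invented - the whole point is that somebody would need to standardize them):

<workout date="2008-02-23" location="home gym">
  <bodystats weight="185" unit="lb" restingHeartRate="62"/>
  <exercise name="preacher curl" equipment="dumbbell">
    <set reps="20" weight="25" unit="lb" avgHeartRate="112"/>
    <set reps="15" weight="30" unit="lb" avgHeartRate="121"/>
  </exercise>
  <cardio type="run" distance="3.1" unit="mi" pace="8:45" avgHeartRate="152"/>
  <playlist song="Eye of the Tiger"/>
</workout>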

Data Capture & Transfer Devices
Here, I think most of the devices for data capture already exist in some form or another from all of the usual suspects. They do a great job if you are doing some sustained cardio routine. I do cardio, sure, but I also lift quite a bit (not that you'd know it by looking at me).

The next big leap that would have to happen is a set of weight machines that would sense what you were lifting. It doesn't seem like it'd be that hard (i.e., a well placed strain gauge and some electronics). I think it could work like this:
- You approach a machine and when you are seated, your workout fob communicates with the machine to let it know that you've arrived and they synchronize timing.
- You lift (hard because you're a beast, as my kids would say).
- On each movement (begin, top, end), the machine transmits a time code, the type of movement and the amount of weight being moved.
- On some machines, the movement isn't obvious (like a Smith bar). Here, your fob gives you several choices, but defaults to the workout routine that you've preloaded into it. The range of motions could be narrowed down by what portion of the machine the bar is traversing.
- For free weights, things potentially get trickier. However, the solution might be as simple as built-in accelerometers. They are relatively cheap and small and would likely do the trick. There would have to be some sort of activation of the dumbbells. It could be as simple as this: when you take them off the (non-contact charging) rack, they search for the closest workout fob. Barbells could perhaps work with the combination of a strain gauge and the accelerometer. This would save the system from having to make all of the weight plates self-aware, so to speak.


Finally, after the workout, the workout fob would wirelessly (Bluetooth?) send all of the workout information to your computer, where it would be imported by your computer-based program.

Fitness Program (excuse the pun)
Here is where you will track your progress and build your future workouts. The primary function will be to manipulate FitXML. Well-behaved programs will keep all of your data in the non-proprietary format and will compete instead on innovative and intuitive ways to manipulate and create workout routines. I'd expect everything from open-source to commercial alternatives.

Who should do it?
All of the current fitness companies. But the problem is that they legitimately only control a portion of the fitness space, not enough. Nike already has a pretty good system of tracking running information with or without an iPod. Cybex does a good job getting pulse/calorie/distance information to the user on a per use basis. Nobody has a comprehensive system.

I think that somebody like Cybex or Nautilus is in the best position to make this happen, not somebody who just makes shoes. Unfortunately, the most likely outcome of Nautilus or Cybex doing this would be a closed system that does not allow for an ecosystem to develop around it (i.e., Nike doing its own thing with shoe sensors and watches).

Cost Implications?
I believe that Bluetooth is the best choice for data transfer - it should be wireless and on a standard protocol. Costs for Bluetooth are probably $10 to $15 per unit (based on this article).

It's My (Your) Data

A few weeks ago, I was talking to an old friend of mine from college (Scott Raymond) and our conversation touched on the topic of online banking. He made a statement that stuck with me and probably will show up in a number of future posts.

He said, "It should be your data," in reference to the online banking that we both use. USAA, for all of its wonderful characteristics, has some odd/annoying limitations on downloading your account activity data. Unless you use Quicken or MS Money, you're pretty much out of luck (and it only goes back six months). Doesn't seem quite right - It Should Be Your Data!

Personal Financial XML
I think we'd all be better off with a standard mark-up language for personal financial information; it would be something like FIXML, which is oriented toward securities trading. It need not be more complicated than an SGML variant (of which HTML is the most famous child). The standard, if created, could free those stuck with Quicken and MS Money. It could also ensure that all of us would be able to download our personal financial information from a wide array of financial service companies and be able to store it and use it for years to come.

Ideally, this language would be flexible enough to handle both transaction-level information (a deposit or a check written against a checking account) and invoice- or statement-level information (an end-of-month credit card statement or a bill from the cable company). The ability to incorporate both of those things would be quite valuable (I think), as would the ability to include blobs or pictures (think canceled checks).
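For flavor, a hypothetical fragment covering both levels (again, all of the element names are invented) might look like:

<account type="checking" institution="USAA">
  <transaction date="2008-02-20" type="check" number="1041" amount="-84.50"
               payee="Dominion Virginia Power"/>
  <statement period="2008-01" openingBalance="2300.12" closingBalance="1876.44"/>
</account>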

Who Will Drive This?
It would be great to think that some forward-looking consumer advocacy group, perhaps the Consumers Union (the publishers of Consumer Reports), could find reason to put together interested parties and get a standard published. That, however, is probably way too optimistic.

That leaves me without an obvious standard-bearer. Clearly Microsoft and Intuit (publishers of Quicken) have little to no incentive to make something like this happen, with their de facto standards already being adopted by many online services. Who then?

All my musings for naught

The NY Times reminds us today that it isn't enough to just have the idea - you can find the article here - it takes countless hours and determination to follow through and make the idea a reality. Oh well, it's still fun to develop the ideas.

Sunday, December 30, 2007

Video Cataloguing Software

Home/Small Business Video Cataloguing Software

I take a lot of digital pictures and digital movies of activities my family does. While I feel like I have a very good program for managing my photos (iPhoto), my options on the movie side are much less well developed. Here’s where I think we need to go to get it right for movie cataloguing and project development.

General Concepts:
Since I take a large amount of video, I need to be able to store, track and access all of the old movies in a form better than “grab that tape sitting on your desk somewhere”. While I do love the convenience of MiniDV, and it can serve as a great long-term storage solution, it just isn’t convenient to work with except in the context of a one-off project (think iMovie 06).

Why not just iMovie 08?
Let me count the ways and, by extension, the features missing from iMovie 08. Here we go:
• This is a bit simplistic, and your experience may vary, but the stability has to be improved. I’ve used two different DV cameras (granted, they are both Canon - a ZR200 and an XH-A1) on two different recent-model Macs (a quad-core Mac Pro and a MacBook Pro) and have had severe problems importing. The most prevalent problem is outright crashing of iMovie. This happens 8 out of 10 attempts. Unacceptable, Apple. Other attempts have included a host of other problems: dropped frames, unrendered thumbnails (with no clear way of forcing a rendering) and missing video (it just doesn’t show up in the event). I ought to be able to start an automatic import and 99% of the time come back an hour later to a completed, error-free import.
• Looking at clips in some other fashion than the event library (not to mention the performance issues of the event library from time to time – I do not want to wait to look at clips). Right now, I can only look at clips in terms of events.
• Looking at full clip information. It seems to me that there is a host of other information that I would want to view about a clip that I don’t have access to right now. For instance, I would like to set the start date of the clip (for those times that my video camera's internal battery may have been dead or incorrectly set). I’d like to be able to tag clips with location, the people in the video and other category (keyword) information (such as "school concert").
• Speed of the viewing/scrolling/searching needs to be quick. While Jobs has bragged in past keynote speeches that iPhoto scrolls “like butter”, iMovie right now can only be described as scrolling like a three-sided wheel.

Ideal Feature Set for Video Cataloguing Software (VC)
• Flawless imports with auto-clip splitting on breaks in the video (why doesn’t FC have this?) while adding some reasonable/intelligent file structuring in the back end.
• Event and clip tagging: This should always give you the option to tag all clips with the same information provided for the event, but allow each clip to have separate information.
• Video/Picture Adjustments: The adjustments that are available in iMovie 08, namely Exposure, Brightness, Contrast, Saturation and White Point recentering, are useful, but don’t get you all of the way there. Often with home movies, you have limited lighting options. You can’t just set up three tungsten lights at the right angles and have a reflector on the person in focus at all times. Nope, you pretty much have to live with what you’ve got naturally in the room (if my experience is any indicator, your wife or husband or family is probably annoyed enough already with your efforts to record the event to do much more). If that is the case, the video adjustments ought to allow for varying exposure adjustments across the picture frame. Said another way, tone down the floor lamp and the portion of the wall that it overexposed, but leave the people alone. One of the simpler aspects of this feature would be the removal of “noise storms”. I’m not sure what the real name for this is, but it is the noise that is present in a fixed-perspective video: obviously, the colors are not shifting around, but the noise makes it look like they are. Automatic de-interlacing would also be appreciated. I’m sure that this can be done, but it would require some fancy algorithms to get it done right. Initial iterations may require some manual interaction (which would work well for tapings where the camera doesn’t move at all), but later ones should be fully automatic. Turning real-world lighting problems into Hollywood lighting would be a big selling point.
• Audio Adjustments: Home movies also have the unfortunate reality that people don’t always talk at the same volume, nor are they the same distance away from the camera’s imperfect microphone. Incorporate some automatic adjustments here – perhaps as simple as “auto volume balancing”. This feature would be smart enough to recognize silence and not crank up the volume when all you would hear would be the whine of the camera’s tape motor, but would boost Grandma when she speaks quietly and tone down Junior when he screams uncontrollably about how his sister took his truck. On the same theme as above, turning real-world audio into Hollywood-level audio would be a huge selling point.

Video editing/exporting feature set
Here, I don’t have too much to say/complain about in terms of where current products are today. I think that iMovie (when working properly) hits the mark on simplicity and feature set for quick-and-dirty movie exporting. I use Final Cut Express for my serious projects, so I don’t think that my video cataloguing software needs to be all things in this regard. [However, I do think that for Apple to legitimately call this an HD program, they ought to beef up support for HD exports beyond the 720 x 540 option without having to go through the QuickTime export option.]

One-click YouTube uploading is great, but I never like to be stuck with a single option. The software should integrate YouTube, Yahoo Video and any other major contenders; it can’t be that hard to add more (a sketch of what I mean is below).
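To be clear, this isn’t any vendor’s real API – just a hypothetical plugin shape that shows why adding another video host shouldn’t be hard: every service implements the same small interface, and the export dialog simply lists whatever has been registered.

    from abc import ABC, abstractmethod

    class VideoHost(ABC):
        name = "unnamed host"

        @abstractmethod
        def upload(self, movie_path, title, tags):
            """Push the exported movie and return its public URL."""

    class YouTubeHost(VideoHost):
        name = "YouTube"
        def upload(self, movie_path, title, tags):
            raise NotImplementedError("the real YouTube upload call would go here")

    class YahooVideoHost(VideoHost):
        name = "Yahoo Video"
        def upload(self, movie_path, title, tags):
            raise NotImplementedError("the real Yahoo Video upload call would go here")

    # The export dialog just iterates whatever has been registered.
    HOSTS = [YouTubeHost(), YahooVideoHost()]
    for host in HOSTS:
        print("Upload to", host.name)
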

Network Storage Options
Your VC software should work with you to make effective use of network storage. Obviously, the software will be able to see any network drives attached to your computer; the difference with video, though, is that the sustained speed of your connection to the network drive is critical. Here’s where your video cataloguing software needs to be smart. It should ask the user whether it can do some performance benchmarking whenever it sees a new drive (a sketch of that benchmarking step is below). If the connection won’t sustain the fairly high requirements of a video import, the software should ask how you’d like to accommodate this. The primary option should be to cache the import on the computer’s hard drive and push it up in chunks as the bandwidth of the connection allows. The software can estimate how many minutes of import it can handle before the local hard drive fills up.
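Here’s a sketch of that benchmarking step, assuming the network drive is already mounted at an ordinary path. The mount point and the required rate are made-up examples; HDV runs at roughly 25 Mbit/s (about 3.2 MB/s), and other formats will differ.

    import os
    import time

    def sustained_write_mb_per_s(mount_point, test_mb=64):
        test_file = os.path.join(mount_point, ".vc_benchmark.tmp")
        chunk = b"\0" * (1024 * 1024)          # 1 MB of zeros
        start = time.time()
        with open(test_file, "wb") as f:
            for _ in range(test_mb):
                f.write(chunk)
            f.flush()
            os.fsync(f.fileno())               # make sure it actually hit the drive
        elapsed = time.time() - start
        os.remove(test_file)
        return test_mb / elapsed

    REQUIRED_MB_PER_S = 3.2   # assumed HDV import rate; depends on your camera

    rate = sustained_write_mb_per_s("/Volumes/MediaServer")
    if rate < REQUIRED_MB_PER_S:
        print("Only %.1f MB/s to the share: cache the import locally and "
              "push it up in chunks as bandwidth allows." % rate)
    else:
        print("%.1f MB/s is plenty: import straight to the network drive." % rate)
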

Archive Feature
Many videos that are not among your favorite clips (or are from completed videography engagements) no longer need to be present on your primary hard drive or network-attached storage. Archiving at full quality should be a core feature.

It is not clear exactly how the typical user would archive at the present moment, other than to writeable DVDs. Ideally, Blu-ray discs will become common and the user will have access to a 50 GB rewritable Blu-ray disc that can be added to incrementally. The VC should assign a unique volume name to each disc and track which clips are on each volume, so that the user can retrieve specific clips efficiently (a sketch of such a catalogue is below).
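The catalogue itself doesn’t need to be anything fancier than a small database the VC keeps alongside your library. A sketch, using SQLite (which ships with Python) and made-up clip, event and volume names:

    import sqlite3

    con = sqlite3.connect("archive_catalog.db")
    con.execute("""CREATE TABLE IF NOT EXISTS archived_clips (
                       clip_name   TEXT,
                       event_name  TEXT,
                       volume_name TEXT,   -- the unique name burned onto each disc
                       archived_on TEXT
                   )""")

    # Record what went onto a freshly burned disc.
    con.execute("INSERT INTO archived_clips VALUES (?, ?, ?, ?)",
                ("clip_0042.mov", "School Concert 2007",
                 "VC_ARCHIVE_0007", "2007-12-20"))
    con.commit()

    # Later: which disc do I pull off the shelf for that concert?
    for clip, volume in con.execute(
            "SELECT clip_name, volume_name FROM archived_clips "
            "WHERE event_name = ?", ("School Concert 2007",)):
        print(clip, "is on disc", volume)
    con.close()
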

Access of Clips in Media Browser
No matter who has written the software, all of the clips should be available through the “Media Browser” or its equivalent, with simple dragging and dropping.

Who should do this?
Obviously Apple, Adobe, Avid and Sony. I’m not sure who else. Write me scathing comments about my stupidity if they already do.

Why won’t this work?
It can and should. Perhaps as part of a fuller suite, but I think it would be best as a stand-alone piece that would allow the user to choose their favorite editing solution.

Thursday, December 27, 2007

Hard Drive Graveyard

The Hard Drive Graveyard (HDG)

I’m one of those people that just won’t throw away a hard drive. Not so much because I need the extra 8 GB that my four year old hard drive will give me, but because I just don’t want any of my private data getting out there. It may also be driven by some laziness in organizing all of my old data and the nagging fear that I haven’t gotten everything that I need from the old hard drive. So what to do?

General Concept
Ideally, there would be an ATX-sized case with 12 flexible slots for desktop- and laptop-sized hard drives. The slots would be flexible on both size and type, supporting SATA all the way back to the really old-school interfaces (think IDE).

The flexibility could be obtained by having a standardized connection at the back of each slot of the HDG. Then, through different trays (think Mac Pro) and adapters that would snap onto the back of the hard drive, the device could handle any previous format. The specific trays and adapters could be sold with the HDG or after the fact for a minimal ($10 or so) amount of money. Speed would not be a key factor for this device – it is not designed to provide primary, shared storage for a home network, but rather to provide cheap and convenient access to old data.

The HDG would have FireWire 800, USB 2.0 and Ethernet connections on the back. From there, it should have the same network accessibility as the best home-targeted network-attached storage boxes on the market (think Infrant and others). RAID features probably don't need to be a big factor, but couldn't hurt.

I think a small number of added features would help. One would be to allow the user to mark each hard drive as active or old. All of the old hard drives would be logically grouped together by the server software, and the first level of folders would be the separate hard drives, titled with their volume names. In addition, it would be nice to have an "OS Scrub" feature that would remove all of the files that were only there to let the old computer run (i.e., Windows or OSX system files). You shouldn't need those files anymore, because you don't even have the computer anymore (a rough sketch of what the scrub would look for is below). [Note: it would be really cool to rig up a virtualization program to make that last statement untrue. Parallels or Virtual Desktop could add some great features if they could figure out how to recognize a "computer" and just make that hard drive act as the old computer.] The space freed by removing system files could be used as additional storage in the context of the old stuff you were doing, or as part of the next feature.

The next additional feature (though not a critical one) is "Large Slow Storage", or LSS. If I am describing something that already has a well-established name, please forgive me, but it is a novel concept to me. LSS would be used for files that are not accessed with any frequency and that, when they are accessed, do not need lightning-fast delivery. Archived video would be a good example of where this could be useful.
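As a back-of-the-envelope version of the OS Scrub, the server software could simply walk a mounted old drive and report how much space the usual operating-system folders are eating before it deletes anything. The folder list below is my guess at the common offenders (certainly not exhaustive), and the mount point is a made-up example.

    import os

    # My guess at the usual suspects; a real scrub would use a smarter list.
    OS_FOLDERS = ["WINDOWS", "WINNT", "Program Files",    # old Windows installs
                  "System", "Library", "private", "usr"]  # old OSX installs

    def scrubbable_bytes(mount_point):
        total = 0
        for folder in OS_FOLDERS:
            root = os.path.join(mount_point, folder)
            for dirpath, _dirnames, filenames in os.walk(root):
                for name in filenames:
                    try:
                        total += os.path.getsize(os.path.join(dirpath, name))
                    except OSError:
                        pass   # dead links and permission problems on old drives
        return total

    gb = scrubbable_bytes("/mnt/hdg/old_dell_drive") / (1024 ** 3)
    print("The OS Scrub would free roughly %.1f GB on this drive." % gb)
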

But for this concept to work well, the HDG will have to be a champ at power-saving features, both to minimize customers' operating costs and to manage heat. The device should be able to turn off individual hard drives when they are not in use, based on customer-set preferences (a sketch of what those preferences might look like is below). This would minimize the time that the old hard drives spend spinning while still giving the user the quality of service they need.
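On a Linux-based box, those customer-set preferences could boil down to something as simple as this sketch, which uses hdparm to set a standby timeout per drive (hdparm's -S value encodes the timeout, with 1-240 meaning multiples of 5 seconds). The device names and idle times are made-up examples.

    import subprocess

    SPINDOWN_PREFS = {
        "/dev/sdb": 10 * 60,   # an active drive: spin down after 10 idle minutes
        "/dev/sdc": 2 * 60,    # an old graveyard drive: spin down after 2 minutes
    }

    for device, idle_seconds in SPINDOWN_PREFS.items():
        # hdparm -S takes values 1-240, each unit meaning 5 seconds of idle time.
        timeout_value = min(max(idle_seconds // 5, 1), 240)
        subprocess.run(["hdparm", "-S", str(timeout_value), device], check=True)
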

The total price for the HDG could not be more than $400, and the lower it can go, the better.

Why won’t this probably work?
• Storage capacities are growing too quickly. You can just copy your old hard drive, lock, stock and barrel, onto the new one and still have plenty of room.
• Storage prices continue to fall. Perhaps this is just a corollary to the first reason: the fact that you can do the above doesn't mean that you necessarily will, but because you can do it ridiculously cheaply (at least compared to the last time you looked at storage prices), you may very well do it without thinking.
• Better migration software has emerged (either within the OS or in addition to it) to allow users to move all of their files onto the new computer. My view is that OSX does a pretty good job of this, but I'm wary of trusting something like this without double-checking. That's where your time can really be sucked up – others probably disagree.

Who should do it?
Obvious candidates are DLink, Linksys or Buffalo. Others closer to the start-up stage, like Infrant, could really make a go of it here. It's probably not a huge market, but one that I would assume they could pursue. Pursuing it could also bring some modular benefits for them: the connectors/sleds/slots that are generic to this project could be developed once and adapted for the rest of their product line.

[Update 2014-11-28]
There are now a great many devices, known as hard drive docks, that do something similar to what I described here. I must have been a thought leader? No, I'm just joking.

Saturday, October 13, 2007

Programming for Kids

This is another one that I've been thinking about for some time. Why hasn't anybody put together a good programming environment for kids?

Purpose:
  • Teach the fundamentals of computer programming to kids in a fun way.
  • Provide "layers of complexity" that will allow both the youngest and the older to make programs.

Thoughts:
  • There have been some initial movements in a positive direction here, but I've yet to find something really satisfactory. The Lego Mindstorms kit provides a programming environment. Unfortunately, it's really limited in what one can do, and the graphical aspects take some getting used to. I would love for that environment to take the graphical abstractions off the page and allow kids to program with actual function calls and logical structures.
  • There are some sites (here's one) that purport to have links to other software packages, but I can't really find much of interest.
  • Web programming would be, I think, very interesting to my kids. If they could do something there, they'd be able to share it with their friends, and it could be linked to databases. Eventually, they could be writing PHP and SQL? I don't know how old somebody needs to be to learn those things, but I think it would be possible for my 13-year-old daughter to do it.
  • If the above statement is true, these kids are going to need some real logic training - probably more than just their intuition (the little example after this list is about the level I'm picturing for a first step).
  • If you have ideas or software that you like that does this, please let me know. Thanks!
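Just to make "layers of complexity" concrete, here is the sort of first program I have in mind - a guess-the-number game (in Python here, though the language matters less than the ideas) that sneaks in variables, loops and if/else without the kid noticing they're learning them:

    import random

    secret = random.randint(1, 100)
    guess = None

    while guess != secret:
        guess = int(input("Guess my number (1-100): "))
        if guess < secret:
            print("Too low!")
        elif guess > secret:
            print("Too high!")

    print("You got it!")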