Why I didn't buy a new iPhone, and how Android gets my vote

I'll admit it. I didn't buy a new one. The old one was nice, but after it spent half an hour in the washing machine I went back to my first-generation iPod (waiting for 7-11-08) and found I could still listen to the podcasts to which I am addicted. I wasn't *unhappy* enough to go back.

THEN I found out the data charge would be thirty dollars a month! Frankly, I am not willing to pay that much to look at a four-inch screen. A dollar a day isn't much, but I only use the data service once a week unless I'm out of town. So that's seven dollars per use.

I will probably reevaluate in 2009 and see how things are going. In the meantime, Android will launch on new phones from multiple service providers.

I was going to use my iPhone in my Interactive Design class as a guinea pig/sacrificial phone. But after some consideration I realized Android is more practical for my students, since they will be able to afford their own Android-capable phones next year. This will vastly improve the social aspect of the class.

The social interaction that surrounds handheld devices is worth spending time studying.


Global Search and Stress

The post "Is Google Making Us Stupid?" by Craig Newmark offers some interesting observations about the interaction of wealth and personal well-being. I would like to add some observations on search technology and the way we interact with it. Newmark is quite right to point out the deleterious effects of our interaction with technology. The original article of the same title ran in the Atlantic.

There are quite a few subjects in the post that deserve treatment, but one theme that runs through it is choice.

If you are conscious of your habits, it is not difficult to immerse yourself in a deep and lengthy work. However, people, and students in particular, choose not to read information in full context. How much of that is due to the way they have been taught to read, and how much to the style of the writing, is debatable.

To tell you the truth, the last sentence of the post is probably my favorite:
"(I note a third hypothesis, also related to economics. Since information has become more important and more valuable, some authors have gotten much better at producing it. High quality information lowers patience with the lower quality forms. Once you've read a really well-written microeconomics textbook, say, it's much harder to wade through a poorly-written one, or a poorly written text on any subject.)"

Television is a good parallel that I can use as an example of why I think this is true. OMG, I didn't mean that. I am giving an example of production volume, not quality. Don't worry, I will get to finding quality a bit further on.

As with the Internet, cable and satellite bandwidth far outstrips the content a given individual wishes to consume.

Viewing habits are a good indicator. I survey my freshmen every semester on how many network channels they watch in any given week.

In a class of 25, the median is 7 to 9 with very little variation. There will be one student with no television, one near 15, and one over 20.

This seems at odds with the availability of several hundred alternatives. Because I am a technology and society guy, I only survey news content viewing as a measure of choice variation. Again, there is very little spread. 80% of the average class watches one of only three channels out of approximately 25.

How much of the choice is due to poor interface? A Finnish researcher claimed in 2005 that our interactive (haptic) interface hasn't changed in over 100 years. This was based on his comparing current technology with mechanical arcade games from the turn of the previous century that used pressure sensors to deliver feedback.

In the case of television, development capital is scarce because of the low chance of economic success. Consequently both content and quality are scarce in comparison to bandwidth. (Think: a repeat of last season, yet again.) In the case of Internet content, the development capital threshold is so low my students can afford it. Content creation has been democratized to an amazing extent. This makes the process of making a reading or viewing choice so difficult that we need the help of a search engine. Quality has to be treated differently, as you will see.

The famous failure of full-text search to live up to expectations is also to blame for a good portion of the reading statistics. Searches turn up hundreds of thousands of hits, a good number of which must be called up just to be eliminated from consideration. This is the so-called "bounce rate," but it is not indicative when a reader finds exactly the page he is looking for. Bounce rate is really a measure of depth of site penetration, not of content quality or relevance.
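To make the metric concrete, here is a minimal sketch of how a bounce rate is typically computed: the fraction of visits that view exactly one page. The (session_id, url) log format and the sample data are my own invention for illustration, and the example shows why a "bounce" can just as easily mean the reader found exactly what they wanted.

```python
from collections import Counter

def bounce_rate(pageviews):
    """Fraction of sessions that viewed exactly one page.

    `pageviews` is a list of (session_id, url) tuples -- a made-up
    minimal log format, not any particular analytics product's schema.
    """
    pages_per_session = Counter(session for session, _ in pageviews)
    bounces = sum(1 for count in pages_per_session.values() if count == 1)
    return bounces / len(pages_per_session)

log = [
    ("s1", "/search-result"),   # found the answer and left: counts as a bounce
    ("s2", "/home"), ("s2", "/about"), ("s2", "/contact"),
    ("s3", "/article"),         # another single-page visit
]
print(bounce_rate(log))  # 2 of 3 sessions bounced -> 0.666...
```

Note that the satisfied reader in session s1 and a frustrated one who hit the back button immediately are indistinguishable here, which is exactly why the metric says little about quality or relevance.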

So Google does have something to do with it (!), because Google is unable to search for a concept that can be described in many ways. Take, for instance, a full-text search for an emergent concept that people label or describe differently. But what the heck, it isn't magic, it's just a computer. The user had better be able to bring something to the table.

To some extent, that's why Google bought GapMinder. You can see the nations of the world compared by average income, infant mortality, longevity, and GDP over time. It is easy to see that if a population is healthy, average income will increase more rapidly than it does in a country where health care remains poor while economic development is happening.

GapMinder is an example of the semantic web and how it may well cause a discovery revolution comparable to the rapid-discovery revolution of the 1600s.

Filtering noise in an effective way will be the next challenge that allows stress to subside. But the challenge is huge. How do you decide whether to exclude news from Fox or the Competitive Enterprise Institute in a meaningful way?

I feel that there are two things that will reduce stress when it comes to overload. First, open access to professional communities. Second, semantic utilities with tiered relationships. (Tiered in terms of technical knowledge, with links outward to either collections of data or critical evaluations of relationships.)

Is this just another whiz-bang computer nerd technology solution? That's what I heard in my SIG meeting when I proposed we sponsor a DOE symposium on one of their collaboration pieces. But Raymond Orbach (DOE) and Abe Leiderman (Deep Web Technologies) have produced good work with federated data previously and I see no reason to think that extending it a bit is unreasonable.

Hyperlinks and search results are seductive in the way wandering in the library is. Debating the percentage of the population susceptible to wandering off topic in a given circumstance is probably not productive. Again, it's choice.

Does Google make us stupid? Only to the extent that using the tool without knowing how full-text search and popularity ranking work can produce ineffective and flawed work. (Student papers. rofl rofl) Using other techniques, such as reading full context and gauging professional reputation, can help stabilize the work. In other words, making good choices is central to success in search.


NCLB (FCAT) Broward County Schools

NCLB testing (aka FCAT) results in Google spreadsheet form. Broward County schools ordered by percentage of students qualifying for free and reduced lunch.

The "poverty percentage" column is ordered from lowest to highest to highlight the test results in low-poverty communities. All schools have been included regardless of student age. Without working any further, it appears to me that minority percentage may be a stronger indicator, but if a school is high in both columns, an A or B looks really hard to achieve.
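For anyone who wants to re-sort the data themselves, the ordering is a one-liner. The rows and column layout below are hypothetical stand-ins for the spreadsheet's actual columns, just to show the operation.

```python
# Hypothetical rows mirroring the spreadsheet's columns:
# (school name, poverty %, minority %, school grade) -- invented sample data.
schools = [
    ("School A", 82.0, 90.0, "D"),
    ("School B", 14.5, 20.0, "A"),
    ("School C", 55.0, 60.0, "B"),
]

# Order from lowest to highest poverty percentage, as in the spreadsheet.
by_poverty = sorted(schools, key=lambda row: row[1])
for name, poverty, minority, grade in by_poverty:
    print(f"{name}: {poverty:.1f}% poverty, grade {grade}")
```

Swapping `row[1]` for `row[2]` would re-order by minority percentage instead, which is the comparison suggested above.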

I might order it differently from time to time, so don't be surprised. It should not be hard to tell.