In the coming years, there will be a shift toward contextual computing, writes Pete Mortensen of Jump Associates. The term was defined in large part by Georgia Tech researchers Anind Dey and Gregory Abowd about a decade ago.
“Always-present computers, able to sense the objective and subjective aspects of a given situation, will augment our ability to perceive and act in the moment based on where we are, who we’re with, and our past experiences. These are our sixth, seventh, and eighth senses. […]
The adoption of contextual computing–combinations of hardware, software, networks, and services that use deep understanding of the user to create tailored, relevant actions that the user can take–is contingent on the spread of new platforms. Frankly, it depends on the smartphone. Mobile technology isn’t interesting because it’s a new form factor. It’s interesting because it’s always with the user and because it’s equipped with sensors. Future platforms designed from the ground up for contextual computing will make such devices seem closer to toys than to a phone with cool tools.”
Read the article with a critical mind, and think about what kind of invasiveness people would be willing to tolerate. Mortensen is definitely an optimist:
“Within a decade, contextual computing will be the dominant paradigm in technology. Even office productivity will move to such a model. By combining a task with broad and relevant sets of data about us and the context in which we live, contextual computing will generate relevant options for us, just as our brains do when we hear footsteps on a lonely street today. Then and only then will we have something more intriguing than the narrow visions of wearable computing that continually surface: We’ll have wearable intelligence.”