Wednesday, August 18, 2010

Idea #3: Tell me what you see...

Challenge: How do you utilize existing technology to enhance the life of someone who is visually impaired?

Solution: This challenge dates back to this past January, when I was studying System Architecture under Ed Crawley, Professor of Aeronautics and Astronautics at MIT. I'll be back in Prof Crawley's classroom come this September, so the idea came back to me today. Put in the simplest terms, Prof Crawley's methodology involves analyzing and designing the overall structure of a system, its key components, and the interactions between those components. A system can be anything from a pencil to an aircraft, or better yet, an organization of people.

In the process of breaking down a system, you begin to think about the function and meaning of each individual component, which is a fascinating exercise that forces you to contemplate the overall purpose of a product. For example, is the purpose of a digital camera to create digital images, or is it to capture a scene? If it's the former, then digital cameras have likely reached their dominant design and the impact that a designer/architect can have is minimal. However, if you consider the latter "scene capture" purpose, the possibilities for innovation expand... which brings us to the proposed challenge of enhancing the life of the visually impaired.

Imagine disassembling a digital camera into all of its functional components, then analyzing each one to determine its purpose and importance relative to the overall function of the camera. As you work your way through the components, consider the visual display on the rear of most cameras. This feature is clearly useful for giving the user a preview of the captured image, but it isn't strictly required: a camera can still capture a scene and produce a digital image without it. So, we have an opportunity here...

What if the display were not visual, but audible instead? Given today's image recognition technologies (Google, Apple, HP) and the endless supply of user-generated content (Yelp, Flickr), it's reasonable to believe that a device could capture a scene, analyze it for recognizable objects, and then speak a description of that scene using simple text-to-speech capabilities.
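Sketched in code, the pipeline might look something like the minimal Python example below. To be clear, this is only an illustration of the idea: the detect_objects function is a hypothetical stand-in for whichever recognition service the device would actually call, and the sample labels are made up; the speech step uses the real pyttsx3 text-to-speech library.

```python
# Capture -> recognize -> speak: a minimal sketch of the proposed device.
# detect_objects() is hypothetical (a stand-in for an image-recognition
# API or on-board model); only the pyttsx3 speech step is a real library.

import pyttsx3


def detect_objects(image_path):
    """Hypothetical recognizer: returns labels for objects in the image.

    A real device would send the captured frame to a recognition
    service; the labels here are stubbed purely for illustration.
    """
    return ["park bench", "oak tree", "golden retriever"]


def describe_scene(labels):
    """Turn a list of recognized labels into a simple spoken sentence."""
    if not labels:
        return "I don't recognize anything in this scene."
    return "I can see: " + ", ".join(labels) + "."


def speak(text):
    """Dictate the description aloud via the system text-to-speech engine."""
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()


if __name__ == "__main__":
    labels = detect_objects("snapshot.jpg")  # frame captured by the camera
    speak(describe_scene(labels))
```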

The resulting experience would be one where a visually impaired person could have a richer understanding of the world around them. The audible (or potentially Braille-based) output would allow the user to build an accurate mental picture of a scene without needing assistance from another person.

1 comment:

  1. Todd,

    Great concept. Here's one that I find particularly fascinating in this space. Apparently the neural pathways in our brains that convert signals from our eyes can be rewired to read inputs from other sources. Initial experiments happened back in (I think) the sixties, with people "seeing" through a huge array of electrodes attached to their backs; think of the array of electrodes as making a bitmap of the image in front of them. Recently the electrode technology has improved and devices now exist that allow a blind person to "see" through an array of electrodes held against their tongue. See the following for more information.

    http://www.scientificamerican.com/article.cfm?id=device-lets-blind-see-with-tongues

    Cheers,

    Matt.
