The User Interface

The user interface is any input/output device and operating system that enables a user to interact with a computing device, whether desktop, laptop, PDA or other mobile device. Little thought is given to the present-day interface, except perhaps for the operating system, primarily because the hardware used today has become somewhat static and has not changed significantly over the last 20 or so years. Let's face it, there really isn't a whole lot that can be done with a keyboard or mouse; some creative configurations and physical geometries have appeared in the last couple of years that are interesting and may prove successful. But research in progress may very well change how we interact with our computing devices in the future.

So where is the user interface headed? How can we interact more efficiently and effectively with these devices? What kind of interface can we expect to see in the near future? How will we interact with computing devices in a more distant future?

To understand where the computer interface is headed, it may be beneficial to see where it has come from and to review where it is today. From there we can look at current research and see where the user interface might go in the future.

We should all be glad that we do not have to use the earliest computing devices, like sliding sticks, the abacus, rotary calculators, slide rules and early four-function electronic calculators, to compute an amortization schedule for a loan. However, trying to do the same type of work on the first personal computers, like the Kenbak-1 or the Altair 8800, was much the same experience. These first machines had a very primitive interface by today's standards. They required the user to convert regular numbers into binary and then input the binary equivalent into the machine with a set of on/off toggle switches for each calculation. The output was primitive as well, displaying the calculated results as a sequence of small lights representing the binary value. There was no need for an operating system because there was no disk drive, no monitor, no keyboard and no saved programs that could be loaded into the device. So the user interface for data input and output of these early computers was you, binary code and the toggle switch.
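To appreciate the chore those early users faced, the decimal-to-binary conversion they performed by hand before setting the front-panel switches can be sketched in a few lines of Python (the function name and eight-switch panel width are illustrative assumptions, not details of any particular machine):

```python
def to_toggle_switches(value, width=8):
    """Convert a decimal number to the on/off toggle-switch
    pattern (binary digits) an early front-panel computer needed."""
    if value < 0 or value >= 2 ** width:
        raise ValueError(f"{value} does not fit in {width} switches")
    # Build the bit pattern from the most significant switch down:
    # shift the bit of interest to position 0, then mask it out.
    return [(value >> bit) & 1 for bit in range(width - 1, -1, -1)]

# Entering the decimal number 9 on an eight-switch front panel:
print(to_toggle_switches(9))  # [0, 0, 0, 0, 1, 0, 0, 1]
```

Every number in a calculation had to be translated this way, one switch flip per bit, which makes clear why even a simple amortization schedule was impractical on these machines.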

A couple of years later, in 1977, Apple Computer, Commodore and Radio Shack introduced mass-market computers. The interface for this next generation of computers was greatly improved over its predecessors, as the user could interact with the computer via a keyboard and TV screen. Here we see the beginning of the user interface much as we know it today. Although this hardware provided a simpler interface for data input and output, the operating system was still lacking: the user could only interact with the computer through the BASIC programming language. A tape storage system came along a little later, so programs and data could be stored on and loaded from tape. Even with these shortcomings, the introduction of the keyboard, TV screen and tape system was a significant step forward for the user input/output interface.

The input/output side of the user interface saw a steady stream of improvements, beginning with the introduction of the 5 1/4" floppy disk. The 3 1/2" floppy disk, introduced later, had a slightly different format and a significant improvement in storage capacity. Later came Zip disks, increasing capacity almost 100-fold. Today, systems have all but moved away from floppy media, preferring the higher-capacity CD-ROMs and DVDs. Clearly, higher-capacity drives have greatly improved the input/output side of the user interface.

With the ability to load larger programs and save higher volumes of data, we entered the age of the graphical user interface (GUI). Apple first introduced a GUI operating system for public use with the Lisa and then the Macintosh. The first IBM PCs were loaded with the text-based DOS operating system, but later they too entered the GUI arena, and their computers shipped with Microsoft's Windows. At first, the GUI drove the demand for better graphics. So began a spiral of improved video systems followed by better GUI operating systems to take advantage of the improved graphics. Thus better displays and GUI operating systems drove the improvements in the user interface for a number of years.

Although not formally considered part of the user interface, improvements in applications set the stage for the next round of interface advances. Applications were becoming standardized in their look and feel, but more importantly, data could easily be transported between applications and into an application from external sources. This transport functionality opened up new avenues of data input for the user interface.

One of the best sources of external data is the Internet. Here the user can connect to a public wide area network and download reams of data, like stock price histories, into a spreadsheet. Or the user can download a program, install and run it, and never touch a floppy disk or CD. The user can even keep the hardware and software completely up to date by downloading updates without ever leaving the house. Entertainment also entered the scene: users can download music and, soon, movies. Clearly the Internet provides a wealth of information that can be directly imported into the computer.

Automation provides another frontier for data input into the computer. Currently, several businesses use automation to import data directly into their computer and accounting systems. For example, automation sensors at oil wells, pipelines and refineries keep a tally of product and processing operations, while phone lines and satellites help merchants make transactions in real time. Some businesses now track sales and inventory in real time because automation can interface directly with their computer systems. As you can see, getting data into the computer can now happen as fast as the transaction or event occurs.

So how can human interaction achieve these levels of performance? One step in this direction would be the full development and implementation of voice recognition systems. Such programs have been available for years, but they have been limited in one fashion or another. Anyone who has looked at a voice waveform knows that voice recognition is a very difficult science. However, with increased processing power and continued research, these limitations should be overcome in the near future.

Parallel to the development of voice recognition and computer control will be developments in AI and 3D graphics. As graphics continue to advance and processing power becomes cheaper, we will see broader use of avatars, graphical representations of a human. These systems will demonstrate self-learning, adaptive and predictive behavior in response to the user's needs, reducing the effort the user spends interacting with the computer. Currently there are a limited number of avatars in service, but that number is expected to grow.

There is going to come a time when the hand movements of typing or mousing are too slow, and users will be looking for a faster way to interact with their computers. Voice recognition will bridge the gap for a while, but even speech will prove too slow. An area that will pick up where voice recognition stops is the eye-tracking system. Eye movements are very quick, and these systems will track them and perform the desired function at a much greater speed than any previous interface. A Google search indicates that Europe has done a lot of research in this area, and we may see commercial hardware in the near future. A BBC article addresses this subject, and more information can be found by performing an advanced Google search on the keyword "Eye Tracking".

Advancements in technology always yield interesting products never before contemplated. For example, current research in human organ supplements has computer interface ramifications in the more distant future. Medical scientists and doctors now routinely perform cochlear implants. With continued miniaturization of analog and digital circuits, it is not hard to believe that we could feed audio directly to the implant. And with ID chip technology we could discreetly receive specific information programmed just for the user.

There are other implant programs underway. One of the more promising is the retinal implant, in which doctors place a photosensitive silicon chip on the damaged part of the retina. Although it is in the early stages of development, patients who have received the implant are able to distinguish objects they could not see before. With continued research and development, this technology could be adapted to a user interface in the distant future. A brief but objective review of the retinal implant research is available for those who want more information.

The last interface to be described sounds a bit like science fiction. But then 25 years ago voice recognition would have seemed like science fiction if you were entering data into your Altair 8800 by flipping on/off toggle switches, so keeping an open perspective is always helpful. The fastest possible interface to a computer would be a brain implant. It sounds bizarre, but researchers have implanted hundreds of miniature electrodes on the surface of a monkey's brain, and with a little training the monkey was able to control the actions of a robot simply by thinking about it. Can you imagine the ramifications if humans had this ability to interface with their computers with just their minds? Articles describing the research and test subject, along with an ABC news report on brain implants and computer control, are available online.

With the convergence of various technologies, we are starting to see an evolution in user interface systems. As "The Future of the Microprocessor" in the February 2003 issue of HAL PC Magazine noted, it is very probable that within the next 20 years a single processor will have the computational power of a human brain. It should not be hard to believe that as the advancement of technology quickens, we can expect the flow of information to overwhelm the human brain. Philosophers have debated what mankind can do to keep up, and many have suggested human augmentation. So compared to what the computer interface may look like in the future, we may today, in effect, still be flipping the on/off toggle switches of our computer.