After reading several papers on the subject of the previous post, namely interface evaluation, a few interesting things crop up.
I find it interesting to note that (and I'm paraphrasing here, as I could not trace this specific thought) "Good technology weaves itself into the fabric of our lives, becoming invisible but still serving its purpose." The irony is that all interface evaluation attempts to produce solid data on the usability and 'betterness' of an interface by drawing the attention of the user to that very interface.
Whether a task is performed consciously or unconsciously influences the way it is handled. And most interface evaluation comes down to testing one or more new interfaces, effectively evaluating only the behaviour of users learning to use the interface. I believe this results in a bias towards easy-to-learn and simple interfaces.
But back to conscious and unconscious tasks. Undoubtedly you've walked through a door several times today, noting the door, but not the interface: the door handle. Now go back to your door, walk up to it, and open it while being aware of what you are doing. By making the interface visible and experiencing it consciously, you've changed the manner, timing and experience of using the door. And that's just a door!
As explained in the previous post, in HCI we attempt to formalize ways to determine whether an interface will be a good* interface. Preferably, we'd form some kind of repertoire of metrics that can be applied universally and still hold true.
Sadly, however, this is not the case. The efficiency of an interface can only be measured in relation to the underlying goals of the interface, which are generally somewhat fuzzy. It would be folly to apply the same metrics to an accounting program and to a game for children, as the accounting metrics will not incorporate things such as fun, learning, and experience.
Besides that, most of our interface design knowledge comes from a very narrow domain: the standard interface, i.e. the use of a computer with a mouse and a keyboard, which has been the prevalent (and, to a certain degree, invisible) form of physical interface to software up to this point. In the twenty or so years that this physical interface has been in use, both our software interfaces and our knowledge of them in this domain have matured and grown. Outside of this domain, however, the old methodology breaks down. I believe great damage can be done to the potential of new physical interfaces (take for instance mobile platforms) when the expertise of the standard interface is wrongly applied outside of its own domain.
This worries me most of all; interface evaluation is still a very narrow field as far as metrics go. I had expected a richness of all kinds of metrics, but most research is still done using simple timing, users reporting on their feelings, and easy-to-measure input events such as mouse clicks and/or keystrokes.
Timing is the main workhorse of interface evaluation, but it measures only one thing: speed of use when performing one specific pre-set task in a new environment.
User self-reporting is somewhat fuzzy, even after statistical analysis, as most users in these tests are CS or HCI students instead of average users.
And lastly, input events do not translate well across different types of physical interfaces, as the method of input itself differs between these platforms.
And still these are the most used metrics in interface evaluation. I have a creeping suspicion that these metrics are used so that researchers can at least obtain some data, rather than to truly determine the viability and effectiveness of an interface. This also exposes another problem with current metrics: they primarily measure task efficiency and user preference.
I find it hard to believe that all software could be correctly evaluated along just these two dimensions.
* The discussion on what makes interfaces good is actually a very broad one, which I'll forego here.