Information Design

Beyond HCI - towards Information Interaction

Online preprint

P. Duchastel
Information Design Atelier



Abstract

HCI grew out of the difficulties people had with using computer applications. The relationship of interest in HCI is that between a human and an interactive cognitive task that is mediated by computer. The interface is the mediational framework that is designed to optimally support task processes. That framework consists of concrete elements (such as displays...) as well as abstract task structures (cognitive processes...) that interface the human agent with task goals. Such an abstract HCI model places the interaction within the larger human plane, thus bringing out two implications for the field. First, it helps explain the difficulties of grounding HCI in psychology. Second, it announces the arrival of a sibling field: AAI (Autonomous Agent Interaction). Each of these implications leads to a redefinition of focus for the field. Psychology must re-emphasize the role of learning in interaction; furthermore, the enlarged scope of the field offered by AAI shifts our attention to the more abstract nature of the underlying interaction, characterized as information interaction (II).


Introduction

HCI is currently poised to break out of its mold, as defined by its first half-century of history, and redefine itself in another mold that is at once more abstract and wider in scope. In the process, it will redefine its very name, HCI becoming a subset of the larger field of Information Interaction (II). This coming transformation is what is described here. The presentation has a largely abstract character to it. Indeed, it seeks to reframe our discussion of the phenomenon of interaction under study in such a way as to go beyond the pitfalls of the concrete problems usually associated with the field. By stepping back from the usual issues of concern and from the usual way of categorizing the elements of the field (such as in the Handbook of HCI - Helander et al., 2000, or Jacko & Sears, 2003), the goal is to contextualize HCI within a broader, necessarily philosophical, plane of concern in order to look at it afresh and thereby see where it might be headed. The direction proposed is decidedly more encompassing, more abstract, and hence more theoretical in its analysis.

This orientation may trouble some, but you are invited to simply extrapolate from what is here and imaginatively see how to apply the perspective expressed to your own research issues and theoretical concerns. Depending on where you stand philosophically with respect to the human condition, there is also potentially an ethical aspect to what is foreseen in the growth of autonomy for software agents (see below), but that is a matter for philosophical debate in another forum.

Any proposed reconfiguration of an active and healthy field of research is bound to be controversial. While the perspective presented here is in some ways normative and prescriptive, suggesting for instance that the field focus on reaffirming learning as its central concern, the analysis is largely descriptive, even if interpretive, exploring an unfolding process that is underway. It points out a trend, that towards autonomous agents, and only incidentally comments on the normative aspect of that trend. Controversy should thus center not on issues of what we want the field to be, but rather on what we think the field is becoming - a scientific perspective, not mainly a philosophical one.

The focus of HCI

HCI is a field that grew out of the expansion of computing beyond the early context of usage by technically inclined specialists, who were quite eager to access the potential of computing and did not mind the learning curve involved. The scope of HCI continues to expand as computing becomes ever more pervasive and novice users expect to use computing artifacts without fuss, to put it bluntly. The goal of HCI is thus to ease usage while preserving the power of the artifact, effecting whatever compromises are needed to achieve a workable solution. That this goal is difficult not only to achieve, but even to gain acceptance for, is well illustrated by Carroll's (1990, 1998) proposal for minimalism and Norman's (1998) proposal for information appliances (building on the notion initially proposed by Raskin - see Norman).

And so we continue to indulge in situations where complex system requirements are specified and HCI expertise is brought in to do what it can to ameliorate the situation somewhat. Attempts to break out of this design context (as through the various means presented in section II of the Handbook of HCI - Helander et al., 2000) certainly point the way, but may only succeed when computing itself is seen to disappear (in the spirit of Weiser and Brown's ubiquitous computing, 1997, and Norman's 'invisible' computer, 1998) into the larger context of human activity structures. How we view cognitive tasks is thus central to HCI past, present, and future, and needs to be considered in a high-level framework, as described below.

A second aspect of HCI growth over the years has been in the scope of human activities being supported and enhanced by computing. Computing itself has merged with communications and now serves not only well-defined cognitive tasks such as those embodied in word processors and spreadsheets, but also much more open-ended tasks involved in social and entertainment pursuits. Carroll (2000) suggests that this move to ubiquitous cultural artifacts may overly stretch our capability to pre-specify requirements and thus underscores the importance of participatory design methodologies. While the latter will always be desirable, it may be preferable to see the challenge as one of developing conceptual frameworks that sketch out the constraints and possibilities involved. That inventiveness, combined with market forces such as those we see playing out on the Internet today, is the true driver of technological evolution. Thus, the cognitive context of human activities is also an essential consideration, beyond the task considerations.

In sum, the locus of HCI is shifting from complex task design that is highly focused towards activity structures that are more diffuse and varied, in much more wide open contexts than previously. Our technology-driven society is changing the very nature of HCI.

The cognitive task

The most basic question of HCI is: what is the interaction between? The three elements generally involved in the answer are the person (user), the system (computer and its interface), and the task (goal). An answer with more guts or more ambition would do away with the middle element and pursue analysis purely in terms of person and task. Doing away with the interface itself is, after all, the ultimate in the quest for transparency that drives all HCI design.

A computer system, represented to the person by its interface, is an artifact that mediates some specific process, i.e. it supports the interfacing between person and task such that the person can realize the task. The person does not care about the interface (it is just a tool), but does care a great deal about the task. Transparency in HCI means forgetting about the interface. A good example from a more familiar domain (Duchastel, 1996) is the steering wheel in a car. The steering wheel is the interface between me and the road, and I never think about the steering wheel; I observe the bends in the road. The steering wheel disappears, as ideal interfaces should, and all that is left is the road and me (the task and the person).

This analogy, despite the simplification involved (after all, the steering wheel was designed at some point in time and I did learn to use it at some other time), points out the centrality of task analysis unencumbered by system concerns. The feature-temptation of software design, whereby features are crammed into a system and then coordinated into some kind of coherent interface, continues to plague software development not only because it answers to other imperatives (such as marketing), but also because it underlies the traditional notion of HCI as interaction between a person and a system. Future HCI must and will refocus on the true interaction of interest. To the extent that it does, HCI will take on a new shape, as will computing itself, most likely along the lines of information appliances (possibly in the form described by Norman, 1998).

Two aspects of this new shape can be discerned, even though they are now merely speculative. The first concerns the shift away from traditional, generally narrowly-focused, task analysis (Jeffries, 2000) to a much greater concern for abstract task structures involved in human activity. The important considerations will likely be around identifying goals and the cognitive processes underlying them rather than the specific tasks involved. Interest will likely move away from software as enabler (with all its possible features) to software as activity-specific supporter of goals. HCI will move from how to interact with software to how to naturally accomplish some goal, finally approaching the age-old quest for transparency. There is more than HCI involved here of course - all of computing is.

The second aspect of the new HCI concerns interaction modalities and their concrete elements. Just as command modalities gave way to the WIMP paradigm of contemporary interfaces (Pew, 2003), the latter will give way to yet more natural interfaces involving speech and immersive technologies in the VR realm (see below). The driver of this shift, beyond the developing feasibility of these technologies, is the HCI goal of adapting to humans through use of natural environmental settings, i.e. another facet of the transparency goal. The day when my interface will be an earpiece, lapel button and ring (the button for sensory input of various kinds and for projection; the ring as a gestural device) may not be far off. Screens and wrap-around glasses will be specialty devices, and keyboards and mice endangered species.

These evolutions (of process and of gear) will lead people to see computing as interfacing, with current gear long forgotten and the computer, while ubiquitous, nevertheless invisible.

The cognitive context

The disappearing computer will not leave great empty spaces, however. There will be agents to interact with (discussed later) and there will be novel forms of interaction, discussed here.

As we digitize our world, its processes become malleable by the cognitive and computational artifacts that are computers. The range of such processes has increased to the point that computers are involved more with communication than with computing (the web is a good current illustration of this). The implications for HCI are tremendous. As aphoristically put elsewhere (Duchastel, 1998), "the cornerstone of ergonomics, functionality, thus crumbles on the web". This situation is due mainly to the reduction of intentionality in HCI: no longer do people only use computers to transact specific processes, but they also use them to stroll within new landscapes, in the true spirit of the Italian passeggiata, where strolling may be full of expectations, but also open to pleasantly unanticipated encounters.

The new landscapes include application areas such as communication, education, entertainment, etc. (Shneiderman, 2003). They all involve interaction with information, but also add to the mix the social aspect of interaction, thus creating a new and more complex cognitive context of action. The backdrop for HCI has suddenly changed: the cognitive context has evolved into a socio-cognitive one, as illustrated by the current interest in CSCW, itself only part of the new landscape.

The notion of interface itself can be re-examined. In a broad definition (Duchastel, 1996), an interface is the locus of interaction between person and environment, more specifically the information environment within which the person is inserted. In these general terms, interfaces can be viewed as abstract cognitive artifacts that constrain or direct the interaction between a person and that person's environment. In the end, the task itself is an interface, one that connects actor to goal through a structured process. Even the most archaic software is the concrete embodiment of a task structure. Thus, on the one hand, HCI deals with the person-information relation and is concerned with the design of information products; on the other, it deals with the person-task relation and is concerned with the guidance of process. It is the interplay between these two facets (product and process) that creates the richness of HCI as an applied field of the social sciences.

The expansion of computing leads to a renewed consideration of the realms of cognitive functioning. Indeed, we operate in different realms at different times, sometimes dealing directly with the world, sometimes traveling through our imaginations, and at other times acting within VR worlds of great variety (Duchastel, 2002). On another dimension, we can distinguish between actional, conceptual, emotional and social realms, depending on the modality of our involvement. These realms of being fashion our interactions and could well be explicitly considered in designing information artifacts and processes. Their pervasive nature has led them to be considered implicitly in design activities, but explicit consideration raises the level of attention given to each, with potentially superior design results.

Implications for HCI

Such an abstract HCI model places the interaction within the larger human plane, expanding HCI concerns to match the expansion of computing. It shows the historical narrowness of the field (appropriate for its day, of course) and is suggestive as to why it has proven so difficult to ground HCI in the scientific pursuits of psychology (Carroll, 2003). While HCI was narrowly focused on the local psychology of interacting with information elements on a screen in the context of natural cognitive constraints, the wider picture of goal and task structure within different realms of being was generally left to vary and to wander. Looking back, it is no longer surprising that the psychological linkages were so difficult to make.

If anything, though, the current expansion implicates psychology more than ever as cognitive tasks and cognitive contexts fashion the interactive behavior of people within their ever more accessible information worlds. There are two facets to the question: the first deals with adaptation to novel situations, the second with learning, both eminently psychological concerns, both of course interrelated.

The constant novelty factor that we experience with technology generally, and with computing in particular, sets us up for fully using our intelligence to adapt. Not only do the tools (interfaces) change, so too do the tasks and activities themselves, as witnessed for instance by the arrival of web browsing and many other web tasks. In this respect, then, HCI faces a losing battle against mounting diversity and complexity, and can only purport to alleviate some of the strain involved in this need for humans to adapt. What has happened to HCI as the 'process of adapting computers to humans'? HCI must find ways to assist human adaptation, through general means such as gradually increasing the complexity of an artifact, forcing stability in contexts that may otherwise prove unmanageable, increasing monitoring of the user, and providing just-in-time learning support. All of these means are merely illustrative of a style of HCI design effort that we will likely see more and more of in response to computing complexity.

It is in reality the complexity of activity that has increased, not the complexity of computing itself. Cars and telephones, too, have recently demanded adaptability for optimal usage. But as computing penetrates all areas more fully, and the possibilities for more symbolic mediacy increase (look at the choices on the telephone now), the question to ask is: how can HCI help? Are there general principles that can be applied? Perhaps not, for what we are witnessing here is the removal of the C from HCI. As computing becomes pervasive, it indeed disappears, as suggested earlier by Weiser & Brown (1997), and it is replaced by human-task interaction. Attention shifts up the scale of abstraction and designers focus on task structure and context (Kyng & Mathiassen, 1997; Winograd, 1997) more than on operational task mediators (even though, somewhere along the line, hard tool design is needed). A more human-focused HCI (away from the software, more towards the experience) evolves.

The other side of the coin is learning, that quintessential aspect of intelligence. Learning has always been considered a necessary difficulty for HCI, the goal generally being to simplify or metaphorize the interface so as to ease whatever learning is needed, this despite the recognition of a potential trade-off at times between ease of learning on one hand and ease of use, power, and flexibility on the other (Gentner & Nielsen, 1996). With the expansion of computing, and the added adaptation that it calls for, comes a concomitant increase in the need for learning, hence the centrality of this issue for HCI. Two prime factors come into play: the first concerns the extent of learning expected, the second the means for assisting learning.

As new tasks and new information exchanges are suggested to users, some degree of adaptation and learning is inevitable. The goal of minimizing the extent of learning must be weighed among other goals, and may not rank very high in many circumstances. A complex task, whether computer-mediated or not, generally has to be learned by a new user. The issue for the designer is to determine how much of it needs to be learned by different types of users (diversity is usually the case in this respect). To consider the same issue from the other side of the coin (Duchastel, 1996), the user herself must decide what learning investment to make, i.e. what competence level to achieve with a new tool. In this sense, the designer is really creating an environment rather than a single, unified artifact. A word processor, for instance, is best viewed as a whole host of activities (even if bundled together), depending on how different users view it and use it. In effect, the designer is creating a word processing environment that is appropriated and adapted in actual use in different ways by different users. Software modularity, both within and across systems, impinges greatly on HCI design efforts. As this trend grows, so too will the challenges. The further issue of task delegation to software agents (see below) only magnifies those challenges. The point here, however, is that the user herself must partake in the decision regarding learning and performance. Interface designers can only set the scene, so to speak.

Quality of learning can be defined (Duchastel, 1996) in terms of appeal, intensity of effect, and time to learn. Designing software environments should ideally optimize all three of these factors. The result should lead the user to interesting interactions, to depth of understanding, and to succinct learning experiences. Learning is an internal process that is stimulated and channelled by the external factors present, whether these be deliberately planned via instructional design, implicitly designed into cognitive artifacts through intuitive design processes, or simply fortuitously available in the environment. Indeed, a great deal of our learning happens all the time in informal settings as we interact with the information around us.

The focus of learning psychology is the study of the constraints impinging on the interaction between learner and environment in the pursuit of competence at a task or activity. Learning psychology, particularly as the cognitive context of computing takes on further complexity, thus becomes the core focus of HCI, more so than the human factors concerns of traditional HCI. Learning psychology may have lost some of its historical importance in psychology as the cognitive perspective brought other concerns to the fore (Anderson and Lebiere, 1998), but it will undoubtedly be of growing interest in artifact design because of the issues raised above.

Birthing autonomous agents

Computer agents, in the form of software that carries out specialized tasks for a user, such as handling one's telephoning, or in the form of softbots that seek out information and prepare transactions, are already very much with us (Bradshaw, 1997). That their numbers and functions will grow seems quite natural, given their usefulness in an ever more digitized and networked world.

What will grow out of the agent phenomenon, however, has the potential to radically transform the context of our interactions, both digital and not, and hence the purview and nature of HCI. It should be noted that the design of agents goes well beyond the category known as 'interface agents', the one most usually encountered by HCI professionals (Lieberman, 2002).

The natural evolution of the field of agent technology (Maes, 1996; Jennings & Wooldridge, 1998) leads to the creation, deployment and adaptation of autonomous agents (AAs) (Sycara and Wooldridge, 1998; Luck et al., 2003). These agents are expected to operate (i.e. make reasoned decisions) on behalf of their owners in the absence of full or constant supervision. What is at play here is the autonomy of the agent, the degree of decision-making control invested in it by the owner, within the contextual limits imposed by the owner for the task at hand and within the natural limits of the software itself.
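To make this notion of bounded autonomy concrete, here is a minimal sketch (in Python, with all names hypothetical, not drawn from any particular agent framework) of an agent that acts on its owner's behalf only within the limits the owner has delegated, deferring anything beyond them:

    from dataclasses import dataclass

    @dataclass
    class Mandate:
        """The contextual limits an owner delegates to an agent for a task."""
        max_cost: float       # spending ceiling per decision
        allowed_actions: set  # actions the agent may take unsupervised

    @dataclass
    class Decision:
        action: str
        cost: float

    class AutonomousAgent:
        def __init__(self, mandate: Mandate):
            self.mandate = mandate

        def handle(self, decision: Decision) -> str:
            # The agent acts on its own only inside the owner's mandate;
            # anything outside it is referred back to the owner.
            within_limits = (decision.action in self.mandate.allowed_actions
                             and decision.cost <= self.mandate.max_cost)
            return "execute" if within_limits else "defer_to_owner"

    # Usage: an agent allowed to book flights up to a 500-unit ceiling.
    agent = AutonomousAgent(Mandate(max_cost=500.0,
                                    allowed_actions={"book_flight"}))
    print(agent.handle(Decision("book_flight", 350.0)))  # -> execute
    print(agent.handle(Decision("book_flight", 900.0)))  # -> defer_to_owner

The degree of autonomy invested in the agent is then simply the breadth of the mandate, which the owner can widen or narrow as trust develops.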

Seen from another perspective, the computer user removes herself to an extent from the computer interactions that will unfold, knowing that the agent will take care of them appropriately and in her best interest. We witness here a limited removal of the human (the H) from HCI.

All this is relative, of course. Current stock management programs that trigger a sale when given market conditions prevail already operate with a certain level of autonomy, as do process control programs that monitor and act upon industrial processes. Autonomy will increase greatly, however, as we invest agents with abilities to learn (such as agents that learn a user's personal tastes from observation of the choices the user makes) and to use knowledge appropriately within limited domains. As we also develop in agents the capacity for evolving adaptation (from the research strand known as artificial life - Adami & Wilke, 2004), we will be reaching out to an agent world where growing, albeit specialized, autonomy may be the rule. HCI will be complemented by AAI (Autonomous Agent Interaction), for these agents will become participants in the digital world just as we are, learning about one another through their autonomous interactions (Williams, 2004).
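The kind of observation-based learning just mentioned can be sketched very simply. The following toy example (again hypothetical, a crude stand-in for a real preference-learning agent) counts the features of items a user chooses and uses those counts to rank new candidates:

    from collections import Counter

    class PreferenceLearner:
        """A toy agent that infers a user's tastes by watching choices."""

        def __init__(self):
            self.feature_counts = Counter()

        def observe(self, chosen_item_features):
            # Each observed choice strengthens the features it exhibits.
            self.feature_counts.update(chosen_item_features)

        def score(self, item_features):
            # A candidate scores higher the more its features match
            # the taste profile learned so far.
            return sum(self.feature_counts[f] for f in item_features)

    learner = PreferenceLearner()
    for choice in [{"jazz", "live"}, {"jazz", "studio"}, {"jazz", "live"}]:
        learner.observe(choice)

    candidates = {"A": {"jazz", "live"}, "B": {"classical", "live"}}
    print(max(candidates, key=lambda k: learner.score(candidates[k])))  # -> A

Real agents would of course use richer models, but the principle is the same: autonomy grows as the agent's internal model of its owner becomes reliable enough to act on.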

The roles these agents will play out in the world will undoubtedly be many, and the nature of the consequences that could ensue (consider how financial trading programs sometimes created havoc on Wall Street in their early years of operation) will not always have been fully thought out. Nor may the behaviors of agents be fully predictable, given the uncertainty of the workings of agent evolution. Further, in our networked digital world, a great deal of our interactions will take place with other people's agents, of unknown character and function.

There are certainly ethical considerations that will fuel debate in this area of technological development, as in many other areas of applied science (Lanier, 1995; Maes & Shneiderman, 1997). The issue being considered here is not a normative one, however, but rather one of describing and analyzing a situation that continues to emerge from this area of computing (Sycara and Wooldridge, 1998). Even today, we are faced with having to deal with a variety of agents, including spam-bots and viruses (some agents are beneficial while others can certainly be made nefarious). The relevant question is: what does this mean for HCI?

The implications are immense. HCI currently places the user in the center of activity within computing, with the goal of adapting computers to users. Work in CSCW brings in the team aspect of interacting with computers, but does not fundamentally change the focus on getting a task accomplished. Even major current work on agents generally accepts the view of the field as an assistive technology, i.e. one of helping people with tasks, rather than actually doing the tasks. The arrival of agents with growing autonomy, however, radically transforms the focus to social interaction, even if tasks remain in the background (Sloman, 1997).

As we populate digital space with agents that are more autonomous, we create an environment that takes on a life of its own, in the sense that we create uncertainty and open interaction up to adventure, in a true social context. Not only will people have to learn how to react to the agents that they encounter, the latter will also have to react to people and to other autonomous agents (Glass & Grosz, 2003). The interfacing involved in this novel cognitive context is changing radically from its traditional meaning, with issues of understanding, trust, initiative, and influence coming to the fore (Wexelblat & Maes, 1999). In discussing agents in the future of interfaces, Gentner & Nielsen (1996) talk of a shared world in which the user's environment will no longer be completely stable and the user will no longer be totally in control - and they were speaking of one's own assistive agents, not of those of other people, nor of autonomous agents. Georgeff and Rao (1998) express the evolving context in these terms: "In the world in which we live, chaos, uncertainty, and change are the norm, not the exception. Despite this, most designers of complex real-time systems continue to try to apply software technologies and methodologies that were constructed for static, certain, and definable worlds." (p. 139). The change occurring in HCI merely reflects the changing environment at large.

Perhaps an easy way to grasp what might be involved is to consider avatar interaction in VR worlds. Avatars are interfaces to other humans involved in a social interaction. Just as in the authentic settings in which they mingle, humans in virtual settings must learn something about the others involved and learn to work with them harmoniously in the accomplishment of their goals. The important consideration here is that while the VR world may be artificial and experienced vicariously in physical terms, in psychological terms the VR world can be just as genuine as our 'real' world, as hinted at by Turkle's (1995) interviews with digital world inhabitants (e.g. "real life is just one more window"). Inter-agent communication, just like its interpersonal counterpart, will be improvised and creative, with codes and norms emerging from the froth of the marketplace (Biocca & Levy, 1995). The potential for enhancing interaction certainly exists, particularly within VR worlds that not only reproduce but extend features of our regular world, but new risks also appear, for instance in the form of misrepresentation of agent intentions or outright deception (again, just as can occur in our normal interpersonal context) (Palmer, 1995).

The point is that the new cognitive context that is being created by both VR worlds and autonomous agents roaming cyberspace, all of which are but software artifacts, changes how we view interacting with computers. There will still exist the typical applications for assisting us in accomplishing specific creative tasks (and the associated HCI challenges), but the greater part of our interfacing with digital artifacts will more generally resemble our interfacing with others in our social world. In addition, interfacing specialists will be as concerned with the interface between AAs as with the interface between them and humans.

To a redefinition of the field

In concluding, it is time to bring together the various strands presented in this perspective and to state how HCI is likely to evolve in the near future. I foresee nothing short of a redefinition of the field, with classic HCI becoming a subset of a much wider-scoped field.

This evolution is largely coming about because of the ongoing transformation of computing itself and of the resulting novel cognitive context that it is generating. As Gentner and Nielsen (1996) nicely put it a few years ago, "During the relatively short history of computing, several changes have taken place in the ways computers are used and in the people who form the main user community." Both usage and users continue to expand as we digitize our world ever more and computing becomes ubiquitous and invisible.

This expansion shifts the focus of interfacing away from its traditional moorings in functionality and onto new landscapes that are much more socio-cognitive in nature. The wider, more abstract, notion of an interface being the locus of interaction between a person and her environment leads us to define the field in terms of information interaction (II). Indeed, the environment a person inhabits is ever more symbolically and digitally mediated. While psychology broadly defines that interaction in general terms, II defines it in symbolic terms. Information constantly gleaned from the environment regulates our actions, which in turn are themselves increasingly effected through information. We enter the age of interaction design (Winograd 1997; Preece et al., 2002) and environment design (Pearce 1997).

This is particularly evident as we not only design interactions with information but also come to inhabit environments that are pure information (as VR worlds are). The added complexity resulting from the growth in autonomous agents potentially makes II all the more challenging, bringing, so to speak, a level of politics into what was hitherto a fairly individual and somewhat straightforward interaction. Agents can be both autonomous cognitive artifacts and assistive interfaces, depending on their design specifics. As Lanier (1995) suggests, their arrival can have a profound effect not only on how we interact with cognitive artifacts, but very crucially on how we humans view humanity and our own place in the world.

The point is well illustrated by Donald (1991) who explains how cognitive inventions have led to cultural transitions in the evolution of the human mind, and specifically how the invention of external memory devices, in expanding our natural biological memories, has fuelled the modern age leading us to digital realms. Autonomous agents lead us beyond 'out of the skin' memories to 'out of the skin' actions, via the delegation we invest our assistive agents with. The implications of this possibility are immense, even if only hazily perceived at this moment.

In practical terms, the age-old and very central HCI question of task allocation to person or computer - the MABA-MABA issue (Sheridan, 2000) - takes on new meaning. It can no longer be decided purely on technical and human factors grounds, but rather enters something akin to the world of organizational management, perhaps even leadership. We are much beyond classic HCI here. The centrality of activity analysis remains, however, with questions such as 'How should the activity be done, ideally?' and 'How do current technologies or processes support this activity?' (Olson & Olson, 1991) taking on renewed importance.

In sum, HCI in the new millennium will transform itself into a much wider and more complex field based on information interaction. HCI will become a subset of the new field, alongside AAI dealing with interaction between autonomous agents. The new field will parallel the concerns of our own human-human interactions and thus involve social concerns alongside cognitive concerns.



References


Adami, C. & Wilke, C. (2004). Experiments in Digital Evolution. Artificial Life, 10 (2), 117-122.

Anderson, J. and Lebiere, C. (1998). The Atomic Components of Thought. Mahwah, NJ: Erlbaum.

Biocca, F. & Levy, M. (1995). Communication applications of virtual reality. In F. Biocca & M. Levy (Eds.) Communication in the Age of Virtual Reality. Hillsdale, NJ: Erlbaum. pp. 127-158.

Bradshaw, J. (1997) (Ed.). Software Agents. Cambridge, MA: MIT Press.

Carroll, J. (1990). The Nurnberg Funnel. Cambridge, MA: The MIT Press.

Carroll, J. (1998). Minimalism beyond the Nurnberg Funnel. Cambridge, MA: MIT Press.

Carroll, J. (2000) Scenario-based design. In Helander, M., Landauer, T. & Prabhu, P. (Eds.) Handbook of Human-Computer Interaction (2nd edition). Amsterdam: Elsevier.

Carroll, J. (2003). HCI Models, Theories, and Frameworks: Toward a Multidisciplinary Science. San Francisco, CA: Morgan Kaufmann.

Donald, M. (1991). Origins of the Modern Mind. Cambridge, MA: Harvard University Press.

Duchastel, P. (1996). Learning Interfaces. In T. Liao (Ed.) Advanced Educational Technology: Research Issues and Future Potential. New York: Springer Verlag.

Duchastel, P. (1998). Knowledge Interfacing in Cyberspace. International Journal of Industrial Ergonomics, 22, 267-274.

Duchastel, P. (2002). Information Interaction. Proceedings of the Third International Cyberspace Conference on Ergonomics, September 2002.

Gentner, D. & Nielsen, J. (1996). The Anti-Mac Interface. Communications of the ACM, August 1996.

Georgeff, M. & Rao, A. (1998). Rational software agents: From theory to practice. In N. Jennings & M. Wooldridge (Eds.) Agent Technology. Berlin: Springer. 139-160.

Glass, A. & Grosz, B. (2003). Socially Conscious Decision-Making. Autonomous Agents and Multi-Agent Systems, 6, 317-339.

Helander, M., Landauer, T. & Prabhu, P. (Eds.) (2000). Handbook of Human-Computer Interaction (2nd edition). Amsterdam: Elsevier.

Jacko, J. & Sears, A. (Eds) (2003). The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies and Emerging Applications, Mahwah, NJ: Lawrence Erlbaum.

Jeffries, R. (2000). The role of task analysis in the design of software. In Helander, M., Landauer, T. & Prabhu, P. (Eds.) Handbook of Human-Computer Interaction (2nd edition). Amsterdam: Elsevier.

Jennings, N. and Wooldridge, M. (Eds.) (1998). Agent Technology. Berlin: Springer.

Kyng, M. & Mathiassen, L. (Eds.) (1997). Computers and Design in Context. Cambridge, MA: MIT Press.

Lanier, J. (1995). Agents of Alienation. Interactions, 2(3), 66-72.

Lieberman, H. (2002). Intelligent Interfaces. In J. Carroll (Ed.) Human-Computer Interaction in the New Millennium. Boston, MA: Addison-Wesley.


Luck, M., McBurney, P. & Preist, C. (2003). Agent Technology: Enabling Next Generation Computing. AgentLink report, 2003. ISBN 0854 327886.

Maes, P. (1996). Intelligent Software: Easing the Burdens that Computers Put on People. IEEE Expert, Special Issue on Intelligent Agents (J. Hendler, Ed.), December 1996.

Maes, P. & Shneiderman, B. (1997). Direct Manipulation vs. Interface Agents: a Debate. Interactions, 4(6), 42-61.

Newell, A. & Card, S. K. (1985). The prospects for psychological science in human-computer interaction. Human-Computer Interaction, 1, 209-242.

Norman, D. (1998). The Invisible Computer. Cambridge, MA: The MIT Press.

Olson, G. & Olson, J. (1991). User-centered design of collaboration technology. Journal of Organizational Computing, 1(1), 61-83.

Palmer, M. (1995). Interpersonal communication and virtual reality: Mediating interpersonal relationships. In F. Biocca & M. Levy (Eds.) Communication in the Age of Virtual Reality. Hillsdale, NJ: Erlbaum. 277-302.

Pearce, C. (1997). The Interactive Book. Indianapolis, IN: Macmillan Technical Publishing.

Pew, R. (2003). The Evolution of Human-Computer Interaction: From Memex to Bluetooth and Beyond. In Jacko, J. & Sears, A. (Eds.) The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies and Emerging Applications. Mahwah, NJ: Lawrence Erlbaum.


Preece, J., Rogers, Y. & Sharp, H. (2002). Interaction Design: beyond human-computer interaction. Hoboken, NJ: Wiley.

Sheridan, T. (2000). Task analysis, task allocation and supervisory control. In Helander, M., Landauer, T. & Prabhu, P. (Eds.) Handbook of Human-Computer Interaction (2nd edition). Amsterdam: Elsevier.

Shneiderman, B. (2003). Leonardo's Laptop: Human Needs and the New Computing Technologies. Cambridge, MA: MIT Press.

Sloman, A. (1997). What kind of control system is able to have a personality. In R. Trappl and P. Petta (Eds.) Creating Personalities for Synthetic Actors. Berlin: Springer. 166-218.

Sycara, K. & Wooldridge, M. (Eds.) (1998). Proceedings of the Second International Conference on Autonomous Agents. New York: ACM.

Turkle, S. (1995). Life on the Screen: Identity in the Age of the Internet. New York: Simon & Schuster.

Weiser, M. & Brown, J. S. (1997). The coming age of calm technology. In P. Denning & R. Metcalfe (Eds.) Beyond Calculation. New York: Springer-Verlag. 75-86.

Williams, A. (2004). Learning to Share Meaning in a Multi-Agent System. Autonomous Agents and Multi-Agent Systems, 8, 165-193.


Winograd, T. (1997). Beyond interaction. In P. Denning & R. Metcalfe (Eds.) Beyond Calculation. New York: Springer-Verlag. 149-162.