ACM
Edited by
Massachusetts Institute of Technology, Laboratory for Computer Science
Abstract: This report, written for the general computing and scientific audience and for students and others interested in artificial intelligence, summarizes the major directions in artificial intelligence research, sets them in context relative to other areas of computing research, and gives a glimpse of the vision, depth, research partnerships, successes, and excitement of the field.

Categories and Subject Descriptors: I.2 [Computing Methodologies]: Artificial Intelligence
Contents
The field of artificial intelligence (AI) consists of long-standing intellectual and technological efforts addressing several interrelated scientific and practical aims:
The aims of AI reflect ancient dreams of using minds and hands to create beings like ourselves. In centuries past, pursuit of these dreams gave rise to both mechanical automata and formal theories of reasoning, eventually yielding the spectacularly successful modern artificial computers that, in calculating and computing, replicate and surpass abilities that people of earlier times regarded as intellectual activities on a par with writing letters and playing good chess. Using these computers over the past four decades, modern AI has built on the best thinking in a number of areas---especially computer systems, logic, the mathematical theory of computation, psychology, economics, control theory, and mathematical problem solving---to construct concrete realizations of devices that
One can divide present-day AI research into the following primary (and overlapping) areas [2]:
AI has won its successes only with great effort. In earlier times, researchers used informal means to specify the problems under investigation, and their work revealed the great difficulty of formulating these problems in precise terms. Solving these problems of formulation (Minsky, 1962) required considerable experimentation with and exploration of alternative conceptualizations in order to find appropriate ways of making them amenable to technical investigation and solution. Although logic, game theory, and other disciplines contributed formal approaches to specifying these problems, their methods often missed the mark in essential ways, especially by begging AI's question through presuming too much reasoning power and coherence on the part of the agent. In coming to new formulations, AI has often advanced these other fields, providing the first precise means for addressing problems shared with them.
Researchers in AI have traditionally met problems of formulation joyfully, courageously, and proudly, accepting the severe risks ensuing from such exploratory work in pursuit of the proportionately large gains that can result from finding successful formulations. The willingness to cultivate problems lacking ready formalizations has also engendered some disrespect for AI, as observers focus on the failures rather than on the successes. However, this adventurousness has proven highly fruitful, creating whole new subfields for formal investigation. Though some important problems still lack adequate formalizations, for many others AI has successfully provided formal foundations supporting rich areas of technical investigation.
AI has undergone a sea change in the general character of its research methodology since about 1980, partly through progress on its problems of formulation, and partly through increasing integration with related areas of computing research and other fields. Speculative, exploratory work remains necessary in investigations of many difficult issues. In particular, the natural or useful scope of the formalized knowledge employed in an investigation does not always admit simple, formally satisfying characterizations, so the field retains an element of conceptual exploration. The more typical research effort today, however, relies on formal, theoretically precise, and experimentally sophisticated methods for investigation and technical communication. Rigorous science, engineering, and mathematics now overshadow other work in much of the literature. Recent AI also replaces the focus of the early analytical studies on using isolated ``toy'' domains with a focus on using realistically broad and large-scale problem domains, and concentrates much more on integrating its ideas, systems, and techniques into standard computing theory and practice. These changes not only complement the increase in precision and formality, but demand additional rigor in order to enforce the conventions and coherence necessary in scaling up and integrating systems.
Accompanying this change in the character of AI results and research, accepted methods of educating students in AI have changed to recognize many prerequisites sometimes overlooked in years past. To understand the literature and make good in their own work, modern AI students must learn the basics of a number of fields: logic, statistics, decision theory, stochastic processes, analysis of algorithms, complexity theory, concurrency, and computational geometry, to name but a few.

3 Contributions
Some highlights of the major contributions of AI to computing, and to science more generally, include artificial neural networks, automated deduction, autonomous and semi-autonomous mobile robots, computational qualitative reasoning (about physical systems), constraint programming, data-mining systems, decision-tree learning methods, description logics (structured declarative representations going beyond those structures common in traditional logic), design and configuration systems, evolutionary computation, expert or knowledge-based systems (based on corpora of explicit, mainly declarative knowledge), fuzzy logic and control systems, graphical representations of uncertain information (Bayesian belief networks and others), heuristic search, logic and rule-based programming systems, mechanized symbolic mathematical calculation, natural language understanding and generation systems, nonmonotonic logics (a new category of logic formalizing assumption making), planning and scheduling systems, program synthesis and verification methods, real-time speaker-independent speech understanding, reason or truth maintenance systems (systematic recording and reuse of reasoning steps), robotic assembly systems, text processing and retrieval systems, and visual classification and registration systems.
One can appreciate the intellectual productivity of AI through the subjects it launched or has helped launch as independent areas of research, including artificial neural networks, automated deduction, constraint programming, heuristic search, integrated software development environments, logic programming, object-oriented programming, mechanized symbolic mathematical calculation, and program synthesis and verification methods. One should also note the major contributions AI has made to symbolic computing and functional programming. Both have been stimulated in fundamental ways through the sustained development and use of LISP and its relatives in AI research. AI has made important contributions to computational linguistics, to the area of epistemic logics (especially through nonmonotonic logics, theories of belief revision, and the computational applications now also heavily used in the theory of distributed systems), and to economics and operations research (where AI methods of heuristic search, especially stochastic heuristic search, have caused something of a revolution). AI has also served computing research as a prime exporter to other scientific fields of the notion of studying processes in their own right. AI models of process and information processing in language, reasoning, and representation have caused major shifts in linguistics, psychology, philosophy, and organization theory (e.g., with rule-based systems and artificial neural networks providing a ``rehabilitation'' of the impoverished and formerly stagnating behavioristic approach to psychology), and AI models now figure prominently in each of these fields. In addition to changing scientific fields, some AI methodologies (especially expert knowledge-based systems, artificial neural networks, and fuzzy systems) have changed the perspective of many engineers, who now go beyond the traditional concerns of algorithms and data to capture the knowledge or expertise underlying desired functionalities.
The manifold practical applications of AI continue to expand every year. The following few examples give the flavor of current successes, but one may find many more in the proceedings of the annual AAAI conference on Innovative Applications of Artificial Intelligence.

Predicting the results of the next generation of fundamental research requires either bravery or foolishness. One need not hazard such risks, however, to identify the core challenges facing the next generation of AI systems, namely exhibiting robust operation in hostile environments, broad and deep knowledge of large domains, the ability to interact naturally with people, and a degree of self-understanding and internal integrity.

Making progress on hard problems requires analysis, and AI has made substantial progress by isolating and understanding many of the important subtasks and subsystems of intelligent behavior in terms of knowledge representation, learning, planning, vision, and like subjects. Much current research seeks to put the pieces back together by constructing integrated systems that incorporate major capabilities drawn from several or all of these areas. For example, natural language processing systems now incorporate learning techniques, recent planning systems incorporate methods for reasoning under uncertainty, and ``active'' vision systems combine planning control of robot motions with analysis of the resulting sensor data. Integration offers a special opportunity both to test the component theories and also to constrain further the requirements on them.
Integration takes special prominence in work on building robots and supporting collaboration, detailed in the following, and in work on complete cognitive architectures.

Apart from the engineering challenge of building complex, hybrid systems capable of accomplishing a wide range and mixture of tasks, AI's scientific challenge consists of providing integrated computational theories that accommodate the wide range of intellectual capabilities attributed to humans and assumed necessary for nonhuman intelligences. Many efforts at theoretical integration occur among the subfields of AI. Common logical underpinnings help integrate theories of knowledge representation, planning, problem solving, reasoning, and some aspects of natural language processing, while economic concepts of rationality and the mathematics of Markov decision processes help unify recent theories of probabilistic planning, fault diagnosis and repair, reinforcement learning, robot control, and aspects of speech recognition and image processing. Of necessity, many of these efforts at theoretical integration cross disciplinary boundaries and lead to integration with other fields. AI has drawn on and contributed to logic, philosophy, psychology, and linguistics for some time. Integration with economics, decision theory, control theory, and operations research has served as a focus for more recent efforts, detailed in a later section.

The most novel case, but perhaps of the greatest immediate practical importance, consists of integration with related areas of computing research and practice. Integration with these areas has progressed steadily, but slower than one might hope; the areas of tightest integration include theory, databases, and programming languages (especially for logic and object-oriented programming).
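The unifying role that the mathematics of Markov decision processes plays across probabilistic planning, reinforcement learning, and robot control can be sketched minimally with value iteration. Everything below (the states, transition probabilities, rewards, and discount factor) is invented for illustration:

```python
# Minimal value-iteration sketch for a tiny, invented Markov decision process.
# transitions[state][action] = list of (probability, next_state, reward)
transitions = {
    "s0": {"go": [(0.8, "s1", 0.0), (0.2, "s0", 0.0)],
           "stay": [(1.0, "s0", 0.0)]},
    "s1": {"go": [(1.0, "goal", 1.0)],
           "stay": [(1.0, "s1", 0.0)]},
    "goal": {"stay": [(1.0, "goal", 0.0)]},
}
gamma = 0.9  # discount factor

def value_iteration(transitions, gamma, tol=1e-6):
    """Repeatedly apply the Bellman optimality update until values settle."""
    values = {s: 0.0 for s in transitions}
    while True:
        delta = 0.0
        for s, actions in transitions.items():
            best = max(
                sum(p * (r + gamma * values[s2]) for p, s2, r in outcomes)
                for outcomes in actions.values()
            )
            delta = max(delta, abs(best - values[s]))
            values[s] = best
        if delta < tol:
            return values

values = value_iteration(transitions, gamma)
```

The same fixed-point computation, with different interpretations of states and rewards, underlies the planning, diagnosis-and-repair, and reinforcement-learning theories named above.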
No one in AI today views AI systems as standing alone; instead, most view AI techniques as supplying components of complex computer systems, components that provide key elements of the capabilities, flexibility, and cooperativeness of an overall system. To realize their benefits fully, AI techniques and the theories underlying them must be integrated much more completely into the warp and woof of computing theory and practice. Representative long-term goals for integration with related areas of computing research include:

The term ``robot'' traditionally refers to automated agents acting in physical environments, with terms like ``softbot'' and ``software agent'' introduced to refer to agents acting purely within information systems, but this distinction promises to fade in importance as physical agents enter into electronic communication with each other and with online information sources, and as informational agents exploit perceptual and motor mechanisms (such as interpretation of graphical images and synthesis of gestures and other animations). Accordingly, this report calls both types of agents robots, returning to the original sense of the word as an artificial worker in Karel Capek's 1921 play R.U.R.

Many of the major areas of AI and computing research play essential roles in work on robots, from planning, sensing, and learning to high-performance numerical computing and interacting with multiple databases across networks. Robots working in informational environments require little investment in additional expensive or unreliable robotic hardware, since existing computer systems and networks provide their sensors and effectors. Robots with physical abilities, in contrast, require mechanization of various physical sensory abilities, including vision, hearing, touch, taste, smell, and thermoreceptivity, and mechanization of various physical motor abilities, including manipulation and locomotion. These areas comprise some of the major efforts of AI and provide some of its most impressive successes.
Recent work points toward new directions and applications in physical perception and motor abilities. Maturing work on vision as inverse graphics now finds applications in medicine and industry, while research on vision for autonomous robots now takes as its focus less well understood approaches employing more qualitative and ``purposive'' analyses that select which portions or aspects of images to look at based on what the robot is trying to do. Work on motor abilities now yields unexpected applications in rational drug design for traditional techniques like configuration-space planning, while research on control of autonomous robots has shifted toward less detailed representations that make simpler demands on sensory and actuation systems. Other work actively seeks to transfer the new representation techniques to applications such as industrial cleaning and ordnance disposal.

Scaling the operation of autonomous robots to more complicated tasks, and to natural environments in which the robots operate safely in the presence of humans, requires further integration of perception, action, and reasoning. High-level reasoning about what to do requires developing new perceptual systems that generate the kinds of data needed by the reasoning system, but the reasoning system in turn must make realistic demands on perception. The marriage of these abilities aims to produce robots that combine the high-level programmability of traditional AI systems with the fault tolerance of current autonomous robots.

The area of computer vision exhibits increasing integration with other disciplines. The subfield of active vision, for example, seeks to radically simplify the process of information extraction by closely coupling it to the control of action for a particular task, thus exploiting the practical constraints imposed by the domain of operation. Other approaches exploit theoretical and technological integration.
For example, inverse optics---roughly, the use of images to build models like those used in computer-aided design systems---now draws on collaborations with computer graphics, medical image processing, computational geometry, and multimedia. Representative long-term goals in this direction include building robots that:

Early work in AI largely rejected formal economic models in favor of psychological ones because the standard economic theory focuses on an idealization in which rational agents suffer no limitations of memory or time in coming to decisions, and which, for these reasons and others, may not be realizable in the world. Economic approaches generally presupposed possession of utility and probability functions over all contingencies, which offered no help with AI's need to construct these functions at the outset. Moreover, economics formalized preference and probability information in terms of very abstract representations that, through a lack of much structure, supported only very inefficient algorithms for making rational choices. In contrast, the psychological problem-solving methodology quickly adopted in AI starts with an easily realizable notion of rationality that is much weaker than the standard economic notion (one sanctioned, moreover, by Herbert Simon, a heretical economist founder of AI). Rather than seeking to maximize the numerical utility or expected utility across all conceivable actions, problem-solving rationality simply seeks to find actions meeting less stringent aspirations, such as satisfying designated conditions (``goals'') on the resulting states. Building on this approach, researchers now work towards ideal rationality through several means: by increasing the sophistication of reasoning about goals, by adopting explicit notions of utility, and by performing tractable optimizations that take into account the limited knowledge and abilities of the decision maker.
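The contrast between economic maximization and Simon-style satisficing can be sketched in a few lines. The action names and utilities below are invented; the point is only that the satisficer stops at the first action meeting its aspiration rather than examining the whole action space:

```python
import random

# Hypothetical utilities over a large action space (names and values invented).
random.seed(0)
actions = {f"action_{i}": random.random() for i in range(10000)}

def maximize(actions):
    """Classical economic rationality: examine every action, return the best."""
    return max(actions, key=actions.get)

def satisfice(actions, aspiration):
    """Simon-style satisficing: return the first action that is 'good
    enough', i.e. whose utility meets the aspiration level (the 'goal')."""
    for name, utility in actions.items():
        if utility >= aspiration:
            return name
    return None  # no action meets the aspiration

best = maximize(actions)                # always inspects all 10000 actions
good_enough = satisfice(actions, 0.95)  # typically stops far sooner
```

The weaker notion is much easier to realize: it requires only a test on candidate outcomes, not a complete utility function over all contingencies.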
As this approach to rationality suggests, recent work in AI has drawn on economic theory in many ways while remaining cognizant of its limitations. The first and major exploitation came about through partially solving the problem of representing probabilistic information that stymied early attempts to use decision-theoretic ideas directly. The popular graphical formalisms, especially Bayesian networks and influence diagrams, now support great numbers of successful applications, from sophisticated medical reasoners to mundane printer-diagnostic subsystems of personal computer operating systems. Indeed, the decision-theoretic notions of preference, utility, and expected utility now play important roles in many areas of AI research, as they help to shape learning and adaptation, to guide the plans and actions of autonomous agents and robots, and to reconcile and integrate AI planning methods with those of operations research. As interest in collaboration and multiagent systems has increased, many AI researchers have adopted the tools of game theory and the theory of social choice to analyze and design agent interaction protocols, to understand computational decision-making methods, and to analyze functional decompositions of mental organization. In the most explicit borrowing from economics, some work employs computational market price systems to allocate resources in a decentralized manner, and uses theoretical analyses of different economic systems to tailor multiagent organizations to achieve high efficiency in performing specific tasks.

Just as AI has contributed to logic, the intellectual trade with economics flows both ways, though unequally at present.
Bayesian networks and other AI methods have improved practice in statistics. The anticipated but as yet unrealized prize contribution, however, lies in using the precise, detailed models of mental organization developed in AI in formulating a realistic and useful theory of the rationality of limited agents (such as people) and organizations composed of such agents, something that has evaded economics throughout its history. The AI theories relating goals and preferences provide one step in this direction, as they augment the traditional economic theories of preference with new qualitative languages for modeling the incomplete and conflicting desires of agents. Recent work on control of deliberation, balancing the costs of further deliberation against the expected benefits, also points in this direction. More immediately, AI and computing research might help economists get a handle on the costs and value of information, computation, and communication, factors too often neglected in economics. Representative long-term goals in this direction include:

Studies of collaboration have a long history in sociology, economics, politics, linguistics, and philosophy. AI has studied collaboration issues in four primary contexts: understanding dialogue, constructing intelligent assistants, supporting collaborative and group work, and designing ``artificial societies''. In the longest-studied of these contexts, understanding dialogue, the normal rules of conversational implicature presuppose cooperative intent on the part of the listener. Asking a computer ``Can I see the accounts receivable summary?'' should yield either presentation of the summary or an explanation of the reason for its unavailability, not a less-than-helpful ``yes'' or ``no''. Aggravation with the stupidity of computers will never cease without such cooperative interpretation of requests and statements.
In the more recent context of designing intelligent assistants, the assistant systems must seek to understand and support the aims of the user. These systems go beyond mere decision support by attempting to anticipate and satisfy the needs of the user whenever possible and appropriate, as in the ARPA/Rome Laboratory Planning Initiative.

In a broader context, AI research contributes to providing supportive environments for collaboration and group-cooperative work. As in understanding discourse and designing intelligent assistants, these supportive environments must model processes and plans, but they must also supply methods which reason from these models to coordinate projects, manage workflow constraints, filter and broker information, answer questions, notify participants as appropriate, translate ``utterances'' between different interface modalities, and generate summaries to quickly bring offline participants up to date.

The newest context, designing artificial societies, introduces a design perspective into economics by seeking to tailor the preferences of agents, the protocols of interaction, and the environmental constraints so as to automatically yield collaboration, non-interference, and other desirable properties of group behavior.

Research on collaborative systems draws together many of the research areas of AI, especially planning, multi-agent learning, speech and language, and image understanding and presentation, and involves fundamental issues of modeling commitment, communication requirements, constraints and tradeoffs, negotiation methods, and methods for resolving conflicts among the intentions of collaborating agents. Collaborative systems also provide an interesting environment for attacking a core problem of knowledge representation, that of amassing enough knowledge about a broad domain, including many application tasks, to improve performance significantly.
Situating people and artificial agents in a common environment with a shared domain model, even a rudimentary one, creates the opportunity for large numbers of collaborators to convey their knowledge to and share their discoveries with one another and with the artificial agents, and for each participant to learn from the collaborative experience. Representative long-term goals in this direction include:

Efficient and natural communication holds the key to many of the promises of computers, given that relying on command languages, menus, textual display, and other traditional media stymies many potential applications [3]. The activities these applications support normally rely on many different communication modalities, such as spoken utterances, written texts, and the gestures that accompany them, and effective participation in these activities requires the ability to understand and generate communications in these modalities. In addition, the ability to read would greatly simplify the task of imparting knowledge to artificial agents, considering the vast amount of human knowledge encoded in written form. AI has long addressed these issues, and has contributed to great progress on realizing linguistic and visual communication mechanisms involving multiple modalities, including natural language, gestures, and graphics. The most general form of these abilities, however, lies far beyond current scientific understanding and computing technology.

Ambiguity, intent, and thinking while speaking form some of the main obstacles to achieving the desired communication. Human languages all use a small set of resources (such as words, structures, intonations, and gestures) to convey an exceedingly wide, rich, and varied set of meanings. Speakers often use the same word, structure, or gesture in many different ways, even in the same sentence or episode.
Although people rarely notice such ambiguities, their identification and resolution challenge current speech- and language-processing systems. Intent, or the difference between what people say (or write) and what they actually mean, arises because people rely on their audience to infer many things left unsaid or unwritten from context and common knowledge. Furthermore, people often begin to speak or write before thinking through their ideas completely, using the formulation of utterances as a step in understanding their own partially formed ideas. Both practices result in partial and imperfect evidence for what people really mean to communicate.

Recent developments include the use of statistical models, typically generated automatically, to predict with good accuracy simple grammatical features of utterances such as the part of speech of a word, as well as semantic properties such as the word sense most likely in a given context. These models thus reduce problems caused by ambiguities in the grammatical and semantic properties of words. In other work, idealized models of purposive communicative action support improved discourse modeling.

Much of the success of current natural language processing technology stems from a long and tedious process of incremental improvement in existing approaches. Extracting the best possible performance from known techniques requires more work of this kind, but exploration of new and combined approaches supplies additional opportunities. For example, although statistical and machine-learning techniques in natural language processing offer broad (but shallow) coverage and robustness with respect to noise and errors, grammatical and logical techniques offer deeper analyses of meaning, purpose, and discourse structure.
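A minimal sketch of the statistical side: a model generated automatically from tagged text that predicts a word's most likely part of speech by corpus counts. The tiny tagged corpus below is invented for illustration (real systems condition on far richer context):

```python
from collections import Counter, defaultdict

# Invented toy corpus of (word, part-of-speech) pairs; note the
# ambiguous word "can", tagged NOUN once and AUX twice.
tagged_corpus = [
    ("the", "DET"), ("can", "NOUN"), ("is", "VERB"), ("full", "ADJ"),
    ("we", "PRON"), ("can", "AUX"), ("run", "VERB"),
    ("you", "PRON"), ("can", "AUX"), ("see", "VERB"),
]

# Count how often each tag labels each word.
counts = defaultdict(Counter)
for word, tag in tagged_corpus:
    counts[word][tag] += 1

def most_likely_tag(word):
    """Predict the tag that most frequently labels this word in the corpus."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

tag = most_likely_tag("can")  # majority tag resolves the ambiguity
```

Even this crude frequency model resolves many ambiguities correctly, which is why such automatically generated statistics became a standard first line of attack.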
These two types of techniques could complement one another, with the symbolic techniques serving to specify a space of interpretation possibilities and the statistical techniques serving to evaluate efficiently the evidence for alternative interpretations. The results of such integration should prove of value to all natural language processing applications, from information extraction and machine translation to collaborative interfaces. Another opportunity involves determining the most effective combination of natural language processing technology with other technologies to forge effective multimodal user interfaces. Representative long-term goals in this direction include:

The most widespread benefit so far of putting AI into practice consists of the bodies of human knowledge formalized with an eye to mechanizing reasoning. Though the idea of writing down expert knowledge in explicit form goes back at least to the code of Hammurabi, if not to the earlier Egyptian and Babylonian inventors of geometry and arithmetic, the knowledge formalized and codified through AI methods has a very different character and purpose. AI compilations go beyond mere books by representing not just the ``factual'' knowledge about the subject but also the reasoning processes appropriate to specific uses of the knowledge. Authors of books focus on conveying propositional knowledge, normally leaving it up to the reader to learn how to apply and interpret the knowledge. Authors of traditional computer programs focus on representing processes, necessarily leaving it to the documentation (if any) to convey the facts used or presupposed in the design or operation of the programs. The efficient mechanization, maintenance, and explication of expertise requires expressing both types of knowledge in declarative representations. Reasoning systems may then manipulate these representations in a variety of ways to support explanation, guidance, maintenance, and learning.
The novel opportunities created by capturing reasoning processes as well as factual knowledge have stimulated great effort in this area, and construction of knowledge-based systems today goes on in hundreds if not thousands of sites. Most of this work stays invisible, as businesses and organizations view these bodies of articulated expertise as trade secrets and competitive advantages they do not wish to see their competitors replicate.

The problem of formalizing knowledge remains one of the principal challenges to AI research. Current successful knowledge-based systems rely on carefully limiting the scope and domain of the formalized knowledge, in order to make it tractable to collect, codify, and correct this knowledge. The experience of AI shows two key lessons about this task: formalizing knowledge is difficult, and adequate formalizations are feasible. The current formalizations, although adequate to the specific tasks addressed so far, fail to support the integration aims of AI research in several ways, and overcoming these limitations forms a major task for AI research that forces consideration of many fundamental issues in knowledge representation.

First, current formalizations do not cover the broad scope of knowledge needed for intelligent activity outside of carefully circumscribed circumstances, in particular, the knowledge needed by integrated systems acting in everyday household, social, workplace, or medical situations; nor do current formalizations fit together smoothly, since the conceptualizations adequate to one domain rarely do justice to the concepts from peripheral domains.
Addressing these problems calls for constructing formal ``ontologies'' or conceptual organizations adequate to the broad scope of human knowledge that include propositional, uncertain, and algorithmic and procedural knowledge; finding ways of efficiently structuring, indexing, and retrieving large-scale bodies of knowledge; reasoning across multiple domains, and across the same knowledge represented for different purposes; and efficiently representing the contexts or foci of attention that form the specific portions of the large bodies of interest in episodes of reasoning. To prove useful in practice, the structures and methods developed here will require (and benefit from) smooth integration with extant databases and database organizations, as well as a closer integration between declarative knowledge about formalized procedures and the use of typical procedural programming languages.

Second, most extant bodies of formalized knowledge presuppose, but avoid formalizing, the commonsense knowledge so characteristic of people. Although expert performance often does not depend on common sense (as any number of jokes about experts illustrate), commonsense knowledge and reasoning appear crucial, both for tying together domains of expert knowledge and for recognizing the boundaries of specialized expertise in order to avoid acting inappropriately. Thus constructing broadly knowledgeable and capable systems requires formalizing and mechanizing commonsense reasoning. The amount of knowledge needed for intelligent action across the broad range of human activity promises to dwarf even the large body developed in the long-running CYC project (Lenat, 1995).

Third, current methods for constructing bodies of formalized knowledge require much (often heroic) human labor on the part of the best (and least available) people knowledgeable in each area, as does their maintenance or adjustment as circumstances change.
Though some applications may command the resources these methods demand, realizing the benefits of knowledge-based systems in the broad spectrum of applications requires developing methods in which the necessary mass of knowledge accumulates through many small contributions made by a range of people, both the ordinary many and the expert few, and through the exploitation of machine labor.

The goal of enabling people to make incremental contributions to knowledge bases motivates research on simplifying and streamlining the process of updating and maintaining the system's knowledge and abilities. Performing the primary tasks---identifying gaps in knowledge, expressing the knowledge needed to fill those gaps, and checking new knowledge against old---requires knowledge about the system's own knowledge and operation. Accordingly, methods for these tasks rely on declarative formalizations of both the processes for carrying out each of these steps and of the structure and function of each part of the knowledge base, rather than on the mainly procedural representations found in most programming languages. Such formalizations, and methods for using them, form the basis of the extensively investigated KADS methodology and library.

Of course, people do not always possess the knowledge they need, and even with automated help may still find it extremely hard to articulate the knowledge they do have. Work on machine learning and discovery techniques bridges the gap in many cases. This work builds on statistical methods and ``connectionist'' models inspired by neurophysiology, but extends them to cover a much richer class of models and to combine symbolic and numerical methods in useful ways. Current methods can capture some expert behavior, but often do so in a way that does not provide useful explanations of the behavior.
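Machine discovery of the kind just described can be sketched in miniature as a search for regularities in data: here, item pairs that co-occur frequently across transactions, the core step of many data-mining systems. The transaction data and support threshold below are invented for illustration:

```python
from collections import Counter
from itertools import combinations

# Invented toy transaction data; real data-mining systems scan millions
# of records drawn from databases.
transactions = [
    {"bread", "milk"},
    {"bread", "milk", "eggs"},
    {"milk", "eggs"},
    {"bread", "milk", "butter"},
    {"eggs"},
]
min_support = 3  # a pair counts as a regularity if it occurs in 3+ transactions

# Count co-occurrences of every item pair within each transaction.
pair_counts = Counter()
for basket in transactions:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

frequent_pairs = {pair for pair, n in pair_counts.items() if n >= min_support}
```

The discovered regularity is explicit and inspectable, unlike the opaque numerical expressions that some learned controllers embody; making learned knowledge similarly explicit is part of the analysis task described above.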
Using these bits of embodied expertise in many cases requires further analysis to transform the knowledge (e.g., ``turn to the right if E>0'' for some complex numerical expression E) into a more explicit and sensible form (``turn to the right if the road turns right''). For example, one important new area uses Bayesian networks to summarize prior knowledge in an understandable way, Bayesian inference to combine prior knowledge with new data, and techniques of compositional representation to learn (construct) new networks when the prior network fails to accommodate the new data adequately. Another new area, knowledge discovery in databases (or ``data mining''), finds regularities and patterns in extremely large data sets by integrating techniques from machine learning and statistics with modern database technology.

Representative long-term goals in this direction include:

Mathematical work in AI has long swum in the same waters as the theory of computation, logic, and mathematical economics. Early mathematical work focused on the theory of search and the power of statistical and neural-net models of recognition, but later work has added deep and rich theories of nonmonotonic reasoning; of the expressiveness, inferential complexity, and learnability of structured description languages; and of stochastic search techniques. Some of this work employs notions taken from or developed in concert with the theory of computation, such as time-space classifications of computational complexity and epistemic theories of distributed systems. AI theories must consider richer classifications of systems, however, since the properties distinguishing minds (belief, desire, intent, rationality, consciousness, sensory and motor faculties, etc.) constitute a larger and more puzzling set than those distinguishing computations. Although reasonable formalizations exist for some of these distinguishing properties, others remain problems for formulation.
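The Bayesian combination of prior knowledge with new data works by reweighting a prior over hypotheses with the likelihood of each observation. A minimal sketch, with invented numbers for a machine-diagnosis example:

```python
# Sketch of Bayesian updating: posterior is proportional to prior x likelihood.
# Hypotheses: is a machine "faulty" or "ok"? Observations: "error"/"normal".
prior = {"faulty": 0.1, "ok": 0.9}  # assumed prior knowledge
likelihood = {                      # assumed sensor model P(obs | hypothesis)
    "faulty": {"error": 0.7, "normal": 0.3},
    "ok":     {"error": 0.05, "normal": 0.95},
}

def update(belief, observation):
    unnorm = {h: belief[h] * likelihood[h][observation] for h in belief}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

belief = prior
for obs in ["error", "error"]:  # two error reports arrive
    belief = update(belief, obs)

# Repeated errors shift belief sharply toward the fault hypothesis.
print(round(belief["faulty"], 3))  # → 0.956
```

A full Bayesian network generalizes this two-hypothesis table to a graph of conditionally dependent variables, but the update principle is the same.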
AI shares some of these problems with the mathematical sides of logic, economics, physics, and the theory of computation, but alone among the disciplines aims to characterize the full range of possible psychological organizations for minds, from the trivial to the superhuman. Since conceptual analysis flourishes best in the context of solving specific problems, the concrete complex systems developed in AI research bestow an advantage on AI over its relatives, which typically lack nontrivial yet tractable examples to study. These concrete, complex examples continue to attract the attention of workers in other disciplines, and this comparative advantage promises a stream of AI contributions to these other fields.

Representative long-term goals in this direction include:

By addressing both the underlying nature of intelligence and the development of theories, algorithms, and engineering techniques necessary to reproduce reliable, if rudimentary, machine intelligence, AI research makes numerous, large, and growing contributions to computing research and to the evolving social and industrial information infrastructure. Some contributions come through study of the deep scientific issues that concern our understanding of computation, intelligence, and the human mind. Others come through practical applications that help make computer systems easier and more natural to use and more capable of acting as independent intelligent workers and collaborators. Continued progress requires pursuing both types of contributions. The practical applications alone offer some of the strongest motivations for pursuing the scientific studies, as achieving the practical benefits seems hopeless without obtaining a deeper scientific understanding of many issues. At the same time, success in many of the scientific investigations calls for developing broad bodies of knowledge and methods---and practical applications provide the most natural context for developing these bodies of knowledge.
AI researchers retain enthusiasm about their field, both about the problems it addresses and about the ongoing progress on these problems, even as it has matured into a field of substantial content and depth. AI has needs that intersect with all areas of computing research, and a corresponding interest in partnerships with these areas in advancing knowledge and technique on these shared problems. It offers techniques and theories providing leverage on hard problems and also offers large important problems that might well serve as target applications for much of computing research. Only a few of these have been described in this short summary, and many opportunities remain for joint exploration with other areas of computing. As a field, AI embarks on the next fifty years excited about the prospects for progress, eager to work with other disciplines, and confident of its contributions, relevance, and centrality to computing research.

This report draws on two longer ones prepared by the

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. See footnotes [2] and [3] for portions based on text Copyright © 1995 American Association for Artificial Intelligence, and reprinted with permission. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from Publications Dept, ACM Inc., fax +1 (212) 869-0481, or permissions@acm.org.

4 Directions
Building systems with these characteristics poses the same challenges that have driven AI research throughout its history, and each of the areas of technical investigation introduced earlier---knowledge representation and articulation, learning and adaptation, deliberation, planning, and acting, speech and language processing, image understanding and synthesis, manipulation and locomotion, autonomous agents and robots, multiagent systems, cognitive modeling, and mathematical foundations---supports a vigorous research effort contributing to meeting these challenges. This brief survey cannot present a complete picture of all the important directions of research in each of these areas (see (Weld, Marks, and Bobrow, 1995) for a more generous, though still abbreviated, summary; the challenge problems listed by Selman et al. (1996); and the 1994 Turing Award lectures of Feigenbaum).

4.1 Pursuing Integration
AI today vigorously pursues integration along several dimensions: integrating systems that support different capabilities, combining theories and methodologies that concern different facets of intelligence, coordinating subfields within AI, and reconciling, accommodating, and exploiting ideas from other disciplines.
4.2 Building Robots (Physical and Computational)
Building integrated agents that perceive and act in extant complex and dynamic environments requires integrating a wide range of subfields of AI and computing research. These environments include both physical environments and the ``virtual'' worlds of information systems. By focusing on everyday worlds of interest to people, such as office buildings or the Internet, researchers avoid the methodological hazards of designing and simulating toy worlds unwittingly tailored to the designs they were supposed to validate. They also avoid the opposite problem of focusing on problems so hard even humans cannot solve them.
4.3 Modeling Rationality
Formal and informal notions of rationality from psychology (reasoning and argument) and logic (semantic consistency, deductive closure) have served AI well from its earliest days. They supply concepts useful in mechanizing several forms of reasoning, and provide the basis for major cognitive-modeling explorations of hypotheses about the psychology of human ratiocination and its integration with other mental faculties. These large-scale, detailed cognitive theories have already begun to change the face of psychological theory, while nonmonotonic, probabilistic, and new modal logics continue to expand conceptions of logical rationality. The main new direction here, however, seeks integration of rationality in the logical and psychological senses with the economic sense of rationality (maximum utility, optimal allocation of resources). Rationality in the economic sense has made only sporadic appearances in AI until recently, even though it subsumes the logical sense from a formal point of view and provides explanations of important aspects of rationality in the psychological sense. Rationality in the economic sense offers many attractions as an organizing principle for both intelligent system construction and intellectual integration. It contributes to the system's coherence (in terms of explanation, justification, and verification), to its competence (offering performance advantages), and to its construction methodology (design and development advantages). Researchers in many areas of AI have recognized these advantages and begun work on exploiting rationality in the economic sense. In consequence, economic rationality promises to permeate much of AI; indeed, this work promises to contribute to economics as well, as AI and economics work together on their shared problems.

4.4 Supporting Collaboration
Quite apart from the research collaborations within and without AI just described, the subject matter of collaboration and coordination of multiple agents (human or artificial) forms one of the main directions for AI research in coming years (Grosz, 1996). To prove useful as assistants, AI systems must interpret the words and deeds of people to model the desires, intentions, capabilities, and limitations of those people, and then use these models to choose the most appropriate or helpful actions of their own. Making these interpretations often means relying on statistical properties of past behavior, and choosing how to cooperate often means assessing or negotiating the preferences and tradeoffs held by the various participants.
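Two of the ideas above, modeling a person from statistical properties of past behavior and the economically rational choice of a maximally helpful action, can be sketched together (all actions, probabilities, and utilities are invented for illustration):

```python
# Toy sketch: an assistant predicts a user's next request from past behavior,
# then picks the preparation action of maximum expected utility.
from collections import Counter

history = ["print", "edit", "print", "print", "search", "edit", "print"]
counts = Counter(history)
total = sum(counts.values())
p_next = {request: n / total for request, n in counts.items()}  # P(request)

# Utility of each preparation action given the user's actual next request.
# (Invented numbers: preparing correctly helps; preparing wrongly costs time.)
utility = {
    "warm_up_printer": {"print": 10, "edit": -2, "search": -2},
    "open_editor":     {"print": -2, "edit": 8,  "search": -1},
    "do_nothing":      {"print": 0,  "edit": 0,  "search": 0},
}

def expected_utility(action):
    return sum(p_next[req] * utility[action][req] for req in p_next)

best = max(utility, key=expected_utility)
print(best, round(expected_utility(best), 2))  # → warm_up_printer 4.86
```

The frequency model is the crudest possible stand-in for interpreting a person's words and deeds; real assistants model intentions and negotiate tradeoffs far more deeply, but the decision rule, maximize expected utility under the model, is the economic rationality discussed above.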
4.5 Enhancing Communication
4.6 Obtaining Knowledge
4.7 Deepening Foundations
5 Conclusion
These studies are an impetus to youth, and a delight to age; they are an adornment to good fortune, refuge and relief in trouble; they enrich private and do not hamper public life; they are with us by night, they are with us on long journeys, they are with us in the depths of the country.
Write the vision; make it plain upon tablets, so he may run who reads it.
Acknowledgments
The editors thank the group members (see Footnotes).

Footnotes
References
Last modified: Thu Dec 5 08:53:46 EST 1996. Jon Doyle <doyle@mit.edu>