Distributed artificial intelligence (DAI), also called decentralized artificial intelligence, is a subfield of artificial intelligence research dedicated to the development of distributed solutions for problems. DAI is closely related to and a predecessor of the field of multi-agent systems.
Multi-agent systems and distributed problem solving are the two main DAI approaches. There are numerous applications and tools. Distributed Artificial Intelligence (DAI) is an approach to solving complex learning, planning, and decision-making problems. It is embarrassingly parallel, thus able to exploit large-scale computation and the spatial distribution of computing resources. These properties allow it to solve problems that require
A mind, consciousness or true understanding. It does not imply John Searle's "strong AI hypothesis". It also doesn't attempt to draw a sharp dividing line between behaviors that are "intelligent" and behaviors that are "unintelligent": programs need only be measured in terms of their objective function. More importantly, it has a number of practical advantages that have helped move AI research forward. It provides
A partially observable Markov decision process (POMDP). If there is more than one agent, we have multi-agent planning, which is closely related to game theory. In AI planning, planners typically input a domain model (a description of a set of possible actions which model the domain) as well as the specific problem to be solved, specified by the initial state and goal, in contrast to those in which there
A state, or a biome. Leading AI textbooks define "artificial intelligence" as the "study and design of intelligent agents", a definition that considers goal-directed behavior to be the essence of intelligence. Goal-directed agents are also described using a term borrowed from economics, "rational agent". An agent has an "objective function" that encapsulates all the IA's goals. Such an agent
A "rational agent"). An agent that is assigned an explicit "goal function" is considered more intelligent if it consistently takes actions that successfully maximize its programmed goal function. The goal can be simple: 1 if the IA wins a game of Go, 0 otherwise. Or the goal can be complex: perform actions mathematically similar to ones that succeeded in the past. The "goal function" encapsulates all of
A "reward function" that encourages some types of behavior and punishes others. Alternatively, an evolutionary system can induce goals by using a "fitness function" to mutate and preferentially replicate high-scoring AI systems, similar to how animals evolved to innately desire certain goals such as finding food. Some AI systems, such as nearest-neighbor, instead reason by analogy; these systems are not generally given goals, except to
A "rational agent" as: "An agent that acts so as to maximize the expected value of a performance measure based on past experience and knowledge." It also defines the field of "artificial intelligence research" as: "The study and design of rational agents." Padgham & Winikoff (2005) agree that an intelligent agent is situated in an environment and responds in a timely (though not necessarily real-time) manner to changes in
a behavior graph contains action commands, but no loops or if-then-statements. Conditional planning overcomes the bottleneck and introduces an elaborated notation which is similar to a control flow known from other programming languages like Pascal. It is very similar to program synthesis, which means a planner generates source code which can be executed by an interpreter. An early example of
a conditional planner is "Warplan-C", which was introduced in the mid-1970s. What is the difference between a normal sequence and a complicated plan that contains if-then-statements? It has to do with uncertainty at the runtime of a plan. The idea is that a plan can react to sensor signals which are unknown to the planner. The planner generates two choices in advance. For example, if an object
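As a concrete illustration of this idea (a minimal sketch, not a reconstruction of Warplan-C), the following Python fragment encodes a conditional plan with two pre-computed branches; which branch runs is decided only at execution time, when a sensor value becomes known. The sensor function and the action names are hypothetical.

```python
def sense_object_present():
    # Hypothetical sensor; a real system would query hardware here.
    return True

def action_a():
    print("Action A: handle the detected object.")

def action_b():
    print("Action B: proceed without the object.")

# The planner prepares both branches in advance; the branch to execute is
# chosen only at runtime, once the sensor signal is known.
conditional_plan = {True: [action_a], False: [action_b]}

for step in conditional_plan[sense_object_present()]:
    step()
```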
a description of the possible initial states of the world, a description of the desired goals, and a description of a set of possible actions, the planning problem is to synthesize a plan that is guaranteed (when applied to any of the initial states) to generate a state which contains the desired goals (such a state is called a goal state). The difficulty of planning is dependent on the simplifying assumptions employed. Several classes of planning problems can be identified depending on
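To make the problem statement concrete, here is a minimal sketch (in Python, with invented facts and action names) of a domain-independent forward search that synthesizes a plan by breadth-first exploration of the states reachable from the initial state.

```python
from collections import deque

# A toy forward-search planner.  States are sets of facts; each action has a
# name, preconditions, facts it adds and facts it deletes.
actions = [
    ("pick_up_block", {"hand_empty", "block_on_table"}, {"holding_block"}, {"hand_empty", "block_on_table"}),
    ("stack_block",   {"holding_block"},                {"block_stacked", "hand_empty"}, {"holding_block"}),
]

def plan(initial, goal):
    frontier = deque([(frozenset(initial), [])])
    seen = {frozenset(initial)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:                      # all goal facts hold: plan found
            return steps
        for name, pre, add, delete in actions:
            if pre <= state:                   # action is applicable in this state
                nxt = (state - delete) | add
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None

print(plan({"hand_empty", "block_on_table"}, {"block_stacked"}))
# -> ['pick_up_block', 'stack_block']
```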
a precise numerical value. Deterministic planning was introduced with the STRIPS planning system, which is a hierarchical planner. Action names are ordered in a sequence, and this sequence is a plan for the robot. Hierarchical planning can be compared with an automatically generated behavior tree. The disadvantage is that a normal behavior tree is not as expressive as a computer program. That means the notation of
a primitive action or decomposed into a set of other tasks. This does not necessarily involve state variables, although in more realistic applications state variables simplify the description of task networks. Temporal planning can be solved with methods similar to classical planning. The main difference is, because of the possibility of several temporally overlapping actions with a duration being taken concurrently, that
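Returning to the task-network idea above, where each task is either a primitive action or is decomposed into other tasks, the following minimal Python sketch expands compound tasks via a method table until only primitive actions remain; the task and method names are invented.

```python
# Hierarchical task-network (HTN) style decomposition: compound tasks are
# expanded via methods until only primitive actions remain.
methods = {
    "deliver_package": ["load_package", "travel_to_destination", "unload_package"],
    "travel_to_destination": ["drive_to_highway", "drive_to_city"],
}

def decompose(task):
    if task not in methods:          # primitive action: keep as-is
        return [task]
    plan = []
    for subtask in methods[task]:    # compound task: expand its subtasks
        plan.extend(decompose(subtask))
    return plan

print(decompose("deliver_package"))
# -> ['load_package', 'drive_to_highway', 'drive_to_city', 'unload_package']
```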
a reliable and scientific way to test programs; researchers can directly compare or even combine different approaches to isolated problems, by asking which agent is best at maximizing a given "goal function". It also gives them a common language to communicate with other fields, such as mathematical optimization (which is defined in terms of "goals") or economics (which uses the same definition of
a route planner is typical of a domain-specific planner. The most commonly used languages for representing planning domains and specific planning problems, such as STRIPS and PDDL for classical planning, are based on state variables. Each possible state of the world is an assignment of values to the state variables, and actions determine how the values of the state variables change when that action
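As a rough illustration of the state-variable view (in plain Python rather than STRIPS or PDDL syntax), a state can be represented as an assignment of values to variables, and an action as the variable values it requires plus the values it sets; the variable and action names below are made up.

```python
# A state assigns a value to each state variable; an action lists the values
# it requires (preconditions) and the values it sets (effects).
state = {"robot_location": "kitchen", "door_open": False}

open_door = {
    "preconditions": {"robot_location": "kitchen", "door_open": False},
    "effects": {"door_open": True},
}

def applicable(action, state):
    return all(state.get(var) == val for var, val in action["preconditions"].items())

def apply_action(action, state):
    new_state = dict(state)
    new_state.update(action["effects"])
    return new_state

if applicable(open_door, state):
    state = apply_action(open_door, state)
print(state)  # {'robot_location': 'kitchen', 'door_open': True}
```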
a self-driving car would have to be more complicated. Evolutionary computing can evolve intelligent agents that appear to act in ways intended to maximize a "fitness function" that influences how many descendants each agent is allowed to leave. The mathematical formalism of AIXI was proposed as a maximally intelligent agent in this paradigm. However, AIXI is uncomputable. In the real world, an IA
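As a toy sketch of this fitness-driven process (not a description of any particular system), the following Python loop keeps the highest-scoring candidates each generation and fills the population back up with mutated copies; the genome encoding and fitness function are stand-ins.

```python
import random

def fitness(genome):
    # Stand-in fitness: maximise the sum of the genome.  A real system would
    # score the agent's performance on its task instead.
    return sum(genome)

def mutate(genome):
    child = list(genome)
    i = random.randrange(len(child))
    child[i] += random.choice([-1, 1])
    return child

population = [[0] * 5 for _ in range(20)]
for _ in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                                  # selection
    offspring = [mutate(random.choice(survivors)) for _ in range(10)]
    population = survivors + offspring                           # replication

print(max(fitness(g) for g in population))
```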
a single entity, like a society, for solving problems that an individual agent cannot solve. The key concept used in DPS and MABS is the abstraction called software agents. An agent is a virtual (or physical) autonomous entity that has an understanding of its environment and acts upon it. An agent is usually able to communicate with other agents in the same system to achieve a common goal that one agent alone could not achieve. This communication system uses an agent communication language. A first classification that
a vehicle for emergence. The challenges in Distributed AI are: Areas where DAI has been applied are: DAI integration in tools has included: Notion of Agents: Agents can be described as distinct entities with standard boundaries and interfaces designed for problem solving. Notion of Multi-Agents: A multi-agent system is defined as a network of loosely coupled agents working as
is a scheduling problem which involves controllable actions, uncertain events and temporal constraints. Dynamic controllability for such problems is a type of scheduling which requires a temporal planning strategy to activate controllable actions reactively as uncertain events are observed, so that all constraints are guaranteed to be satisfied. Probabilistic planning can be solved with iterative methods such as value iteration and policy iteration, when
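For illustration, here is a minimal value-iteration sketch in Python on a tiny, made-up MDP (two states, invented transition probabilities and rewards); it only shows the shape of the Bellman update, not any particular planner.

```python
# transitions[state][action] is a list of (probability, next_state, reward).
transitions = {
    "start": {"go":   [(0.8, "goal", 10.0), (0.2, "start", 0.0)],
              "wait": [(1.0, "start", 0.0)]},
    "goal":  {"stay": [(1.0, "goal", 0.0)]},
}
gamma = 0.9                                   # discount factor
values = {state: 0.0 for state in transitions}

for _ in range(100):                          # repeat the Bellman update until it settles
    for state, acts in transitions.items():
        values[state] = max(
            sum(p * (reward + gamma * values[nxt]) for p, nxt, reward in outcomes)
            for outcomes in acts.values()
        )

print(values)  # roughly {'start': 9.76, 'goal': 0.0}
```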
is also possible to define a measure of how desirable a particular state is. This measure can be obtained through the use of a utility function which maps a state to a measure of the utility of the state. A more general performance measure should allow a comparison of different world states according to how well they satisfy the agent's goals. The term utility can be used to describe how "happy"
is always known in advance which actions will be needed. With nondeterministic actions or other events outside the control of the agent, the possible executions form a tree, and plans have to determine the appropriate actions for every node of the tree. Discrete-time Markov decision processes (MDP) are planning problems with: When full observability is replaced by partial observability, planning corresponds to
is an abstract concept as it could incorporate various principles of decision making, like calculation of the utility of individual options, deduction over logic rules, fuzzy logic, etc. The program agent, instead, maps every possible percept to an action. We use the term percept to refer to the agent's perceptual inputs at any given instant. In the following figures, an agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators. Russell & Norvig (2003) group agents into five classes based on their degree of perceived intelligence and capability: Simple reflex agents act only on
is an agent that perceives its environment, takes actions autonomously in order to achieve goals, and may improve its performance with learning or by acquiring knowledge. An intelligent agent may be simple or complex: a thermostat or other control system is considered an example of an intelligent agent, as is a human being, as is any system that meets the definition, such as a firm,
is attempting to maximize a function encapsulating how well it can fool an antagonistic "predictor"/"discriminator" component. While symbolic AI systems often accept an explicit goal function, the paradigm can also be applied to neural networks and to evolutionary computing. Reinforcement learning can generate intelligent agents that appear to act in ways intended to maximize a "reward function". Sometimes, rather than setting
is categorized into multi-agent systems and distributed problem solving. In multi-agent systems the main focus is how agents coordinate their knowledge and activities. For distributed problem solving the major focus is how the problem is decomposed and the solutions are synthesized. The objectives of Distributed Artificial Intelligence are to solve the reasoning, planning, learning and perception problems of artificial intelligence, especially if they require large data, by distributing
is closely related to that of an intelligent agent. Philosophically, this definition of artificial intelligence avoids several lines of criticism. Unlike the Turing test, it does not refer to human intelligence in any way. Thus, there is no need to discuss whether it is "real" vs. "simulated" intelligence (i.e., "synthetic" vs. "artificial" intelligence), and it does not indicate that such a machine has
is constrained by finite time and hardware resources, and scientists compete to produce algorithms that can achieve progressively higher scores on benchmark tests with existing hardware. A simple agent program can be defined mathematically as a function f (called the "agent function") which maps every possible percept sequence to a possible action the agent can perform, or to a coefficient, feedback element, function or constant that affects eventual actions: Agent function
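A minimal sketch of such an agent function in Python, here realized as a lookup table keyed by the full percept sequence observed so far; the percepts and actions are invented for illustration.

```python
# The agent function f maps the percept sequence seen so far to an action.
# Here f is a plain lookup table; unknown histories fall back to "do_nothing".
table = {
    ("clean",): "move_right",
    ("dirty",): "suck",
    ("clean", "dirty"): "suck",
}

percept_history = []

def agent_function(percept):
    percept_history.append(percept)
    return table.get(tuple(percept_history), "do_nothing")

print(agent_function("clean"))   # move_right
print(agent_function("dirty"))   # suck
```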
is designed to create and execute whatever plan will, upon completion, maximize the expected value of the objective function. For example, a reinforcement learning agent has a "reward function" that allows the programmers to shape the IA's desired behavior, and an evolutionary algorithm's behavior is shaped by a "fitness function". Intelligent agents in artificial intelligence are closely related to agents in economics, and versions of
is designed to function in the absence of human intervention. Intelligent agents are also closely related to software agents: autonomous computer programs that carry out tasks on behalf of users. Artificial Intelligence: A Modern Approach defines an "agent" as "Anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators". It defines
is doing and determines how the performance element, or "actor", should be modified to do better in the future. The performance element, previously considered the entire agent, takes in percepts and decides on actions. The last component of the learning agent is the "problem generator". It is responsible for suggesting actions that will lead to new and informative experiences. Weiss (2013) defines four classes of agents: In 2013, Alexander Wissner-Gross published
is no input domain specified. Such planners are called "domain independent" to emphasize the fact that they can solve planning problems from a wide range of domains. Typical examples of domains are block-stacking, logistics, workflow management, and robot task planning. Hence a single domain-independent planner can be used to solve planning problems in all these various domains. On the other hand,
is taken. Since a set of state variables induces a state space whose size is exponential in the number of variables, planning, similarly to many other computational problems, suffers from the curse of dimensionality and the combinatorial explosion. An alternative language for describing planning problems is that of hierarchical task networks, in which a set of tasks is given, and each task can be either realized by
is useful is to divide agents into: Well-recognized agent architectures that describe how an agent is internally structured are: Automated planning Automated planning and scheduling, sometimes denoted as simply AI planning, is a branch of artificial intelligence that concerns the realization of strategies or action sequences, typically for execution by intelligent agents, autonomous robots and unmanned vehicles. Unlike classical control and classification problems,
the advantage of allowing agents to initially operate in unknown environments and become more competent than their initial knowledge alone might allow. The most important distinction is between the "learning element", responsible for making improvements, and the "performance element", responsible for selecting external actions. The learning element uses feedback from the "critic" on how the agent
the agent can randomize its actions, it may be possible to escape from infinite loops. A model-based agent can handle partially observable environments. Its current state is stored inside the agent, maintaining some kind of structure that describes the part of the world which cannot be seen. This knowledge about "how the world works" is called a model of the world, hence the name "model-based agent". A model-based reflex agent should maintain some sort of internal model that depends on
the agent is. A rational utility-based agent chooses the action that maximizes the expected utility of the action outcomes, that is, what the agent expects to derive, on average, given the probabilities and utilities of each outcome. A utility-based agent has to model and keep track of its environment, tasks that have involved a great deal of research on perception, representation, reasoning, and learning. Learning has
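A minimal sketch of expected-utility action selection in Python: for each available action, the utility of every possible outcome is weighted by its probability, and the agent picks the action with the highest expected value. The probabilities and utilities below are invented.

```python
# (probability, utility) pairs for each action's possible outcomes.
outcomes = {
    "take_umbrella":  [(0.3, 6.0), (0.7, 5.0)],   # rain / no rain
    "leave_umbrella": [(0.3, 0.0), (0.7, 8.0)],
}

def expected_utility(action):
    return sum(p * u for p, u in outcomes[action])

best_action = max(outcomes, key=expected_utility)
print(best_action, expected_utility(best_action))  # leave_umbrella 5.6
```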
the basis of the current percept, ignoring the rest of the percept history. The agent function is based on the condition-action rule: "if condition, then action". This agent function only succeeds when the environment is fully observable. Some reflex agents can also contain information on their current state, which allows them to disregard conditions whose actuators are already triggered. Infinite loops are often unavoidable for simple reflex agents operating in partially observable environments. If
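A minimal sketch of a simple reflex agent in Python: it inspects only the current percept and fires the first matching condition-action rule, keeping no percept history. The rules and percepts are invented.

```python
# Condition-action rules: "if condition, then action".
rules = [
    (lambda percept: percept == "dirty", "suck"),
    (lambda percept: percept == "clean", "move_right"),
]

def simple_reflex_agent(percept):
    for condition, action in rules:
        if condition(percept):
            return action
    return "do_nothing"

print(simple_reflex_agent("dirty"))   # suck
print(simple_reflex_agent("clean"))   # move_right
```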
the capabilities of the model-based agents, by using "goal" information. Goal information describes situations that are desirable. This provides the agent a way to choose among multiple possibilities, selecting the one which reaches a goal state. Search and planning are the subfields of artificial intelligence devoted to finding action sequences that achieve the agent's goals. Goal-based agents only distinguish between goal states and non-goal states. It
the concept of an "action" is here extended to encompass the "act" of giving an answer to a question. As an additional extension, mimicry-driven systems can be framed as agents who are optimizing a "goal function" based on how closely the IA succeeds in mimicking the desired behavior. In the generative adversarial networks of the 2010s, an "encoder"/"generator" component attempts to mimic and improvise human text composition. The generator
the definition of a state has to include information about the current absolute time and how far the execution of each active action has proceeded. Further, in planning with rational or real time, the state space may be infinite, unlike in classical planning or planning with integer time. Temporal planning is closely related to scheduling problems when uncertainty is involved and can also be understood in terms of timed automata. The Simple Temporal Network with Uncertainty (STNU)
the degree that goals are implicit in their training data. Such systems can still be benchmarked if the non-goal system is framed as a system whose "goal" is to accomplish its narrow classification task. Systems that are not traditionally considered agents, such as knowledge-representation systems, are sometimes subsumed into the paradigm by framing them as agents that have a goal of (for example) answering questions as accurately as possible;
the environment is observable through sensors, which can be faulty. It is thus a situation where the planning agent acts under incomplete information. For a contingent planning problem, a plan is no longer a sequence of actions but a decision tree, because each step of the plan is represented by a set of states rather than a single perfectly observable state, as in the case of classical planning. The selected actions depend on
the environment. However, intelligent agents must also proactively pursue goals in a flexible and robust way. Optional desiderata include that the agent be rational, and that the agent be capable of belief-desire-intention analysis. Kaplan and Haenlein define artificial intelligence as "a system's ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation". This definition
the goals the agent is driven to act on; in the case of rational agents, the function also encapsulates the acceptable trade-offs between accomplishing conflicting goals. Terminology varies. For example, some agents seek to maximize or minimize a "utility function", "objective function" or "loss function". Goals can be explicitly defined or induced. If the AI is programmed for "reinforcement learning", it has
the intelligent agent paradigm are studied in cognitive science, ethics, and the philosophy of practical reason, as well as in many interdisciplinary socio-cognitive modeling and computer social simulations. Intelligent agents are often described schematically as an abstract functional system similar to a computer program. Abstract descriptions of intelligent agents are called abstract intelligent agents (AIA) to distinguish them from their real-world implementations. An autonomous intelligent agent
the percept history and thereby reflects at least some of the unobserved aspects of the current state. Percept history and the impact of actions on the environment can be determined by using the internal model. It then chooses an action in the same way as the reflex agent. An agent may also use models to describe and predict the behaviors of other agents in the environment. Goal-based agents further expand on
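A minimal sketch of a model-based reflex agent in Python: it keeps an internal model of the part of the world it cannot currently see, updates that model from each percept, and then chooses an action as a reflex agent would. The domain (a door whose state may not always be perceived) is invented.

```python
class ModelBasedReflexAgent:
    def __init__(self):
        # Internal model of the unobserved part of the world.
        self.model = {"door_open": False}

    def update_model(self, percept):
        # Fold whatever the current percept reveals into the model.
        if "door" in percept:
            self.model["door_open"] = (percept["door"] == "open")

    def act(self, percept):
        self.update_model(percept)
        # The decision uses the internal model, not just the current percept.
        return "walk_through" if self.model["door_open"] else "open_door"

agent = ModelBasedReflexAgent()
print(agent.act({"door": "open"}))  # walk_through
print(agent.act({}))                # walk_through (door state remembered by the model)
```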
the problem definition or underlying data sets due to the scale and difficulty in redeployment. DAI systems do not require all the relevant data to be aggregated in a single location, in contrast to monolithic or centralized Artificial Intelligence systems which have tightly coupled and geographically close processing nodes. Therefore, DAI systems often operate on sub-samples or hashed impressions of very large datasets. In addition,
the problem is always EXPTIME-complete and 2EXPTIME-complete if the goal is specified with LDLf. Conformant planning is when the agent is uncertain about the state of the system, and it cannot make any observations. The agent then has beliefs about the real world, but cannot verify them with sensing actions, for instance. These problems are solved by techniques similar to those of classical planning, but where
the problem to autonomous processing nodes (agents). To reach the objective, DAI requires: There are many reasons for wanting to distribute intelligence or cope with multi-agent systems. Mainstream problems in DAI research include the following: Two types of DAI have emerged: DAI can apply a bottom-up approach to AI, similar to the subsumption architecture, as well as the traditional top-down approach of AI. In addition, DAI can also be
the processing of very large data sets. DAI systems consist of autonomous learning processing nodes (agents) that are distributed, often at a very large scale. DAI nodes can act independently, and partial solutions are integrated by communication between nodes, often asynchronously. By virtue of their scale, DAI systems are robust and elastic, and by necessity, loosely coupled. Furthermore, DAI systems are built to be adaptive to changes in
the properties the problems have in several dimensions. The simplest possible planning problem, known as the Classical Planning Problem, is determined by: Since the initial state is known unambiguously, and all actions are deterministic, the state of the world after any sequence of actions can be accurately predicted, and the question of observability is irrelevant for classical planning. Further, plans can be defined as sequences of actions, because it
the reward function to be directly equal to the desired benchmark evaluation function, machine learning programmers will use reward shaping to initially give the machine rewards for incremental progress in learning. Yann LeCun stated in 2018, "Most of the learning algorithms that people have come up with essentially consist of minimizing some objective function." AlphaZero chess had a simple objective function; each win counted as +1 point, and each loss counted as -1 point. An objective function for
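As a small illustration of the difference (with invented numbers), a sparse win/loss objective like the one described for AlphaZero can be contrasted with a shaped reward that also pays out small bonuses for incremental progress:

```python
def sparse_reward(won):
    # Win/loss objective in the style described above: +1 for a win, -1 for a loss.
    return 1.0 if won else -1.0

def shaped_reward(won, milestones_reached):
    # Reward shaping: add a small, invented bonus for each intermediate milestone.
    return sparse_reward(won) + 0.1 * milestones_reached

print(sparse_reward(False))       # -1.0
print(shaped_reward(False, 3))    # -0.7 (progress is rewarded even in a loss)
```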
the solutions are complex and must be discovered and optimized in multidimensional space. Planning is also related to decision theory. In known environments with available models, planning can be done offline. Solutions can be found and evaluated prior to execution. In dynamically unknown environments, the strategy often needs to be revised online. Models and policies must be adapted. Solutions usually resort to iterative trial-and-error processes commonly seen in artificial intelligence. These include dynamic programming, reinforcement learning and combinatorial optimization. Languages used to describe planning and scheduling are often called action languages. Given
the source dataset may change or be updated during the course of the execution of a DAI system. In 1975 distributed artificial intelligence emerged as a subfield of artificial intelligence that dealt with interactions of intelligent agents. Distributed artificial intelligence systems were conceived as a group of intelligent entities, called agents, that interacted by cooperation, by coexistence or by competition. DAI
the state of the system. For example, if it rains, the agent chooses to take the umbrella, and if it doesn't, the agent may choose not to take it. Michael L. Littman showed in 1998 that with branching actions, the planning problem becomes EXPTIME-complete. A particular case of contingent planning is represented by FOND problems, for "fully-observable and non-deterministic". If the goal is specified in LTLf (linear temporal logic on finite traces), then
the state space is exponential in the size of the problem, because of the uncertainty about the current state. A solution for a conformant planning problem is a sequence of actions. Haslum and Jonsson have demonstrated that the problem of conformant planning is EXPSPACE-complete, and 2EXPTIME-complete when the initial situation is uncertain and there is non-determinism in the action outcomes. Intelligent agent In intelligence and artificial intelligence, an intelligent agent (IA)
the state space is sufficiently small. With partial observability, probabilistic planning is similarly solved with iterative methods, but using a representation of the value functions defined over the space of beliefs instead of states. In preference-based planning, the objective is not only to produce a plan but also to satisfy user-specified preferences. In contrast to the more common reward-based planning, for example corresponding to MDPs, preferences don't necessarily have
was detected, then action A is executed; if an object is missing, then action B is executed. A major advantage of conditional planning is the ability to handle partial plans. An agent is not forced to plan everything from start to finish but can divide the problem into chunks. This helps to reduce the state space and solves much more complex problems. We speak of "contingent planning" when