Artificial Intelligence and Modeling and Simulation



Paper presented to Eleventh Annual Technical Seminar, Arizona Chapter of Armed Forces Communications and Electronics Association (AFCEA), Sierra Vista, Arizona, March 9-10, 1988.



February 5, 1988





Dr. J. George Caldwell

Vista Research Corporation

3826 Snead Drive

Sierra Vista, Arizona 85635







Copyright (C) 1988 Vista Research Corporation




I. Introduction

II. Definition and Uses of Combat Models

III. Historical Development of Combat Modeling

IV. Comparison of Strategic Modeling to Tactical and Operational-Level Modeling

V. Historical Development of Artificial Intelligence

VI. Some Potential Applications of AI Technology to Military Modeling

VII. Major Problems Facing AI Applications in Military Modeling

VIII. The Information Overload

IX. Recommended Areas for Research and Development

Biography

References





Abstract.  This paper describes the potential role of artificial intelligence (AI) in military modeling, and identifies and discusses problems associated with the use of AI in military modeling.  It describes the relationship of AI and modeling to the information overload in military systems.  Areas for research and development to solve the identified problems and reduce the information overload are identified.


I. Introduction


This paper discusses the application of artificial intelligence (AI) concepts to military simulation and modeling.  The discussion is oriented toward applications in the fields of combat modeling, with reference to test and evaluation of command, control, communications and intelligence (C3I) systems.  The paper identifies some specific problems underlying the application of AI concepts to military systems modeling, and suggests areas for future research and development to address these problems.  Specific consideration is given to the problem of the "information overload" in the field of military communications-electronics (CE).


The paper consists of nine sections:


            I.    Introduction


            II.   Definition and Uses of Combat Models


            III.  Historical Development of Combat Modeling


            IV.   Comparison of Strategic Modeling to Tactical

                         and Operational-Level Modeling


            V.    Historical Development of Artificial Intelligence


            VI.   Some Potential Applications of AI Technology

                         to Military Modeling


            VII.  Major Problems Facing AI Applications in

                         Military Modeling


            VIII. The Information Overload


            IX.   Recommendations for Research and Development


II. Definition and Uses of Combat Models


A. Definition of Combat Model


Because of the infeasibility of conducting real-war experiments, modern military systems are generally developed and tested making heavy use of mathematical models.  The mathematical model is an abstraction of reality, which represents aspects of the real-world system that are considered relevant to a particular problem.  The goal is to develop a model that produces results (model output) that are similar to those expected in the real world, given specified initial conditions (model input).  The model is used to analyze alternative system concepts for which real-world experimentation is not possible or practical for a variety of reasons, such as cost, time, ethical or political considerations, or inability to control conditions.


Combat models represent the outcome of a military engagement.  Generally, combat models are divided into two categories, corresponding to levels of warfare -- strategic warfare models and tactical warfare models.  A third category is often used, corresponding to the "operational level" of warfare -- a level of warfare intermediate between the tactical and strategic.  Strategic warfare is concerned with national war policy and the achievement of national objectives by force or threat of force.  It is concerned with the allocation of major force elements (army level) or with ballistic missile warfare.  The operational level refers to the allocation of large military units (corps/division) to achieve strategic goals within a theater of war.  It involves planning and conducting campaigns -- operations designed to defeat an enemy.  The tactical level refers to the movement and positioning of smaller-unit forces on the battlefield, and the provision of fire support and logistical support.


A combat model may be designed at varying levels of organization (echelon) and resolution.  The level and resolution of models varies because it is not feasible to develop a single model that addresses all possible issues of interest.  A specific model addresses specific concerns.


B. Classification of Combat Models


In the twentieth century, a very large number of combat models have been developed.  For purposes of understanding how these models are used, it is convenient to classify them into categories of related models.  There are many ways in which combat models may be classified.  The major categories of classification are as follows.


(1) Stochastic versus deterministic.  A stochastic model represents the probability distribution of combat activity and outcomes; a deterministic model does not.  A stochastic model may be used as the basis for deriving expected values of measures of combat outcome (target damage, personnel losses, territory losses), or as a basis for simulating (by sampling from the distributions) the combat processes and observing the combat outcome.
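The distinction can be made concrete with a small sketch (the kill probability and shot count below are invented for illustration, not drawn from any model discussed in this paper): a deterministic model records the expected number of kills directly, while a stochastic model samples the outcome distribution and estimates quantities of interest from replications.

```python
import random

random.seed(1)  # reproducible sampling

p_kill, shots = 0.3, 100   # assumed single-shot kill probability and shot count

# Deterministic (expected-value) treatment: record the expected kills directly.
expected_kills = p_kill * shots

# Stochastic treatment: sample the outcome distribution, one replication at a time.
def one_replication():
    """Simulate 'shots' independent shots and count the kills."""
    return sum(random.random() < p_kill for _ in range(shots))

replications = [one_replication() for _ in range(1000)]
sample_mean = sum(replications) / len(replications)   # estimates expected_kills
```

Measures of combat outcome such as target damage or personnel losses are estimated the same way under the stochastic approach: by averaging over many sampled replications.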


(2) Game-theoretic versus non-game-theoretic.  A game-theoretic model determines optimal strategies and tactics for the deployment and employment of resources.  A non-game-theoretic model uses specified strategies and tactics.


(3) Interactive versus non-interactive.  An interactive model allows for input (strategic and tactical decisions) from human decisionmakers during the course of execution of the model; a non-interactive (or automatic, or "systemic") model does not.


(4) Analytical versus simulated.  An analytical model is one in which solutions may be derived by analysis (either closed-form solutions or numerical answers that may be derived by computer).  A simulation model is one in which the probability distributions of combat events are specified, events are sampled from these distributions, and estimates of interest are derived from these samples.


C. Uses of Combat Models


Tactical combat models have several uses.  They may be categorized (see Reference 1) as force planning; research, development, test and evaluation; operational planning and evaluation; and training and education.  Force planning includes force structure analysis (determination of the organizational form and types of equipment associated with land, sea and air combat units of a specified size, such as a battalion, carrier task group, or tactical air wing); force level analysis (determination of the number of combat units of the same type to include in a larger force assigned to a military purpose); and force mix analysis (determination of the mix of land, sea, and air units that constitute a force assigned to a military purpose).  Research, development, test and evaluation is concerned with concepts development, system design and development, system test and evaluation (development test and evaluation and operational test and evaluation, OT&E).  Operational planning and evaluation includes analysis of operational concepts, tactics and doctrine.  Training and education includes training of military commanders under realistic conditions and education of military science students in warfare concepts and analysis and gaming.


A major application area of tactical combat models, and a principal concern of this paper, is the area of OT&E.  Tactical combat models are used to evaluate the combat effectiveness of military systems, particularly command, control, communications and intelligence (C3I) systems, in mathematically constructed environments that simulate real environments.  This type of testing has many advantages over testing in an actual battle environment or military exercise: lower cost, reproducibility, the ability to simulate conditions that cannot be produced at will in the real world, and control of test conditions.


Recently, increased attention has been given to the use of tactical combat models to support OT&E, because of the extremely long duration of the acquisition cycle (10-15 years).  New systems may be obsolete by the time they become operational.  With increased use of tactical combat models to support testing early in the acquisition cycle (particularly in the concept development stage), substantial decreases could be realized in the time required to field new systems.  This decrease may be realized for several reasons: the ability to test new systems in the conceptual stage, prior to expending funds on prototypes; reduction in the need for design changes late in the development cycle, when such changes are very costly; an ability to evaluate system effectiveness in terms of effect on combat outcome, instead of in terms of basic system performance measures that may be difficult to relate to combat outcome.


D. Relevance of AI to Tactical Combat Models


The essence of combat is embodied in several aspects: the combat goals, the perception of the situation, the resources and time available, and the environment.  A combat model must represent these aspects.  While some of the aspects of combat are strongly physical (e.g., movement, supply, communications, and the result of a weapon/target interaction), some aspects are strongly cognitive (e.g., goals, situation assessment, planning, tactics).  While much laboratory and field data are available to define the physical processes, the problem of representing the cognitive process (situation assessment, planning, decisions) in a model has been much more difficult.  It is this area for which the concepts of AI show promise of improving the state of the art of combat modeling.


III. Historical Development of Combat Modeling


In order to understand the current situation with respect to the relationship of AI to military modeling, it is helpful to know something of the history of combat modeling.  To this end, the history of large-scale tactical and strategic combat modeling will be reviewed.


Prior to the 1950s and the availability of high-speed digital computers, combat models were small.  They were essentially simple analytical models, such as those developed by F. W. Lanchester in 1914.  In the 1960s, a wide variety of large-scale combat models were developed, using the digital computer to do the computations required to implement the models.


Combat gaming includes a wide spectrum of activities, including military exercises, manual war games, computer-assisted manual war games, mathematical models, and interactive computer models.  This paper is concerned only with the last two types of combat gaming approaches, which are referred to as combat modeling, simulation and modeling, or force-on-force modeling.  There is a very rich literature on combat modeling, including professional journals such as Operations Research, a wide variety of research reports, and symposia publications.  Several examples of combat model references are included at the end of this paper (References 1-8).  Over the past 25 years, a very large number of combat models have been developed.  Many of them are described in the references.


Some of the significant events in the recent history (i.e., since 1960) of combat modeling are summarized in the paragraphs that follow.


Over the past several decades, there have been a variety of approaches to tactical combat modeling.  Early work in this century was represented by the work of F. W. Lanchester, who developed simple differential models of combat.  In the 1960s a variety of more complex mathematical combat models were implemented on digital computers.
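Lanchester's aimed-fire ("square law") model can be sketched in a few lines; the attrition coefficients and force sizes below are invented for illustration.  The quantity a*x^2 - b*y^2 is conserved along exact trajectories, which gives a quick check on the numerical integration.

```python
# Lanchester square law for aimed fire: dx/dt = -b*y, dy/dt = -a*x.
# Coefficients and initial strengths are assumed values for illustration.
a, b = 0.04, 0.06          # per-unit attrition rates
x, y = 1000.0, 800.0       # initial force sizes
dt = 0.01                  # Euler time step

invariant0 = a * x**2 - b * y**2   # conserved quantity; positive means x prevails

for _ in range(10_000):
    x, y = x - b * y * dt, y - a * x * dt   # simultaneous Euler update
    if x <= 0.0 or y <= 0.0:
        break
# At this point a*x**2 - b*y**2 remains close to invariant0, and the side
# with the smaller contribution to the invariant has been driven to zero.
```

Because the invariant is positive here, side y is annihilated while x survives with roughly sqrt(invariant0 / a) units, as the square law predicts.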


The most sophisticated strategic combat model developed in the 1960s was the QUICK general war game model, developed by Lambda Corporation.  (Note: Lambda's QUICK model is unrelated to the manual Theater Quick Game Model, developed by Research Analysis Corporation.)  This model was a game-theoretic model of strategic combat.  It determined the optimal allocation of a mix of weapons to targets, using Everett's Generalized Lagrange Multiplier method.


While the QUICK model represented an elegant solution to strategic modeling, progress was very limited in the 1960s in the area of tactical combat modeling.  The difficulty stemmed from the dynamic essence of tactical combat.  Strategic modeling can ignore the dynamics of combat (or represent it simply as a simultaneous exchange or two-step sequential exchange); it can concentrate on the allocation of forces and still address meaningful issues and obtain reasonable results.  For example, in using the QUICK model to allocate missiles and bombers in strategic nuclear warfare, one can reasonably view the exchange as a non-multi-stage (non-time-sequential, or single-time-period) game.  Such an approach is not reasonable in tactical warfare, however, where the outcome depends significantly on what is done and what happens at many points during an extended period of time.


In general, large-scale tactical combat models (e.g., ATLAS) fell into disrepute in the late 1960s, because of the use of "firepower scores" to determine attrition and force movement.  A firepower score is a linear combination, or weighted average, of the various forces possessed by a side.  Each weapon type (e.g., a tank, a howitzer, a machine gun) is assigned a numerical value, representing its perceived ability to destroy enemy targets.  The firepower score is the sum of the products of the numbers of each weapon type possessed by a side, multiplied by their values.  The firepower scores of the two sides were compared, and the attrition of the forces and the movement of the Forward Edge of the Battle Area (FEBA) were determined by the relative magnitudes of the two sides' firepower scores.
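As a concrete illustration of the computation (the weapon values and inventories below are invented, not taken from ATLAS or any fielded model):

```python
# Firepower score: sum over weapon types of (count held) * (assigned value).
# Weapon values and inventories are assumed numbers for illustration only.
weapon_values = {"tank": 60, "howitzer": 40, "machine_gun": 5}
blue_inventory = {"tank": 300, "howitzer": 120, "machine_gun": 900}
red_inventory  = {"tank": 450, "howitzer": 100, "machine_gun": 700}

def firepower_score(inventory, values):
    """Sum of (count * weapon value) over all weapon types held by a side."""
    return sum(count * values[wtype] for wtype, count in inventory.items())

blue_score = firepower_score(blue_inventory, weapon_values)   # 300*60 + 120*40 + 900*5
red_score  = firepower_score(red_inventory, weapon_values)

force_ratio = blue_score / red_score   # models of this type drove attrition and FEBA movement from such ratios
```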


A major problem associated with the use of firepower scores was that the method lacked "face validity."  Face validity refers to the extent to which there is a close correspondence between entities, activities, and events in the model and in real combat.  The firepower score approach was based instead on the intuitive appeal that there should be some sort of relationship of the attrition and FEBA movement to firepower score.  Another problem associated with use of firepower scores is that the aggregation of maneuver, combat support, and close air support precludes use of firepower-based models to perform force structure analysis.


The principal reason why progress was slow in developing satisfactory tactical combat models was the absence of procedures for solving the mathematical game representing tactical combat.  The usual representation of tactical combat is as a two-sided multi-stage (time-sequential) resource-constrained game, and no fast solution techniques were available for solving such games in the realistic case in which there are many stages.  Richard Bellman's Dynamic Programming (DP) approach proved unsatisfactory because of the so-called "curse of dimensionality."


Because of the tremendous success of Everett's Generalized Lagrange Multiplier (GLM) method in solving single-stage resource-constrained games, attention was centered in the late 1960s on extending this method to solve multi-stage resource-constrained games.  The GLM optimization method is extremely fast, and can be used to solve ill-conditioned extremization problems having nonlinear, nonconvex, noncontinuous objective (payoff) functions.  Hugh Everett, George Pugh and Bob Goodrich developed a new technique -- a combination of Everett's GLM technique and Bellman's DP technique -- which they called Lagrange Dynamic Programming (LDP).  This approach avoided the dimensionality problem of DP by separating the time-sequential game into an independent sequence of games, through the introduction of a sequence of Lagrange multipliers which were associated with the resource constraints at each stage of the game.  Unfortunately, the potential of the LDP method was never realized.  Funding for continued development of the method disappeared at the end of the 1960s, and no further development occurred.


Without a truly satisfactory method for solving multi-stage games, attention turned to use of Bellman's DP method.  The DYGAM ("Dynamic Game Solver") model was developed (1973) by Control Analysis Corporation (Zachary Lansdowne) to solve multi-stage games using the DP method.  The TAC CONTENDER model and the Optimal Sortie Allocation (OPTSA) model were developed by the Institute for Defense Analyses (Jerome Bracken).  The ATACM model was developed at Ketron (Fred Miercort, Bob Galiano, 1974) for the US Arms Control and Disarmament Agency.  To avoid the long running time of the DP algorithm, the tactical game was divided into a very small number of stages (e.g., three).


In 1971 the Army released a study (Review of Selected Army Models, Reference 2, by the Models Review Committee, John Honig, Chairman) which reviewed war game models in use at that time.  That report criticized the use of firepower scores.  In 1972, Gen. Glenn Kent of the Weapon System Evaluation Group (WSEG) sponsored the development of three different theater-level combat models, which would not use firepower scores either as a basis for determining attrition or movement.  Three organizations were funded: Lulejian Associates, the Institute for Defense Analyses (IDA), and Vector Research, Incorporated.  Their models were named LULEJIAN, IDAGAM, and VECTOR, respectively.  All three were deterministic models.  LULEJIAN included optimization, but aggregated forces for attrition calculation.  IDAGAM used Lanchester-type attrition equations, but maneuver below corps level was not represented.  The VECTOR model utilized detailed Lanchester-type difference equation models of attrition, and simulated the dynamics of combat in very small time steps (e.g., one minute).


Of the three efforts, only the Vector Research effort produced substantial, lasting results.  The VECTOR-II model was completed by 1978, and has been widely used since that time by numerous organizations.  A principal reason underlying the success of the VECTOR-II model is the high degree of face validity possessed by the model.


The approach used by Vector Research was to represent the combat process by a large number of difference equations, each representing a different aspect of combat and specified in terms of physically meaningful parameters, such as the probability rate of detection of a target or the probability rate of identification of a unit.  The approach was similar to the Lanchester approach, except that difference equations were used instead of differential equations, many equations were used instead of just two, and the equations were not solved but instead were used as a basis for simulating the combat process over time.  Also, the model user had the ability to specify thirteen "tactical decision rules," which specified what tactical responses to utilize in response to a perceived battlefield situation.
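The flavor of the approach can be sketched with a toy two-equation example (all rates, strengths, and thresholds are invented for illustration): attrition is stepped forward in small time increments, and a simple tactical decision rule alters the dynamics when a perceived battlefield condition is met.

```python
# Toy difference-equation battle with one "tactical decision rule":
# withdraw when strength falls below half of initial strength, which
# reduces exposure (and hence both inflicted and received fire).
# All numbers are assumed values for illustration only.
dt = 1.0 / 60.0                  # one-minute time steps, in hours
x, y = 400.0, 500.0              # force strengths
ax, ay = 1.1, 0.9                # kills per firer per hour: y kills x at rate ax, x kills y at rate ay
x0 = x
withdraw_threshold = 0.5
x_withdrawn = False

for _ in range(24 * 60):         # simulate up to 24 hours
    exposure = 0.3 if x_withdrawn else 1.0   # a withdrawn force fights at reduced exposure
    x, y = (max(x - ax * y * exposure * dt, 0.0),
            max(y - ay * x * exposure * dt, 0.0))
    if not x_withdrawn and x < withdraw_threshold * x0:
        x_withdrawn = True       # decision rule fires once and changes the dynamics
    if x == 0.0 or y == 0.0:
        break
```

With these numbers the weaker side withdraws partway through the battle; the full VECTOR-II model used many such equations and thirteen user-specified decision rules rather than this single one.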


The VECTOR-II model is an "integrated" combat model -- it represents a wide spectrum of battlefield activity, from a high level of resolution (e.g., weapon-weapon interactions) to a low level of resolution (e.g., battalion moves).  In the early 1980s, the Army decided that the time was right for an alternative approach to tactical combat modeling -- a "hierarchical" approach, in which separate models were to be developed for different echelons of the military organization.  With this approach, combat activity was categorized at three levels of resolution, and models were to be developed for each level.  The models were to interface with each other by means of parameters.  The lowest-level model, representing battalion combat, was referred to as the CASTFOREM ("Combined Arms and Support Task Force Evaluation Model"); the middle-level model, representing corps/division level combat, was referred to as CORDIVEM ("Corps/Division Model"); the highest level, representing theater-level combat, was referred to as FORCEM ("Force Evaluation Model").


Two rationales for a hierarchical approach to model construction are design simplicity (modular construction) and the ability to perform quick-response analyses at a high level of aggregation.  A hierarchical approach is reasonable if it is possible to establish hierarchical models having simple interfaces.  If there is substantial interaction among levels of the hierarchy and if these interactions are complex, the use of a hierarchical approach is not reasonable.  The situation corresponds to the process of defining modules in a top-level system design, such that the complexity of the module interfaces is low; modules are coherent, not highly dependent on other modules; and the functions associated with a module address a common objective.  For OT&E applications, where it is desired that combat outcome (a high-level model output) be sensitive to equipment performance (represented by low-level parameters), a hierarchical approach is probably not desirable.


A "man-in-the-loop" or interactive version of CORDIVEM was developed.  A contract was awarded to General Research Corporation (1982) to develop an automatic, noninteractive, or "systemic" version of CORDIVEM.  The government was not satisfied with the resulting model for use as the automatic CORDIVEM.  Instead, the Army combined the VECTOR-II model with a theater-air model (COMMANDER) to produce the VIC (VECTOR in COMMANDER) model as the automatic corps-level model of the hierarchy.  That model is now supported by the US Army Training and Doctrine Command (TRADOC).


Work is now under way at the Lawrence Livermore National Laboratory to develop a new corps-level model, called the Conflict Model, or ConMod.


IV. Comparison of Strategic Modeling to Tactical and Operational-Level Modeling


The problem of conducting strategic warfare is both political and technological.  The political aspect is concerned with the government's goals -- what it wishes to achieve by force or threat of force.  The technological aspect is concerned with an assessment of what can be accomplished through the application of force.  In the area of ballistic missile warfare, the technological issues of strategic warfare were essentially resolved in the 1960s (although new issues of strategic battle management are arising in the context of the Strategic Defense Initiative).  To date, however, the problem of validly representing strategic and tactical warfare using conventional (nonnuclear, nonbiological, nonchemical) forces is still very much unsolved.  To appreciate the potential role of AI in combat modeling, it is helpful to compare strategic warfare with tactical and operational-level warfare.


The following paragraphs compare the problem of tactical or operational plan development to the problem of strategic missile plan development.  We shall compare these two problems from several different aspects.  In every case, difficulties arise which render the problem, solvable at the strategic level, intractable at the tactical and operational levels.


1. Expected-value Analysis Versus Stochastic Analysis.  The basic problem in ballistic missile warfare is to allocate a large number of missiles against a large number of targets.  The correlation of the damage occurring at one target with that occurring at another is low.  The variance of the estimated damage is low, because of the large number of low-correlation targets involved.  The damage caused by each side to the other is in general not dependent on the other side's attack (or dependencies can be examined in a small number of cases, such as simultaneous or sequential attacks, counterforce or countervalue attacks, subtractive or nonsubtractive defense).  In other words, the dynamics of battle may be largely ignored, or treated in a simple manner.  The locations and values of the targets are essentially known.  Strategic nuclear exchange is not characterized by complex weapon interactions and discrete events that drastically alter the course of the battle.  Weapons and targets are relatively homogeneous.

The problem of determining a good battle plan can reasonably be formulated as an optimal allocation problem, where each target has a specified value and the damage caused by the weapons can be described in terms of an expected damage function that expresses damage (fraction of value destroyed for area targets, probability of kill for point targets) as a function of warhead size.


Tactical and operational-level warfare, on the other hand, is characterized by complex weapon-weapon and weapon-target interactions, discrete events, and dynamics.  Weapons may be used to attack other weapons, communications links, control centers, supply depots, roads, personnel, or targets of intrinsic value (e.g., factories, farms, infrastructure, cities, population).  The enemy may be defeated, for example, by destroying his weapons, his C3I capability, or his combat service support capability.  Information about the enemy order of battle may be imperfect.  The result of one unit's attack on another is highly unpredictable, and a function of many factors, both controllable and uncontrollable, on both sides.  The occurrence of certain critical events during the course of the battle (e.g., the destruction of a particular command and control center, communication node, or intelligence fusion center) may significantly alter the course of the battle.


For all of these reasons, tactical and operational-level combat cannot be viewed simply as a problem of static (simultaneous or sequential two-step) allocation of weapons against a fixed set of uncorrelated targets.  It is not acceptable to develop plans using optimal allocation based on expected values.  The system is highly nonlinear and discrete, and the expected value of the outcome cannot be reliably gleaned from an "expected-value" analysis.  Instead, the expected outcome can be determined only by analysis of a large number of independent simulations.  The dynamics, interactions, and discrete-event nature of tactical and operational-level combat must be taken into account.


2. Dynamics.  In strategic missile warfare, the dynamics of the situation can either be ignored or accounted for.  The dynamics may be characterized in terms of whether the attack is simultaneous or sequential. 


On the other hand, dynamics constitutes the essence of tactical and operational-level combat.  Tactical combat may be characterized as a multistage or time-sequential game, not a simultaneous or two-step exchange.  The order in which the targets are attacked is critical.  The destruction of a supply depot or communications link or intelligence sensor early in the battle may make all the difference at a later stage of the battle.  The problem of deciding whether to attack a weapon or a C3I component is difficult to solve.  The locations of the targets may not be known, or may be known with imprecision or inaccuracy.  A target may move.  The geometric relationship of the weapon and the target may affect the outcome.  Moreover, terrain, weather, lighting conditions, smoke, decisions at all echelons, intelligence, and electronic warfare all may have significant effects on the battle outcome.


3. Scalar Objective Function.  In strategic missile warfare, targets are of essentially two types (the enemy's missiles or his population).  There is not a tremendous argument over the specification of the payoff (objective) function.  It is possible to specify a scalar-valued objective function that is generally regarded as having face validity: the payoff function may be specified in terms of the enemy's forces (a counterforce attack) or his value (a countervalue attack) or some combination thereof.  The targets (whether force-type or value-type) are essentially independent.  The problem of determining an optimal allocation of weapons to targets is an easy matter, given a scalar objective function and a payoff function that is additive over the targets (e.g., by using Everett's method of generalized Lagrange multipliers).  Military officers can understand and agree on the objective function and the method of optimization.


In tactical combat, there is no agreement about the objective function.  The outcome is a vector of many components; it is difficult to obtain a consensus on a definition of "win" or "lose."  Achievement of an agreement on a scalar objective function has proved to be impossible.


4. Zero-sum Game Formulation.  Because of the low interaction among weapons, strategic-missile warfare can reasonably be formulated as a zero-sum game for each side.  Under this formulation, the battle is represented by the play of two zero-sum games.  This formulation is reasonable in the case of simultaneous exchanges (sequential exchanges may be represented as min-max solutions over strategy sets that are fixed plans rather than probability distributions over plans).  In the first game, side A is attempting to maximize damage to side B, while side B is attempting to minimize that damage (if he has available defensive strategies, e.g., missile interceptors or beam weapons).  In the second game, side B is attempting to maximize damage to side A, while side A is attempting to minimize that damage.
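The min-max idea for fixed plans can be illustrated with a small payoff table (the damage numbers are invented): the defender evaluates, for each of his plans, the attacker's best response, and selects the plan whose worst case is least.

```python
# Min-max over fixed plans for a sequential exchange (illustrative numbers).
# damage[d][a] is the damage suffered by the defender when he commits to
# plan d and the attacker then responds with plan a.
damage = [
    [30, 80, 55],
    [45, 40, 70],
    [60, 35, 50],
]

# The attacker maximizes damage against a known plan; the defender minimizes
# over his plans the attacker's best response: min over d of max over a.
best_responses = [max(row) for row in damage]
minmax_value = min(best_responses)
defender_plan = best_responses.index(minmax_value)
```

Here the defender's third plan limits the attacker to 60 units of damage, whereas either other plan concedes more; a full game-theoretic treatment over mixed strategies would instead randomize over plans.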


In tactical and operational-level combat, a zero-sum game formulation is not reasonable.  The attack and the defense are inextricably "coupled," since weapons may be used to attack weapons as well as a wide variety of nonweapon targets (e.g., communications links, control centers, supply depots) that can affect the outcome.  The two sides may have different goals and objectives (i.e., they have their own separate payoff functions), but the combat cannot be represented simply as the play of two separate zero-sum games (having the two respective payoff functions).  The combat must be represented as a general-sum (nonzero sum) game.


5. Intelligence.  It is reasonable to model strategic missile warfare under the assumption that both sides have perfect knowledge of each other's weapons and targets (but not of the allocation of weapons to targets).  No similar assumption may be made in tactical or operational-level combat.  Tactical combat is characterized by imprecision (variance of estimates), inaccuracy (variance and bias in estimates) and otherwise imperfect knowledge (incomplete information, misinformation, untimely information).  The state of knowledge of one side about the other can have a dramatic impact on the selected course of action and on the outcome.  Electronic or destructive warfare can be used to impair the enemy's intelligence-collection ability.  The quality of a side's situation-assessment capability can have a dramatic impact on the battle outcome.


6. Communications / Electronic Warfare.  While communications may be important prior to a strategic missile attack, they are not an essential dynamic aspect of strategic missile warfare.  At the tactical level, the ability of a side to communicate determines the extent to which he can control his forces, collect intelligence data, and supply his forces.  Communications may be interrupted by electronic warfare (EW) or by destruction.  Weapons may be targeted against communications with dramatic impact on the enemy's ability to wage war.


7. Command and Control.  The outcome of a strategic missile exchange may depend on one or two key decisions (e.g., launch on warning) made at a very high level.  On the other hand, the outcome of tactical and operational-level combat depends substantially on the tactical decisions made by a large number of commanders.  These decisions must be made in the absence of perfect information, by individuals under severe stress.  The quality of the commanders' native intellectual capacity, as well as of their intelligence collection and fusion systems and tactical decision aids, can have a major impact on the battle outcome.


8. Wide Range of Echelons.  Strategic missile warfare does not involve the command and control of large numbers of heterogeneous military units over an extended period of time and span of organizational structure.  It involves a relatively small number of similar weapons under direct control.  Tactical combat, on the other hand, involves large numbers of diverse units, arranged in a multi-echelon hierarchical organizational structure.  It is an easy matter to represent strategic missile warfare at both extensive and intensive levels of detail -- the entire force can easily be represented at item-level resolution.  In tactical combat, it is very difficult to represent a broad scope (theater, army or corps) at item-level (individual weapon/target) resolution.  In order to represent the effect of weapon system performance on combat outcome, both high resolution and extensive scope are desired, in an integrated tactical combat model.  This requirement results in extremely large tactical combat models, such as VECTOR-II or VIC.  The attempt to decompose the echelons into a hierarchy of separate models has proved unsatisfactory; it has not proved possible to establish the linkages (interfaces) among the components to produce a credible whole.  Recent efforts (e.g., the ConMod development project at Lawrence Livermore National Laboratory) have once again been oriented toward the development of an integrated broad-scope tactical combat model.


The preceding paragraphs have described the many differences between strategic missile warfare and tactical and operational-level warfare using conventional forces, and the many problems facing the field of tactical combat modeling.  These problems have thwarted the development of a sound analytical framework for conducting broad-scope tactical and operational-level analyses.  Instead, attention has centered on the solvable problem of estimating battle outcome given specific scenarios, strategies, and tactics, using high-detail simulations (both discrete-event and deterministic simulations).


One of the most serious problems associated with the trend to high-detail military simulation evaluations conditioned on specific situations is that it consumes the resources needed to conduct broad-scope analyses.  Today, major military analyses are generally based on a single scenario.  The reason is that it is not practical (because of the great amount of manual labor involved) to generate broad-scope samples of scenarios, nor is it practical (because of the detailed, nonparametric input) to set up large-scale weapon system evaluation models for a large number of different scenarios.


AI technology holds the promise of helping to solve many of the problems that arise in tactical modeling.  For example, it can help overcome the scope limitations caused by the impracticability of using samples of scenarios in two ways: first, by providing automated means for generating samples of scenarios; and second, by providing automated means of inputting scenario information into weapon system evaluation models (such as tactical combat models and C3I system evaluation models).  In a later section, we will see that AI concepts address many of the problems of tactical modeling that were identified above.


V. Historical Development of Artificial Intelligence


In recent years, increasing attention has been given to the application of AI concepts to solving military problems.  A summary of work in this area is presented in a recent publication of the Operations Research Society of America (Artificial Intelligence for Military Applications, Reference 9).


The field of artificial intelligence (AI) may be divided (see Shannon, Reference 10) into two major classes: duplication of natural human capabilities, such as language processing, visual scene interpretation, and manual control; and duplication of learned skills and expertise, such as automation of tasks performed by human experts -- specially trained, talented and experienced people.   The first class contains applications in automated process control and robotics.  The second class contains applications called "expert systems" or "knowledge-based systems," which are intended to duplicate the decisionmaking capabilities of expert human beings.


While both areas have application to military analysis problems, the area with perhaps the greatest immediate potential for contribution is that of knowledge-based systems.  In this area, computer programs (expert systems, knowledge-based systems) have been developed that can solve logical problems and emulate human experts.  These programs are referred to as knowledge engineering tools.  The field of knowledge engineering is advancing rapidly, and shows promise of making a substantial contribution to the field of military analysis.


The history of knowledge engineering is short (essentially, less than a decade), but rapid progress has been made.  Current knowledge engineering tools are referred to as "third-generation" tools.  The first generation included special-purpose expert systems, such as MYCIN, an expert system developed to assist medical diagnosis.  These first-generation expert systems were generally LISP-based programs in which the knowledge base and inferencing procedures were combined.  In the second generation of knowledge engineering tools, the knowledge base and the inferencing procedures were separated.  These programs, called expert system shells or knowledge-based systems (KBSs), could be applied to solve wide classes of related problems.  The third generation of knowledge engineering tools are general, powerful, easy-to-use, commercially supported system development environments.


Although the field of knowledge engineering has developed rapidly, the types of problems that may be addressed profitably by current KBSs are generally restricted to deterministic, discrete decision-tree type problems.  Some KBSs allow for consideration of uncertainty (Bayesian inference, fuzzy logic), but in general the inference process involves deductive logic rather than statistical inference.  The KBSs are primarily frameworks for storing a large number of deterministic rules.  The KBS works by applying the rules to input facts, to determine the truth or falsity of propositions of interest, i.e., to make a decision.  Current KBSs are not adequate to solve problems involving complex continuous stochastic, temporal or spatial relationships.  (See Reference 10 for a detailed discussion of this situation.  See also Reference 11 for a description of current large-scale KBSs.)
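The deterministic rule-application cycle just described, in which rules are applied to input facts to establish the truth of propositions, can be sketched in minimal form as follows.  (Python is used purely for illustration here; the KBSs of the period were LISP- or PROLOG-based, and the rules shown are hypothetical.)

```python
# A minimal sketch of the rule-application cycle described above:
# deterministic rules are applied to input facts until no new
# propositions can be derived.

def forward_chain(rules, facts):
    """Derive every proposition entailed by the rules and input facts."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in derived and premises <= derived:
                derived.add(conclusion)
                changed = True
    return derived

rules = [
    ({"radar_contact", "hostile_iff"}, "threat"),  # if both facts hold, infer a threat
    ({"threat", "in_range"}, "engage"),            # a threat in range implies engagement
]
facts = {"radar_contact", "hostile_iff", "in_range"}
print(sorted(forward_chain(rules, facts)))
# ['engage', 'hostile_iff', 'in_range', 'radar_contact', 'threat']
```

Note that the inference here is purely deductive: the engine merely closes the fact set under the rules, with no statistical or stochastic machinery.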


To date, with respect to military applications, KBSs are restricted to development environments.  No major military operational system intended for combat use is based on a commercial knowledge-based system.


VI. Some Potential Applications of AI Technology to Military Modeling


In recent years, developments in the field of artificial intelligence have occurred that may substantially assist military modeling.  These include systems for situation assessment (intelligence) and tactical decisionmaking (command and control).  AI can assist these areas of military modeling, even if the developed systems are never used in an operational environment.


A second area in which AI can contribute to combat modeling is the area of helping to determine optimal or desirable strategies or tactics, by means of the "goal-seeking" ability of KBSs.  Modern KBSs can explore sets of alternative "futures," corresponding to alternative strategies or tactics, and select a desired solution.


As was mentioned earlier, another problem that can benefit from AI technology is the generation of samples of scenarios.  Much current analysis is based on the use of a single scenario.  The scope of such analysis is severely restricted.  The problem is that it is not feasible to generate samples of scenarios using current manual methods, nor is it practical to input many scenarios into large-scale tactical combat models or C3I system evaluation models (again because of the amount of manual labor involved).  AI technology holds the promise of addressing both of these problems.


Because of the substantial attention that situation assessment and tactical decisionmaking have received, this paper will not address those areas in detail.  Instead, attention will be directed toward the other, less-publicized, potential applications of AI to military modeling, such as goal-seeking and scenario development.  These areas of potential AI application are described in the paragraphs that follow.


Multidimensional Objective Functions (Goals).  The difficulty in handling the multidimensional objective functions of tactical combat may be addressed by the "goal-seeking" ability of modern expert systems.  With these systems, it is not necessary to specify a scalar objective function; instead, the planner or evaluator may specify a set of goals that he wishes to attain.  The expert system then proceeds to search for strategies or plans that accomplish these goals.  The search may involve "forward-chaining," "backward-chaining," or hybrid approaches.  In forward chaining, the system attempts to determine a solution starting from the initial conditions (resource availability and other constraints), as in a "branch and bound" method.  In backward chaining, the system attempts to determine a solution by working backward from the goals (desired end-states).  Backward chaining is based on the same concept underlying dynamic programming.  Because of the "curse of dimensionality" (i.e., the combinatorial explosion of cases to consider if the number of variables or stages is large), it works well only with problems having a small number of variables and stages.
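The backward-chaining search just described can be sketched in minimal form: starting from a desired end-state, the system works backward through the rules to the initial conditions.  (The rules, facts, and goal names here are hypothetical illustrations, and an acyclic rule set is assumed.)

```python
# A minimal backward-chaining sketch of the goal-seeking search
# described above: prove a goal by recursively proving the premises
# of some rule that concludes it, bottoming out in known facts.

def backward_chain(goal, rules, facts):
    """Return True if the goal can be established from the facts via the rules."""
    if goal in facts:
        return True
    return any(conclusion == goal and
               all(backward_chain(p, rules, facts) for p in premises)
               for premises, conclusion in rules)

rules = [
    ({"bridge_secured", "supplies_forward"}, "breakthrough"),
    ({"engineers_available"}, "bridge_secured"),
]
facts = {"engineers_available", "supplies_forward"}
print(backward_chain("breakthrough", rules, facts))  # True
```

The combinatorial character of the search is visible in the recursion: each goal may branch into several candidate rules, each with several premises, which is why the approach scales poorly as the number of variables and stages grows.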


Automated Scenario Generation.  In order to run a large-scale tactical combat model or C3I system evaluation model, it is necessary to specify a military scenario to the model.  Currently, most large-scale models use scenarios that are prepared by the US Army Training and Doctrine Command (TRADOC) Analysis Command (TRAC); the TRADOC-approved scenarios are called Standard Scenarios.  The process of developing a Standard Scenario is manual, and involves the participation of many individuals and organizations over a long period of time (two years).


Once the scenario is available, considerably more time and effort is required to set up the tactical combat model or C3I system evaluation model to correspond to the specified scenario.  A massive amount of effort is required to prepare the model input corresponding to a scenario because of the nonparametric nature of large-scale tactical combat models and C3I system evaluation models.  The amount of manual labor required to input a scenario into a large-scale model is so great, in fact, that samples of scenarios would not be used, even if they were available, as input to current models.  If the scope of inference of large-scale tactical combat models and C3I system evaluation models is to be expanded, automated systems must be available not only to generate scenarios, but to input the scenario information into the model (i.e., to prepare the model input corresponding to the scenario).  KBSs have the potential to address this problem -- the problem of "front-ending" a large-scale model.


The benefit that would result from an automatic scenario-generation capability is that the availability of samples of scenarios would enable the scope of inference of an analysis to be broader-based than if the analysis were based on a single scenario, strategy, plan, or tactic.  The practice of basing important planning and evaluation decisions on analysis based on a single scenario or plan is considered dangerous, in view of the fact that combat outcome may be highly dependent on the scenario and plan employed.

An automatic scenario-generation capability would represent a powerful methodology for evaluation of command and control, communications, electronic warfare, and intelligence (CCCEWI) systems.  The results obtained from a single tactical or operational-level simulation are highly correlated and extremely variable, both because tactical combat is not an ergodic process and because the course of tactical combat may be dramatically altered by a few key events that occur during the course of the combat.  For these reasons, the effectiveness of CCCEWI systems cannot be reliably assessed from a single scenario or even a small sample of scenarios.  The effects of variations in scenarios, strategies, plans, tactics, environment, weapon systems, command and control, communications, and intelligence systems are all mixed together.  A deterministic approach is not appropriate, and large samples of independent runs are required to separate the effects of each of these many variables.  Current evaluations are often highly conditioned on holding a large number of these variables constant; the scope of inference is very narrow.  The effectiveness of such systems should properly be determined from an analysis of a large number of different scenarios.
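The run-to-run variability just described can be illustrated with a toy stochastic attrition duel.  (The kill probabilities and force sizes here are hypothetical; this is not any model cited in this paper.)  Single replications vary widely, which is why effectiveness must be estimated from large samples of runs.

```python
# A toy stochastic duel, replicated many times to show the spread
# of outcomes across replications of an identical initial situation.

import random

def duel(blue, red, p_blue=0.10, p_red=0.08, rng=None):
    """Fight until one side is destroyed; return surviving blue units."""
    rng = rng or random.Random()
    while blue > 0 and red > 0:
        # each surviving unit fires once per round
        red = max(0, red - sum(rng.random() < p_blue for _ in range(blue)))
        blue = max(0, blue - sum(rng.random() < p_red for _ in range(red)))
    return blue

rng = random.Random(1)
outcomes = [duel(20, 20, rng=rng) for _ in range(500)]
print(min(outcomes), max(outcomes))  # the spread across replications is wide
```

Even in this trivially simple duel, identical starting conditions yield a broad range of outcomes, so any inference drawn from one run would be unreliable.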


An automated scenario generation capability would represent a capability to automatically generate large samples of scenarios and to perform analysis based on those samples.  This technology would substantially broaden the scope of inference of military analysis (planning and weapon system evaluation), and establish a sound methodological basis for evaluating CCCEWI systems.


Automated scenario and plan generators were developed for strategic ballistic missile warfare in the 1960s.  In the case of tactical and operational-level warfare, however, to date no automated means has been developed as a satisfactory alternative to the manual scenario-development process.  The basic reason for this is that tactical and operational-level warfare are very complex processes.  It has not been possible to develop a generally agreed-upon conceptual framework, analytical framework, or process which is considered to validly emulate or substitute for the process used by human experts.  The problem of developing an accepted automated process for the development of tactical and operational-level scenarios, strategies, plans, and tactics has not been solved.


Without a solution to this problem, the fields of weapon system planning and evaluation are seriously hampered.  The inferential scope of weapon system evaluation is seriously restricted if the analysis is based on a single scenario.  Massive amounts of time and resources are required to develop and use manually developed scenarios.  Using the current manual scenario-development process, the cost and time required to conduct evaluations are very great, particularly in the concept development phase of the military acquisition cycle.  The intelligence represented in the scenario is not easily transmitted to the tactical combat model or weapon system evaluation model, so that the dynamic operation of the model is, in many cases, done without adequate consideration or awareness of the goals of the scenario developers.  Because of this lack of embedded intelligence, and because no satisfactory method has been developed for separating the effect of systems from the effect of strategy and tactics (given the stochastic nature of tactical combat), tactical combat modeling and weapon system evaluation modeling efforts are generally relegated to providing basic physical performance measures, instead of high-level effectiveness measures (related to combat outcome).


Validation of Automatically Generated Scenarios and Strategies.  In addition to assisting the generation of scenarios, AI techniques could be used to address a key problem associated with automatically generated tactical or operational-level plans; namely, the problem of their validation.  Validation is a serious problem in military modeling.  It is a difficult problem because it is often not possible to validate military models by using comparisons to real-world field tests.  Instead, the validation must rest on issues such as face validity and analysis of special or limiting cases.


In the 1960s, combat models (theater-level and corps/division level) earned a "bad reputation" by incorporating artificial concepts such as "firepower scores."  Top-level military planners could not accept the results of these models, because they lacked face validity (an apparent correspondence between the objects and processes of the simulation model and those of the real world).  The model or the model designer or user could not justify the processes or the outcomes of the model.  If someone asked why a certain action was taken or result occurred, he could at most be shown a set of equations or rules that bore little resemblance to real-world situations.


A major difference between a modern KBS and a traditional model written in a procedural programming language (such as FORTRAN) or traditional simulation language (such as SIMSCRIPT II.5) is that the user may ask the model to justify itself.  If the program asks for a certain input, the user may ask why, and receive an explanation.  If the program produces a particular result, the user may ask why, and obtain a rationale or justification.  The program keeps track of the logical processes by which it reached its answer.  This feature of KBSs is an invaluable aid in validating the model results.  It could be used to assist validation of the scenarios generated by KBSs.
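The "ask why" facility just described can be sketched, in minimal form, as a rule engine that records the derivation of every proposition it establishes.  (Illustrative only; real KBS shells of the period recorded full rule traces and supported interactive interrogation.)

```python
# A minimal sketch of the justification facility described above:
# the engine records, for each derived proposition, the supporting
# facts used to derive it, so the user can ask "why?"

def explain_chain(rules, facts):
    """Forward-chain, keeping a justification for every proposition."""
    why = {f: "given as input" for f in facts}
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in why and all(p in why for p in premises):
                why[conclusion] = "derived from " + str(sorted(premises))
                changed = True
    return why

why = explain_chain([({"threat", "in_range"}, "engage")],
                    {"threat", "in_range"})
print(why["engage"])  # derived from ['in_range', 'threat']
```

The essential point is that the justification is a by-product of the inference process itself, which is why a KBS can explain its answers while a conventional FORTRAN or SIMSCRIPT model cannot.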


AI-Based Simulation.  This section describes developments in the field of AI-based simulation.  For more discussion, see Reference 10 (Shannon).


In recent years, simulation systems have been developed which incorporate AI concepts, in particular, the concept of goal finding.  These simulation systems have the capability to explore, by simulation, alternative situations, and to make decisions based on that exploration.  These AI-based simulation systems have the promise of assisting tactical combat modeling, by explicitly recognizing the goal-seeking character of combat, i.e., its goal-theoretic nature.  While current systems are not sufficiently powerful to provide a general solution to tactical combat, the approach of linking the AI concept of goal-finding with simulation offers promise for tactical combat modeling, and deserves consideration.  Although the concept offers the promise of finding goals in a tactical combat model, it should be recognized that current large-scale, high-detail tactical combat models are too slow-running to allow for the exploration of many alternatives.   

An area of development that bodes well for the solution to the problem of developing an automated tactical and operational-level scenario and plan generation capability is the field of object-oriented programming.  Object-oriented programming is distinguished from process-oriented or procedural programming.  In procedural programming, the programmer describes a process or procedure (such as the movement of a tank or the iterative solution of a generalized Lagrange multiplier optimization problem) by means of a step-by-step algorithmic description of the process.  In object-oriented programming, the programmer describes an object (e.g., a tank) in terms of its many attributes and how it interacts with other objects.  Object-oriented programming is oriented toward solving complex simulation problems.  The programming is faster and the product has a high degree of face validity since the objects of the simulation may be easily depicted on a graphics monitor.


Object-oriented programming is not new.  It originated in the SIMULA simulation language, in 1966.  It is well-suited to AI applications, however, and has recently undergone extensive development.


Reference 10 (Shannon) provides an excellent description of the recent developments in the field of AI-based simulation and modeling.  These developments are well-suited to addressing the problem of developing an automated tactical and operational-level scenario and plan generation capability.  Most AI/ES-based simulation systems are based on object-oriented programming.  In object-oriented programming, entities called "objects" have attributes, usually called "slots."  Objects interact by passing "messages" that affect the values of the attributes.  Related objects may be grouped into classes.  Shannon observes that the power of object-oriented programming derives from two features of classes: encapsulation and inheritance.  Encapsulation refers to the ability to group specific types of data, and the means to manipulate them, into a class.  Inheritance refers to the ability to define new classes which assume the same characteristics as existing classes, plus added characteristics.  These classes form a hierarchy.  These features (encapsulation and inheritance) enable rapid and efficient development of a top-down model design.
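Encapsulation and inheritance, as described above, can be sketched in a modern object-oriented style.  (The lineage of these ideas runs back to SIMULA; the class names and attribute values here are hypothetical.)

```python
# Encapsulation: a unit's attributes ("slots") and the methods that
# manipulate them are grouped into a single class.
# Inheritance: a subclass assumes the parent's characteristics and
# adds characteristics of its own.

class Unit:
    def __init__(self, name, x, y):
        self.name, self.x, self.y = name, x, y

    def move(self, dx, dy):
        # a "message" that other objects may send to this one
        self.x += dx
        self.y += dy

class Tank(Unit):
    def __init__(self, name, x, y, main_gun_range):
        super().__init__(name, x, y)          # inherit Unit's slots
        self.main_gun_range = main_gun_range  # add a Tank-specific slot

t = Tank("T-1", 0, 0, main_gun_range=2000)
t.move(5, 3)                       # behavior inherited from Unit
print(t.x, t.y, t.main_gun_range)  # 5 3 2000
```

Because each new class reuses everything its parent already defines, a hierarchy of unit types can be built top-down with very little additional code, which is the source of the development speed Shannon describes.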


AI-based simulation systems enable the user to specify knowledge in various forms.  The most common form is a "rule," or "if-then" statement.  Rules may be used for a variety of purposes, such as to define the behavior or methods used by objects, to test models for validity, and to implement goal-seeking behavior.


Modern AI-based simulation systems have powerful graphics capabilities.  The graphics may be used to display the objects of the real system and their relationships (e.g., positions of two tanks or two units), to display the components of the simulation model and their logical relationships (e.g., the relationship of two model modules), or to present statistical summaries of the simulation execution (e.g., animation of the dynamic course of the simulation, or a histogram depicting the frequency of occurrence of events of interest).


To date, a number of AI-based simulation systems have been developed.  These include the following.


ROSS (Rule-Oriented Simulation System), developed by the RAND Corporation, and used for war game simulation and air battles.


KBS (Knowledge-Based Simulation), a LISP-based discrete-event simulation system developed at Carnegie-Mellon University, and commercially available from the Carnegie Group (SIMULATION CRAFT), IntelliCorp (SIMKIT), and IntelliSys Corporation (LASER/SIM).


T-PROLOG, developed by the Hungarian Institute for Coordination of Computer Techniques, a PROLOG-based system (for discrete and continuous simulation) oriented toward goal-finding.


V-GOSS (Vienna Goal Oriented Simulation System), a PROLOG-based, process-oriented discrete-event simulation system, also oriented toward goal achievement.  V-GOSS explores alternative paths in the search for goals.


In summary, AI technology shows promise of being able to assist solution of many of the problems facing tactical combat modeling.  Modern KBSs may be used to perform the intelligence functions and tactical decisionmaking in tactical combat models.  Through use of the goal-seeking features of modern KBSs, AI technology can help implement a game-theoretic framework for tactical combat modeling.  In the area of scenario generation, the military planner can obtain scenarios that address a wide range of specified goals.  It would not be necessary to specify an artificial scalar objective function or tactical decision rule.  Furthermore, the planner can ask for justification of the results, to establish their validity.  With these two features, modern expert systems overcome two of the key difficulties associated with the automated development of tactical and operational-level scenarios.


VII. Major Problems Facing AI Applications in Military Modeling


A. Summary of Problems


There are several major problems facing the successful application of AI technology to military modeling.  These problems stem from limitations both of current AI technology and of current military modeling.  This section identifies and discusses these problems.  They are:


            1. The nonparametric nature of military system evaluation models


            2. The lack of a game-theoretic (goal-seeking) conceptual framework


            3. The nonparametric nature of current KBSs

            4. The deprofessionalization/deskilling of the fields of simulation and AI


            5. The trend to hierarchical models


            6. The lack of "total error analysis" in military modeling


            7. The lack of an "extensive/intensive" balance in military modeling


Each of these problems is described in the paragraphs that follow.


B. Description of Problems


1. Nonparametric Military System Evaluation Models


The ballistic missile warfare models of the 1960s were analytical parametric models.  The battle process was represented in a well-formulated conceptual and analytical framework.  The desired allocation of weapons to targets was expressed analytically in terms of a mathematical game-theoretic formulation, and the results of the battle were determined by the game solution.  In sharp contrast, the positioning and movement of weapons and other military systems in today's tactical combat and weapon system evaluation models is essentially "nonparametric," and the deployment and employment of forces is characterized by micro-level local interactions rather than by an overall perception of the situation and overall military goals.


The term "nonparametric" is used to describe most current large-scale, detailed combat models and C3I system evaluation models.  By "nonparametric" is meant that the locations and interactions of objects in the battlefield simulation are specified in terms of individual items, rather than in terms of probability distributions.  The amount of detail incorporated in current military system evaluation models is staggering.  In a major C3I system evaluation model, the positions of approximately three hundred thousand electromagnetic emitters are specified.


The problems with nonparametric specifications are several:


            1. Massive amounts of effort are required to specify a single situation, so that examination of large numbers of alternative situations is not feasible.  The scope of inference of the analysis is severely restricted.  The amount of information required to describe situations and to be processed is exploding.  Processing times are so slow that few cases can be examined.


            2. It is not possible to specify alternative situations in terms of changes in parameter values.  It is not practical to conduct extensive sensitivity analyses (e.g., to explore alternatives by means of large fractional factorial designs).


            3. It is difficult to control the macroscopic features of a situation.


            4. The sheer volume of the data makes it difficult to conduct quality control of the data.


            5. Massive amounts of time and money are required to assemble and maintain the data bases used in support of models.  Models become "frozen," in order to avoid data base changes.  The cost of conducting analysis is rising dramatically because of increased dependence on nonparametric representations (larger data bases, more powerful computers, more data analysts).


The nonparametric approach to modeling reduces the pressure for a well-conceived conceptual framework.  The high face validity of the model silences critics (as mentioned earlier, "face validity" refers to the correspondence between the objects and processes of the model and those of the real world).  This approach has, however, a high cost associated with it.  Namely, it abandons the very important scientific model-building principle of parsimony (representing a situation by means of a model defined in terms of a small number of variables).
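The principle of parsimony can be made concrete.  Rather than enumerating the position of every one of several hundred thousand emitters item by item, a laydown could be characterized by a handful of distribution parameters and sampled as needed.  (A minimal sketch with hypothetical parameter values; a real laydown model would use richer spatial distributions.)

```python
# A parametric alternative to item-by-item specification: the entire
# emitter laydown is characterized by three parameters (center and
# spread) rather than by n individually specified position records.

import random

def sample_emitters(n, center_x, center_y, spread, seed=0):
    """Generate n emitter positions from a few distribution parameters."""
    rng = random.Random(seed)
    return [(rng.gauss(center_x, spread), rng.gauss(center_y, spread))
            for _ in range(n)]

laydown = sample_emitters(300_000, center_x=50.0, center_y=100.0, spread=10.0)
print(len(laydown))  # 300000
```

With such a representation, an alternative situation is a change of three numbers rather than a revision of three hundred thousand records, which is what makes sensitivity analysis over many situations feasible.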


Model problems tend to relate to specific situations, and model "fixes" tend to be in the nature of patches, rather than revisions to flawed or inadequate conceptual or analytical frameworks.  A good example of this situation is afforded by the VIC tactical combat model.  The VIC is "driven" by a number of table-driven tactical decision rules.  When the model doesn't "work right" in a particular case, the table values are adjusted.  This may cause a problem to arise in another situation, leading to another "fix."  The process continues.  The result is a model whose behavior is defined in terms of a myriad of adjustments to decision tables.  The reasons for the model's behavior cannot be explained in terms of general analytical principles or concepts, but in terms of thousands of parameter values.


In scientific model building, one of the tests of a model is whether it is consistent with new data sets, other than the one (or ones) from which it was developed.  Consistency of the model with new data sets is accepted as evidence of the validity of the model.  Inconsistency of the model and new data sets is construed as evidence that the model representation is an inadequate representation of reality, over the range of situations (scope) it is desired to cover.  The continuing need to modify the tactical decision rules of a combat model is evidence of the lack of validity of the model.  It is evidence that the model's representation of decisionmaking (or situation assessment) is inadequate.


The same problem -- the need for continual adjustment of the model -- has been encountered with exemplar-based KBSs.  Such systems derive general rules from sets of examples.  A problem that has been observed with exemplar-based systems is that they often need to be modified (i.e., new rules added, or probabilities adjusted) for new cases.  If this process continues for more than a few iterations, the conclusion should be reached that the knowledge representation is inadequate.  A continuing need to adjust the model (either a simulation model or a KBS model) should be recognized as evidence of the inadequacy of the model's representation of reality, or of the KBS's knowledge representation.  The search should begin for a better representation, i.e., to find a cure for the problem rather than to continue to treat the symptoms.


It is ironic that the current trend to nonparametric models is rooted, in part, in the desire to enhance model validity.  A drawback of the tactical combat models of the 1960s was that they often lacked face validity.  There was not a close correspondence between the objects and processes of the real world and those of the model.  (The use of firepower scores is a case in point.)  In the move toward models having a greater degree of face validity, greater and greater levels of detail were added to models.  The great increase in computing processing speed and storage facilitated and accelerated this trend.  The result has been the heavy dependence on models that are so detailed that they are costly, inflexible, and of limited scope.  Rather than enhancing the validity of tactical combat models, the move to high detail and limited scope has in fact decreased it.


Most modern tactical combat models are computer simulation models.  With the vast improvement in the speed and storage capability of computers, there was an opportunity to expand the validity of the models, through the examination of a wide variety of situations.  Unfortunately, the power of the computer was used primarily to examine high detail for special cases.  The allocation of effort between more detail (precision) and more cases (broader scope) has swung heavily toward the former. 


2. The Lack of a Game-Theoretic (Goal-Oriented) Conceptual Framework


Today's large-scale tactical combat models (e.g., VECTOR-II, VIC) are computer programs that estimate the outcome of a specific situation.  The control (movement and engagement) of forces is specified by the model user, through tactical decision rule subroutines.  It is not determined in a game-theoretic framework, in which overall military goals are specified.  The decisions made by tactical decision rules are locally reactive.  They depend mainly on the actual or perceived local situation, rather than on a global perspective of the combat situation and goal.  Current models are evaluative rather than prescriptive (normative) in nature.


While this approach may be acceptable for some applications (e.g., training and education), it has serious drawbacks for test and evaluation applications, and, indeed, for potential applications of AI technology to operational combat.  In training applications, a trainee can specify a set of situations that are of personal interest to him, view their consequences, and learn preferred modes of behavior.  The trainee represents the normative element of the system.  In test and evaluation applications, on the other hand, the models are generally noninteractive.  The human being is absent from the model; his goal-seeking ability must be replaced by another -- automatic -- normative (goal-seeking) element.  This replacement should be effected by the incorporation of a game-theoretic conceptual framework.


This same requirement holds for operational applications of AI technology (i.e., applications to actual combat).  If the human being is not a "part of the loop," the system must fulfill the normative role of the human being.


Evaluation models have an important role to play.  They are a means for evaluating proposed plans.  They have, however, serious shortcomings in test and evaluation applications.  They require the user to specify the plans.  They do not shed light on the potentially best outcome of a situation.  They cannot impute the value of a weapon system without a very large number of runs (e.g., a fractional factorial experimental design).


Game-theoretic models, on the other hand, enable automatic development and play of strategies, tactics, and plans.  They can often directly indicate the average value of particular systems (through the value of the Lagrange multipliers, or "shadow values," which are determined during the course of the optimization).  In summary, game-theoretic models are well-suited to system evaluation.  Non-game-theoretic models are much more difficult to use in evaluation; they are better suited to other applications, such as training.
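

As a concrete (if simplified) illustration of imputing system value from an optimization, the following sketch solves a tiny hypothetical force-allocation problem and recovers the "shadow value" of one system by perturbing its constraint.  All numbers, and the force-allocation scenario itself, are invented for illustration; the two-variable solver enumerates constraint intersections, which suffices for a problem this small.

```python
from itertools import combinations

def maximize_lp_2d(c, A, b):
    """Maximize c.x over {x >= 0 : A x <= b} by enumerating constraint-line
    intersections -- adequate for this tiny two-variable illustration."""
    lines = [(row[0], row[1], rhs) for row, rhs in zip(A, b)]
    lines += [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]   # the axes x = 0, y = 0
    best = None
    for (a1, b1, r1), (a2, b2, r2) in combinations(lines, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:
            continue                               # parallel lines
        x = (r1 * b2 - r2 * b1) / det              # Cramer's rule
        y = (a1 * r2 - a2 * r1) / det
        feasible = x >= -1e-9 and y >= -1e-9 and all(
            row[0] * x + row[1] * y <= rhs + 1e-9 for row, rhs in zip(A, b))
        if feasible and (best is None or c[0] * x + c[1] * y > best):
            best = c[0] * x + c[1] * y
    return best

# Hypothetical problem: maximize target value destroyed by two systems.
value_per_sortie = [3.0, 5.0]      # per-sortie value of systems A and B
A_ub = [[1.0, 0.0],                # sorties of A limited by stock of A
        [0.0, 1.0],                # sorties of B limited by stock of B
        [2.0, 4.0]]                # shared support capacity (e.g., fuel)
b_ub = [40.0, 30.0, 140.0]

base = maximize_lp_2d(value_per_sortie, A_ub, b_ub)

# The shadow value of system A is the change in the optimum per added
# unit of A's stock -- obtained from a one-unit perturbation of the
# constraint, with no factorial set of sensitivity runs required.
perturbed = maximize_lp_2d(value_per_sortie, A_ub, [41.0, 30.0, 140.0])
shadow_value_A = perturbed - base
print(base, shadow_value_A)
```

In a full-scale model the shadow values emerge directly as the Lagrange multipliers of the optimization, rather than by perturbation; the perturbation here simply makes their meaning visible.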


Much more attention needs to be devoted to developing tactical combat models that have game-theoretic analytical frameworks.


3. The Nonparametric Nature of Knowledge-Based Systems


Modern large-scale KBSs are essentially nonparametric in nature.  They function by processing large numbers of facts, by means of large numbers of rules.  The decisions they make are reached by applying the rules of the knowledge base over and over, to the input facts.  The conclusions reached are guided by logical, deterministic processing of facts, rather than by an overall analytical framework.
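

This processing style can be made concrete with a minimal sketch of forward chaining: a small rule base is applied to a set of input facts over and over until no new conclusions emerge.  The rules and facts below are invented for illustration and are not drawn from any fielded system.

```python
# Rules are (premises, conclusion) pairs; facts are simple symbols.
rules = [
    ({"radar_contact", "no_iff_response"}, "possible_hostile"),
    ({"possible_hostile", "inbound_heading"}, "threat"),
    ({"threat", "within_engagement_range"}, "recommend_engage"),
]

def forward_chain(facts, rules):
    """Repeatedly apply every rule to the fact base until quiescence."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain(
    {"radar_contact", "no_iff_response", "inbound_heading",
     "within_engagement_range"}, rules)
print(sorted(derived))
```

Note that the system's "knowledge" resides entirely in the enumeration of rules; there is no parametric or analytical structure above them, which is precisely the property at issue here.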


The nonparametric character of KBSs creates several problems for modeling.  First, models are difficult to maintain.  The model may be built on-the-fly, by adding more and more rules.  Each time the system doesn't work correctly, more rules may be added.  Eventually, the process may go out of control.  The system's behavior is defined in terms of so many rules that no one really knows why it works the way it does.  Small changes to the system may cause radical changes in performance.  A touted strength of the KBS -- the simplicity and visibility of the rules -- may ultimately contribute to its downfall.  Large-scale KBSs become veritable "black boxes" defined in terms of thousands of rules.  Although the system may be able to describe the chain of reasoning leading to a conclusion, understanding or modifying the system may become very difficult.


Because of their nonparametric character, major existing KBSs are essentially unable to handle difficult problems involving spatial, temporal, or complex continuous stochastic processes.  While some KBSs do address uncertainty through the incorporation of Bayesian inference and fuzzy logic concepts, in general their method of representing knowledge is simply too primitive to handle large-scale complex problems.  Major KBSs are oriented toward the solution of deterministic problems characterizable as decision trees, rather than complex stochastic processes.  Unlike classical statistical decision-theoretic methodology (e.g., a Bayesian framework), they are poorly suited for incorporating information into a comprehensive, efficient representation of the situation.  Situations are described in terms of more and more facts and information, rather than through better and better estimates of parameters.


4. The Deprofessionalization/Deskilling of the Field of Simulation and AI


In recent years, there has been a proliferation of simulation languages, including highly commercialized languages that promise the user the ability to develop a simulation model "without programming."  This trend may be contributing to a decrease in the skill level of simulation practitioners, and in the quality of the completed simulation models.  The trend to event-driven simulations and object-oriented programming encourages the development of simplistic simulation models that emphasize events and local interactions among objects, rather than an overall conceptual or analytical framework.  It also encourages the development of rapid prototypes that may be based on a poor concept or understanding of the system being modeled.


The same deskilling phenomenon that has occurred over the past two decades in the field of simulation is now happening, at a much faster rate, in the field of AI.  KBSs are being provided en masse to individuals with limited foundation in the theory of AI or model building.  Without a firm understanding of the underlying concepts (knowledge representation, inferencing, forward and backward chaining, model building, scope, validity, representation of uncertainty, precision), there is a danger that the product of using these tools will be of little value.  In order for a modeling application to be successful, the intellectual content (conceptual and analytical frameworks) of the model must be solid.  KBSs and simulation systems are nothing more than empty shells -- development environments.  While they facilitate the system development process, they cannot produce meaningful results by themselves.  The quality of the system product will be determined more by the quality of the knowledge or model representation and the analytical framework of the inferencing procedures than by the mechanical features of the shells.


The current situation vis-a-vis the deprofessionalization of the fields of simulation and AI is reminiscent of a similar situation in the field of statistics in the 1960s.  Prior to the 1960s, most practicing statisticians were graduate mathematicians who understood the assumptions and procedures underlying statistical analysis.  They understood the implications of autocorrelated residuals, of correlations among the model error terms and explanatory variables, of measurement errors in the dependent variables, of a lack of randomization or control of the explanatory variables.  They could invert matrices by hand, and knew the implications of a singular design matrix or multicollinearity.


In the 1960s, with the widespread availability of mainframe computers and the advent of timesharing, statistical software packages became available to everyone.  Short courses were developed.  "Statistical analysis" could now be accomplished by anyone with access to a computer terminal.  In the 1970s, most undergraduate curricula were expanded to include a mandatory first course in statistics.


The result was a disaster.  Data were processed by "turning the crank" of an analytical formula, with little regard for, or total ignorance of, the critical assumptions underlying the analysis.  Models were developed from passively observed data, and used to predict what would happen if forced changes were made in the model explanatory variables.  Naturally, the models failed.  Econometric models failed to predict economic changes.  Because of flawed designs (selection/regression biases), statistical evaluation models of social programs incorrectly "showed" that non-program control groups improved more than the program recipients.


As a result of these developments, the field of statistics suffered a severe loss of confidence.  The opinion that you could prove anything with statistics -- that statistical conclusions could not be trusted -- flourished.  The statistics profession suffered a loss of reputation that persists to this day.  The deprofessionalization/deskilling of statistics lowered confidence in statistical analysis, and contributed to many flawed analyses and false conclusions.


In the 1960s most simulation scientists were highly trained mathematicians and scientists.  This is no longer true.  The conceptual and analytical foundations of many simulations are weak, and the deprofessionalization of the fields of simulation and AI will contribute to a further decline.


5. The Use of Hierarchical Models


As mentioned earlier, in the 1980s the Army Model Improvement Program initiated a program to replace the integrated VECTOR-II model by a series of hierarchical models -- CASTFOREM (battalion level), CORDIVEM (corps/division level) and FORCEM (army and theater level).  Models at one echelon would provide input to models at another echelon.


The trend to hierarchical models is fraught with danger.  The problem is that in order for the concept to work, there must be very sound linkages, or interfaces, between the models.  Unfortunately, combat is not well represented by a "separable" series of independent models.  The combat process involves complex interrelationships spanning many echelons (communications, intelligence, command and control), and these cross-echelon processes may have a dramatic effect on the combat outcome.


These linkages are especially critical with regard to AI applications.  A decision made by a local commander should take into account the overall military mission, and an assessment of the total situation.  The suggestion that combat may be represented by a "cellular automata" approach is not justified.  In order for AI to function effectively across all echelons, an integrated model (such as VECTOR-II and the new ConMod) affords a more natural conceptual framework.  This framework more naturally supports global concepts than a hierarchical approach does.


While the use of hierarchical models is efficient (specific-purpose hierarchical models can be developed much faster than integrated multi-echelon models), the difficulty of establishing proper linkages among the models, and of reflecting critical cross-echelon phenomena (e.g., communications, situation assessment, command and control), makes the development of a high-validity hierarchical tactical combat model very difficult.


6. The Lack of Total Error Analysis


At the present time, high-resolution models are available that provide very detailed answers in very particular situations.  Such results are of restricted scope.  They provide an answer that is conditional on a wide variety of fixed values of parameters that may have a significant effect on the outcome.  As mentioned, many Army analyses are based, for example, on a single TRAC Standard Scenario, instead of a sample of scenarios.  A military system may work fine in a particular case, but what can be said about other situations (threat, terrain, tactics, scenarios, weather)?


In order to broaden the scope of inference of military analysis, much more attention needs to be given to identifying all of the variables that may affect the battle outcome, and to representing all of the significant ones in the analysis.  A "components of variance" analysis should be conducted that assesses the magnitude of the variance associated with each variable.  It is not sufficient to base a system analysis on a single scenario if it is not known how much the results would vary if another scenario were used.
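

The idea of a components-of-variance analysis can be sketched with a toy one-way random-effects breakdown.  The outcome data below (a notional "kill ratio" replicated under three scenarios) are entirely invented; the point is that the between-scenario variance component can dwarf the replication variance, which quantifies how misleading a single-scenario result could be.

```python
from statistics import mean

outcomes = {                # invented numbers, for illustration only
    "scenario_A": [1.8, 2.1, 1.9, 2.2],
    "scenario_B": [3.0, 3.4, 3.1, 3.3],
    "scenario_C": [1.1, 0.9, 1.2, 1.0],
}

n = len(next(iter(outcomes.values())))       # replications per scenario
k = len(outcomes)                            # number of scenarios
grand = mean(v for reps in outcomes.values() for v in reps)

# One-way analysis-of-variance sums of squares.
ss_between = n * sum((mean(reps) - grand) ** 2 for reps in outcomes.values())
ss_within = sum((v - mean(reps)) ** 2
                for reps in outcomes.values() for v in reps)

ms_between = ss_between / (k - 1)
ms_within = ss_within / (k * (n - 1))

# Estimated variance component due to scenario (random-effects model).
var_scenario = max(0.0, (ms_between - ms_within) / n)
print(var_scenario, ms_within)
```

Here the scenario-to-scenario component is roughly forty times the within-scenario component, so conclusions drawn from any single scenario say little about the population of scenarios.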


The field of tactical combat modeling could take a lesson from the field of sample survey research.  In recent years, the concept of "total error analysis" has been promoted in sample survey methodology.  Under that concept, the survey designer must identify all potentially significant sources of error -- not just sampling error -- and develop a design that keeps all of them under control.


In the early days of military modeling, there was much talk of sensitivity analysis.  Sensitivity analysis is conducted much more easily in a model development context that permits change in a well-designed analytical framework.  It is not easily conducted in the context of massive high-detail nonparametric models that address only part of the problem.  Implementation of a fractional factorial experimental design to explore a wide range of situations is feasible in the context of parametric models.  Examination of alternative situations to determine a game-theoretic solution is feasible in the context of an analytical representation.  Both of these methodologies are extremely difficult to implement in a modeling context where the model is defined nonparametrically in terms of hundreds of thousands of items of information in a data base.
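

A fractional factorial design of the kind mentioned above is easy to sketch.  The following fragment generates a 2^(4-1) design: four two-level factors (the factor names -- threat, terrain, weather, tactic -- are invented placeholders) explored in 8 runs instead of 16, by setting the fourth factor equal to the product of the first three (the defining relation D = ABC).

```python
from itertools import product

# Factors (hypothetical): A = threat, B = terrain, C = weather, D = tactic.
runs = []
for a, b, c in product((-1, 1), repeat=3):
    d = a * b * c                  # defining relation D = ABC
    runs.append((a, b, c, d))

for run in runs:
    print(run)
```

Each factor appears at each level exactly four times, so main effects can be estimated from half the runs a full factorial would require -- the kind of economy that makes broad-scope exploration feasible in a parametric setting.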

7. The Lack of an "Extensive/Intensive" Balance in Military Modeling


Upon completion of a total error analysis, attention should be allocated to the problem of allocating resources between precision (high-detail) and scope of inference.  On the one hand, detailed, high-resolution, item-level analysis is necessary, to determine the outcome of weapon-target interactions and the effect of military systems (communications, intelligence) on battle outcome.  On the other hand, these results are needed for a wide range of circumstances, so that the overall effectiveness of a system is known, and the conditions under which it performs well or poorly are known.


The allocation of evaluation resources between analysis of high-detail under specific circumstances, and analysis of a lower level of detail under a wide range of conditions, should be justified.


VIII. The Information Overload


The important problems in military systems analysis involve consideration of many variables and complex relationships.  Modern sensor systems collect data at astronomical rates, and the information systems cannot keep pace.  Systems that collect large amounts of data at a rapid rate, but lack a sound strategy for promptly organizing the data, will ultimately fail.  This is the information overload.


The information overload is occurring in development contexts, and is in danger of spilling over to operational contexts.  In an operational setting, the danger is that information systems will not be able to process the collected data rapidly enough, or will fail to extract the full significance or meaning from the collected data.  The full impact of the information overload is not yet being felt in operational contexts, because the high-data-rate information systems are not yet operational.


In the development context, we are already experiencing the results of the information overload.  The move to high-detail, nonparametric, event-driven simulations has so saturated the computer processing capacity (and the capacity of systems to prepare broad-scope model input) that the scope of major analyses is severely restricted.  The alarming aspect of this situation is that, if the use of inefficient information and knowledge representations is continued in a development context (e.g., in test and evaluation), it will spread to the operational context as well.  If means cannot be found to efficiently process data and extract the needed information in a development context (i.e., in tactical combat models), the hope of doing so in a real-time operational context (i.e., actual battle) is nonexistent.


In summary, the information overload is a threat to both development and operational applications.  Combat models are choking on the information overload, and future operational systems will follow suit, if the problem is not solved.


Through modern technology we are building systems that generate massive amounts of information.  We are now attempting to build systems that can process this information.  Unfortunately, the information processing systems cannot keep pace with the information generation systems.  Furthermore, the use of nonparametric AI and modeling systems, with inadequate knowledge representations and analytical frameworks, poses the danger of developing information processing systems whose functioning is not fully understood, and which are incapable of distilling the full intelligence content from the data and thereby making full use of it.


The nonparametric character of military systems evaluation models and KBSs is contributing to the information overload.  Models are being developed that are specified in terms of massive amounts of nonparametrically specified data, rather than efficient, parametric representations.  KBSs are available that are capable of embodying thousands of rules and facts, but they cannot efficiently process information.  Graphics systems are being developed that require a megabyte of storage per frame; the problem of processing these images to extract intelligence has not been solved.  The amount of information that is present in the models, KBSs, and graphics systems is staggering.  It far outweighs the amount of information content, or value, or intelligence that is embodied in the data, relative to combat goals.


The information overload has been spawned in military system evaluation models and development prototypes, and it is in danger of spreading to operational systems.  To date, AI does not play a major role in complex operational systems.  Unless AI systems begin to be implemented in efficient conceptual and analytical frameworks, they never will play a major role.  They will be relegated to playing minor roles in training, in conceptually simple (although complicated) situations involving little of a spatial, temporal, or continuous multidimensional stochastic nature.


Current KBSs attempt to duplicate the decisions reached by human experts, without a conceptual understanding of the process used by human beings to reach those decisions.  Lacking an overall analytical framework, they provide rigorous attention to detail while overlooking or failing to recognize the big picture.  As a result, they are failing to provide useful solutions to problems in which comprehension of the overall problem contexts is an important factor.  Modern military system evaluation models, KBSs, and graphics systems are contributing to the information overload, rather than solving it. 


The event-driven, nonparametric approach to modeling cannot handle complex, multidimensional stochastic problems well.  The approach of attempting to represent such problems nonparametrically has failed to achieve results of broad scope.  Means are needed for processing information to reduce its volume, and for combining it with other information in a sequential fashion that keeps the total amount of information under control.
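

The kind of sequential, amount-controlling processing meant here can be sketched in a few lines: each new sensor report is fused into a fixed-size summary (an estimate and its precision) rather than stored.  The report values and precisions below are invented; the fusion rule is standard precision-weighted averaging.

```python
def fuse(estimate, precision, measurement, meas_precision):
    """Combine a new measurement into the running estimate.
    Precision = 1/variance; fused precision is the sum of precisions."""
    new_precision = precision + meas_precision
    new_estimate = (precision * estimate
                    + meas_precision * measurement) / new_precision
    return new_estimate, new_precision

estimate, precision = 0.0, 1e-6        # diffuse prior: essentially no info
for report in [10.2, 9.8, 10.5, 9.9, 10.1]:
    estimate, precision = fuse(estimate, precision, report, 1.0)

print(estimate, 1.0 / precision)       # estimate and its variance
```

However many reports arrive, the state of knowledge is carried in two numbers; the raw data need never be retained.  This is the opposite of the nonparametric approach, in which the representation grows with every item of data.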


The current situation with regard to the information overload is a dismal one.  Modern sensor systems generate massive amounts of data, and current nonparametric AI and graphics systems are not equipped to process the data.  They are destined to drown in a sea of detail, unless better, analytical knowledge representations and effective strategies for information processing are developed.


As gloomy as the present situation is, there is hope.  Efficient analytical representations have been developed in the past (e.g., Bayesian representations).  The field of AI is young and advancing.  The concepts of AI can be combined with analysis to solve the information overload problem.  The first step toward a solution is to recognize the nature of the problem.  Once its nature is recognized, positive steps can be taken to solve it.  The next section describes steps that can be taken to solve the information overload problem.


IX. Recommended Areas for Research and Development


The information overload is a fact in development, and looms as a serious problem in an operational context.  It is a thesis of this paper that much of the information overload is caused by the use of inefficient, nonparametric knowledge representations.  Furthermore, current simulation models and KBSs are oriented to inefficient, nonanalytical knowledge representations, and their continued use will perpetuate the information overload.  To address the information overload problem, improvements are necessary in the conceptual and analytical frameworks of military evaluation models (tactical combat models, weapon system evaluation models), KBSs, and AI-based simulation systems.

A. Summary of Recommendations


The following areas are identified as areas in which improvements are considered both necessary and possible, to improve the quality of AI and military modeling, and thereby ameliorate the information overload.


            1. Improve Analytical Frameworks.  Evaluation models should rest on efficient, parametric representations of the real-world phenomena.


            2. Establish Game-Theoretic Conceptual Frameworks.  Evaluation models should be based on a solid understanding, appreciation, and recognition of the goal-seeking nature of combat.


            3. Develop KBSs Having Analytical Knowledge Representations.  Effort should be allocated to the development of KBSs that do more than represent logical rules and process facts according to these rules.  Improved knowledge representations need to be developed for representing spatial, temporal, and multidimensional continuous stochastic processes.


            4. Conduct Total Error Analysis.  System evaluations should include a total error analysis.


            5. Balance Scope and Detail.  A balance needs to be achieved between the extensive (more cases) and intensive (more detail) aspects of model analysis.

            6. Improve Linkages between Hierarchical Models.  Unless better means are developed for establishing linkages between hierarchical models, more emphasis should be given to integrated (multi-echelon) models, in spite of their higher cost.


Each of these recommendations will be described in the paragraphs that follow.


B. Description of Recommendations


1. Improve Analytical Frameworks


The trend to nonparametric models needs to be reversed.  The increased use of nonparametric simulations has resulted in massive simulations that are costly to support and difficult to maintain and modify.  They are so difficult to set up that they tend to be used to examine only a few special cases, leading to a reduction in scope of inference.  Much more attention needs to be given to the development of conceptual and analytical frameworks.  Face validity is not sufficient to establish model validity, and may in fact prove a detriment if it results in a model that is so detailed that only a few cases may be examined.


The heavy dependence on event-driven simulation needs to be reexamined.  Such simulations are easy to develop, and commercial systems are available to support their development.  While an event-driven simulation may be easy to develop, however, it may be used as a means of avoiding difficult conceptual issues.  More attention needs to be paid to the use of analytical models.  An event-driven simulation may be used as a basis for developing an analytical representation of a process.  The analytical representation can then be used as a basis for a broad-scope analysis.


Face validity cannot constitute the primary means of establishing model validity.  It is necessary, but not sufficient.  The trend to developing models that include all of the parts and microscopic processes of the real-world system, but omit the cognition of the real-world process, will fail to produce useful results.


The growing dependence on "simulation without programming," on object-oriented programming, and on rapid prototyping, with inadequate attention given to the conceptual and analytical framework, must be resisted.  This same phenomenon is occurring in the field of AI.  While simulation-without-programming may work quite well in helping to set up ten machines in a production shop, it is an inadequate tool for solving complex military problems.  Similarly, AI-without-programming may solve simple problems well and even large problems efficiently, but it is not certain that it can solve large problems well.  Computation is not a substitute for analysis.  Models and KBSs lacking strong analytical content cannot provide satisfactory solutions to complex problems.


2. Establish Game-Theoretic Conceptual Frameworks for Models of Tactical and Operational-Level Combat

The lack of game-theoretic foundations for tactical combat models needs to be corrected.  Much more attention needs to be given to representing tactical and operational-level combat in a game-theoretic context.  One of the significant factors in the rapid success of strategic missile warfare models in the 1960s was the presence of a game-theoretic foundation.  The goals of combat should permeate the model.  The use of models that emphasize the physics of combat, while ignoring the essential cognitive aspects, must be avoided.


With a strong game-theoretic conceptual and analytical framework, the problem of deriving manual scenarios and developing corresponding input to tactical combat models and C3I evaluation models can be largely solved.  In cases in which a scalar objective function may be specified, the need for massive numbers of sensitivity runs to determine the relative value of alternative systems is largely obviated, since the imputed value may be obtained as a byproduct of the game solution (relative to the specified objective function).
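

A minimal illustration of obtaining strategies and imputed value directly from a game solution: the sketch below solves a 2x2 zero-sum game in closed form, yielding both the optimal mixed strategy and the game value relative to a scalar objective.  The payoff matrix (a notional attacker's expected value for each attack/defense pairing) is invented for illustration.

```python
def solve_2x2_game(m):
    """Return (row-player mixed strategy, game value) for a 2x2 zero-sum
    game with no saddle point, via the standard closed-form solution."""
    (a, b), (c, d) = m
    denom = (a - b) + (d - c)
    p = (d - c) / denom                 # probability of playing row 1
    value = (a * d - b * c) / denom
    return (p, 1 - p), value

payoffs = [[3.0, -1.0],                 # hypothetical attacker payoffs
           [-2.0, 4.0]]
mix, value = solve_2x2_game(payoffs)
print(mix, value)
```

The optimal mix equalizes the expected payoff against either defense, so the game value falls out of the solution directly; no enumeration of user-specified plans is needed.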


3. Develop KBSs Having Analytical Knowledge Representations


Modern KBSs cannot handle problems that have a high spatial, temporal, or stochastic character.  KBSs are needed that include analytical representations of knowledge, and incorporate analytical frameworks for processing data.  Methods such as Bayesian analysis, embodying prior knowledge and enabling a very efficient processing of new data, should be emphasized.  Some KBSs do embody Bayesian concepts, but much more development is necessary along these lines to provide solutions to complex problems.
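

The efficiency of such an analytical representation can be seen in a toy beta-binomial update.  However many detection trials arrive, the state of knowledge about the detection probability is carried in just two parameters, not in the accumulated raw facts.  The trial outcomes below are invented for illustration.

```python
alpha, beta = 1.0, 1.0             # uniform prior on detection probability

trials = [1, 0, 1, 1, 0, 1, 1, 1]  # 1 = detected, 0 = missed (invented)
for outcome in trials:
    alpha += outcome               # conjugate update: count successes...
    beta += 1 - outcome            # ...and failures

posterior_mean = alpha / (alpha + beta)
print(alpha, beta, posterior_mean)
```

The contrast with a rule-based representation is the point: the Bayesian posterior summarizes all of the data, yet its size never grows.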


While KBSs have proved useful as means of solving problems in deductive inference (deterministic logic, decision-tree problems), the significance of their role in problems of statistical inference, particularly of a multidimensional nature, needs to be reexamined.  The role of KBSs in representing the multiple goals of tactical combat needs to be expanded.


4. Conduct Total Error Analysis


Much more consideration needs to be given to the scope of inference of a study.  The use of a single scenario as a basis for an important procurement decision must be stopped.  Every major analysis should include a "total error analysis" that identifies the likely sources of variation affecting system performance, and estimates their magnitudes.  If the scope of a study is deliberately restricted, e.g., by basing the analysis on a single scenario, a thorough justification should be required.


The expenditure of great amounts of funds to analyze a particular set of circumstances in detail, when the answer may well change significantly in other situations, must not be condoned.  The current procedure of spending millions to analyze a single "tree" (i.e., scenario), while learning little of the "forest" (full range of scenarios) must be stopped.


A formal set of requirements for model validation should be established.

5. Balance Scope and Detail


After identifying the sources of variation that may affect the results, a justification should be required concerning the allocation of effort between broad scope and a high level of detail.  The concentration on detail, at the expense of scope, is a principal factor contributing to the information overload.  Massive amounts of information are being collected, without a clear-cut analysis of the amount of information content associated with the data.  In the absence of a conceptual and analytical framework, it is difficult or impossible to know how much data should be collected, and the best way to process it.  An analytical model can assist decisions concerning how much needs to be collected, and how it should be processed.


6. Improve Linkages between Hierarchical Models


Increased attention needs to be given to the problem of establishing linkages between hierarchical models.  Analysis needs to be conducted to determine what parameters should define the interfaces.  Hierarchical models can play a valuable role if the nature of the interfaces to other parts of the total system is known.  The total error analysis can help define a useful role for hierarchical models.


The problems described above will not be solved easily.  Many of them are very difficult analytical problems that have eluded solution for many years.  Their solution will require a significant commitment of research and development resources.  The problems of strategic combat were solved in the 1960s in part because of a strong commitment to basic research in game theory and optimization.  The nation's commitment to research in these areas waned at the end of the 1960s, and the significant problems of tactical and operational-level warfare remain unsolved.  It is time to consider the significance of these military problems, and to reevaluate the level of resources committed to their solution.


This paper has identified many of the problems facing the field of AI and military modeling, and has shown how the information overload is related to these problems.  While these problems are serious and challenging, their nature is understood.  With a strong commitment, solutions can be achieved.





Biography

Dr. J. George Caldwell is Director of Vista Research Corporation of Sierra Vista, Arizona.  He holds a BS degree in mathematics from Carnegie-Mellon University and a PhD degree in mathematical statistics from the University of North Carolina at Chapel Hill.  In his doctoral dissertation, he developed the best known class of codes for correcting both additive and synchronization errors in noisy communication channels.  His career in military systems analysis has centered on using statistical and optimization techniques to solve problems in ballistic missile defense, ocean surveillance, and test and evaluation of military communication systems.  Particular research interests include game theory, optimization, time series analysis, sample survey, and nontraditional estimation procedures.  Dr. Caldwell has taught statistics at the University of Arizona, and has lived in Arizona since 1981.




References


1. Theater-Level Gaming and Analysis Workshop for Force Planning, SRI International, Office of Naval Research, September 1977 (ADA068177)


2. Honig, John, Chairman, et al., Review of Selected Army Models, Models Review Committee, Department of the Army, May 1971 (AD887175)


3. A Hierarchy of Combat Analysis Models, General Research Corporation, January 1973 (ADA026336)


4. Berg, D. J., and M. E. Strickland, Catalog of War Gaming and Military Simulation Models, 7th edition, Studies, Analysis, and Gaming Agency, Organization of the Joint Chiefs of Staff, August 1977 (ADA050320). (9th edition now available from Joint Analysis Directorate, Office of the Joint Chiefs of Staff)


5. Taylor, J. G., Force-on-Force Attrition Modelling, Military Applications Section, Operations Research Society of America, January 1980


6. Caldwell, J. G. Subtractive Overlapping Island Defense with Imperfect Interceptors, ACDA/ST-166, Lambda Corporation (subsidiary of General Research Corporation) / US Arms Control and Disarmament Agency, 1969 (Secret)


7. Caldwell, J. G. Conflict, Negotiation, and General-Sum Game Theory, Lambda Paper 45, Lambda Corporation, 1970.


8. Caldwell, J. G., and G. E. Pugh, Multiple Resource-Constrained Game Solution, Lambda Corporation, 1972.


9. Artificial Intelligence for Military Applications, edited by Barry G. Silverman and William P. Hutzler, Military Applications Section, Operations Research Society of America, October 1986


10. Shannon, Robert E., "Models and Artificial Intelligence," in Proceedings of the 1987 Winter Simulation Conference, A. Thesen, H. Grant, W. David Kelton, Editors, pp. 16-23.


11. Mettrey, William, "An Assessment of Tools for Building Large Knowledge-Based Systems," AI Magazine, Winter, 1987, pp. 81-89