VERIFICATION AND VALIDATION OF SIMULATION MODELS

Robert G. Sargent
Department of Electrical Engineering and Computer Science
L.C. Smith College of Engineering and Computer Science
Syracuse University
Syracuse, NY 13244, U.S.A.

Proceedings of the 2003 Winter Simulation Conference, S. Chick, P. J. Sánchez, D. Ferrin, and D. J. Morrice, eds.
ABSTRACT
In this paper we discuss verification and validation of simulation models. Four different approaches to deciding model validity are described; two different paradigms that relate verification and validation to the model development process are presented; various validation techniques are defined; conceptual model validity, model verification, operational validity, and data validity are discussed; a way to document results is given; a recommended procedure for model validation is presented; and accreditation is briefly discussed.
1 INTRODUCTION
Simulation models are increasingly being used in problem solving and in decision making. The developers and users of these models, the decision makers using information derived from the results of these models, and the individuals affected by decisions based on such models are all rightly concerned with whether a model and its results are "correct". This concern is addressed through model verification and validation. Model verification is often defined as "ensuring that the computer program of the computerized model and its implementation are correct" and is the definition adopted here. Model validation is usually defined to mean "substantiation that a computerized model within its domain of applicability possesses a satisfactory range of accuracy consistent with the intended application of the model" (Schlesinger et al. 1979) and is the definition used here. A model sometimes becomes accredited through model accreditation. Model accreditation determines if a model satisfies specified model accreditation criteria according to a specified process.
A related topic is model credibility. Model credibility is concerned with developing in (potential) users the confidence they require in order to use a model and in the information derived from that model.
A model should be developed for a specific purpose (or application) and its validity determined with respect to that purpose. If the purpose of a model is to answer a variety of questions, the validity of the model needs to be determined with respect to each question. Numerous sets of experimental conditions are usually required to define the domain of a model's intended applicability. A model may be valid for one set of experimental conditions and invalid in another. A model is considered valid for a set of experimental conditions if the model's accuracy is within its acceptable range, which is the amount of accuracy required for the model's intended purpose. This generally requires that the model's output variables of interest (i.e., the model variables used in answering the questions that the model is being developed to answer) be identified and that their required amount of accuracy be specified. The amount of accuracy required should be specified prior to starting the development of the model or very early in the model development process. If the variables of interest are random variables, then properties and functions of the random variables such as means and variances are usually what is of primary interest and are what is used in determining model validity. Several versions of a model are often developed prior to obtaining a satisfactory valid model. The substantiation that a model is valid, i.e., performing model verification and validation, is generally considered to be a process and is usually part of the model development process.
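For example, with the mean as the property of interest, validity for a set of experimental conditions can be checked objectively by comparing a confidence-interval half-width on the output mean against the specified accuracy. The following minimal Python sketch illustrates the idea; the replication data, the t critical value for 20 replications, and the 0.5-unit accuracy requirement are hypothetical assumptions, not values from this paper.

```python
import statistics
from math import sqrt

def mean_confidence_interval(replication_means, t_crit=2.093):
    """95% CI for the expected value of a model output variable,
    computed from independent replication means (t_crit is the
    t critical value for n-1 degrees of freedom; 2.093 fits n=20)."""
    n = len(replication_means)
    xbar = statistics.mean(replication_means)
    s = statistics.stdev(replication_means)
    half_width = t_crit * s / sqrt(n)
    return xbar, half_width

# Hypothetical data: average time in system (minutes) from 20 runs.
runs = [12.1, 11.8, 12.5, 12.0, 11.6, 12.3, 12.7, 11.9, 12.2, 12.4,
        11.7, 12.6, 12.0, 12.1, 11.5, 12.8, 12.2, 11.9, 12.3, 12.0]
xbar, hw = mean_confidence_interval(runs)
required_accuracy = 0.5  # specified before model development began
print(f"mean = {xbar:.2f} +/- {hw:.2f}; acceptable: {hw <= required_accuracy}")
```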
It is often too costly and time consuming to determine that a model is absolutely valid over the complete domain of its intended applicability. Instead, tests and evaluations are conducted until sufficient confidence is obtained that a model can be considered valid for its intended application (Sargent 1982, 1984). If a test determines that a model does not have sufficient accuracy for a set of experimental conditions, then the model is invalid. However, determining that a model has sufficient accuracy for numerous experimental conditions does not guarantee that a model is valid everywhere in its applicable domain. The relationships of the cost (a similar relationship holds for the amount of time) of performing model validation and of the value of a model to the user as a function of model confidence are shown in Figure 1. The cost of model validation is usually significant, especially when extremely high model confidence is required.
Figure 1: Model Confidence
The remainder of this paper is organized as follows: Section 2 presents the basic approaches used in deciding model validity; Section 3 describes two different paradigms used in verification and validation; Section 4 defines validation techniques; Sections 5, 6, 7, and 8 discuss data validity, conceptual model validity, computerized model verification, and operational validity, respectively; Section 9 describes a way of documenting results; Section 10 gives a recommended validation procedure; Section 11 contains a brief description of accreditation; and Section 12 presents the summary.
2 BASIC APPROACHES
There are four basic approaches for deciding whether a simulation model is valid or invalid. Each of the approaches requires the model development team to conduct verification and validation as part of the model development process, which is discussed below. One approach, and a frequently used one, is for the model development team itself to make the decision as to whether a simulation model is valid. A subjective decision is made based on the results of the various tests and evaluations conducted as part of the model development process. However, it is usually better to use one of the next two approaches, depending on which situation applies.
If the size of the simulation team developing the model is not large, a better approach than the one above is to have the user(s) of the model heavily involved with the model development team in determining the validity of the simulation model. In this approach the focus of who determines the validity of the simulation model should move from the model developers to the model users. Also, this approach aids in model credibility.
Another approach, usually called "independent verification and validation" (IV&V), uses a third (independent) party to decide whether the simulation model is valid. The third party is independent of both the simulation development team(s) and the model sponsor/user(s). This approach should normally be used when developing large-scale simulation models, which usually have one large team or several teams involved in developing the simulation model. This approach is also often used when a large cost is associated with the problem the simulation model is being developed for, and/or to help establish model credibility. In this approach the third party needs to have a thorough understanding of the intended purpose of the simulation model. There are two common ways that IV&V is conducted by the third party. One way is to conduct IV&V concurrently with the development of the simulation model; the other is to conduct IV&V after the simulation model has been developed.
In the concurrent way of conducting IV&V, the model development team(s) receives input from the IV&V team regarding verification and validation as the model is being developed. Thus, the development of a simulation model should not progress beyond each stage of development if the model is not satisfying the verification and validation requirements. It is the author's opinion that this is the better of the two ways. In the other way, where IV&V is conducted after the model has been completely developed, the evaluation performed can range from simply evaluating the verification and validation conducted by the model development team to performing a complete verification and validation effort. Wood (1986) describes experiences over this range of evaluation by a third party on energy models. One conclusion that Wood makes is that performing a complete IV&V effort after the simulation has been completely developed is extremely costly and time consuming for what is obtained. This author's view is that if IV&V is going to be conducted on a completed simulation model, then it is usually best to only evaluate the verification and validation that has already been performed.
The last approach for determining whether a model is valid is to use a scoring model (see, e.g., Balci (1989), Gass (1993), and Gass and Joel (1987)). Scores (or weights) are determined subjectively when conducting various aspects of the validation process and then combined to determine category scores and an overall score for the simulation model. A simulation model is considered valid if its overall and category scores are greater than some passing score(s). This approach is seldom used in practice.
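The mechanics of a scoring model can be illustrated with a minimal Python sketch; the categories, weights, and passing scores below are hypothetical, not drawn from Balci or Gass. Note how the weighted aggregation can let a serious weakness in one category hide behind a passing overall score, which is one of the objections raised next.

```python
# Hypothetical weighted scoring model for validation results.
# Each category score is a subjective 0-100 judgment.
category_scores = {"conceptual model": 90, "data": 85,
                   "verification": 95, "operational": 55}
weights = {"conceptual model": 0.3, "data": 0.2,
           "verification": 0.2, "operational": 0.3}

overall = sum(weights[c] * s for c, s in category_scores.items())
passing_overall, passing_category = 70, 50

valid = overall >= passing_overall and all(
    s >= passing_category for s in category_scores.values())
# overall = 27 + 17 + 19 + 16.5 = 79.5 -> judged "valid" despite the
# weak operational score, illustrating how defects can be masked.
print(f"overall = {overall:.1f}, judged valid: {valid}")
```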
This author does not believe in the use of scoring models for determining validity because (1) a model may receive a passing score and yet have a defect that needs to be corrected, (2) the subjectiveness of this approach tends to be hidden, and thus the approach appears to be objective, (3) the passing scores must be decided in some (usually) subjective way, and (4) the score(s) may cause overconfidence in a model or be used to argue that one model is better than another.
3 PARADIGMS
In this section we present and discuss paradigms that relate verification and validation to the model development process. There are two common ways to view this relationship: one uses a simple view and the other a complex view. Banks et al. (1988) reviewed work using both of these ways and concluded that the simple way more clearly illuminates model verification and validation. We present a paradigm of each way that this author has developed. The paradigm of the simple way is presented first and is this author's preferred paradigm.
Consider the simplified version of the model development process in Figure 2 (Sargent 1981). The problem entity is the system (real or proposed), idea, situation, policy, or phenomena to be modeled; the conceptual model is the mathematical/logical/verbal representation (mimic) of the problem entity developed for a particular study; and the computerized model is the conceptual model implemented on a computer. The conceptual model is developed through an analysis and modeling phase, the computerized model is developed through a computer programming and implementation phase, and inferences about the problem entity are obtained by conducting computer experiments on the computerized model in the experimentation phase.
We now relate model validation and verification to this simplified version of the modeling process. (See Figure 2.) Conceptual model validation is defined as determining that the theories and assumptions underlying the conceptual model are correct and that the model representation of the problem entity is "reasonable" for the intended purpose of the model. Computerized model verification is defined as assuring that the computer programming and implementation of the conceptual model is correct. Operational validation is defined as determining that the model's output behavior has sufficient accuracy for the model's intended purpose over the domain of the model's intended applicability. Data validity is defined as ensuring that the data necessary for model building, model evaluation and testing, and conducting the model experiments to solve the problem are adequate and correct.

In using this paradigm to develop a valid simulation model, several versions of a model are usually developed during the modeling process prior to obtaining a satisfactory valid model. During each model iteration, model verification and validation are performed (Sargent 1984). A variety of (validation) techniques are used, which are given below. No algorithm or procedure exists to select which techniques to use. Some attributes that affect which techniques to use are discussed in Sargent (1984).

A detailed way of relating verification and validation to developing simulation models and system theories is shown in Figure 3. This paradigm shows the processes of developing system theories and simulation models and relates verification and validation to both of these processes.

This paradigm (Sargent 2001b) shows a Real World and a Simulation World. We first discuss the Real World. There exists some system or problem entity in the real world of which an understanding is desired. System theories describe the characteristics of the system (or problem entity) and possibly its behavior (including data). System data and results are obtained by conducting experiments (experimenting) on the system. System theories are developed by abstracting what has been observed from the system and by hypothesizing from the system data and results. If a simulation model of this system exists, then hypothesizing of system theories can also be done from simulation data and results. System theories are validated by performing theory validation. Theory validation involves the comparison of system theories against system data and results over the domain the theory is applicable for to determine if there is agreement. This process requires numerous experiments to be conducted on the real system.

Figure 3: Real World and Simulation World Relationships with Verification and Validation

We now discuss the Simulation World, which shows a (slightly) more complicated model development process than the other paradigm. A simulation model should only be developed for a set of well-defined objectives. The conceptual model is the mathematical/logical/verbal representation (mimic) of the system developed for the objectives of a particular study. The simulation model specification is a written detailed description of the software design and specification for programming and implementing the conceptual model on a particular computer system. The simulation model is the conceptual model running on a computer system such that experiments can be conducted on the model. The simulation model data and results are the data and results from experiments conducted (experimenting) on the simulation model. The conceptual model is developed by modeling the system, where the understanding of the system is contained in the system theories, for the objectives of the simulation study. The simulation model is obtained by implementing the model on the specified computer system, which includes programming the conceptual model whose specifications are contained in the simulation model specification. Inferences about the system are obtained by conducting computer experiments (experimenting) on the simulation model.

Conceptual model validation is defined as determining that the theories and assumptions underlying the conceptual model are consistent with those in the system theories and that the model representation of the system is "reasonable" for the intended purpose of the simulation model. Specification verification is defined as assuring that the software design and the specification for programming and implementing the conceptual model on the specified computer system are satisfactory. Implementation verification is defined as assuring that the simulation model has been implemented according to the simulation model specification. Operational validation is defined as determining that the model's output behavior has sufficient accuracy for the model's intended purpose over the domain of the model's intended applicability.

This paradigm shows the processes for both developing valid system theories and valid simulation models. Both are accomplished through iterative processes. To develop valid system theories, which are usually for a specific purpose, the system is first observed and then abstraction is performed from what has been observed to develop proposed system theories. These theories are tested for correctness by conducting experiments on the system to obtain data and results to compare against the proposed system theories. New proposed system theories may be hypothesized from these data and the comparisons made, and also possibly from abstraction performed on additional system observation, and these new proposed theories will require new experiments to be conducted on the system to obtain data to evaluate their correctness. This process repeats itself until a satisfactory set of validated system theories has been obtained. To develop a valid simulation model, several versions of a model are usually developed prior to obtaining a satisfactory valid simulation model. During every model iteration, model verification and validation are performed. This process is similar to the one for the other paradigm except that more detail is given in this paradigm.
4 VALIDATION TECHNIQUES
This section describes various validation techniques and tests used in model verification and validation. Most of the techniques described here are found in the literature, although some may be described slightly differently. They can be used either subjectively or objectively. By "objectively," we mean using some type of statistical test or mathematical procedure, e.g., hypothesis tests or confidence intervals. A combination of techniques is generally used. These techniques are used for validating and verifying the submodels and the overall model.
Animation: The model's operational behavior is displayed graphically as the model moves through time. For example, the movements of parts through a factory during a simulation run are shown graphically.
Comparison to Other Models: Various results (e.g., outputs) of the simulation model being validated are compared to results of other (valid) models. For example, (1) simple cases of a simulation model are compared to known results of analytic models, and (2) the simulation model is compared to other simulation models that have been validated.
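Case (1) can be made concrete with a short Python sketch: simulate waiting times in an M/M/1 queue via the Lindley recursion and compare the estimate against the known analytic mean wait in queue, Wq = λ/(μ(μ−λ)). The parameter values, run length, and seed below are illustrative assumptions.

```python
import random

def mm1_mean_wait(lam, mu, n_customers, seed):
    """Estimate the mean waiting time in queue of an M/M/1 queue
    using the Lindley recursion W[i+1] = max(0, W[i] + S[i] - A[i+1])."""
    rng = random.Random(seed)
    wait, total = 0.0, 0.0
    for _ in range(n_customers):
        total += wait
        service = rng.expovariate(mu)        # service time S[i]
        interarrival = rng.expovariate(lam)  # next interarrival A[i+1]
        wait = max(0.0, wait + service - interarrival)
    return total / n_customers

lam, mu = 0.5, 1.0
analytic = lam / (mu * (mu - lam))           # Wq = 1.0 for these rates
simulated = mm1_mean_wait(lam, mu, n_customers=200_000, seed=42)
print(f"analytic={analytic:.3f} simulated={simulated:.3f}")
```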
Degenerate Tests: The degeneracy of the model's behavior is tested by appropriate selection of values of the input and internal parameters. For example, does the average number in the queue of a single server continue to increase over time when the arrival rate is larger than the service rate?
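This degenerate test can be sketched with the same toy queue (again with illustrative parameters): when the arrival rate exceeds the service rate, the estimated mean wait should keep growing as the run lengthens, so a mean wait that levels off would signal an error in the model.

```python
import random

def mean_wait(lam, mu, n, seed=1):
    # Lindley-recursion estimate of the mean wait in queue.
    rng = random.Random(seed)
    wait, total = 0.0, 0.0
    for _ in range(n):
        total += wait
        wait = max(0.0, wait + rng.expovariate(mu) - rng.expovariate(lam))
    return total / n

# With lam=1.2 > mu=1.0 the queue is unstable: the mean wait should
# rise roughly linearly with run length rather than settle down.
for n in (10_000, 40_000, 160_000):
    print(n, round(mean_wait(lam=1.2, mu=1.0, n=n), 1))
```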
Event Validity: The "events" of occurrences of the simulation model are compared to those of the real system to determine if they are similar. For example, compare the number of deaths in a fire department simulation.
Extreme Condition Tests: The model structure and output should be plausible for any extreme and unlikely combination of levels of factors in the system. For example, if in-process inventories are zero, production output should be zero.
Face Validity: Asking individuals knowledgeable about the system whether the model and/or its behavior is reasonable. For example, is the logic in the conceptual model correct, and are the model's input-output relationships reasonable?
Historical Data Validation: If historical data exist (or if data are collected on a system for building or testing a model), part of the data is used to build the model and the remaining data are used to determine (test) whether the model behaves as the system does. (This testing is conducted by driving the simulation model with either samples from distributions or traces (Balci and Sargent 1982a, 1982b, 1984b).)
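A minimal sketch of the data split this technique requires (the logged interarrival times and the 70/30 proportion are hypothetical): one portion of the historical record is used to fit the model's input distributions and the remainder is held out for comparing model behavior with system behavior.

```python
import random

# Hypothetical historical interarrival times (minutes) from system logs.
random.seed(7)
history = [random.expovariate(0.8) for _ in range(500)]

split = int(0.7 * len(history))        # 70% to build, 30% to test
build, test = history[:split], history[split:]

# Use the build portion to estimate the model's input distribution...
rate_estimate = 1.0 / (sum(build) / len(build))
# ...and reserve the test portion for comparing model output behavior
# against the system's behavior (operational validation).
print(f"estimated arrival rate: {rate_estimate:.3f} per minute")
print(f"{len(test)} held-out observations reserved for validation")
```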
Historical Methods: The three historical methods of validation are rationalism, empiricism, and positive economics. Rationalism assumes that everyone knows whether the underlying assumptions of a model are true. Logic deductions are used from these assumptions to develop the correct (valid) model. Empiricism requires every assumption and outcome to be empirically validated. Positive economics requires only that the model be able to predict the future and is not concerned with a model's assumptions or structure (causal relationships or mechanisms).
Internal Validity: Several replications (runs) of a stochastic model are made to determine the amount of (internal) stochastic variability in the model. A large amount of variability (lack of consistency) may cause the model's results to be questionable and, if typical of the problem entity, may bring into question the appropriateness of the policy or system being investigated.
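A sketch of this check (the stand-in model and the number of replications are illustrative assumptions): run independent replications under different random number streams and examine the across-replication variability.

```python
import random
import statistics

def one_replication(seed, n=1_000):
    """Stand-in for a stochastic simulation run: returns the mean of a
    noisy output series. A real model would go here."""
    rng = random.Random(seed)
    return statistics.mean(rng.gauss(10.0, 2.0) for _ in range(n))

outputs = [one_replication(seed) for seed in range(20)]  # 20 replications
spread = statistics.stdev(outputs)
print(f"across-replication mean={statistics.mean(outputs):.3f}, "
      f"stdev={spread:.3f}")
# A large stdev relative to the required accuracy flags high internal
# stochastic variability and calls the model's results into question.
```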
Multistage Validation: Naylor and Finger (1967) proposed combining the three historical methods of rationalism, empiricism, and positive economics into a multistage process of validation. This validation method consists of (1) developing the model's assumptions on theory, observations, and general knowledge, (2) validating the model's assumptions where possible by empirically testing them, and (3) comparing (testing) the input-output relationships of the model to the real system.
Operational Graphics: Values of various performance measures, e.g., the number in queue and the percentage of servers busy, are shown graphically as the model runs through time; i.e., the dynamical behaviors of performance indicators are visually displayed as the simulation model runs through time to ensure they are correct.
Parameter Variability - Sensitivity Analysis: This technique consists of changing the values of the input and internal parameters of a model to determine the effect upon the model's behavior or output. The same relationships should occur in the model as in the real system. Those parameters that are sensitive, i.e., cause significant changes in the model's behavior or output, should be made sufficiently accurate prior to using the model. (This may require iterations in model development.)
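A one-factor-at-a-time sweep is the simplest form of this technique. The sketch below reuses the illustrative M/M/1 queue from the comparison example above, with an assumed parameter grid: it varies the arrival rate and observes the effect on the mean wait, which should rise sharply as the arrival rate approaches the service rate, just as in a real queueing system.

```python
import random

def mm1_mean_wait(lam, mu=1.0, n=100_000, seed=3):
    # Lindley-recursion estimate of the mean wait in an M/M/1 queue.
    rng = random.Random(seed)
    wait, total = 0.0, 0.0
    for _ in range(n):
        total += wait
        wait = max(0.0, wait + rng.expovariate(mu) - rng.expovariate(lam))
    return total / n

# One-factor-at-a-time sweep over the arrival rate: the output is
# highly sensitive to lam near the service rate, so lam must be
# estimated accurately before the model is used.
for lam in (0.3, 0.5, 0.7, 0.9):
    print(f"lam={lam}: mean wait = {mm1_mean_wait(lam):.2f}")
```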
Predictive Validation: The model is used to predict (forecast) the system's behavior, and then comparisons are made between the system's behavior and the model's forecast to determine if they are the same. The system data may come from an operational system or be obtained by conducting experiments on the system, e.g., field tests.
Traces: The behavior of different types of specific entities in the model is traced (followed) through the model to determine if the model's logic is correct and if the necessary accuracy is obtained.
Turing Tests: Individuals who are knowledgeable about the operations of the system being modeled are asked if they can discriminate between system and model outputs. (Schruben (1980) contains statistical tests for use with Turing tests.)
5 DATA VALIDITY
We discuss data validity even though it is often not considered to be part of model validation, because it is usually difficult, time consuming, and costly to obtain sufficient, accurate, and appropriate data, and this is often the reason that attempts to validate a model fail. Data are needed for three purposes: for building the conceptual model, for validating the model, and for performing experiments with the validated model. In model validation we are concerned only with data for the first two purposes.
To build a conceptual model we must have sufficient data on the problem entity to develop theories that can be used to build the model, to develop mathematical and logical relationships for use in the model that will allow the model to adequately represent the problem entity for its intended purpose, and to test the model's underlying assumptions. In addition, behavioral data are needed on the problem entity to be used in the operational validity step of comparing the problem entity's behavior with the model's behavior. (Usually, these data are system input/output data.) If behavioral data are not available, high model confidence usually cannot be obtained because sufficient operational validity cannot be achieved.
The concern with data is that appropriate, accurate, and sufficient data are available, and, if any data transformations are made, such as disaggregation, that they are correctly performed. Unfortunately, there is not much that can be done to ensure that the data are correct. The best that can be done is to develop good procedures for collecting and maintaining data, test the collected data using techniques such as internal consistency checks, and screen for outliers and determine if they are correct. If the amount of data is large, a database should be developed and maintained.
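As an illustration of such screening (the service-time records and the 1.5 × IQR outlier rule are assumptions for the example), the sketch below applies an internal consistency check and a simple outlier screen; flagged values still require a human judgment on whether they are errors or genuine observations.

```python
import statistics

# Hypothetical service-time records (minutes) collected from the system.
records = [4.2, 3.9, 5.1, 4.8, -1.0, 4.4, 5.0, 62.0, 4.6, 4.1, 4.7, 4.3]

# Internal consistency check: service times must be positive.
inconsistent = [x for x in records if x <= 0]

# Outlier screen using the 1.5 * IQR rule.
q1, _, q3 = statistics.quantiles(records, n=4)
iqr = q3 - q1
outliers = [x for x in records
            if x < q1 - 1.5 * iqr or x > q3 + 1.5 * iqr]

print("inconsistent:", inconsistent)  # flags -1.0
print("outliers:", outliers)          # flags -1.0 and 62.0
```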
6 CONCEPTUAL MODEL VALIDATION

Conceptual model validity is determining that (1) the theories and assumptions underlying the conceptual model are correct and (2) the model's representation of the problem entity and the model's structure, logic, and mathematical and causal relationships are "reasonable" for the intended purpose of the model. The theories and assumptions underlying the model should be tested using mathematical analysis and statistical methods on problem entity data. Examples of theories and assumptions are linearity, independence of data, and that arrivals are Poisson. Examples of applicable statistical methods are fitting distributions to data, estimating parameter values from the data, and plotting data to determine if the data are stationary. In addition, all theories used should be reviewed to ensure they were applied correctly; for example, if a Markov chain is used, does the system have the Markov property, and are the states and transition probabilities correct?
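For instance, the assumption that arrivals are Poisson can be tested by checking whether the observed interarrival times are exponential. The sketch below, assuming SciPy is available and using hypothetical data, applies a Kolmogorov-Smirnov test; note that testing against a rate estimated from the same data strictly calls for a Lilliefors-type correction, so this is only an approximate screen.

```python
import random
from scipy import stats

# Hypothetical interarrival times recorded from the problem entity.
random.seed(11)
interarrivals = [random.expovariate(2.0) for _ in range(300)]

# Estimate the arrival rate, then test the exponential fit (KS test).
rate = 1.0 / (sum(interarrivals) / len(interarrivals))
result = stats.kstest(interarrivals, "expon", args=(0, 1.0 / rate))
print(f"estimated rate={rate:.2f}, KS p-value={result.pvalue:.3f}")
# A small p-value would cast doubt on the Poisson-arrivals assumption.
```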
Next, every submodel and the overall model must be evaluated to determine if they are reasonable and correct for the intended purpo of the model.  This should include determining if the appropriate detail and aggregate rela-tionships have been ud for the model’s intended purpo, and if appropriate structure, logic, and mathematical and causal relationships have been ud.  The primary valida-tion techniques ud for the evaluations are face valida-tion and traces.  Face validation has experts on the problem entity evaluate the conceptual model to determine if it is correct and reasonable for its purpo.  This usually re-quires examining the flowchart or graphical model, or the t of model equations.  The u of traces is the tracking of entities through each submodel and the overall model to determine if the logic is correct and if the necessary accu-racy is maintained.  If errors are found in the conceptual model, it must be revid and conceptual model validation performed again.
7 COMPUTERIZED MODEL VERIFICATION

Computerized model verification ensures that the computer programming and implementation of the conceptual model are correct. The major factor affecting verification is whether a simulation language or a higher-level programming language such as FORTRAN, C, or C++ is used. The use of a special-purpose simulation language generally will result in fewer errors than if a general-purpose simulation language is used, and using a general-purpose simulation language will generally result in fewer errors than if a general-purpose higher-level programming language is used. (The use of a simulation language also usually reduces both the programming time required and the amount of flexibility.)
When a simulation language is used, verification is primarily concerned with ensuring that an error-free simulation language has been used, that the simulation language has been properly implemented on the computer, that a tested (for correctness) pseudo-random number generator has been properly implemented, and that the model has been programmed correctly in the simulation language. The primary techniques used to determine that the model has been programmed correctly are structured walk-throughs and traces.
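As one small piece of such testing, a pseudo-random number generator's uniformity can be screened with a chi-square test. The sketch below (assuming SciPy; the bin count and sample size are arbitrary choices) bins a generator's output and tests the counts against the uniform distribution; passing this single test is of course far from sufficient evidence of a correct generator.

```python
import random
from scipy import stats

# Chi-square uniformity check on a pseudo-random number generator.
rng = random.Random(2003)
n_samples, n_bins = 100_000, 20
counts = [0] * n_bins
for _ in range(n_samples):
    counts[int(rng.random() * n_bins)] += 1

# Under uniformity every bin expects n_samples / n_bins observations.
result = stats.chisquare(counts)
print(f"chi-square p-value = {result.pvalue:.3f}")
# A very small p-value would indicate the generator's output departs
# from uniformity.
```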
If a higher-level programming language has been used, then the computer program should have been designed, developed, and implemented using techniques found in software engineering. (These include such techniques as object-oriented design, structured programming, and program modularity.) In this case verification is primarily concerned with determining that the simulation functions (e.g., the time-flow mechanism, pseudo-random number generator, and random variate generators) and the computer model have been programmed and implemented correctly.
There are two basic approaches for testing simulation software: static testing and dynamic testing (Fairley 1976). In static testing the computer program is analyzed to determine if it is correct by using such techniques as structured walk-throughs.
