Research on Markov Models and Processes

March 1, 1997
In the wake of a changing defense climate, the Navy is continuing to find ways to adjust to its smaller size while maintaining its ability to respond when required. An important part of that strategy is to monitor readiness during the downsizing process. The first step toward managing readiness is to understand what readiness is and why it changes over time or among units. This paper contributes to a further understanding of readiness by identifying the relationship between standard readiness measures and their determinants for Navy fighter, attack, and fighter/attack aircraft. The analysis extends our earlier work on explaining the readiness of surface combatants. Our objective was to build a comprehensive database of Navy fighter and attack units over time and to identify readiness trends and, where they exist, relationships between readiness determinants and readiness measures.
Read More | Download Report
August 1, 1985
To make a drug-testing program successful and to minimize the cost of the program, the minimum number of tests that must be given in a specified period to identify a fixed percentage of drug users must be determined. This memorandum presents a Markov model that can be used to determine the number of tests that should be given. In addition, three applications of the model, showing how it can be used to analyze the drug-user population, are presented.
Read More | Download Report
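The memorandum's actual model is not reproduced here, but the idea can be illustrated with a minimal two-state absorbing Markov chain, assuming each drug user is tested independently with probability p in each testing period and a tested user is always identified:

```python
# Hypothetical sketch (not the memorandum's model): state 0 = user not yet
# identified, state 1 = identified (absorbing). Each period the user is
# tested with probability p, and a positive test moves the user to state 1.

def detection_probability(p: float, n: int) -> float:
    """Probability a user is identified within n testing periods."""
    undetected = 1.0
    for _ in range(n):
        undetected *= (1.0 - p)  # survives one more period untested
    return 1.0 - undetected

def periods_needed(p: float, target: float) -> int:
    """Smallest number of periods that identifies the target fraction of users."""
    n, prob = 0, 0.0
    while prob < target:
        n += 1
        prob = detection_probability(p, n)
    return n

# Example: testing half the population each period, identifying 90% of
# users takes four periods.
print(periods_needed(0.5, 0.9))
```

The same recursion generalizes to more states (e.g. separate states for occasional and frequent users), which is where the Markov formulation pays off.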
June 1, 1982
This paper derives the method for aggregating conditional absorbing Markov chains into a single chain that is representative of the total process and has the same state space as the conditional chains.
Read More | Download Report
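The paper's derivation is not reproduced here, but one simple way to picture the aggregation is as a probability-weighted mixture of the conditional transition matrices, assuming the conditional chains share a common state space and each condition occurs with a known probability:

```python
# Hypothetical illustration (not the paper's derivation): combine two
# conditional absorbing chains on the same state space into one aggregate
# chain by weighting each transition matrix by the probability of its
# condition.

def aggregate(chains, weights):
    """Weighted average of transition matrices, each given as a list of rows."""
    n = len(chains[0])
    return [
        [sum(w * P[i][j] for P, w in zip(chains, weights)) for j in range(n)]
        for i in range(n)
    ]

# Two 2-state chains, state 1 absorbing in both, conditions equally likely.
P1 = [[0.5, 0.5], [0.0, 1.0]]
P2 = [[0.9, 0.1], [0.0, 1.0]]
combined = aggregate([P1, P2], [0.5, 0.5])
```

Note that the aggregate matrix keeps the same state space and remains a valid stochastic matrix, since each row is a convex combination of stochastic rows.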
March 1, 1982
This paper derives a probability model which analyzes multiple spell data by taking into account both the probability of changing states and the length of time an individual remains in each state.
Read More | Download Report
February 1, 1981
This paper examines a class of Markov matrices which arise in a simple model of a defense system. The model illustrates a Markov chain which is not time-homogeneous but is still amenable to analytic treatment.
Read More
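The defining feature of a non-time-homogeneous chain is that the transition matrix changes from step to step, so the state distribution after n steps is obtained by applying each step's matrix in turn rather than a single matrix power. A minimal sketch (the matrices here are illustrative, not the paper's):

```python
# Sketch of propagating a state distribution through a chain whose
# transition matrix varies with time: apply P_1, then P_2, and so on.

def propagate(dist, matrices):
    """Push an initial distribution through a sequence of transition matrices."""
    for P in matrices:
        dist = [
            sum(dist[i] * P[i][j] for i in range(len(dist)))
            for j in range(len(P[0]))
        ]
    return dist

# Illustrative two-state chain whose absorption probability grows each step.
steps = [[[0.9, 0.1], [0.0, 1.0]],
         [[0.8, 0.2], [0.0, 1.0]]]
final = propagate([1.0, 0.0], steps)
```

When the per-step matrices have enough structure (as in the defense model the paper studies), the product can often be evaluated in closed form rather than numerically.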
February 1, 1981
This paper calculates the distribution of the number of survivors of a set number of attacks with given parameters, for various missile-allocation situations, and the expected number of missiles fired. The emphasis is on eliminating the complexity arising from a large number of missiles attacking simultaneously. Computer programs for these calculations are presented in Appendix A.
Read More | Download Report
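The paper's missile-allocation machinery is not reproduced here, but the simplest special case shows the shape of the survivor-distribution calculation: if each of N targets faces one attack that kills independently with probability pk, the number of survivors is binomial. This is an illustrative baseline only, not the paper's model:

```python
from math import comb

def survivor_distribution(targets: int, pk: float):
    """P(exactly k of `targets` survive), one independent attack per target."""
    return [
        comb(targets, k) * (1.0 - pk) ** k * pk ** (targets - k)
        for k in range(targets + 1)
    ]

# Two targets, 50% kill probability each: survivors 0, 1, 2 with
# probabilities 0.25, 0.5, 0.25.
dist = survivor_distribution(2, 0.5)
```

The complexity the paper addresses comes from dropping these independence assumptions, for example when many missiles arrive simultaneously and must be allocated among targets.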
February 1, 1981
This paper presents a mathematical model of regime change in Latin America. The model is a finite Markov chain with stationary transition probabilities.
Read More | Download Report
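For a finite chain with stationary transition probabilities, the quantity of interest is often the long-run distribution over regimes. A minimal sketch of computing it by repeated application of the transition matrix (the matrix below is illustrative, not the paper's estimates):

```python
# Sketch: approximate the stationary distribution of a finite Markov chain
# with fixed transition probabilities by iterating the transition matrix.

def long_run_distribution(P, steps=1000):
    """Power-iterate a uniform start toward the stationary distribution."""
    n = len(P)
    dist = [1.0 / n] * n
    for _ in range(steps):
        dist = [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]
    return dist

# Illustrative two-regime chain: regime 0 is sticky, regime 1 is not.
P = [[0.9, 0.1],
     [0.5, 0.5]]
pi = long_run_distribution(P)
```

For an irreducible aperiodic chain this iteration converges to the unique stationary distribution; it can also be found exactly by solving the linear system pi = pi P.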
December 1, 1978
This study describes two stochastic models for evaluating air combat maneuvering (ACM) engagements. The Maneuver Conversion Model is applicable to engagements where a successful outcome is determined primarily by the maneuvering effectiveness of the combatants. In this model, the events of air-to-air engagements are assumed to behave as a semi-Markov process with various absorbing states. The Firing Sequence Model is intended for analysis of engagements where a successful outcome depends on aircrew ability to capitalize on weapon performance. This model also assumes a Markov process, but analyzes test-range data as tabulations of weapon-firing incidents for each engagement. Common measures of effectiveness, such as the probability of achieving first weapon-firing opportunity and the expected exchange ratio, may be used in both models to estimate ACM performance. Volume I presents the analytic methodologies for both models and illustrates the Maneuver Conversion Model methodology with data collected under CNO Project P/V2 (Battle Cry).
Read More | Download Report
August 1, 1974
Gun system operation is represented as a first-order Markov process, and an optimum linear filter is derived for closed-loop control of mean square error. Potential improvement is then estimated by contrasting the variance in performance and the auto-correlation for the open-loop system with those for the optimum linearly corrected process.
Read More | Download Report
June 1, 1974
This paper develops duel models for the situation in which the outcomes form a finite stationary Markov chain and both weapons have an unlimited supply of ammunition, fire at constant intervals of time, and duel until one is killed.
Read More | Download Report
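The paper's models are not reproduced here, but the simplest duel of this kind has a closed form. Assuming strictly alternating fire with A shooting first, kill probabilities pa and pb per shot, and the duel ending at the first kill, summing the geometric series over full rounds gives A's win probability:

```python
# Hypothetical sketch (a simpler special case than the paper's models):
# alternating-fire duel, A fires first. A wins on round k if both duelists
# miss for k-1 rounds and then A's shot kills, so
#   P(A wins) = pa * sum_k [(1-pa)(1-pb)]^(k-1) = pa / (1 - (1-pa)(1-pb)).

def prob_a_wins(pa: float, pb: float) -> float:
    """Probability that A kills B before B kills A, A firing first."""
    return pa / (1.0 - (1.0 - pa) * (1.0 - pb))

# Equal marksmanship, pa = pb = 0.5: firing first gives A a 2/3 chance.
edge = prob_a_wins(0.5, 0.5)
```

The Markov-chain formulation generalizes this to simultaneous fire, more outcome states, and time-varying hit probabilities, where no such simple closed form exists.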