Annotation of imach/html/doc/imach.htm, revision 1.2

1.2     ! brouard     1: <!-- $Id: imach.htm,v 1.1 2004/06/16 12:05:30 brouard Exp $ -->
1.1       brouard     2: <html>
                      3: 
                      4: <head>
                      5: <meta http-equiv="Content-Type"
                      6: content="text/html; charset=iso-8859-1">
                      7: <title>Computing Health Expectancies using IMaCh</title>
                     14: </head>
                     15: 
                     16: <body bgcolor="#FFFFFF">
                     17: 
                     18: <hr size="3" color="#EC5E5E">
                     19: 
                     20: <h1 align="center"><font color="#00006A">Computing Health
                     21: Expectancies using IMaCh</font></h1>
                     22: 
                     23: <h1 align="center"><font color="#00006A" size="5">(a Maximum
                     24: Likelihood Computer Program using Interpolation of Markov Chains)</font></h1>
                     25: 
                     26: <p align="center">&nbsp;</p>
                     27: 
                     28: <p align="center"><a href="http://www.ined.fr/"><img
                     29: src="logo-ined.gif" border="0" width="151" height="76"></a><img
                     30: src="euroreves2.gif" width="151" height="75"></p>
                     31: 
                     32: <h3 align="center"><a href="http://www.ined.fr/"><font
                     33: color="#00006A">INED</font></a><font color="#00006A"> and </font><a
                     34: href="http://euroreves.ined.fr"><font color="#00006A">EUROREVES</font></a></h3>
                     35: 
                     36: <p align="center"><font color="#00006A" size="4"><strong>Version
1.2     ! brouard    37: 0.97, June 2004</strong></font></p>
1.1       brouard    38: 
                     39: <hr size="3" color="#EC5E5E">
                     40: 
                     41: <p align="center"><font color="#00006A"><strong>Authors of the
                     42: program: </strong></font><a href="http://sauvy.ined.fr/brouard"><font
                     43: color="#00006A"><strong>Nicolas Brouard</strong></font></a><font
                     44: color="#00006A"><strong>, senior researcher at the </strong></font><a
                     45: href="http://www.ined.fr"><font color="#00006A"><strong>Institut
                     46: National d'Etudes Démographiques</strong></font></a><font
                     47: color="#00006A"><strong> (INED, Paris) in the &quot;Mortality,
                     48: Health and Epidemiology&quot; Research Unit </strong></font></p>
                     49: 
                     50: <p align="center"><font color="#00006A"><strong>and Agnès
                     51: Lièvre<br clear="left">
                     52: </strong></font></p>
                     53: 
                     54: <h4><font color="#00006A">Contribution to the mathematics: C. R.
                     55: Heathcote </font><font color="#00006A" size="2">(Australian
                     56: National University, Canberra).</font></h4>
                     57: 
                     58: <h4><font color="#00006A">Contact: Agnès Lièvre (</font><a
                     59: href="mailto:lievre@ined.fr"><font color="#00006A"><i>lievre@ined.fr</i></font></a><font
                     60: color="#00006A">) </font></h4>
                     61: 
                     62: <hr>
                     63: 
                     64: <ul>
                     65:     <li><a href="#intro">Introduction</a> </li>
                     66:     <li><a href="#data">On what kind of data can it be used?</a></li>
                     67:     <li><a href="#datafile">The data file</a> </li>
                     68:     <li><a href="#biaspar">The parameter file</a> </li>
                      69:     <li><a href="#running">Running IMaCh</a> </li>
                     70:     <li><a href="#output">Output files and graphs</a> </li>
                      71:     <li><a href="#example">Example</a> </li>
                     72: </ul>
                     73: 
                     74: <hr>
                     75: 
                     76: <h2><a name="intro"><font color="#00006A">Introduction</font></a></h2>
                     77: 
                     78: <p>This program computes <b>Healthy Life Expectancies</b> from <b>cross-longitudinal
                     79: data</b> using the methodology pioneered by Laditka and Wolf (1).
                     80: Within the family of Health Expectancies (HE), Disability-free
                     81: life expectancy (DFLE) is probably the most important index to
                     82: monitor. In low mortality countries, there is a fear that when
                     83: mortality declines, the increase in DFLE is not proportionate to
                     84: the increase in total Life expectancy. This case is called the <em>Expansion
                     85: of morbidity</em>. Most of the data collected today, in
                     86: particular by the international <a href="http://www.reves.org">REVES</a>
                     87: network on Health expectancy, and most HE indices based on these
                      88: data, are <em>cross-sectional</em>. This means that the information
                      89: collected comes from a single cross-sectional survey: people of
                      90: various ages (but mostly old people) are surveyed on their health
                      91: status at a single date. The proportion of people disabled at each
                      92: age can then be measured at that date. This age-specific
                     93: prevalence curve is then used to distinguish, within the
                     94: stationary population (which, by definition, is the life table
                     95: estimated from the vital statistics on mortality at the same
                      96: date), the disabled population from the disability-free
                     97: population. Life expectancy (LE) (or total population divided by
                     98: the yearly number of births or deaths of this stationary
                     99: population) is then decomposed into DFLE and DLE. This method of
                    100: computing HE is usually called the Sullivan method (from the name
                    101: of the author who first described it).</p>
                    102: 
1.2     ! brouard   103: <p>Age-specific proportions of people disabled (prevalence of
        !           104: disability) depend on the past history of entries into
        !           105: disability and recoveries up until today. The age-specific
        !           106: forces (or incidence rates), estimated over a recent period of time
        !           107: (like period forces of mortality), of entering disability or
        !           108: recovering good health, reflect current conditions and
        !           109: therefore can be used at each age to forecast the future of this
        !           110: cohort <em>if nothing changes in the future</em>, i.e. to forecast the
        !           111: prevalence of disability of each cohort. Our finding (2) is that the period
        !           112: prevalence of disability (computed from period incidences) is lower
        !           113: than the cross-sectional prevalence. For example if a country is
        !           114: improving its prosthetic technology, the incidence of recovering
        !           115: the ability to walk will be higher at each (old) age, but the
        !           116: prevalence of disability will only slightly reflect this improvement because
        !           117: the prevalence is mostly affected by the history of the cohort and not
        !           118: by recent period effects. To measure the period improvement we have to
        !           119: simulate the future of a cohort of new-borns entering or leaving at
        !           120: each age the disability state or dying according to the incidence
        !           121: rates measured today on different cohorts. The proportion of people
        !           122: disabled at each age in this simulated cohort will be much lower than
        !           123: the proportions observed at each age in a cross-sectional survey. This
        !           124: new prevalence curve introduced in a life table will give a more
        !           125: realistic HE level than the Sullivan method, which mostly measures the
        !           126: history of health conditions in this country.</p>
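<p>To illustrate this simulation of a cohort, here is a minimal, self-contained
sketch (it is <em>not</em> part of IMaCh): the age-specific transition
probabilities below are purely hypothetical place-holders; the loop simply applies
them year after year to a cohort starting healthy and prints the implied period
prevalence of disability.</p>

<pre>
#include &lt;stdio.h&gt;

/* hypothetical annual transition probabilities (place-holders, not estimates) */
static double p12(double age) { return 0.010 + 0.0010*(age - 60); } /* healthy to disabled  */
static double p21(double age) { return 0.100; }                     /* disabled to healthy  */
static double p13(double age) { return 0.005 + 0.0008*(age - 60); } /* healthy to dead      */
static double p23(double age) { return 0.020 + 0.0020*(age - 60); } /* disabled to dead     */

int main(void) {
  double healthy = 1.0, disabled = 0.0;       /* a cohort starting healthy at age 60 */
  for (double age = 60.0; age <= 100.0; age += 1.0) {
    double alive = healthy + disabled;
    printf("age %3.0f  period prevalence of disability %.3f\n",
           age, alive > 0.0 ? disabled/alive : 0.0);
    double h = healthy*(1.0 - p12(age) - p13(age)) + disabled*p21(age);
    double d = disabled*(1.0 - p21(age) - p23(age)) + healthy*p12(age);
    healthy = h; disabled = d;
  }
  return 0;
}
</pre>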
1.1       brouard   127: 
                     128: <p>Therefore, the main question is how to measure incidence rates
                     129: from cross-longitudinal surveys. This is the goal of the IMaCh
                     130: program. From your data and using IMaCh you can estimate period
                     131: HE and not only Sullivan's HE. The standard errors of the HE
                     132: are also computed.</p>
                    133: 
                     134: <p>A cross-longitudinal survey consists of a first survey
                     135: (&quot;cross&quot;) where individuals of different ages are
                     136: interviewed on their health status or degree of disability. At
                     137: least a second wave of interviews (&quot;longitudinal&quot;)
                     138: should measure each individual's new health status. Health
                    139: expectancies are computed from the transitions observed between
                    140: waves and are computed for each degree of severity of disability
                     141: (number of life states). The more degrees you consider, the more time is
                     142: necessary to reach the Maximum Likelihood of the parameters
                     143: involved in the model. Considering only two states of disability
                     144: (disabled and healthy) is generally enough but the computer
                     145: program also works with more health statuses.<br>
                    146: <br>
                    147: The simplest model is the multinomial logistic model where <i>pij</i>
                    148: is the probability to be observed in state <i>j</i> at the second
                    149: wave conditional to be observed in state <em>i</em> at the first
                    150: wave. Therefore a simple model is: log<em>(pij/pii)= aij +
                    151: bij*age+ cij*sex,</em> where '<i>age</i>' is age and '<i>sex</i>'
                     152: is a covariate. The advantage claimed by this computer program
                     153: is that if the delay between waves is not identical for
                     154: each individual, or if some individuals missed an interview, the
                     155: information is not rounded or lost, but taken into account using
                     156: an interpolation or extrapolation. <i>hPijx</i> is the
                     157: probability to be observed in state <i>j</i> at age <i>x+h</i>
                     158: conditional on the observed state <i>i</i> at age <i>x</i>. The
                     159: delay '<i>h</i>' can be split into an exact number (<i>nh</i>) of
                     160: unobserved intermediate steps of <i>stepm</i> months each. This elementary transition (by
                     161: month, quarter, semester or year) is modeled as a
                     162: multinomial logistic. The <i>hPx</i> matrix is simply the matrix
                     163: product of <i>nh</i> elementary matrices and the
                    164: contribution of each individual to the likelihood is simply <i>hPijx</i>.
                    165: <br>
                    166: </p>
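<p>A minimal sketch of these two ingredients (not IMaCh's actual code; the parameter
values and state coding below are illustrative assumptions): an elementary transition
matrix built from the multinomial logistic model log(pij/pii) = aij + bij*age, and
the <i>h</i>-step matrix <i>hPx</i> obtained as the product of the elementary
matrices along the path.</p>

<pre>
#include &lt;math.h&gt;
#include &lt;stdio.h&gt;

#define NS 3                     /* states 0: healthy, 1: disabled, 2: dead (absorbing) */

/* one elementary transition matrix p at a given age, from the multinomial logistic model */
void elementary(double p[NS][NS], double age, double a[NS][NS], double b[NS][NS]) {
  for (int i = 0; i < NS-1; i++) {                 /* alive states only */
    double denom = 1.0;
    for (int j = 0; j < NS; j++)
      if (j != i) denom += exp(a[i][j] + b[i][j]*age);
    for (int j = 0; j < NS; j++)
      p[i][j] = (j == i) ? 1.0/denom : exp(a[i][j] + b[i][j]*age)/denom;
  }
  for (int j = 0; j < NS; j++) p[NS-1][j] = (j == NS-1) ? 1.0 : 0.0;  /* death is absorbing */
}

void matmul(double c[NS][NS], double x[NS][NS], double y[NS][NS]) {
  for (int i = 0; i < NS; i++)
    for (int j = 0; j < NS; j++) {
      c[i][j] = 0.0;
      for (int k = 0; k < NS; k++) c[i][j] += x[i][k]*y[k][j];
    }
}

int main(void) {
  /* illustrative parameters aij, bij (not estimates from any real data set) */
  double a[NS][NS] = {{0, -14.2, -7.9}, {-1.9, 0, -6.2}, {0, 0, 0}};
  double b[NS][NS] = {{0, 0.11, 0.032}, {-0.029, 0, 0.022}, {0, 0, 0}};
  int stepm = 1, nh = 24;                          /* h = nh*stepm months */
  double hP[NS][NS] = {{1,0,0},{0,1,0},{0,0,1}};   /* start from the identity matrix */
  for (int step = 0; step < nh; step++) {
    double e[NS][NS], tmp[NS][NS];
    elementary(e, 70.0 + step*stepm/12.0, a, b);   /* age increases along the path */
    matmul(tmp, hP, e);
    for (int i = 0; i < NS; i++) for (int j = 0; j < NS; j++) hP[i][j] = tmp[i][j];
  }
  printf("P(healthy at 70, disabled at 72) = %.4f\n", hP[0][1]);
  return 0;
}
</pre>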
                    167: 
                    168: <p>The program presented in this manual is a quite general
                    169: program named <strong>IMaCh</strong> (for <strong>I</strong>nterpolated
                    170: <strong>MA</strong>rkov <strong>CH</strong>ain), designed to
                     171: analyse transition data from longitudinal surveys. The first step
                     172: is the estimation of the parameters of a model of transition probabilities
                     173: between an initial status and a final status. From there, the
                    174: computer program produces some indicators such as observed and
                    175: stationary prevalence, life expectancies and their variances and
                     176: graphs. Our transition model consists of absorbing and
                     177: non-absorbing states with the possibility of return across the
                     178: non-absorbing states. The main advantage of this package,
                     179: compared to other programs for the analysis of transition data
                     180: (for example, Proc Catmod of SAS<sup>®</sup>), is that the whole
                    181: individual information is used even if an interview is missing, a
                    182: status or a date is unknown or when the delay between waves is
                    183: not identical for each individual. The program can be executed
                    184: according to parameters: selection of a sub-sample, number of
                     185: absorbing and non-absorbing states, number of waves taken into
                     186: account (the user inputs the first and the last interview), a
                    187: tolerance level for the maximization function, the periodicity of
                    188: the transitions (we can compute annual, quarterly or monthly
                    189: transitions), covariates in the model. It works on Windows or on
                    190: Unix.<br>
                    191: </p>
                    192: 
                    193: <hr>
                    194: 
                    195: <p>(1) Laditka, Sarah B. and Wolf, Douglas A. (1998), &quot;New
                    196: Methods for Analyzing Active Life Expectancy&quot;. <i>Journal of
                    197: Aging and Health</i>. Vol 10, No. 2. </p>
1.2     ! brouard   198: <p>(2) <a href="http://taylorandfrancis.metapress.com/app/home/contribution.asp?wasp=1f99bwtvmk5yrb7hlhw3&referrer=parent&backto=issue,1,2;journal,2,5;linkingpublicationresults,1:300265,1"
        !           199: >Lièvre A., Brouard N. and Heathcote Ch. (2003), &quot;Estimating Health Expectancies
        !           200: from Cross-longitudinal Surveys&quot;. <em>Mathematical Population Studies</em>, 10(4), pp. 211-248.</a></p>
1.1       brouard   201: 
                    202: <hr>
                    203: 
                    204: <h2><a name="data"><font color="#00006A">On what kind of data can
                    205: it be used?</font></a></h2>
                    206: 
                    207: <p>The minimum data required for a transition model is the
                    208: recording of a set of individuals interviewed at a first date and
                     209: interviewed again at least one other time. From the
                     210: observations of an individual, we obtain a follow-up over time of
                     211: the occurrence of a specific event. In this documentation, the
                     212: event is related to health status at older ages, but the program
                     213: can be applied to many longitudinal studies in different
                     214: contexts. To build the data file described in the next section,
                     215: you must have the month and year of each interview and the
                     216: corresponding health status. But in order to get age, the date of
                     217: birth (month and year) is required (a missing value is allowed for
                     218: the month). The date of death (month and year) is an important piece of
                     219: information, also required if the individual died. Shorter
                     220: steps (e.g. a month) will more closely take into account the
                     221: survival time after the last interview.</p>
                    222: 
                    223: <hr>
                    224: 
                    225: <h2><a name="datafile"><font color="#00006A">The data file</font></a></h2>
                    226: 
                    227: <p>In this example, 8,000 people have been interviewed in a
1.2     ! brouard   228: cross-longitudinal survey of 4 waves (1984, 1986, 1988, 1990).  Some
        !           229: people missed 1, 2 or 3 interviews. Health statuses are healthy (1)
        !           230: and disabled (2). The survey is not a real one. It is a simulation of
        !           231: the American Longitudinal Survey on Aging (LSOA). The disability state is
        !           232: defined as failing at least one of four ADLs (Activities of Daily
        !           233: Living, like bathing, eating, walking).  Therefore, even if the
        !           234: individuals interviewed in the sample are virtual, the information
        !           235: brought with this sample is close to the situation of the United
        !           236: States. Sex is not recorded in this sample. The LSOA survey is biased
        !           237: in the sense that people living in an institution were not surveyed at
        !           238: first pass in 1984. Thus the prevalence of disability in 1984 is
        !           239: biased downwards at old ages. But when people left their household for
        !           240: an institution, they were surveyed in that institution in 1986,
        !           241: 1988 or 1990. Thus incidences are not biased. But cross-sectional
        !           242: prevalences of disability at old ages are artificially increased
        !           243: in 1986, 1988 and 1990 because of the higher weight of
        !           244: institutionalized people in the sample. Our article shows the
        !           245: opposite: the period prevalence is lower at old ages than the
        !           246: adjusted cross-sectional prevalence, proving important current progress
        !           247: against disability.</p>
1.1       brouard   248: 
                    249: <p>Each line of the data set (named <a href="data1.txt">data1.txt</a>
1.2     ! brouard   250: in this first example) is an individual record. Fields are separated
        !           251: by blanks: </p>
1.1       brouard   252: 
                    253: <ul>
                    254:     <li><b>Index number</b>: positive number (field 1) </li>
                     255:     <li><b>First covariate</b>: positive number (field 2) </li>
                     256:     <li><b>Second covariate</b>: positive number (field 3) </li>
                    257:     <li><a name="Weight"><b>Weight</b></a>: positive number
                     258:         (field 4). In most surveys individuals are weighted
                    259:         according to the stratification of the sample.</li>
                    260:     <li><b>Date of birth</b>: coded as mm/yyyy. Missing dates are
                    261:         coded as 99/9999 (field 5) </li>
                    262:     <li><b>Date of death</b>: coded as mm/yyyy. Missing dates are
                    263:         coded as 99/9999 (field 6) </li>
                    264:     <li><b>Date of first interview</b>: coded as mm/yyyy. Missing
                    265:         dates are coded as 99/9999 (field 7) </li>
                    266:     <li><b>Status at first interview</b>: positive number.
                     267:         Missing values are coded -1. (field 8) </li>
                    268:     <li><b>Date of second interview</b>: coded as mm/yyyy.
                    269:         Missing dates are coded as 99/9999 (field 9) </li>
                     270:     <li><strong>Status at second interview</strong>: positive
                     271:         number. Missing values are coded -1. (field 10) </li>
                    272:     <li><b>Date of third interview</b>: coded as mm/yyyy. Missing
                    273:         dates are coded as 99/9999 (field 11) </li>
                     274:     <li><strong>Status at third interview</strong>: positive
                     275:         number. Missing values are coded -1. (field 12) </li>
                    276:     <li><b>Date of fourth interview</b>: coded as mm/yyyy.
                    277:         Missing dates are coded as 99/9999 (field 13) </li>
                     278:     <li><strong>Status at fourth interview</strong>: positive
                    279:         number. Missing values are coded -1. (field 14) </li>
                    280:     <li>etc</li>
                    281: </ul>
                    282: 
                    283: <p>&nbsp;</p>
                    284: 
                     285: <p>If your longitudinal survey does not include information about
                    286: weights or covariates, you must fill the column with a number
                    287: (e.g. 1) because a missing field is not allowed.</p>
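<p>For illustration, a hypothetical record (it is not taken from <a href="data1.txt">data1.txt</a>)
with two covariates, a weight, a known birth date, no death, and a missed fourth
interview could look like this:</p>

<pre>
 1  0  1  1.0  06/1920  99/9999  04/1984 1  05/1986 1  04/1988 2  99/9999 -1
</pre>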
                    288: 
                    289: <hr>
                    290: 
                    291: <h2><font color="#00006A">Your first example parameter file</font><a
                    292: href="http://euroreves.ined.fr/imach"></a><a name="uio"></a></h2>
                    293: 
1.2     ! brouard   294: <h2><a name="biaspar"></a>#Imach version 0.97b, June 2004,
1.1       brouard   295: INED-EUROREVES </h2>
                    296: 
1.2     ! brouard   297: <p>This first line was a comment. Comment lines start with a '#'.</p>
1.1       brouard   298: 
                    299: <h4><font color="#FF0000">First uncommented line</font></h4>
                    300: 
                    301: <pre>title=1st_example datafile=data1.txt lastobs=8600 firstpass=1 lastpass=4</pre>
                    302: 
                    303: <ul>
                     304:     <li><b>title=</b> 1st_example is the title of the run. </li>
                    305:     <li><b>datafile=</b> data1.txt is the name of the data set.
                     306:         Our example is a six-year follow-up survey. It consists
                     307:         of a baseline followed by 3 reinterviews. </li>
                    308:     <li><b>lastobs=</b> 8600 the program is able to run on a
                     309:         subsample where the last observation number is lastobs.
                     310:         It can be set to a bigger number than the real number of
                     311:         observations (e.g. 100000). In this example, the maximisation
                     312:         will be done on the first 8600 records. </li>
                    313:     <li><b>firstpass=1</b> , <b>lastpass=4 </b>In case of more
                    314:         than two interviews in the survey, the program can be run
                     315:         on selected transition periods. firstpass=1 means the
                    316:         first interview included in the calculation is the
                    317:         baseline survey. lastpass=4 means that the information
                    318:         brought by the 4th interview is taken into account.</li>
                    319: </ul>
                    320: 
                    321: <p>&nbsp;</p>
                    322: 
                    323: <h4><a name="biaspar-2"><font color="#FF0000">Second uncommented
                    324: line</font></a></h4>
                    325: 
                    326: <pre>ftol=1.e-08 stepm=1 ncovcol=2 nlstate=2 ndeath=1 maxwav=4 mle=1 weight=0</pre>
                    327: 
                    328: <ul>
                    329:     <li><b>ftol=1e-8</b> Convergence tolerance on the function
                    330:         value in the maximisation of the likelihood. Choosing a
                    331:         correct value for ftol is difficult. 1e-8 is a correct
                     332:         value for a 32-bit computer.</li>
                    333:     <li><b>stepm=1</b> Time unit in months for interpolation.
                    334:         Examples:<ul>
                    335:             <li>If stepm=1, the unit is a month </li>
                     336:             <li>If stepm=4, the unit is four months</li>
                    337:             <li>If stepm=12, the unit is a year </li>
                    338:             <li>If stepm=24, the unit is two years</li>
                    339:             <li>... </li>
                    340:         </ul>
                    341:     </li>
1.2     ! brouard   342:     <li><b>ncovcol=2</b> Number of covariate columns included in the
        !           343:         datafile before the column of the date of birth. You can have
        !           344: covariates that won't necessarily be used during the
1.1       brouard   345:         run. It is not the number of covariates that will be
1.2     ! brouard   346:         specified by the model. The 'model' syntax describes the
        !           347:         covariates to be taken into account during the run. </li>
1.1       brouard   348:     <li><b>nlstate=2</b> Number of non-absorbing (alive) states.
                    349:         Here we have two alive states: disability-free is coded 1
                    350:         and disability is coded 2. </li>
                    351:     <li><b>ndeath=1</b> Number of absorbing states. The absorbing
                    352:         state death is coded 3. </li>
                    353:     <li><b>maxwav=4</b> Number of waves in the datafile.</li>
                    354:     <li><a name="mle"><b>mle</b></a><b>=1</b> Option for the
                    355:         Maximisation Likelihood Estimation. <ul>
                    356:             <li>If mle=1 the program does the maximisation and
                    357:                 the calculation of health expectancies </li>
                    358:             <li>If mle=0 the program only does the calculation of
1.2     ! brouard   359:                 the health expectancies and other indices and graphs
        !           360: but without the maximisation. </li>
        !           361:                There are also other possible values:
        !           362:           <ul>
        !           363:             <li>If mle=-1 you get a template which can be useful if
        !           364: your model is complex with many covariates.</li>
        !           365:             <li> If mle=-3 IMaCh computes the mortality but without
        !           366:             any health status (May 2004)</li> <li>If mle=2 the IMaCh
        !           367:             likelihood corresponds to a linear interpolation</li> <li>
        !           368:             If mle=3 the IMaCh likelihood corresponds to an exponential
        !           369:             inter-extrapolation</li> 
        !           370:             <li> If mle=4 the IMaCh likelihood
        !           371:             corresponds to no inter-extrapolation, thus biasing
        !           372:             the results. </li> 
        !           373:             <li> If mle=5 the IMaCh likelihood
        !           374:             corresponds to no inter-extrapolation, before the
        !           375:             correction of Jackson's bug (avoid this).</li>
        !           376:             </ul>
1.1       brouard   377:         </ul>
                    378:     </li>
                    379:     <li><b>weight=0</b> Possibility to add weights. <ul>
                    380:             <li>If weight=0 no weights are included </li>
                    381:             <li>If weight=1 the maximisation integrates the
                    382:                 weights which are in field <a href="#Weight">4</a></li>
                    383:         </ul>
                    384:     </li>
                    385: </ul>
                    386: 
                    387: <h4><font color="#FF0000">Covariates</font></h4>
                    388: 
                    389: <p>Intercept and age are systematically included in the model.
                    390: Additional covariates can be included with the command: </p>
                    391: 
                    392: <pre>model=<em>list of covariates</em></pre>
                    393: 
                    394: <ul>
                    395:     <li>if<strong> model=. </strong>then no covariates are
                    396:         included</li>
                    397:     <li>if <strong>model=V1</strong> the model includes the first
                    398:         covariate (field 2)</li>
                    399:     <li>if <strong>model=V2 </strong>the model includes the
                    400:         second covariate (field 3)</li>
                    401:     <li>if <strong>model=V1+V2 </strong>the model includes the
                    402:         first and the second covariate (fields 2 and 3)</li>
                    403:     <li>if <strong>model=V1*V2 </strong>the model includes the
                    404:         product of the first and the second covariate (fields 2
                    405:         and 3)</li>
                    406:     <li>if <strong>model=V1+V1*age</strong> the model includes
                    407:         the product covariate*age</li>
                    408: </ul>
                    409: 
                    410: <p>In this example, we have two covariates in the data file
                    411: (fields 2 and 3). The number of covariates included in the data
                    412: file between the id and the date of birth is ncovcol=2 (it was
                     413: named ncov in versions prior to 0.8). If you have 3 covariates in
                    414: the datafile (fields 2, 3 and 4), you will set ncovcol=3. Then
                    415: you can run the programme with a new parametrisation taking into
                    416: account the third covariate. For example, <strong>model=V1+V3 </strong>estimates
                    417: a model with the first and third covariates. More complicated
                     418: models can be used, but it will take more time to converge. With
                     419: a simple model (no covariates), the programme estimates 8
                     420: parameters. Adding covariates increases the number of parameters:
                     421: 12 for <strong>model=V1, </strong>16 for <strong>model=V1+V1*age
                    422: </strong>and 20 for <strong>model=V1+V2+V3.</strong></p>
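<p>These counts follow from the formula <em>N</em>=(<em>nlstate</em>+<em>ndeath</em>-1)*<em>nlstate</em>*<em>ncovmodel</em>
detailed in the next section, where <em>ncovmodel</em> counts the intercept, age and the
covariates listed in 'model'. A minimal sketch (not IMaCh's code) reproducing the counts:</p>

<pre>
#include &lt;stdio.h&gt;

/* N = (nlstate + ndeath - 1) * nlstate * ncovmodel */
int nparams(int nlstate, int ndeath, int ncovmodel) {
  return (nlstate + ndeath - 1) * nlstate * ncovmodel;
}

int main(void) {
  printf("model=.         : %d parameters\n", nparams(2, 1, 2)); /* intercept + age  ->  8 */
  printf("model=V1        : %d parameters\n", nparams(2, 1, 3)); /* + V1             -> 12 */
  printf("model=V1+V1*age : %d parameters\n", nparams(2, 1, 4)); /* + V1 + V1*age    -> 16 */
  printf("model=V1+V2+V3  : %d parameters\n", nparams(2, 1, 5)); /* + V1 + V2 + V3   -> 20 */
  return 0;
}
</pre>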
                    423: 
                    424: <h4><font color="#FF0000">Guess values for optimization</font><font
                    425: color="#00006A"> </font></h4>
                    426: 
                    427: <p>You must write the initial guess values of the parameters for
                     428: optimization. The number of parameters, <em>N</em>, depends on the
                    429: number of absorbing states and non-absorbing states and on the
                    430: number of covariates. <br>
                    431: <em>N</em> is given by the formula <em>N</em>=(<em>nlstate</em> +
                    432: <em>ndeath</em>-1)*<em>nlstate</em>*<em>ncovmodel</em>&nbsp;. <br>
                    433: <br>
                    434: Thus in the simple case with 2 covariates (the model is log
                    435: (pij/pii) = aij + bij * age where intercept and age are the two
                    436: covariates), and 2 health degrees (1 for disability-free and 2
                    437: for disability) and 1 absorbing state (3), you must enter 8
                     438: initial values, a12, b12, a13, b13, a21, b21, a23, b23. You can
                     439: start with zeros as in this example, but if you have a more
                     440: precise set (for example from an earlier run) you can enter it
                     441: and it will speed up the convergence.<br>
                    442: Each of the four lines starts with indices &quot;ij&quot;: <b>ij
                    443: aij bij</b> </p>
                    444: 
                    445: <blockquote>
                    446:     <pre># Guess values of aij and bij in log (pij/pii) = aij + bij * age
                    447: 12 -14.155633  0.110794 
                    448: 13  -7.925360  0.032091 
                    449: 21  -1.890135 -0.029473 
                    450: 23  -6.234642  0.022315 </pre>
                    451: </blockquote>
                    452: 
                     453: <p>or, to simplify (in most cases it converges but there is no
                     454: guarantee!): </p>
                    455: 
                    456: <blockquote>
                    457:     <pre>12 0.0 0.0
                    458: 13 0.0 0.0
                    459: 21 0.0 0.0
                    460: 23 0.0 0.0</pre>
                    461: </blockquote>
                    462: 
                     463: <p>In order to speed up the convergence you can make a first run
                     464: with a large stepm, i.e. stepm=12 or 24, and then decrease the stepm
                     465: until stepm=1 month. If newstepm is the new shorter stepm and
                     466: stepm can be expressed as a multiple of newstepm, like stepm = n .
                     467: newstepm, then the following approximation holds: </p>
                     468: 
                     469: <pre>aij(newstepm) = aij(n . newstepm) - ln(n) = aij(stepm) - ln(n)
                     470: </pre>
                     471: 
                     472: <p>and </p>
                     473: 
                     474: <pre>bij(newstepm) = bij(stepm) .</pre>
                    475: 
                     476: <p>For example if you already ran with a 6-month step (stepm=6) and
                     477: got:<br>
                    478: </p>
                    479: 
                    480: <pre># Parameters
                    481: 12 -13.390179  0.126133 
                    482: 13  -7.493460  0.048069 
                    483: 21   0.575975 -0.041322 
                    484: 23  -4.748678  0.030626 
                    485: </pre>
                    486: 
                     487: <p>If you now want to get the monthly estimates, you can guess
                     488: the aij by subtracting ln(6)=1.7917<br>
                     489: and running<br>
                    490: </p>
                    491: 
                    492: <pre>12 -15.18193847  0.126133 
                    493: 13 -9.285219469  0.048069
                    494: 21 -1.215784469 -0.041322
                    495: 23 -6.540437469  0.030626
                    496: </pre>
                    497: 
                    498: <p>and get<br>
                    499: </p>
                    500: 
                    501: <pre>12 -15.029768 0.124347 
                    502: 13 -8.472981 0.036599 
                    503: 21 -1.472527 -0.038394 
                    504: 23 -6.553602 0.029856 
                     505: </pre>
                     506: 
                     507: <p>which is closer to the converged results. The approximation is probably useful
                     508: only for very small intervals and we don't have enough experience to
                     509: know whether it will speed up the convergence or not.</p>
                    510: 
                     511: <p>Useful values of -ln(n), to be added to the aij when the larger step is divided by n:</p>
<pre>         -ln(12)= -2.484
                    512:  -ln(6/1)=-ln(6)= -1.791
                    513:  -ln(3/1)=-ln(3)= -1.0986
                    514: -ln(12/6)=-ln(2)= -0.693
                    515: </pre>
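<p>A minimal sketch (not IMaCh's code) of this conversion, using the 6-month
estimates quoted above as input and n = stepm/newstepm = 6:</p>

<pre>
#include &lt;math.h&gt;
#include &lt;stdio.h&gt;

int main(void) {
  const char *ij[] = {"12", "13", "21", "23"};
  double a6[] = {-13.390179, -7.493460,  0.575975, -4.748678}; /* aij estimated with stepm=6 */
  double b6[] = {  0.126133,  0.048069, -0.041322,  0.030626}; /* bij estimated with stepm=6 */
  int n = 6;                                 /* stepm / newstepm = 6 / 1 */
  for (int k = 0; k < 4; k++)                /* aij(newstepm) = aij(stepm) - ln(n), bij unchanged */
    printf("%s %12.6f %9.6f\n", ij[k], a6[k] - log((double)n), b6[k]);
  return 0;
}
</pre>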
                    516: 
1.2     ! brouard   517: In version 0.9 and higher you can still have valuable results even if
        !           518: your stepm parameter is bigger than a month. The idea is to run with
        !           519: bigger stepm in order to have a quicker convergence at the price of a
        !           520: small bias. Once you know which model you want to fit, you can put
        !           521: stepm=1 and wait hours or days to get the convergence!
        !           522: 
        !           523: To get unbiased results even with large stepm we introduce the idea of
        !           524: pseudo likelihood by interpolating two exact likelihoods. Let us
        !           525: detail this:
        !           526: <p>
        !           527: If the interval of <em>d</em> months between two waves is not a
        !           528: multiple of 'stepm', but lies between <em>(n-1) stepm</em> and
        !           529: <em>n stepm</em>, then both exact likelihoods are computed (the
        !           530: contribution to the likelihood at <em>n stepm</em> requires one more matrix
        !           531: product) (let us remember that we are modelling the probability
        !           532: of being observed in a particular state after <em>d</em> months, given the
        !           533: state observed at month 0). The distance (<em>bh</em> in
        !           534: the program) from the month of interview to the rounded date of <em>n
        !           535: stepm</em> is computed. It can be negative (the interview occurs before
        !           536: <em>n stepm</em>) or positive if the interview occurs after <em>n
        !           537: stepm</em> (and before <em>(n+1) stepm</em>).
        !           538: <br>
        !           539: Then the final contribution to the total likelihood is a weighted
        !           540: average of these two exact likelihoods at <em>n stepm</em> (out) and
        !           541: at <em>(n-1) stepm</em> (savm). We did not want to compute a third
        !           542: likelihood at <em>(n+1) stepm</em> because it is too costly in computing time, so
        !           543: we use an extrapolation if <em>bh</em> is positive.  <br> The formula of
        !           544: inter/extrapolation varies according to the value of the parameter mle:
        !           545: <pre>
        !           546: mle=1    lli= log((1.+bbh)*out[s1][s2]- bbh*savm[s1][s2]); /* linear interpolation */
        !           547: 
        !           548: mle=2  lli= (savm[s1][s2]>(double)1.e-8 ? \
        !           549:           log((1.+bbh)*out[s1][s2]- bbh*(savm[s1][s2])): \
        !           550:           log((1.+bbh)*out[s1][s2])); /* linear interpolation */
        !           551: mle=3  lli= (savm[s1][s2]>1.e-8 ? \
        !           552:           (1.+bbh)*log(out[s1][s2])- bbh*log(savm[s1][s2]): \
        !           553:           log((1.+bbh)*out[s1][s2])); /* exponential inter-extrapolation */
        !           554: 
        !           555: mle=4   lli=log(out[s[mw[mi][i]][i]][s[mw[mi+1][i]][i]]); /* No interpolation  */
        !           556:         no need to save previous likelihood into memory.
        !           557: </pre>
        !           558: <p>
        !           559: If the death occurs between the first and second pass, and for example
        !           560: more precisely between <em>n stepm</em> and <em>(n+1) stepm</em>, the
        !           561: contribution of this person to the likelihood is simply the difference
        !           562: between the probability of dying before <em>(n+1) stepm</em> and the
        !           563: probability of dying before <em>n stepm</em>. There was a bug in
        !           564: version 0.8: death was treated as any other state, i.e. as if it
        !           565: was an observed death at second pass. This was not precise but
        !           566: correct; however, when information on the precise month of death came
        !           567: (death occurring prior to the second pass) we did not change the likelihood
        !           568: accordingly. Thanks to Chris Jackson for correcting us. In earlier
        !           569: versions (fortunately before first publication) the total mortality
        !           570: was overestimated (people were dying too early) by about 10%. Versions
        !           571: 0.95 and higher are correct.
        !           572: 
        !           573: <p> Our suggested choice is mle=1. If stepm=1 there is no difference
        !           574: between the various mle options (methods of interpolation). If stepm is
        !           575: big, like 12, 24 or 48, and mle=4 (no interpolation), the bias may be
        !           576: very important if the mean duration between two waves is not a
        !           577: multiple of stepm. See the appendix of our main publication concerning
        !           578: the sine curve of biases.
        !           579:  
        !           580: 
1.1       brouard   581: <h4><font color="#FF0000">Guess values for computing variances</font></h4>
                    582: 
1.2     ! brouard   583: <p>These values are output by the maximisation of the likelihood (<a
        !           584: href="#mle">mle</a>=1). These values can be used as input for a
        !           585: second run in order to get the various output data files (health
        !           586: expectancies, period prevalence etc.) and figures without rerunning
        !           587: the long maximisation phase (mle=0). </p>
        !           588: 
        !           589: <p>These 'scales' are small values needed for the computing of
        !           590: numerical derivatives. These derivatives are used to compute the
        !           591: hessian matrix of the parameters, that is the inverse of the
        !           592: covariance matrix. They are often used for estimating variances and
        !           593: confidence intervals. Each line consists of indices &quot;ij&quot;
        !           594: followed by the initial scales (zero to simplify) associated with aij
        !           595: and bij. </p>
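<p>A minimal, self-contained sketch of what such scales are used for (this is not
IMaCh's actual code, and loglik() below is a dummy quadratic function): second
derivatives of the log-likelihood approximated by central finite differences with a
small step (the 'scale') on each parameter.</p>

<pre>
#include &lt;stdio.h&gt;

#define NP 8                                 /* 8 parameters, as in the simple model above */

/* dummy quadratic log-likelihood with a cross term, so the sketch runs on its own */
double loglik(const double p[NP]) {
  double s = -0.5 * p[0] * p[1];
  for (int k = 0; k < NP; k++) s -= (k + 1) * p[k] * p[k];
  return s;
}

/* mixed second derivative d2L/(dp[k] dp[l]) by central differences with scales hk, hl */
double hessian_kl(double p[NP], int k, int l, double hk, double hl) {
  double pk = p[k], pl = p[l], fpp, fpm, fmp, fmm;
  p[k] = pk + hk; p[l] = pl + hl; fpp = loglik(p);
  p[k] = pk + hk; p[l] = pl - hl; fpm = loglik(p);
  p[k] = pk - hk; p[l] = pl + hl; fmp = loglik(p);
  p[k] = pk - hk; p[l] = pl - hl; fmm = loglik(p);
  p[k] = pk; p[l] = pl;                      /* restore the parameters */
  return (fpp - fpm - fmp + fmm) / (4.0 * hk * hl);
}

int main(void) {
  double p[NP] = {0}, scale = 1.e-4;
  printf("d2L/dp0dp1 = %g\n", hessian_kl(p, 0, 1, scale, scale));  /* about -0.5 here */
  return 0;
}
</pre>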
1.1       brouard   596: 
                    597: <ul>
                    598:     <li>If mle=1 you can enter zeros:</li>
                    599:     <li><blockquote>
                    600:             <pre># Scales (for hessian or gradient estimation)
                    601: 12 0. 0. 
                    602: 13 0. 0. 
                    603: 21 0. 0. 
                    604: 23 0. 0. </pre>
                    605:         </blockquote>
                    606:     </li>
1.2     ! brouard   607:     <li>If mle=0 (no maximisation of the likelihood) you must enter the scales (usually
1.1       brouard   608:         obtained from an earlier run).</li>
                    609: </ul>
                    610: 
                    611: <h4><font color="#FF0000">Covariance matrix of parameters</font></h4>
                    612: 
1.2     ! brouard   613: <p>The covariance matrix is output if <a href="#mle">mle</a>=1. But it can
        !           614: also be used as an input to get the various output data files (health
        !           615: expectancies, period prevalence etc.) and figures without
        !           616: rerunning the maximisation phase (mle=0). <br>
1.1       brouard   617: Each line starts with indices &quot;ijk&quot; (the k-th parameter of the
                     618: transition from i to j), followed by its covariances with all previous parameters and its variance:<br>
                    619: </p>
                    620: 
                    621: <pre>
                    622:    121 Var(a12) 
                    623:    122 Cov(b12,a12)  Var(b12) 
                    624:           ...
                    625:    232 Cov(b23,a12)  Cov(b23,b12) ... Var (b23) </pre>
                    626: 
                    627: <ul>
                    628:     <li>If mle=1 you can enter zeros. </li>
                    629:     <li><pre># Covariance matrix
                    630: 121 0.
                    631: 122 0. 0.
                    632: 131 0. 0. 0. 
                    633: 132 0. 0. 0. 0. 
                    634: 211 0. 0. 0. 0. 0. 
                    635: 212 0. 0. 0. 0. 0. 0. 
                    636: 231 0. 0. 0. 0. 0. 0. 0. 
                    637: 232 0. 0. 0. 0. 0. 0. 0. 0.</pre>
                    638:     </li>
                    639:     <li>If mle=0 you must enter a covariance matrix (usually
                    640:         obtained from an earlier run). </li>
                    641: </ul>
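<p>The diagonal terms are the variances of the estimated parameters. A minimal
sketch (not IMaCh's code; the variance value below is hypothetical) of how a
variance translates into a 95% confidence interval for a parameter:</p>

<pre>
#include &lt;math.h&gt;
#include &lt;stdio.h&gt;

int main(void) {
  double a12 = -14.155633;            /* a parameter value (the guess value quoted earlier, for illustration) */
  double var_a12 = 0.236;             /* Var(a12), first diagonal term (hypothetical value) */
  double se = sqrt(var_a12);
  printf("a12 = %.4f,  95%% CI = [%.4f, %.4f]\n", a12, a12 - 1.96*se, a12 + 1.96*se);
  return 0;
}
</pre>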
                    642: 
                     643: <h4><font color="#FF0000">Age range for calculation of period (stationary)
                     644: prevalences and health expectancies</font></h4>
                    645: 
                    646: <pre>agemin=70 agemax=100 bage=50 fage=100</pre>
                    647: 
1.2     ! brouard   648: <p>
1.1       brouard   649: Once we have obtained the estimated parameters, the program is able
1.2     ! brouard   650: to calculate period prevalences, transition probabilities
1.1       brouard   651: and life expectancies at any age. The choice of age range is useful
1.2     ! brouard   652: for extrapolation. In this example, the age of people interviewed varies
        !           653: from 69 to 102 and the model is estimated using their exact ages. But
        !           654: if you are interested in the age-specific period prevalence you can
        !           655: start the simulation at an exact age like 70 and stop at 100. Then the
        !           656: program will draw at least two curves describing the forecasted
        !           657: prevalences of two cohorts, one for healthy people at age 70 and the second
        !           658: for disabled people at the same initial age. And according to the
        !           659: mixing property (ergodicity) and because of recovery, both prevalences
        !           660: will tend to be identical at later ages. Thus if you want to compute
        !           661: the prevalence at age 70, you should enter a lower agemin value.
        !           662: 
        !           663: <p>
        !           664: Setting bage=50 (begin age) and fage=100 (final age) lets
        !           665: the program compute life expectancy from age 'bage' to age
1.1       brouard   666: 'fage'. As we use a model, we can usefully compute life
                     667: expectancy on a wider age range than the age range of the data.
                     668: But the model can be rather wrong on much larger intervals. The
                     669: program is limited to around 120 for the upper age!
                     670: </p>
                    671: 
                    672: <ul>
                    673:     <li><b>agemin=</b> Minimum age for calculation of the
1.2     ! brouard   674:         period prevalence </li>
1.1       brouard   675:     <li><b>agemax=</b> Maximum age for calculation of the
1.2     ! brouard   676:         period prevalence </li>
1.1       brouard   677:     <li><b>bage=</b> Minimum age for calculation of the health
                    678:         expectancies </li>
                    679:     <li><b>fage=</b> Maximum age for calculation of the health
                    680:         expectancies </li>
                    681: </ul>
                    682: 
                    683: <h4><a name="Computing"><font color="#FF0000">Computing</font></a><font
1.2     ! brouard   684: color="#FF0000"> the cross-sectional prevalence</font></h4>
1.1       brouard   685: 
                    686: <pre>begin-prev-date=1/1/1984 end-prev-date=1/6/1988 estepm=1</pre>
                    687: 
1.2     ! brouard   688: <p>
1.1       brouard   689: Statements 'begin-prev-date' and 'end-prev-date' allow you to
                     690: select the period in which we calculate the observed prevalences
                     691: in each state. In this example, the prevalences are calculated on
                     692: survey data collected between 1 January 1984 and 1 June 1988. 
1.2     ! brouard   693: </p>
1.1       brouard   694: 
                    695: <ul>
                    696:     <li><strong>begin-prev-date= </strong>Starting date
                    697:         (day/month/year)</li>
                    698:     <li><strong>end-prev-date= </strong>Final date
                    699:         (day/month/year)</li>
                     700:     <li><strong>estepm= </strong>Unit (in months). We compute the
                     701:         life expectancy from trapezoids spaced every estepm
                     702:         months. This is mainly to measure the difference between
                     703:         two models: for example if stepm=24 months the pijx are given
                     704:         only every 2 years and by summing them we are calculating
                     705:         an estimate of the life expectancy assuming a linear
                     706:         progression in between, and thus overestimating or
                     707:         underestimating according to the curvature of the
                     708:         survival function. If, for the same date, we estimate the
                     709:         model with stepm=1 month, we can keep estepm at 24 months
                     710:         to compare the new estimate of life expectancy under the
                     711:         same linear hypothesis. A more precise result, taking
                     712:         into account a more precise curvature, will be obtained if
                     713:         estepm is as small as stepm (a minimal sketch of the trapezoid sum follows this list).</li>
                    714: </ul>
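<p>A minimal sketch of the trapezoid sum (not IMaCh's actual code; the survival
probabilities below are hypothetical): given the probabilities of still being alive
(or alive in a given state) at ages spaced estepm months apart, the expectancy is
the trapezoid sum of these probabilities.</p>

<pre>
#include &lt;stdio.h&gt;

int main(void) {
  /* hypothetical probabilities of still being alive h*estepm months after the initial age */
  double L[] = {1.00, 0.93, 0.85, 0.76, 0.66, 0.55, 0.43, 0.30, 0.18, 0.08, 0.00};
  int    n   = sizeof L / sizeof L[0];
  int estepm = 24;                           /* months between two consecutive values */
  double e = 0.0;
  for (int h = 0; h + 1 < n; h++)            /* trapezoid rule: average of the two end points */
    e += 0.5 * (L[h] + L[h+1]) * estepm / 12.0;
  printf("expectancy = %.2f years\n", e);
  return 0;
}
</pre>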
                    715: 
                    716: <h4><font color="#FF0000">Population- or status-based health
                    717: expectancies</font></h4>
                    718: 
                    719: <pre>pop_based=0</pre>
                    720: 
1.2     ! brouard   721: <p>The program computes status-based health expectancies, i.e. health
        !           722: expectancies which depend on the initial health state.  If you are
        !           723: healthy, your healthy life expectancy (e11) is higher than if you were
        !           724: disabled (e21, with e11 &gt; e21).<br> To compute a healthy life
        !           725: expectancy 'independent' of the initial status we have to weight e11
        !           726: and e21 according to the probability of being in each state at the initial
        !           727: age, which corresponds to the proportions of people in each health
        !           728: state (cross-sectional prevalences).<p> 
        !           729: 
        !           730: We could also compute e12 and e22 and get e.2 by weighting them
        !           731: according to the observed cross-sectional prevalences at the initial age.
        !           732: <p> In a similar way we could compute the total life expectancy by
        !           733: summing e.1 and e.2 .
        !           734: <br>
        !           735: The main difference between 'population based' and 'implied' or
        !           736: 'period' lies in the weights used. 'Usually', cross-sectional
        !           737: prevalences of disability are higher than period prevalences,
        !           738: particularly at old ages. This is true if the country is improving its
        !           739: health system by teaching people how to prevent disability and by
        !           740: promoting better screening, for example of people needing cataract
        !           741: surgery, and for many unknown reasons that this program may help to
        !           742: discover. Then the proportion of disabled people at age 90 will be
        !           743: lower than the currently observed proportion.
        !           744: <p>
        !           745: Thus a better Health Expectancy and even a better Life Expectancy
        !           746: value is given by forecasting not only the current lower mortality at
        !           747: all ages but also a lower incidence of disability and a higher recovery.
        !           748: <br> Using the period prevalences as weights instead of the
        !           749: cross-sectional prevalences, we are computing indices which are more
        !           750: specific to the current situation and therefore more useful to
        !           751: predict improvements or regressions in the future and to compare
        !           752: different policies in various countries.
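<p>A minimal sketch of this weighting (not IMaCh's code; all numbers are
hypothetical): the status-based expectancies e11, e21, e12, e22 are combined with
the proportions w1, w2 of healthy and disabled people at the initial age (period or
cross-sectional prevalences, depending on popbased).</p>

<pre>
#include &lt;stdio.h&gt;

int main(void) {
  double e11 = 12.0, e21 = 9.0;   /* years lived healthy, starting healthy / disabled (hypothetical) */
  double e12 =  4.0, e22 = 6.0;   /* years lived disabled, starting healthy / disabled (hypothetical) */
  double w1  = 0.8,  w2  = 0.2;   /* prevalences at the initial age, with w1 + w2 = 1 */
  double e_1 = w1*e11 + w2*e21;   /* e.1: expected years lived healthy  */
  double e_2 = w1*e12 + w2*e22;   /* e.2: expected years lived disabled */
  printf("e.1 = %.2f  e.2 = %.2f  total LE = %.2f\n", e_1, e_2, e_1 + e_2);
  return 0;
}
</pre>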
1.1       brouard   753: 
                    754: <ul>
1.2     ! brouard   755:     <li><strong>popbased= 0 </strong>Health expectancies are computed
        !           756:     at each age from period prevalences 'expected' at this initial
        !           757:     age.</li> 
1.1       brouard   758:     <li><strong>popbased= 1 </strong>Health expectancies are
1.2     ! brouard   759:     computed at each age from cross-sectional 'observed' prevalence at
        !           760:     computed at each age from the cross-sectional 'observed' prevalence at
        !           761:     this initial age. As the whole population is not observed at the
        !           762:     same exact date we define a short period where the observed
        !           763: 
        !           764:  We simply sum all people surveyed within these two exact dates
        !           765:  who belong to a particular age group (single year) at the date of
        !           766:  interview and being in a particular health state. Then it is easy to
        !           767:  interview and are in a particular health state. Then it is easy to
        !           768: people of the same age group.<br>
        !           769: 
        !           770: If the two dates are far apart and cover two waves or more, people
        !           771: interviewed twice or more are counted twice or more. The program
        !           772: takes into account the selection of individuals interviewed between
        !           773: firstpass and lastpass too (we don't know if it can be useful).
        !           774: </li>
1.1       brouard   775: </ul>
                    776: 
1.2     ! brouard   777: <h4><font color="#FF0000">Prevalence forecasting (Experimental)</font></h4>
1.1       brouard   778: 
                    779: <pre>starting-proj-date=1/1/1989 final-proj-date=1/1/1992 mov_average=0 </pre>
                    780: 
                     781: <p>Prevalence and population projections are only available if
                     782: the interpolation unit is a month, i.e. stepm=1, and if there are
                     783: no covariates. The programme estimates the prevalence in each
                     784: state at a precise date expressed as day/month/year. The
                     785: programme computes one forecasted prevalence a year from a
                     786: starting date (1 January 1989 in this example) to a final date
                     787: (1 January 1992). The statement mov_average allows computing
                     788: smoothed forecasted prevalences with a five-age moving average
                    789: centered at the mid-age of the five-age period. <br>
                    790: </p>
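<p>A minimal sketch of this smoothing (not IMaCh's code; the prevalence values are
hypothetical): a five-age moving average centered on the mid-age, as applied when
mov_average=1.</p>

<pre>
#include &lt;stdio.h&gt;

int main(void) {
  /* hypothetical forecasted prevalences of disability by single year of age */
  double prev[] = {0.10, 0.12, 0.15, 0.14, 0.18, 0.22, 0.25, 0.31, 0.35};
  int n = sizeof prev / sizeof prev[0];
  for (int i = 2; i + 2 < n; i++) {          /* centered five-age window */
    double s = 0.0;
    for (int k = -2; k <= 2; k++) s += prev[i + k];
    printf("age index %d  smoothed prevalence %.3f\n", i, s / 5.0);
  }
  return 0;
}
</pre>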
                    791: 
1.1       brouard   794: <ul>
                    795:     <li><strong>starting-proj-date</strong>= starting date
                    796:         (day/month/year) of forecasting</li>
                    797:     <li><strong>final-proj-date= </strong>final date
                    798:         (day/month/year) of forecasting</li>
                    799:     <li><strong>mov_average</strong>= smoothing with a five-age
                    800:         moving average centered at the mid-age of the five-age
                    801:         period. The command<strong> mov_average</strong> takes
                    802:         value 1 if the prevalences are smoothed and 0 otherwise.</li>
                     803: </ul>
                     804: 
<h4><font color="#FF0000">Population forecasting (Experimental)</font></h4>
                     805: 
                     806: <ul type="disc">
1.2     ! brouard   807:     <li><b>popforecast=
1.1       brouard   808:         0 </b>Option for population forecasting. If
                    809:         popforecast=1, the programme does the forecasting<b>.</b></li>
1.2     ! brouard   810:     <li><b>popfile=
1.1       brouard   811:         </b>name of the population file</li>
1.2     ! brouard   812:     <li><b>popfiledate=</b>
1.1       brouard   813:         date of the population in the population file</li>
1.2     ! brouard   814:     <li><b>last-popfiledate</b>=
1.1       brouard   815:         date of the last population projection&nbsp;</li>
                    816: </ul>
                    817: 
                    818: <hr>
                    819: 
                    820: <h2><a name="running"></a><font color="#00006A">Running Imach
                    821: with this example</font></h2>
                    822: 
1.2     ! brouard   823: <p>We assume that you already typed your <a href="biaspar.imach">1st_example
1.1       brouard   824: parameter file</a> as explained <a href="#biaspar">above</a>. 
                    825: 
1.2     ! brouard   826: To run the program under Windows you should either:
        !           827: </p>
1.1       brouard   828: 
                    829: <ul>
    <li>Click on the imach.exe icon and either:
      <ul>
         <li>enter the name of the
        parameter file, for example <tt>
C:\home\myname\lsoa\biaspar.imach</tt>,</li>
    <li>or locate the biaspar.imach icon in your folder, such as
    <tt>C:\home\myname\lsoa</tt>,
    and drag it, with your mouse, onto the already open imach window. </li>
  </ul>
  </li>

 <li>With version 0.97b, if you ran the setup at installation, Windows is
 supposed to understand the &quot;.imach&quot; extension and you can
 right-click the biaspar.imach icon and either edit the parameter file
 with wordpad (better than notepad) or execute it with
 IMaCh. </li>
1.1       brouard   845: </ul>
                    846: 
<p>The time to converge depends on the step unit that you used (1
month is more precise but more CPU consuming), on the number of cases,
and on the number of variables (covariates).</p>

<p>
The program outputs many files. Most of them will be
plotted for easier understanding.
</p>

<p>
Running the program under Linux is mostly the same.
Nor is it more difficult to run it on a Macintosh.
</p>
1.1       brouard   859: <hr>
                    860: 
                    861: <h2><a name="output"><font color="#00006A">Output of the program
                    862: and graphs</font> </a></h2>
                    863: 
<p>Once the optimization is finished (once convergence is
reached), many tables and graphics are produced.</p>
<p>
The IMaCh program will create a subdirectory with the same name as your
parameter file (here mypar) where all the tables and figures will be
stored.<br>

Important files like the log file and the output parameter file (which
contains the estimates of the maximisation) are stored at the main
level, not in this subdirectory. Files with extension .log and .txt can
be edited with a standard editor like wordpad or notepad, or even
viewed with a browser like Internet Explorer or Mozilla.</p>

<p> The main html file is named with the same name: <a
href="biaspar.htm">biaspar.htm</a>. You can click on it while holding
the shift key in order to open it in another window (Windows).</p>
<p>
 Our grapher is Gnuplot, an interactive plotting program (GPL) which
 can also work in batch mode. A gnuplot reference manual is available <a
 href="http://www.gnuplot.info/">here</a>. <br> When the run is
 finished, and in order that the window doesn't disappear, the user
 should enter a character such as <tt>q</tt> to quit. <br> These
 characters are:<br>
1.1       brouard   886: </p>
                    887: <ul>
    <li>'e' to open the main result html file <a
    href="biaspar.htm"><strong>biaspar.htm</strong></a> and edit
    the output files and graphs. </li> 
        !           891:     <li>'g' to graph again</li>
1.1       brouard   892:     <li>'c' to start again the program from the beginning.</li>
                    893:     <li>'q' for exiting.</li>
                    894: </ul>
                    895: 
1.2     ! brouard   896: The main gnuplot file is named <tt>biaspar.gp</tt> and can be edited (right
        !           897: click) and run again.
<p>Gnuplot is easy to use and you can make more complex
graphs with it. Just click on gnuplot and type <tt>plot sin(x)</tt> to see how easy it
is.</p>
        !           901: 
        !           902: 
1.1       brouard   903: <h5><font size="4"><strong>Results files </strong></font><br>
                    904: <br>
                    905: <font color="#EC5E5E" size="3"><strong>- </strong></font><a
1.2     ! brouard   906: name="cross-sectional prevalence in each state"><font color="#EC5E5E"
        !           907: size="3"><strong>cross-sectional prevalence in each state</strong></font></a><font
1.1       brouard   908: color="#EC5E5E" size="3"><strong> (and at first pass)</strong></font><b>:
1.2     ! brouard   909: </b><a href="biaspar/prbiaspar.txt"><b>biaspar/prbiaspar.txt</b></a><br>
1.1       brouard   910: </h5>
                    911: 
                    912: <p>The first line is the title and displays each field of the
file. The first column corresponds to age. Fields 2 and 6 are the
proportion of individuals in states 1 and 2 respectively, as
observed at the first exam. The other fields are the numbers of
people in states 1, 2 or more. The number of columns increases if
                    917: the number of states is higher than 2.<br>
                    918: The header of the file is </p>
                    919: 
                    920: <pre># Age Prev(1) N(1) N Age Prev(2) N(2) N
                    921: 70 1.00000 631 631 70 0.00000 0 631
                    922: 71 0.99681 625 627 71 0.00319 2 627 
                    923: 72 0.97125 1115 1148 72 0.02875 33 1148 </pre>
                    924: 
<p>This means that at age 70 (between 70 and 71), the prevalence in state 1 is 1.000
and in state 2 is 0.000. At age 71 the number of individuals in
                    927: state 1 is 625 and in state 2 is 2, hence the total number of
                    928: people aged 71 is 625+2=627. <br>
                    929: </p>
                    930: 
                    931: <h5><font color="#EC5E5E" size="3"><b>- Estimated parameters and
                    932: covariance matrix</b></font><b>: </b><a href="rbiaspar.txt"><b>rbiaspar.imach</b></a></h5>
                    933: 
                    934: <p>This file contains all the maximisation results: </p>
                    935: 
                    936: <pre> -2 log likelihood= 21660.918613445392
                    937:  Estimated parameters: a12 = -12.290174 b12 = 0.092161 
                    938:                        a13 = -9.155590  b13 = 0.046627 
                    939:                        a21 = -2.629849  b21 = -0.022030 
                    940:                        a23 = -7.958519  b23 = 0.042614  
                    941:  Covariance matrix: Var(a12) = 1.47453e-001
                    942:                     Var(b12) = 2.18676e-005
                    943:                     Var(a13) = 2.09715e-001
                    944:                     Var(b13) = 3.28937e-005  
                    945:                     Var(a21) = 9.19832e-001
                    946:                     Var(b21) = 1.29229e-004
                    947:                     Var(a23) = 4.48405e-001
                    948:                     Var(b23) = 5.85631e-005 
                    949:  </pre>
                    950: 
                    951: <p>By substitution of these parameters in the regression model,
                    952: we obtain the elementary transition probabilities:</p>
                    953: 
1.2     ! brouard   954: <p><img src="biaspar/pebiaspar11.png" width="400" height="300"></p>
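<p>As a rough illustration of this substitution, here is a minimal Python sketch
assuming the usual multinomial-logit parameterisation of the one-step (monthly)
probabilities; the exact formula implemented by IMaCh is given in the main
publication:</p>

<pre>
# Minimal sketch (assumed multinomial-logit parameterisation, not IMaCh code):
# one-step transition probabilities at age x from the estimated aij, bij above.
from math import exp

params = {(1, 2): (-12.290174, 0.092161), (1, 3): (-9.155590, 0.046627),
          (2, 1): (-2.629849, -0.022030), (2, 3): (-7.958519, 0.042614)}

def step_probabilities(x, nlstate=2):
    """Return {(i, j): pij} for one elementary step starting at age x."""
    p = {}
    for i in range(1, nlstate + 1):
        logits = {j: exp(a + b * x) for (ii, j), (a, b) in params.items() if ii == i}
        denom = 1.0 + sum(logits.values())
        p[(i, i)] = 1.0 / denom               # staying in state i
        for j, e in logits.items():
            p[(i, j)] = e / denom             # moving from i to j (j may be death)
    return p

print(step_probabilities(70.0))               # e.g. p12(70), p13(70), p11(70), ...
</pre>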
1.1       brouard   955: 
                    956: <h5><font color="#EC5E5E" size="3"><b>- Transition probabilities</b></font><b>:
1.2     ! brouard   957: </b><a href="biaspar/pijrbiaspar.txt"><b>biaspar/pijrbiaspar.txt</b></a></h5>
1.1       brouard   958: 
<p>Here are the transition probabilities Pij(x, x+nh). The second
column is the starting age x (from age 95 to 65), the third is age
(x+nh) and the others are the transition probabilities p11, p12, p13,
p21, p22, p23. The first column indicates the value of the covariate
(with no variable other than age, it is equal to 1). For example, line 5 of the file
is: </p>
1.1       brouard   965: 
1.2     ! brouard   966: <pre>1 100 106 0.02655 0.17622 0.79722 0.01809 0.13678 0.84513 </pre>
1.1       brouard   967: 
                    968: <p>and this means: </p>
                    969: 
                    970: <pre>p11(100,106)=0.02655
                    971: p12(100,106)=0.17622
                    972: p13(100,106)=0.79722
                    973: p21(100,106)=0.01809
                    974: p22(100,106)=0.13678
p23(100,106)=0.84513 </pre>
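<p>These multi-year probabilities are obtained by chaining the elementary one-step
matrices over the months separating the two ages, which is the principle of the
interpolation of Markov chains. A minimal sketch, assuming a helper
<tt>step_matrix(x)</tt> returning the square one-step matrix at age x (for instance
built as in the previous sketch); IMaCh's own implementation may differ:</p>

<pre>
# Minimal sketch (not IMaCh code): P(x, x+h), with h = n steps of stepm months,
# as the product of n one-step matrices (Chapman-Kolmogorov equation).
import numpy as np

def long_term_probabilities(step_matrix, x, n, stepm=1):
    """Return the matrix of probabilities pij(x, x + n*stepm months)."""
    P = np.eye(step_matrix(x).shape[0])
    age = x
    for _ in range(n):
        P = P @ step_matrix(age)        # multiply step by step
        age += stepm / 12.0             # advance age by one interpolation step
    return P

# Usage (with a hypothetical step_matrix): long_term_probabilities(step_matrix, 100, 72)
# would give the 6-year probabilities p11(100,106), p12(100,106), ... listed above.
</pre>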
                    976: 
                    977: <h5><font color="#EC5E5E" size="3"><b>- </b></font><a
1.2     ! brouard   978: name="Period prevalence in each state"><font color="#EC5E5E"
        !           979: size="3"><b>Period prevalence in each state</b></font></a><b>:
        !           980: </b><a href="biaspar/plrbiaspar.txt"><b>biaspar/plrbiaspar.txt</b></a></h5>
1.1       brouard   981: 
                    982: <pre>#Prevalence
                    983: #Age 1-1 2-2
                    984: 
                    985: #************ 
                    986: 70 0.90134 0.09866
                    987: 71 0.89177 0.10823 
                    988: 72 0.88139 0.11861 
                    989: 73 0.87015 0.12985 </pre>
                    990: 
1.2     ! brouard   991: <p>At age 70 the period prevalence is 0.90134 in state 1 and 0.09866
        !           992: in state 2. This period prevalence differs from the cross-sectional
        !           993: prevalence. Here is the point. The cross-sectional prevalence at age
        !           994: 70 results from the incidence of disability, incidence of recovery and
        !           995: mortality which occurred in the past of the cohort.  Period prevalence
        !           996: results from a simulation with current incidences of disability,
        !           997: recovery and mortality estimated from this cross-longitudinal
survey. It is a good prediction of the prevalence in the
        !           999: future if &quot;nothing changes in the future&quot;. This is exactly
        !          1000: what demographers do with a period life table. Life expectancy is the
        !          1001: expected mean survival time if current mortality rates (age-specific incidences
        !          1002: of mortality) &quot;remain constant&quot; in the future. </p>
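<p>In other words, the period prevalence is the stable distribution among the living
states obtained by running the estimated incidences from a much younger age up to
the age of interest. A minimal sketch of this idea, again assuming the hypothetical
<tt>step_matrix(x)</tt> helper of the previous sketches (IMaCh's own computation of
this limit may differ):</p>

<pre>
# Minimal sketch (not IMaCh code): period (stable) prevalence at age x, obtained by
# applying the one-step matrices from an earlier age and renormalising among the
# living states.
import numpy as np

def period_prevalence(step_matrix, x, start_age=50.0, stepm=1, nlstate=2):
    """Distribution among living states at age x, starting healthy at start_age."""
    n_steps = int(round((x - start_age) * 12.0 / stepm))
    dist = np.zeros(step_matrix(start_age).shape[0])
    dist[0] = 1.0                          # everybody healthy at the starting age
    age = start_age
    for _ in range(n_steps):
        dist = dist @ step_matrix(age)     # one interpolation step forward
        age += stepm / 12.0
    living = dist[:nlstate]
    return living / living.sum()           # renormalise among survivors

# Usage (hypothetical step_matrix): period_prevalence(step_matrix, 70.0) should
# approach the values 0.90134 / 0.09866 printed in plrbiaspar.txt above.
</pre>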
1.1       brouard  1003: 
                   1004: <h5><font color="#EC5E5E" size="3"><b>- Standard deviation of
1.2     ! brouard  1005: period prevalence</b></font><b>: </b><a
        !          1006: href="biaspar/vplrbiaspar.txt"><b>biaspar/vplrbiaspar.txt</b></a></h5>
1.1       brouard  1007: 
<p>The period prevalence has to be compared with the cross-sectional
prevalence. But both are statistical estimates and therefore
have confidence intervals.
For the cross-sectional prevalence we generally need information on
the design of the surveys. It is usually not enough to consider the
number of people surveyed at a particular age and to estimate a
Bernoulli confidence interval based on the prevalence at that
age. But you can do it to get an idea of the randomness. At least you
can get a visual appreciation of the randomness by looking at the
fluctuations over ages.</p>
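<p>A crude illustration of such a Bernoulli (binomial) interval, ignoring the
survey design (which is precisely why it should only be taken as an indication of
the randomness):</p>

<pre>
# Crude sketch, ignoring the survey design: 95% binomial confidence interval for an
# observed prevalence p estimated from n people of a given age.
from math import sqrt

def binomial_ci(p, n, z=1.96):
    se = sqrt(p * (1.0 - p) / n)
    return max(0.0, p - z * se), min(1.0, p + z * se)

# Age 72 in prbiaspar.txt above: prevalence 0.02875 in state 2 among 1148 people
print(binomial_ci(0.02875, 1148))   # roughly (0.019, 0.038)
</pre>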
        !          1018: 
<p> For the period prevalence it is possible to estimate the
confidence interval from the Hessian matrix (see the publication for
details). We are supposing that the design of the survey will only
alter the weight of each individual. IMaCh scales the weights of the
individual-waves contributing to the likelihood so that the sum of
the weights equals the number of contributing individual-waves: a
weighted survey doesn't increase or decrease the size of the survey,
it only gives more weight to some individuals and thus less to
others.</p>
1.1       brouard  1028: 
1.2     ! brouard  1029: <h5><font color="#EC5E5E" size="3">-cross-sectional and period
1.1       brouard  1030: prevalence in state (2=disable) with confidence interval</font>:<b>
1.2     ! brouard  1031: </b><a href="biaspar/vbiaspar21.htm"><b>biaspar/vbiaspar21.png</b></a></h5>
1.1       brouard  1032: 
<p>This graph exhibits the period prevalence in state (2) with the
confidence interval in red. The green curve is the observed prevalence
(or proportion of individuals in state (2)).  Without discussing the
results (it is not the purpose here), we observe that the green curve
is rather below the period prevalence. If the data were not biased by
the non-inclusion of people living in institutions, we would have
concluded that the prevalence of disability will increase in the
future (see the main publication if you are interested in real data
and results, which are the opposite).</p>
1.1       brouard  1042: 
1.2     ! brouard  1043: <p><img src="biaspar/vbiaspar21.png" width="400" height="300"></p>
1.1       brouard  1044: 
                   1045: <h5><font color="#EC5E5E" size="3"><b>-Convergence to the
1.2     ! brouard  1046: period prevalence of disability</b></font><b>: </b><a
        !          1047: href="biaspar/pbiaspar11.png"><b>biaspar/pbiaspar11.png</b></a><br>
        !          1048: <img src="biaspar/pbiaspar11.png" width="400" height="300"> </h5>
1.1       brouard  1049: 
                   1050: <p>This graph plots the conditional transition probabilities from
                   1051: an initial state (1=healthy in red at the bottom, or 2=disable in
                   1052: green on top) at age <em>x </em>to the final state 2=disable<em> </em>at
age <em>x+h</em>. Conditional means conditional on being alive
at age <em>x+h</em>, which is <i>hP12x</i> + <em>hP22x</em>. The
curves <i>hP12x/(hP12x</i> + <em>hP22x) </em>and <i>hP22x/(hP12x</i>
+ <em>hP22x) </em>converge with <em>h</em> to the <em>period
prevalence of disability</em>. In order to get the period
prevalence at age 70 we should start the process at an earlier
age, i.e. 50. If the disability state is defined by severe
disability criteria with only a small chance of recovery, then the
incidence of recovery is low and the time to convergence is
probably longer. But we do not have enough experience with this yet.</p>
                   1063: 
                   1064: <h5><font color="#EC5E5E" size="3"><b>- Life expectancies by age
                   1065: and initial health status with standard deviation</b></font><b>: </b><a
1.2     ! brouard  1066: href="biaspar/erbiaspar.txt"><b>biaspar/erbiaspar.txt</b></a></h5>
1.1       brouard  1067: 
                   1068: <pre># Health expectancies 
                   1069: # Age 1-1 (SE) 1-2 (SE) 2-1 (SE) 2-2 (SE)
1.2     ! brouard  1070:  70   11.0180 (0.1277)    3.1950 (0.3635)    4.6500 (0.0871)    4.4807 (0.2187)
        !          1071:  71   10.4786 (0.1184)    3.2093 (0.3212)    4.3384 (0.0875)    4.4820 (0.2076)
        !          1072:  72    9.9551 (0.1103)    3.2236 (0.2827)    4.0426 (0.0885)    4.4827 (0.1966)
        !          1073:  73    9.4476 (0.1035)    3.2379 (0.2478)    3.7621 (0.0899)    4.4825 (0.1858)
        !          1074:  74    8.9564 (0.0980)    3.2522 (0.2165)    3.4966 (0.0920)    4.4815 (0.1754)
        !          1075:  75    8.4815 (0.0937)    3.2665 (0.1887)    3.2457 (0.0946)    4.4798 (0.1656)
        !          1076:  76    8.0230 (0.0905)    3.2806 (0.1645)    3.0090 (0.0979)    4.4772 (0.1565)
        !          1077:  77    7.5810 (0.0884)    3.2946 (0.1438)    2.7860 (0.1017)    4.4738 (0.1484)
        !          1078:  78    7.1554 (0.0871)    3.3084 (0.1264)    2.5763 (0.1062)    4.4696 (0.1416)
        !          1079:  79    6.7464 (0.0867)    3.3220 (0.1124)    2.3794 (0.1112)    4.4646 (0.1364)
        !          1080:  80    6.3538 (0.0868)    3.3354 (0.1014)    2.1949 (0.1168)    4.4587 (0.1331)
        !          1081:  81    5.9775 (0.0873)    3.3484 (0.0933)    2.0222 (0.1230)    4.4520 (0.1320)
1.1       brouard  1082: </pre>
                   1083: 
1.2     ! brouard  1084: <pre>For example  70  11.0180 (0.1277) 3.1950 (0.3635) 4.6500 (0.0871)  4.4807 (0.2187)
        !          1085: means
        !          1086: e11=11.0180 e12=3.1950 e21=4.6500 e22=4.4807 </pre>
1.1       brouard  1087: 
1.2     ! brouard  1088: <pre><img src="biaspar/expbiaspar21.png" width="400" height="300"><img
        !          1089: src="biaspar/expbiaspar11.png" width="400" height="300"></pre>
1.1       brouard  1090: 
<p>For example, the life expectancy of a healthy individual at age 70
is 11.0 years in the healthy state and 3.2 years in the disability state
(a total of 14.2 years). If he is disabled at age 70, his life expectancy
will be shorter: 4.65 years in the healthy state and 4.5 in the
        !          1095: disability state (=9.15 years). The total life expectancy is a
        !          1096: weighted mean of both, 14.2 and 9.15. The weight is the proportion
        !          1097: of people disabled at age 70. In order to get a period index
1.1       brouard  1098: (i.e. based only on incidences) we use the <a
1.2     ! brouard  1099: href="#Period prevalence in each state">stable or
        !          1100: period prevalence</a> at age 70 (i.e. computed from
1.1       brouard  1101: incidences at earlier ages) instead of the <a
1.2     ! brouard  1102: href="#cross-sectional prevalence in each state">cross-sectional prevalence</a>
        !          1103: (observed for example at first medical exam) (<a href="#Health expectancies">see
1.1       brouard  1104: below</a>).</p>
                   1105: 
                   1106: <h5><font color="#EC5E5E" size="3"><b>- Variances of life
                   1107: expectancies by age and initial health status</b></font><b>: </b><a
1.2     ! brouard  1108: href="biaspar/vrbiaspar.txt"><b>biaspar/vrbiaspar.txt</b></a></h5>
1.1       brouard  1109: 
                   1110: <p>For example, the covariances of life expectancies Cov(ei,ej)
                   1111: at age 50 are (line 3) </p>
                   1112: 
                   1113: <pre>   Cov(e1,e1)=0.4776  Cov(e1,e2)=0.0488=Cov(e2,e1)  Cov(e2,e2)=0.0424</pre>
                   1114: 
                   1115: <h5><font color="#EC5E5E" size="3"><b>-Variances of one-step
1.2     ! brouard  1116: probabilities </b></font><b>: </b><a href="biaspar/probrbiaspar.txt"><b>biaspar/probrbiaspar.txt</b></a></h5>
1.1       brouard  1117: 
                   1118: <p>For example, at age 65</p>
                   1119: 
                   1120: <pre>   p11=9.960e-001 standard deviation of p11=2.359e-004</pre>
                   1121: 
                   1122: <h5><font color="#EC5E5E" size="3"><b>- </b></font><a
                   1123: name="Health expectancies"><font color="#EC5E5E" size="3"><b>Health
                   1124: expectancies</b></font></a><font color="#EC5E5E" size="3"><b>
                   1125: with standard errors in parentheses</b></font><b>: </b><a
1.2     ! brouard  1126: href="biaspar/trbiaspar.txt"><font face="Courier New"><b>biaspar/trbiaspar.txt</b></font></a></h5>
1.1       brouard  1127: 
                   1128: <pre>#Total LEs with variances: e.. (std) e.1 (std) e.2 (std) </pre>
                   1129: 
                   1130: <pre>70 13.26 (0.22) 9.95 (0.20) 3.30 (0.14) </pre>
                   1131: 
<p>Thus, at age 70 the total life expectancy, e..=13.26 years, is
1.2     ! brouard  1133: the weighted mean of e1.=13.46 and e2.=11.35 by the period
        !          1134: prevalences at age 70 which are 0.90134 in state 1 and 0.09866 in
        !          1135: state 2 respectively (the sum is equal to one). e.1=9.95 is the
1.1       brouard  1136: Disability-free life expectancy at age 70 (it is again a weighted
                   1137: mean of e11 and e21). e.2=3.30 is also the life expectancy at age
                   1138: 70 to be spent in the disability state.</p>
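<p>A quick numerical check of this weighted mean, using the figures quoted just
above (the small discrepancy is due to rounding):</p>

<pre>
# Weighted mean of the total life expectancies by initial state, with the period
# prevalences at age 70 as weights (figures quoted in the text above).
pi1, pi2 = 0.90134, 0.09866        # period prevalences at age 70
e1, e2 = 13.46, 11.35              # total life expectancies e1. and e2.
print(pi1 * e1 + pi2 * e2)         # about 13.25, i.e. e.. = 13.26 up to rounding
</pre>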
                   1139: 
                   1140: <h5><font color="#EC5E5E" size="3"><b>-Total life expectancy by
                   1141: age and health expectancies in states (1=healthy) and (2=disable)</b></font><b>:
1.2     ! brouard  1142: </b><a href="biaspar/ebiaspar1.png"><b>biaspar/ebiaspar1.png</b></a></h5>
1.1       brouard  1143: 
                   1144: <p>This figure represents the health expectancies and the total
1.2     ! brouard  1145: life expectancy with a confidence interval (dashed line). </p>
1.1       brouard  1146: 
1.2     ! brouard  1147: <pre>        <img src="biaspar/ebiaspar1.png" width="400" height="300"></pre>
1.1       brouard  1148: 
                   1149: <p>Standard deviations (obtained from the information matrix of
                   1150: the model) of these quantities are very useful.
                   1151: Cross-longitudinal surveys are costly and do not involve huge
samples, generally a few thousand; therefore it is very
important to have an idea of the standard deviation of our
estimates. It has been a big challenge to compute the Health
Expectancy standard deviations. Don't be confused: life expectancy
                   1156: is, as any expected value, the mean of a distribution; but here
                   1157: we are not computing the standard deviation of the distribution,
                   1158: but the standard deviation of the estimate of the mean.</p>
                   1159: 
                   1160: <p>Our health expectancies estimates vary according to the sample
                   1161: size (and the standard deviations give confidence intervals of
the estimates) but also according to the model fitted. Let us
explain this in more detail.</p>
                   1164: 
<p>Choosing a model means at least two kinds of choices. First, we
have to decide on the number of disability states. Second, we have to
design, within the logit model family, the model itself: the variables,
covariates, confounding factors, etc. to be included.</p>
1.1       brouard  1169: 
<p>The more disability states we have, the better our demographic
description of the disability process, but the smaller the number of
transitions between each state and the higher the noise in the
measurement. We do not have enough experience with the various
models to summarize their advantages and disadvantages, but it is
important to say that even if we had huge and unbiased samples,
the total life expectancy computed from a cross-longitudinal
survey varies with the number of states. If we define only two
states, alive or dead, we find the usual life expectancy, where it
is assumed that at each age people are at the same risk of dying.
If we differentiate the alive state into healthy and
disabled, and as the mortality from the disability state is higher
than the mortality from the healthy state, we are introducing
heterogeneity in the risk of dying. The total mortality at each
age is the mean of the mortality in each state weighted by the
prevalence in each state. Therefore if the proportion of people
at each age and in each state is different from the period
equilibrium, there is no reason to find the same total mortality
at a particular age. Life expectancy, even if it is a very useful
tool, relies on a very strong hypothesis of homogeneity of the
population. Our main purpose is not to measure differential
mortality but to measure the expected time in a healthy or
disability state, in order to maximise the former and minimize the
latter. But the differential in mortality complicates the
measurement.</p>
                   1195: 
<p>Incidences of disability or recovery are not affected by the number
of states if these states are independent. But incidence estimates
do depend on the specification of the model. The more covariates we
add to the logit model, the better the model, but some covariates are
not well measured and some are confounding factors, as in any
statistical model. The procedure to &quot;fit the best model&quot; is
similar to logistic regression, which itself is similar to regression
analysis. We haven't gone that far yet, because we also have a severe
limitation, which is the speed of convergence. On a Pentium III,
500 MHz, even the simplest model, estimated by month on 8,000 people,
may take 4 hours to converge.  Also, the IMaCh program is not a
statistical package and does not allow sophisticated design
variables. If you need sophisticated design variables you have to construct
them yourself and add them as ordinary variables (see the small sketch
after this paragraph). IMaCh allows up to 8
variables. The current version of this program only allows the addition of
simple variables like age+sex or age+sex+age*sex, but will never be
general enough. What is to be remembered is that the incidences, or
probabilities of change from one state to another, are affected by the
variables specified in the model.</p>
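<p>For instance, such a design variable can be prepared outside IMaCh and written
into the data file as an ordinary covariate column. A hypothetical sketch (the
column positions and the meaning of V1, V2 are placeholders; adapt them to your
own data file):</p>

<pre>
# Hypothetical sketch (not IMaCh code): building an interaction variable "by hand"
# so that it can be added to the data file as an ordinary covariate column.
# Here V1 could be sex (0/1) and V2 education (0/1); V3 = V1*V2 is appended.
def add_interaction(rows, i1, i2):
    """rows: lists of covariate values; append the product of columns i1 and i2."""
    for row in rows:
        row.append(row[i1] * row[i2])
    return rows

covariates = [[1, 0], [1, 1], [0, 1]]
print(add_interaction(covariates, 0, 1))   # [[1, 0, 0], [1, 1, 1], [0, 1, 0]]
</pre>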
1.1       brouard  1215: 
<p>Also, the age range of the people interviewed is linked to
the age range of the life expectancies which can be estimated by
extrapolation. If your sample ranges from age 70 to 95, you can
clearly estimate a life expectancy at age 70 and trust your
confidence interval, because it is mostly based on your sample size,
but if you want to estimate the life expectancy at age 50, you
should rely on the design of your model. Fitting a logistic model on an age
range of 70 to 95 and estimating probabilities of transition outside
this age range, say at age 50, is very dangerous. At least you
should remember that the confidence intervals given by the
standard deviations of the health expectancies are under the
strong assumption that your model is the 'true model', which is
probably not the case outside the age range of your sample.</p>
1.1       brouard  1229: 
                   1230: <h5><font color="#EC5E5E" size="3"><b>- Copy of the parameter
                   1231: file</b></font><b>: </b><a href="orbiaspar.txt"><b>orbiaspar.txt</b></a></h5>
                   1232: 
                   1233: <p>This copy of the parameter file can be useful to re-run the
                   1234: program while saving the old output files. </p>
                   1235: 
                   1236: <h5><font color="#EC5E5E" size="3"><b>- Prevalence forecasting</b></font><b>:
1.2     ! brouard  1237: </b><a href="biaspar/frbiaspar.txt"><b>biaspar/frbiaspar.txt</b></a></h5>
1.1       brouard  1238: 
<p>
First,
we have estimated the observed prevalence between 1/1/1984 and
1/6/1988 (June; European date syntax). The mean date of all interviews (weighted average of the
interviews performed between 1/1/1984 and 1/6/1988) is estimated
to be 13/9/1985, as written at the top of the file. Then we
forecast the probability of being in each state. </p>
                   1247: 
1.2     ! brouard  1248: <p>
        !          1249: For example on 1/1/1989 : </p>
1.1       brouard  1250: 
                   1251: <pre class="MsoNormal"># StartingAge FinalAge P.1 P.2 P.3
                   1252: # Forecasting at date 1/1/1989
                   1253:   73 0.807 0.078 0.115</pre>
                   1254: 
<p>
Since the minimum age is 70 on 13/9/1985, the youngest forecasted
age is 73. This means that a person aged 70 on 13/9/1985 has a
probability of 0.807 of being in state 1 at age 73 on 1/1/1989.
Similarly, the probability of being in state 2 is 0.078 and the
probability of having died is 0.115. Then, on 1/1/1989, the prevalence of
disability at age 73 is estimated to be 0.088.</p>
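<p>The 0.088 is simply the forecasted probability of being disabled conditional on
being alive at that date:</p>

<pre>
# Forecasted prevalence of disability, conditional on survival:
p1, p2, p3 = 0.807, 0.078, 0.115   # states 1, 2 and death at age 73 on 1/1/1989
print(p2 / (p1 + p2))              # 0.0881..., the 0.088 quoted above
</pre>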
1.1       brouard  1263: 
                   1264: <h5><font color="#EC5E5E" size="3"><b>- Population forecasting</b></font><b>:
1.2     ! brouard  1265: </b><a href="biaspar/poprbiaspar.txt"><b>biaspar/poprbiaspar.txt</b></a></h5>
1.1       brouard  1266: 
                   1267: <pre># Age P.1 P.2 P.3 [Population]
                   1268: # Forecasting at date 1/1/1989 
                   1269: 75 572685.22 83798.08 
                   1270: 74 621296.51 79767.99 
                   1271: 73 645857.70 69320.60 </pre>
                   1272: 
<pre># Forecasting at date 1/1/1990 
                   1274: 76 442986.68 92721.14 120775.48
                   1275: 75 487781.02 91367.97 121915.51
                   1276: 74 512892.07 85003.47 117282.76 </pre>
                   1277: 
<p>From the population file, we estimate the number of people in
each state. At age 73, 645857 persons are in state 1 and 69320
are in state 2. One year later, at age 74, 512892 are in state 1,
85003 are in state 2 and 117282 died before 1/1/1990.</p>
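<p>In principle this is just the forecasted one-year transition matrix applied to the
population vector. A minimal sketch with hypothetical one-year probabilities chosen
to be roughly consistent with the figures above (the true matrix is the one IMaCh
derives from the fitted parameters):</p>

<pre>
# Minimal sketch (hypothetical one-year probabilities, not IMaCh output): projecting
# a population vector one year forward. Rows give the destination probabilities
# (healthy, disabled, dead) from state 1 and from state 2.
import numpy as np

pop_1989 = np.array([645857.70, 69320.60])     # age 73 on 1/1/1989, states 1 and 2

P = np.array([[0.76, 0.10, 0.14],              # hypothetical, roughly consistent
              [0.32, 0.29, 0.39]])             # with the figures quoted above

pop_1990 = pop_1989 @ P                        # age 74 on 1/1/1990: state 1, state 2, deaths
print(pop_1990)
</pre>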
                   1282: 
                   1283: <hr>
                   1284: 
                   1285: <h2><a name="example"></a><font color="#00006A">Trying an example</font></h2>
                   1286: 
                   1287: <p>Since you know how to run the program, it is time to test it
on your own computer. Try it, for example, on a parameter file named <a
href="imachpar.imach">imachpar.imach</a>, which is a copy
of <font size="2" face="Courier New">mypar.imach</font> included
in the imach subdirectory <font size="2" face="Courier New">mytry</font>.
Edit it and change the name of the data file to <font size="2"
face="Courier New">mydata.txt</font> if you don't want to
copy it into the same directory. The file <font face="Courier New">mydata.txt</font>
                   1295: is a smaller file of 3,000 people but still with 4 waves. </p>
                   1296: 
<p>Right-click on the .imach file and a window will pop up with the
string '<strong>Enter the parameter file name:</strong>'</p>
1.1       brouard  1299: 
                   1300: <table border="1">
                   1301:     <tr>
1.2     ! brouard  1302:         <td width="100%"><strong>IMACH, Version 0.97b</strong><p><strong>Enter
        !          1303:         the parameter file name: imachpar.imach</strong></p>
1.1       brouard  1304:         </td>
                   1305:     </tr>
                   1306: </table>
                   1307: 
<p>Most of the data files or image files generated will include the
'imachpar' string in their name. The running time is about 2-3
minutes on a Pentium III. If the execution worked correctly, the
output files are created in the current directory and should be
                   1312: the same as the mypar files initially included in the directory <font
                   1313: size="2" face="Courier New">mytry</font>.</p>
                   1314: 
                   1315: <ul>
                   1316:     <li><pre><u>Output on the screen</u> The output screen looks like <a
1.2     ! brouard  1317: href="biaspar.log">biaspar.log</a>
1.1       brouard  1318: #
title=MLE datafile=mydata.txt lastobs=3000 firstpass=1 lastpass=3
1.1       brouard  1320: ftol=1.000000e-008 stepm=24 ncovcol=2 nlstate=2 ndeath=1 maxwav=4 mle=1 weight=0</pre>
                   1321:     </li>
                   1322:     <li><pre>Total number of individuals= 2965, Agemin = 70.00, Agemax= 100.92
                   1323: 
                   1324: Warning, no any valid information for:126 line=126
                   1325: Warning, no any valid information for:2307 line=2307
                   1326: Delay (in months) between two waves Min=21 Max=51 Mean=24.495826
                   1327: <font face="Times New Roman">These lines give some warnings on the data file and also some raw statistics on frequencies of transitions.</font>
                   1328: Age 70 1.=230 loss[1]=3.5% 2.=16 loss[2]=12.5% 1.=222 prev[1]=94.1% 2.=14
                   1329:  prev[2]=5.9% 1-1=8 11=200 12=7 13=15 2-1=2 21=6 22=7 23=1
                   1330: Age 102 1.=0 loss[1]=NaNQ% 2.=0 loss[2]=NaNQ% 1.=0 prev[1]=NaNQ% 2.=0 </pre>
                   1331:     </li>
                   1332: </ul>
It includes some warnings or errors which are very important for
you. Be careful with such warnings, because your results may be biased
if, for example, you have people who agreed to be interviewed at the
first pass but never afterwards, or if you don't have the exact month of
death. In such cases IMaCh doesn't take any initiative, it only
warns you. It is up to you to decide what to do with these
people. Excluding them is usually a wrong decision. It is better to
decide, for example, that the month of death is at the mid-interval between the last
two waves (a small sketch of this imputation follows).<p>

If your survey suffers from severe attrition, you have to analyse the
characteristics of the lost people and, for example, overweight people with the same
characteristics.
<p>
By default, IMaCh warns about and excludes these problematic people, but you
have to be careful with such results.
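<p>A minimal sketch of such a mid-interval imputation (this is a pre-processing step
you would apply to your own data file, not something IMaCh does for you):</p>

<pre>
# Minimal sketch (pre-processing, not IMaCh code): imputing a month of death at the
# mid-point between the last two interview dates, each given as (month, year).
def mid_interval(last_wave, previous_wave):
    """Return (month, year) halfway between two (month, year) dates."""
    m1 = previous_wave[1] * 12 + previous_wave[0]
    m2 = last_wave[1] * 12 + last_wave[0]
    mid = (m1 + m2) // 2
    return (mid - 1) % 12 + 1, (mid - 1) // 12

print(mid_interval((6, 1990), (6, 1988)))   # (6, 1989)
</pre>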
1.1       brouard  1349: 
                   1350: <p>&nbsp;</p>
                   1351: 
                   1352: <ul>
    <li>Maximisation with the Powell algorithm. 8 directions are
        given corresponding to the 8 parameters. It can take a
        rather long time to reach convergence.<br>
                   1356:         <font size="1" face="Courier New"><br>
                   1357:         Powell iter=1 -2*LL=11531.405658264877 1 0.000000000000 2
                   1358:         0.000000000000 3<br>
                   1359:         0.000000000000 4 0.000000000000 5 0.000000000000 6
                   1360:         0.000000000000 7 <br>
                   1361:         0.000000000000 8 0.000000000000<br>
                   1362:         1..........2.................3..........4.................5.........<br>
                   1363:         6................7........8...............<br>
                   1364:         Powell iter=23 -2*LL=6744.954108371555 1 -12.967632334283
                   1365:         <br>
                   1366:         2 0.135136681033 3 -7.402109728262 4 0.067844593326 <br>
                   1367:         5 -0.673601538129 6 -0.006615504377 7 -5.051341616718 <br>
                   1368:         8 0.051272038506<br>
                   1369:         1..............2...........3..............4...........<br>
                   1370:         5..........6................7...........8.........<br>
                   1371:         #Number of iterations = 23, -2 Log likelihood =
                   1372:         6744.954042573691<br>
                   1373:         # Parameters<br>
                   1374:         12 -12.966061 0.135117 <br>
                   1375:         13 -7.401109 0.067831 <br>
                   1376:         21 -0.672648 -0.006627 <br>
                   1377:         23 -5.051297 0.051271 </font><br>
                   1378:         </li>
                   1379:     <li><pre><font size="2">Calculation of the hessian matrix. Wait...
                   1380: 12345678.12.13.14.15.16.17.18.23.24.25.26.27.28.34.35.36.37.38.45.46.47.48.56.57.58.67.68.78
                   1381: 
                   1382: Inverting the hessian to get the covariance matrix. Wait...
                   1383: 
                   1384: #Hessian matrix#
                   1385: 3.344e+002 2.708e+004 -4.586e+001 -3.806e+003 -1.577e+000 -1.313e+002 3.914e-001 3.166e+001 
                   1386: 2.708e+004 2.204e+006 -3.805e+003 -3.174e+005 -1.303e+002 -1.091e+004 2.967e+001 2.399e+003 
                   1387: -4.586e+001 -3.805e+003 4.044e+002 3.197e+004 2.431e-002 1.995e+000 1.783e-001 1.486e+001 
                   1388: -3.806e+003 -3.174e+005 3.197e+004 2.541e+006 2.436e+000 2.051e+002 1.483e+001 1.244e+003 
                   1389: -1.577e+000 -1.303e+002 2.431e-002 2.436e+000 1.093e+002 8.979e+003 -3.402e+001 -2.843e+003 
                   1390: -1.313e+002 -1.091e+004 1.995e+000 2.051e+002 8.979e+003 7.420e+005 -2.842e+003 -2.388e+005 
                   1391: 3.914e-001 2.967e+001 1.783e-001 1.483e+001 -3.402e+001 -2.842e+003 1.494e+002 1.251e+004 
                   1392: 3.166e+001 2.399e+003 1.486e+001 1.244e+003 -2.843e+003 -2.388e+005 1.251e+004 1.053e+006 
                   1393: # Scales
                   1394: 12 1.00000e-004 1.00000e-006
                   1395: 13 1.00000e-004 1.00000e-006
                   1396: 21 1.00000e-003 1.00000e-005
                   1397: 23 1.00000e-004 1.00000e-005
                   1398: # Covariance
                   1399:   1 5.90661e-001
                   1400:   2 -7.26732e-003 8.98810e-005
                   1401:   3 8.80177e-002 -1.12706e-003 5.15824e-001
                   1402:   4 -1.13082e-003 1.45267e-005 -6.50070e-003 8.23270e-005
                   1403:   5 9.31265e-003 -1.16106e-004 6.00210e-004 -8.04151e-006 1.75753e+000
                   1404:   6 -1.15664e-004 1.44850e-006 -7.79995e-006 1.04770e-007 -2.12929e-002 2.59422e-004
                   1405:   7 1.35103e-003 -1.75392e-005 -6.38237e-004 7.85424e-006 4.02601e-001 -4.86776e-003 1.32682e+000
                   1406:   8 -1.82421e-005 2.35811e-007 7.75503e-006 -9.58687e-008 -4.86589e-003 5.91641e-005 -1.57767e-002 1.88622e-004
                   1407: # agemin agemax for lifexpectancy, bage fage (if mle==0 ie no data nor Max likelihood).
                   1408: 
                   1409: 
                   1410: agemin=70 agemax=100 bage=50 fage=100
                   1411: Computing prevalence limit: result on file 'plrmypar.txt' 
                   1412: Computing pij: result on file 'pijrmypar.txt' 
                   1413: Computing Health Expectancies: result on file 'ermypar.txt' 
                   1414: Computing Variance-covariance of DFLEs: file 'vrmypar.txt' 
                   1415: Computing Total LEs with variances: file 'trmypar.txt' 
                   1416: Computing Variance-covariance of Prevalence limit: file 'vplrmypar.txt' 
                   1417: End of Imach
                   1418: </font></pre>
                   1419:     </li>
                   1420: </ul>
                   1421: 
<p><font size="3">Once the run is finished, the program
asks for a character:</font></p>
1.1       brouard  1424: 
                   1425: <table border="1">
                   1426:     <tr>
                   1427:         <td width="100%"><strong>Type e to edit output files, g
                   1428:         to graph again, c to start again, and q for exiting:</strong></td>
                   1429:     </tr>
                   1430: </table>
                   1431: 
In order to give an idea of the time needed to reach convergence,
IMaCh estimates the time needed if convergence requires 10, 20 or 30
iterations. It might be useful.
        !          1435: 
1.1       brouard  1436: <p><font size="3">First you should enter <strong>e </strong>to
                   1437: edit the master file mypar.htm. </font></p>
                   1438: 
                   1439: <ul>
                   1440:     <li><u>Outputs files</u> <br>
                   1441:         <br>
                   1442:         - Copy of the parameter file: <a href="ormypar.txt">ormypar.txt</a><br>
                   1443:         - Gnuplot file name: <a href="mypar.gp.txt">mypar.gp.txt</a><br>
1.2     ! brouard  1444:         - Cross-sectional prevalence in each state: <a
1.1       brouard  1445:         href="prmypar.txt">prmypar.txt</a> <br>
1.2     ! brouard  1446:         - Period prevalence in each state: <a
1.1       brouard  1447:         href="plrmypar.txt">plrmypar.txt</a> <br>
                   1448:         - Transition probabilities: <a href="pijrmypar.txt">pijrmypar.txt</a><br>
                   1449:         - Life expectancies by age and initial health status
                   1450:         (estepm=24 months): <a href="ermypar.txt">ermypar.txt</a>
                   1451:         <br>
                   1452:         - Parameter file with estimated parameters and the
                   1453:         covariance matrix: <a href="rmypar.txt">rmypar.txt</a> <br>
                   1454:         - Variance of one-step probabilities: <a
                   1455:         href="probrmypar.txt">probrmypar.txt</a> <br>
                   1456:         - Variances of life expectancies by age and initial
                   1457:         health status (estepm=24 months): <a href="vrmypar.txt">vrmypar.txt</a><br>
                   1458:         - Health expectancies with their variances: <a
                   1459:         href="trmypar.txt">trmypar.txt</a> <br>
1.2     ! brouard  1460:         - Standard deviation of period prevalences: <a
1.1       brouard  1461:         href="vplrmypar.txt">vplrmypar.txt</a> <br>
                   1462:         No population forecast: popforecast = 0 (instead of 1) or
                   1463:         stepm = 24 (instead of 1) or model=. (instead of .)<br>
                   1464:         <br>
                   1465:         </li>
                   1466:     <li><u>Graphs</u> <br>
                   1467:         <br>
                   1468:         -<a href="../mytry/pemypar1.gif">One-step transition
                   1469:         probabilities</a><br>
                   1470:         -<a href="../mytry/pmypar11.gif">Convergence to the
1.2     ! brouard  1471:         period prevalence</a><br>
        !          1472:         -<a href="..\mytry\vmypar11.gif">Cross-sectional and period
        prevalence in state (1) with the confidence interval</a> <br>
1.2     ! brouard  1474:         -<a href="..\mytry\vmypar21.gif">Cross-sectional and period
        prevalence in state (2) with the confidence interval</a> <br>
                   1476:         -<a href="..\mytry\expmypar11.gif">Health life
                   1477:         expectancies by age and initial health state (1)</a> <br>
                   1478:         -<a href="..\mytry\expmypar21.gif">Health life
                   1479:         expectancies by age and initial health state (2)</a> <br>
                   1480:         -<a href="..\mytry\emypar1.gif">Total life expectancy by
                   1481:         age and health expectancies in states (1) and (2).</a> </li>
                   1482: </ul>
                   1483: 
<p>This software has been partly funded by <a
href="http://euroreves.ined.fr">Euro-REVES</a>, a concerted
action of the European Union. It will be copyrighted
identically to a GNU software product, i.e. the program and software
can be distributed freely for non-commercial use. Sources are not
                   1489: widely distributed today. You can get them by asking us with a
                   1490: simple justification (name, email, institute) <a
                   1491: href="mailto:brouard@ined.fr">mailto:brouard@ined.fr</a> and <a
                   1492: href="mailto:lievre@ined.fr">mailto:lievre@ined.fr</a> .</p>
                   1493: 
<p>The latest version (0.97b of June 2004) can be accessed at <a
1.1       brouard  1495: href="http://euroreves.ined.fr/imach">http://euroreves.ined.fr/imach</a><br>
                   1496: </p>
                   1497: </body>
                   1498: </html>
