<!-- $Id: imach.htm,v 1.11 2002/03/11 22:52:27 brouard Exp $ -->
    2: <html>
    3: 
    4: <head>
    5: <meta http-equiv="Content-Type"
    6: content="text/html; charset=iso-8859-1">
    7: <meta name="GENERATOR" content="Microsoft FrontPage Express 2.0">
    8: <title>Computing Health Expectancies using IMaCh</title>
    9: <!-- Changed by: Agnes Lievre, 12-Oct-2000 -->
   17: </head>
   18: 
   19: <body bgcolor="#FFFFFF">
   20: 
   21: <hr size="3" color="#EC5E5E">
   22: 
   23: <h1 align="center"><font color="#00006A">Computing Health
   24: Expectancies using IMaCh</font></h1>
   25: 
   26: <h1 align="center"><font color="#00006A" size="5">(a Maximum
   27: Likelihood Computer Program using Interpolation of Markov Chains)</font></h1>
   28: 
   29: <p align="center">&nbsp;</p>
   30: 
   31: <p align="center"><a href="http://www.ined.fr/"><img
   32: src="logo-ined.gif" border="0" width="151" height="76"></a><img
   33: src="euroreves2.gif" width="151" height="75"></p>
   34: 
   35: <h3 align="center"><a href="http://www.ined.fr/"><font
   36: color="#00006A">INED</font></a><font color="#00006A"> and </font><a
   37: href="http://euroreves.ined.fr"><font color="#00006A">EUROREVES</font></a></h3>
   38: 
   39: <p align="center"><font color="#00006A" size="4"><strong>Version
   40: 0.71a, March 2002</strong></font></p>
   41: 
   42: <hr size="3" color="#EC5E5E">
   43: 
   44: <p align="center"><font color="#00006A"><strong>Authors of the
   45: program: </strong></font><a href="http://sauvy.ined.fr/brouard"><font
   46: color="#00006A"><strong>Nicolas Brouard</strong></font></a><font
   47: color="#00006A"><strong>, senior researcher at the </strong></font><a
   48: href="http://www.ined.fr"><font color="#00006A"><strong>Institut
   49: National d'Etudes Démographiques</strong></font></a><font
   50: color="#00006A"><strong> (INED, Paris) in the &quot;Mortality,
   51: Health and Epidemiology&quot; Research Unit </strong></font></p>
   52: 
   53: <p align="center"><font color="#00006A"><strong>and Agnès
   54: Lièvre<br clear="left">
   55: </strong></font></p>
   56: 
   57: <h4><font color="#00006A">Contribution to the mathematics: C. R.
   58: Heathcote </font><font color="#00006A" size="2">(Australian
   59: National University, Canberra).</font></h4>
   60: 
   61: <h4><font color="#00006A">Contact: Agnès Lièvre (</font><a
   62: href="mailto:lievre@ined.fr"><font color="#00006A"><i>lievre@ined.fr</i></font></a><font
   63: color="#00006A">) </font></h4>
   64: 
   65: <hr>
   66: 
   67: <ul>
   68:     <li><a href="#intro">Introduction</a> </li>
   69:     <li><a href="#data">On what kind of data can it be used?</a></li>
   70:     <li><a href="#datafile">The data file</a> </li>
   71:     <li><a href="#biaspar">The parameter file</a> </li>
   72:     <li><a href="#running">Running Imach</a> </li>
   73:     <li><a href="#output">Output files and graphs</a> </li>
    <li><a href="#example">Example</a> </li>
   75: </ul>
   76: 
   77: <hr>
   78: 
   79: <h2><a name="intro"><font color="#00006A">Introduction</font></a></h2>
   80: 
<p>This program computes <b>Healthy Life Expectancies</b> from <b>cross-longitudinal
data</b> using the methodology pioneered by Laditka and Wolf (1).
Within the family of Health Expectancies (HE), Disability-free
life expectancy (DFLE) is probably the most important index to
monitor. In low-mortality countries, there is a fear that when
mortality declines, the increase in DFLE is not proportionate to
the increase in total life expectancy. This case is called the <em>expansion
of morbidity</em>. Most of the data collected today, in
particular by the international <a href="http://www.reves.org">REVES</a>
network on health expectancy, and most HE indices based on these
data, are <em>cross-sectional</em>. This means that the information
comes from a single cross-sectional survey: people of
various ages (but mostly old people) are surveyed on their health
status at a single date. The proportion of people disabled at each
age can then be measured at that date. This age-specific
prevalence curve is then used to distinguish, within the
stationary population (which, by definition, is the life table
estimated from the vital statistics on mortality at the same
date), the disabled population from the disability-free
population. Life expectancy (LE) (or total population divided by
the yearly number of births or deaths of this stationary
population) is then decomposed into DFLE and DLE. This method of
computing HE is usually called the Sullivan method (from the name
of the author who first described it).</p>
  105: 
<p>Age-specific proportions of people disabled are very difficult
to forecast because each proportion corresponds to the historical
conditions of the cohort: it results from the past flows into
disability and out of it (recovery), from the cohort's origin until
today. The age-specific intensities (or incidence rates) of
entering disability or recovering good health reflect
current conditions and can therefore be used at each age to
forecast the future of this cohort. For example, if a country is
improving its prosthesis technology, the incidence of
recovering the ability to walk will be higher at each (old) age,
but the prevalence of disability will reflect this improvement only
slightly, because the prevalence is mostly shaped by the history
of the cohort and not by recent period effects. To measure the
period improvement we have to simulate the future of a cohort of
new-borns entering or leaving the disability state at each age, or
dying, according to the incidence rates measured today on
different cohorts. The proportion of people disabled at each age
in this simulated cohort will be much lower (using the example of
an improvement) than the proportions observed at each age in a
cross-sectional survey. This new prevalence curve, introduced in a
life table, will give a much more current and realistic HE level
than the Sullivan method, which mostly measures the history of
health conditions in the country.</p>
  129: 
<p>Therefore, the main question is how to measure incidence rates
from cross-longitudinal surveys. This is the goal of the IMaCh
program. From your data and using IMaCh you can estimate period
HE and not only Sullivan's HE. The standard errors of the HE
are also computed.</p>
  135: 
<p>A cross-longitudinal survey consists of a first survey
(&quot;cross&quot;) in which individuals of different ages are
interviewed on their health status or degree of disability. At
least a second wave of interviews (&quot;longitudinal&quot;)
should measure each individual's new health status. Health
expectancies are computed from the transitions observed between
waves and are computed for each degree of severity of disability
(number of live states). The more degrees you consider, the more time is
necessary to reach the maximum likelihood of the parameters
involved in the model. Considering only two states of disability
(disabled and healthy) is generally enough, but the computer
program also works with more health states.<br>
  148: <br>
The simplest model is the multinomial logistic model where <i>pij</i>
is the probability of being observed in state <i>j</i> at the second
wave conditional on being observed in state <em>i</em> at the first
wave. A simple model is therefore: log<em>(pij/pii) = aij +
bij*age + cij*sex</em>, where '<i>age</i>' is age and '<i>sex</i>'
is a covariate. The advantage claimed by this computer program
is that if the delay between waves is not identical for
each individual, or if an individual missed an interview, the
information is not rounded or lost, but taken into account using
an interpolation or extrapolation. <i>hPijx</i> is the
probability of being observed in state <i>j</i> at age <i>x+h</i>
conditional on the observed state <i>i</i> at age <i>x</i>. The
delay '<i>h</i>' can be split into an exact number <i>nh</i> of
unobserved elementary steps of <i>stepm</i> months each (h = nh*stepm).
Each elementary transition (by month, quarter, semester or year) is
modelled as a multinomial logistic. The <i>hPx</i> matrix is simply the matrix
product of <i>nh</i> elementary matrices, and the
contribution of each individual to the likelihood is simply <i>hPijx</i>.
<br>
</p>
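
<p>To make the interpolation idea concrete, here is a minimal sketch in
Python (IMaCh itself is not written in Python; the parameter values below
are purely illustrative, close to the guess values used later in this
manual, not estimates). It builds the elementary transition matrix implied
by log(pij/pii) = aij + bij*age for one step of stepm months and chains nh
of them to obtain the hPx matrix:</p>

<pre>
import numpy as np

def elementary_matrix(age, a, b):
    """One-step transition matrix for states 1, 2 and death (3).
    a[i][j] and b[i][j] hold aij and bij for the off-diagonal transitions."""
    P = np.zeros((3, 3))
    for i in range(2):                       # only live states have outgoing transitions
        logits = [a[i][j] + b[i][j] * age if j != i else 0.0 for j in range(3)]
        expl = np.exp(logits)
        P[i, :] = expl / expl.sum()          # pij = exp(aij+bij*age)/sum, pii is the reference
    P[2, 2] = 1.0                            # death is absorbing
    return P

def hPx(age, h_months, stepm, a, b):
    """Transition matrix between ages x and x+h as a product of nh = h/stepm elementary steps."""
    nh = h_months // stepm
    P = np.eye(3)
    for k in range(nh):
        P = P @ elementary_matrix(age + k * stepm / 12.0, a, b)
    return P

# Illustrative (aij, bij), not estimates from data1.txt
a = [[0.0, -14.2, -7.9], [-1.9, 0.0, -6.2]]
b = [[0.0, 0.11, 0.032], [-0.029, 0.0, 0.022]]
print(hPx(70, 24, 1, a, b))                  # 2-year transition matrix at age 70, monthly steps
</pre>

<p>Because the delay h between two real interviews is counted in months, any
delay that is a multiple of stepm can be chained in this way, which is how
unequal delays between waves are handled.</p>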
  169: 
<p>The program presented in this manual is a fairly general
program named <strong>IMaCh</strong> (for <strong>I</strong>nterpolated
<strong>MA</strong>rkov <strong>CH</strong>ain), designed to
analyse transition data from longitudinal surveys. The first step
is the estimation of the parameters of a model of transition probabilities
between an initial state and a final state. From there, the
computer program produces indicators such as observed and
stationary prevalences, life expectancies and their variances, and
graphs. Our transition model consists of absorbing and
non-absorbing states, with the possibility of return across the
non-absorbing states. The main advantage of this package,
compared to other programs for the analysis of transition data
(for example, Proc Catmod in SAS<sup>®</sup>), is that the whole
individual information is used even if an interview is missing, a
status or a date is unknown, or the delay between waves is
not identical for each individual. The program can be executed
according to parameters: selection of a sub-sample, number of
absorbing and non-absorbing states, number of waves taken into
account (the user inputs the first and the last interview), a
tolerance level for the maximisation function, the periodicity of
the transitions (we can compute annual, quarterly or monthly
transitions), and covariates in the model. It works on Windows and on
Unix.<br>
</p>
  194: 
  195: <hr>
  196: 
  197: <p>(1) Laditka, Sarah B. and Wolf, Douglas A. (1998), &quot;New
  198: Methods for Analyzing Active Life Expectancy&quot;. <i>Journal of
  199: Aging and Health</i>. Vol 10, No. 2. </p>
  200: 
  201: <hr>
  202: 
  203: <h2><a name="data"><font color="#00006A">On what kind of data can
  204: it be used?</font></a></h2>
  205: 
<p>The minimum data required for a transition model is the
recording of a set of individuals interviewed at a first date and
interviewed again at least one more time. From the
observations of an individual, we obtain a follow-up over time of
the occurrence of a specific event. In this documentation, the
event is related to health status at older ages, but the program
can be applied to many longitudinal studies in different
contexts. To build the data file described in the next section,
you must have the month and year of each interview and the
corresponding health status. In order to get age, the date of
birth (month and year) is also required (a missing value is
allowed for the month). The date of death (month and year) is
also required if the individual died. Shorter
steps (i.e. a month) will more closely take into account the
survival time after the last interview.</p>
  221: 
  222: <hr>
  223: 
  224: <h2><a name="datafile"><font color="#00006A">The data file</font></a></h2>
  225: 
<p>In this example, 8,000 people have been interviewed in a
cross-longitudinal survey of 4 waves (1984, 1986, 1988, 1990).
Some people missed 1, 2 or 3 interviews. Health statuses are
healthy (1) and disabled (2). The survey is not a real one: it is
a simulation of the American Longitudinal Survey on Aging. The
disability state is defined as failing one of four
ADLs (Activities of Daily Living, such as bathing, eating, walking).
Therefore, even if the individuals interviewed in the sample are
virtual, the information brought by this sample is close to the
situation of the United States. Sex is not recorded in this
sample.</p>

<p>Each line of the data set (named <a href="data1.txt">data1.txt</a>
in this first example) is an individual record whose fields are: </p>
  240: 
<ul>
    <li><b>Index number</b>: positive number (field 1) </li>
    <li><b>First covariate</b>: positive number (field 2) </li>
    <li><b>Second covariate</b>: positive number (field 3) </li>
    <li><a name="Weight"><b>Weight</b></a>: positive number
        (field 4). In most surveys individuals are weighted
        according to the stratification of the sample.</li>
    <li><b>Date of birth</b>: coded as mm/yyyy. Missing dates are
        coded as 99/9999 (field 5) </li>
    <li><b>Date of death</b>: coded as mm/yyyy. Missing dates are
        coded as 99/9999 (field 6) </li>
    <li><b>Date of first interview</b>: coded as mm/yyyy. Missing
        dates are coded as 99/9999 (field 7) </li>
    <li><b>Status at first interview</b>: positive number.
        Missing values are coded -1. (field 8) </li>
    <li><b>Date of second interview</b>: coded as mm/yyyy.
        Missing dates are coded as 99/9999 (field 9) </li>
    <li><strong>Status at second interview</strong>: positive
        number. Missing values are coded -1. (field 10) </li>
    <li><b>Date of third interview</b>: coded as mm/yyyy. Missing
        dates are coded as 99/9999 (field 11) </li>
    <li><strong>Status at third interview</strong>: positive
        number. Missing values are coded -1. (field 12) </li>
    <li><b>Date of fourth interview</b>: coded as mm/yyyy.
        Missing dates are coded as 99/9999 (field 13) </li>
    <li><strong>Status at fourth interview</strong>: positive
        number. Missing values are coded -1. (field 14) </li>
    <li>etc.</li>
</ul>
  270: 
  271: <p>&nbsp;</p>
  272: 
<p>If your longitudinal survey does not include information about
weights or covariates, you must fill the corresponding column with a number
(e.g. 1) because a missing field is not allowed.</p>
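
<p>For illustration, a record in this format could look like the following
line (a made-up example, not a line taken from <a href="data1.txt">data1.txt</a>):
individual 1, covariates 1 and 2, weight 1, born in June 1920, not dead,
healthy (1) at the first interview, disabled (2) at the second and third,
and absent from the fourth wave:</p>

<pre>1 1 2 1.000000 06/1920 99/9999 04/1984 1 06/1986 2 07/1988 2 99/9999 -1</pre>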
  276: 
  277: <hr>
  278: 
  279: <h2><font color="#00006A">Your first example parameter file</font><a
  280: href="http://euroreves.ined.fr/imach"></a><a name="uio"></a></h2>
  281: 
  282: <h2><a name="biaspar"></a>#Imach version 0.71a, March 2002,
  283: INED-EUROREVES </h2>
  284: 
  285: <p>This is a comment. Comments start with a '#'.</p>
  286: 
  287: <h4><font color="#FF0000">First uncommented line</font></h4>
  288: 
  289: <pre>title=1st_example datafile=data1.txt lastobs=8600 firstpass=1 lastpass=4</pre>
  290: 
  291: <ul>
    <li><b>title=</b> 1st_example is the title of the run. </li>
    <li><b>datafile=</b>data1.txt is the name of the data set.
        Our example is a six-year follow-up survey, consisting
        of a baseline followed by 3 reinterviews. </li>
    <li><b>lastobs=</b> 8600: the program is able to run on a
        subsample where the last observation number is lastobs.
        It can be set to a number bigger than the real number of
        observations (e.g. 100000). In this example, maximisation
        will be done on the first 8600 records. </li>
    <li><b>firstpass=1</b>, <b>lastpass=4</b>: in case of more
        than two interviews in the survey, the program can be run
        on selected transition periods. firstpass=1 means the
        first interview included in the calculation is the
        baseline survey. lastpass=4 means that the information
        brought by the 4th interview is taken into account.</li>
  307: </ul>
  308: 
  309: <p>&nbsp;</p>
  310: 
  311: <h4><a name="biaspar-2"><font color="#FF0000">Second uncommented
  312: line</font></a></h4>
  313: 
  314: <pre>ftol=1.e-08 stepm=1 ncov=2 nlstate=2 ndeath=1 maxwav=4 mle=1 weight=0</pre>
  315: 
  316: <ul>
    <li><b>ftol=1e-8</b> Convergence tolerance on the function
        value in the maximisation of the likelihood. Choosing a
        correct value for ftol is difficult. 1e-8 is a correct
        value for a 32-bit computer.</li>
  321:     <li><b>stepm=1</b> Time unit in months for interpolation.
  322:         Examples:<ul>
  323:             <li>If stepm=1, the unit is a month </li>
            <li>If stepm=3, the unit is a quarter</li>
  325:             <li>If stepm=12, the unit is a year </li>
  326:             <li>If stepm=24, the unit is two years</li>
  327:             <li>... </li>
  328:         </ul>
  329:     </li>
  330:     <li><b>ncov=2</b> Number of covariates in the datafile. </li>
  331:     <li><b>nlstate=2</b> Number of non-absorbing (alive) states.
  332:         Here we have two alive states: disability-free is coded 1
  333:         and disability is coded 2. </li>
  334:     <li><b>ndeath=1</b> Number of absorbing states. The absorbing
  335:         state death is coded 3. </li>
  336:     <li><b>maxwav=4</b> Number of waves in the datafile.</li>
    <li><a name="mle"><b>mle</b></a><b>=1</b> Option for the
        Maximum Likelihood Estimation. <ul>
  339:             <li>If mle=1 the program does the maximisation and
  340:                 the calculation of health expectancies </li>
  341:             <li>If mle=0 the program only does the calculation of
  342:                 the health expectancies. </li>
  343:         </ul>
  344:     </li>
  345:     <li><b>weight=0</b> Possibility to add weights. <ul>
  346:             <li>If weight=0 no weights are included </li>
  347:             <li>If weight=1 the maximisation integrates the
  348:                 weights which are in field <a href="#Weight">4</a></li>
  349:         </ul>
  350:     </li>
  351: </ul>
  352: 
  353: <h4><font color="#FF0000">Covariates</font></h4>
  354: 
  355: <p>Intercept and age are systematically included in the model.
  356: Additional covariates can be included with the command: </p>
  357: 
  358: <pre>model=<em>list of covariates</em></pre>
  359: 
  360: <ul>
  361:     <li>if<strong> model=. </strong>then no covariates are
  362:         included</li>
  363:     <li>if <strong>model=V1</strong> the model includes the first
  364:         covariate (field 2)</li>
  365:     <li>if <strong>model=V2 </strong>the model includes the
  366:         second covariate (field 3)</li>
  367:     <li>if <strong>model=V1+V2 </strong>the model includes the
  368:         first and the second covariate (fields 2 and 3)</li>
  369:     <li>if <strong>model=V1*V2 </strong>the model includes the
  370:         product of the first and the second covariate (fields 2
  371:         and 3)</li>
    <li>if <strong>model=V1+V1*age</strong> the model includes
        the first covariate and its interaction with age (V1*age)</li>
  374: </ul>
  375: 
<p>In this example, we have two covariates in the data file
(fields 2 and 3). The number of covariates is defined with the
statement ncov=2. If you have 3 covariates in the datafile
(fields 2, 3 and 4), you have to set ncov=3. Then you can run the
programme with a new parametrisation taking into account the
third covariate. For example, <strong>model=V1+V3 </strong>estimates
a model with the first and third covariates. More complicated
models can be used, but they will take more time to converge. With
a simple model (no covariates), the programme estimates 8
parameters. Adding covariates increases the number of parameters:
12 for <strong>model=V1</strong>, 16 for <strong>model=V1+V1*age</strong>
and 20 for <strong>model=V1+V2+V3</strong>.</p>
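
<p>These parameter counts follow from the formula given in the next
section. A small sketch of the arithmetic (assuming, as in the counts
above, that the covariates of the regression are the intercept, age and
each term of the model statement):</p>

<pre>
def npar(nlstate, ndeath, nterms):
    ncovmodel = 2 + nterms            # intercept + age + terms of the 'model=' statement
    return (nlstate + ndeath - 1) * nlstate * ncovmodel

print(npar(2, 1, 0))   # model=.          ->  8
print(npar(2, 1, 1))   # model=V1         -> 12
print(npar(2, 1, 2))   # model=V1+V1*age  -> 16
print(npar(2, 1, 3))   # model=V1+V2+V3   -> 20
</pre>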
  388: 
  389: <h4><font color="#FF0000">Guess values for optimization</font><font
  390: color="#00006A"> </font></h4>
  391: 
<p>You must write the initial guess values of the parameters for
the optimisation. The number of parameters, <em>N</em>, depends on the
number of absorbing and non-absorbing states and on the
number of covariates. <br>
<em>N</em> is given by the formula <em>N</em>=(<em>nlstate</em> +
<em>ndeath</em>-1)*<em>nlstate</em>*<em>ncov</em>&nbsp;. <br>
<br>
Thus in the simple case with 2 covariates (the model is log
(pij/pii) = aij + bij * age, where intercept and age are the two
covariates), 2 health states (1 for disability-free and 2
for disability) and 1 absorbing state (3), you must enter 8
initial values: a12, b12, a13, b13, a21, b21, a23, b23. You can
start with zeros as in this example, but if you have a more
precise set (for example from an earlier run) you can enter it
and it will speed up the convergence.<br>
Each of the four lines starts with the indices &quot;ij&quot;: <b>ij
aij bij</b> </p>
  409: 
  410: <blockquote>
  411:     <pre># Guess values of aij and bij in log (pij/pii) = aij + bij * age
  412: 12 -14.155633  0.110794 
  413: 13  -7.925360  0.032091 
  414: 21  -1.890135 -0.029473 
  415: 23  -6.234642  0.022315 </pre>
  416: </blockquote>
  417: 
<p>or, to simplify (in most cases it converges, but there is no
guarantee!): </p>
  420: 
  421: <blockquote>
  422:     <pre>12 0.0 0.0
  423: 13 0.0 0.0
  424: 21 0.0 0.0
  425: 23 0.0 0.0</pre>
  426: </blockquote>
  427: 
<p> In order to speed up the convergence you can make a first run with
a large stepm, i.e. stepm=12 or 24, and then decrease stepm until
stepm=1 month. If newstepm is the new, shorter step and the old stepm
is a multiple of it, stepm = n * newstepm, then the
following approximation holds: 
<pre>aij(newstepm) = aij(n . newstepm) - ln(n)
</pre> and
<pre>bij(newstepm) = bij(n . newstepm) .</pre>
  436: 
<p> For example if you already ran with a 6-month step (stepm=6) and
got:<br>
  439:  <pre># Parameters
  440: 12 -13.390179  0.126133 
  441: 13  -7.493460  0.048069 
  442: 21   0.575975 -0.041322 
  443: 23  -4.748678  0.030626 
  444: </pre>
If you now want to get the monthly estimates, you can guess the aij by
subtracting ln(6) = 1.7918<br> and running<br>
  447: <pre>12 -15.18193847  0.126133 
  448: 13 -9.285219469  0.048069
  449: 21 -1.215784469 -0.041322
  450: 23 -6.540437469  0.030626
  451: </pre>
and, after running the maximisation, get<br>
<pre>12 -15.029768 0.124347 
13 -8.472981 0.036599 
21 -1.472527 -0.038394 
23 -6.553602 0.029856 
</pre>
which is close to the initial guess. The approximation is probably useful
only for very small intervals and we don't have enough experience to
know whether it will speed up the convergence or not.
  461: <pre>         -ln(12)= -2.484
  462:  -ln(6/1)=-ln(6)= -1.791
  463:  -ln(3/1)=-ln(3)= -1.0986
  464: -ln(12/6)=-ln(2)= -0.693
  465: </pre>
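
<p>A minimal sketch of this shift in Python, applied to the 6-month
estimates quoted above (n=6):</p>

<pre>
import math

n = 6                                  # stepm=6 months -> newstepm=1 month
params_6m = {"12": (-13.390179, 0.126133),
             "13": ( -7.493460, 0.048069),
             "21": (  0.575975, -0.041322),
             "23": ( -4.748678, 0.030626)}
for ij, (a, b) in params_6m.items():
    print(ij, round(a - math.log(n), 6), b)   # aij shifted by -ln(6), bij unchanged
# prints 12 -15.181938 0.126133 etc., the monthly guess values used above
</pre>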
  466: 
  467: <h4><font color="#FF0000">Guess values for computing variances</font></h4>
  468: 
  469: <p>This is an output if <a href="#mle">mle</a>=1. But it can be
  470: used as an input to get the various output data files (Health
  471: expectancies, stationary prevalence etc.) and figures without
  472: rerunning the rather long maximisation phase (mle=0). </p>
  473: 
<p>The scales are small values used for the evaluation of numerical
derivatives. These derivatives are used to compute the Hessian
matrix of the parameters, that is the inverse of the covariance
matrix, and the variances of the health expectancies. Each line
consists of the indices &quot;ij&quot; followed by the initial scales
(zero to simplify) associated with aij and bij. </p>
  480: <ul> <li>If mle=1 you can enter zeros:</li>
  481: <blockquote><pre># Scales (for hessian or gradient estimation)
  482: 12 0. 0. 
  483: 13 0. 0. 
  484: 21 0. 0. 
  485: 23 0. 0. </pre>
  486: </blockquote>
  487:     <li>If mle=0 you must enter a covariance matrix (usually
  488:         obtained from an earlier run).</li>
  489: </ul>
  490: 
  491: <h4><font color="#FF0000">Covariance matrix of parameters</font></h4>
  492: 
<p>This is an output if <a href="#mle">mle</a>=1. But it can be
used as an input to get the various output data files (health
expectancies, stationary prevalence etc.) and figures without
rerunning the rather long maximisation phase (mle=0). <br>
Each line starts with the indices &quot;ijk&quot; (transition ij,
parameter k, with k=1 for aij and k=2 for bij), followed by the
covariances between aij and bij:<br>
  499: <pre>
  500:    121 Var(a12) 
  501:    122 Cov(b12,a12)  Var(b12) 
  502:           ...
  503:    232 Cov(b23,a12)  Cov(b23,b12) ... Var (b23) </pre>
  504: <ul>
  505:     <li>If mle=1 you can enter zeros. </li>
  506:     <pre># Covariance matrix
  507: 121 0.
  508: 122 0. 0.
  509: 131 0. 0. 0. 
  510: 132 0. 0. 0. 0. 
  511: 211 0. 0. 0. 0. 0. 
  512: 212 0. 0. 0. 0. 0. 0. 
  513: 231 0. 0. 0. 0. 0. 0. 0. 
  514: 232 0. 0. 0. 0. 0. 0. 0. 0.</pre>
  515:     <li>If mle=0 you must enter a covariance matrix (usually
  516:         obtained from an earlier run). </li>
  517: </ul>
  518: 
  519: <h4><font color="#FF0000">Age range for calculation of stationary
  520: prevalences and health expectancies</font></h4>
  521: 
  522: <pre>agemin=70 agemax=100 bage=50 fage=100</pre>
  523: 
<br>Once the estimated parameters have been obtained, the program is able
to calculate the stationary prevalences, the transition probabilities
and the life expectancies at any age. The choice of the age range is useful
for extrapolation. In our data file, ages vary from 70 to
102. It is possible to get extrapolated stationary prevalences by
age, ranging from agemin to agemax.

<br>Setting bage=50 (begin age) and fage=100 (final age) makes
the program compute life expectancies from age 'bage' to age
'fage'. As we use a model, we can usefully compute life
expectancies on a wider age range than the age range of the data,
but the model can be rather wrong on much larger intervals.
The program is limited to an upper age of around 120!
  537: <ul>
  538:     <li><b>agemin=</b> Minimum age for calculation of the
  539:         stationary prevalence </li>
  540:     <li><b>agemax=</b> Maximum age for calculation of the
  541:         stationary prevalence </li>
  542:     <li><b>bage=</b> Minimum age for calculation of the health
  543:         expectancies </li>
  544:     <li><b>fage=</b> Maximum age for calculation of the health
  545:         expectancies </li>
  546: </ul>
  547: 
  548: <h4><a name="Computing"><font color="#FF0000">Computing</font></a><font
  549: color="#FF0000"> the observed prevalence</font></h4>
  550: 
  551: <pre>begin-prev-date=1/1/1984 end-prev-date=1/6/1988 </pre>
  552: 
<br>The statements 'begin-prev-date' and 'end-prev-date' allow you to
select the period over which the observed prevalences
in each state are calculated. In this example, the prevalences are
calculated on survey data collected between 1 January 1984 and 1 June 1988. 
  557: <ul>
  558:     <li><strong>begin-prev-date= </strong>Starting date
  559:         (day/month/year)</li>
  560:     <li><strong>end-prev-date= </strong>Final date
  561:         (day/month/year)</li>
  562: </ul>
  563: 
  564: <h4><font color="#FF0000">Population- or status-based health
  565: expectancies</font></h4>
  566: 
  567: <pre>pop_based=0</pre>
  568: 
<p>The program computes status-based health expectancies, i.e.
health expectancies which depend on your initial health state.
If you are healthy your healthy life expectancy (e11) is higher
than if you were disabled (e21, with e11 &gt; e21).<br>
To compute a healthy life expectancy independent of the initial
status we have to weight e11 and e21 according to the probability
of being in each state at the initial age or, in other words, according
to the proportion of people in each state (see the sketch after the
list below).<br>
We prefer computing a 'pure' period healthy life expectancy based
only on the transition forces. Then the weights are simply the
stationary prevalences or 'implied' prevalences at the initial
age.<br>
Some other people would like to use the cross-sectional
prevalences (the &quot;Sullivan prevalences&quot;) observed at
the initial age during a period of time <a href="#Computing">defined
just above</a>. </p>
  585: 
  586: <ul>
    <li><strong>pop_based=0</strong> Health expectancies are
        computed at each age from the stationary prevalences
        'expected' at this initial age.</li>
    <li><strong>pop_based=1</strong> Health expectancies are
        computed at each age from the cross-sectional 'observed'
        prevalences at this initial age. As all the population is
        not observed at the same exact date, we define a short
        period where the observed prevalence is computed.</li>
  595: </ul>
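
<p>In both cases the resulting expectancies are weighted means of the
status-based expectancies. Schematically (our shorthand, not an output
format of the program), with w1(x) and w2(x) the prevalences used as
weights at age x:</p>

<pre>e.1(x) = w1(x)*e11(x) + w2(x)*e21(x)
e.2(x) = w1(x)*e12(x) + w2(x)*e22(x)
e..(x) = e.1(x) + e.2(x),    with w1(x) + w2(x) = 1</pre>

<p>With pop_based=0 the weights are the stationary prevalences; with
pop_based=1 they are the observed prevalences of the period defined above.</p>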
  596: 
<h4><font color="#FF0000">Prevalence forecasting (Experimental)</font></h4>
  598: 
  599: <pre>starting-proj-date=1/1/1989 final-proj-date=1/1/1992 mov_average=0 </pre>
  600: 
<p>Prevalence and population projections are only available if
the interpolation unit is a month, i.e. stepm=1, and if there are
no covariates. The programme estimates the prevalence in each
state at a precise date expressed as day/month/year. The
programme computes one forecasted prevalence per year from a
starting date (1 January 1989 in this example) to a final date
(1 January 1992). The statement mov_average allows computing
smoothed forecasted prevalences with a five-age moving average
centered at the mid-age of the five-age period. </p>
  610: 
  611: <ul>
  612:     <li><strong>starting-proj-date</strong>= starting date
  613:         (day/month/year) of forecasting</li>
  614:     <li><strong>final-proj-date= </strong>final date
  615:         (day/month/year) of forecasting</li>
  616:     <li><strong>mov_average</strong>= smoothing with a five-age
  617:         moving average centered at the mid-age of the five-age
  618:         period. The command<strong> mov_average</strong> takes
  619:         value 1 if the prevalences are smoothed and 0 otherwise.</li>
  620: </ul>
  621: 
  622: <h4><font color="#FF0000">Last uncommented line : Population
  623: forecasting </font></h4>
  624: 
  625: <pre>popforecast=0 popfile=pyram.txt popfiledate=1/1/1989 last-popfiledate=1/1/1992</pre>
  626: 
<p>This command is available if the interpolation unit is a
month, i.e. stepm=1, and if popforecast=1. From a data file
including age and the number of persons alive at the precise date
&#145;popfiledate&#146;, you can forecast the number of persons
in each state until date &#145;last-popfiledate&#146;. In this
example, the popfile <a href="pyram.txt"><b>pyram.txt</b></a>
includes real data which are the Japanese population in 1989.</p>

<ul>
    <li><b>popforecast=0</b> Option for population forecasting. If
        popforecast=1, the programme does the forecasting.</li>
    <li><b>popfile=</b> name of the population file</li>
    <li><b>popfiledate=</b> date of the population file</li>
    <li><b>last-popfiledate=</b> date of the last population projection&nbsp;</li>
</ul>
  650: 
  651: <hr>
  652: 
  653: <h2><a name="running"></a><font color="#00006A">Running Imach
  654: with this example</font></h2>
  655: 
We assume that you typed in your <a href="biaspar.imach">1st_example
parameter file</a> as explained <a href="#biaspar">above</a>. 
<br>To run the program you should either:
<ul> <li> click on the imach.exe icon and enter
the name of the parameter file, which is for example <a
href="C:\usr\imach\mle\biaspar.imach">C:\usr\imach\mle\biaspar.imach</a>;
<li> locate the biaspar.imach icon in 
<a href="C:\usr\imach\mle">C:\usr\imach\mle</a> with your mouse and drag it
onto the imach window;
<li> with the latest versions (0.7 and higher), if you have set up Windows to
recognise the &quot;.imach&quot; extension, right-click the
biaspar.imach icon and either edit the parameter file with Notepad or
execute it with imach.
</ul>  
  670: 
The time to converge depends on the step unit that you used (1
month is CPU consuming), on the number of cases, and on the
number of variables.

<br>The program outputs many files. Most of them will be
plotted for better understanding.
  677: 
  678: <hr>
  679: 
  680: <h2><a name="output"><font color="#00006A">Output of the program
  681: and graphs</font> </a></h2>
  682: 
<p>Once the optimization is finished, some graphics can be made
with a grapher. We use Gnuplot, an interactive plotting
program which is copyrighted but freely distributed. A gnuplot reference
manual is available <a href="http://www.gnuplot.info/">here</a>. <br>
When the run is finished, the user should enter a character
to choose what to do next.

<br>These characters are:<br>
  691: 
  692: <ul>
  693:     <li>'c' to start again the program from the beginning.</li>
  694:     <li>'e' opens the <a href="biaspar.htm"><strong>biaspar.htm</strong></a>
  695:         file to edit the output files and graphs. </li>
  696:     <li>'q' for exiting.</li>
  697: </ul>
  698: 
  699: <h5><font size="4"><strong>Results files </strong></font><br>
  700: <br>
  701: <font color="#EC5E5E" size="3"><strong>- </strong></font><a
  702: name="Observed prevalence in each state"><font color="#EC5E5E"
  703: size="3"><strong>Observed prevalence in each state</strong></font></a><font
  704: color="#EC5E5E" size="3"><strong> (and at first pass)</strong></font><b>:
  705: </b><a href="prbiaspar.txt"><b>prbiaspar.txt</b></a><br>
  706: </h5>
  707: 
<p>The first line is the title and labels each field of the
file. The first column is age. Fields 2 and 6 are the
proportions of individuals in states 1 and 2 respectively, as
observed at the first exam. The other fields are the numbers of
people in states 1, 2 or more. The number of columns increases if
the number of states is higher than 2.<br>
The header of the file is </p>
  715: 
  716: <pre># Age Prev(1) N(1) N Age Prev(2) N(2) N
  717: 70 1.00000 631 631 70 0.00000 0 631
  718: 71 0.99681 625 627 71 0.00319 2 627 
  719: 72 0.97125 1115 1148 72 0.02875 33 1148 </pre>
  720: 
<p>This means that at age 70, the prevalence in state 1 is 1.000
and in state 2 is 0.000. At age 71 the number of individuals in
state 1 is 625 and in state 2 is 2, hence the total number of
people aged 71 is 625+2=627 and the prevalence in state 1 is
625/627=0.99681. <br>
</p>
  726: 
<h5><font color="#EC5E5E" size="3"><b>- Estimated parameters and
covariance matrix</b></font><b>: </b><a href="rbiaspar.txt"><b>rbiaspar.txt</b></a></h5>
  729: 
  730: <p>This file contains all the maximisation results: </p>
  731: 
  732: <pre> -2 log likelihood= 21660.918613445392
  733:  Estimated parameters: a12 = -12.290174 b12 = 0.092161 
  734:                        a13 = -9.155590  b13 = 0.046627 
  735:                        a21 = -2.629849  b21 = -0.022030 
  736:                        a23 = -7.958519  b23 = 0.042614  
  737:  Covariance matrix: Var(a12) = 1.47453e-001
  738:                     Var(b12) = 2.18676e-005
  739:                     Var(a13) = 2.09715e-001
  740:                     Var(b13) = 3.28937e-005  
  741:                     Var(a21) = 9.19832e-001
  742:                     Var(b21) = 1.29229e-004
  743:                     Var(a23) = 4.48405e-001
  744:                     Var(b23) = 5.85631e-005 
  745:  </pre>
  746: 
  747: <p>By substitution of these parameters in the regression model,
  748: we obtain the elementary transition probabilities:</p>
  749: 
  750: <p><img src="pebiaspar1.gif" width="400" height="300"></p>
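
<p>As a sketch of this substitution (assuming the probabilities are
evaluated at exact age x; the step is one month here, since stepm=1 in
this run), the one-month probabilities at age 70 can be computed as
follows:</p>

<pre>
import math

age = 70
a12, b12 = -12.290174, 0.092161
a13, b13 =  -9.155590, 0.046627

# log(p12/p11) = a12 + b12*age ; log(p13/p11) = a13 + b13*age ; p11 + p12 + p13 = 1
e12 = math.exp(a12 + b12 * age)
e13 = math.exp(a13 + b13 * age)
p11 = 1.0 / (1.0 + e12 + e13)
print(p11, e12 * p11, e13 * p11)      # roughly 0.994, 0.0029, 0.0027
</pre>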
  751: 
  752: <h5><font color="#EC5E5E" size="3"><b>- Transition probabilities</b></font><b>:
  753: </b><a href="pijrbiaspar.txt"><b>pijrbiaspar.txt</b></a></h5>
  754: 
<p>Here are the transition probabilities Pij(x, x+nh) where nh
is a multiple of 2 years. The first column is the starting age x
(from age 50 to 100), the second is the age (x+nh) and the others are
the transition probabilities p11, p12, p13, p21, p22, p23 (each
group of three sums to one). For
example, line 5 of the file is: </p>
  760: 
  761: <pre> 100 106 0.02655 0.17622 0.79722 0.01809 0.13678 0.84513 </pre>
  762: 
  763: <p>and this means: </p>
  764: 
<pre>p11(100,106)=0.02655
p12(100,106)=0.17622
p13(100,106)=0.79722
p21(100,106)=0.01809
p22(100,106)=0.13678
p23(100,106)=0.84513 </pre>
  771: 
  772: <h5><font color="#EC5E5E" size="3"><b>- </b></font><a
  773: name="Stationary prevalence in each state"><font color="#EC5E5E"
  774: size="3"><b>Stationary prevalence in each state</b></font></a><b>:
  775: </b><a href="plrbiaspar.txt"><b>plrbiaspar.txt</b></a></h5>
  776: 
  777: <pre>#Prevalence
  778: #Age 1-1 2-2
  779: 
  780: #************ 
  781: 70 0.90134 0.09866
  782: 71 0.89177 0.10823 
  783: 72 0.88139 0.11861 
  784: 73 0.87015 0.12985 </pre>
  785: 
<p>At age 70 the stationary prevalence is 0.90134 in state 1 and
0.09866 in state 2. This stationary prevalence differs from the
observed prevalence, and this is the point: the observed prevalence
at age 70 results from the incidence of disability, the incidence of
recovery and the mortality which occurred in the past of the cohort.
The stationary prevalence results from a simulation with current
incidences and mortality (estimated from this cross-longitudinal
survey). It is the best predictive value of the prevalence in the
future if &quot;nothing changes in the future&quot;. This is
exactly what demographers do with a life table: life expectancy
is the expected mean survival time if the observed mortality rates
(incidence of mortality) &quot;remain constant&quot; in the
future. </p>
  799: 
  800: <h5><font color="#EC5E5E" size="3"><b>- Standard deviation of
  801: stationary prevalence</b></font><b>: </b><a
  802: href="vplrbiaspar.txt"><b>vplrbiaspar.txt</b></a></h5>
  803: 
<p>The stationary prevalence has to be compared with the observed
prevalence by age. But both are statistical estimates and are
subject to stochastic errors due to the size of the sample, the
design of the survey, and, for the stationary prevalence, to the
model used and fitted. It is possible to compute the standard
deviation of the stationary prevalence at each age.</p>
  810: 
<h5><font color="#EC5E5E" size="3">-Observed and stationary
prevalence in state (2=disabled) with the confidence interval</font>:<b>
</b><a href="vbiaspar21.htm"><b>vbiaspar21.gif</b></a></h5>
  814: 
  815: <p>This graph exhibits the stationary prevalence in state (2)
  816: with the confidence interval in red. The green curve is the
  817: observed prevalence (or proportion of individuals in state (2)).
  818: Without discussing the results (it is not the purpose here), we
  819: observe that the green curve is rather below the stationary
  820: prevalence. It suggests an increase of the disability prevalence
  821: in the future.</p>
  822: 
  823: <p><img src="vbiaspar21.gif" width="400" height="300"></p>
  824: 
  825: <h5><font color="#EC5E5E" size="3"><b>-Convergence to the
  826: stationary prevalence of disability</b></font><b>: </b><a
  827: href="pbiaspar11.gif"><b>pbiaspar11.gif</b></a><br>
  828: <img src="pbiaspar11.gif" width="400" height="300"> </h5>
  829: 
<p>This graph plots the conditional probabilities of being in
state 2=disabled at age <em>x+h</em>, starting from state 1=healthy
(in red at the bottom) or state 2=disabled (in green on top) at age
<em>x</em>. Conditional means conditional on being alive
at age <em>x+h</em>. The
curves <i>hP12x/(hP11x</i> + <em>hP12x) </em>and <i>hP22x/(hP21x</i>
+ <em>hP22x) </em>converge, with <em>h</em>, to the <em>stationary
prevalence of disability</em>. In order to get the stationary
prevalence at age 70 we should start the process at an earlier
age, e.g. 50. If the disability state is defined by severe
disability criteria with only a small chance of recovery, then the
incidence of recovery is low and the time to convergence is
probably longer. But we don't have enough experience yet.</p>
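
<p>A minimal sketch of this convergence, reusing the illustrative
parametrisation of the introduction (not the estimated parameters):
starting at age 50 either healthy or disabled, the prevalence of
disability among survivors at age 70 is (almost) the same in both cases.</p>

<pre>
import numpy as np

a = {"12": -14.2, "13": -7.9, "21": -1.9, "23": -6.2}     # illustrative aij
b = {"12": 0.11, "13": 0.032, "21": -0.029, "23": 0.022}  # illustrative bij

def step(age):                          # one monthly transition matrix
    P = np.zeros((3, 3)); P[2, 2] = 1.0
    for i in (0, 1):
        logits = np.array([a.get(f"{i+1}{j+1}", 0.0) + b.get(f"{i+1}{j+1}", 0.0) * age
                           if j != i else 0.0 for j in range(3)])
        P[i] = np.exp(logits) / np.exp(logits).sum()
    return P

for start in (0, 1):                    # start in state 1 or in state 2 at age 50
    v = np.zeros(3); v[start] = 1.0
    for m in range(12 * 20):            # 20 years of monthly steps, up to age 70
        v = v @ step(50 + m / 12.0)
    print(f"start in state {start+1}: prevalence of state 2 at 70 = {v[1]/(v[0]+v[1]):.3f}")
</pre>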
  843: 
  844: <h5><font color="#EC5E5E" size="3"><b>- Life expectancies by age
  845: and initial health status</b></font><b>: </b><a
  846: href="erbiaspar.txt"><b>erbiaspar.txt</b></a></h5>
  847: 
  848: <pre># Health expectancies 
  849: # Age 1-1 1-2 2-1 2-2 
  850: 70 10.9226 3.0401 5.6488 6.2122 
  851: 71 10.4384 3.0461 5.2477 6.1599 
  852: 72 9.9667 3.0502 4.8663 6.1025 
  853: 73 9.5077 3.0524 4.5044 6.0401 </pre>
  854: 
  855: <pre>For example 70 10.4227 3.0402 5.6488 5.7123 means:
  856: e11=10.4227 e12=3.0402 e21=5.6488 e22=5.7123</pre>
  857: 
  858: <pre><img src="expbiaspar21.gif" width="400" height="300"><img
  859: src="expbiaspar11.gif" width="400" height="300"></pre>
  860: 
<p>For example, the life expectancy of a healthy individual at age 70
is 10.42 years in the healthy state and 3.04 in the disability state
(13.46 years in total). If he was disabled at age 70, his life expectancy
is shorter: 5.64 years in the healthy state and 5.71 in the
disability state (11.35 years in total). The total life expectancy is a
weighted mean of both, 13.46 and 11.35; the weights are the proportions
of people in each state at age 70. In order to get a pure period index
(i.e. based only on incidences) we use the <a
href="#Stationary prevalence in each state">computed or
stationary prevalence</a> at age 70 (i.e. computed from
incidences at earlier ages) instead of the <a
href="#Observed prevalence in each state">observed prevalence</a>
(for example at the first exam) (<a href="#Health expectancies">see
below</a>).</p>
  875: 
  876: <h5><font color="#EC5E5E" size="3"><b>- Variances of life
  877: expectancies by age and initial health status</b></font><b>: </b><a
  878: href="vrbiaspar.txt"><b>vrbiaspar.txt</b></a></h5>
  879: 
  880: <p>For example, the covariances of life expectancies Cov(ei,ej)
  881: at age 50 are (line 3) </p>
  882: 
  883: <pre>   Cov(e1,e1)=0.4776  Cov(e1,e2)=0.0488=Cov(e2,e1)  Cov(e2,e2)=0.0424</pre>
  884: 
  885: <h5><font color="#EC5E5E" size="3"><b>- </b></font><a
  886: name="Health expectancies"><font color="#EC5E5E" size="3"><b>Health
  887: expectancies</b></font></a><font color="#EC5E5E" size="3"><b>
  888: with standard errors in parentheses</b></font><b>: </b><a
  889: href="trbiaspar.txt"><font face="Courier New"><b>trbiaspar.txt</b></font></a></h5>
  890: 
  891: <pre>#Total LEs with variances: e.. (std) e.1 (std) e.2 (std) </pre>
  892: 
  893: <pre>70 13.26 (0.22) 9.95 (0.20) 3.30 (0.14) </pre>
  894: 
<p>Thus, at age 70 the total life expectancy, e..=13.26 years, is
the mean of e1.=13.46 and e2.=11.35 weighted by the stationary
prevalences at age 70, which are 0.90134 in state 1 and 0.09866 in
state 2, respectively (their sum is equal to one). e.1=9.95 is the
disability-free life expectancy at age 70 (again a weighted
mean of e11 and e21). e.2=3.30 is the life expectancy at age
70 to be spent in the disability state.</p>
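
<p>As a rough check of this decomposition, using the figures quoted above:</p>

<pre>e.. = 0.90134*13.46 + 0.09866*11.35 = 13.25, i.e. 13.26 up to rounding</pre>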
  902: 
<h5><font color="#EC5E5E" size="3"><b>-Total life expectancy by
age and health expectancies in states (1=healthy) and (2=disabled)</b></font><b>:
</b><a href="ebiaspar1.gif"><b>ebiaspar1.gif</b></a></h5>

<p>This figure represents the health expectancies and the total
life expectancy, with the confidence intervals as dashed curves. </p>
  909: 
  910: <pre>        <img src="ebiaspar1.gif" width="400" height="300"></pre>
  911: 
<p>Standard deviations (obtained from the information matrix of
the model) of these quantities are very useful.
Cross-longitudinal surveys are costly and do not involve huge
samples, generally a few thousand; therefore it is very
important to have an idea of the standard deviation of our
estimates. It has been a big challenge to compute the health
expectancy standard deviations. Don't be confused: life expectancy
is, as any expected value, the mean of a distribution; but here
we are not computing the standard deviation of the distribution,
but the standard deviation of the estimate of the mean.</p>
  922: 
<p>Our health expectancy estimates vary according to the sample
size (and the standard deviations give confidence intervals of
the estimates) but also according to the model fitted. Let us
explain this in more detail.</p>
  927: 
<p>Choosing a model means at least two kinds of choices. First we
have to decide the number of disability states. Second we have to
design, within the logit model family, the model itself: variables,
covariates, confounding factors etc. to be included.</p>
  932: 
<p>The more disability states we have, the better our demographic
description of the disability process, but the smaller the number of
transitions between each state and the higher the noise in the
measurement. We do not have enough experience with the various
models to summarise their advantages and disadvantages, but it is
important to say that even if we had huge and unbiased samples,
the total life expectancy computed from a cross-longitudinal
survey varies with the number of states. If we define only two
states, alive or dead, we find the usual life expectancy, where it
is assumed that at each age people are at the same risk of dying.
If we differentiate the alive state into healthy and
disabled, then, as the mortality from the disability state is higher
than the mortality from the healthy state, we are introducing
heterogeneity in the risk of dying. The total mortality at each
age is the mean of the mortality in each state weighted by the
prevalence in each state. Therefore, if the proportion of people
at each age and in each state differs from the stationary
equilibrium, there is no reason to find the same total mortality
at a particular age. Life expectancy, even if it is a very useful
tool, relies on a very strong hypothesis of homogeneity of the
population. Our main purpose is not to measure differential
mortality but to measure the expected time in a healthy or
disability state in order to maximise the former and minimise the
latter. But the differential in mortality complicates the
measurement.</p>
  958: 
<p>Incidences of disability or recovery are not affected by the
number of states if these states are independent. But incidence
estimates are dependent on the specification of the model. The more
covariates we add to the logit model, the better the model, but
some covariates are not well measured and some are confounding
factors, as in any statistical model. The procedure to &quot;fit
the best model&quot; is similar to logistic regression, which itself is
similar to regression analysis. We have not gone so far yet because
we also have a severe limitation, which is the speed of
convergence. On a Pentium III, 500 MHz, even the simplest model,
estimated by month on 8,000 people, may take 4 hours to converge.
Also, the program is not yet a statistical package which permits
a simple specification of the variables and the model to take into
account in the maximisation. The current program only allows
adding simple variables like age+sex or age+sex+age*sex, but will
never be general enough. What should be remembered is that
the incidences, or probabilities of change from one state to another,
are affected by the variables specified in the model.</p>
  977: 
<p>Also, the age range of the people interviewed is linked to
the age range of the life expectancies which can be estimated by
extrapolation. If your sample ranges from age 70 to 95, you can
clearly estimate a life expectancy at age 70 and trust your
confidence interval, which is mostly based on your sample size;
but if you want to estimate the life expectancy at age 50, you
have to rely on your model, and fitting a logistic model on an age
range of 70-95 and estimating probabilities of transition outside
this age range, say at age 50, is very dangerous. At least you
should remember that the confidence intervals given by the
standard deviations of the health expectancies rest on the
strong assumption that your model is the 'true model', which is
probably not the case.</p>
  991: 
  992: <h5><font color="#EC5E5E" size="3"><b>- Copy of the parameter
  993: file</b></font><b>: </b><a href="orbiaspar.txt"><b>orbiaspar.txt</b></a></h5>
  994: 
  995: <p>This copy of the parameter file can be useful to re-run the
  996: program while saving the old output files. </p>
  997: 
  998: <h5><font color="#EC5E5E" size="3"><b>- Prevalence forecasting</b></font><b>:
  999: </b><a href="frbiaspar.txt"><b>frbiaspar.txt</b></a></h5>
 1000: 
<p>First,
we have estimated the observed prevalence between 1/1/1984 and
1/6/1988. The mean date of interview (weighted average of the
interviews performed between 1/1/1984 and 1/6/1988) is estimated
to be 13/9/1985, as written at the top of the file. Then we
forecast the probability of being in each state. </p>
 1008: 
<p>For example,
at date 1/1/1989: </p>
 1012: 
 1013: <pre class="MsoNormal"># StartingAge FinalAge P.1 P.2 P.3
 1014: # Forecasting at date 1/1/1989
 1015:   73 0.807 0.078 0.115</pre>
 1016: 
<p>Since
the minimum age is 70 on 13/9/1985, the youngest forecasted
age is 73. This means that a person aged 70 on 13/9/1985
has a probability of 0.807 of being in state 1 at age 73 on 1/1/1989.
Similarly, the probability of being in state 2 is 0.078 and the
probability of having died is 0.115. Then, on 1/1/1989, the
prevalence of disability at age 73 (among survivors) is estimated to be
0.078/(0.807+0.078)=0.088.</p>
 1025: 
 1026: <h5><font color="#EC5E5E" size="3"><b>- Population forecasting</b></font><b>:
 1027: </b><a href="poprbiaspar.txt"><b>poprbiaspar.txt</b></a></h5>
 1028: 
 1029: <pre># Age P.1 P.2 P.3 [Population]
 1030: # Forecasting at date 1/1/1989 
 1031: 75 572685.22 83798.08 
 1032: 74 621296.51 79767.99 
 1033: 73 645857.70 69320.60 </pre>
 1034: 
<pre># Forecasting at date 1/1/1990 
 1036: 76 442986.68 92721.14 120775.48
 1037: 75 487781.02 91367.97 121915.51
 1038: 74 512892.07 85003.47 117282.76 </pre>
 1039: 
<p>From the population file, we estimate the number of people in
each state. At age 73, 645857 persons are in state 1 and 69320
are in state 2. One year later, at age 74, 512892 are still in state 1,
85003 are in state 2 and 117282 died before 1/1/1990
(512892+85003+117282 = 645857+69320).</p>
 1044: 
 1045: <hr>
 1046: 
 1047: <h2><a name="example"></a><font color="#00006A">Trying an example</font></h2>
 1048: 
<p>Since you know how to run the program, it is time to test it
on your own computer. Try for example the parameter file named <a
href="..\mytry\imachpar.imach">imachpar.imach</a>, which is a copy of <font
size="2" face="Courier New">mypar.imach</font> included in the
subdirectory of imach, <font size="2" face="Courier New">mytry</font>.
Edit it to change the name of the data file to <font size="2"
face="Courier New">..\data\mydata.txt</font> if you don't want to
copy it into the same directory. The file <font face="Courier New">mydata.txt</font>
is a smaller file of 3,000 people but still with 4 waves. </p>
 1058: 
<p>Click on the imach.exe icon to open a window and answer the
question: '<strong>Enter the parameter file name:'</strong></p>
 1061: 
 1062: <table border="1">
 1063:     <tr>
 1064:         <td width="100%"><strong>IMACH, Version 0.71</strong><p><strong>Enter
 1065:         the parameter file name: ..\mytry\imachpar.imach</strong></p>
 1066:         </td>
 1067:     </tr>
 1068: </table>
 1069: 
<p>Most of the data files or image files generated will use the
'imachpar' string in their name. The running time is about 2-3
minutes on a Pentium III. If the execution worked correctly, the
output files are created in the current directory and should be
the same as the mypar files initially included in the directory <font
size="2" face="Courier New">mytry</font>.</p>
 1076: 
 1077: <ul>
 1078:     <li><pre><u>Output on the screen</u> The output screen looks like <a
 1079: href="imachrun.LOG">this Log file</a>
 1080: #
 1081: 
 1082: title=MLE datafile=..\data\mydata.txt lastobs=3000 firstpass=1 lastpass=3
 1083: ftol=1.000000e-008 stepm=24 ncov=2 nlstate=2 ndeath=1 maxwav=4 mle=1 weight=0</pre>
 1084:     </li>
 1085:     <li><pre>Total number of individuals= 2965, Agemin = 70.00, Agemax= 100.92
 1086: 
 1087: Warning, no any valid information for:126 line=126
 1088: Warning, no any valid information for:2307 line=2307
 1089: Delay (in months) between two waves Min=21 Max=51 Mean=24.495826
 1090: <font face="Times New Roman">These lines give some warnings on the data file and also some raw statistics on frequencies of transitions.</font>
 1091: Age 70 1.=230 loss[1]=3.5% 2.=16 loss[2]=12.5% 1.=222 prev[1]=94.1% 2.=14
 1092:  prev[2]=5.9% 1-1=8 11=200 12=7 13=15 2-1=2 21=6 22=7 23=1
 1093: Age 102 1.=0 loss[1]=NaNQ% 2.=0 loss[2]=NaNQ% 1.=0 prev[1]=NaNQ% 2.=0 </pre>
 1094:     </li>
 1095: </ul>
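<p>To make these raw statistics more concrete, the sketch below tabulates
transitions between two consecutive waves from a small toy list of
records. The record layout is invented for the example (it is not the
IMaCh data-file format); only the idea of counting the pairs (state at
one wave, state at the next wave) by age, as in the lines above, is
shown.</p>

<pre>
# Illustrative sketch only: counting raw transitions between two waves.
# 'records' is a made-up toy data set, NOT the IMaCh data-file format.
# Each tuple is (age at first wave, state at first wave, state at next
# wave), with states 1 = healthy, 2 = disabled, 3 = dead.
from collections import Counter

records = [
    (70, 1, 1), (70, 1, 1), (70, 1, 2), (70, 1, 3),
    (70, 2, 2), (70, 2, 1), (71, 1, 1), (71, 2, 3),
]

counts = Counter((age, s1, s2) for age, s1, s2 in records)

for (age, s1, s2), n in sorted(counts.items()):
    print("Age %d: %d-%d = %d" % (age, s1, s2, n))
# IMaCh prints the same kind of information (e.g. '11=200 12=7 13=15'),
# together with the resulting crude prevalence of each state at each age.
</pre>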
 1096: 
 1097: <p>&nbsp;</p>
 1098: 
 1099: <ul>
 1100:     <li>Maximisation with the Powell algorithm. 8 directions are
 1101:         given, corresponding to the 8 parameters. Reaching convergence
 1102:         can take rather long; the fitted parameters printed below are used in the illustrative sketch that follows this list.<br>
 1103:         <font size="1" face="Courier New"><br>
 1104:         Powell iter=1 -2*LL=11531.405658264877 1 0.000000000000 2
 1105:         0.000000000000 3<br>
 1106:         0.000000000000 4 0.000000000000 5 0.000000000000 6
 1107:         0.000000000000 7 <br>
 1108:         0.000000000000 8 0.000000000000<br>
 1109:         1..........2.................3..........4.................5.........<br>
 1110:         6................7........8...............<br>
 1111:         Powell iter=23 -2*LL=6744.954108371555 1 -12.967632334283
 1112:         <br>
 1113:         2 0.135136681033 3 -7.402109728262 4 0.067844593326 <br>
 1114:         5 -0.673601538129 6 -0.006615504377 7 -5.051341616718 <br>
 1115:         8 0.051272038506<br>
 1116:         1..............2...........3..............4...........<br>
 1117:         5..........6................7...........8.........<br>
 1118:         #Number of iterations = 23, -2 Log likelihood =
 1119:         6744.954042573691<br>
 1120:         # Parameters<br>
 1121:         12 -12.966061 0.135117 <br>
 1122:         13 -7.401109 0.067831 <br>
 1123:         21 -0.672648 -0.006627 <br>
 1124:         23 -5.051297 0.051271 </font><br>
 1125:         </li>
 1126:     <li><pre><font size="2">Calculation of the hessian matrix. Wait...
 1127: 12345678.12.13.14.15.16.17.18.23.24.25.26.27.28.34.35.36.37.38.45.46.47.48.56.57.58.67.68.78
 1128: 
 1129: Inverting the hessian to get the covariance matrix. Wait...
 1130: 
 1131: #Hessian matrix#
 1132: 3.344e+002 2.708e+004 -4.586e+001 -3.806e+003 -1.577e+000 -1.313e+002 3.914e-001 3.166e+001 
 1133: 2.708e+004 2.204e+006 -3.805e+003 -3.174e+005 -1.303e+002 -1.091e+004 2.967e+001 2.399e+003 
 1134: -4.586e+001 -3.805e+003 4.044e+002 3.197e+004 2.431e-002 1.995e+000 1.783e-001 1.486e+001 
 1135: -3.806e+003 -3.174e+005 3.197e+004 2.541e+006 2.436e+000 2.051e+002 1.483e+001 1.244e+003 
 1136: -1.577e+000 -1.303e+002 2.431e-002 2.436e+000 1.093e+002 8.979e+003 -3.402e+001 -2.843e+003 
 1137: -1.313e+002 -1.091e+004 1.995e+000 2.051e+002 8.979e+003 7.420e+005 -2.842e+003 -2.388e+005 
 1138: 3.914e-001 2.967e+001 1.783e-001 1.483e+001 -3.402e+001 -2.842e+003 1.494e+002 1.251e+004 
 1139: 3.166e+001 2.399e+003 1.486e+001 1.244e+003 -2.843e+003 -2.388e+005 1.251e+004 1.053e+006 
 1140: # Scales
 1141: 12 1.00000e-004 1.00000e-006
 1142: 13 1.00000e-004 1.00000e-006
 1143: 21 1.00000e-003 1.00000e-005
 1144: 23 1.00000e-004 1.00000e-005
 1145: # Covariance
 1146:   1 5.90661e-001
 1147:   2 -7.26732e-003 8.98810e-005
 1148:   3 8.80177e-002 -1.12706e-003 5.15824e-001
 1149:   4 -1.13082e-003 1.45267e-005 -6.50070e-003 8.23270e-005
 1150:   5 9.31265e-003 -1.16106e-004 6.00210e-004 -8.04151e-006 1.75753e+000
 1151:   6 -1.15664e-004 1.44850e-006 -7.79995e-006 1.04770e-007 -2.12929e-002 2.59422e-004
 1152:   7 1.35103e-003 -1.75392e-005 -6.38237e-004 7.85424e-006 4.02601e-001 -4.86776e-003 1.32682e+000
 1153:   8 -1.82421e-005 2.35811e-007 7.75503e-006 -9.58687e-008 -4.86589e-003 5.91641e-005 -1.57767e-002 1.88622e-004
 1154: # agemin agemax for lifexpectancy, bage fage (if mle==0 ie no data nor Max likelihood).
 1155: 
 1156: 
 1157: agemin=70 agemax=100 bage=50 fage=100
 1158: Computing prevalence limit: result on file 'plrmypar.txt' 
 1159: Computing pij: result on file 'pijrmypar.txt' 
 1160: Computing Health Expectancies: result on file 'ermypar.txt' 
 1161: Computing Variance-covariance of DFLEs: file 'vrmypar.txt' 
 1162: Computing Total LEs with variances: file 'trmypar.txt' 
 1163: Computing Variance-covariance of Prevalence limit: file 'vplrmypar.txt' 
 1164: End of Imach
 1165: </font></pre>
 1166:     </li>
 1167: </ul>
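<p>The four pairs of fitted parameters printed above (an intercept and an
age slope for each of the transitions 12, 13, 21 and 23) define the step
transition probabilities through the logit-linear model described earlier
in this documentation. The Python sketch below illustrates, under that
assumption (with the probability of staying in the same state taken as the
reference category), how the parameters translate into an age-specific
transition matrix and how iterating these matrices leads to the prevalence
limit written to plrmypar.txt. It is only a sketch, not the IMaCh code;
the covariance matrix printed above (the inverse of the hessian) is what
IMaCh additionally uses to attach variances to these quantities.</p>

<pre>
# Illustrative sketch only (not the IMaCh code): from the fitted
# parameters above to step transition probabilities, then to the
# period (stationary) prevalence, assuming the logit-linear model.
import math

# parameters copied from the '# Parameters' block printed above
a12, b12 = -12.966061, 0.135117
a13, b13 = -7.401109, 0.067831
a21, b21 = -0.672648, -0.006627
a23, b23 = -5.051297, 0.051271

def transition_matrix(age):
    """Transition probabilities over one step (stepm=24 months) at a given age."""
    e12, e13 = math.exp(a12 + b12 * age), math.exp(a13 + b13 * age)
    e21, e23 = math.exp(a21 + b21 * age), math.exp(a23 + b23 * age)
    p11 = 1.0 / (1.0 + e12 + e13)      # reference: staying in state 1
    p22 = 1.0 / (1.0 + e21 + e23)      # reference: staying in state 2
    return [[p11,       e12 * p11, e13 * p11],   # from state 1
            [e21 * p22, p22,       e23 * p22],   # from state 2
            [0.0,       0.0,       1.0]]         # death (3) is absorbing

# prevalence limit around age 85: start from an arbitrary distribution
# well below that age and apply the matrices forward, renormalising among
# the survivors at each step (one step = stepm/12 = 2 years of age here)
w, age, step = [1.0, 0.0], 65.0, 24 / 12.0
while age < 85.0:
    p = transition_matrix(age)
    alive = [w[0] * p[0][0] + w[1] * p[1][0],
             w[0] * p[0][1] + w[1] * p[1][1]]
    total = alive[0] + alive[1]
    w = [alive[0] / total, alive[1] / total]
    age += step
print(w)   # compare with the prevalence limit written to plrmypar.txt
</pre>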
 1168: 
 1169: <p><font size="3">Once the running is finished, the program
 1170: requires a caracter:</font></p>
 1171: 
 1172: <table border="1">
 1173:     <tr>
 1174:         <td width="100%"><strong>Type e to edit output files, c
 1175:         to start again, and q for exiting:</strong></td>
 1176:     </tr>
 1177: </table>
 1178: 
 1179: <p><font size="3">First you should enter <strong>e </strong>to
 1180: edit the master file mypar.htm. </font></p>
 1181: 
 1182: <ul>
 1183:     <li><u>Outputs files</u> <br>
 1184:         <br>
 1185:         - Observed prevalence in each state: <a
 1186:         href="..\mytry\prmypar.txt">pmypar.txt</a> <br>
 1187:         - Estimated parameters and the covariance matrix: <a
 1188:         href="..\mytry\rmypar.txt">rmypar.imach</a> <br>
 1189:         - Stationary prevalence in each state: <a
 1190:         href="..\mytry\plrmypar.txt">plrmypar.txt</a> <br>
 1191:         - Transition probabilities: <a
 1192:         href="..\mytry\pijrmypar.txt">pijrmypar.txt</a> <br>
 1193:         - Copy of the parameter file: <a
 1194:         href="..\mytry\ormypar.txt">ormypar.txt</a> <br>
 1195:         - Life expectancies by age and initial health status: <a
 1196:         href="..\mytry\ermypar.txt">ermypar.txt</a> <br>
 1197:         - Variances of life expectancies by age and initial
 1198:         health status: <a href="..\mytry\vrmypar.txt">vrmypar.txt</a>
 1199:         <br>
 1200:         - Health expectancies with their variances: <a
 1201:         href="..\mytry\trmypar.txt">trmypar.txt</a> <br>
 1202:         - Standard deviation of stationary prevalence: <a
 1203:         href="..\mytry\vplrmypar.txt">vplrmypar.txt</a><br>
 1204:         - Prevalence forecasting: <a href="frmypar.txt">frmypar.txt</a>
 1205:         <br>
 1206:         - Population forecasting (if popforecast=1): <a
 1207:         href="poprmypar.txt">poprmypar.txt</a> <br>
 1208:         </li>
 1209:     <li><u>Graphs</u> <br>
 1210:         <br>
 1211:         -<a href="../mytry/pemypar1.gif">One-step transition probabilities</a><br>
 1212:         -<a href="../mytry/pmypar11.gif">Convergence to the stationary prevalence</a><br>
 1213:         -<a href="..\mytry\vmypar11.gif">Observed and stationary prevalence in state (1) with the confidence interval</a> <br>
 1214:         -<a href="..\mytry\vmypar21.gif">Observed and stationary prevalence in state (2) with the confidence interval</a> <br>
 1215:         -<a href="..\mytry\expmypar11.gif">Health life expectancies by age and initial health state (1)</a> <br>
 1216:         -<a href="..\mytry\expmypar21.gif">Health life expectancies by age and initial health state (2)</a> <br>
 1217:         -<a href="..\mytry\emypar1.gif">Total life expectancy by age and health expectancies in states (1) and (2).</a> </li>
 1218: </ul>
 1219: 
 1220: <p>This software has been partly supported by <a
 1221: href="http://euroreves.ined.fr">Euro-REVES</a>, a concerted
 1222: action of the European Union. It will be copyrighted
 1223: identically to a GNU software product, i.e. the program
 1224: can be distributed freely for non-commercial use. Sources are not
 1225: widely distributed today; you can get them by sending us a
 1226: simple justification (name, email, institute) to <a
 1227: href="mailto:brouard@ined.fr">brouard@ined.fr</a> and <a
 1228: href="mailto:lievre@ined.fr">lievre@ined.fr</a>.</p>
 1229: 
 1230: <p>The latest version (0.71d, March 2002) can be accessed at <a
 1231: href="http://euroreves.ined.fr/imach">http://euroreves.ined.fr/imach</a>.<br>
 1232: </p>
 1233: </body>
 1234: </html>
