File:  [Local Repository] / imach / html / doc / imach.htm
Revision 1.1: Wed Jun 16 12:05:30 2004 UTC by brouard
Branches: MAIN
CVS tags: HEAD

    1: <!-- $Id: imach.htm,v 1.1 2004/06/16 12:05:30 brouard Exp $ -->
    2: <html>
    3: 
    4: <head>
    5: <meta http-equiv="Content-Type"
    6: content="text/html; charset=iso-8859-1">
    7: <meta name="GENERATOR" content="Microsoft FrontPage Express 2.0">
    8: <title>Computing Health Expectancies using IMaCh</title>
    9: <!-- Changed by: Agnes Lievre, 12-Oct-2000 -->
   10: </head>
   11: 
   18: <body bgcolor="#FFFFFF">
   19: 
   20: <hr size="3" color="#EC5E5E">
   21: 
   22: <h1 align="center"><font color="#00006A">Computing Health
   23: Expectancies using IMaCh</font></h1>
   24: 
   25: <h1 align="center"><font color="#00006A" size="5">(a Maximum
   26: Likelihood Computer Program using Interpolation of Markov Chains)</font></h1>
   27: 
   28: <p align="center">&nbsp;</p>
   29: 
   30: <p align="center"><a href="http://www.ined.fr/"><img
   31: src="logo-ined.gif" border="0" width="151" height="76"></a><img
   32: src="euroreves2.gif" width="151" height="75"></p>
   33: 
   34: <h3 align="center"><a href="http://www.ined.fr/"><font
   35: color="#00006A">INED</font></a><font color="#00006A"> and </font><a
   36: href="http://euroreves.ined.fr"><font color="#00006A">EUROREVES</font></a></h3>
   37: 
   38: <p align="center"><font color="#00006A" size="4"><strong>Version
   39: 0.8a, May 2002</strong></font></p>
   40: 
   41: <hr size="3" color="#EC5E5E">
   42: 
   43: <p align="center"><font color="#00006A"><strong>Authors of the
   44: program: </strong></font><a href="http://sauvy.ined.fr/brouard"><font
   45: color="#00006A"><strong>Nicolas Brouard</strong></font></a><font
   46: color="#00006A"><strong>, senior researcher at the </strong></font><a
   47: href="http://www.ined.fr"><font color="#00006A"><strong>Institut
   48: National d'Etudes Démographiques</strong></font></a><font
   49: color="#00006A"><strong> (INED, Paris) in the &quot;Mortality,
   50: Health and Epidemiology&quot; Research Unit </strong></font></p>
   51: 
   52: <p align="center"><font color="#00006A"><strong>and Agnès
   53: Lièvre<br clear="left">
   54: </strong></font></p>
   55: 
   56: <h4><font color="#00006A">Contribution to the mathematics: C. R.
   57: Heathcote </font><font color="#00006A" size="2">(Australian
   58: National University, Canberra).</font></h4>
   59: 
   60: <h4><font color="#00006A">Contact: Agnès Lièvre (</font><a
   61: href="mailto:lievre@ined.fr"><font color="#00006A"><i>lievre@ined.fr</i></font></a><font
   62: color="#00006A">) </font></h4>
   63: 
   64: <hr>
   65: 
   66: <ul>
   67:     <li><a href="#intro">Introduction</a> </li>
   68:     <li><a href="#data">On what kind of data can it be used?</a></li>
   69:     <li><a href="#datafile">The data file</a> </li>
   70:     <li><a href="#biaspar">The parameter file</a> </li>
   71:     <li><a href="#running">Running Imach</a> </li>
   72:     <li><a href="#output">Output files and graphs</a> </li>
    73:     <li><a href="#example">Example</a> </li>
   74: </ul>
   75: 
   76: <hr>
   77: 
   78: <h2><a name="intro"><font color="#00006A">Introduction</font></a></h2>
   79: 
   80: <p>This program computes <b>Healthy Life Expectancies</b> from <b>cross-longitudinal
   81: data</b> using the methodology pioneered by Laditka and Wolf (1).
   82: Within the family of Health Expectancies (HE), Disability-free
   83: life expectancy (DFLE) is probably the most important index to
   84: monitor. In low mortality countries, there is a fear that when
   85: mortality declines, the increase in DFLE is not proportionate to
   86: the increase in total Life expectancy. This case is called the <em>Expansion
   87: of morbidity</em>. Most of the data collected today, in
   88: particular by the international <a href="http://www.reves.org">REVES</a>
   89: network on Health expectancy, and most HE indices based on these
   90: data, are <em>cross-sectional</em>. It means that the information
   91: collected comes from a single cross-sectional survey: people from
    92: status at a single date. The proportion of people disabled at each
    93: age can then be measured at that date. This age-specific
    94: prevalence curve is then used to distinguish, within the
    95: stationary population (which, by definition, is the life table
    96: estimated from the vital statistics on mortality at the same
    97: date), the disabled population from the disability-free
   98: date), the disable population from the disability-free
   99: population. Life expectancy (LE) (or total population divided by
  100: the yearly number of births or deaths of this stationary
  101: population) is then decomposed into DFLE and DLE. This method of
  102: computing HE is usually called the Sullivan method (from the name
  103: of the author who first described it).</p>
  104: 
   105: <p>Age-specific proportions of people disabled are very difficult
   106: to forecast because each proportion corresponds to the historical
   107: conditions of the cohort: it is the result of the historical
   108: flows of entering disability and of recovering, from the past
   109: until today. The age-specific intensities (or incidence rates) of
   110: entering disability or recovering good health reflect current
   111: conditions and can therefore be used at each age to
   112: forecast the future of this cohort. For example, if a country is
   113: improving its prosthetic technology, the incidence of
   114: recovering the ability to walk will be higher at each (old) age,
   115: but the prevalence of disability will reflect this improvement
   116: only slightly, because prevalence is mostly shaped by the history
   117: of the cohort and not by recent period effects. To measure the
   118: period improvement we have to simulate the future of a cohort of
   119: new-borns entering or leaving the disability state at each age, or
   120: dying, according to the incidence rates measured today on
   121: different cohorts. The proportion of people disabled at each age
   122: in this simulated cohort will be much lower (using the example of
   123: an improvement) than the proportions observed at each age in a
   124: cross-sectional survey. This new prevalence curve, introduced in a
   125: life table, will give a much more current and realistic HE level
   126: than the Sullivan method, which mostly measures the history of
   127: health conditions in this country.</p>
  128: 
   129: <p>The main question is therefore how to measure incidence rates
   130: from cross-longitudinal surveys. This is the goal of the IMaCh
   131: program. From your data and using IMaCh you can estimate period
   132: HE and not only Sullivan's HE. The standard errors of the HE
   133: are also computed.</p>
  134: 
   135: <p>A cross-longitudinal survey consists of a first survey
   136: (&quot;cross&quot;) where individuals of different ages are
   137: interviewed on their health status or degree of disability. At
   138: least a second wave of interviews (&quot;longitudinal&quot;)
   139: should measure each individual's new health status. Health
   140: expectancies are computed from the transitions observed between
   141: waves and are computed for each degree of severity of disability
   142: (number of life states). The more degrees you consider, the more
   143: time is necessary to reach the maximum likelihood of the parameters
   144: involved in the model. Considering only two states of disability
   145: (disabled and healthy) is generally enough, but the computer
   146: program also works with more health statuses.<br>
  147: <br>
   148: The simplest model is the multinomial logistic model where <i>pij</i>
   149: is the probability of being observed in state <i>j</i> at the second
   150: wave conditional on being observed in state <em>i</em> at the first
   151: wave. Therefore a simple model is log<em>(pij/pii) = aij +
   152: bij*age + cij*sex</em>, where '<i>age</i>' is age and '<i>sex</i>'
   153: is a covariate. The advantage claimed for this computer program
   154: is that if the delay between waves is not identical for
   155: each individual, or if some individuals missed an interview, the
   156: information is not rounded or lost, but taken into account by
   157: interpolation or extrapolation. <i>hPijx</i> is the
   158: probability of being observed in state <i>j</i> at age <i>x+h</i>
   159: conditional on the observed state <i>i</i> at age <i>x</i>. The
   160: delay '<i>h</i>' can be split into an exact number (<i>nh</i>)
   161: of unobserved intermediate steps of <i>stepm</i> months each. This
   162: elementary transition (by month, quarter, semester or year) is
   163: modeled as a multinomial logistic. The <i>hPx</i> matrix is simply
   164: the matrix product of <i>nh</i> elementary matrices and the
   165: contribution of each individual to the likelihood is simply <i>hPijx</i>.
  166: <br>
  167: </p>
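
<p>As a rough illustration of this interpolation idea, here is a
sketch in Python (not the actual IMaCh implementation): it builds
elementary transition matrices from logistic parameters aij and bij
and multiplies nh of them to obtain hPx. The helper names
(elementary_matrix, hPx) are ours, and the aij, bij values are the
rounded guess values shown later in this manual.</p>

<pre>import numpy as np

# illustrative logistic parameters aij, bij for transitions out of the
# two living states (1=healthy, 2=disabled) towards 1, 2 and 3=death
a = [[0.0, -14.156, -7.925], [-1.890, 0.0, -6.235]]
b = [[0.0,   0.111,  0.032], [-0.029, 0.0,  0.022]]

def elementary_matrix(age):
    """One-step (stepm months) transition matrix at a given age, from
    the multinomial logistic model log(pij/pii) = aij + bij*age."""
    P = np.zeros((3, 3))
    for i in range(2):                    # living states only
        logits = np.array([a[i][j] + b[i][j] * age if j != i else 0.0
                           for j in range(3)])
        P[i] = np.exp(logits) / np.exp(logits).sum()
    P[2, 2] = 1.0                         # death is absorbing
    return P

def hPx(x, nh, stepm):
    """Probability matrix over h = nh*stepm months: product of nh
    elementary matrices, age increasing by stepm months at each step."""
    P = np.eye(3)
    for k in range(nh):
        P = P @ elementary_matrix(x + k * stepm / 12.0)
    return P

print(hPx(70.0, nh=24, stepm=1))          # 2-year transition matrix at age 70</pre>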
  168: 
   169: <p>The program presented in this manual is a quite general
   170: program named <strong>IMaCh</strong> (for <strong>I</strong>nterpolated
   171: <strong>MA</strong>rkov <strong>CH</strong>ain), designed to
   172: analyse transition data from longitudinal surveys. The first step
   173: is the estimation of the parameters of a model of transition
   174: probabilities between an initial status and a final status. From
   175: there, the computer program produces indicators such as observed
   176: and stationary prevalence, life expectancies and their variances,
   177: and graphs. Our transition model consists of absorbing and
   178: non-absorbing states with the possibility of return across the
   179: non-absorbing states. The main advantage of this package,
   180: compared to other programs for the analysis of transition data
   181: (for example, PROC CATMOD in SAS<sup>®</sup>), is that the whole
   182: individual information is used even if an interview is missing, a
   183: status or a date is unknown, or the delay between waves is
   184: not identical for each individual. The program can be executed
   185: according to parameters: selection of a sub-sample, number of
   186: absorbing and non-absorbing states, number of waves taken into
   187: account (the user inputs the first and the last interview), a
   188: tolerance level for the maximisation function, the periodicity of
   189: the transitions (annual, quarterly or monthly transitions can be
   190: computed), and covariates in the model. It works on Windows and
   191: on Unix.<br>
  192: </p>
  193: 
  194: <hr>
  195: 
  196: <p>(1) Laditka, Sarah B. and Wolf, Douglas A. (1998), &quot;New
  197: Methods for Analyzing Active Life Expectancy&quot;. <i>Journal of
  198: Aging and Health</i>. Vol 10, No. 2. </p>
  199: 
  200: <hr>
  201: 
  202: <h2><a name="data"><font color="#00006A">On what kind of data can
  203: it be used?</font></a></h2>
  204: 
   205: <p>The minimum data required for a transition model is the
   206: recording of a set of individuals interviewed at a first date and
   207: interviewed again at least one more time. From the
   208: observations of an individual, we obtain a follow-up over time of
   209: the occurrence of a specific event. In this documentation, the
   210: event is related to health status at older ages, but the program
   211: can be applied to many longitudinal studies in different
   212: contexts. To build the data file described in the next section,
   213: you must have the month and year of each interview and the
   214: corresponding health status. But in order to get age, the date of
   215: birth (month and year) is required (a missing value is allowed for
   216: the month). The date of death (month and year) is also
   217: required if the individual died. Shorter
   218: steps (e.g. a month) will more closely take into account the
   219: survival time after the last interview.</p>
  220: 
  221: <hr>
  222: 
  223: <h2><a name="datafile"><font color="#00006A">The data file</font></a></h2>
  224: 
   225: <p>In this example, 8,000 people have been interviewed in a
   226: cross-longitudinal survey of 4 waves (1984, 1986, 1988, 1990).
   227: Some people missed 1, 2 or 3 interviews. Health statuses are
   228: healthy (1) and disabled (2). The survey is not a real one; it is
   229: a simulation of the American Longitudinal Survey on Aging. The
   230: disability state is defined as being unable to perform one of four
   231: ADLs (activities of daily living, such as bathing, eating or walking).
   232: Therefore, even if the individuals interviewed in the sample are
   233: virtual, the information brought by this sample is close to the
   234: situation of the United States. Sex is not recorded in this
   235: sample.</p>
  236: 
   237: <p>Each line of the data set (named <a href="data1.txt">data1.txt</a>
   238: in this first example) is an individual record whose fields are: </p>
  239: 
  240: <ul>
  241:     <li><b>Index number</b>: positive number (field 1) </li>
   242:     <li><b>First covariate</b>: positive number (field 2) </li>
   243:     <li><b>Second covariate</b>: positive number (field 3) </li>
   244:     <li><a name="Weight"><b>Weight</b></a>: positive number
   245:         (field 4). In most surveys individuals are weighted
   246:         according to the stratification of the sample.</li>
   247:     <li><b>Date of birth</b>: coded as mm/yyyy. Missing dates are
   248:         coded as 99/9999 (field 5) </li>
   249:     <li><b>Date of death</b>: coded as mm/yyyy. Missing dates are
   250:         coded as 99/9999 (field 6) </li>
   251:     <li><b>Date of first interview</b>: coded as mm/yyyy. Missing
   252:         dates are coded as 99/9999 (field 7) </li>
   253:     <li><b>Status at first interview</b>: positive number.
   254:         Missing values are coded -1. (field 8) </li>
   255:     <li><b>Date of second interview</b>: coded as mm/yyyy.
   256:         Missing dates are coded as 99/9999 (field 9) </li>
   257:     <li><strong>Status at second interview</strong>: positive
   258:         number. Missing values are coded -1. (field 10) </li>
   259:     <li><b>Date of third interview</b>: coded as mm/yyyy. Missing
   260:         dates are coded as 99/9999 (field 11) </li>
   261:     <li><strong>Status at third interview</strong>: positive
   262:         number. Missing values are coded -1. (field 12) </li>
   263:     <li><b>Date of fourth interview</b>: coded as mm/yyyy.
   264:         Missing dates are coded as 99/9999 (field 13) </li>
   265:     <li><strong>Status at fourth interview</strong>: positive
   266:         number. Missing values are coded -1. (field 14) </li>
  267:     <li>etc</li>
  268: </ul>
  269: 
  270: <p>&nbsp;</p>
  271: 
   272: <p>If your longitudinal survey does not include information about
   273: weights or covariates, you must fill the column with a number
   274: (e.g. 1) because a missing field is not allowed.</p>
  275: 
  276: <hr>
  277: 
  278: <h2><font color="#00006A">Your first example parameter file</font><a
  279: href="http://euroreves.ined.fr/imach"></a><a name="uio"></a></h2>
  280: 
  281: <h2><a name="biaspar"></a>#Imach version 0.8a, May 2002,
  282: INED-EUROREVES </h2>
  283: 
  284: <p>This is a comment. Comments start with a '#'.</p>
  285: 
  286: <h4><font color="#FF0000">First uncommented line</font></h4>
  287: 
  288: <pre>title=1st_example datafile=data1.txt lastobs=8600 firstpass=1 lastpass=4</pre>
  289: 
  290: <ul>
   291:     <li><b>title=</b> 1st_example is the title of the run. </li>
   292:     <li><b>datafile=</b> data1.txt is the name of the data set.
   293:         Our example is a six-year follow-up survey. It consists
   294:         of a baseline followed by 3 reinterviews. </li>
   295:     <li><b>lastobs=</b> 8600 the program is able to run on a
   296:         subsample where the last observation number is lastobs.
   297:         It can be set to a bigger number than the real number of
   298:         observations (e.g. 100000). In this example, maximisation
   299:         will be done on the first 8600 records. </li>
   300:     <li><b>firstpass=1</b> , <b>lastpass=4 </b>In case of more
   301:         than two interviews in the survey, the program can be run
   302:         on selected transition periods. firstpass=1 means the
   303:         first interview included in the calculation is the
   304:         baseline survey. lastpass=4 means that the information
   305:         brought by the 4th interview is taken into account.</li>
  306: </ul>
  307: 
  308: <p>&nbsp;</p>
  309: 
  310: <h4><a name="biaspar-2"><font color="#FF0000">Second uncommented
  311: line</font></a></h4>
  312: 
  313: <pre>ftol=1.e-08 stepm=1 ncovcol=2 nlstate=2 ndeath=1 maxwav=4 mle=1 weight=0</pre>
  314: 
  315: <ul>
  316:     <li><b>ftol=1e-8</b> Convergence tolerance on the function
  317:         value in the maximisation of the likelihood. Choosing a
  318:         correct value for ftol is difficult. 1e-8 is a correct
   319:         value for a 32-bit computer.</li>
  320:     <li><b>stepm=1</b> Time unit in months for interpolation.
  321:         Examples:<ul>
  322:             <li>If stepm=1, the unit is a month </li>
   323:             <li>If stepm=4, the unit is four months</li>
  324:             <li>If stepm=12, the unit is a year </li>
  325:             <li>If stepm=24, the unit is two years</li>
  326:             <li>... </li>
  327:         </ul>
  328:     </li>
  329:     <li><b>ncovcol=2</b> Number of covariate columns in the
  330:         datafile which precede the date of birth. Here you can
   331:         put variables that won't necessarily be used during the
   332:         run. It is not the number of covariates that will be
   333:         specified by the model. The 'model' syntax describes the
  334:         covariates to take into account. </li>
  335:     <li><b>nlstate=2</b> Number of non-absorbing (alive) states.
  336:         Here we have two alive states: disability-free is coded 1
  337:         and disability is coded 2. </li>
  338:     <li><b>ndeath=1</b> Number of absorbing states. The absorbing
  339:         state death is coded 3. </li>
  340:     <li><b>maxwav=4</b> Number of waves in the datafile.</li>
   341:     <li><a name="mle"><b>mle</b></a><b>=1</b> Option for
   342:         Maximum Likelihood Estimation. <ul>
  343:             <li>If mle=1 the program does the maximisation and
  344:                 the calculation of health expectancies </li>
  345:             <li>If mle=0 the program only does the calculation of
  346:                 the health expectancies. </li>
  347:         </ul>
  348:     </li>
  349:     <li><b>weight=0</b> Possibility to add weights. <ul>
  350:             <li>If weight=0 no weights are included </li>
  351:             <li>If weight=1 the maximisation integrates the
  352:                 weights which are in field <a href="#Weight">4</a></li>
  353:         </ul>
  354:     </li>
  355: </ul>
  356: 
  357: <h4><font color="#FF0000">Covariates</font></h4>
  358: 
  359: <p>Intercept and age are systematically included in the model.
  360: Additional covariates can be included with the command: </p>
  361: 
  362: <pre>model=<em>list of covariates</em></pre>
  363: 
  364: <ul>
  365:     <li>if<strong> model=. </strong>then no covariates are
  366:         included</li>
  367:     <li>if <strong>model=V1</strong> the model includes the first
  368:         covariate (field 2)</li>
  369:     <li>if <strong>model=V2 </strong>the model includes the
  370:         second covariate (field 3)</li>
  371:     <li>if <strong>model=V1+V2 </strong>the model includes the
  372:         first and the second covariate (fields 2 and 3)</li>
  373:     <li>if <strong>model=V1*V2 </strong>the model includes the
  374:         product of the first and the second covariate (fields 2
  375:         and 3)</li>
   376:     <li>if <strong>model=V1+V1*age</strong> the model includes
   377:         the first covariate and its interaction with age (V1*age)</li>
  378: </ul>
  379: 
  380: <p>In this example, we have two covariates in the data file
  381: (fields 2 and 3). The number of covariates included in the data
  382: file between the id and the date of birth is ncovcol=2 (it was
   383: named ncov in versions prior to 0.8). If you have 3 covariates in
   384: the datafile (fields 2, 3 and 4), you will set ncovcol=3. Then
   385: you can run the programme with a new parametrisation taking into
   386: account the third covariate. For example, <strong>model=V1+V3 </strong>estimates
   387: a model with the first and third covariates. More complicated
   388: models can be used, but they will take more time to converge. With
   389: a simple model (no covariates), the programme estimates 8
   390: parameters. Adding covariates increases the number of
   391: parameters: 12 for <strong>model=V1, </strong>16 for <strong>model=V1+V1*age
   392: </strong>and 20 for <strong>model=V1+V2+V3.</strong></p>
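
<p>These counts follow from the formula given just below,
N=(nlstate+ndeath-1)*nlstate*ncovmodel, where ncovmodel counts the
intercept, age and the covariates entering the model. A minimal
check in Python (the function name n_parameters is ours, for
illustration only):</p>

<pre>def n_parameters(nlstate, ndeath, ncovmodel):
    # one set of ncovmodel coefficients per transition out of each
    # living state, the diagonal pii being the reference category
    return (nlstate + ndeath - 1) * nlstate * ncovmodel

print(n_parameters(2, 1, 2))   # intercept + age        ->  8
print(n_parameters(2, 1, 3))   # model=V1               -> 12
print(n_parameters(2, 1, 4))   # model=V1+V1*age        -> 16
print(n_parameters(2, 1, 5))   # model=V1+V2+V3         -> 20</pre>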
  393: 
  394: <h4><font color="#FF0000">Guess values for optimization</font><font
  395: color="#00006A"> </font></h4>
  396: 
  397: <p>You must write the initial guess values of the parameters for
  398: optimization. The number of parameters, <em>N</em> depends on the
  399: number of absorbing states and non-absorbing states and on the
  400: number of covariates. <br>
  401: <em>N</em> is given by the formula <em>N</em>=(<em>nlstate</em> +
  402: <em>ndeath</em>-1)*<em>nlstate</em>*<em>ncovmodel</em>&nbsp;. <br>
  403: <br>
   404: <p>Thus in the simple case with 2 covariates (the model is log
   405: (pij/pii) = aij + bij * age, where intercept and age are the two
   406: covariates), 2 health states (1 for disability-free and 2
   407: for disability) and 1 absorbing state (3), you must enter 8
   408: initial values, a12, b12, a13, b13, a21, b21, a23, b23. You can
   409: start with zeros as in this example, but if you have a more
   410: precise set (for example from an earlier run) you can enter it
   411: and it will speed up the convergence.<br>
   412: Each of the four lines starts with the indices &quot;ij&quot;: <b>ij
   413: aij bij</b> </p>
  414: 
  415: <blockquote>
  416:     <pre># Guess values of aij and bij in log (pij/pii) = aij + bij * age
  417: 12 -14.155633  0.110794 
  418: 13  -7.925360  0.032091 
  419: 21  -1.890135 -0.029473 
  420: 23  -6.234642  0.022315 </pre>
  421: </blockquote>
  422: 
   423: <p>or, to simplify (in most cases it converges but there is no
   424: guarantee!): </p>
  425: 
  426: <blockquote>
  427:     <pre>12 0.0 0.0
  428: 13 0.0 0.0
  429: 21 0.0 0.0
  430: 23 0.0 0.0</pre>
  431: </blockquote>
  432: 
   433: <p>In order to speed up the convergence you can make a first run
   434: with a large stepm, e.g. stepm=12 or 24, and then decrease the stepm
   435: until stepm=1 month. If newstepm is the new shorter stepm and the
   436: old stepm can be expressed as a multiple of newstepm, stepm =
   437: n*newstepm, then the following approximation holds: </p>
   438: 
   439: <pre>aij(newstepm) = aij(n*newstepm) - ln(n)
   440: </pre>
   441: 
   442: <p>and </p>
   443: 
   444: <pre>bij(newstepm) = bij(n*newstepm)</pre>
  445: 
   446: <p>For example if you already ran with a 6-month interval and
   447: got:<br>
  448: </p>
  449: 
  450: <pre># Parameters
  451: 12 -13.390179  0.126133 
  452: 13  -7.493460  0.048069 
  453: 21   0.575975 -0.041322 
  454: 23  -4.748678  0.030626 
  455: </pre>
  456: 
   457: <p>If you now want to get the monthly estimates, you can guess
   458: the aij by subtracting ln(6) = 1.7917<br>
   459: and running<br>
  460: </p>
  461: 
  462: <pre>12 -15.18193847  0.126133 
  463: 13 -9.285219469  0.048069
  464: 21 -1.215784469 -0.041322
  465: 23 -6.540437469  0.030626
  466: </pre>
  467: 
  468: <p>and get<br>
  469: </p>
  470: 
   471: <pre>12 -15.029768 0.124347 
   472: 13 -8.472981 0.036599 
   473: 21 -1.472527 -0.038394 
   474: 23 -6.553602 0.029856 </pre>
   475: 
   476: <p>The starting guess is thus close to the converged results. The
   477: approximation is probably useful only for very small intervals and we
   478: don't have enough experience to know whether it will speed up the
   479: convergence or not.</p>
  480: 
  481: <pre>         -ln(12)= -2.484
  482:  -ln(6/1)=-ln(6)= -1.791
  483:  -ln(3/1)=-ln(3)= -1.0986
  484: -ln(12/6)=-ln(2)= -0.693
  485: </pre>
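
<p>As a small illustration of this rescaling (a Python sketch, not
part of IMaCh; the dictionary name params_6m is ours), the monthly
guess values above can be reproduced from the 6-month estimates by
subtracting ln(6) from each aij and leaving each bij unchanged:</p>

<pre>import math

# parameters estimated with a 6-month step (from the example above)
params_6m = {"12": (-13.390179, 0.126133),
             "13": ( -7.493460, 0.048069),
             "21": (  0.575975, -0.041322),
             "23": ( -4.748678, 0.030626)}

n = 6   # old step / new step (6 months down to 1 month)
for ij, (aij, bij) in params_6m.items():
    # aij(newstepm) = aij(n*newstepm) - ln(n); bij is unchanged
    print(ij, round(aij - math.log(n), 6), bij)</pre>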
  486: 
  487: <h4><font color="#FF0000">Guess values for computing variances</font></h4>
  488: 
  489: <p>This is an output if <a href="#mle">mle</a>=1. But it can be
  490: used as an input to get the various output data files (Health
  491: expectancies, stationary prevalence etc.) and figures without
  492: rerunning the rather long maximisation phase (mle=0). </p>
  493: 
  494: <p>The scales are small values for the evaluation of numerical
  495: derivatives. These derivatives are used to compute the hessian
  496: matrix of the parameters, that is the inverse of the covariance
  497: matrix, and the variances of health expectancies. Each line
   498: consists of the indices &quot;ij&quot; followed by the initial scales
  499: (zero to simplify) associated with aij and bij. </p>
  500: 
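<p>To give an idea of what such a scale is used for (a sketch only,
not the IMaCh algorithm; the function and argument names below are
ours), a central finite difference with step 'scale' approximates a
second derivative of -2 log-likelihood with respect to one
parameter:</p>

<pre>def second_derivative(minus2loglik, theta, k, scale):
    """Central-difference estimate of the k-th diagonal term of the
    hessian of -2*log-likelihood; 'scale' plays the role of the
    scale values entered above."""
    up = list(theta);   up[k] += scale
    down = list(theta); down[k] -= scale
    return (minus2loglik(up) - 2.0 * minus2loglik(theta)
            + minus2loglik(down)) / (scale * scale)</pre>
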
  501: <ul>
  502:     <li>If mle=1 you can enter zeros:</li>
  503:     <li><blockquote>
  504:             <pre># Scales (for hessian or gradient estimation)
  505: 12 0. 0. 
  506: 13 0. 0. 
  507: 21 0. 0. 
  508: 23 0. 0. </pre>
  509:         </blockquote>
  510:     </li>
  511:     <li>If mle=0 you must enter a covariance matrix (usually
  512:         obtained from an earlier run).</li>
  513: </ul>
  514: 
  515: <h4><font color="#FF0000">Covariance matrix of parameters</font></h4>
  516: 
  517: <p>This is an output if <a href="#mle">mle</a>=1. But it can be
  518: used as an input to get the various output data files (Health
  519: expectancies, stationary prevalence etc.) and figures without
  520: rerunning the rather long maximisation phase (mle=0). <br>
  521: Each line starts with indices &quot;ijk&quot; followed by the
  522: covariances between aij and bij:<br>
  523: </p>
  524: 
  525: <pre>
  526:    121 Var(a12) 
  527:    122 Cov(b12,a12)  Var(b12) 
  528:           ...
  529:    232 Cov(b23,a12)  Cov(b23,b12) ... Var (b23) </pre>
  530: 
  531: <ul>
  532:     <li>If mle=1 you can enter zeros. </li>
  533:     <li><pre># Covariance matrix
  534: 121 0.
  535: 122 0. 0.
  536: 131 0. 0. 0. 
  537: 132 0. 0. 0. 0. 
  538: 211 0. 0. 0. 0. 0. 
  539: 212 0. 0. 0. 0. 0. 0. 
  540: 231 0. 0. 0. 0. 0. 0. 0. 
  541: 232 0. 0. 0. 0. 0. 0. 0. 0.</pre>
  542:     </li>
  543:     <li>If mle=0 you must enter a covariance matrix (usually
  544:         obtained from an earlier run). </li>
  545: </ul>
  546: 
  547: <h4><font color="#FF0000">Age range for calculation of stationary
  548: prevalences and health expectancies</font></h4>
  549: 
  550: <pre>agemin=70 agemax=100 bage=50 fage=100</pre>
  551: 
   552: <p>Once we have obtained the estimated parameters, the program is able
   553: to calculate the stationary prevalences, the transition probabilities
   554: and the life expectancies at any age. The choice of the age range is
   555: useful for extrapolation. In our data file, ages vary from 70 to
   556: 102. It is possible to get extrapolated stationary prevalences by
   557: age, ranging from agemin to agemax.</p>
   558: 
   559: <p>Setting bage=50 (begin age) and fage=100 (final age) makes
   560: the program compute life expectancy from age 'bage' to age
   561: 'fage'. As we use a model, we can usefully compute life
   562: expectancy on a wider age range than the age range of the data.
   563: But the model can be rather wrong on much larger intervals.
   564: The program is limited to an upper age of around 120!</p>
  568: 
  569: <ul>
  570:     <li><b>agemin=</b> Minimum age for calculation of the
  571:         stationary prevalence </li>
  572:     <li><b>agemax=</b> Maximum age for calculation of the
  573:         stationary prevalence </li>
  574:     <li><b>bage=</b> Minimum age for calculation of the health
  575:         expectancies </li>
  576:     <li><b>fage=</b> Maximum age for calculation of the health
  577:         expectancies </li>
  578: </ul>
  579: 
  580: <h4><a name="Computing"><font color="#FF0000">Computing</font></a><font
  581: color="#FF0000"> the observed prevalence</font></h4>
  582: 
  583: <pre>begin-prev-date=1/1/1984 end-prev-date=1/6/1988 estepm=1</pre>
  584: 
   585: <p>The statements 'begin-prev-date' and 'end-prev-date' allow us to
   586: select the period over which we calculate the observed prevalences
   587: in each state. In this example, the prevalences are calculated on
   588: survey data collected between 1 January 1984 and 1 June 1988.</p>
  591: 
  592: <ul>
  593:     <li><strong>begin-prev-date= </strong>Starting date
  594:         (day/month/year)</li>
  595:     <li><strong>end-prev-date= </strong>Final date
  596:         (day/month/year)</li>
   597:     <li><strong>estepm= </strong>Unit (in months). We compute the
   598:         life expectancy from trapezoids spaced every estepm
   599:         months (see the sketch after this list). This is mainly to
   600:         measure the difference between two models: for example if
   601:         stepm=24 months the pijx are given only every 2 years and by
   602:         summing them we are calculating an estimate of the life
   603:         expectancy assuming a linear progression in between, and thus
   604:         overestimating or underestimating according to the curvature
   605:         of the survival function. If, for the same data, we estimate
   606:         the model with stepm=1 month, we can keep estepm at 24 months
   607:         to compare the new estimate of life expectancy with the
   608:         same linear hypothesis. A more precise result, taking
   609:         into account a more precise curvature, will be obtained if
   610:         estepm is as small as stepm.</li>
  611: </ul>
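
<p>A minimal sketch of this trapezoid summation (illustrative Python,
not the IMaCh source): given survival-type probabilities evaluated
every estepm months, the expectancy is the sum of trapezoids of width
estepm/12 years. The function name and the probabilities below are
purely hypothetical.</p>

<pre>def expectancy_from_trapezoids(lx, estepm):
    """Expectancy in years from probabilities lx spaced estepm months
    apart, using the trapezoidal rule."""
    width = estepm / 12.0                     # step width in years
    return sum(width * (lx[k] + lx[k + 1]) / 2.0
               for k in range(len(lx) - 1))

# hypothetical probabilities of being alive and healthy, every 24 months
lx = [1.00, 0.85, 0.68, 0.50, 0.32, 0.17, 0.07, 0.02, 0.00]
print(expectancy_from_trapezoids(lx, estepm=24))    # about 6.2 years</pre>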
  612: 
  613: <h4><font color="#FF0000">Population- or status-based health
  614: expectancies</font></h4>
  615: 
  616: <pre>pop_based=0</pre>
  617: 
   618: <p>The program computes status-based health expectancies, i.e.
   619: health expectancies which depend on your initial health state.
   620: If you are healthy your healthy life expectancy (e11) is higher
   621: than if you were disabled (e21, with e11 &gt; e21).<br>
   622: To compute a healthy life expectancy independent of the initial
   623: status we have to weight e11 and e21 according to the probability
   624: of being in each state at the initial age or, in other words, according
   625: to the proportion of people in each state.<br>
   626: We prefer computing a 'pure' period healthy life expectancy based
   627: only on the transition forces. Then the weights are simply the
   628: stationary prevalences or 'implied' prevalences at the initial
   629: age.<br>
   630: Other people would like to use the cross-sectional
   631: prevalences (the &quot;Sullivan prevalences&quot;) observed at
   632: the initial age during a period of time <a href="#Computing">defined
   633: just above</a>. <br>
  634: </p>
  635: 
  636: <ul>
  637:     <li><strong>popbased= 0 </strong>Health expectancies are
  638:         computed at each age from stationary prevalences
  639:         'expected' at this initial age.</li>
   640:     <li><strong>popbased= 1 </strong>Health expectancies are
   641:         computed at each age from the cross-sectional 'observed'
   642:         prevalence at this initial age. As the whole population is
   643:         not observed at the same exact date, we define a short
   644:         period during which the observed prevalence is computed.</li>
  645: </ul>
  646: 
   647: <h4><font color="#FF0000">Prevalence forecasting (Experimental)</font></h4>
  648: 
  649: <pre>starting-proj-date=1/1/1989 final-proj-date=1/1/1992 mov_average=0 </pre>
  650: 
   651: <p>Prevalence and population projections are only available if
   652: the interpolation unit is a month, i.e. stepm=1, and if there are
   653: no covariates. The programme estimates the prevalence in each
   654: state at a precise date expressed as day/month/year. The
   655: programme computes one forecasted prevalence per year from a
   656: starting date (1 January 1989 in this example) to a final date
   657: (1 January 1992). The statement mov_average allows computing
   658: smoothed forecasted prevalences with a five-age moving average
   659: centered at the mid-age of the five-age period. <br>
  660: </p>
  661: 
  662: <ul>
  663:     <li><strong>starting-proj-date</strong>= starting date
  664:         (day/month/year) of forecasting</li>
  665:     <li><strong>final-proj-date= </strong>final date
  666:         (day/month/year) of forecasting</li>
  667:     <li><strong>mov_average</strong>= smoothing with a five-age
  668:         moving average centered at the mid-age of the five-age
  669:         period. The command<strong> mov_average</strong> takes
  670:         value 1 if the prevalences are smoothed and 0 otherwise.</li>
  671: </ul>
  672: 
  673: <h4><font color="#FF0000">Last uncommented line : Population
  674: forecasting </font></h4>
  675: 
  676: <pre>popforecast=0 popfile=pyram.txt popfiledate=1/1/1989 last-popfiledate=1/1/1992</pre>
  677: 
  678: <p>This command is available if the interpolation unit is a
  679: month, i.e. stepm=1 and if popforecast=1. From a data file
  680: including age and number of persons alive at the precise date
  681: &#145;popfiledate&#146;, you can forecast the number of persons
  682: in each state until date &#145;last-popfiledate&#146;. In this
  683: example, the popfile <a href="pyram.txt"><b>pyram.txt</b></a>
   684: includes real data, namely the Japanese population in 1989.<br>
  685: </p>
  686: 
  687: <ul type="disc">
   688:     <li><b>popforecast= 0 </b>Option for population forecasting.
   689:         If popforecast=1, the programme does the forecasting.</li>
   690:     <li><b>popfile= </b>name of the population file</li>
   691:     <li><b>popfiledate=</b> date of the population file</li>
   692:     <li><b>last-popfiledate</b>= date of the last population
   693:         projection&nbsp;</li>
  701: </ul>
  702: 
  703: <hr>
  704: 
  705: <h2><a name="running"></a><font color="#00006A">Running Imach
  706: with this example</font></h2>
  707: 
   708: <p>We assume that you typed in your <a href="biaspar.imach">1st_example
   709: parameter file</a> as explained <a href="#biaspar">above</a>. 
   710: 
   711: To run the program you should either:</p>
  713: 
  714: <ul>
  715:     <li>click on the imach.exe icon and enter the name of the
  716:         parameter file which is for example <a
  717:         href="C:\usr\imach\mle\biaspar.imach">C:\usr\imach\mle\biaspar.imach</a>
  718:     </li>
   719:     <li>You can also locate the biaspar.imach icon in <a
   720:         href="C:\usr\imach\mle">C:\usr\imach\mle</a> with your
   721:         mouse and drag it onto the imach window. </li>
   722:     <li>With the latest versions (0.7 and higher), if you set up Windows
   723:         to recognise the &quot;.imach&quot; extension, you
   724:         can right-click the biaspar.imach icon and either edit
   725:         the parameter file with Notepad or execute it with imach. </li>
  727: </ul>
  728: 
   729: <p>The time to converge depends on the step unit that you used (1
   730: month is CPU-intensive), on the number of cases, and on the
   731: number of variables.</p>
   732: 
   733: 
   734: <p>The program outputs many files. Most of them
   735: will be plotted for better understanding.</p>
   736: 
  738: 
  739: <hr>
  740: 
  741: <h2><a name="output"><font color="#00006A">Output of the program
  742: and graphs</font> </a></h2>
  743: 
   744: <p>Once the optimization is finished, some graphics can be made
   745: with a grapher. We use Gnuplot, which is a copyrighted but freely
   746: distributed interactive plotting program. A gnuplot reference
   747: manual is available <a href="http://www.gnuplot.info/">here</a>. <br>
   748: When the run is finished, the user should enter a character
   749: to choose plotting and output editing. <br>
   750: These characters are:<br>
  751: </p>
  752: 
  753: <ul>
   754:     <li>'c' to restart the program from the beginning.</li>
   755:     <li>'e' opens the <a href="biaspar.htm"><strong>biaspar.htm</strong></a>
   756:         file to edit the output files and graphs. </li>
   757:     <li>'g' to plot the graphs again</li>
   758:     <li>'q' to quit.</li>
  759: </ul>
  760: 
  761: <h5><font size="4"><strong>Results files </strong></font><br>
  762: <br>
  763: <font color="#EC5E5E" size="3"><strong>- </strong></font><a
  764: name="Observed prevalence in each state"><font color="#EC5E5E"
  765: size="3"><strong>Observed prevalence in each state</strong></font></a><font
  766: color="#EC5E5E" size="3"><strong> (and at first pass)</strong></font><b>:
  767: </b><a href="prbiaspar.txt"><b>prbiaspar.txt</b></a><br>
  768: </h5>
  769: 
  770: <p>The first line is the title and displays each field of the
   771: file. The first column is age. Fields 2 and 6 are the
   772: proportions of individuals in states 1 and 2 respectively, as
   773: observed during the first exam. The other fields are the numbers of
   774: people in states 1, 2 or more. The number of columns increases if
   775: the number of states is higher than 2.<br>
  776: The header of the file is </p>
  777: 
  778: <pre># Age Prev(1) N(1) N Age Prev(2) N(2) N
  779: 70 1.00000 631 631 70 0.00000 0 631
  780: 71 0.99681 625 627 71 0.00319 2 627 
  781: 72 0.97125 1115 1148 72 0.02875 33 1148 </pre>
  782: 
   783: <p>This means that at age 70, the prevalence in state 1 is 1.00000
   784: and in state 2 is 0.00000. At age 71 the number of individuals in
  785: state 1 is 625 and in state 2 is 2, hence the total number of
  786: people aged 71 is 625+2=627. <br>
  787: </p>
  788: 
  789: <h5><font color="#EC5E5E" size="3"><b>- Estimated parameters and
  790: covariance matrix</b></font><b>: </b><a href="rbiaspar.txt"><b>rbiaspar.imach</b></a></h5>
  791: 
  792: <p>This file contains all the maximisation results: </p>
  793: 
  794: <pre> -2 log likelihood= 21660.918613445392
  795:  Estimated parameters: a12 = -12.290174 b12 = 0.092161 
  796:                        a13 = -9.155590  b13 = 0.046627 
  797:                        a21 = -2.629849  b21 = -0.022030 
  798:                        a23 = -7.958519  b23 = 0.042614  
  799:  Covariance matrix: Var(a12) = 1.47453e-001
  800:                     Var(b12) = 2.18676e-005
  801:                     Var(a13) = 2.09715e-001
  802:                     Var(b13) = 3.28937e-005  
  803:                     Var(a21) = 9.19832e-001
  804:                     Var(b21) = 1.29229e-004
  805:                     Var(a23) = 4.48405e-001
  806:                     Var(b23) = 5.85631e-005 
  807:  </pre>
  808: 
  809: <p>By substitution of these parameters in the regression model,
  810: we obtain the elementary transition probabilities:</p>
  811: 
  812: <p><img src="pebiaspar1.gif" width="400" height="300"></p>
  813: 
  814: <h5><font color="#EC5E5E" size="3"><b>- Transition probabilities</b></font><b>:
  815: </b><a href="pijrbiaspar.txt"><b>pijrbiaspar.txt</b></a></h5>
  816: 
   817: <p>Here are the transition probabilities Pij(x, x+nh) where nh
   818: is a multiple of 2 years. The first column is the starting age x
  819: (from age 50 to 100), the second is age (x+nh) and the others are
  820: the transition probabilities p11, p12, p13, p21, p22, p23. For
  821: example, line 5 of the file is: </p>
  822: 
  823: <pre> 100 106 0.02655 0.17622 0.79722 0.01809 0.13678 0.84513 </pre>
  824: 
  825: <p>and this means: </p>
  826: 
  827: <pre>p11(100,106)=0.02655
  828: p12(100,106)=0.17622
  829: p13(100,106)=0.79722
  830: p21(100,106)=0.01809
  831: p22(100,106)=0.13678
   832: p23(100,106)=0.84513 </pre>
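
<p>Since death (state 3) is absorbing, the three probabilities out of
each starting state sum to one, which gives a quick check on such a
line: 0.02655 + 0.17622 + 0.79722 = 0.99999 and 0.01809 + 0.13678 +
0.84513 = 1.00000, up to rounding.</p>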
  833: 
  834: <h5><font color="#EC5E5E" size="3"><b>- </b></font><a
  835: name="Stationary prevalence in each state"><font color="#EC5E5E"
  836: size="3"><b>Stationary prevalence in each state</b></font></a><b>:
  837: </b><a href="plrbiaspar.txt"><b>plrbiaspar.txt</b></a></h5>
  838: 
  839: <pre>#Prevalence
  840: #Age 1-1 2-2
  841: 
  842: #************ 
  843: 70 0.90134 0.09866
  844: 71 0.89177 0.10823 
  845: 72 0.88139 0.11861 
  846: 73 0.87015 0.12985 </pre>
  847: 
   848: <p>At age 70 the stationary prevalence is 0.90134 in state 1 and
   849: 0.09866 in state 2. This stationary prevalence differs from the
   850: observed prevalence, and this is the key point. The observed prevalence
   851: at age 70 results from the incidence of disability, incidence of
   852: recovery and mortality which occurred in the past of the cohort.
   853: The stationary prevalence results from a simulation with current
   854: incidences and mortality (estimated from this cross-longitudinal
   855: survey). It is the best predictive value of the prevalence in the
   856: future if &quot;nothing changes in the future&quot;. This is
   857: exactly what demographers do with a life table. Life expectancy
   858: is the expected mean time to survive if the observed mortality rates
   859: (incidence of mortality) &quot;remain constant&quot; in the
   860: future. </p>
  861: 
  862: <h5><font color="#EC5E5E" size="3"><b>- Standard deviation of
  863: stationary prevalence</b></font><b>: </b><a
  864: href="vplrbiaspar.txt"><b>vplrbiaspar.txt</b></a></h5>
  865: 
   866: <p>The stationary prevalence has to be compared with the observed
   867: prevalence by age. But both are statistical estimates and
   868: subject to stochastic errors due to the size of the sample, the
   869: design of the survey and, for the stationary prevalence, to the
   870: model used and fitted. It is possible to compute the standard
   871: deviation of the stationary prevalence at each age.</p>
  872: 
  873: <h5><font color="#EC5E5E" size="3">-Observed and stationary
   874: prevalence in state (2=disabled) with confidence interval</font>:<b>
  875: </b><a href="vbiaspar21.htm"><b>vbiaspar21.gif</b></a></h5>
  876: 
  877: <p>This graph exhibits the stationary prevalence in state (2)
  878: with the confidence interval in red. The green curve is the
  879: observed prevalence (or proportion of individuals in state (2)).
  880: Without discussing the results (it is not the purpose here), we
  881: observe that the green curve is rather below the stationary
  882: prevalence. It suggests an increase of the disability prevalence
  883: in the future.</p>
  884: 
  885: <p><img src="vbiaspar21.gif" width="400" height="300"></p>
  886: 
  887: <h5><font color="#EC5E5E" size="3"><b>-Convergence to the
  888: stationary prevalence of disability</b></font><b>: </b><a
  889: href="pbiaspar11.gif"><b>pbiaspar11.gif</b></a><br>
  890: <img src="pbiaspar11.gif" width="400" height="300"> </h5>
  891: 
   892: <p>This graph plots the conditional transition probabilities from
   893: an initial state (1=healthy in red at the bottom, or 2=disabled in
   894: green on top) at age <em>x </em>to the final state 2=disabled<em> </em>at
   895: age <em>x+h. </em>Conditional means conditional on being alive
   896: at age <em>x+h</em>, which is <i>hP12x</i> + <em>hP22x</em>. The
   897: curves <i>hP12x/(hP12x</i> + <em>hP22x) </em>and <i>hP22x/(hP12x</i>
   898: + <em>hP22x) </em>converge with <em>h</em> to the <em>stationary
   899: prevalence of disability</em>. In order to get the stationary
   900: prevalence at age 70 we should start the process at an earlier
   901: age, e.g. 50. If the disability state is defined by severe
   902: disability criteria with only a small chance of recovery, then the
   903: incidence of recovery is low and the time to convergence is
   904: probably longer. But we do not have enough experience yet.</p>
  905: 
  906: <h5><font color="#EC5E5E" size="3"><b>- Life expectancies by age
  907: and initial health status with standard deviation</b></font><b>: </b><a
  908: href="erbiaspar.txt"><b>erbiaspar.txt</b></a></h5>
  909: 
  910: <pre># Health expectancies 
  911: # Age 1-1 (SE) 1-2 (SE) 2-1 (SE) 2-2 (SE)
  912: 70 10.4171 (0.1517)    3.0433 (0.4733)    5.6641 (0.1121)    5.6907 (0.3366)
  913: 71 9.9325 (0.1409)    3.0495 (0.4234)    5.2627 (0.1107)    5.6384 (0.3129)
  914: 72 9.4603 (0.1319)    3.0540 (0.3770)    4.8810 (0.1099)    5.5811 (0.2907)
  915: 73 9.0009 (0.1246)    3.0565 (0.3345)    4.5188 (0.1098)    5.5187 (0.2702)
  916: </pre>
  917: 
  918: <pre>For example 70 10.4171 (0.1517) 3.0433 (0.4733) 5.6641 (0.1121) 5.6907 (0.3366) means:
  919: e11=10.4171 e12=3.0433 e21=5.6641 e22=5.6907 </pre>
  920: 
  921: <pre><img src="expbiaspar21.gif" width="400" height="300"><img
  922: src="expbiaspar11.gif" width="400" height="300"></pre>
  923: 
  924: <p>For example, life expectancy of a healthy individual at age 70
  925: is 10.42 in the healthy state and 3.04 in the disability state
   926: (=13.46 years). If he were disabled at age 70, his life expectancy
  927: will be shorter, 5.66 in the healthy state and 5.69 in the
  928: disability state (=11.35 years). The total life expectancy is a
  929: weighted mean of both, 13.46 and 11.35; weight is the proportion
  930: of people disabled at age 70. In order to get a pure period index
  931: (i.e. based only on incidences) we use the <a
  932: href="#Stationary prevalence in each state">computed or
  933: stationary prevalence</a> at age 70 (i.e. computed from
  934: incidences at earlier ages) instead of the <a
  935: href="#Observed prevalence in each state">observed prevalence</a>
  936: (for example at first exam) (<a href="#Health expectancies">see
  937: below</a>).</p>
  938: 
  939: <h5><font color="#EC5E5E" size="3"><b>- Variances of life
  940: expectancies by age and initial health status</b></font><b>: </b><a
  941: href="vrbiaspar.txt"><b>vrbiaspar.txt</b></a></h5>
  942: 
  943: <p>For example, the covariances of life expectancies Cov(ei,ej)
  944: at age 50 are (line 3) </p>
  945: 
  946: <pre>   Cov(e1,e1)=0.4776  Cov(e1,e2)=0.0488=Cov(e2,e1)  Cov(e2,e2)=0.0424</pre>
  947: 
  948: <h5><font color="#EC5E5E" size="3"><b>-Variances of one-step
  949: probabilities </b></font><b>: </b><a href="probrbiaspar.txt"><b>probrbiaspar.txt</b></a></h5>
  950: 
  951: <p>For example, at age 65</p>
  952: 
  953: <pre>   p11=9.960e-001 standard deviation of p11=2.359e-004</pre>
  954: 
  955: <h5><font color="#EC5E5E" size="3"><b>- </b></font><a
  956: name="Health expectancies"><font color="#EC5E5E" size="3"><b>Health
  957: expectancies</b></font></a><font color="#EC5E5E" size="3"><b>
  958: with standard errors in parentheses</b></font><b>: </b><a
  959: href="trbiaspar.txt"><font face="Courier New"><b>trbiaspar.txt</b></font></a></h5>
  960: 
  961: <pre>#Total LEs with variances: e.. (std) e.1 (std) e.2 (std) </pre>
  962: 
  963: <pre>70 13.26 (0.22) 9.95 (0.20) 3.30 (0.14) </pre>
  964: 
   965: <p>Thus, at age 70 the total life expectancy, e..=13.26 years, is
   966: the weighted mean of e1.=13.46 and e2.=11.35, weighted by the stationary
   967: prevalences at age 70, which are 0.90134 in state 1 and 0.09866 in
   968: state 2, respectively (the sum is equal to one). e.1=9.95 is the
  969: Disability-free life expectancy at age 70 (it is again a weighted
  970: mean of e11 and e21). e.2=3.30 is also the life expectancy at age
  971: 70 to be spent in the disability state.</p>
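
<p>As a quick check with the rounded values above:</p>

<pre>e.. = 0.90134 * 13.46 + 0.09866 * 11.35 = 13.25</pre>

<p>which matches e..=13.26 up to rounding.</p>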
  972: 
  973: <h5><font color="#EC5E5E" size="3"><b>-Total life expectancy by
  974: age and health expectancies in states (1=healthy) and (2=disable)</b></font><b>:
  975: </b><a href="ebiaspar1.gif"><b>ebiaspar1.gif</b></a></h5>
  976: 
  977: <p>This figure represents the health expectancies and the total
   978: life expectancy with the confidence interval as a dashed curve. </p>
  979: 
  980: <pre>        <img src="ebiaspar1.gif" width="400" height="300"></pre>
  981: 
  982: <p>Standard deviations (obtained from the information matrix of
  983: the model) of these quantities are very useful.
  984: Cross-longitudinal surveys are costly and do not involve huge
  985: samples, generally a few thousands; therefore it is very
  986: important to have an idea of the standard deviation of our
  987: estimates. It has been a big challenge to compute the Health
   988: Expectancy standard deviations. Don't be confused: life expectancy
  989: is, as any expected value, the mean of a distribution; but here
  990: we are not computing the standard deviation of the distribution,
  991: but the standard deviation of the estimate of the mean.</p>
  992: 
  993: <p>Our health expectancies estimates vary according to the sample
  994: size (and the standard deviations give confidence intervals of
  995: the estimate) but also according to the model fitted. Let us
  996: explain it in more details.</p>
  997: 
   998: <p>Choosing a model means at least two kinds of choices. First we
   999: have to decide the number of disability states. Second we have to
  1000: design, within the logit model family, the model: variables,
  1001: covariates, confounding factors etc. to be included.</p>
 1002: 
  1003: <p>The more disability states we have, the better our demographic
  1004: description of the disability process, but the smaller the number of
  1005: transitions between each state and the higher the noise in the
  1006: measurement. We do not have enough experience with the various
  1007: models to summarize the advantages and disadvantages, but it is
  1008: important to say that even if we had huge and unbiased samples,
  1009: the total life expectancy computed from a cross-longitudinal
  1010: survey varies with the number of states. If we define only two
  1011: states, alive or dead, we find the usual life expectancy where it
  1012: is assumed that at each age, people are at the same risk of dying.
  1013: If we differentiate the alive state into healthy and
  1014: disabled, and as the mortality from the disability state is higher
  1015: than the mortality from the healthy state, we are introducing
  1016: heterogeneity in the risk of dying. The total mortality at each
  1017: age is the weighted mean of the mortality in each state by the
  1018: prevalence in each state. Therefore if the proportion of people
  1019: at each age and in each state is different from the stationary
  1020: equilibrium, there is no reason to find the same total mortality
  1021: at a particular age. Life expectancy, even if it is a very useful
  1022: tool, relies on a very strong hypothesis of homogeneity of the
  1023: population. Our main purpose is not to measure differential
  1024: mortality but to measure the expected time in a healthy or
  1025: disability state in order to maximise the former and minimize the
  1026: latter. But the differential in mortality complicates the
  1027: measurement.</p>
 1028: 
  1029: <p>Incidences of disability or recovery are not affected by the
  1030: number of states if these states are independent. But incidence
  1031: estimates depend on the specification of the model. The more
  1032: covariates we add to the logit model the better the model, but
  1033: some covariates are not well measured, and some are confounding
  1034: factors, as in any statistical model. The procedure to &quot;fit
  1035: the best model&quot; is similar to logistic regression, which itself is
  1036: similar to regression analysis. We haven't gone so far yet because
  1037: we also have a severe limitation, which is the speed of
  1038: convergence. On a Pentium III, 500 MHz, even the simplest model,
  1039: estimated by month on 8,000 people, may take 4 hours to converge.
  1040: Also, the program is not yet a statistical package which permits
  1041: simple specification of the variables and the model to take into
  1042: account in the maximisation. The current program only allows
  1043: adding simple variables like age+sex or age+sex+age*sex, but will
  1044: never be general enough. What is to be remembered is that the
  1045: incidences or probabilities of change from one state to another are
  1046: affected by the variables specified in the model.</p>
 1047: 
  1048: <p>Also, the age range of the people interviewed has a link with
  1049: the age range of the life expectancy which can be estimated by
  1050: extrapolation. If your sample ranges from age 70 to 95, you can
  1051: clearly estimate a life expectancy at age 70 and trust your
  1052: confidence interval, which is mostly based on your sample size,
  1053: but if you want to estimate the life expectancy at age 50, you
  1054: have to rely on your model; fitting a logistic model on an age
  1055: range of 70-95 and estimating probabilities of transition out of
  1056: this age range, say at age 50, is very dangerous. At least you
  1057: should remember that the confidence intervals given by the
  1058: standard deviations of the health expectancies are under the
  1059: strong assumption that your model is the 'true model', which is
  1060: probably not the case.</p>
 1061: 
 1062: <h5><font color="#EC5E5E" size="3"><b>- Copy of the parameter
 1063: file</b></font><b>: </b><a href="orbiaspar.txt"><b>orbiaspar.txt</b></a></h5>
 1064: 
 1065: <p>This copy of the parameter file can be useful to re-run the
 1066: program while saving the old output files. </p>
 1067: 
 1068: <h5><font color="#EC5E5E" size="3"><b>- Prevalence forecasting</b></font><b>:
 1069: </b><a href="frbiaspar.txt"><b>frbiaspar.txt</b></a></h5>
 1070: 
  1071: <p style="TEXT-ALIGN: justify">First,
  1072: we have estimated the observed prevalence between 1/1/1984 and
  1073: 1/6/1988. The mean date of interview (weighted average of the
  1074: interviews performed between 1/1/1984 and 1/6/1988) is estimated
  1075: to be 13/9/1985, as written at the top of the file. Then we
  1076: forecast the probability of being in each state. </p>
 1078: 
  1079: <p style="TEXT-ALIGN: justify">For example,
  1080: at date 1/1/1989: </p>
 1082: 
 1083: <pre class="MsoNormal"># StartingAge FinalAge P.1 P.2 P.3
 1084: # Forecasting at date 1/1/1989
 1085:   73 0.807 0.078 0.115</pre>
 1086: 
  1087: <p style="TEXT-ALIGN: justify">Since
  1088: the minimum age is 70 on 13/9/1985, the youngest forecasted
  1089: age is 73. This means that a person aged 70 on 13/9/1985
  1090: has a probability of being in state 1 of 0.807 at age 73 on 1/1/1989.
  1091: Similarly, the probability of being in state 2 is 0.078 and the
  1092: probability of dying is 0.115. Then, on 1/1/1989, the
  1093: prevalence of disability at age 73 is estimated to be 0.088.</p>
 1095: 
 1096: <h5><font color="#EC5E5E" size="3"><b>- Population forecasting</b></font><b>:
 1097: </b><a href="poprbiaspar.txt"><b>poprbiaspar.txt</b></a></h5>
 1098: 
 1099: <pre># Age P.1 P.2 P.3 [Population]
 1100: # Forecasting at date 1/1/1989 
 1101: 75 572685.22 83798.08 
 1102: 74 621296.51 79767.99 
 1103: 73 645857.70 69320.60 </pre>
 1104: 
  1105: <pre># Forecasting at date 1/1/1990 
 1106: 76 442986.68 92721.14 120775.48
 1107: 75 487781.02 91367.97 121915.51
 1108: 74 512892.07 85003.47 117282.76 </pre>
 1109: 
 1110: <p>From the population file, we estimate the number of people in
 1111: each state. At age 73, 645857 persons are in state 1 and 69320
  1112: are in state 2. One year later, 512892 are still in state 1,
 1113: 85003 are in state 2 and 117282 died before 1/1/1990.</p>
 1114: 
 1115: <hr>
 1116: 
 1117: <h2><a name="example"></a><font color="#00006A">Trying an example</font></h2>
 1118: 
<p>Since you now know how to run the program, it is time to test it
on your own computer. Try it, for example, on the parameter file <a
href="..\mytry\imachpar.imach">imachpar.imach</a>, which is a copy
of <font size="2" face="Courier New">mypar.imach</font> included
in the imach subdirectory <font size="2" face="Courier New">mytry</font>.
Edit it to change the name of the data file to <font size="2"
face="Courier New">..\data\mydata.txt</font> if you don't want to
copy it into the same directory. The file <font face="Courier New">mydata.txt</font>
is a smaller file of 3,000 people but still with 4 waves. </p>
 1128: 
<p>Click on the imach.exe icon to open a window, and answer the
question: '<strong>Enter the parameter file name:</strong>'</p>
 1131: 
 1132: <table border="1">
 1133:     <tr>
 1134:         <td width="100%"><strong>IMACH, Version 0.8a</strong><p><strong>Enter
 1135:         the parameter file name: ..\mytry\imachpar.imach</strong></p>
 1136:         </td>
 1137:     </tr>
 1138: </table>
 1139: 
<p>Most of the data files and image files generated will include
the string 'imachpar' in their name. The running time is about 2-3
minutes on a Pentium III. If the execution worked correctly, the
output files are created in the current directory and should be
the same as the mypar files initially included in the directory <font
size="2" face="Courier New">mytry</font>.</p>
 1146: 
 1147: <ul>
 1148:     <li><pre><u>Output on the screen</u> The output screen looks like <a
 1149: href="imachrun.LOG">this Log file</a>
 1150: #
 1151: 
 1152: title=MLE datafile=..\data\mydata.txt lastobs=3000 firstpass=1 lastpass=3
 1153: ftol=1.000000e-008 stepm=24 ncovcol=2 nlstate=2 ndeath=1 maxwav=4 mle=1 weight=0</pre>
 1154:     </li>
 1155:     <li><pre>Total number of individuals= 2965, Agemin = 70.00, Agemax= 100.92
 1156: 
 1157: Warning, no any valid information for:126 line=126
 1158: Warning, no any valid information for:2307 line=2307
 1159: Delay (in months) between two waves Min=21 Max=51 Mean=24.495826
<font face="Times New Roman">These lines give some warnings on the data file and also some raw statistics on the frequencies of transitions (a short sketch after this list shows how the percentages are derived from these counts).</font>
 1161: Age 70 1.=230 loss[1]=3.5% 2.=16 loss[2]=12.5% 1.=222 prev[1]=94.1% 2.=14
 1162:  prev[2]=5.9% 1-1=8 11=200 12=7 13=15 2-1=2 21=6 22=7 23=1
 1163: Age 102 1.=0 loss[1]=NaNQ% 2.=0 loss[2]=NaNQ% 1.=0 prev[1]=NaNQ% 2.=0 </pre>
 1164:     </li>
 1165: </ul>
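
<p>The percentages in the frequency line for age 70 appear to be derived
from the raw counts as in the small Python sketch below (reading
&quot;1-1=8&quot; and &quot;2-1=2&quot; as transitions towards an unknown
state is our interpretation of the output):</p>

<pre># Check, from the counts printed above for age 70, how the percentages
# appear to be derived (illustrative reading of the screen output).
n1, n2 = 230, 16                  # individuals observed in state 1 / state 2
to_unknown = {1: 8, 2: 2}         # "1-1=8", "2-1=2": no valid follow-up
valid = {1: 200 + 7 + 15,         # transitions 11 + 12 + 13
         2: 6 + 7 + 1}            # transitions 21 + 22 + 23

loss1 = to_unknown[1] / n1                   # 0.035 -> loss[1]=3.5%
loss2 = to_unknown[2] / n2                   # 0.125 -> loss[2]=12.5%
prev1 = valid[1] / (valid[1] + valid[2])     # 0.941 -> prev[1]=94.1%
prev2 = valid[2] / (valid[1] + valid[2])     # 0.059 -> prev[2]=5.9%
print(loss1, loss2, prev1, prev2)</pre>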
 1166: 
 1167: <p>&nbsp;</p>
 1168: 
 1169: <ul>
    <li>Maximisation with the Powell algorithm. 8 directions are
        given, corresponding to the 8 parameters. Reaching
        convergence can take rather long. (A short numerical sketch
        after this list shows how the estimated parameters printed
        below translate into one-step transition probabilities.)<br>
 1173:         <font size="1" face="Courier New"><br>
 1174:         Powell iter=1 -2*LL=11531.405658264877 1 0.000000000000 2
 1175:         0.000000000000 3<br>
 1176:         0.000000000000 4 0.000000000000 5 0.000000000000 6
 1177:         0.000000000000 7 <br>
 1178:         0.000000000000 8 0.000000000000<br>
 1179:         1..........2.................3..........4.................5.........<br>
 1180:         6................7........8...............<br>
 1181:         Powell iter=23 -2*LL=6744.954108371555 1 -12.967632334283
 1182:         <br>
 1183:         2 0.135136681033 3 -7.402109728262 4 0.067844593326 <br>
 1184:         5 -0.673601538129 6 -0.006615504377 7 -5.051341616718 <br>
 1185:         8 0.051272038506<br>
 1186:         1..............2...........3..............4...........<br>
 1187:         5..........6................7...........8.........<br>
 1188:         #Number of iterations = 23, -2 Log likelihood =
 1189:         6744.954042573691<br>
 1190:         # Parameters<br>
 1191:         12 -12.966061 0.135117 <br>
 1192:         13 -7.401109 0.067831 <br>
 1193:         21 -0.672648 -0.006627 <br>
 1194:         23 -5.051297 0.051271 </font><br>
 1195:         </li>
 1196:     <li><pre><font size="2">Calculation of the hessian matrix. Wait...
 1197: 12345678.12.13.14.15.16.17.18.23.24.25.26.27.28.34.35.36.37.38.45.46.47.48.56.57.58.67.68.78
 1198: 
 1199: Inverting the hessian to get the covariance matrix. Wait...
 1200: 
 1201: #Hessian matrix#
 1202: 3.344e+002 2.708e+004 -4.586e+001 -3.806e+003 -1.577e+000 -1.313e+002 3.914e-001 3.166e+001 
 1203: 2.708e+004 2.204e+006 -3.805e+003 -3.174e+005 -1.303e+002 -1.091e+004 2.967e+001 2.399e+003 
 1204: -4.586e+001 -3.805e+003 4.044e+002 3.197e+004 2.431e-002 1.995e+000 1.783e-001 1.486e+001 
 1205: -3.806e+003 -3.174e+005 3.197e+004 2.541e+006 2.436e+000 2.051e+002 1.483e+001 1.244e+003 
 1206: -1.577e+000 -1.303e+002 2.431e-002 2.436e+000 1.093e+002 8.979e+003 -3.402e+001 -2.843e+003 
 1207: -1.313e+002 -1.091e+004 1.995e+000 2.051e+002 8.979e+003 7.420e+005 -2.842e+003 -2.388e+005 
 1208: 3.914e-001 2.967e+001 1.783e-001 1.483e+001 -3.402e+001 -2.842e+003 1.494e+002 1.251e+004 
 1209: 3.166e+001 2.399e+003 1.486e+001 1.244e+003 -2.843e+003 -2.388e+005 1.251e+004 1.053e+006 
 1210: # Scales
 1211: 12 1.00000e-004 1.00000e-006
 1212: 13 1.00000e-004 1.00000e-006
 1213: 21 1.00000e-003 1.00000e-005
 1214: 23 1.00000e-004 1.00000e-005
 1215: # Covariance
 1216:   1 5.90661e-001
 1217:   2 -7.26732e-003 8.98810e-005
 1218:   3 8.80177e-002 -1.12706e-003 5.15824e-001
 1219:   4 -1.13082e-003 1.45267e-005 -6.50070e-003 8.23270e-005
 1220:   5 9.31265e-003 -1.16106e-004 6.00210e-004 -8.04151e-006 1.75753e+000
 1221:   6 -1.15664e-004 1.44850e-006 -7.79995e-006 1.04770e-007 -2.12929e-002 2.59422e-004
 1222:   7 1.35103e-003 -1.75392e-005 -6.38237e-004 7.85424e-006 4.02601e-001 -4.86776e-003 1.32682e+000
 1223:   8 -1.82421e-005 2.35811e-007 7.75503e-006 -9.58687e-008 -4.86589e-003 5.91641e-005 -1.57767e-002 1.88622e-004
 1224: # agemin agemax for lifexpectancy, bage fage (if mle==0 ie no data nor Max likelihood).
 1225: 
 1226: 
 1227: agemin=70 agemax=100 bage=50 fage=100
 1228: Computing prevalence limit: result on file 'plrmypar.txt' 
 1229: Computing pij: result on file 'pijrmypar.txt' 
 1230: Computing Health Expectancies: result on file 'ermypar.txt' 
 1231: Computing Variance-covariance of DFLEs: file 'vrmypar.txt' 
 1232: Computing Total LEs with variances: file 'trmypar.txt' 
 1233: Computing Variance-covariance of Prevalence limit: file 'vplrmypar.txt' 
 1234: End of Imach
 1235: </font></pre>
 1236:     </li>
 1237: </ul>
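
<p>To make the meaning of these estimates more concrete, here is a small
Python sketch (not part of IMaCh) turning the parameters printed above
(12 -12.966061 0.135117, 13 -7.401109 0.067831, ...) into one-step
transition probabilities, assuming the multinomial logit specification
ln(p<sub>ij</sub>/p<sub>ii</sub>) = a<sub>ij</sub> + b<sub>ij</sub>*age
that IMaCh fits to the one-step probabilities; with stepm=24,
&quot;one step&quot; means 24 months.</p>

<pre># Sketch only: one-step transition probabilities from the estimated
# parameters, assuming the multinomial logit model
#   ln(p_ij / p_ii) = a_ij + b_ij * age.
from math import exp

params = {(1, 2): (-12.966061, 0.135117),
          (1, 3): (-7.401109, 0.067831),
          (2, 1): (-0.672648, -0.006627),
          (2, 3): (-5.051297, 0.051271)}

def step_probabilities(age, i):
    """Probabilities of ending the 24-month step in each state, from state i."""
    odds = {j: exp(a + b * age) for (k, j), (a, b) in params.items() if k == i}
    denom = 1.0 + sum(odds.values())
    probs = {j: o / denom for j, o in odds.items()}
    probs[i] = 1.0 / denom          # probability of staying in state i
    return probs

print(step_probabilities(85, 1))    # starting healthy (state 1) at age 85
print(step_probabilities(85, 2))    # starting disabled (state 2) at age 85</pre>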
 1238: 
<p><font size="3">Once the run is finished, the program asks
for a character:</font></p>
 1241: 
 1242: <table border="1">
 1243:     <tr>
 1244:         <td width="100%"><strong>Type e to edit output files, g
 1245:         to graph again, c to start again, and q for exiting:</strong></td>
 1246:     </tr>
 1247: </table>
 1248: 
 1249: <p><font size="3">First you should enter <strong>e </strong>to
 1250: edit the master file mypar.htm. </font></p>
 1251: 
 1252: <ul>
 1253:     <li><u>Outputs files</u> <br>
 1254:         <br>
 1255:         - Copy of the parameter file: <a href="ormypar.txt">ormypar.txt</a><br>
 1256:         - Gnuplot file name: <a href="mypar.gp.txt">mypar.gp.txt</a><br>
 1257:         - Observed prevalence in each state: <a
 1258:         href="prmypar.txt">prmypar.txt</a> <br>
 1259:         - Stationary prevalence in each state: <a
 1260:         href="plrmypar.txt">plrmypar.txt</a> <br>
 1261:         - Transition probabilities: <a href="pijrmypar.txt">pijrmypar.txt</a><br>
 1262:         - Life expectancies by age and initial health status
 1263:         (estepm=24 months): <a href="ermypar.txt">ermypar.txt</a>
 1264:         <br>
 1265:         - Parameter file with estimated parameters and the
 1266:         covariance matrix: <a href="rmypar.txt">rmypar.txt</a> <br>
 1267:         - Variance of one-step probabilities: <a
 1268:         href="probrmypar.txt">probrmypar.txt</a> <br>
 1269:         - Variances of life expectancies by age and initial
 1270:         health status (estepm=24 months): <a href="vrmypar.txt">vrmypar.txt</a><br>
 1271:         - Health expectancies with their variances: <a
 1272:         href="trmypar.txt">trmypar.txt</a> <br>
 1273:         - Standard deviation of stationary prevalences: <a
 1274:         href="vplrmypar.txt">vplrmypar.txt</a> <br>
        No population forecast: popforecast = 0 (instead of 1), or
        stepm = 24 (instead of 1), or a model other than '.' (i.e. with covariates)<br>
 1277:         <br>
 1278:         </li>
 1279:     <li><u>Graphs</u> <br>
 1280:         <br>
 1281:         -<a href="../mytry/pemypar1.gif">One-step transition
 1282:         probabilities</a><br>
 1283:         -<a href="../mytry/pmypar11.gif">Convergence to the
 1284:         stationary prevalence</a><br>
        -<a href="..\mytry\vmypar11.gif">Observed and stationary
        prevalence in state (1) with the confidence interval</a> <br>
        -<a href="..\mytry\vmypar21.gif">Observed and stationary
        prevalence in state (2) with the confidence interval</a> <br>
 1289:         -<a href="..\mytry\expmypar11.gif">Health life
 1290:         expectancies by age and initial health state (1)</a> <br>
 1291:         -<a href="..\mytry\expmypar21.gif">Health life
 1292:         expectancies by age and initial health state (2)</a> <br>
 1293:         -<a href="..\mytry\emypar1.gif">Total life expectancy by
 1294:         age and health expectancies in states (1) and (2).</a> </li>
 1295: </ul>
 1296: 
<p>This software has been partly funded by <a
href="http://euroreves.ined.fr">Euro-REVES</a>, a concerted
action of the European Union. It will be copyrighted
identically to a GNU software product, i.e. the program and software
can be distributed freely for non-commercial use. Sources are not
widely distributed today. You can get them by writing to us with a
simple justification (name, email, institute) at <a
href="mailto:brouard@ined.fr">brouard@ined.fr</a> and <a
href="mailto:lievre@ined.fr">lievre@ined.fr</a>.</p>
 1306: 
<p>The latest version (0.8a, May 2002) can be accessed at <a
href="http://euroreves.ined.fr/imach">http://euroreves.ined.fr/imach</a><br>
 1309: </p>
 1310: </body>
 1311: </html>
