
## Estimation of adjusted relative risks in log-binomial regression

Published: 6 September 2019



**Problem statement:** For binary outcome data, the relative risk (RR) is an essential measure of association and can be estimated directly in prospective studies. The odds ratio (OR), by contrast, can substantially overstate the risk, especially when the outcome is common; in such cases the OR should be avoided and the RR used instead. The log-binomial model is a straightforward statistical approach to risk adjustment and estimation, and its coefficients are easy to interpret as log relative risks. However, maximizing the log-binomial log-likelihood may fail because of numerical instability, implicit parameter constraints, or naïve starting values, which can dramatically increase the number of iterations required and lead to convergence failure. Failure of the maximum likelihood algorithm has been observed for small data sets in standard software such as IBM SPSS or SAS [1]. "logbin" is an R package that implements a stable Expectation-Maximization (EM) algorithm to tackle these issues, but it is extremely slow.
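How far the OR can drift from the RR for a common outcome is easy to see in a small numeric example (hypothetical counts, not data from this study):

```python
# Illustrative 2x2 table for a common outcome (hypothetical numbers):
# exposed:   80 events among 100 subjects -> risk 0.80
# unexposed: 40 events among 100 subjects -> risk 0.40
risk_exposed = 80 / 100
risk_unexposed = 40 / 100

rr = risk_exposed / risk_unexposed                   # relative risk
odds_exposed = risk_exposed / (1 - risk_exposed)
odds_unexposed = risk_unexposed / (1 - risk_unexposed)
or_ = odds_exposed / odds_unexposed                  # odds ratio

print(round(rr, 10))   # 2.0
print(round(or_, 10))  # 6.0
```

With 80% versus 40% incidence, the exposure doubles the risk (RR = 2), yet the OR reports a six-fold association; only for rare outcomes do the two measures approximately agree.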

**Approach:** In this study, we developed a modified Newton-type algorithm for solving the maximum likelihood estimation problem under linear constraints. We also imposed a new system of linear inequality constraints based on the number of covariates. In this approach, the log-likelihood of the log-binomial regression model is maximized sequentially: the algorithm proceeds iteratively, generating a sequence of estimates that solve the quadratic sub-problems obtained from second-order Taylor approximations and converge under the linear inequality constraints.
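The general idea, though not the authors' implementation, can be sketched with an off-the-shelf sequential quadratic programming solver (scipy's SLSQP), which likewise solves quadratic sub-problems subject to linear inequality constraints. Here the constraints keep every linear predictor negative, so that the fitted probabilities exp(x'β) stay below 1; the data, starting value, and tolerance are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

# Simulated data: intercept plus one binary exposure, true baseline risk 0.2, true RR 2.
rng = np.random.default_rng(1)
n = 500
X = np.column_stack([np.ones(n), rng.binomial(1, 0.5, n)])
beta_true = np.array([np.log(0.2), np.log(2.0)])
y = rng.binomial(1, np.exp(X @ beta_true))

def negloglik(beta):
    # Log-binomial log-likelihood: sum of y*eta + (1-y)*log(1 - exp(eta)).
    eta = np.minimum(X @ beta, -1e-9)  # guard against infeasible iterates
    return -np.sum(y * eta + (1 - y) * np.log1p(-np.exp(eta)))

# Linear inequality constraints x_i' beta <= -eps for all i,
# written in scipy's form g(beta) >= 0.
eps = 1e-6
cons = {"type": "ineq", "fun": lambda b: -X @ b - eps}

beta0 = np.array([np.log(y.mean()), 0.0])  # naive but feasible starting value
res = minimize(negloglik, beta0, constraints=cons, method="SLSQP")
print(np.exp(res.x[1]))  # adjusted RR estimate, close to the true value 2
```

A generic SQP solver does not exploit the structure of the log-binomial likelihood; the modified Newton-type algorithm described above is designed to do exactly that.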

**Simulation design:** For validation and evaluation, a large full-factorial simulation study was conducted to compare the behavior of our model with the other models investigated in this study: FS (Fisher scoring, as applied in the context of generalized linear models (GLMs)), EM, BFGS, and Nelder-Mead. The primary objectives were the assessment of coverage probability, model accuracy, and model bias across many different scenarios (varying number of events, sample size, and number of covariates). The 12 underlying covariates were generated with the copula package in R, using a specified correlation structure between all variables. 140 simulation runs were executed for every scenario; in total, 16,800 data sets were generated and analyzed. The complete simulation study, including random number generation with L'Ecuyer's algorithm, was conducted using parallel processing in R version 3.5. The computational costs (running times) of the five model types were measured as well.
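Covariates with a specified correlation structure can be produced with a Gaussian copula along the following lines (shown in Python with numpy/scipy rather than the R copula package used in the study, and with only three covariates and an exchangeable correlation of 0.3 as illustrative assumptions):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
n, k = 1000, 3
corr = 0.3 * np.ones((k, k)) + 0.7 * np.eye(k)  # exchangeable correlation matrix

# Gaussian copula: correlated normals -> correlated uniforms via the normal CDF.
z = rng.multivariate_normal(np.zeros(k), corr, size=n)
u = norm.cdf(z)

# Transform each uniform column to the desired marginal distribution.
x_binary = (u[:, 0] < 0.5).astype(int)  # Bernoulli(0.5) covariate
x_uniform = u[:, 1]                     # Uniform(0, 1) covariate
x_normal = norm.ppf(u[:, 2])            # standard normal covariate
```

Applying different marginal transforms to the same copula sample is what allows mixed covariate types while preserving the chosen dependence structure across all variables.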

**Results:** Our modified Newton-type algorithm had a higher convergence rate than FS and was significantly faster computationally than the EM algorithm. All results characterizing the behavior of the five algorithms (bias, standard error, coverage probability, and convergence rate, together with Monte Carlo standard errors (MCSE) and confidence intervals) will be shown in the presentation.

The authors declare that they have no competing interests.

The authors declare that an ethics committee vote is not required.

### References

1. Williamson T, Eliasziw M, Fick GH. Log-binomial models: exploring failed convergence. Emerging Themes in Epidemiology. 2013 Dec;10(1):14.