Person: Rubin, Donald
Last Name: Rubin
First Name: Donald
Name: Rubin, Donald
Search Results: 43 results (showing 1 - 10 of 43)
Publication: Multiple Imputation by Ordered Monotone Blocks With Application to the Anthrax Vaccine Research Program (Informa UK Limited, 2014-06-23)
Authors: Baccini, Michela; Mealli, Fabrizia; Zell, Elizabeth; Frangakis, Constantine; Rubin, Donald; Li, Fan
Abstract: Multiple imputation (MI) has become a standard statistical technique for dealing with missing values. The CDC Anthrax Vaccine Research Program (AVRP) dataset created new challenges for MI due to the large number of variables of different types and the limited sample size. A common method for imputing missing data in such complex studies is to specify, for each of J variables with missing values, a univariate conditional distribution given all other variables, and then to draw imputations by iterating over the J conditional distributions. Such fully conditional imputation strategies have the theoretical drawback that the conditional distributions may be incompatible. When the missingness pattern is monotone, a theoretically valid approach is to specify, for each variable with missing values, a conditional distribution given the variables with fewer or the same number of missing values and sequentially draw from these distributions. In this article, we propose the "multiple imputation by ordered monotone blocks" approach, which combines these two basic approaches by decomposing any missingness pattern into a collection of smaller "constructed" monotone missingness patterns, and iterating. We apply this strategy to impute the missing data in the AVRP interim data. Supplemental materials, including all source code and a synthetic example dataset, are available online.

Publication: Comparing Significance Levels of Independent Studies (American Psychological Association, 1979)
Authors: Rosenthal, Robert; Rubin, Donald
Abstract: Methods for comparing two or more statistical significance (p) levels are described; these methods are more rigorous, systematic, and informative than the comparisons that are commonly made by using a significant/not significant dichotomy. Formulas are provided for calculating the significance level of a comparison between two or more p levels.

Publication: Comparing Within- and Between-Subjects Studies (Sage, 1980)
Authors: Rosenthal, Robert; Rubin, Donald
Abstract: Studies employing within-subjects designs may be compared with those employing between-subjects designs in a variety of ways. We discuss and illustrate the comparisons of variabilities, including within-condition variances and precisions as well as the comparisons of means and of mean differences. Our discussion emphasizes the importance of trying to understand the sources of differences.
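The ordered-monotone-blocks method builds on sequential imputation within a monotone pattern. As a hypothetical illustration of that monotone step only (the paper decomposes an arbitrary pattern into such blocks and iterates), here is a minimal Python sketch that imputes continuous variables left to right by regression plus Gaussian noise. The function name and the OLS-plus-noise draw are simplifications for illustration, not the authors' code:

```python
import numpy as np

def impute_monotone(X, rng):
    """Sequentially impute one monotone missingness pattern.

    X: 2-D float array with np.nan for missing entries, columns ordered
    so that column j's missing rows are a subset of column j+1's
    (a monotone pattern). Returns a completed copy.

    Simplified sketch: each incomplete column is imputed from the
    preceding (more complete) columns by OLS plus Gaussian noise,
    a stand-in for a proper Bayesian posterior draw.
    """
    X = X.copy()
    n, p = X.shape
    for j in range(p):
        miss = np.isnan(X[:, j])
        if not miss.any():
            continue
        # Predictors: intercept plus all columns to the left, already completed.
        Z = np.column_stack([np.ones(n), X[:, :j]])
        beta, *_ = np.linalg.lstsq(Z[~miss], X[~miss, j], rcond=None)
        resid = X[~miss, j] - Z[~miss] @ beta
        sigma = resid.std(ddof=Z.shape[1])
        X[miss, j] = Z[miss] @ beta + rng.normal(0.0, sigma, miss.sum())
    return X

rng = np.random.default_rng(0)
# Tiny monotone example: each column is missing wherever the previous one is.
X = np.array([[1.0, 2.1, 3.0],
              [2.0, 3.4, 5.1],
              [3.0, 5.2, 7.2],
              [4.0, 6.8, 9.4],
              [5.0, 8.1, np.nan],
              [6.0, 9.9, np.nan],
              [7.0, np.nan, np.nan],
              [8.0, np.nan, np.nan]])
completed = impute_monotone(X, rng)
```

Repeating the draw M times with different random states yields the M completed datasets that multiple imputation then analyzes and combines.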
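The basic two-study comparison of p levels in the 1979 paper can be sketched directly: each p is converted to a standard normal deviate, and the difference of the two deviates, rescaled by sqrt(2), is again standard normal under the null hypothesis of a common effect. A minimal sketch (the function name is illustrative, and the paper also covers comparisons among more than two studies):

```python
from scipy.stats import norm

def compare_p_levels(p1, p2):
    """Compare two independent one-tailed p values.

    Each p is converted to a standard normal deviate Z; because the
    difference of two independent standard normals has variance 2,
    (Z1 - Z2) / sqrt(2) is standard normal under the null hypothesis
    that the two studies reflect the same underlying effect.
    """
    z1, z2 = norm.isf(p1), norm.isf(p2)
    z_diff = (z1 - z2) / 2 ** 0.5
    return z_diff, 2 * norm.sf(abs(z_diff))  # two-tailed p for the comparison

# One study "significant", one not; the comparison itself need not be.
z, p = compare_p_levels(0.01, 0.30)
```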
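One of the comparisons the 1980 paper discusses, contrasting within-condition variances across the two designs, can be illustrated with an ordinary variance-ratio F test. A hypothetical sketch, not the authors' exact procedure:

```python
import numpy as np
from scipy.stats import f as f_dist

def variance_ratio_test(within_a, within_b):
    """F test comparing within-condition variances of two studies,
    e.g., whether a within-subjects study achieves smaller
    within-condition variance (greater precision) than a
    between-subjects study of the same question."""
    va, vb = np.var(within_a, ddof=1), np.var(within_b, ddof=1)
    F = va / vb
    dfa, dfb = len(within_a) - 1, len(within_b) - 1
    p = 2 * min(f_dist.sf(F, dfa, dfb), f_dist.cdf(F, dfa, dfb))  # two-tailed
    return F, p
```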
Publication: A Simple, General Purpose Display of Magnitude of Experimental Effect (1982)
Authors: Rosenthal, Robert; Rubin, Donald
Abstract: We introduce the binomial effect size display (BESD), which is useful because it is (a) easily understood by researchers, students, and lay persons; (b) widely applicable; and (c) conveniently computed. The BESD displays the change in success rate (e.g., survival rate, improvement rate, etc.) attributable to a new treatment procedure. For example, an r of .32, the average size of the effect of psychotherapy, is said to account for "only 10% of the variance"; however, the BESD shows that this proportion of variance accounted for is equivalent to increasing the success rate from 34% to 66%, which would mean, for example, reducing an illness rate or a death rate from 66% to 34%.

Publication: Comparing Effect Sizes of Independent Studies (American Psychological Association, 1982)
Authors: Rosenthal, Robert; Rubin, Donald
Abstract: This article presents a general set of procedures for comparing the effect sizes of two or more independent studies. The procedures include a method for calculating the approximate significance level for the heterogeneity of effect sizes of studies and a method for calculating the approximate significance level of a contrast among the effect sizes. Although the focus is on effect size as measured by the standardized difference between the means (d), defined as (M1 - M2)/S, the procedures can be applied to any measure of effect size having an estimated variance. This extension is illustrated with effect size measured by the difference between proportions.

Publication: Further Meta-Analytic Procedures for Assessing Cognitive Gender Differences (American Psychological Association, 1982)
Authors: Rosenthal, Robert; Rubin, Donald
Abstract: We describe procedures for (a) assessing the heterogeneity of a set of effect sizes derived from a meta-analysis, (b) testing for trends by means of contrasts among the effect sizes obtained, and (c) evaluating the practical importance of the average effect size obtained. On the basis of applying these procedures to data presented in Hyde (1981) on cognitive gender differences, we conclude the following: (a) that for all four areas of cognitive skill investigated, effect sizes for gender differences differed significantly across studies (at least at p < .001); (b) that studies of gender differences conducted more recently show a substantial gain in cognitive performance by females relative to males (unweighted mean r across four cognitive areas = .40); (c) that studies of gender differences show male versus female effect sizes of practical importance equivalent to outcome rates of 60% versus 40%.

Publication: Ensemble-Adjusted p Values (American Psychological Association, 1983)
Authors: Rosenthal, Robert; Rubin, Donald
Abstract: When contrasts or other tests of significance can be ordered according to their importance, adjusted p values can be computed that permit greater power to be brought to bear on contrasts of greater interest or importance. The application of these ensemble-adjusted p values is explained and illustrated.
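The BESD itself is a one-line computation: a correlation r maps to equivalent "success rates" of 0.5 - r/2 and 0.5 + r/2 in a hypothetical 2 x 2 table. A minimal sketch reproducing the paper's psychotherapy example:

```python
def besd(r):
    """Binomial Effect Size Display: convert a correlation r into the
    equivalent control and treatment success rates, 0.5 -/+ r/2."""
    return 0.5 - r / 2, 0.5 + r / 2

control, treatment = besd(0.32)  # (0.34, 0.66): success rate rises from 34% to 66%
```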
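The heterogeneity and contrast procedures described in "Comparing Effect Sizes of Independent Studies" and applied in "Further Meta-Analytic Procedures for Assessing Cognitive Gender Differences" correspond to standard fixed-effect meta-analytic tests. The sketch below shows the usual weighted chi-square heterogeneity statistic and an approximate z test for a contrast; the function and variable names are illustrative, and the papers' own variance formulas for d should be used to supply var_d:

```python
import numpy as np
from scipy.stats import chi2, norm

def heterogeneity_test(d, var_d):
    """Fixed-effect heterogeneity chi-square for k independent effect sizes.

    Under the null of a common effect, Q = sum w_j (d_j - dbar)^2 with
    w_j = 1/var_j is approximately chi-square with k - 1 degrees of freedom.
    """
    d, w = np.asarray(d, float), 1.0 / np.asarray(var_d, float)
    dbar = np.sum(w * d) / np.sum(w)
    Q = np.sum(w * (d - dbar) ** 2)
    return Q, chi2.sf(Q, len(d) - 1)

def effect_size_contrast(d, var_d, lam):
    """Approximate z test for a contrast (weights lam summing to zero)
    among independent effect sizes with estimated variances var_d."""
    d, v, lam = map(lambda a: np.asarray(a, float), (d, var_d, lam))
    z = np.sum(lam * d) / np.sqrt(np.sum(lam ** 2 * v))
    return z, 2 * norm.sf(abs(z))

# Three studies; test heterogeneity, then a linear trend across them.
Q, p_het = heterogeneity_test([0.2, 0.5, 0.8], [0.04, 0.05, 0.04])
z, p_trend = effect_size_contrast([0.2, 0.5, 0.8], [0.04, 0.05, 0.04], [-1, 0, 1])
```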
Publication: Multiple Contrasts and Ordered Bonferroni Procedures (American Psychological Association, 1984)
Authors: Rosenthal, Robert; Rubin, Donald
Abstract: This article presents a general yet simple system for avoiding the increases in Type I errors that typically occur when an increasing number of contrasts is to be computed. The procedures described are all based on the Bonferroni inequality, which leads to methods of correcting for the number of contrasts tested. The three types of Bonferroni tests presented differ in the degree to which we specify beforehand the relative importance we attach to each of the planned contrasts. Although a quite conservative procedure in its typical application, the Bonferroni system of procedures is recommended for its flexibility, simplicity, and generality. When the power of the basic Bonferroni method is focused by ordering the contrasts (or any other tests of significance) by their importance, the disadvantage of conservatism can be, to a great extent, overcome.

Publication: Statistical Analysis: Summarizing Evidence Versus Establishing Facts (American Psychological Association, 1985)
Authors: Rosenthal, Robert; Rubin, Donald
Abstract: We contrast our view of the primary role of statistical analysis as an aid to summarizing evidence with the view that its primary role is to establish facts. Some implications of these differing viewpoints for the publication of nonsignificant results, the relative emphasis on Type I and Type II errors, and the weighting of contrasts by their importance and interest are discussed.

Publication: Meta-Analytic Procedures for Combining Studies With Multiple Effect Sizes (American Psychological Association, 1986)
Authors: Rosenthal, Robert; Rubin, Donald
Abstract: In this article we present a general set of meta-analytic procedures for combining and comparing research results from studies yielding multiple effect sizes based on multiple dependent variables. These require, in addition to the individual effect sizes or significance levels, only the degrees of freedom in the study and the typical intercorrelation among the variables. Older methods are reviewed, and a new method is presented for obtaining a single summary effect size estimate from multiple effect sizes. Significance testing of this summary effect size estimate is described. Procedures for computing the effect size for a contrast, and its significance level, among the multiple effect sizes of a single study are also described. Finally, methods for dealing with problems of heterogeneous intercorrelations among the dependent variables are presented.
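The ensemble-adjusted p values (1983) and the ordered Bonferroni procedures (1984) both rest on the same idea: allocate the overall significance level unevenly across contrasts according to their pre-specified importance, so power is concentrated where it matters most. A minimal weighted-Bonferroni sketch in that spirit (the papers' exact adjustment formulas may differ in detail):

```python
def weighted_bonferroni(pvals, weights, alpha=0.05):
    """Weighted Bonferroni: split the familywise alpha across tests in
    proportion to pre-specified importance weights.

    With the weights normalized to sum to 1, rejecting H_j whenever
    p_j <= weights[j] * alpha controls the familywise Type I error rate
    at alpha while granting more power to the contrasts judged most
    important beforehand.
    """
    total = sum(weights)
    return [p <= (w / total) * alpha for p, w in zip(pvals, weights)]

# Three planned contrasts, the first deemed twice as important as the others:
# per-test thresholds are 0.025, 0.0125, 0.0125, so only the first rejects.
decisions = weighted_bonferroni([0.020, 0.030, 0.200], [2, 1, 1])
```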
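For the 1986 paper's central quantity, a single summary effect size from k correlated dependent variables, one standard derivation uses the typical intercorrelation rho: the mean of k standardized variables with common correlation rho has standard deviation sqrt((1 + (k - 1) rho) / k), so the mean per-variable effect size is rescaled by the reciprocal of that factor. A sketch under this equal-correlation assumption, not necessarily the paper's exact estimator:

```python
import numpy as np

def composite_effect_size(d, rho):
    """Effect size for the mean of k standardized dependent variables
    sharing a typical intercorrelation rho.

    The composite's standard deviation is sqrt((1 + (k - 1) rho) / k),
    so the average of the per-variable effect sizes is divided by it.
    """
    d = np.asarray(d, float)
    k = len(d)
    return d.mean() * np.sqrt(k / (1 + (k - 1) * rho))

# Three dependent variables with typical intercorrelation .5:
d_comp = composite_effect_size([0.30, 0.40, 0.50], rho=0.5)  # about 0.49
```

Note that as rho approaches 1 the composite adds nothing beyond a single variable (the factor goes to 1), while for small rho the composite effect size exceeds the simple average, reflecting the gain from combining nearly independent measures.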