
Mark Palko points me to this webpage, which presents a recent research paper by Joanna Shepherd and Michael Kang. I have no comment on the research—I haven’t had a chance to read the paper—but I wanted to say how impressed I was by the presentation. It starts with a dedicated URL just for this paper […]
The post Forget about pdf: this looks much better, it makes all my own papers look like kids’ crayon drawings by comparison. appeared first on Statistical Modeling, Causal Inference, and Social Science.


Jake Humphries writes: For many years I wanted to pursue medicine, but after recently completing a master of public health, I caught the statistics bug. I need to complete the usual minimum prerequisites for graduate study in statistics (calculus through multivariable calculus, plus linear algebra) but want to take additional math courses, as highly competitive […]
The post Which of these classes should he take? appeared first on Statistical Modeling, Causal Inference, and Social Science.


We interrupt our usual programming of mockery of buffoons to discuss a bit of statistical theory . . . Continuing from yesterday’s quotation of my 2012 article in Epidemiology: Like many Bayesians, I have often represented classical confidence intervals as posterior probability intervals and interpreted one-sided p-values as the posterior probability of a positive effect. […]
The post “The general problem I have with noninformatively-derived Bayesian probabilities is that
[…]
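The correspondence mentioned in this excerpt can be sketched numerically: for a normal-mean problem with a flat prior, the posterior probability of a positive effect equals one minus the classical one-sided p-value. The estimate and standard error below are made-up values for illustration only.

```python
import math

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Hypothetical estimate and standard error (illustration values, not from the post)
est, se = 1.5, 1.0
z = est / se

p_one_sided = 1.0 - phi(z)    # classical one-sided p-value for H0: theta <= 0
post_prob_positive = phi(z)   # Pr(theta > 0 | data) under a flat prior

# Under the flat prior the two are complementary:
#   Pr(effect > 0 | data) = 1 - (one-sided p-value)
print(round(p_one_sided, 4), round(post_prob_positive, 4))
```

This is why the noninformative-prior interpretation is so tempting: the classical summary and the Bayesian summary are the same number read two ways.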


This story is pretty horrifying/funny. But the strangest thing was this part: [The author] and her colleague have appealed to the unnamed journal, which belongs to the PLoS family . . . I thought PLOS published just about everything! This is not a slam on PLOS. Arxiv publishes everything too, and Arxiv is great. The […]
The post There are 6 ways to get rejected from PLOS: (1) theft, (2) sexual harassment, (3) running an experiment without a control group, (4) keeping a gambling addict
[…]


From my 2012 article in Epidemiology: In theory the p-value is a continuous measure of evidence, but in practice it is typically trichotomized approximately into strong evidence, weak evidence, and no evidence (these can also be labeled highly significant, marginally significant, and not statistically significant at conventional levels), with cutoffs roughly at p=0.01 and 0.10. […]
The post Good, mediocre, and bad p-values appeared first on Statistical Modeling, Causal Inference, and Social Science.
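The trichotomy described in the excerpt can be written as a small helper. The function name and the category labels are my own; only the rough cutoffs at p = 0.01 and p = 0.10 come from the quoted passage.

```python
def evidence_category(p):
    """Classify a p-value using the rough cutoffs from the excerpt (0.01 and 0.10)."""
    if not 0.0 <= p <= 1.0:
        raise ValueError("p-value must lie in [0, 1]")
    if p < 0.01:
        return "strong evidence (highly significant)"
    if p < 0.10:
        return "weak evidence (marginally significant)"
    return "no evidence (not statistically significant)"

# A continuous measure of evidence collapsed into three bins:
for p in (0.003, 0.04, 0.25):
    print(p, "->", evidence_category(p))
```

The point of the excerpt, of course, is that this binning discards information: p = 0.009 and p = 0.011 land in different bins despite being nearly identical as evidence.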