Good science

We chose our company name, Creascience, to emphasize that creativity can blend effectively with science. On this blog, we share illustrations of how the correct use of statistics leads to dramatic improvements in scientific research. Come back often to read our latest findings, or simply follow this link to get notified by email (guaranteed 100% ad-free).

  • Strategic Thinking in Biotech/Medtech R&D

    During the COVID-19 pandemic, life sciences investment surged, with biotech stock valuations and IPO funding peaking in 2021. Since then, however, investment has declined significantly, leading companies to tighten R&D belts and streamline project pipelines to retain financial headroom for key programs. The Need for Thinking Differently: Life science venture capital has remained above pre-pandemic levels due to…

  • Study Design: Think “Scientific Value” not “P-Values”

    “Statistically based experimental designs have been available for over a century. However, many preclinical researchers are completely unaware of these methods, and the success of experiments is usually equated only with ‘p < 0.05’. By contrast, a well-thought-out experimental design strategy provides data with evidentiary and scientific value. A value-based strategy requires implementation of statistical design principles…

  • Beyond Power Calculations: Assessing Type S (Sign) and Type M (Magnitude) Errors

    “Statistical power analysis provides the conventional approach to assess error rates when designing a research study. However, power analysis is flawed in that a narrow emphasis on statistical significance is placed as the primary focus of study design. In noisy, small-sample settings, statistically significant results can often be misleading. To help researchers address this problem… (See the short numerical sketch at the end of this list for an illustration of Type S and Type M error rates.)

  • The Primary Outcome Is Positive — Is That Good Enough?

    Clinical trial findings are often simplified into a binary conclusion, focusing on a P value of less than 0.05 for a treatment difference. However, a more nuanced interpretation requires examining the total evidence, including secondary end points, safety issues, and trial size and quality. This article aims to facilitate a more sophisticated and balanced interpretation…

  • Effect Size, Confidence Interval & Statistical Significance: A Practical Guide for Biologists

    Main ideas and reference: Nakagawa, S., & Cuthill, I. C. (2007). Effect size, confidence interval and statistical significance: a practical guide for biologists. Biological Reviews, 82(4), 591–605. doi:10.1111/j.1469-185X.2007.00027.x (A worked example of an effect size with a confidence interval appears at the end of this list.)

  • Power Failure: Why Small Sample Size Undermines the Reliability of Neuroscience

    The issues discussed in this paper are clearly not only prevalent in neuroscience… I know that this is a 2013 paper and hopefully the situation has improved, but allow me to seriously doubt it. Nevertheless, I decided to post it because of the general nature of the problems and also because this is an easy…
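As a companion to the Type S / Type M post above, here is a minimal numerical sketch of the idea (not the authors' own code): given a hypothesized true effect and the standard error your design would produce, it simulates hypothetical replications and reports the power, the probability that a statistically significant estimate has the wrong sign (Type S), and the average factor by which significant estimates exaggerate the true effect (Type M). The function name and the illustrative values (a true effect of 0.1 measured with a standard error of 0.3) are assumptions chosen only to show how small, noisy studies behave.

```python
import numpy as np
from scipy import stats

def design_errors(true_effect, se, alpha=0.05, n_sims=100_000, seed=1):
    """Monte Carlo sketch of power, Type S (wrong-sign) rate and Type M
    (exaggeration) ratio for a normally distributed estimate with a known
    standard error, in the spirit of design analysis a la Gelman & Carlin."""
    rng = np.random.default_rng(seed)
    z_crit = stats.norm.ppf(1 - alpha / 2)           # two-sided critical value
    estimates = rng.normal(true_effect, se, n_sims)  # hypothetical replications
    significant = np.abs(estimates) > z_crit * se    # replications reaching p < alpha
    power = significant.mean()
    wrong_sign = np.sign(estimates[significant]) != np.sign(true_effect)
    type_s = wrong_sign.mean()                       # P(wrong sign | significant)
    type_m = np.abs(estimates[significant]).mean() / abs(true_effect)
    return power, type_s, type_m

# Illustrative values: a small true effect measured with a noisy design
power, type_s, type_m = design_errors(true_effect=0.1, se=0.3)
print(f"power = {power:.2f}, Type S = {type_s:.2f}, exaggeration (Type M) = {type_m:.1f}x")
```

With numbers like these, power is well below 10%, roughly one significant result in six has the wrong sign, and significant estimates overstate the true effect several-fold, which is exactly the trap the post warns about.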
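In the same spirit, the Nakagawa & Cuthill post above argues for reporting effect sizes with confidence intervals rather than bare p-values. The short sketch below computes Cohen's d for two independent groups together with an approximate confidence interval; it is an illustration of the general idea, not code from the paper, and the standard-error formula used is a common large-sample approximation.

```python
import numpy as np
from scipy import stats

def cohens_d_with_ci(group_a, group_b, conf=0.95):
    """Cohen's d for two independent groups plus an approximate
    confidence interval (large-sample normal approximation)."""
    a = np.asarray(group_a, dtype=float)
    b = np.asarray(group_b, dtype=float)
    n1, n2 = len(a), len(b)
    # Pooled standard deviation
    sp = np.sqrt(((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1)) / (n1 + n2 - 2))
    d = (a.mean() - b.mean()) / sp
    # Approximate standard error of d for two independent groups
    se_d = np.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2 - 2)))
    z = stats.norm.ppf(0.5 + conf / 2)
    return d, (d - z * se_d, d + z * se_d)

# Illustrative data: two small samples with heavily overlapping distributions
rng = np.random.default_rng(2)
treated = rng.normal(10.5, 2.0, 20)
control = rng.normal(10.0, 2.0, 20)
d, (lo, hi) = cohens_d_with_ci(treated, control)
print(f"Cohen's d = {d:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

Reported this way, the reader sees both the estimated magnitude of the difference and how uncertain it is, which carries far more scientific value than a lone p-value.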