Fundamental Statistical Tools
This workshop offers an introduction to the fundamental principles and concepts of statistics. The first part covers classical and more recent exploratory data analysis (EDA) techniques for describing data with numerical and graphical tools. The various uses of these methods, such as outlier detection, are discussed. The second part addresses, with the help of real-life examples, the principles underlying statistical testing and decision-making in the presence of uncertainty. It covers the risks involved, effect size, and p-values, as well as statistical significance versus practical relevance. The use and interpretation of confidence intervals are also discussed. An excellent introductory module and a solid basis for all other courses.
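As a taste of the EDA material, the classical interquartile-range rule (Tukey's fences) for flagging outliers can be sketched in a few lines of Python; the data values below are invented for illustration:

```python
import numpy as np

def iqr_outliers(x, k=1.5):
    """Flag points outside [Q1 - k*IQR, Q3 + k*IQR] (Tukey's fences)."""
    x = np.asarray(x, dtype=float)
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1                       # interquartile range
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return x[(x < lo) | (x > hi)]

# Seven hypothetical repeated measurements, one gross error
data = [9.8, 10.1, 10.0, 9.9, 10.2, 10.05, 14.7]
print(iqr_outliers(data))  # only the 14.7 reading is flagged
```

The multiplier k = 1.5 is the conventional choice; a larger k flags only more extreme points.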
Engineering Applications of Machine Learning
Artificial Intelligence (AI) has become a prominent selling point, whether you are buying a simple toaster or an advanced assembly-line robot. The purpose of this seminar is to demystify the jargon and to provide attendees with the tools …
Introduction to the Design of Experiments (DOE)
Variation is present in every experiment. Learn about DOE techniques to control variation and to maximise data quality. This workshop presents classical techniques for designing efficient experiments as well as the tools to analyse their results. The principles of sample size calculation, strategies to remove undesirable sources of variability such as the use of blocks and controls, as well as the most commonly used experimental designs are discussed. The statistical analysis of designed experiments is introduced progressively, starting with the t-test used to compare two groups. The analysis of variance (ANOVA) is then covered extensively, from simple one-factor experiments to more advanced multi-factor situations where interactions between factors need to be considered. Multiple comparison techniques used to locate differences are also presented.
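The two analysis steps named above, a two-group t-test followed by a one-way ANOVA, might look as follows with `scipy.stats`; the three treatment groups are simulated, with arbitrary means and sample sizes:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Three hypothetical treatment groups (simulated yields)
a = rng.normal(50, 2, 20)
b = rng.normal(50, 2, 20)   # same true mean as a
c = rng.normal(54, 2, 20)   # shifted true mean

t_stat, p_two = stats.ttest_ind(a, b)     # two-group comparison
f_stat, p_anova = stats.f_oneway(a, b, c) # one-way ANOVA across all three

# A small p_anova indicates at least one group mean differs;
# multiple comparison techniques then locate which ones.
```

ANOVA only signals that *some* difference exists; follow-up multiple comparisons (e.g. Tukey's HSD) are what identify the differing pairs.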
Advanced Experimental Designs
Learn about advanced experimental designs that account for various types of experimental constraints: time, available resources, material heterogeneity, randomisation restrictions when certain factors are more difficult or costly to change than others, different sizes of experimental units, and repeated measures. In this course, the construction of advanced designs and their statistical analysis are covered with the help of real case studies.
Screening Techniques in DOE
In preliminary research phases, the number of potentially influential factors to investigate is usually large. Screening designs are experimental designs used to identify, with a reasonable number of runs, the factors that most influence a response or outcome in a process or system. They are typically used in the early stages of experimentation, when you want to assess a large number of variables quickly and determine which ones have the greatest effect on the response. The goal is to eliminate unimportant factors and focus resources on the most influential ones. Learn about the construction of fractional factorial designs, aliasing, and de-aliasing strategies. A working knowledge of multiple linear regression is needed to make the most of this workshop.
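A minimal sketch of how a half-fraction is constructed, and why effects become aliased, using only NumPy; the 2^(4-1) design with generator D = ABC is a standard textbook example:

```python
import itertools
import numpy as np

# Full 2^3 factorial in factors A, B, C (coded -1/+1): 8 runs
base = np.array(list(itertools.product([-1, 1], repeat=3)))
A, B, C = base.T

# Generator D = ABC gives the defining relation I = ABCD:
# 4 factors studied in 8 runs instead of the full 16
D = A * B * C
design = np.column_stack([A, B, C, D])

# Aliasing: an effect column is numerically identical to that of its alias,
# so the two effects cannot be separated from this fraction alone
assert np.array_equal(A * B, C * D)   # AB is aliased with CD
assert np.array_equal(D, A * B * C)   # D is aliased with ABC
```

De-aliasing strategies, such as running the complementary fold-over fraction, break these confounded pairs apart when a flagged effect needs to be attributed unambiguously.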
Optimisation Designs
Optimisation designs are experimental design strategies specifically structured to optimise a process, product, or system. Their goal is to identify the combination of factors (inputs) that leads to the best possible outcome (response) according to a defined objective, such as maximising performance, minimising cost, or finding the most efficient operating conditions. Learn about the experimental designs to use once the influential factors have been identified and the goal is to optimise their levels. The principles underlying the construction of central composite and Box-Behnken designs are covered, and model building and response surface methodology are reviewed.
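As an illustration of the response-surface idea, a second-order model can be fitted to a face-centred composite design and its stationary point located with plain NumPy; the response below is a hypothetical noiseless quadratic with a known optimum at (0.5, -0.3) in coded units:

```python
import numpy as np

# Face-centred central composite design for two factors (coded units)
pts = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],   # factorial corners
                [-1, 0], [1, 0], [0, -1], [0, 1],     # axial (face) points
                [0, 0]])                              # centre point
x1, x2 = pts.T.astype(float)

# Hypothetical response surface with a maximum at (0.5, -0.3)
y = 80 - 4 * (x1 - 0.5) ** 2 - 3 * (x2 + 0.3) ** 2

# Fit the full second-order model by least squares
X = np.column_stack([np.ones(len(y)), x1, x2, x1**2, x2**2, x1 * x2])
b = np.linalg.lstsq(X, y, rcond=None)[0]

# Stationary point: set the gradient of the fitted quadratic to zero
H = np.array([[2 * b[3], b[5]],
              [b[5], 2 * b[4]]])
xs = np.linalg.solve(H, -b[1:3])
print(np.round(xs, 3))  # recovers [ 0.5 -0.3]
```

With real, noisy data the fitted optimum carries uncertainty, which is exactly where careful design (axial points, centre replicates) pays off.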
Linear Regression Modelling Techniques
Linear regression is a method used to model the relationship between a dependent variable and one or more independent variables by fitting a linear equation to observed data. Building a regression model with statistical packages has become straightforward. However, interpreting the software output and building a good model are no simple tasks. Learn about statistical modelling with a focus on linear models: what a model is; estimating and interpreting model coefficients; dealing with continuous and categorical predictors and interactions; evaluating model performance, explanatory vs. predictive; common pitfalls and best practices; and an introduction to nonlinear regression.
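A minimal example of fitting and interpreting a simple linear model with `scipy.stats.linregress`; the curing-time data are invented for illustration:

```python
import numpy as np
from scipy import stats

# Hypothetical data: part strength (MPa) vs curing time (h),
# generated as a known line plus small fixed disturbances
time = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
noise = np.array([0.2, -0.1, 0.3, -0.2, 0.1, -0.3, 0.15, -0.15])
strength = 12.0 + 2.5 * time + noise

fit = stats.linregress(time, strength)
# fit.slope: expected change in strength per extra hour of curing
# fit.intercept: fitted strength at time zero (extrapolation -- interpret with care)
# fit.rvalue**2: share of the response variance explained by the model
# fit.pvalue: test of the null hypothesis that the true slope is zero
```

Interpreting which of these numbers matters, and when a high R² can still hide a poor model, is one of the central themes of the workshop.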
Regression Modelling Techniques for Categorical Data
Linear regression is inappropriate for modelling binary responses such as pass/fail or survived/died. Learn the principles of logistic regression, a member of the Generalized Linear Model family, along with its similarities to linear regression and its specific tools. Good practices for model building and for assessing goodness-of-fit are presented.
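A short sketch of a logistic regression on hypothetical pass/fail data, here fitted with scikit-learn (the temperatures and outcomes below are invented):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical stress-test data: failure becomes more likely as temperature rises
temp = np.array([20, 25, 30, 35, 40, 45, 50, 55, 60, 65], dtype=float).reshape(-1, 1)
fail = np.array([0, 0, 0, 0, 1, 0, 1, 1, 1, 1])   # 1 = failed

model = LogisticRegression(max_iter=1000).fit(temp, fail)

# The model outputs a probability between 0 and 1, not a raw linear score:
# predicted failure probability at 58 degrees
p58 = model.predict_proba([[58]])[0, 1]
```

The fitted coefficient acts on the log-odds scale: each extra degree multiplies the odds of failure by exp(coefficient), which is the logistic analogue of a linear-regression slope.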
Statistical Methods for Reliability Studies
Reliability studies are research or experimental designs focused on assessing and improving the reliability of systems, products, or processes: how consistently and dependably they perform over time and under varying conditions, without failure, across the expected lifespan. In industrial applications, reliability is crucial and testing is expensive, so the collected data must be exploited as well as possible. Reliability data possess specific features, such as censored observations, that call for dedicated statistical methods. Learn about statistical tools for reliability analysis.
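One common reliability task, fitting a Weibull lifetime model and reading off a B10 life, can be sketched with `scipy.stats` on simulated (uncensored) failure times; real studies typically also involve censored units, which need the dedicated methods covered in the course:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Simulated failure times (hours) from a Weibull with shape 2, scale 1000
times = stats.weibull_min.rvs(2.0, scale=1000, size=200, random_state=rng)

# Fit shape (beta) and scale (eta); location fixed at 0, as usual for lifetimes
beta, loc, eta = stats.weibull_min.fit(times, floc=0)

# beta > 1 indicates a wear-out failure mode (increasing hazard rate)
# B10 life: time by which 10% of units are expected to have failed
b10 = stats.weibull_min.ppf(0.10, beta, scale=eta)
```

The shape parameter is the key diagnostic: beta below 1 suggests infant mortality, near 1 a constant hazard, and above 1 wear-out.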
Principal Component Analysis
Learn about Principal Component Analysis (PCA), a data reduction technique used to identify, quantify, and visualise the structure of a set of measurements. PCA provides insightful data visualisation tools. Learn about innovative applications. During the workshop, emphasis is put on the principles and conditions of use of the method, the results it provides, and their interpretation. Plenty of time is devoted to case studies and the interpretation of software output.
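A minimal PCA sketch with scikit-learn, run on simulated correlated measurements driven by two latent factors, to show how the explained-variance ratio reveals the effective dimensionality of the data:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# 100 samples of 5 correlated measurements driven by 2 latent factors
latent = rng.normal(size=(100, 2))
loadings = rng.normal(size=(2, 5))
X = latent @ loadings + 0.05 * rng.normal(size=(100, 5))

pca = PCA()
scores = pca.fit_transform(X)          # samples expressed on the new axes
ratio = pca.explained_variance_ratio_  # variance share captured by each component

# Although 5 variables were measured, almost all variance
# lies in the first 2 components -- the data are effectively 2-D
```

In practice, the component loadings are then inspected to interpret what each retained axis measures, which is where most of the workshop time is spent.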
Cluster Analysis – Unsupervised Learning
Learn how to take data (consumers, genes, …) and organise them into homogeneous groups for use in many applications, such as market analysis and biomedical data analysis, or as a pre-processing step for many data mining tasks. Learn about this very active field of research in statistics and data mining, and discover new techniques and innovative applications. During the workshop, emphasis is put on the principles and conditions of use of the methods, the results they provide, and their interpretation. Plenty of time is devoted to case studies and the interpretation of software output.
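As an illustration, k-means (one of the clustering techniques covered) applied to three simulated, well-separated consumer segments; no group labels are given to the algorithm, which is what makes this learning unsupervised:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Three well-separated synthetic segments in a 2-D feature space
g1 = rng.normal([0, 0], 0.3, size=(50, 2))
g2 = rng.normal([5, 0], 0.3, size=(50, 2))
g3 = rng.normal([0, 5], 0.3, size=(50, 2))
X = np.vstack([g1, g2, g3])

# k-means recovers the three groups from the data alone
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
labels = km.labels_
```

Real data are rarely this clean, and choosing the number of clusters, the distance measure, and the algorithm itself are the practical questions the workshop addresses.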