AcquiLAB Documentation
Bayesian Optimization for Scientific Experiment Design
This guide explains the Bayesian Optimization workflow used in AcquiLAB, including surrogate modeling, acquisition strategy, campaign setup, stopping criteria, and domain examples.
1. What is Bayesian Optimization?
Bayesian Optimization is a method for optimizing objectives measured through expensive experiments. Instead of brute-force search, it builds a probabilistic model of the response surface and selects the next experiment using an explicit decision rule.
In AcquiLAB, the core loop is Gaussian Process-based and has three components: a surrogate model, an acquisition function, and an iterative experiment loop.
2. The AcquiLAB Optimization Loop
Each iteration fits the surrogate model to all observations collected so far, scores candidate experiments with the acquisition function, runs the top-ranked experiment, and adds the new result to the dataset. The loop continues until a stopping condition is met, such as a performance plateau, resource limits, or reaching the desired objective level.
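The full loop can be sketched in a few lines. The toy objective, hand-rolled GP surrogate, kernel lengthscale, and UCB weight below are illustrative assumptions for the sketch, not AcquiLAB defaults or internals:

```python
import numpy as np

def objective(x):
    """Stand-in for an expensive experiment (true optimum at x = 0.7)."""
    return -(x - 0.7) ** 2

def rbf(a, b, lengthscale=0.2):
    """Squared-exponential kernel between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    """Posterior mean and standard deviation of a zero-mean GP at Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(Xs, X)
    mu = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mu, np.sqrt(np.clip(var, 0.0, None))

candidates = np.linspace(0.0, 1.0, 101)
X = np.array([0.0, 0.5, 1.0])      # initial design
y = objective(X)

for _ in range(10):                # iterative experiment loop
    mu, sigma = gp_posterior(X, y, candidates)
    score = mu + 2.0 * sigma       # UCB acquisition, kappa = 2
    for seen in X:                 # skip already-measured settings
        score[np.isclose(candidates, seen)] = -np.inf
    x_next = candidates[np.argmax(score)]
    X = np.append(X, x_next)       # "run" the experiment, record result
    y = np.append(y, objective(x_next))

best_x = X[np.argmax(y)]
```

After ten iterations the recommended settings cluster around the true optimum, illustrating how the surrogate, acquisition rule, and feedback loop interact.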
3. Surrogate Models
Gaussian Process (GP) surrogates estimate the unknown response across parameter space. They provide both a mean prediction and an uncertainty estimate, which is critical when data is scarce and each run is costly.
- Models nonlinear response surfaces from limited runs
- Estimates uncertainty to guide high-value experiments
- Updates after each new observation
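A minimal numerical illustration of these properties, assuming a hand-written squared-exponential kernel and three hypothetical completed runs (a sketch, not AcquiLAB's internal model):

```python
import numpy as np

def rbf(a, b, lengthscale=0.15):
    """Squared-exponential kernel; prior variance is 1."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

X = np.array([0.1, 0.4, 0.9])   # parameter settings already run
y = np.array([0.2, 1.0, 0.5])   # measured responses

K = rbf(X, X) + 1e-8 * np.eye(len(X))
query = np.array([0.4, 0.65])   # one observed setting, one unexplored gap
Ks = rbf(query, X)
mu = Ks @ np.linalg.solve(K, y)                              # mean prediction
var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)  # posterior variance
sigma = np.sqrt(np.clip(var, 0.0, None))
# mu[0] reproduces the observed value, with sigma near zero there;
# sigma[1] is much larger in the unexplored gap at 0.65
```

The contrast between the two query points is exactly the signal the acquisition function exploits: low uncertainty where data exists, high uncertainty in gaps.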
4. Acquisition Functions
The acquisition function decides where to sample next by balancing expected gain against uncertainty.
Expected Improvement (EI)
Targets locations with strong expected gain over the current best result.
Upper Confidence Bound (UCB)
Adds an uncertainty bonus to encourage broader exploration.
Probability of Improvement (PI)
Prioritizes regions likely to improve over baseline with conservative behavior.
Exploration-exploitation tuning commonly uses kappa (UCB weight) and xi (improvement threshold), where larger values generally increase exploration.
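The three rules can be written down directly. These are the standard textbook formulas for a maximization problem, with scalar inputs; they are a sketch of the concepts, not AcquiLAB's internal implementation:

```python
import math

def _phi(z):
    """Standard normal probability density function."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def _Phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_improvement(mu, sigma, best, xi=0.01):
    """EI: expected gain over the current best, minus threshold xi."""
    if sigma <= 0.0:
        return max(mu - best - xi, 0.0)
    z = (mu - best - xi) / sigma
    return (mu - best - xi) * _Phi(z) + sigma * _phi(z)

def probability_of_improvement(mu, sigma, best, xi=0.01):
    """PI: chance of beating the current best by at least xi."""
    if sigma <= 0.0:
        return float(mu - best - xi > 0.0)
    return _Phi((mu - best - xi) / sigma)

def upper_confidence_bound(mu, sigma, kappa=2.0):
    """UCB: mean plus a kappa-weighted uncertainty bonus."""
    return mu + kappa * sigma
```

Raising kappa or xi inflates the value of uncertain candidates, which is why larger values push the campaign toward exploration.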
5. Optimization Modes in AcquiLAB
The BO Designer provides multiple campaign modes for different experimental settings.
Standard BO
Classic sequential Bayesian Optimization with one recommendation per iteration.
Parallel BO
Proposes multiple experiments in one round for laboratories that run batches in parallel.
Multi-fidelity
Combines lower-cost and higher-cost evaluations to improve search efficiency.
Active Learning
Prioritizes informative sampling across parameter space when building predictive models.
Physics-aware
Incorporates hard and soft scientific constraints into recommendation logic.
Mechanism-aware
Uses domain hints, shape priors, and mixture constraints to guide search behavior.
6. Multi-Objective Bayesian Optimization (MOBO)
Multi-objective optimization is used when objectives compete, for example maximizing yield while minimizing cost, or maximizing sensitivity while minimizing noise.
MOBO searches for Pareto-optimal solutions rather than a single optimum. The result is a Pareto frontier that represents trade-off choices between objectives.
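A Pareto frontier can be extracted from completed runs with a simple dominance filter. The (yield, cost) pairs below are made-up example data, with yield maximized and cost minimized:

```python
def pareto_front(points):
    """Return the non-dominated subset of (yield, cost) pairs:
    a run is dominated if another run has yield at least as high
    and cost at least as low, with strict improvement in one."""
    front = []
    for i, (yi, ci) in enumerate(points):
        dominated = any(
            yj >= yi and cj <= ci and (yj > yi or cj < ci)
            for j, (yj, cj) in enumerate(points) if j != i
        )
        if not dominated:
            front.append((yi, ci))
    return front

runs = [(5, 2), (3, 1), (4, 3), (6, 4), (2, 5)]
front = pareto_front(runs)
# (4, 3) is dominated by (5, 2); (2, 5) is dominated by every other run
```

Each surviving pair is a distinct trade-off: no frontier point can be improved in one objective without giving ground in the other.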
7. Exploration vs Exploitation
Exploration samples uncertain regions to improve model knowledge. Exploitation samples high-performing regions to improve objective values. Acquisition functions balance these strategies over iterations.
8. Convergence and Stopping Criteria
Typical stopping criteria include a response plateau, reaching an objective threshold, hitting an iteration limit, or exhausting the experiment budget. Convergence indicates stabilization under observed data, but does not guarantee a global optimum.
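These criteria combine naturally into a single check. The sketch below assumes `best_history` holds the running best objective value per iteration; the window, tolerance, and budget defaults are illustrative, not product settings:

```python
def should_stop(best_history, window=5, tol=1e-3,
                target=None, max_iters=50):
    """Stop when the running best has plateaued, a target objective
    has been reached, or the iteration budget is spent."""
    if target is not None and best_history and best_history[-1] >= target:
        return True                      # objective threshold reached
    if len(best_history) >= max_iters:
        return True                      # experiment budget exhausted
    if len(best_history) > window:
        improvement = best_history[-1] - best_history[-1 - window]
        if improvement < tol:
            return True                  # response plateau
    return False
```

Evaluating this after each iteration keeps the campaign from spending runs on negligible gains while still honoring explicit targets and budgets.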
9. Example Applications Across Research Fields
Electrochemistry
Optimize electrolyte composition and operating conditions for current density or stability targets.
Spectroscopy
Tune acquisition parameters to improve signal-to-noise ratio and feature visibility.
Materials Science
Adjust synthesis temperature, time, and precursor ratios to improve material properties.
Chromatography
Optimize mobile phase composition and flow settings for separation quality and runtime.
Biological Assays
Calibrate reagent concentrations and incubation settings for stronger assay sensitivity.
Custom Experiments
Apply the same loop to any tabular experiment with adjustable inputs and measurable outputs.
Start an optimization campaign
Create a campaign in the product app to define variables, choose an optimization mode, and run iterative Bayesian Optimization with experiment feedback.