
BoTorch Multi-Objective Optimization

Overview

This module provides advanced multi-objective optimization capabilities using BoTorch (Bayesian Optimization in PyTorch) for the ORENI framework. It is specifically designed to handle discrete unordered variables commonly encountered in building renovation optimization problems.

Key Features

  • Multi-objective optimization with support for multiple conflicting objectives
  • Discrete variable handling through automatic encoding/decoding
  • Bayesian optimization using Gaussian Process models
  • Expected Hypervolume Improvement (EHVI) acquisition function
  • Parallel evaluation support
  • Uncertainty propagation integration

Installation

The BoTorch optimizer requires additional dependencies. Install them using:

poetry add torch botorch scipy

Or manually add to pyproject.toml:

[tool.poetry.dependencies]
torch = "^2.0.0"
botorch = "^0.9.0"
scipy = "^1.10.0"

Usage

Basic Usage

from oreni.optim.botorch_optimization import botorch_multi_objective_optimizer

# Simuls: the simulation problem object providing the cost functions,
# the decision space, and the uncertain space used below.

# Run optimization
uncertain_design, experience_design, uncertain_draws = botorch_multi_objective_optimizer(
    function=Simuls.compute_cost_functions,
    decision_space=Simuls.decision_space,
    uncertain_space=Simuls.uncertain_space,
    function_labels=["Life_Cycle_Cost", "Life_Cycle_Assessment", "Thermal_Comfort"],
    n_initial_points=10,
    n_optimization_iterations=20,
    n_uncs=10,
    n_occ_rdm=1,
    n_jobs=4,
    acquisition_function="qEHVI",
    batch_size=1,
)

Advanced Configuration

# Custom reference point for hypervolume calculation
reference_point = [10000, 5000, 100]  # [Cost, CO2, Comfort]

# Batch evaluation for parallel processing
uncertain_design, experience_design, uncertain_draws = botorch_multi_objective_optimizer(
    function=Simuls.compute_cost_functions,
    decision_space=Simuls.decision_space,
    uncertain_space=Simuls.uncertain_space,
    function_labels=function_labels,
    n_initial_points=15,
    n_optimization_iterations=30,
    n_uncs=5,
    acquisition_function="qEHVI",
    reference_point=reference_point,
    batch_size=2,  # Evaluate 2 designs in parallel
)

Discrete Variable Handling

The optimizer automatically handles discrete unordered variables through the DiscreteVariableEncoder class:

Encoding Process

  1. Variable Detection: Automatically identifies discrete variables in the decision space
  2. Integer Encoding: Maps discrete choices to integer indices (0, 1, 2, ...)
  3. Bounds Definition: Sets appropriate bounds for BoTorch optimization
  4. Decoding: Converts optimized continuous values back to discrete choices

Example

# Original discrete variable
choices = ["material_A", "material_B", "material_C"]

# Encoded for BoTorch
encoded_choices = [0, 1, 2]

# Bounds for optimization
bounds = (0, 2)  # min=0, max=2
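
Building on the mapping above, the sketch below shows the full encode/decode round trip. The helper functions are illustrative only (they are not the DiscreteVariableEncoder API); the key point is that continuous values proposed by BoTorch are rounded and clamped back to valid indices.

# Illustrative round trip for one discrete variable (not the module's actual API)
choices = ["material_A", "material_B", "material_C"]

def encode_choice(choice):
    # Map a discrete choice to its integer index (0, 1, 2, ...)
    return choices.index(choice)

def decode_value(value):
    # Round the continuous value proposed by BoTorch to the nearest index
    # and clamp it to the bounds before mapping back to a choice
    index = min(max(int(round(value)), 0), len(choices) - 1)
    return choices[index]

assert decode_value(encode_choice("material_B")) == "material_B"
assert decode_value(1.7) == "material_C"  # 1.7 rounds to index 2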

Acquisition Functions

qEHVI (Expected Hypervolume Improvement)

The default acquisition function, which selects candidates that maximize the expected improvement in dominated hypervolume (a construction sketch follows the list below):

  • Advantages: Well-suited for multi-objective problems, provides good exploration/exploitation balance
  • Parameters: Reference point, partitioning algorithm
  • Use case: General multi-objective optimization
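
For reference, the standalone sketch below shows how a qEHVI acquisition function is typically constructed and optimized with the BoTorch API. It is not the module's internal _setup_acquisition_function: the fitted model, the observed objectives train_y, the design bounds, and the ref_point tensor are assumed to already exist, and BoTorch's maximization convention means objectives to be minimized are negated beforehand.

import torch
from botorch.acquisition.multi_objective.monte_carlo import (
    qExpectedHypervolumeImprovement,
)
from botorch.optim import optimize_acqf
from botorch.sampling.normal import SobolQMCNormalSampler
from botorch.utils.multi_objective.box_decompositions.non_dominated import (
    FastNondominatedPartitioning,
)

# Assumed to exist already: a fitted multi-output GP `model`, observed
# objectives `train_y` (n x m, maximization convention), design-space
# `bounds` (2 x d), and a reference point `ref_point` (m,).
partitioning = FastNondominatedPartitioning(ref_point=ref_point, Y=train_y)
acqf = qExpectedHypervolumeImprovement(
    model=model,
    ref_point=ref_point,
    partitioning=partitioning,
    sampler=SobolQMCNormalSampler(sample_shape=torch.Size([128])),
)

# Optimize the acquisition function to obtain the next batch of candidates
candidates, _ = optimize_acqf(
    acq_function=acqf,
    bounds=bounds,
    q=1,             # corresponds to batch_size in the wrapper above
    num_restarts=10,
    raw_samples=256,
)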

Future Acquisition Functions

Additional acquisition functions can be easily added:

  • qNEHVI: Noisy Expected Hypervolume Improvement
  • qUCB: Upper Confidence Bound
  • qEI: Expected Improvement

Performance Comparison

vs Random Sampling

Metric                     Random Sampling    BoTorch Optimization
Pareto frontier quality    Lower              Higher
Convergence speed          Slower             Faster
Function evaluations       More required      Fewer required
Computational overhead     Low                Medium

vs Other Optimizers

  • NSGA-II: Similar Pareto quality, but BoTorch requires fewer evaluations
  • MOEA/D: Better for many objectives, BoTorch better for few objectives
  • Random: BoTorch significantly outperforms on Pareto quality and evaluation count, at a modest computational overhead

Example Results

Pareto Frontier Comparison

import torch

from oreni.optim.botorch_optimization import compute_pareto_frontier

# random_objectives / botorch_objectives: (n_evaluations, n_objectives) arrays
# of objective values collected from the random-sampling and BoTorch runs

# Compute Pareto frontiers
random_pareto_obj, random_pareto_idx = compute_pareto_frontier(
    torch.tensor(random_objectives, dtype=torch.float64),
    minimize=[True, True, True]
)

botorch_pareto_obj, botorch_pareto_idx = compute_pareto_frontier(
    torch.tensor(botorch_objectives, dtype=torch.float64),
    minimize=[True, True, True]
)

print(f"Random: {len(random_pareto_idx)} Pareto solutions")
print(f"BoTorch: {len(botorch_pareto_idx)} Pareto solutions")

Convergence Analysis

import matplotlib.pyplot as plt
import numpy as np


def analyze_convergence(objectives, function_labels):
    """Analyze optimization convergence.

    objectives: (n_evaluations, n_objectives) array of objective values,
    in evaluation order (objectives are assumed to be minimized).
    """
    # Running best (lowest) value reached so far for each objective
    running_best = np.minimum.accumulate(objectives, axis=0)

    for i, label in enumerate(function_labels):
        plt.plot(running_best[:, i], label=f'Best {label}')
        plt.plot(objectives[:, i], alpha=0.3, label=f'All {label}')

    plt.xlabel('Iteration')
    plt.ylabel('Objective Value')
    plt.legend()
    plt.show()

Best Practices

Parameter Tuning

  1. Initial Points: Start with 10-20% of the total budget for exploration (see the sketch after this list)
  2. Iterations: 20-50 iterations typically sufficient for good convergence
  3. Uncertainty Samples: 5-10 samples per design for robust evaluation
  4. Batch Size: 1-2 for most cases, increase for parallel evaluation
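
As an illustration of the budget split in item 1, the snippet below (variable names are ours, assuming one evaluation per iteration with batch_size=1) reserves roughly 15% of a fixed evaluation budget for the initial design:

total_budget = 100                                       # total design evaluations you can afford
n_initial_points = max(5, round(0.15 * total_budget))    # ~15% spent on initial exploration
n_optimization_iterations = total_budget - n_initial_points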

Problem Setup

  1. Objective Scaling: Ensure objectives are on similar scales (see the scaling sketch after this list)
  2. Reference Point: Set realistic reference point for hypervolume calculation
  3. Discrete Variables: Verify encoding/decoding works correctly
  4. Constraints: Handle constraints through penalty functions if needed
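
For item 1, a simple min-max scaling of a few pilot evaluations shows whether the objectives live on comparable ranges; this is a generic sketch, not part of the module:

import numpy as np

# objectives: (n_designs, n_objectives) array from a handful of pilot evaluations
objectives = np.asarray(objectives, dtype=float)
obj_min = objectives.min(axis=0)
obj_max = objectives.max(axis=0)
scaled = (objectives - obj_min) / (obj_max - obj_min)  # each column now spans [0, 1]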

Performance Optimization

  1. Parallel Evaluation: Use batch_size > 1 for parallel processing
  2. GPU Acceleration: BoTorch supports GPU acceleration for large problems (see the device idiom after this list)
  3. Early Stopping: Monitor convergence and stop when improvement plateaus
  4. Memory Management: Clear GPU memory between iterations if needed
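
For item 2, the standard PyTorch idiom below moves the training data to a GPU when one is available; whether the ORENI wrapper exposes a device option is not covered here, so this only illustrates the general pattern:

import torch

# Use the GPU when available, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
dtype = torch.float64

# train_x / train_y: design and objective arrays assumed to exist already
train_x = torch.as_tensor(train_x, dtype=dtype, device=device)
train_y = torch.as_tensor(train_y, dtype=dtype, device=device)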

Troubleshooting

Common Issues

  1. Poor Convergence: Increase initial points or iterations
  2. Memory Issues: Reduce batch size or use CPU only
  3. Discrete Variable Errors: Check encoding/decoding mappings
  4. NaN Values: Verify the objective function returns valid values (see the check after this list)
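
For item 4, a quick finiteness check on the raw objective values (assumed here to be an (n_designs, n_objectives) array) usually pinpoints the offending designs:

import numpy as np

objectives = np.asarray(objectives, dtype=float)
if not np.isfinite(objectives).all():
    bad_rows = np.where(~np.isfinite(objectives).all(axis=1))[0]
    raise ValueError(f"Non-finite objective values at design indices: {bad_rows.tolist()}")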

Debug Mode

import logging
logging.getLogger('oreni').setLevel(logging.DEBUG)

# Run with debug output
uncertain_design, experience_design, uncertain_draws = botorch_multi_objective_optimizer(
    function=Simuls.compute_cost_functions,
    decision_space=Simuls.decision_space,
    uncertain_space=Simuls.uncertain_space,
    function_labels=function_labels,
    n_initial_points=5,
    n_optimization_iterations=10,
    debug=True,  # Enable debug output
)

Contributing

To extend the BoTorch optimizer:

  1. Add new acquisition functions in _setup_acquisition_function
  2. Implement new encoding schemes in DiscreteVariableEncoder
  3. Add convergence metrics and visualization tools
  4. Extend support for different variable types (categorical, ordinal)