---
title: "Add a new Tuner"
output: rmarkdown::html_vignette
vignette: >
  %\VignetteIndexEntry{Add a new Tuner}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
---


# Adding new Tuners

In this vignette, we show how to implement a custom tuner for `mlr3tuning`.
The main task of a tuner is to iteratively propose new hyperparameter configurations that we want to evaluate for a given task, learner and resampling strategy.
The second task is to decide which configuration should be returned as a tuning result - usually it is the configuration that led to the best observed performance value.
If you want to implement your own tuner, you have to implement an R6 class that offers an [`.optimize`](#tuner-optimize) method implementing the iterative proposal, and you are free to overwrite [`.assign_result`](#tuner-add-result) if you want to deviate from the aforementioned default process of determining the result.

Before you start with the implementation make yourself familiar with the main R6-Objects in `bbotk` (Black-Box Optimization Toolkit).
This package not only provides basic black-box optimization algorithms, but also the objects that represent the optimization problem (`OptimInstance`) and the log of all evaluated configurations (`Archive`).

There are two ways to implement a new tuner:
a) If your new tuner can be applied to any kind of optimization problem, it should be implemented as an `Optimizer`.
Any `Optimizer` can be easily transformed into a `Tuner`.
b) If the new custom tuner is only usable for hyperparameter tuning, for example because it needs to access the task, learner or resampling objects, it should be directly implemented in `mlr3tuning` as a `Tuner`.

## Adding a new Tuner

This is a summary of steps for adding a new tuner.
The fifth step is only required if the new tuner is added via `bbotk`.

1. Check that the tuner does not already exist as an [`Optimizer`](https://github.com/mlr-org/bbotk/tree/master/R) or [`Tuner`](https://github.com/mlr-org/mlr3tuning/tree/master/R) in the GitHub repositories.
1. Use one of the existing optimizers / tuners as a [template](#tuner-template).
1. Overwrite the [`.optimize`](#tuner-optimize) private method of the optimizer / tuner.
1. Optionally, overwrite the default [`.assign_result`](#tuner-add-result) private method.
1. Use the [`mlr3tuning::TunerBatchFromOptimizerBatch`](#tuner-from-optimizer) class to transform the `Optimizer` to a `Tuner`.
1. Add [unit tests](#tuner-test) for the tuner and optionally for the optimizer.
1. Open a new pull request for the [`Tuner`](https://github.com/mlr-org/mlr3tuning/pulls) and optionally a second one for the [`Optimizer`](https://github.com/mlr-org/bbotk/pulls).

## Template {#tuner-template}

If the new custom tuner is implemented via `bbotk`, use one of the existing optimizers as a template, e.g. [`bbotk::OptimizerRandomSearch`](https://github.com/mlr-org/bbotk/blob/master/R/OptimizerBatchRandomSearch.R).
There are currently only two tuners that are not based on an `Optimizer`: [`mlr3hyperband::TunerHyperband`](https://github.com/mlr-org/mlr3hyperband/blob/master/R/TunerBatchHyperband.R) and [`mlr3tuning::TunerIrace`](https://github.com/mlr-org/mlr3tuning/blob/master/R/TunerBatchIrace.R).
Both are rather complex, but you can still use the documentation and class structure as a template.
The following steps are identical for optimizers and tuners.

Rewrite the meta information in the documentation and create a new class name.
Scientific sources can be added in `R/bibentries.R`; they appear under `@source` in the documentation.
The example and dictionary sections of the documentation are auto-generated based on the `@templateVar id <tuner_id>`.
Change the parameter set of the optimizer / tuner and document the parameters under `@section Parameters`.
Do not forget to change `mlr_optimizers$add()` / `mlr_tuners$add()` in the last line, which adds the optimizer / tuner to the dictionary.
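A rough sketch of such a file's skeleton, using only the elements mentioned above, could look as follows; the class name, the id `my_search` and the bibentry key are placeholders, not real identifiers, and `format_bib()` is the `mlr3misc` helper that renders entries from `R/bibentries.R`:

```r
#' @title Hyperparameter Tuning with My Search
#'
#' @name mlr_tuners_my_search
#'
#' @description
#' Subclass that implements my search strategy.
#'
#' @templateVar id my_search
#'
#' @source
#' `r format_bib("my_reference_2024")`
#'
#' @section Parameters:
#' \describe{
#' \item{`batch_size`}{`integer(1)`\cr
#'   Number of configurations evaluated per batch.}
#' }
#'
#' @export
TunerBatchMySearch = R6::R6Class("TunerBatchMySearch"
  # inherit, public and private members as developed in the following sections
)

mlr_tuners$add("my_search", TunerBatchMySearch)
```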

## Optimize method {#tuner-optimize}

The `$.optimize()` private method is the main part of the tuner.
It takes an instance, proposes new points and calls the `$eval_batch()` method of the instance to evaluate them.
Here you can go two ways: implement the iterative process yourself or call an external optimization function that resides in another package.

### Writing a custom iteration

Usually, the proposal and evaluation are done in a `repeat`-loop, which you have to implement; a minimal sketch is shown after the following list.
Please consider the following points:

- You can evaluate one or multiple points per iteration.
- You don't have to care about termination, as `$eval_batch()` won't allow more evaluations than allowed by the `bbotk::Terminator`. This implies that code after the `repeat`-loop will not be executed.
- You don't have to care about keeping track of the evaluations as every evaluation is automatically stored in `inst$archive`.
- If you want to log additional information for each evaluation of the `Objective` in the `Archive` you can simply add columns to the `data.table` object that is passed to `$eval_batch()`.
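The following is a minimal sketch of such a loop, loosely modeled on `bbotk::OptimizerBatchRandomSearch`; the class name is a placeholder and the random proposal is illustrative, your tuner would substitute its own proposal logic:

```r
library(R6)
library(bbotk)
library(paradox)

OptimizerBatchMySearch = R6Class("OptimizerBatchMySearch",
  inherit = OptimizerBatch,
  public = list(
    initialize = function() {
      param_set = ps(batch_size = p_int(lower = 1L, default = 1L))
      param_set$values = list(batch_size = 1L)
      super$initialize(
        id = "my_search",
        param_set = param_set,
        param_classes = c("ParamLgl", "ParamInt", "ParamDbl", "ParamFct"),
        properties = c("dependencies", "single-crit", "multi-crit")
      )
    }
  ),
  private = list(
    .optimize = function(inst) {
      batch_size = self$param_set$values$batch_size
      repeat {
        # propose new points; a real tuner would use its own proposal logic
        design = generate_design_random(inst$search_space, batch_size)
        # evaluate the batch; results are stored in inst$archive and the
        # Terminator stops the loop via a condition raised in eval_batch()
        inst$eval_batch(design$data)
      }
    }
  )
)
```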

### Calling an external optimization function

Optimization functions from external packages usually take an objective function as an argument.
In this case, you can pass `inst$objective_function`, which internally calls `$eval_batch()`.
Check out [`OptimizerGenSA`](https://github.com/mlr-org/bbotk/blob/master/R/OptimizerBatchGenSA.R) for an example.
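A sketch of this pattern, simplified compared to the real `OptimizerGenSA` (which also forwards its parameter set to `GenSA()`); the class name is again a placeholder:

```r
library(R6)
library(bbotk)
library(paradox)

OptimizerBatchMyGenSA = R6Class("OptimizerBatchMyGenSA",
  inherit = OptimizerBatch,
  public = list(
    initialize = function() {
      super$initialize(
        id = "my_gensa",
        param_set = ps(),
        param_classes = "ParamDbl",  # GenSA only handles numeric parameters
        properties = "single-crit",
        packages = "GenSA"
      )
    }
  ),
  private = list(
    .optimize = function(inst) {
      # hand the instance's objective wrapper to the external optimizer;
      # every call to the wrapper internally triggers inst$eval_batch()
      GenSA::GenSA(
        par = NULL,
        fn = inst$objective_function,
        lower = inst$search_space$lower,
        upper = inst$search_space$upper
      )
    }
  )
)
```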

## Assign result method {#tuner-add-result}

The default `$.assign_result()` private method simply obtains the best performing result from the archive.
The default method can be overwritten if the new tuner determines the result of the optimization in a different way.
The new function must call the `$assign_result()` method of the instance to write the final result to the instance.
See [`mlr3tuning::TunerIrace`](https://github.com/mlr-org/mlr3tuning/blob/master/R/TunerBatchIrace.R) for an implementation of `$.assign_result()`.
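As a hedged illustration, an overwritten `$.assign_result()` placed in the `private` list of the tuner class could look like this; the custom rule here just reproduces the default behaviour of taking the archive's best point, and the column handling assumes a single-criterion batch instance:

```r
.assign_result = function(inst) {
  # a custom rule would pick a different row of the archive here
  res = inst$archive$best()
  xdt = res[, inst$search_space$ids(), with = FALSE]
  y = unlist(res[, inst$archive$cols_y, with = FALSE])
  # write the final result back to the instance
  inst$assign_result(xdt, y)
}
```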

## Transform optimizer to tuner {#tuner-from-optimizer}

This step is only needed if you implement the tuner via `bbotk`.
The `mlr3tuning::TunerBatchFromOptimizerBatch` class transforms an `Optimizer` into a `Tuner`.
Just add the `Optimizer` to the `optimizer` field.
See [`mlr3tuning::TunerRandomSearch`](https://github.com/mlr-org/mlr3tuning/blob/master/R/TunerBatchRandomSearch.R) for an example.
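Following that pattern, a wrapper for the optimizer sketched earlier could look like this (class name and id remain placeholders):

```r
library(R6)
library(mlr3tuning)

TunerBatchMySearch = R6Class("TunerBatchMySearch",
  inherit = TunerBatchFromOptimizerBatch,
  public = list(
    initialize = function() {
      # hand the Optimizer to the optimizer field of the wrapper
      super$initialize(
        optimizer = OptimizerBatchMySearch$new()
      )
    }
  )
)

# register the tuner so it is available via tnr("my_search")
mlr_tuners$add("my_search", TunerBatchMySearch)
```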

## Add unit tests {#tuner-test}

The new custom tuner should be thoroughly tested with unit tests.
`Tuner`s can be tested with the `test_tuner()` helper function.
If you added the `Tuner` via an `Optimizer`, you should additionally test the `Optimizer` with the `test_optimizer()` helper function.
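For example, a testthat sketch, assuming the tuner and optimizer were registered under the key `my_search` and the helpers are loaded from the packages' `tests/testthat` setup; the `term_evals` argument is an assumption about the helpers' signature:

```r
library(testthat)

test_that("TunerBatchMySearch works", {
  # helper from mlr3tuning's test setup
  test_tuner("my_search", term_evals = 4L)
})

test_that("OptimizerBatchMySearch works", {
  # helper from bbotk's test setup
  test_optimizer("my_search", term_evals = 4L)
})
```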