
The function irace implements the Iterated Racing procedure for parameter tuning. It receives a configuration scenario and a parameter space to be tuned, and returns the best configurations found, namely, the elite configurations obtained in the last iteration. As a first step, it checks the correctness of the scenario using checkScenario() and recovers a previous execution if scenario$recoveryFile is set. An R data file log of the execution is created in scenario$logFile.

Usage

irace(scenario, parameters)

Arguments

scenario

(list())
Data structure containing irace settings. The data structure has to be the one returned by the function defaultScenario() or readScenario().

parameters

(list())
Data structure containing the parameter space definition. The data structure has to be similar to the one returned by the function readParameters().

Value

(data.frame)

A data frame with the set of best algorithm configurations found by irace (a short usage sketch follows the list of columns). The data frame has the following columns:

  • .ID. : Internal id of the candidate configuration.

  • Parameter names : One column per parameter name in parameters.

  • .PARENT. : Internal id of the parent candidate configuration.
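
For illustration only, a minimal sketch (not part of the returned value; removeConfigurationsMetaData() is also used in the Examples below) of how the metadata columns can be stripped to keep only the parameter values of the best configuration:

best_confs <- irace(scenario = scenario, parameters = parameters)
# Drop metadata columns such as .ID. and .PARENT., keeping one column per parameter:
removeConfigurationsMetaData(best_confs[1, , drop = FALSE])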

Additionally, this function saves an R data file containing an object called iraceResults. The path of the file is indicated in scenario$logFile. The iraceResults object is a list with the following structure (a short inspection sketch follows the list):

scenario

The scenario R object containing the irace options used for the execution. See defaultScenario for more information.

parameters

The parameters R object containing the description of the target algorithm parameters. See readParameters.

allConfigurations

The target algorithm configurations generated by irace. This object is a data frame; each row is a candidate configuration, the first column (.ID.) gives the internal identifier of the configuration, and the following columns correspond to the parameter values, each column named after the corresponding parameter in the parameters object. The final column (.PARENT.) is the identifier of the parent configuration, that is, the configuration from whose sampling model this configuration was generated.

allElites

A list with one element per iteration; each element contains the internal identifiers of the elite candidate configurations of the corresponding iteration (identifiers correspond to allConfigurations$.ID.).

iterationElites

A vector containing the internal identifier of the best candidate configuration of each iteration. The overall best configuration found corresponds to the last element of this vector.

experiments

A matrix with configurations as columns and instances as rows. Column names correspond to the internal identifier of the configuration (allConfigurations$.ID.).

experimentLog

A matrix with columns iteration, instance, configuration, time. This matrix contains the log of all the experiments that irace performs during its execution. The instance column refers to the index of the scenario$instancesList data frame. Time is saved ONLY when reported by the targetRunner.

softRestart

A logical vector that indicates whether a soft restart was performed in each iteration. If FALSE, no soft restart was performed.

state

A list that contains the state of irace; the recovery of an interrupted run is done using the information contained in this object.

testing

A list that contains the testing results. The elements of this list are: experiments, a matrix with the testing experiments of the selected configurations in the same format as described above, and seeds, a vector with the seeds used to execute each experiment.
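
As a minimal inspection sketch (assuming, for illustration, that scenario$logFile was set to "irace.Rdata"; the field names follow the structure described above):

load("irace.Rdata")            # restores the object 'iraceResults'
iraceResults$iterationElites   # best configuration ID of each iteration
# Elite configuration IDs of the last iteration:
iraceResults$allElites[[length(iraceResults$allElites)]]
# Mean cost of each configuration over the instances on which it was evaluated:
colMeans(iraceResults$experiments, na.rm = TRUE)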

Details

The execution of this function is reproducible under some conditions. See the FAQ section in the User Guide.

See also

irace.main()

a higher-level interface to irace().

irace.cmdline()

a command-line interface to irace().

readScenario()

for reading a configuration scenario from a file.

readParameters()

read the target algorithm parameters from a file.

defaultScenario()

returns the default scenario settings of irace.

checkScenario()

to check that the scenario is valid.

Author

Manuel López-Ibáñez and Jérémie Dubois-Lacoste

Examples

if (FALSE) {
# In general, there are three steps: 
scenario <- readScenario(filename = "scenario.txt")
parameters <- readParameters("parameters.txt")
irace(scenario = scenario, parameters = parameters)
}
#######################################################################
# This example illustrates how to tune the parameters of the simulated
# annealing algorithm (SANN) provided by the optim() function in the
# R base package.  The goal in this example is to optimize instances of
# the following family:
#      f(x) = lambda * f_rastrigin(x) + (1 - lambda) * f_rosenbrock(x)
# where lambda follows a normal distribution whose mean is 0.9 and
# standard deviation is 0.02. f_rastrigin and f_rosenbrock are the
# well-known Rastrigin and Rosenbrock benchmark functions (taken from
# the cmaes package). In this scenario, different instances are given
# by different values of lambda.
#######################################################################
## First we provide an implementation of the functions to be optimized:
f_rosenbrock <- function (x) {
  d <- length(x)
  z <- x + 1
  hz <- z[1:(d - 1L)]
  tz <- z[2L:d]
  sum(100 * (hz^2 - tz)^2 + (hz - 1)^2)
}
f_rastrigin <- function (x) {
  sum(x * x - 10 * cos(2 * pi * x) + 10)
}
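
## (Sanity check, not part of the original example) Both functions, as
## implemented above, attain their minimum value of 0 at the origin:
stopifnot(f_rastrigin(rep(0, 3)) == 0, f_rosenbrock(rep(0, 3)) == 0)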

## We generate 20 instances (in this case, weights):
weights <- rnorm(20, mean = 0.9, sd = 0.02)
  
## On this set of instances, we are interested in optimizing two
## parameters of the SANN algorithm: tmax and temp. We set up the
## parameter space as follows:
parameters_table <- '
  tmax "" i,log (1, 5000)
  temp "" r (0, 100)
  '
## We use the irace function readParameters to read this table:
parameters <- readParameters(text = parameters_table)

## Next, we define the function that will evaluate each candidate
## configuration on a single instance. For simplicity, we restrict to
## three-dimensional functions and we set the maximum number of
## iterations of SANN to 1000.
target_runner <- function(experiment, scenario)
{
    instance <- experiment$instance
    configuration <- experiment$configuration
  
    D <- 3
    par <- runif(D, min=-1, max=1)
    fn <- function(x) {
      weight <- instance
      return(weight * f_rastrigin(x) + (1 - weight) * f_rosenbrock(x))
    }
    # For reproducible results, we should use the random seed given by
    # experiment$seed to set the random seed of the target algorithm.
    res <- withr::with_seed(experiment$seed,
                     stats::optim(par,fn, method="SANN",
                                  control=list(maxit=1000
                                             , tmax = as.numeric(configuration[["tmax"]])
                                             , temp = as.numeric(configuration[["temp"]])
                                               )))
    ## This list may also contain:
    ## - 'time' if irace is called with 'maxTime'
    ## - 'error' is a string used to report an error
    ## - 'outputRaw' is a string used to report the raw output of calls to
    ##   an external program or function.
    ## - 'call' is a string used to report how target_runner called the
    ##   external program or function.
    return(list(cost = res$value))
}
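
## (Not part of the original example) A quick manual sanity check of
## target_runner on an arbitrary configuration; irace passes 'experiment'
## and 'scenario' in this format during tuning.
target_runner(experiment = list(instance = weights[1], seed = 42L,
                                configuration = data.frame(tmax = 10, temp = 10)),
              scenario = list())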

## We define a configuration scenario by setting targetRunner to the
## function defined above, instances to the first 10 random weights, and
## a maximum budget of 'maxExperiments' calls to targetRunner.
scenario <- list(targetRunner = target_runner,
                 instances = weights[1:10],
                 maxExperiments = 500,
                 # Do not create a logFile
                 logFile = "")

## We check that the scenario is valid. This will also try to execute
## target_runner.
checkIraceScenario(scenario, parameters = parameters)
#> # 2024-04-26 11:26:48 UTC: Checking scenario
#> ## irace scenario:
#> scenarioFile = "./scenario.txt"
#> execDir = "/home/runner/work/irace/irace/docs/reference"
#> parameterFile = "/home/runner/work/irace/irace/docs/reference/parameters.txt"
#> initConfigurations = NULL
#> configurationsFile = ""
#> logFile = ""
#> recoveryFile = ""
#> instances = c(0.871999129665565, 0.905106341096905, 0.851254727775609, 0.899888574265077, 0.912431054428304, 0.922968232120521, 0.863563646780467, 0.89505349395853, 0.895116007864432, 0.894345891023711)
#> trainInstancesDir = "./Instances"
#> trainInstancesFile = ""
#> sampleInstances = TRUE
#> testInstancesDir = ""
#> testInstancesFile = ""
#> testInstances = NULL
#> testNbElites = 1L
#> testIterationElites = FALSE
#> testType = "friedman"
#> firstTest = 5L
#> blockSize = 1L
#> eachTest = 1L
#> targetRunner = function (experiment, scenario) {    instance <- experiment$instance    configuration <- experiment$configuration    D <- 3    par <- runif(D, min = -1, max = 1)    fn <- function(x) {        weight <- instance        return(weight * f_rastrigin(x) + (1 - weight) * f_rosenbrock(x))    }    res <- withr::with_seed(experiment$seed, stats::optim(par,         fn, method = "SANN", control = list(maxit = 1000, tmax = as.numeric(configuration[["tmax"]]),             temp = as.numeric(configuration[["temp"]]))))    return(list(cost = res$value))}
#> targetRunnerLauncher = ""
#> targetCmdline = "{configurationID} {instanceID} {seed} {instance} {bound} {targetRunnerArgs}"
#> targetRunnerRetries = 0L
#> targetRunnerTimeout = 0L
#> targetRunnerData = ""
#> targetRunnerParallel = NULL
#> targetEvaluator = NULL
#> deterministic = FALSE
#> maxExperiments = 500L
#> minExperiments = NA_character_
#> maxTime = 0L
#> budgetEstimation = 0.05
#> minMeasurableTime = 0.01
#> parallel = 0L
#> loadBalancing = TRUE
#> mpi = FALSE
#> batchmode = "0"
#> digits = 4L
#> quiet = FALSE
#> debugLevel = 2L
#> seed = NA_character_
#> softRestart = TRUE
#> softRestartThreshold = 1e-04
#> elitist = TRUE
#> elitistNewInstances = 1L
#> elitistLimit = 2L
#> repairConfiguration = NULL
#> capping = FALSE
#> cappingType = "median"
#> boundType = "candidate"
#> boundMax = NULL
#> boundDigits = 0L
#> boundPar = 1L
#> boundAsTimeout = TRUE
#> postselection = 0
#> aclib = FALSE
#> nbIterations = 0L
#> nbExperimentsPerIteration = 0L
#> minNbSurvival = 0L
#> nbConfigurations = 0L
#> mu = 5L
#> confidence = 0.95
#> ## end of irace scenario
#> # checkIraceScenario(): 'parameters' provided by user. Parameter file '/home/runner/work/irace/irace/docs/reference/parameters.txt' will be ignored
#> # 2024-04-26 11:26:48 UTC: Checking target runner.
#> # Executing targetRunner ( 2 times)...
#> # targetRunner returned:
#> [[1]]
#> [[1]]$cost
#> [1] 2.78951767008983
#> 
#> [[1]]$time
#> [1] NA
#> 
#> 
#> [[2]]
#> [[2]]$cost
#> [1] 2.18287728389916
#> 
#> [[2]]$time
#> [1] NA
#> 
#> 
#> # 2024-04-26 11:26:48 UTC: Check successful.
#> [1] TRUE

# \donttest{
## We are now ready to launch irace. We do it by means of the irace
## function. The function will print information about its
## progress. This may require a few minutes, so it is not run by default.
tuned_confs <- irace(scenario = scenario, parameters = parameters)
#> # 2024-04-26 11:26:48 UTC: Initialization
#> # Elitist race
#> # Elitist new instances: 1
#> # Elitist limit: 2
#> # nbIterations: 3
#> # minNbSurvival: 3
#> # nbParameters: 2
#> # seed: 676976508
#> # confidence level: 0.95
#> # budget: 500
#> # mu: 5
#> # deterministic: FALSE
#> 
#> # 2024-04-26 11:26:48 UTC: Iteration 1 of 3
#> # experimentsUsedSoFar: 0
#> # remainingBudget: 500
#> # currentBudget: 166
#> # nbConfigurations: 27
#> # Markers:
#>      x No test is performed.
#>      c Configurations are discarded only due to capping.
#>      - The test is performed and some configurations are discarded.
#>      = The test is performed but no configuration is discarded.
#>      ! The test is performed and configurations could be discarded but elite configurations are preserved.
#>      . All alive configurations are elite and nothing is discarded.
#> 
#> +-+-----------+-----------+-----------+----------------+-----------+--------+-----+----+------+
#> | |   Instance|      Alive|       Best|       Mean best| Exp so far|  W time|  rho|KenW|  Qvar|
#> +-+-----------+-----------+-----------+----------------+-----------+--------+-----+----+------+
#> |x|          1|         27|         13|   0.01229370146|         27|00:00:00|   NA|  NA|    NA|
#> |x|          2|         27|         13|    0.1825625731|         54|00:00:00|+0.62|0.81|0.5451|
#> |x|          3|         27|         18|    0.3500133183|         81|00:00:00|+0.27|0.52|0.8261|
#> |x|          4|         27|         14|     1.222332778|        108|00:00:00|+0.25|0.44|0.7497|
#> |-|          5|         17|         14|     1.120996697|        135|00:00:00|-0.07|0.14|0.9610|
#> |=|          6|         17|          7|     1.609844915|        152|00:00:00|-0.03|0.14|0.9147|
#> +-+-----------+-----------+-----------+----------------+-----------+--------+-----+----+------+
#> Best-so-far configuration:           7    mean value:      1.609844915
#> Description of the best-so-far configuration:
#>   .ID. tmax   temp .PARENT.
#> 7    7   31 84.301       NA
#> 
#> # 2024-04-26 11:26:49 UTC: Elite configurations (first number is the configuration ID; listed from best to worst according to the sum of ranks):
#>    tmax    temp
#> 7    31 84.3010
#> 14    2 58.5176
#> 18    2 87.2418
#> # 2024-04-26 11:26:49 UTC: Iteration 2 of 3
#> # experimentsUsedSoFar: 152
#> # remainingBudget: 348
#> # currentBudget: 174
#> # nbConfigurations: 27
#> # Markers:
#>      x No test is performed.
#>      c Configurations are discarded only due to capping.
#>      - The test is performed and some configurations are discarded.
#>      = The test is performed but no configuration is discarded.
#>      ! The test is performed and configurations could be discarded but elite configurations are preserved.
#>      . All alive configurations are elite and nothing is discarded.
#> 
#> +-+-----------+-----------+-----------+----------------+-----------+--------+-----+----+------+
#> | |   Instance|      Alive|       Best|       Mean best| Exp so far|  W time|  rho|KenW|  Qvar|
#> +-+-----------+-----------+-----------+----------------+-----------+--------+-----+----+------+
#> |x|          7|         27|         18|    0.1397459563|         27|00:00:00|   NA|  NA|    NA|
#> |x|          4|         27|          7|    0.2632015792|         51|00:00:00|-0.31|0.35|1.1568|
#> |x|          5|         27|          7|    0.2132118798|         75|00:00:00|-0.00|0.33|0.9795|
#> |x|          3|         27|          7|    0.3153844057|         99|00:00:00|+0.06|0.29|0.8450|
#> |=|          2|         27|          7|    0.9119196168|        123|00:00:00|+0.07|0.26|0.8459|
#> |=|          6|         27|          7|    0.8762793440|        147|00:00:00|+0.08|0.23|0.8663|
#> |=|          1|         27|         18|     1.207716718|        171|00:00:00|+0.05|0.18|0.8863|
#> +-+-----------+-----------+-----------+----------------+-----------+--------+-----+----+------+
#> Best-so-far configuration:          18    mean value:      1.207716718
#> Description of the best-so-far configuration:
#>    .ID. tmax    temp .PARENT.
#> 18   18    2 87.2418       NA
#> 
#> # 2024-04-26 11:26:50 UTC: Elite configurations (first number is the configuration ID; listed from best to worst according to the sum of ranks):
#>    tmax    temp
#> 18    2 87.2418
#> 7    31 84.3010
#> 48   28 73.9659
#> # 2024-04-26 11:26:50 UTC: Iteration 3 of 3
#> # experimentsUsedSoFar: 323
#> # remainingBudget: 177
#> # currentBudget: 177
#> # nbConfigurations: 24
#> # Markers:
#>      x No test is performed.
#>      c Configurations are discarded only due to capping.
#>      - The test is performed and some configurations are discarded.
#>      = The test is performed but no configuration is discarded.
#>      ! The test is performed and configurations could be discarded but elite configurations are preserved.
#>      . All alive configurations are elite and nothing is discarded.
#> 
#> +-+-----------+-----------+-----------+----------------+-----------+--------+-----+----+------+
#> | |   Instance|      Alive|       Best|       Mean best| Exp so far|  W time|  rho|KenW|  Qvar|
#> +-+-----------+-----------+-----------+----------------+-----------+--------+-----+----+------+
#> |x|          8|         24|         62|    0.5031672765|         24|00:00:00|   NA|  NA|    NA|
#> |x|          3|         24|         18|    0.5521110709|         45|00:00:00|+0.33|0.67|0.6598|
#> |x|          2|         24|         18|    0.4627906408|         66|00:00:00|+0.15|0.44|0.7979|
#> |x|          1|         24|         18|    0.4213446493|         87|00:00:00|+0.22|0.41|0.7537|
#> |-|          7|          5|         18|    0.3650249107|        108|00:00:00|+0.45|0.56|0.7610|
#> |-|          6|          3|         18|    0.5615673739|        110|00:00:00|+0.10|0.25|0.6783|
#> |.|          4|          3|         18|     1.102587690|        110|00:00:00|-0.07|0.08|0.7493|
#> |.|          5|          3|         18|     1.136169458|        110|00:00:00|-0.12|0.02|0.7572|
#> +-+-----------+-----------+-----------+----------------+-----------+--------+-----+----+------+
#> Best-so-far configuration:          18    mean value:      1.136169458
#> Description of the best-so-far configuration:
#>    .ID. tmax    temp .PARENT.
#> 18   18    2 87.2418       NA
#> 
#> # 2024-04-26 11:26:51 UTC: Elite configurations (first number is the configuration ID; listed from best to worst according to the sum of ranks):
#>    tmax    temp
#> 18    2 87.2418
#> 48   28 73.9659
#> 7    31 84.3010
#> # 2024-04-26 11:26:51 UTC: Iteration 4 of 4
#> # experimentsUsedSoFar: 433
#> # remainingBudget: 67
#> # currentBudget: 67
#> # nbConfigurations: 10
#> # Markers:
#>      x No test is performed.
#>      c Configurations are discarded only due to capping.
#>      - The test is performed and some configurations are discarded.
#>      = The test is performed but no configuration is discarded.
#>      ! The test is performed and configurations could be discarded but elite configurations are preserved.
#>      . All alive configurations are elite and nothing is discarded.
#> 
#> +-+-----------+-----------+-----------+----------------+-----------+--------+-----+----+------+
#> | |   Instance|      Alive|       Best|       Mean best| Exp so far|  W time|  rho|KenW|  Qvar|
#> +-+-----------+-----------+-----------+----------------+-----------+--------+-----+----+------+
#> |x|          9|         10|          7|    0.3812092627|         10|00:00:00|   NA|  NA|    NA|
#> |x|          8|         10|         48|     1.131643754|         17|00:00:00|-0.77|0.12|1.4870|
#> |x|          4|         10|          7|     3.501741362|         24|00:00:00|-0.26|0.16|1.0805|
#> |x|          7|         10|          7|     2.724791966|         31|00:00:00|-0.16|0.13|1.0846|
#> |=|          5|         10|          7|     2.202480069|         38|00:00:00|+0.01|0.20|0.9667|
#> |=|          2|         10|          7|     2.385076801|         45|00:00:00|-0.01|0.16|0.9818|
#> |=|          6|         10|          7|     2.144076969|         52|00:00:00|+0.07|0.20|0.9332|
#> |=|          1|         10|         73|     2.141185907|         59|00:00:00|+0.07|0.18|0.9537|
#> |=|          3|         10|         73|     1.924656365|         66|00:00:00|+0.09|0.19|0.9187|
#> +-+-----------+-----------+-----------+----------------+-----------+--------+-----+----+------+
#> Best-so-far configuration:          73    mean value:      1.924656365
#> Description of the best-so-far configuration:
#>    .ID. tmax    temp .PARENT.
#> 73   73    2 86.2498       18
#> 
#> # 2024-04-26 11:26:51 UTC: Elite configurations (first number is the configuration ID; listed from best to worst according to the sum of ranks):
#>    tmax    temp
#> 73    2 86.2498
#> 7    31 84.3010
#> 18    2 87.2418
#> # 2024-04-26 11:26:51 UTC: Stopped because there is not enough budget left to race more than the minimum (3).
#> # You may either increase the budget or set 'minNbSurvival' to a lower value.
#> # Iteration: 5
#> # nbIterations: 5
#> # experimentsUsedSoFar: 499
#> # timeUsed: 0
#> # remainingBudget: 1
#> # currentBudget: 1
#> # number of elites: 3
#> # nbConfigurations: 2
#> # Total CPU user time: 3.464, CPU sys time: 0.008, Wall-clock time: 3.473

## We can print the best configurations found by irace as follows:
configurations.print(tuned_confs)
#>    tmax    temp
#> 73    2 86.2498
#> 7    31 84.3010
#> 18    2 87.2418

## We can evaluate the quality of the best configuration found by
## irace versus the default configuration of the SANN algorithm on
## the other 10 instances previously generated.
test_index <- 11:20
test_seeds <- sample.int(2147483647L, size = length(test_index), replace = TRUE)
test <- function(configuration)
{
  res <- lapply(seq_along(test_index),
                function(x) target_runner(
                              experiment = list(instance = weights[test_index[x]],
                                                seed = test_seeds[x],
                                                configuration = configuration),
                              scenario = scenario))
  return (sapply(res, getElement, name = "cost"))
}
## To do so, first we apply the default configuration of the SANN
## algorithm to these instances:
default <- test(data.frame(tmax=10, temp=10))

## We extract and apply the winning configuration found by irace
## to these instances:
tuned <- test(removeConfigurationsMetaData(tuned_confs[1,]))

## Finally, we can compare using a boxplot the quality obtained with the
## default parametrization of SANN and the quality obtained with the
## best configuration found by irace.
boxplot(list(default = default, tuned = tuned))
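
## (Not part of the original example) Since both configurations were run on the
## same test instances and seeds, a paired test could complement the boxplot:
wilcox.test(default, tuned, paired = TRUE)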

# }