This unit focuses on quasi-random search.
### Why use quasi-random search?
Quasi-random search (based on low-discrepancy sequences) is our preference over fancier black-box optimization tools when used as part of an iterative tuning process intended to maximize insight into the tuning problem (what we refer to as the "exploration phase"). Bayesian optimization and similar tools are more appropriate for the exploitation phase. Quasi-random search based on randomly shifted low-discrepancy sequences can be thought of as "jittered, shuffled grid search", since it uniformly, but randomly, explores a given search space and spreads out the search points more than random search.
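The "jittered, shuffled grid" idea can be sketched in a few lines. The following is an illustrative, dependency-free implementation of a randomly shifted Halton sequence (one common low-discrepancy sequence); the function names are our own, and a production implementation (e.g. Vizier's) would also handle scrambling and search-space scaling:

```python
import random

def radical_inverse(i, base):
    """Van der Corput radical inverse: reflect the base-`base` digits
    of integer i around the decimal point to get a value in [0, 1)."""
    inv, denom = 0.0, 1.0
    while i > 0:
        i, digit = divmod(i, base)
        denom *= base
        inv += digit / denom
    return inv

def shifted_halton(n, bases=(2, 3), seed=0):
    """First n points of a Halton sequence with a random shift (mod 1)
    applied per dimension, so repeated studies are jittered differently."""
    rng = random.Random(seed)
    shifts = [rng.random() for _ in bases]
    return [
        [(radical_inverse(i, b) + s) % 1.0 for b, s in zip(bases, shifts)]
        for i in range(1, n + 1)
    ]

points = shifted_halton(8)  # 8 well-spread points in the unit square
```

Each dimension uses a distinct prime base, which is what spreads points out more evenly than i.i.d. uniform sampling.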
The advantages of quasi-random search over more sophisticated black-box optimization tools (e.g. Bayesian optimization, evolutionary algorithms) include:
- Sampling the search space non-adaptively makes it possible to change the tuning objective in post hoc analysis without rerunning experiments. For example, we usually want to find the best trial in terms of validation error achieved at any point in training. However, the non-adaptive nature of quasi-random search makes it possible to find the best trial based on final validation error, training error, or some alternative evaluation metric without rerunning any experiments.
- Quasi-random search behaves in a consistent and statistically reproducible way. It should be possible to reproduce a study from six months ago even if the implementation of the search algorithm changes, as long as it maintains the same uniformity properties. If using sophisticated Bayesian optimization software, the implementation might change in an important way between versions, making it much harder to reproduce an old search. It isn't always possible to roll back to an old implementation (e.g. if the optimization tool is run as a service).
- Its uniform exploration of the search space makes it easier to reason about the results and what they might suggest about the search space. For example, if the best point in the traversal of quasi-random search is at the boundary of the search space, this is a good (but not foolproof) signal that the search space bounds should be changed. However, an adaptive black-box optimization algorithm might have neglected the middle of the search space because of some unlucky early trials even if it happens to contain equally good points, since it is this exact sort of non-uniformity that a good optimization algorithm needs to employ to speed up the search.
- Running different numbers of trials in parallel versus sequentially does not produce statistically different results when using quasi-random search (or other non-adaptive search algorithms), unlike with adaptive algorithms.
- More sophisticated search algorithms may not always handle infeasible points correctly, especially if they aren't designed with neural network hyperparameter tuning in mind.
- Quasi-random search is simple and works especially well when many tuning trials are running in parallel. Anecdotally[^1], it is very hard for an adaptive algorithm to beat a quasi-random search that has 2X its budget, especially when many trials need to be run in parallel (and thus there are very few chances to make use of previous trial results when launching new trials). Without expertise in Bayesian optimization and other advanced black-box optimization methods, you might not achieve the benefits they are, in principle, capable of providing. It is hard to benchmark advanced black-box optimization algorithms in realistic deep learning tuning conditions. They are a very active area of current research, and the more sophisticated algorithms come with their own pitfalls for inexperienced users. Experts in these methods are able to get good results, but in high-parallelism conditions the search space and budget tend to matter a lot more.
That said, if your computational resources only allow a small number of trials to run in parallel and you can afford to run many trials in sequence, Bayesian optimization becomes much more attractive despite making your tuning results harder to interpret.
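In practice, the quasi-random (or random) points live in the unit cube and must be mapped onto the actual search space, typically with a log scale for parameters like the learning rate. A minimal sketch of that mapping follows; the parameter names, bounds, and the `(low, high, scale)` convention are illustrative assumptions, not any particular tool's API:

```python
import math

def scale_trial(u, space):
    """Map one unit-cube point u to concrete hyperparameter values.
    `space` maps each name to (low, high, scale), where scale is
    'linear' or 'log'. Relies on dict insertion order (Python 3.7+)."""
    trial = {}
    for x, (name, (lo, hi, scale)) in zip(u, space.items()):
        if scale == "log":
            # Interpolate in log space so each decade is sampled evenly.
            trial[name] = math.exp(math.log(lo) + x * (math.log(hi) - math.log(lo)))
        else:
            trial[name] = lo + x * (hi - lo)
    return trial

# Hypothetical search space for illustration only.
space = {
    "learning_rate": (1e-5, 1e-1, "log"),
    "weight_decay": (1e-6, 1e-2, "log"),
    "dropout": (0.0, 0.5, "linear"),
}
trial = scale_trial([0.5, 0.5, 0.5], space)
```

The midpoint of the unit cube then lands at the geometric mean of log-scaled bounds (here, a learning rate of 1e-3) and the arithmetic mean of linear ones.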
### Where can I find an implementation of quasi-random search?
Open-Source Vizier has an implementation of quasi-random search. Set `algorithm="QUASI_RANDOM_SEARCH"` in this Vizier usage example.
An alternative implementation exists in this hyperparameter sweeps example. Both of these implementations generate a Halton sequence for a given search space (intended to implement a shifted, scrambled Halton sequence as recommended in *Critical Hyper-Parameters: No Random, No Cry*).
If a quasi-random search algorithm based on a low-discrepancy sequence is not available, it is possible to substitute pseudorandom uniform search instead, although this is likely to be slightly less efficient. In 1-2 dimensions, grid search is also acceptable, although not in higher dimensions. (See Bergstra & Bengio, 2012.)
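Both fallbacks are trivial to implement, which is part of their appeal. The sketch below shows the pseudorandom-uniform substitute and a 1-2 dimensional grid (both over the unit cube, to be scaled to the real search space); the function names are our own:

```python
import itertools
import random

def uniform_search(n, num_dims, seed=0):
    """Fallback when no low-discrepancy implementation is available:
    n i.i.d. pseudorandom uniform points in the unit cube."""
    rng = random.Random(seed)
    return [[rng.random() for _ in range(num_dims)] for _ in range(n)]

def grid_search(points_per_dim, num_dims):
    """Acceptable in 1-2 dimensions only: a regular grid of cell midpoints.
    The trial count grows as points_per_dim ** num_dims, which is why this
    does not scale to higher dimensions."""
    ticks = [(i + 0.5) / points_per_dim for i in range(points_per_dim)]
    return [list(p) for p in itertools.product(ticks, repeat=num_dims)]
```

Note that unlike a low-discrepancy sequence, i.i.d. uniform points can cluster and leave gaps, which is the source of the slight efficiency loss mentioned above.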
### How many trials are needed to get good results with quasi-random search?
There is no way to determine in general how many trials are needed to get good results with quasi-random search, but you can look at specific examples. As Figure 3 shows, the number of trials in a study can have a substantial impact on the results:
Figure 3: ResNet-50 tuned on ImageNet with 100 trials. Different amounts of tuning budget were simulated using bootstrapping. Box plots of the best performance for each trial budget are plotted.
Notice the following about Figure 3:
- The interquartile ranges when 6 trials were sampled are much larger than when 20 trials were sampled.
- Even with 20 trials, the difference between especially lucky and unlucky studies is likely larger than the typical variation between retrains of this model on different random seeds, with fixed hyperparameters, which for this workload might be around ±0.1% on a validation error rate of ~23%.
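The bootstrapping procedure behind a plot like Figure 3 can be sketched as follows: repeatedly resample k trial outcomes (with replacement) from a completed study and record the best of each simulated k-trial study. The trial errors below are synthetic stand-ins, not the Figure 3 data:

```python
import random

def bootstrap_best(errors, budget, num_resamples=1000, seed=0):
    """Simulate studies of `budget` trials by resampling a completed
    study's validation errors with replacement; return the best
    (lowest) error achieved in each simulated study."""
    rng = random.Random(seed)
    return [min(rng.choices(errors, k=budget)) for _ in range(num_resamples)]

# Synthetic stand-in for 100 completed trials (illustrative only).
rng = random.Random(1)
trial_errors = [0.23 + abs(rng.gauss(0, 0.02)) for _ in range(100)]

best_of_6 = bootstrap_best(trial_errors, budget=6)
best_of_20 = bootstrap_best(trial_errors, budget=20)
```

Plotting the distributions of `best_of_6` and `best_of_20` as box plots reproduces the qualitative picture above: larger budgets shift the best-trial distribution down and tighten its spread.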

[^1]: Ben Recht and Kevin Jamieson pointed out how strong 2X-budget random search is as a baseline (the Hyperband paper makes similar arguments), but it is certainly possible to find search spaces and problems where state-of-the-art Bayesian optimization techniques crush random search with 2X the budget. However, in our experience, beating 2X-budget random search gets much harder in the high-parallelism regime, since Bayesian optimization has no opportunity to observe the results of previous trials.