
Nevergrad: OnePlusOne Optimiser addition #576

Open
wants to merge 4 commits into base: main
Conversation

@gulshan-123 commented Apr 2, 2025

I have written a basic structure of OnePlusOne optimisation wrapper.
Partially fixes #560

@gulshan-123 (Author) commented Apr 2, 2025

@janosg
I have some doubts:

  • When I print the lower and upper bounds and no bounds are provided, I get an array of size 10 filled with -10, regardless of the size of x0. This causes an error in Nevergrad's set_bound function.
  • The OnePlusOne function runs for exactly budget iterations (a required Nevergrad parameter), irrespective of the function values; there is no stopping condition by default.
  • However, it has a boolean flag for early stopping. How can I configure it so that both MAX_ITER and MAX_EVALUATION are set, while still allowing it to stop early if needed?

```python
def _solve_internal_problem(
    self, problem: InternalOptimizationProblem, x0: NDArray[np.float64]
) -> InternalOptimizeResult:
    print(problem.bounds)
```
Here I have tried to print the bounds.

@janosg (Member) commented Apr 2, 2025

  • When I print the lower and upper bounds and no bounds are provided, I get an array of size 10 filled with -10, regardless of the size of x0. This causes an error in Nevergrad's set_bound function.

This does not contain enough information to understand your problem. We need at least the following:

  • What did you do?
  • What did you expect to happen?
  • What actually happened?

Ideally you follow this blogpost when describing problems you encounter.

  • The OnePlusOne function runs for exactly budget iterations (a required Nevergrad parameter), irrespective of the function values; there is no stopping condition by default.

What is your question or doubt? If I understand correctly, this is how most global optimizers behave.

  • However, it has a boolean flag for early stopping. How can I configure it so that both MAX_ITER and MAX_EVALUATION are set, while still allowing it to stop early if needed?

Algorithms in optimagic can have as many options as you need, and the options can have any type, so it should not be a problem to allow complete configurability of the algorithm.
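As a rough illustration of this point, stopping and early-stopping settings could be exposed as plain fields on an algorithm options dataclass. This is a hypothetical sketch: the names stopping_maxiter, stopping_maxfun and enable_early_stopping are illustrative, not optimagic's or Nevergrad's actual API.

```python
# Hypothetical sketch: exposing stopping options as plain dataclass fields.
# All field names are illustrative, not the actual optimagic/Nevergrad API.
from dataclasses import dataclass


@dataclass(frozen=True)
class NevergradOnePlusOneOptions:
    stopping_maxiter: int = 1_000        # would map to Nevergrad's budget
    stopping_maxfun: int = 1_000_000     # cap on function evaluations
    enable_early_stopping: bool = False  # Nevergrad's early-stopping flag


# A user could then customize any subset of the options:
options = NevergradOnePlusOneOptions(stopping_maxiter=500, enable_early_stopping=True)
```

Because the options are ordinary typed fields with defaults, both limits can be set simultaneously while the early-stopping flag stays independently configurable.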


@gulshan-123 (Author)
Hi @janosg, I've corrected the error from my previous message and added the OnePlusOne wrapper; it's ready for your review. Nevergrad has a setting to control the maximum optimization time: any ideas for a more descriptive name? I'm still working on adding constraints.

@janosg (Member) commented Apr 3, 2025

Hi @gulshan-123, thanks for the PR. I'll do a thorough review once your wrapper is feature complete. So here are just some quick comments to help you get there:

  • The OnePlusOne algorithm seems to support many more tuning parameters than you currently expose in the wrapper, for example noise_handling, mutation and crossover. In optimagic we always want complete wrappers that allow full customization of the wrapped algorithm.
  • You set is_global=False. What is your source for that? I am not familiar with the algorithm, but from everything I see it looks like a genetic algorithm, and those are usually global.
  • We need documentation for the algorithm, including sources such as the paper that introduced it.
  • You currently have disable_history=False, but you rely on the algorithm's parallelization. This will not work, and there are two ways around it:
    1. Disable the history and find another way to collect the optimizer history. This is usually hard and not desirable for maintenance reasons.
    2. Use problem.batch_fun for the parallelization. This should be possible if you use the lower-level ask-and-tell interface instead of their minimize interface: call ask multiple times to create a batch of n_cores candidate parameters, then evaluate the batch in parallel with problem.batch_fun, which preserves history collection.

The documentation is an essential part of the PR and needs to convince us that you did a thorough job in exploring all the tuning parameters of the algorithm.
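The batched ask-and-tell pattern from option 2 could be sketched as follows. This is a schematic only: the stub optimizer stands in for Nevergrad's ask/tell interface, and the local batch_fun stands in for optimagic's problem.batch_fun (which would evaluate the batch in parallel and record history).

```python
# Schematic of option 2: route all function evaluations through one batched
# call per iteration so that history collection is preserved.
# StubOptimizer is a random-search stand-in for Nevergrad's ask/tell API.
import random


class StubOptimizer:
    """Minimal stand-in for an ask-and-tell optimizer."""

    def __init__(self, dim):
        self.dim = dim
        self.best = None  # (params, value) of the best candidate seen

    def ask(self):
        # Propose a candidate; a real optimizer would mutate/adapt here.
        return [random.uniform(-5, 5) for _ in range(self.dim)]

    def tell(self, candidate, value):
        # Report the evaluation result back to the optimizer.
        if self.best is None or value < self.best[1]:
            self.best = (candidate, value)


def batch_fun(batch):
    # Stand-in for problem.batch_fun: evaluates a whole batch at once
    # (in optimagic this call would run in parallel and record history).
    return [sum(x * x for x in params) for params in batch]  # sphere function


def minimize_batched(optimizer, budget, n_cores):
    evals = 0
    while evals < budget:
        # Ask n_cores times to build a batch of candidates ...
        batch = [optimizer.ask() for _ in range(n_cores)]
        # ... evaluate them with a single batched, history-aware call ...
        values = batch_fun(batch)
        # ... and feed every result back to the optimizer.
        for candidate, value in zip(batch, values):
            optimizer.tell(candidate, value)
        evals += len(batch)
    return optimizer.best


random.seed(0)
best_params, best_value = minimize_batched(StubOptimizer(dim=3), budget=200, n_cores=4)
```

The key point is that the wrapper, not the wrapped library, drives the loop, so every evaluation passes through the batched call and nothing escapes history collection.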

@gulshan-123 (Author) commented Apr 6, 2025

  • You set is_global=False. What is your source for that? I am not familiar with the algorithm, but from everything I see it looks like a genetic algorithm, and those are usually global.

OnePlusOne, as implemented in Nevergrad, is a variant of the (1+1)-Evolution Strategy. It maintains a single current solution a and generates a mutated candidate a + da, accepting the new point only if it improves the objective value. Thus it should be local.
I also found an illustration of this (the Rastrigin function, where (1+1) gets trapped in a local optimum) in the Medium article EE-(1+1).
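The local, hill-climbing character of the strategy is easy to see in a minimal sketch of a (1+1)-Evolution Strategy (this is a textbook version for illustration, not Nevergrad's actual implementation, which adds adaptive step sizes and other mechanisms):

```python
# Minimal (1+1)-Evolution Strategy sketch: one parent, one Gaussian-mutated
# child per iteration, and the child replaces the parent only if it improves
# the objective. Because it never accepts worse points, it behaves like a
# stochastic hill climber and can get trapped in local optima.
import random


def one_plus_one_es(fun, x0, sigma=0.5, budget=500, seed=0):
    rng = random.Random(seed)
    parent = list(x0)
    f_parent = fun(parent)
    for _ in range(budget):
        # Mutate the single current solution: a + da with Gaussian da.
        child = [xi + rng.gauss(0.0, sigma) for xi in parent]
        f_child = fun(child)
        if f_child <= f_parent:  # accept only non-worsening moves
            parent, f_parent = child, f_child
    return parent, f_parent


def sphere(x):
    return sum(xi * xi for xi in x)


best, f_best = one_plus_one_es(sphere, x0=[3.0, -2.0])
```

On a unimodal function like the sphere this converges toward the optimum, but on a multimodal function like Rastrigin the same accept-only-improvements rule is what traps it in a local basin.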

2. Use problem.batch_fun for the parallelization. This should be possible if you use the lower-level ask-and-tell interface instead of their minimize interface: call ask multiple times to create a batch of n_cores candidate parameters, then evaluate the batch in parallel with problem.batch_fun, which preserves history collection.

I will try this.

Successfully merging this pull request may close these issues.

Add wrappers for Nevergrad