Cutting Plane Feasibility Algorithm

```python
def cutting_plane_feas(omega, space, options=Options()):
    for niter in range(options.max_iters):
        cut = omega.assess_feas(space.xc())  # query the oracle at space.xc()
        if cut is None:  # feasible solution obtained
            return space.xc(), niter
        status = space.update_bias_cut(cut)  # update space
        if status != CutStatus.Success or space.tsq() < options.tolerance:
            return None, niter
    return None, options.max_iters
```

This code defines a function called cutting_plane_feas, which implements a cutting-plane algorithm for finding a feasible point in a convex set. In other words, it solves a feasibility problem: finding a point that satisfies all of the given constraints.

The function takes three inputs:

  1. omega: An oracle that can assess the feasibility of a point and provide cutting planes.
  2. space: A search space object that represents the area where we're looking for a solution.
  3. options: Optional settings for the algorithm, like the maximum number of iterations.

The output of this function is a tuple containing:

  1. Either a feasible solution (as an array) or None if no solution is found.
  2. The number of iterations the algorithm performed.

The algorithm works by repeatedly querying the oracle and updating the search space. Here's how it achieves its purpose:

  1. It starts a loop that runs for a maximum number of iterations (specified in the options).
  2. In each iteration, it asks the oracle to assess the current point in the search space.
  3. If the oracle says the point is feasible, the algorithm returns this point as the solution.
  4. If the point is not feasible, the oracle provides a "cut" (a way to divide the search space).
  5. The algorithm then updates the search space based on this cut.
  6. If the update is successful and the search space is still large enough, it continues to the next iteration.
  7. If the update fails or the search space becomes too small, it stops and returns None (no solution found).

The main logic flow is a loop of asking the oracle about a point, then either returning that point as a solution or updating the search space to look somewhere else. This process continues until a solution is found, the maximum iterations are reached, or the search space becomes too small.

An important concept here is the "cutting plane". This is like drawing a line through the search space that separates the area where solutions might be from the area where they definitely aren't. By repeatedly adding these cuts, the algorithm narrows down the search space until it either finds a solution or determines that no solution exists within the given constraints.
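Concretely (and this pairing is an assumption based on common cutting-plane conventions rather than something stated in the snippet), the cut returned by the oracle can be thought of as a pair $(g, \beta)$ asserting that every feasible point $x$ satisfies

$$
g^\top (x - x_c) + \beta \le 0,
$$

where $x_c$ is the point that was queried. The search space then keeps only its intersection with this half-space.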

This algorithm is particularly useful for solving complex optimization problems where it's hard to directly find a solution, but it's easier to check if a given point is a solution and, if not, determine which direction to look next.
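As a sketch of how such an oracle might look in practice, the hypothetical example below handles the feasibility problem A x <= b. The class name and constraint data are illustrative; only the assess_feas method and the (g, beta) cut convention are taken from the snippet above.

```python
import numpy as np

class LinearFeasOracle:
    """Hypothetical oracle for the feasibility problem A x <= b (illustrative only)."""

    def __init__(self, A, b):
        self.A = np.asarray(A, dtype=float)
        self.b = np.asarray(b, dtype=float)

    def assess_feas(self, x):
        """Return None if x satisfies A x <= b, otherwise a cut (g, beta)."""
        for a_i, b_i in zip(self.A, self.b):
            violation = a_i @ x - b_i
            if violation > 0.0:
                # The violated row defines a separating half-space:
                # a_i @ (z - x) + violation <= 0 holds for every feasible z.
                return a_i, violation
        return None  # every constraint holds, so x is feasible
```

Given a search-space object from the library (for example an ellipsoid centered at an initial guess), the driver call would then be something like `x_feas, niter = cutting_plane_feas(LinearFeasOracle(A, b), space)`.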

Cutting-plane Optimization Function

This code defines a function called cutting_plane_optim that implements a cutting-plane method for solving convex optimization problems. The purpose of this function is to find the best solution to an optimization problem within a given search space.

The function takes four inputs:

  1. omega: An object that can assess potential solutions and provide information about how to improve them.
  2. space: An object representing the search space where solutions can be found.
  3. gamma: The initial best value found so far.
  4. options: Optional settings for the algorithm, with default values if not provided.

The function produces three outputs as a tuple:

  1. The best solution found (or None if no solution was found).
  2. The best objective value (gamma) achieved.
  3. The number of iterations performed.

The algorithm works by iteratively refining the search space to find better solutions. It starts with an initial search space and repeatedly asks the omega object to assess the center of the current search space. Based on this assessment, it updates the best solution found so far whenever the center improves on gamma, and in every case adjusts the search space to exclude areas that cannot contain the optimal solution.

Here's how the function achieves its purpose:

  1. It initializes x_best to None, which will store the best solution found.
  2. It enters a loop that runs for a maximum number of iterations (defined in options).
  3. In each iteration, it asks omega to assess the current center point of the search space (space.xc()).
  4. If a better solution is found (indicated by gamma1 not being None), it updates the best solution and value.
  5. It then updates the search space based on the assessment: with a central cut when the center improved gamma, or with a biased (deep) cut otherwise.
  6. If the update is unsuccessful or the search space becomes too small, it terminates and returns the best solution found.
  7. If the maximum number of iterations is reached, it returns the best solution found so far.

The main logic flow involves the repeated process of assessing the current best point, potentially updating the best known solution, and then refining the search space. This process continues until either a satisfactory solution is found, the search space becomes too small, or the maximum number of iterations is reached.
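Putting those steps together, a sketch of the driver consistent with this description (and with the feasibility routine shown earlier) might look like the following. The exact names assess_optim, update_central_cut, and update_bias_cut are assumptions inferred from the prose, not quoted from the library source.

```python
def cutting_plane_optim(omega, space, gamma, options=Options()):
    """Sketch of the optimization driver described above (reconstruction, not library source)."""
    x_best = None
    for niter in range(options.max_iters):
        # Ask the oracle to assess the current center; it returns a cut and,
        # when the center improves on gamma, a new best value gamma1.
        cut, gamma1 = omega.assess_optim(space.xc(), gamma)
        if gamma1 is not None:  # the center is feasible and improves gamma
            gamma = gamma1
            x_best = space.xc()
            status = space.update_central_cut(cut)
        else:  # no improvement: shrink the space with a biased (deep) cut
            status = space.update_bias_cut(cut)
        if status != CutStatus.Success or space.tsq() < options.tolerance:
            return x_best, gamma, niter
    return x_best, gamma, options.max_iters
```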

This algorithm is particularly useful for solving complex optimization problems where the optimal solution can't be directly calculated, but where it's possible to iteratively improve and narrow down the search space.

Cutting-plane method for solving convex quantized discrete optimization problems

This code defines a function called cutting_plane_optim_q that implements a cutting-plane method for solving convex quantized discrete optimization problems. The purpose of this function is to find the best solution to a complex optimization problem by iteratively refining the search space.

The function takes four inputs:

  1. omega: An object that can assess potential solutions and provide information about how to improve them.
  2. space_q: An object representing the search space where potential solutions exist.
  3. gamma: The initial best-known value for the optimization problem.
  4. options: Optional settings for the algorithm, with default values if not provided.

The function produces three outputs as a tuple:

  1. The best solution found (which can be None if no solution is found).
  2. The final best value (gamma) achieved.
  3. The number of iterations performed.

The algorithm works by repeatedly trying to improve the current best solution. It does this through a loop that continues until it reaches the maximum number of allowed iterations or finds a satisfactory solution. In each iteration, it performs the following steps:

  1. It asks the omega object to assess the current solution and provide information on how to improve it. This information is called a "cut".
  2. If a better solution is found, it updates the best-known solution and value.
  3. It then updates the search space based on the cut information.
  4. Depending on the result of the update, it decides whether to continue searching, retry with an alternative cut, or return the current best solution.
  5. The algorithm also checks if the search space has become too small (using the tsq() method). If it has, it returns the current best solution.

Throughout this process, the algorithm is trying to balance between exploring new potential solutions and refining the current best solution. It uses the concept of "cuts" to gradually eliminate parts of the search space that are unlikely to contain the optimal solution.
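A sketch of that loop, reconstructed from the description above, could look like the following. The oracle's return signature (cut, alternative point, improved value, and a flag saying whether another alternative cut is available), the update_q method, and the CutStatus values used for the retry logic are all assumptions, so treat this as an illustration of the control flow rather than the library's actual code.

```python
def cutting_plane_optim_q(omega, space_q, gamma, options=Options()):
    """Sketch of the discrete/quantized driver described above (reconstruction, not library source)."""
    x_best = None
    retry = False
    for niter in range(options.max_iters):
        # The oracle may round the center to a nearby discrete point x_alt and,
        # if that point improves on gamma, report a new best value gamma1.
        cut, x_alt, gamma1, more_alt = omega.assess_optim_q(space_q.xc(), gamma, retry)
        if gamma1 is not None:  # a better discrete solution was found
            gamma = gamma1
            x_best = x_alt
        status = space_q.update_q(cut)  # update the search space with the cut
        if status == CutStatus.NoEffect:
            if not more_alt:    # no alternative cut left to try: stop
                return x_best, gamma, niter
            retry = True        # ask the oracle for an alternative cut next time
        elif status != CutStatus.Success:
            return x_best, gamma, niter
        if space_q.tsq() < options.tolerance:  # search space has become too small
            return x_best, gamma, niter
    return x_best, gamma, options.max_iters
```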

This method is particularly useful for problems where the solution space is discrete (meaning it consists of distinct, separate values) and quantized (meaning the values are restricted to certain levels or steps). It's a powerful technique for solving complex optimization problems in a structured and efficient manner.

Binary Search Function (bsearch)

This code defines a function called "bsearch" which performs a binary search to find an optimal solution within a given interval. The purpose of this function is to efficiently search for a value that satisfies certain conditions, narrowing down the search range with each iteration.

The function takes three inputs:

  1. omega: An object that can assess whether a given value is feasible or not.
  2. intrvl: A tuple containing the lower and upper bounds of the search interval.
  3. options: An optional parameter that provides configuration settings for the search.

The function outputs a tuple containing two elements:

  1. The best solution found (which is the upper bound of the final interval).
  2. The number of iterations performed during the search.

The binary search algorithm works by repeatedly dividing the search interval in half. It starts with the given interval and calculates the midpoint. Then, it checks if this midpoint is a feasible solution using the omega object. If it is feasible, the upper bound is updated to this midpoint; otherwise, the lower bound is updated. This process continues, narrowing down the search range until either the maximum number of iterations is reached or the interval becomes smaller than a specified tolerance.

The function begins by unpacking the lower and upper bounds from the input interval. It determines the type of the upper bound (which could be an integer) and uses this type for calculations. In each iteration, it calculates the midpoint (gamma) between the current lower and upper bounds. If the difference between the bounds (tau) becomes smaller than the tolerance specified in the options, the function returns the current upper bound as the best solution.

The key logic flow is the if-else statement that decides whether to update the upper or lower bound based on the feasibility of the current midpoint. This allows the function to efficiently narrow down the search range, focusing on the area where the optimal solution is likely to be found.

An important aspect of this implementation is that it assumes feasibility is monotonic in gamma: if a value is achievable, every larger value is also achievable, and if a value is not achievable, no smaller value is achievable either. This property is what allows the binary search to converge on the optimal value.
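A sketch matching this description might look like the following; the assess_bs name comes from the BSearchAdaptor discussed below, while the rest is a reconstruction from the prose.

```python
def bsearch(omega, intrvl, options=Options()):
    """Sketch of the binary-search driver described above (reconstruction, not library source)."""
    lower, upper = intrvl
    T = type(upper)  # keep integer arithmetic when the bounds are integers
    for niter in range(options.max_iters):
        tau = upper - lower
        if tau < options.tolerance:  # interval is narrow enough: stop
            return upper, niter
        gamma = T(lower + tau / 2)   # midpoint candidate, cast back to the bound type
        if omega.assess_bs(gamma):   # gamma is achievable: tighten the upper bound
            upper = gamma
        else:                        # gamma is not achievable: raise the lower bound
            lower = gamma
    return upper, options.max_iters
```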

Overall, this binary search function provides an efficient way to find an optimal value within a given range, using repeated halving of the search interval and feasibility checks to quickly converge on the best solution.

BSearchAdaptor Class

This code defines a class called BSearchAdaptor, which adapts a feasibility-based cutting-plane solver for use inside a binary search. The purpose of this class is to provide a way to check whether a candidate value (gamma) is achievable within a given search space, so that a binary search can drive gamma toward the optimum.

The class takes three inputs when initialized:

  1. omega: An instance of OracleFeas2, which is used to check the feasibility of solutions.
  2. space: An instance of SearchSpace2, representing the area where the optimization algorithm will look for the best solution.
  3. options: An instance of Options, containing various settings for the algorithm (this is optional and has default values).

The class doesn't produce direct outputs, but it provides methods that can be used in a larger optimization process:

  1. The x_best property returns the current best solution found in the search space.
  2. The assess_bs method checks whether a given value (gamma) is achievable, i.e., whether a feasible point exists in the search space for that value.

The assess_bs method is the core of this class. It works like this:

  1. It creates a copy of the current search space.
  2. It updates the omega object with the new gamma value.
  3. It then uses a function called cutting_plane_feas to try to find a feasible solution in the search space with the new gamma value.
  4. If a feasible solution is found, it updates the original search space with this new solution and returns True.
  5. If no feasible solution is found, it returns False.

The important logic flow here is the binary search adaptation. By repeatedly calling assess_bs with different gamma values, an optimization algorithm can narrow down the best possible value. If assess_bs returns True, it means the current gamma is feasible, and the algorithm might try a better (lower) value next time. If it returns False, the algorithm would try a worse (higher) value.
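A sketch of the class, reconstructed from the steps above, might look like this; the omega.update(gamma) method name and the use of a deep copy are assumptions based on the prose.

```python
import copy

class BSearchAdaptor:
    """Sketch of the adaptor described above (reconstruction, not library source)."""

    def __init__(self, omega, space, options=Options()):
        self.omega = omega      # feasibility oracle, parameterized by gamma
        self.space = space      # current search space
        self.options = options

    @property
    def x_best(self):
        # Best solution found so far: the center of the current search space.
        return self.space.xc()

    def assess_bs(self, gamma):
        """Return True iff a feasible point exists for this gamma."""
        space = copy.deepcopy(self.space)  # work on a copy so failures leave the original intact
        self.omega.update(gamma)           # tighten the oracle to the new target value
        x_feas, _ = cutting_plane_feas(self.omega, space, self.options)
        if x_feas is not None:
            self.space = space             # keep the shrunk space containing the feasible point
            return True
        return False
```

With such an adaptor, a call along the lines of `bsearch(BSearchAdaptor(omega, space), (lower, upper))` reduces the optimization problem to a sequence of feasibility problems.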

This class acts as a bridge between a binary search algorithm and a more complex optimization problem, allowing the binary search to be used in contexts where simply comparing two numbers isn't enough to determine the best solution.