Problem and algorithms ignore the torch.set_default_dtype #51

@miguelgondu

Description

I just ran into a problem when trying to run problems in double precision. I thought that calling torch.set_default_dtype(torch.float64) would be enough for evotorch to create all of its internal tensors in double precision, but this is not the case.

Consider the following simple example of running CMA-ES for a single step:

```python
import torch
import numpy as np

from evotorch import Problem
from evotorch.algorithms import CMAES

torch.set_default_dtype(torch.float64)

# A simple objective function
def objective_function(xy: torch.Tensor) -> torch.Tensor:
    x = xy[..., 0]
    y = xy[..., 1]
    return x + y


# Defining the problem
problem = Problem(
    "max",
    objective_function,
    bounds=[0.0, 1.0],
    solution_length=2,
    vectorized=True,
)

# Defining the searcher
cmaes = CMAES(
    problem,
    popsize=100,
    stdev_init=1.0,
    center_learning_rate=0.1,
    cov_learning_rate=0.1,
)

# Taking a single step
cmaes.step()

# Accessing the current best's dtype
# (thought it was float64, but **it's only float32**)
print(cmaes.get_status_value("pop_best").values.dtype)
```
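For comparison, plain torch factory functions do respect the global default, so the surprise is specific to evotorch's Problem. A minimal check, independent of evotorch:

```python
import torch

torch.set_default_dtype(torch.float64)

# Factory functions like torch.zeros pick up the global default dtype,
# which is what I expected evotorch's internal tensors to do as well.
x = torch.zeros(2)
print(x.dtype)  # torch.float64
```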

If we want it to be float64, we have to specify the dtype explicitly when constructing the Problem. Indeed, running the same script with

```python
problem = Problem(
    "max",
    objective_function,
    bounds=[0.0, 1.0],
    solution_length=2,
    vectorized=True,
    dtype=torch.float64,
)
```

gets us a best candidate in double precision. Why do we have to specify the dtype twice? Wouldn't we want Problem to inherit the default float dtype?
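In the meantime, a workaround sketch that avoids hard-coding the dtype twice is to forward the global default explicitly (the Problem arguments here are the same as in the example above):

```python
import torch

torch.set_default_dtype(torch.float64)

# torch.get_default_dtype() reflects the set_default_dtype call above, so
# passing it through keeps a single source of truth for precision, e.g.:
#   problem = Problem("max", objective_function, bounds=[0.0, 1.0],
#                     solution_length=2, vectorized=True,
#                     dtype=torch.get_default_dtype())
print(torch.get_default_dtype())  # torch.float64
```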
