
Computational performance is significantly worse when using Birkhoff #1084

Open

TidoHoutepen opened this issue Jul 17, 2024 · 3 comments
Labels: bug (Something isn't working)

@TidoHoutepen

Description

Hi everyone,

I have been experimenting with the Birkhoff transcription for my optimisation problem, as it might be more robust for my use case. However, Birkhoff runs a lot slower for my problem: computing the coloring for my 750-node problem took about 1.5 hours, while for Radau it takes 15 minutes at most.

I went back to basics and compared the performance of Birkhoff to Radau on the brachistochrone problem. I used [5, 10, 20, 40] third-order segments for Radau and [20, 40, 80, 160] nodes for Birkhoff (as far as I understand, this should give them the same resolution). I used IPOPT and gave both transcriptions 100 iterations to solve the problem.
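The node-count equivalence assumed here can be sanity-checked with a quick calculation (assuming, as I understand it, that a 3rd-order Radau segment in dymos carries 4 nodes: 3 LGR collocation nodes plus the segment endpoint):

```python
# Assumption (hedged): a 3rd-order Radau segment contributes 4 nodes
# (3 LGR collocation nodes plus the segment endpoint).
nodes_per_segment = 3 + 1
n_segments = [5, 10, 20, 40]

radau_nodes = [nodes_per_segment * n for n in n_segments]
print(radau_nodes)  # [20, 40, 80, 160] -- matches the Birkhoff node counts
```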

Birkhoff was much slower (see figure) and did not converge within the 100 iterations for the three densest grids.

(Figure: runtime of Radau vs. Birkhoff over grid size)

On top of that, the jacobian computations for the Birkhoff optimisation became slower than the Radau computations as the number of points grew:

Full total jacobian for problem 'FirstExample2' was computed 3 times, taking 0.1714210999780334 seconds. --> Birkhoff 20 nodes
Full total jacobian for problem 'FirstExample3' was computed 3 times, taking 0.3252433000016026 seconds. --> Radau 10 segments
Full total jacobian for problem 'FirstExample4' was computed 3 times, taking 0.4679974000318907 seconds. --> Birkhoff 40 nodes
Full total jacobian for problem 'FirstExample5' was computed 3 times, taking 0.5151562999817543 seconds. --> Radau 20 segments
Full total jacobian for problem 'FirstExample6' was computed 3 times, taking 1.0468163999612443 seconds. --> Birkhoff 80 nodes
Full total jacobian for problem 'FirstExample7' was computed 3 times, taking 1.3801417999784462 seconds. --> Radau 40 segments
Full total jacobian for problem 'FirstExample8' was computed 3 times, taking 3.3809573000180535 seconds. --> Birkhoff 160 nodes
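Taking the reported timings at face value, a rough scaling check on the Birkhoff total-jacobian times gives the effective growth exponent per doubling of the grid:

```python
import math

# Reported total-jacobian wall times (seconds, 3 evaluations each), Birkhoff runs.
birkhoff_times = {20: 0.1714, 40: 0.4680, 80: 1.0468, 160: 3.3810}

sizes = sorted(birkhoff_times)
for a, b in zip(sizes, sizes[1:]):
    ratio = birkhoff_times[b] / birkhoff_times[a]
    # log2(ratio) ~= 1 would mean linear scaling in node count, ~= 2 quadratic.
    print(f"{a} -> {b} nodes: {ratio:.2f}x time, exponent {math.log2(ratio):.2f}")
```

So in this range the jacobian cost grows somewhere between linearly and quadratically with the node count, consistent with the figure rather than with the hoped-for better scaling.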

As far as I understood, Birkhoff should scale better with larger problem sizes; however, that does not seem to be the case. Is there something wrong with my implementation, or could this be a bug?

Example

import numpy as np
import openmdao.api as om
import dymos as dm
from dymos.examples.brachistochrone import BrachistochroneODE
import matplotlib.pyplot as plt
import time        

def solveProblem(n_segments: int):

    # Initialize the problem and the optimization driver
    p = om.Problem(model=om.Group())
    p.driver = om.pyOptSparseDriver(optimizer='IPOPT')
    p.driver.declare_coloring()
    p.driver.opt_settings['max_iter'] = 100

    # Create a trajectory and add a phase to it
    traj = p.model.add_subsystem('traj', dm.Trajectory())

    phase = traj.add_phase('phase0', dm.Phase(ode_class=BrachistochroneODE,
                                              transcription=dm.Radau(num_segments=n_segments)))

    # Set the state variables
    phase.set_time_options(fix_initial=True, duration_bounds=(0.5, 10))
    phase.add_state('y', fix_initial=True, fix_final=True)
    phase.add_state('x', fix_initial=True, fix_final=True)
    phase.add_state('v', fix_initial=True, fix_final=False)

    # Set BC of controls
    phase.add_control('theta', continuity=True, rate_continuity=True, units='deg', lower=0.01, upper=179.9)

    # Set parameters
    phase.add_parameter('g', units='m/s**2', val=9.80665)

    # Put in objective function
    phase.add_objective('time', loc='final', scaler=10)

    # Add solver
    #p.model.linear_solver = om.DirectSolver()

    # Setup the problem
    p.setup()

    # Now set the initial and final values 
    p['traj.phase0.t_initial'] = 0.0
    p['traj.phase0.t_duration'] = 2.0

    p.set_val('traj.phase0.states:x', phase.interp('x', ys=[0, 10]))
    p.set_val('traj.phase0.states:y', phase.interp('y', ys=[10, 5]))
    p.set_val('traj.phase0.states:v', phase.interp('v', ys=[0, 0]))
    p.set_val('traj.phase0.controls:theta', phase.interp('theta', ys=[0, 0]))

    dm.run_problem(p)

    print(p.get_val('traj.phase0.timeseries.time')[-1])



def solveProblemBirkhoff(n_nodes: int):

    # Initialize the problem and the optimization driver
    p = om.Problem(model=om.Group())
    p.driver = om.pyOptSparseDriver(optimizer='IPOPT')
    p.driver.declare_coloring()
    p.driver.opt_settings['max_iter'] = 100

    # Create a trajectory and add a phase to it
    traj = p.model.add_subsystem('traj', dm.Trajectory())

    phase = traj.add_phase('phase0', dm.Phase(ode_class=BrachistochroneODE,
                                              transcription=dm.Birkhoff(grid=dm.BirkhoffGrid(num_nodes=n_nodes))))

    # Set the state variables
    phase.set_time_options(fix_initial=True, duration_bounds=(0.5, 10))
    phase.add_state('y', fix_initial=True, fix_final=True)
    phase.add_state('x', fix_initial=True, fix_final=True)
    phase.add_state('v', fix_initial=True, fix_final=False)

    # Set BC of controls
    phase.add_control('theta', continuity=True, rate_continuity=True, units='deg', lower=0.01, upper=179.9)

    # Set parameters
    phase.add_parameter('g', units='m/s**2', val=9.80665)

    # Put in objective function
    phase.add_objective('time', loc='final', scaler=10)

    # Add solver
    #p.model.linear_solver = om.DirectSolver()

    # Setup the problem
    p.setup()

    # Now set the initial and final values 
    p['traj.phase0.t_initial'] = 0.0
    p['traj.phase0.t_duration'] = 2.0

    phase.set_state_val('x', [0, 10])
    phase.set_state_val('y', [10, 5])
    phase.set_state_val('v', 0)
    phase.set_control_val('theta', 0.0)

    dm.run_problem(p)

    print(p.get_val('traj.phase0.timeseries.time')[-1])
    


# Let's try the two transcriptions with different numbers of nodes
n_nodes = [20, 40, 80, 160]
n_segments = [5, 10, 20, 40]

t_birkhoff = []
t_radau = []

for i in range(len(n_nodes)):
    t_start = time.time()
    solveProblem(n_segments[i])
    t_radau.append(time.time() - t_start)
    t_start = time.time()
    solveProblemBirkhoff(n_nodes[i])
    t_birkhoff.append(time.time() - t_start)

plt.plot(n_nodes, t_birkhoff, label='Birkhoff')
plt.plot(n_nodes, t_radau, label='Radau')
plt.xlabel('Number of nodes [-]')
plt.ylabel('Computational time [s]')
plt.legend()
plt.minorticks_on()
plt.grid(which='minor', linestyle='--', alpha=0.4)
plt.grid(which='major', linestyle='-')
plt.tight_layout()
plt.show()

Dymos Version

1.10.1dev0

Relevant environment information

No response

@TidoHoutepen added the bug (Something isn't working) label on Jul 17, 2024
@robfalck (Contributor)

Thanks for stress testing this for us. What version of OpenMDAO are you using? There's a component in the Birkhoff transcription that needs to use complex step in older versions of OpenMDAO, which might be slowing things down, but I'll take a look at it.

@TidoHoutepen (Author)

I was using version 3.31.1. Upgrading to the latest release (3.33.0) did not seem to make any difference.

@robfalck (Contributor)

We've seen some evidence that, because Birkhoff makes for a larger problem with basically twice as many design variables and constraints, it can take significantly longer to reach an optimum.

We have not seen the slow coloring (I'm actually wondering if this is the scaling report being built and taking a long time, but I'll get to more profiling on that in a bit).

Depending on how deep your particular model is, you can get a pretty big chunk of performance back (with either transcription) by applying a solver that uses an assembled jacobian at the top level of your model.

p.model.linear_solver = om.DirectSolver(assemble_jac=True)

The default LinearRunOnce solver uses a dictionary jacobian, which is memory efficient but recurses down into the model to perform all of the math needed for the linear solve that generates the total derivatives.

With an assembled jacobian, that work is done once at the top level. In this case, the brachistochrone with 160 nodes went from ~90 seconds to ~45 seconds on my system.
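As a loose illustration of the two approaches (a toy, not OpenMDAO internals: the sub-jacobian blocks here are independent so both routes give the same answer, whereas real models have coupling that makes the block-by-block recursion do more work):

```python
# Toy sketch: a block-structured linear system solved by visiting each
# block from a dict vs. assembling one sparse matrix and solving once.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, nblocks = 50, 4

# Diagonally dominant random blocks standing in for component jacobians.
blocks = {i: (sp.random(n, n, density=0.1, random_state=i) + n * sp.eye(n)).tocsc()
          for i in range(nblocks)}
rhs = np.ones(n * nblocks)

# "Dictionary" style: recurse over the blocks, solving each in isolation.
x_dict = np.concatenate([spla.spsolve(blocks[i], rhs[n * i:n * (i + 1)])
                         for i in range(nblocks)])

# "Assembled" style: build one block-diagonal matrix, then solve in one pass.
J = sp.block_diag([blocks[i] for i in range(nblocks)], format='csc')
x_assembled = spla.spsolve(J, rhs)

print(np.allclose(x_dict, x_assembled))  # True -- same answer either way
```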

The bulk of the time here is spent computing the total derivatives, so I'm looking into other solvers we could potentially throw at this to get more performance back.
