Ipynb benchmarking #2

Merged: 10 commits, Aug 11, 2023

examples/benchmarking.ipynb: 180 additions, 0 deletions
@@ -0,0 +1,180 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%matplotlib inline"
Contributor: Is this supposed to show up at the top? (The answer might be yes; I'm not familiar enough with Jupyter notebooks.)

Contributor Author: Typically, no, but I think with how we are formatting the Jupyter notebooks, we want that cell there.

Contributor: I remember us talking about it at one point in the past, but I can't remember why it's here. Can you explain it for me?

Contributor Author: We mentioned how the automated notebooks had this at the top. But from what I can tell, this does not serve a purpose for us, so I'll remove this cell.

]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Welcome to ProgPy's Benchmarking Example"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"The goal of this notebook is to demonstrate the Benchmarking feature offered for Prognostic Models. Specifically, we will demonstrate how to benchmark the computational efficiency of models with a simple example."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"First, we need to import the necessary modules."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from prog_models.models import BatteryCircuit\n",
"from timeit import timeit"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"The first import is importing ProgPy's BatteryCircuit Model, and the second import will be used to benchmark our model!"
]
},
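As a brief aside, `timeit()` runs a callable many times and returns the total elapsed time in seconds, not the per-call time. A minimal, ProgPy-independent sketch of that behavior:

```python
from timeit import timeit

# timeit returns the TOTAL elapsed seconds across all runs
total_s = timeit(lambda: sum(range(1000)), number=1000)

# Divide by the run count to get the per-call time
print('Per-call time: {:.3f} us'.format(total_s / 1000 * 1e6))
```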
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, let's initialize our Battery Circuit by creating a model object."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Step 1: Create a model object\n",
"batt = BatteryCircuit()"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Then, for our model, we will need to define a future loading function. More information on what a future loading function is and how to use it can be found here: https://nasa.github.io/progpy/prog_models_guide.html#future-loading"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Since this is a simple example, we are going to have a constant loading!"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Step 2: Define future loading function \n",
"loading = batt.InputContainer({'i': 2}) # Constant loading\n",
"def future_loading(t, x=None):\n",
" # Constant Loading\n",
" return loading"
]
},
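Future loading does not have to be constant. As a purely hypothetical illustration (not part of this notebook), a time-varying profile can be built on the same `InputContainer` interface shown above; the breakpoints below are made up for the sketch:

```python
# Hypothetical sketch: a piecewise loading profile using the same
# InputContainer interface as the constant case above
def variable_loading(t, x=None):
    if t < 600:
        current = 2  # 2 A draw for the first 600 s
    elif t < 900:
        current = 1  # reduced draw
    else:
        current = 4  # heavy draw afterwards
    return batt.InputContainer({'i': current})
```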
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, we are ready to benchmark the simulation.\n",
"\n",
"We can do this by using the `timeit()` function and pass in our `simulate_to()` or `simulate_to_threshold()` function for the `stmt` argument. For more information regarding the `timeit()` function, please read its documentation located here: https://docs.python.org/3/library/timeit.html"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Step 3: Benchmark simulation of 600 seconds\n",
"def sim(): \n",
" batt.simulate_to(600, future_loading)\n",
"time = timeit(sim, number=500)"
]
},
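The same pattern also applies to `simulate_to_threshold()`, which simulates until an event threshold (for the battery, end of discharge) is reached rather than to a fixed time. A sketch, assuming default simulation options; a smaller `number` is used since each run can take longer:

```python
# Sketch: benchmark simulation to the event threshold instead of a
# fixed 600 s horizon; each run may be slower, so repeat fewer times
def sim_to_threshold():
    batt.simulate_to_threshold(future_loading)

threshold_time = timeit(sim_to_threshold, number=10)
print('Average: {:.3f} s/sim'.format(threshold_time / 10))
```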
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"In this example, we are benchmarking the simulation for the BatteryCircuit model up to 600 seconds. Furthermore, we define our `number` argument to be 500 for sake of runtime."
]
},
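A single measurement can be skewed by transient system load. The standard library's `timeit.repeat()` repeats the whole measurement so you can take the minimum; a sketch of that variant, with a smaller run count to keep the total runtime down:

```python
from timeit import repeat

# Repeat a 100-run measurement 3 times and keep the fastest total,
# which is less sensitive to background activity on the machine
totals = repeat(sim, number=100, repeat=3)
print('Best case: {:.2f} ms/sim'.format(min(totals) / 100 * 1000))
```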
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's print out the results of the benchmark test!"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Print results\n",
"print('Simulation Time: {} ms/sim'.format(time))"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"In this example, we benchmarked the simulation of the BatteryCircuit model up to 600 seconds by utilizing the `time` package!"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.8"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 2
}