
How to profile the memoryProfileSnapshots more than 1000? #684

Open
@TKH666

Description


I want to profile the memory usage of every op during training. Below is my profiling code, but the resulting profile only records 1000 snapshots of memory allocations/deallocations. How can I capture more than 1000 memoryProfileSnapshots?
```python
import time

import tensorflow as tf
from tqdm import tqdm

# model, batch_size, input_shape, logdir, and num_iterations are defined
# elsewhere in my script.

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

@tf.function
def train_step(x, y):
    with tf.GradientTape() as tape:
        y_pred = model(x, training=True)
        loss = loss_fn(y, y_pred)
    gradients = tape.gradient(loss, model.trainable_weights)
    return gradients

# dummy training data
x = tf.random.normal((batch_size, input_shape[0], input_shape[1], input_shape[2]))
y = tf.ones((batch_size,))

print("Warmup...")
for k in tqdm(range(1)):
    train_step(x, y)

t0 = time.time()

print("Profiling the model...")
tf.profiler.experimental.start(logdir)
for k in range(num_iterations):
    with tf.profiler.experimental.Trace('train', step_num=k):
        train_step(x, y)
tf.profiler.experimental.stop()
```
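For reference, here is a minimal sketch of starting the trace with explicit `tf.profiler.experimental.ProfilerOptions`. As far as I can tell, the documented fields only control tracer verbosity and start delay; none of them appears to raise the snapshot cap, so the limit seems to come from elsewhere. The `logdir` value is just a placeholder.

```python
import tensorflow as tf

# Placeholder log directory; substitute your own.
logdir = "/tmp/tf_profile"

# ProfilerOptions exposes tracer verbosity knobs, not a snapshot limit.
options = tf.profiler.experimental.ProfilerOptions(
    host_tracer_level=3,    # most verbose host-side tracing
    python_tracer_level=1,  # enable Python tracing
    device_tracer_level=1,  # enable device (GPU) tracing
)

tf.profiler.experimental.start(logdir, options=options)
# ... run the training steps to be profiled ...
tf.profiler.experimental.stop()
```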
