
[XNNPACK][Weights Cache] Enable in XNNPACK #9297

Merged (3 commits, Mar 15, 2025)
Conversation

pytorchbot
Collaborator

This PR was created by the merge bot to help merge the original PR into the main branch.
ghstack PR number: #9155 by @mcr229
^ Please use this as the source of truth for the PR details, comments, and reviews
ghstack PR base: https://github.com/pytorch/executorch/tree/gh/mcr229/11/base
ghstack PR head: https://github.com/pytorch/executorch/tree/gh/mcr229/11/head
Merge bot PR base: https://github.com/pytorch/executorch/tree/gh/mcr229/10/orig
Merge bot PR head: https://github.com/pytorch/executorch/tree/gh/mcr229/11/orig
@diff-train-skip-merge

mcr229 added 3 commits March 13, 2025 23:59
… named data map

Pull Request resolved: #9153

We serialize tensors into the named data map and return the output in the preprocess result, allowing XNNPACK to share tensors with the same name instead of duplicating them.

A key change here concerns fused tensors. For BatchNorm and Convolution fusion, we fuse the conv weight and bias with the BN parameters, creating new tensors, and then create get_attr nodes for these new parameters. Because of the torch.fx interpreter in the export pass base, the names we create for these new tensors are lost each time the graph is re-created. As a result, we introduce a new pass at the end to preserve the names we created. This seems a little hacky for now, but it is the only way to preserve the new fused names.

Differential Revision: [D70315207](https://our.internmc.facebook.com/intern/diff/D70315207/)

**NOTE FOR REVIEWERS**: This PR has internal Meta-specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D70315207/)!
ghstack-source-id: 271732046
Pull Request resolved: #9154

XNNWeightsCache Design with NamedDataMap. The intent of the weights cache is for tensors to be loaded (via name) through the named data map.

APIs to be used by XNNCompiler (a sketch of this surface follows the list):

- load_unpacked_data
    - Takes in a string name (the tensor name). The weights cache loads the data for this name from the named data map and returns the pointer. It also records a mapping from this pointer to the name, which is later used by XNNPACK's internal weights cache implementation.

- free_unpacked_data
    - Frees all the unpacked data loaded from the NamedDataMap. This is only safe to call after xnn_create_runtime has been called, because create_runtime takes the unpacked data pointers and packs them into a separate buffer.

- a few getter methods
    - get_packed_data_names
    - get_unpacked_data_names
    - get_num_packed_data
    - get() (returns the xnn_weights_cache object)
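
As a rough sketch of this compiler-facing surface (C++, with illustrative names and signatures based on the description above, not the actual ExecuTorch declarations):

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Illustrative sketch only; names follow the PR description, not the
// actual ExecuTorch headers.
class XNNWeightsCache {
 public:
  // Resolves `name` through the NamedDataMap, records a pointer -> name
  // mapping (used later to build packed cache keys), and returns the
  // unpacked data pointer.
  const void* load_unpacked_data(const std::string& name);

  // Frees every buffer loaded from the NamedDataMap. Only safe after
  // xnn_create_runtime, which packs the unpacked data into its own buffer.
  void free_unpacked_data();

  // Getters described above.
  std::vector<std::string> get_packed_data_names();
  std::vector<std::string> get_unpacked_data_names();
  size_t get_num_packed_data();

  // Returns the underlying XNNPACK weights cache object (typed as void*
  // here to avoid pinning a concrete XNNPACK type in this sketch).
  void* get();
};
```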


Internal APIs used by the XNNPACK library (a sketch of look_up_or_insert follows this list):

- look_up
    - takes a cache key (weight and bias pointers) and looks up the offset of the packed weights, if it exists
- look_up_or_insert
    - takes a cache key and a pointer to packed weights; looks up the offset if it exists, or inserts the new packed weights into the cache and returns that offset
- offset_to_addr
    - takes an offset and returns the address of the packed weights
- reserve_space
    - returns a memory address with the appropriate size for XNNPACK to populate with packed weights (I want to use the runtime allocator for this, but I don't think we have the right sizes, so for now we just use a string buffer and resize it)
- is_finalized
    - since this cache doesn't necessarily need to care about a finalized state, we always return true
- delete_cache
    - deletes the cache
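
A hedged sketch of the look_up_or_insert logic described above; every name here is hypothetical, standing in for the pointer-to-name map populated by load_unpacked_data and the string buffer used for packed weights:

```cpp
#include <cstddef>
#include <string>
#include <unordered_map>

// Sketch only: `ptr_to_name` is the pointer -> tensor-name map populated by
// load_unpacked_data; `packed_offsets` maps a concatenated name key to the
// offset of the packed weights inside `packed_buffer`.
size_t look_up_or_insert(
    const std::unordered_map<const void*, std::string>& ptr_to_name,
    std::unordered_map<std::string, size_t>& packed_offsets,
    const std::string& packed_buffer,
    const void* weights, const void* bias,
    const void* packed) {
  // Recover the tensor names behind the cache key's pointers and append
  // them, e.g. "weights" + "bias" -> "weightsbias".
  std::string key = ptr_to_name.at(weights);
  if (bias != nullptr) {
    key += ptr_to_name.at(bias);
  }

  auto it = packed_offsets.find(key);
  if (it != packed_offsets.end()) {
    return it->second;  // Packed before: reuse the cached offset, no re-pack.
  }

  // First time: record where `packed` lives inside the backing buffer.
  size_t offset = static_cast<size_t>(
      static_cast<const char*>(packed) - packed_buffer.data());
  packed_offsets[key] = offset;
  return offset;
}
```
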
ghstack-source-id: 271823384
@exported-using-ghexport

Differential Revision: [D70885917](https://our.internmc.facebook.com/intern/diff/D70885917/)
Pull Request resolved: #9155

We enable the XNNPACK weights cache in the XNNPACK backend.

The weights cache is initialized for the runtime with the named data map and a memory allocator (for now the memory allocator is not used, but I hope in the future it can be used to manage the memory for packed weights).

Before creating the runtime, we first initialize the weights cache; this sets the finalization state to false. As we add weight/bias tensors to the graph, we load them through the named data map in the weights cache and keep a map from pointer to name. When XNNPACK creates the runtime and packs the weights, it uses the weights cache's look_up_or_insert method. We use the pointers in the cache key to look up their names and append them together (e.g. "weightsbias"), then insert the packed weights under that key.

In future look-ups, we just reuse the pointer cached under that packed-tensor name key, saving us from packing again.

After creating the runtime and packing the weights, we finalize the cache, which sets is_finalized to true. We also free all unpacked buffers loaded from the named data map, as they are no longer needed. We keep reference counts for all the packed weights, incrementing the counts of the packed weights used by this runtime, and return a vector of all the packed weight names to the XNNExecutor. When the XNNExecutor is destroyed, we decrement the counts of its packed buffers and destroy them if necessary (sketched below).
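
A minimal sketch of that reference-counting scheme, with hypothetical names (the real bookkeeping lives inside the backend's weights cache):

```cpp
#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical ref-count table: packed-weight name -> number of live
// XNNExecutor instances using that packed buffer.
using RefCounts = std::unordered_map<std::string, int>;

// On runtime creation: bump the count of every packed weight this runtime
// used, and hand the name list back to the XNNExecutor for later release.
std::vector<std::string> retain_packed_weights(
    RefCounts& counts, const std::vector<std::string>& used_names) {
  for (const auto& name : used_names) {
    ++counts[name];
  }
  return used_names;
}

// On XNNExecutor destruction: drop the counts and free any packed buffer
// whose count hits zero. `free_packed` is a placeholder for the real
// deallocation of the named packed buffer.
void release_packed_weights(
    RefCounts& counts, const std::vector<std::string>& names,
    void (*free_packed)(const std::string&)) {
  for (const auto& name : names) {
    if (--counts[name] == 0) {
      counts.erase(name);
      free_packed(name);
    }
  }
}
```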

Note that this feature is gated behind the XNN_ENABLE_WEIGHTS_CACHE flag. Since the weights_cache is a global, read/write member of the singleton XNNPACK backend class, we add a mutex to ensure that access to the weights_cache is thread safe.

Since we added a new mutex, the mutex hierarchy is (see the sketch below):
workspace_mutex_ -> weights_cache_mutex_
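
With two mutexes in play, any path that needs both must take them in that fixed order to avoid deadlock. A sketch of the convention, assuming std::mutex members with these names:

```cpp
#include <mutex>

std::mutex workspace_mutex_;
std::mutex weights_cache_mutex_;

void create_runtime_and_pack() {
  // Always take workspace_mutex_ first and weights_cache_mutex_ second;
  // no code path may acquire them in the opposite order.
  std::lock_guard<std::mutex> workspace_guard(workspace_mutex_);
  std::lock_guard<std::mutex> cache_guard(weights_cache_mutex_);
  // ... create the XNNPACK runtime and pack weights here ...
}
```
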
ghstack-source-id: 271823386
@exported-using-ghexport

Internal:
I ran a simple experiment with the machine translation model. I loaded encode_first and executed it, then loaded forward and executed it (with both methods staying in memory). I measured the RSS after encode_first and again after forward. We saw the following results:

|                      | RSS after Encode First (MiB) | RSS after Forward (MiB) |
|----------------------|------------------------------|-------------------------|
| Without Weight Cache    | 62.765625                    | 130.019531              |
| With Weight Cache | 62.789062                    | 93.222656               |

This shows that with the weights cache and two methods loaded, we see roughly a 28% reduction in memory usage: (130.02 - 93.22) / 130.02 ≈ 28.3% after the second method is loaded.

Differential Revision: [D70885926](https://our.internmc.facebook.com/intern/diff/D70885926/)

pytorch-bot (bot) commented Mar 14, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/9297

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 664940b with merge base 630d0cc:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot added the CLA Signed label (this label is managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed) on Mar 14, 2025
Base automatically changed from gh/mcr229/10/orig to gh/mcr229/8/orig on March 15, 2025 02:31
@SS-JIA merged commit 2a903f9 into gh/mcr229/8/orig on Mar 15, 2025
75 checks passed
@SS-JIA deleted the gh/mcr229/11/orig branch on March 15, 2025 02:31
@SS-JIA restored the gh/mcr229/11/orig branch on March 15, 2025 03:11