This repository was archived by the owner on Jul 1, 2025. It is now read-only.

Feature: Enable codellama on Intel GPUs #90

Open
abhilash1910 wants to merge 2 commits into meta-llama:main from abhilash1910:feature_xpu_support

Conversation

@abhilash1910

Motivation:
Thanks for creating this repository. There is an ongoing effort at Intel to enable out-of-the-box runtime functionality for Code Llama on our XPU/GPU devices. There is also a parallel effort on Llama recipes under discussion between Intel and Meta, and we plan to provide consolidated support so that all frameworks/models from Meta research can run on our graphics cards.
Mentioning PR: meta-llama/llama-cookbook#116 (llama recipes also in progress).
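In practice, enabling Intel GPUs in a PyTorch codebase like this mostly means selecting `xpu` as a device alongside `cuda`. A minimal sketch of such a selection helper (hypothetical, not the actual diff from this PR; it assumes a PyTorch build where Intel GPU support registers the `torch.xpu` namespace, as intel_extension_for_pytorch does):

```python
def pick_device() -> str:
    """Return the best available device string: 'cuda', 'xpu', or 'cpu'."""
    try:
        import torch
    except ImportError:
        return "cpu"  # PyTorch not installed at all
    if torch.cuda.is_available():
        return "cuda"
    # torch.xpu only exists on Intel-GPU-enabled builds, so probe defensively.
    xpu = getattr(torch, "xpu", None)
    if xpu is not None and xpu.is_available():
        return "xpu"
    return "cpu"

print(pick_device())
```

Model code can then stay device-agnostic, e.g. `model.to(pick_device())`, instead of hard-coding `cuda`.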

@abhilash1910
Author

@Brozi @syhw requesting review. I also wanted to discuss a future collaborative development plan for Intel GPU support; it would be great to have a discussion on this. Thanks.


Projects

None yet


2 participants