Native Support or Extension Layer for Running AI/ML Models in PHP #17
gavriel-adi started this conversation in Ideas
Replies: 2 comments · 1 reply
-
This is a really cool idea!
-
@gavriel-adi what do you think about https://github.com/ankane/onnxruntime-php?
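For context, basic usage of that library looks roughly like this (a sketch based on its README; `OnnxRuntime\Model` and `predict()` are the library's documented entry points, while the input name `x` depends on the specific model and is assumed here):

```php
<?php
// Sketch of ankane/onnxruntime-php usage (installed via
// `composer require ankane/onnxruntime`; the package loads the
// ONNX Runtime shared library through FFI).
require 'vendor/autoload.php';

$model = new OnnxRuntime\Model('model.onnx');

// Inputs are keyed by the model's declared input names;
// 'x' is a placeholder that depends on the actual model.
$result = $model->predict(['x' => [[1.0, 2.0, 3.0]]]);

print_r($result);
```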
-
As AI and machine learning continue to transform how applications interact with users, many PHP developers are starting to integrate AI-powered features, from image analysis and text generation to recommendation engines and embeddings. However, PHP currently lacks a robust, standardized way to run or interact with large AI models directly within the language.
Proposal: Introduce a native extension or FFI interface to enable safe and efficient execution of AI/ML models, either via integration with ONNX Runtime or TensorFlow Lite, or through PHP bindings for Stable Diffusion and LLM runtimes (e.g., ggml, llama.cpp).
This could include (a sketch of such an API follows this list):
- A one-shot inference call such as `php_ai_run_model('model.onnx', $inputData)`
- Memory-efficient tensor structures
- Streamed inference results
- Optional GPU acceleration (via Vulkan, CUDA, or Metal)
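A minimal sketch of how such a userland API might look; every function below is hypothetical, since no such extension exists in PHP today:

```php
<?php
// Hypothetical API sketch -- none of these functions exist in PHP today.

// One-shot inference, as proposed above:
$output = php_ai_run_model('model.onnx', ['input' => [[0.1, 0.2, 0.3]]]);

// Reusable model handle with optional GPU acceleration (hypothetical):
$model = php_ai_load_model('model.onnx', ['device' => 'gpu']);

// Memory-efficient tensor structure (hypothetical):
$tensor = php_ai_tensor([[0.1, 0.2, 0.3]], 'float32');

// Streamed inference results, e.g. token-by-token LLM output (hypothetical):
foreach (php_ai_stream($model, $tensor) as $chunk) {
    echo $chunk;
}
```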
Benefits:
- Lets developers build AI-enhanced PHP apps without relying on external APIs or services
- Reduces the latency, cost, and privacy concerns associated with cloud AI
- Empowers CMSs, e-commerce platforms, chat systems, and media platforms with embedded intelligence
- Aligns with the direction of true_async and other high-performance PHP runtime work
Today, many developers rely on exec() to call Python scripts or external binaries, an approach that is slow, unsafe, and scales poorly (sketched below). A native solution would significantly expand what PHP is capable of.
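For comparison, the exec()-style workaround looks roughly like this (a sketch; `predict.py` is a hypothetical Python script that reads JSON input on stdin and writes JSON predictions to stdout):

```php
<?php
// The status-quo workaround: shell out to a Python process per request.
// Every call pays process-startup and model-load cost, and error handling
// is fragile. "predict.py" is hypothetical.
$input = json_encode(['text' => 'Hello world']);

$proc = proc_open(
    ['python3', 'predict.py'],  // array command syntax requires PHP 7.4+
    [0 => ['pipe', 'r'], 1 => ['pipe', 'w'], 2 => ['pipe', 'w']],
    $pipes
);
if (!is_resource($proc)) {
    throw new RuntimeException('Could not start inference process');
}

fwrite($pipes[0], $input);
fclose($pipes[0]);

$output = stream_get_contents($pipes[1]);
$errors = stream_get_contents($pipes[2]);
fclose($pipes[1]);
fclose($pipes[2]);

if (proc_close($proc) !== 0) {
    throw new RuntimeException("Inference failed: $errors");
}

$result = json_decode($output, true);
```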