Thanks for the detailed report! It all sounds feasible: with a large enough team of maybe 10 good engineers, I'm confident we can get all of that running in a couple of years, but it needs to be clear that they can't expect all of it to get done in any reasonable time frame with only $1000 per month. I'm OK with starting on small tasks and seeing how things go after a few months, for example. They also need to understand that, for example, PyTorch needs Python to optimize models; that's not something the C++ API can do, so we still need Python for many tasks. As for licensing, JavaCPP itself and everything written as part of Bytedeco is covered by this file, which should be fine:
Hi @saudet @sbrunk, I'd like to share today's communication with the Huawei engineers, the demonstrations I gave them, and their final intention to cooperate.
First, I showed them the model development and training workflow in a regular IDE. Then, I demonstrated PyTorch on Scala3 in a Jupyter notebook with the Scala kernel based on almond.sh. After that, I showed them the recently completed Gloo distributed-training results. Finally, I showed them the code for my PyTorch on Scala3 lessons and the contents of the Chinese book. They were very satisfied and surprised, and I believe this exceeded their expectations. They mentioned that in previous collaborations, no one had ever offered such a rich range of solutions.
After that, we discussed the terms of cooperation with Huawei.
In summary, I think Huawei is sincere about cooperating, and their needs are something that others cannot solve. Bytedeco will be their best choice. We need to use our advantages to make them rely on us. Huawei has a good reputation in terms of sponsorship; it is a responsible large company, and cooperating with them will also promote the development of the Bytedeco ecosystem.
In addition, in the future, I will also contact other Chinese GPU chip manufacturers, including Cambricon, Enflame, and Moore Threads. They may also follow Huawei's lead in creating Java versions of PyTorch for GPU adaptation in the future. I will establish contact with them and persuade them to cooperate and sponsor Bytedeco.
At this stage, I think our users are not just Huawei; Huawei is just one custom client. We have many more ordinary users who use NVIDIA CUDA GPUs. In mid-December, there will be a Scala3 meetup in Beijing, where I will be giving a talk on PyTorch on Scala3. I have already received inquiries from some readers asking whether our underlying javacpp-pytorch supports ProcessGroupNCCL, as this will determine whether they actually use javacpp-pytorch in their work. Their company's compute servers are all Ubuntu 22 or 24 with NVIDIA A100, H20, and GTX-6000 GPUs. In their tests, Gloo reached only about 35% of NCCL's speed. I said that it should not be a problem going forward, and that we will look into supporting ProcessGroupNCCL as soon as possible.
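For context on the Gloo-vs-NCCL choice: in upstream (Python) PyTorch, the backend is picked when the process group is initialized, and the usual convention is to prefer NCCL whenever CUDA GPUs are available, falling back to Gloo on CPU-only machines. Here is a minimal Python sketch of that selection logic; the helper name `pick_backend` is hypothetical, and the eventual javacpp-pytorch / Scala3 API will of course look different:

```python
def pick_backend(cuda_available: bool) -> str:
    """Prefer NCCL for GPU collectives (much faster, as the readers'
    ~35% Gloo-vs-NCCL measurement suggests); fall back to Gloo on CPU."""
    return "nccl" if cuda_available else "gloo"


# In upstream PyTorch this string is passed to
#   torch.distributed.init_process_group(backend=pick_backend(...))
# which constructs ProcessGroupNCCL or ProcessGroupGloo internally.
print(pick_backend(True))   # nccl
print(pick_backend(False))  # gloo
```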
In conclusion, I personally feel that more and more people will come to recognize javacpp-pytorch. We support a rich range of scenarios, and people are willing to use it.