Requesting assistance from the Laghos Team: partial assembly Implementation Inquiry #183
Comments
Certainly, we can provide assistance by addressing specific questions and offering general guidance. However, we won't have the availability to assist with actual code writing.
@vladotomov, yes, coding is my work; addressing questions and offering general guidance would be sufficient. My plan is to use the existing structure in Laghos. First, I added a new grid function for the stress vector, which has 3*(dim - 1) components: 3 components in 2D and 6 in 3D. I referenced this vector into the block vector (S). To calculate the stress rate, I tried to use … In … I called … After doing this calculation, I divided each component by mass using … Any comment would be helpful.
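The 3*(dim - 1) component count described above can be sketched in plain C++. This is a minimal illustration only; the Voigt-style ordering and the function names here are assumptions for the example, not the actual layout used in Laghos:

```cpp
#include <cassert>
#include <vector>

// A symmetric dim x dim stress tensor has 3*(dim - 1) independent
// components: 3 in 2D (sxx, syy, sxy) and 6 in 3D.
int NumStressComponents(int dim) { return 3 * (dim - 1); }

// Map a full row-major dim x dim symmetric tensor to a component vector.
// The ordering below (normal components first, then shears) is a common
// Voigt-style convention, assumed here for illustration.
std::vector<double> ToComponentVector(const std::vector<double> &sig, int dim)
{
   if (dim == 2)
   {
      return { sig[0], sig[3], sig[1] };   // sxx, syy, sxy
   }
   return { sig[0], sig[4], sig[8],        // sxx, syy, szz
            sig[5], sig[2], sig[1] };      // syz, sxz, sxy
}
```

With this layout, the stress grid function in 2D stores 3 values per degree of freedom and 6 in 3D, matching the 3*(dim - 1) count mentioned in the comment above.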
I don't understand what you're trying to do. I'm not sure this makes sense; you're not moving properly from quadrature points to dofs.
They should be the same if the stress (the quadrature data structure) is indeed …
@vladotomov, I've managed to solve the issue; the reshaping from vector to tensor had some problems. However, I've encountered another problem related to memory management in CUDA. I have a vector called … Just like … This addition works correctly when done on the CPU, but when I try to perform it in CUDA, the summation seems to be incorrect, or perhaps the addition isn't happening at all. I suspect that the … To address this, I tried explicitly assigning the body_force vector to reside in GPU memory using … It seems the problem lies in how I'm managing memory in CUDA, particularly in ensuring that the necessary data is available on the GPU for computation. Any comments or suggestions on how to properly handle memory allocation and data transfer between the CPU and GPU would be greatly appreciated.
Have you looked at the Memory manager section in the website docs?
@vladotomov Yes, I did, but it is over my head. Since … Maybe I should use …
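The host/device bookkeeping that the memory manager performs can be mimicked in a tiny self-contained model. This is only an analogy to clarify the idea, with invented names; it does not use MFEM's actual API, and the real memory manager moves data between CPU and GPU memory rather than between two host arrays:

```cpp
#include <cassert>
#include <vector>

// Toy model of lazy host/device synchronization: the vector tracks which
// copy is currently valid. Write() hands out the "device" buffer and marks
// it as the valid copy; HostRead() copies it back before any host access.
// Reading the host array directly after a device write (without the sync)
// reproduces the stale-data symptom described in the comment above.
struct ToyVector
{
   std::vector<double> host, dev;   // "dev" simulates GPU memory here
   bool dev_valid = false;

   explicit ToyVector(int n) : host(n, 0.0), dev(n, 0.0) {}

   double *Write()                  // device pointer for writing
   {
      dev_valid = true;
      return dev.data();
   }

   const double *HostRead()         // sync device -> host, then read
   {
      if (dev_valid) { host = dev; dev_valid = false; }
      return host.data();
   }
};
```

The point of the model: if a kernel writes through the device pointer but the CPU then reads the host array without the sync step, it sees old values, which looks exactly like "the addition isn't happening at all."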
If … To get the current data on the host (CPU), you need a … You should try to get the partial assembly implementation done first, before worrying about GPU execution. PA by itself can be fully tested on the CPU.
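The advice above, that partial assembly can be validated entirely on the CPU, can be illustrated with a minimal sketch. Partial assembly applies an operator of the form A = BᵀDB factor by factor (dofs to quadrature points, pointwise scaling by quadrature data, transpose back to dofs) without ever forming A. The matrix sizes and values below are purely illustrative:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Matrix-free (partial assembly) application of y = B^T D B x, where
// B interpolates ndof dofs to nqp quadrature points and D holds the
// precomputed quadrature data. The result must match the fully
// assembled matrix A = B^T D B applied to x, which is the CPU test.
std::vector<double> ApplyPA(const std::vector<std::vector<double>> &B,
                            const std::vector<double> &D,
                            const std::vector<double> &x)
{
   const int nqp = B.size(), ndof = x.size();
   std::vector<double> q(nqp, 0.0), y(ndof, 0.0);
   for (int k = 0; k < nqp; k++)                 // dofs -> quad points
      for (int j = 0; j < ndof; j++) { q[k] += B[k][j] * x[j]; }
   for (int k = 0; k < nqp; k++) { q[k] *= D[k]; }  // scale by quad data
   for (int j = 0; j < ndof; j++)                // quad points -> dofs
      for (int k = 0; k < nqp; k++) { y[j] += B[k][j] * q[k]; }
   return y;
}
```

Comparing ApplyPA against an explicitly assembled A on small inputs gives a complete correctness test with no GPU involved, which is the workflow suggested in the comment above.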
@vladotomov Thank you for your comment; I think I need time to digest it. I just tested vector operations within MFEM. In Laghos, velocity is calculated component-wise, as shown in the attached code. I printed out … Or is there any way to use operations defined in MFEM on the GPU?
Can you show the code that prints …?
Thank you for your comments. I ran a problem (consolidation due to body force) using the CPU and CUDA. The results for vertical stress and displacement are identical to my eye. However, the horizontal displacement is a bit different. Of course, the horizontal displacement is negligible in comparison with the vertical displacement, but I was wondering whether this is natural, due to machine differences, or something else. What do you think?
The differences seem too big; I'd guess it's a bug.
@vladotomov I see, but I don't know the reason for this yet; I'll let you know if I figure it out. Test 1: ./laghos -p 4 -m data/square_gresho.mesh -rs 4 -ok … I measured the calculation time and found that tests 1 to 6 took 3.878s, 6.128s, 8.854s, 1m2.657s, 2m48.928s, and 8m34.461s. There is a time jump after test 3. Do you have any thoughts on this? This is strange to me: the DOF count doesn't increase drastically, but the time increases a lot.
Strange indeed. Are you running this in your branch? Can I see the code somehow?
I used the master version of Laghos with a few modifications to run high-order elements greater than 4. For example, in … The remaining parts are identical to the master version. I've attached the code as laghos_pa.zip.
The problem was that the corresponding mass kernels for the velocity and energy matrices were not in MFEM. |
Thanks @vladotomov, it works now. It takes 0m17.078s, and the computation time now scales almost linearly.
How can I extend this to Q6Q5, Q7Q6, and Q8Q7, following the pattern of the kernels you added?
@sungho91, sure, more orders can be added.
Hi,
I'm working on developing a tectonic solver based on Laghos. It still has many areas for improvement, but I can now use it for some applications.
The main features I've added to Laghos are elasticity and (brittle) plasticity.
For many parts of this, I referred to Dobrev, V.A., et al. (2014), "High-order curvilinear finite elements for elastic–plastic Lagrangian dynamics".
However, I was only able to implement the assembly of the stress rate in full assembly mode.
Therefore, I'm unable to fully leverage the benefits of Laghos.
I'm wondering if I can get assistance from the Laghos team to implement stress assembly in partial assembly mode.
Sungho
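The elastic-plastic update mentioned in the issue can be sketched at the level of a single quadrature point. The scalar deviatoric update and the simple yield-stress cap below are illustrative assumptions for a hypoelastic, perfectly plastic material; they are not the scheme used in Laghos or the one from Dobrev et al. (2014), and all names are hypothetical:

```cpp
#include <cassert>
#include <cmath>

// Hypothetical quadrature-point update of one deviatoric stress component:
// elastic predictor s_trial = s + 2*mu*dev_eps_rate*dt, followed by a
// return-mapping step that caps |s| at the yield stress (perfect
// plasticity). In a real solver this acts on the full stress tensor and
// the stress rate assembled by the FEM operator, not on a scalar.
double UpdateDevStress(double s, double dev_eps_rate, double dt,
                       double mu, double yield)
{
   double trial = s + 2.0 * mu * dev_eps_rate * dt;  // elastic predictor
   if (std::fabs(trial) > yield)                     // plastic corrector
   {
      trial = (trial > 0.0 ? yield : -yield);        // project onto yield
   }
   return trial;
}
```

In a partial assembly setting, an update like this would live entirely in quadrature-point data, which is what makes the stress rate a natural candidate for a PA kernel.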