Networking/Filesystem: add 9P2000 network file client. #28977

Draft
wants to merge 34 commits into base: master
Conversation

@IamPete1 (Member) commented Jan 1, 2025

This allows AP to access network storage. It implements a full set of filesystem functions. Future work would be to allow logging to network storage and loading scripts from it.

This has only been tested in SITL so far; some more work is needed to get it working on real hardware (mostly just fixing the compiler warnings). It would be good to move to more union/structure-based packing rather than lots of memcpy-ing, but there are some slightly tricky variable-length fields. There are also loads of internal errors here to catch issues, which we may want to deal with in a better way. This is also blocking, so it is not for use in the main thread.

9P2000 docs are a bit thin on the ground; there is https://ericvh.github.io/9p-rfc/rfc9p2000.html and https://www.usenix.org/legacy/event/usenix05/tech/freenix/full_papers/hensbergen/hensbergen.pdf.
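
For orientation, here is a minimal sketch (not the PR's code; the helper names are made up) of how a 9P2000 message with a variable-length string field is laid out on the wire, which is where the awkward variable-length fields mentioned above come from:

    // Illustrative only: packing a 9P2000 Tversion message by hand.
    // All integers are little-endian; strings are a 2-byte count followed by
    // the bytes (no terminator), which is the tricky variable-length part.
    #include <cstdint>
    #include <string>
    #include <vector>

    static void put_le16(std::vector<uint8_t> &b, uint16_t v) {
        b.push_back(v & 0xFF);
        b.push_back(v >> 8);
    }

    static void put_le32(std::vector<uint8_t> &b, uint32_t v) {
        for (int i = 0; i < 4; i++) {
            b.push_back(v & 0xFF);
            v >>= 8;
        }
    }

    static void put_string(std::vector<uint8_t> &b, const std::string &s) {
        put_le16(b, uint16_t(s.size()));        // count[2]
        b.insert(b.end(), s.begin(), s.end());  // count bytes, no terminator
    }

    // Tversion = size[4] type[1] tag[2] msize[4] version[s]
    std::vector<uint8_t> pack_tversion(uint32_t msize) {
        std::vector<uint8_t> b;
        put_le32(b, 0);            // size placeholder, patched below
        b.push_back(100);          // type: Tversion
        put_le16(b, 0xFFFF);       // NOTAG, used for version negotiation
        put_le32(b, msize);        // largest message we are willing to handle
        put_string(b, "9P2000");   // protocol version string
        const uint32_t size = uint32_t(b.size());
        b[0] = size & 0xFF;
        b[1] = (size >> 8) & 0xFF;
        b[2] = (size >> 16) & 0xFF;
        b[3] = (size >> 24) & 0xFF;
        return b;
    }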

I have been testing against a server hosted with: https://github.com/knusbaum/go9p

Note that this is client only. You cannot access the AP file system remotely; the point is that you use an external file system in the first place, which you can access remotely without AP slowing things down.

@IamPete1 (Member, Author) commented Jan 5, 2025

Tested on CubeRed, works the same as SITL.

@davidbuzz changed the title from "Networking/Filesystem: add P92000 network file client." to "Networking/Filesystem: add 9P2000 network file client." on Jan 7, 2025
('AP_NETWORKING_BACKEND_PPP', 'AP_Networking_PPP::init'),
('AP_NETWORKING_CAN_MCAST_ENABLED', 'AP_Networking_CAN::start'),
('AP_NETWORKING_FILESYSTEM_ENABLED', r'AP_Networking::NineP2000::init'),
Contributor

Suggested change:
- ('AP_NETWORKING_FILESYSTEM_ENABLED', r'AP_Networking::NineP2000::init'),
+ ('AP_NETWORKING_FILESYSTEM_9P2000_ENABLED', r'AP_Networking::NineP2000::init'),
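
For context, the tuple pairs a build-time feature define with a symbol that should only be compiled in when the feature is enabled. A rough sketch of how such a define would typically gate the client (the exact guard placement in the PR may differ):

    // Illustrative only: compile-time gating with the renamed define.
    #if AP_NETWORKING_FILESYSTEM_9P2000_ENABLED
    void AP_Networking::NineP2000::init()
    {
        // 9P2000 client setup lives here when the feature is compiled in
    }
    #endif  // AP_NETWORKING_FILESYSTEM_9P2000_ENABLED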

@davidbuzz (Collaborator) commented:

"diod" is another impl...
https://github.com/chaos/diod/tree/master
... and it includes a good protocol.md
https://github.com/chaos/diod/blob/master/protocol.md

@IamPete1 (Member, Author) commented:

Some benchmarking with the new script, although I'm not sure how much I trust Lua with writing larger block sizes: it just assumes that the write worked and does some buffering. For reads we know the length of the data returned, so the script deals with incomplete results.

[image: benchmark results]

  • Read and write speeds are very similar; the network cost is the same, and the underlying storage speed on the server is negligible in comparison.
  • Each transaction takes almost constant time, so bigger blocks boost the data rate. Small block sizes result in terrible data rates.
  • Nothing above 4096 bytes worked reliably, and I'm not sure why. It works in SITL, so maybe it is just a robustness issue.
  • Not sure how optimized the 9P2000 server I'm using is; it's possible this is not just an AP issue.
  • Using Lua for the benchmark means we're never going to get speeds that compare to what we might see with a native benchmark. Hopefully Lua is not the limiting factor here.

I think the results are positive; write speeds are similar to the SD card for block sizes between 1024 and 4096.
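
To illustrate the "almost constant time per transaction" point above, here is a back-of-the-envelope model; the latency and link-rate numbers are assumptions chosen for illustration, not measurements from this benchmark:

    // Toy model: each request costs a roughly fixed round trip, so throughput
    // grows almost linearly with block size until the link rate dominates.
    #include <cstdio>

    int main() {
        const double rtt_s = 0.004;        // assumed ~4 ms per request round trip
        const double link_rate = 10.0e6;   // assumed usable link rate, bytes/s
        for (int block = 128; block <= 4096; block *= 2) {
            const double t = rtt_s + block / link_rate;   // time per request
            std::printf("block %4d B -> ~%6.1f kB/s\n", block, (block / t) / 1000.0);
        }
        return 0;
    }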

@IamPete1 (Member, Author) commented:

I have fixed up the higher block size read/write: it now splits long writes into multiple commands. As before, read and write speeds are the same. Speed increases linearly with block size until you have to add another command. This is why the 512 and 1024 block sizes are similar: with a 1024-byte buffer size the headers take up some space, so 1024 bytes of data have to be sent in two messages, and we end up sending the same total number of messages as we do for the 512 block size.

[image: benchmark results after splitting writes]

Currently the splits are all sent sequentially; the next speedup would probably be to send them in parallel, or to increase the buffer size.
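
A minimal sketch of the splitting arithmetic described above, assuming each Twrite can carry at most the negotiated msize minus its 23-byte header (size[4] type[1] tag[2] fid[4] offset[8] count[4]); the send helper is hypothetical and the PR's own constants and send path may differ:

    // Illustrative only: split one logical write into msize-limited Twrites.
    #include <algorithm>
    #include <cstdint>

    // Hypothetical helper, stubbed so the sketch stands alone; a real client
    // would serialise a Twrite and wait for the matching Rwrite, returning the
    // count the server acknowledged (0 on failure).
    static uint32_t send_twrite(uint32_t, uint64_t, const uint8_t *, uint32_t count) {
        return count;  // pretend the server accepted everything
    }

    uint32_t chunked_write(uint32_t fid, uint64_t offset,
                           const uint8_t *data, uint32_t len, uint32_t msize)
    {
        const uint32_t max_payload = msize - 23;  // data that fits in one message
        uint32_t written = 0;
        while (written < len) {
            const uint32_t count = std::min<uint32_t>(len - written, max_payload);
            const uint32_t got = send_twrite(fid, offset + written, data + written, count);
            if (got == 0) {
                break;  // error or dropped connection; report progress so far
            }
            written += got;  // the server may acknowledge a short write
        }
        return written;
    }

With msize = 1024 this gives a 1001-byte payload limit, so a 1024-byte block needs two messages, which matches the 512 vs 1024 observation above.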

@IamPete1 (Member, Author) commented:

Some messing about with buffers gets the write speeds on par with the SD card.
[image: benchmark results after buffer changes]

The read/write discrepancy might mean we're dropping some messages; I need to look into it some more.
