Improve Examples Annex (Draft v0.0.7 some feedback) #186
Thanks a lot for the feedback and for joining the meeting @jzvolensky.
Fully agreed. That will take some effort, but I think it is well worth it, and we need to improve on this before the OAB review.
I tried to explain this during the meeting, but it is indeed a bit confusing. Because of the resampling permission, a server supporting scaling can return a downsampled version of the coverage when a client asks for a large area of a high-resolution coverage. So in order to allow a client which absolutely only wants the full-resolution data, without having to care whether the server supports down-sampling or not, that client needs to be able to always pass `scale-factor=1`. I hope these explanations make it more clear. If you still feel this approach is too convoluted and confusing for its benefits, we could discuss further.
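For illustration, here is a minimal client-side sketch of the two request styles described above (Python with the requests library; the server URL and collection id are made up, only the `scale-factor` parameter comes from the discussion):

```python
# Minimal sketch (not from the draft text): the two request styles described above.
# The base URL and collection id are hypothetical.
import requests

COVERAGE = "https://example.com/ogcapi/collections/demo/coverage"  # hypothetical

# Plain request: a server that supports scaling is permitted to return a
# downsampled version if the full-resolution result would be too large.
r_default = requests.get(COVERAGE, params={"bbox": "-180,-90,180,90"})

# Explicit scale-factor=1: the client insists on the original resolution;
# a server that cannot honour this within its limits is expected to return
# a 4xx error instead of silently downsampling.
r_full = requests.get(COVERAGE, params={"bbox": "-180,-90,180,90",
                                        "scale-factor": "1"})
```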
The server can have limits on the maximum size in a particular dimension, or on the overall number of cells being returned, and these limits can be advertised in the service metadata.
I filed a related issue about this.
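As a rough sketch of how a client could make use of those advertised limits before issuing a large request (the `limits` structure and property names below are purely hypothetical, for illustration; the draft may encode this differently):

```python
# Sketch only: reading advertised limits from the collection description.
# The "limits" member and its property names are hypothetical -- the point
# is that a client can inspect the metadata before issuing a large request.
import requests

BASE = "https://example.com/ogcapi"  # hypothetical server
desc = requests.get(f"{BASE}/collections/demo",
                    headers={"Accept": "application/json"}).json()

limits = desc.get("limits", {})      # hypothetical member
max_cells = limits.get("maxCells")   # e.g. overall cell-count limit
max_width = limits.get("maxWidth")   # e.g. per-dimension size limit
print("Advertised limits:", max_cells, max_width)
```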
Hi Jerome, I read your response earlier but forgot to reply, haha!

Regarding the automatic downsampling: to me, doing it the other way around sounds more reasonable, or is probably more common. As a user, I want to get a coverage. It is too large, and the server does not want to give it to me, so it instead returns an error saying that I should set the scale-factor to 0.5, choose a smaller bbox, fewer timestamps, etc. to fit within the limits. In this draft spec, if I am a user and I want to get coverage data for my research project, I request the data and, because it is huge, an automatic scale factor of, for example, 0.2 is applied, right? However, I am not aware of this because I did not specify scale-factor myself, so the data may not be usable for my use case anymore, and I have to go back and update my request to fit the limits and ensure I get full-scale data back. This creates a redundant request to the server.

I think it is better to throw an error and be explicit instead of doing behind-the-scenes magic and auto-scaling just so users do not get an error. Of course, all of this depends on the real-world implementations.
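To make the workflow I have in mind concrete, here is a sketch of the "explicit error" flow (hypothetical URL and bbox values): request full-resolution data, and if the server rejects the request as too large, refine the request rather than receive silently downsampled data.

```python
# Sketch of the "explicit error" workflow described above (hypothetical
# URL and bbox values): request full-resolution data, and if the server
# rejects it as exceeding its limits, shrink the area of interest instead
# of accepting silently downsampled data.
import requests

COVERAGE = "https://example.com/ogcapi/collections/demo/coverage"  # hypothetical

params = {"bbox": "-180,-90,180,90", "scale-factor": "1"}
r = requests.get(COVERAGE, params=params)

if r.status_code >= 400:
    # The server says the full-resolution request is too large:
    # retry with a smaller bounding box (or fewer timestamps).
    params["bbox"] = "0,40,10,50"
    r = requests.get(COVERAGE, params=params)

r.raise_for_status()
data = r.content  # coverage payload at the original resolution
```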
Thanks for the reply @jzvolensky. There was a previous long discussion and resolution on this topic, e.g., see #54 (comment). I personally strongly feel that a client asking for the whole coverage cares more about getting something back for the full extent at the best resolution possible. If no subsetting was used, it's probably because the area of interest is the whole area. If the client wants to ensure the original resolution, it just needs to include `scale-factor=1` in the request.
Okay, I think I get it now. Thanks for the thorough explanations! I also read the comment you linked, and I can see what you mean now with the 4xx errors. I suppose time and future real-world implementations will show how this is perceived among general users (hopefully well). Thanks again!
Hello, I had a chance to look through the document and wanted to leave some feedback/thoughts here.
Overall, I think the draft is looking good so far. Perhaps because I am not so familiar with standardization documents, I feel more examples would be useful. We have also discussed including the responses to the examples as well, as has been done in other OGC standards. This would certainly make it easier to follow, especially when you can do the "same" operation in multiple ways, e.g. using subset or bbox and time. The figures illustrating the point/area overlaps are nice and helpful.
7.2.4 Coverage data retrieval requirement J:
If we are not supporting scaling, why do we need to accept the parameter with a default value and error out otherwise? Wouldn't we just naturally provide the original data at full scale, since we do not support scaling? /per/core/limits below describes the limits, so that point is also a bit confusing. I don't quite understand the purpose of this.
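For what it's worth, my reading of that requirement is roughly the following (a sketch only, in plain Python, with made-up function and message names; the actual requirement text takes precedence):

```python
# Sketch of how I read the requirement for a server that does NOT support
# scaling: the scale-factor parameter is still accepted, defaults to 1,
# and any other value is rejected.  Function and message names are made up.
def check_scale_factor(query_params: dict):
    """Return None if the request can proceed, or a (status, message) error."""
    raw = query_params.get("scale-factor", "1")  # default value of 1
    try:
        scale_factor = float(raw)
    except ValueError:
        return 400, "scale-factor must be a number"
    if scale_factor != 1:
        # Scaling is not supported, so only the default value is allowed.
        return 400, "this server does not support scaling; only scale-factor=1 is accepted"
    return None  # proceed and return the original, full-scale data


# A request with no scale-factor at all behaves as scale-factor=1.
assert check_scale_factor({}) is None
assert check_scale_factor({"scale-factor": "0.5"}) is not None
```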
Thanks!