
Commit 1ad4a99
Port api reference (#11152)
1 parent 6710d78

32 files changed: +296 additions, -61 deletions

docs/IE_PLUGIN_DG/Intro.md

Lines changed: 4 additions & 3 deletions

@@ -9,11 +9,12 @@
    Implement Plugin Functionality <openvino_docs_ie_plugin_dg_plugin>
    Implement Executable Network Functionality <openvino_docs_ie_plugin_dg_executable_network>
-   openvino_docs_ie_plugin_dg_quantized_networks
    Implement Synchronous Inference Request <openvino_docs_ie_plugin_dg_infer_request>
    Implement Asynchronous Inference Request <openvino_docs_ie_plugin_dg_async_infer_request>
    openvino_docs_ie_plugin_dg_plugin_build
    openvino_docs_ie_plugin_dg_plugin_testing
+   openvino_docs_ie_plugin_detailed_guides
+   openvino_docs_ie_plugin_api_references

 @endsphinxdirective

@@ -61,5 +62,5 @@ Detailed guides
 API References
 -----------------------

-* [Inference Engine Plugin API](groupie_dev_api.html)
-* [Inference Engine Transformation API](groupie_transformation_api.html)
+* [Inference Engine Plugin API](@ref ie_dev_api)
+* [Inference Engine Transformation API](@ref ie_transformation_api)
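The second hunk above replaces hard-coded generated-HTML filenames with Doxygen `@ref` anchors, so links are resolved symbolically at documentation build time instead of depending on the generator's output naming. A minimal sketch of the difference (the list items come from the diff; the comments are editorial):

```markdown
<!-- Before: points at a generated .html file name, which breaks if the generator's naming changes -->
* [Inference Engine Plugin API](groupie_dev_api.html)

<!-- After: symbolic reference to the ie_dev_api group, resolved by Doxygen -->
* [Inference Engine Plugin API](@ref ie_dev_api)
```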

docs/IE_PLUGIN_DG/detailed_guides.md

Lines changed: 18 additions & 0 deletions

@@ -0,0 +1,18 @@
+# Advanced Topics {#openvino_docs_ie_plugin_detailed_guides}
+
+@sphinxdirective
+
+.. toctree::
+   :maxdepth: 1
+   :hidden:
+
+   openvino_docs_ie_plugin_dg_quantized_networks
+   openvino_docs_IE_DG_lpt
+
+@endsphinxdirective
+
+The guides below provides extra information about specific features of OpenVINO needed for understanding during OpenVINO plugin development:
+
+* [Quantized networks](@ref openvino_docs_ie_plugin_dg_quantized_networks)
+* [Low precision transformations](@ref openvino_docs_IE_DG_lpt) guide
+* [Writing OpenVINO™ transformations](@ref openvino_docs_transformations) guide
Lines changed: 17 additions & 0 deletions

@@ -0,0 +1,17 @@
+# Plugin API Reference {#openvino_docs_ie_plugin_api_references}
+
+@sphinxdirective
+
+.. toctree::
+   :maxdepth: 1
+   :hidden:
+
+   ../groupie_dev_api
+   ../groupie_transformation_api
+
+@endsphinxdirective
+
+The guides below provides extra API references needed for OpenVINO plugin development:
+
+* [OpenVINO Plugin API](@ref ie_dev_api)
+* [OpenVINO Transformation API](@ref ie_transformation_api)

docs/IE_PLUGIN_DG/plugin_transformation_pipeline/PluginTransformationPipeline.md

Lines changed: 0 additions & 17 deletions
This file was deleted.

docs/api/api_reference.rst

Lines changed: 3 additions & 2 deletions

@@ -10,6 +10,7 @@ API references available:

 .. toctree::
    :maxdepth: 2
-
-   ../groupie_cpp_api
+
+   ../groupov_cpp_api
+   ../groupie_c_api
    ie_python_api/api

docs/documentation.md

Lines changed: 1 addition & 3 deletions

@@ -72,9 +72,7 @@

    openvino_docs_Extensibility_UG_Intro
    openvino_docs_transformations
-   Inference Engine Plugin Developer Guide <openvino_docs_ie_plugin_dg_overview>
-   groupie_dev_api
-   Plugin Transformation Pipeline <openvino_docs_IE_DG_plugin_transformation_pipeline>
+   OpenVINO Plugin Developer Guide <openvino_docs_ie_plugin_dg_overview>

 .. toctree::
    :maxdepth: 1

docs/doxyrest-config.lua

Lines changed: 1 addition & 1 deletion

@@ -79,7 +79,7 @@ EXTRA_PAGE_LIST = {}
 --! is not set (otherwise, the title of intro file will be used).
 --!

-INDEX_TITLE = "Inference Engine C++ API Reference"
+INDEX_TITLE = "OpenVINO Runtime C++ API Reference"

 --!
 --! File with project introduction (reStructuredText). When non-nil, this file

src/common/transformations/include/transformations_visibility.hpp

Lines changed: 0 additions & 1 deletion

@@ -12,7 +12,6 @@
 */

 /**
- * @ingroup ie_cpp_api
  * @defgroup ie_transformation_api Inference Engine Transformation API
  * @brief Defines Inference Engine Transformations API which is used to transform ngraph::Function
  *

src/core/include/openvino/core/core_visibility.hpp

Lines changed: 29 additions & 0 deletions

@@ -11,6 +11,35 @@
 // OPENVINO_API is used for the public API symbols. It either DLL imports or DLL exports
 // (or does nothing for static build)

+/**
+ * @defgroup ov_cpp_api OpenVINO Runtime C++ API
+ * OpenVINO Runtime C++ API
+ *
+ * @defgroup ov_model_cpp_api Basics
+ * @ingroup ov_cpp_api
+ * OpenVINO Core C++ API to work with ov::Model, dynamic and static shapes, types
+ *
+ * @defgroup ov_ops_cpp_api Operations
+ * @ingroup ov_cpp_api
+ * OpenVINO C++ API to create operations from different opsets. Such API is used to
+ * creation models from code, write transformations and traverse the model graph
+ *
+ * @defgroup ov_opset_cpp_api Operation sets
+ * @ingroup ov_cpp_api
+ * OpenVINO C++ API to work with operation sets
+ *
+ * @defgroup ov_runtime_cpp_api Inference
+ * @ingroup ov_cpp_api
+ * OpenVINO Inference C++ API provides ov::Core, ov::CompiledModel, ov::InferRequest
+ * and ov::Tensor classes
+ */
+
+/**
+ * @brief OpenVINO C++ API
+ * @ingroup ov_cpp_api
+ */
+namespace ov {}  // namespace ov
+
 #ifdef _WIN32
 #    pragma warning(disable : 4251)
 #    pragma warning(disable : 4275)

src/core/include/openvino/core/model.hpp

Lines changed: 5 additions & 1 deletion

@@ -34,7 +34,11 @@ class FrontEnd;
 }

 class ModelAccessor;
-/// A user-defined model.
+
+/**
+ * @brief A user-defined model
+ * @ingroup ov_model_cpp_api
+ */
 class OPENVINO_API Model : public std::enable_shared_from_this<Model> {
     friend class frontend::FrontEnd;
     friend OPENVINO_API std::shared_ptr<Model> clone_model(const Model& func,

src/core/include/openvino/runtime/allocator.hpp

Lines changed: 1 addition & 0 deletions

@@ -61,6 +61,7 @@ class Tensor;
 /**
  * @brief Wraps allocator implementation to provide safe way to store allocater loaded from shared library
  * And constructs default based on `new` `delete` c++ calls allocator if created without parameters
+ * @ingroup ov_runtime_cpp_api
  */
 class OPENVINO_API Allocator {
     AllocatorImpl::Ptr _impl;

src/core/include/openvino/runtime/tensor.hpp

Lines changed: 4 additions & 1 deletion

@@ -30,8 +30,8 @@ class VariableState;

 /**
  * @brief Tensor API holding host memory
- *
  * It can throw exceptions safely for the application, where it is properly handled.
+ * @ingroup ov_runtime_cpp_api
  */
 class OPENVINO_API Tensor {
 protected:

@@ -208,6 +208,9 @@ class OPENVINO_API Tensor {
     }
 };

+/**
+ * @brief A vector of Tensor's
+ */
 using TensorVector = std::vector<Tensor>;

 namespace runtime {

src/inference/include/ie/ie_api.h

Lines changed: 1 addition & 6 deletions

@@ -2,16 +2,11 @@
 // SPDX-License-Identifier: Apache-2.0
 //

-/**
- * @defgroup ie_cpp_api Inference Engine C++ API
- * Inference Engine C++ API
- */
-
 /**
  * @brief The macro defines a symbol import/export mechanism essential for Microsoft Windows(R) OS.
- *
  * @file ie_api.h
  */
+
 #pragma once

 #if defined(OPENVINO_STATIC_LIBRARY) || defined(USE_STATIC_IE) || (defined(__GNUC__) && (__GNUC__ < 4))

src/inference/include/ie/ie_version.hpp

Lines changed: 0 additions & 1 deletion

@@ -27,7 +27,6 @@
 #include "ie_api.h"

 /**
- * @ingroup ie_cpp_api
  * @brief Inference Engine C++ API
  */
 namespace InferenceEngine {

src/inference/include/openvino/runtime/common.hpp

Lines changed: 1 addition & 0 deletions

@@ -46,6 +46,7 @@ namespace ie = InferenceEngine;

 /**
  * @brief This type of map is used for result of Core::query_model
+ * @ingroup ov_runtime_cpp_api
  * - `key` means operation name
  * - `value` means device name supporting this operation
  */

src/inference/include/openvino/runtime/compiled_model.hpp

Lines changed: 1 addition & 0 deletions

@@ -32,6 +32,7 @@ class InferRequest;

 /**
  * @brief This class represents a compiled model.
+ * @ingroup ov_runtime_cpp_api
  * A model is compiled by a specific device by applying multiple optimization
  * transformations, then mapping to compute kernels.
  */

src/inference/include/openvino/runtime/core.hpp

Lines changed: 1 addition & 0 deletions

@@ -34,6 +34,7 @@ namespace ov {

 /**
  * @brief This class represents an OpenVINO runtime Core entity.
+ * @ingroup ov_runtime_cpp_api
  * User applications can create several Core class instances, but in this case the underlying plugins
  * are created multiple times and not shared between several Core instances. The recommended way is to have
  * a single Core instance per application.

src/inference/include/openvino/runtime/exception.hpp

Lines changed: 5 additions & 2 deletions

@@ -11,16 +11,19 @@ namespace ov {

 /**
  * @brief Thrown in case of cancelled asynchronous operation.
+ * @ingroup ov_runtime_cpp_api
  */
 class OPENVINO_RUNTIME_API Cancelled : public Exception {
     using Exception::Exception;
 };

 /**
- * @brief Thrown in case of calling the InferRequest methods while the request is busy with compute operation.
+ * @brief Thrown in case of calling the InferRequest methods while the request is
+ * busy with compute operation.
+ * @ingroup ov_runtime_cpp_api
  */
 class OPENVINO_RUNTIME_API Busy : public Exception {
     using Exception::Exception;
 };

-}  // namespace ov
+}  // namespace ov

src/inference/include/openvino/runtime/infer_request.hpp

Lines changed: 1 addition & 0 deletions

@@ -29,6 +29,7 @@ class CompiledModel;

 /**
  * @brief This is a class of infer request that can be run in asynchronous or synchronous manners.
+ * @ingroup ov_runtime_cpp_api
  */
 class OPENVINO_RUNTIME_API InferRequest {
     std::shared_ptr<InferenceEngine::IInferRequestInternal> _impl;

src/inference/include/openvino/runtime/intel_gna/properties.hpp

Lines changed: 18 additions & 0 deletions

@@ -14,6 +14,12 @@

 namespace ov {

+/**
+ * @defgroup ov_runtime_gna_prop_cpp_api Intel GNA specific properties
+ * @ingroup ov_runtime_cpp_api
+ * Set of Intel GNA specific properties.
+ */
+
 /**
  * @brief Namespace with Intel GNA specific properties
  */

@@ -22,12 +28,14 @@ namespace intel_gna {
 /**
  * @brief Property to get an std::string of GNA Library version, usually in the form
  * <API_REVISION>.<RELEASE_LINE>.<RELEASE>.<BUILD>
+ * @ingroup ov_runtime_gna_prop_cpp_api
  */
 static constexpr Property<std::string, PropertyMutability::RO> library_full_version{"GNA_LIBRARY_FULL_VERSION"};

 /**
  * @brief Scale factor provided by the user to use static quantization.
  * This option should be used with floating point value serialized to string with . (dot) as a decimal separator
+ * @ingroup ov_runtime_gna_prop_cpp_api
  * @details In the case of multiple inputs, individual scale factors can be provided using the
  * map where key is layer name and value is scale factor
  * Example:

@@ -45,11 +53,13 @@ static constexpr Property<std::map<std::string, float>> scale_factors_per_input{

 /**
  * @brief if turned on, dump GNA firmware model into specified file
+ * @ingroup ov_runtime_gna_prop_cpp_api
  */
 static constexpr Property<std::string> firmware_model_image_path{"GNA_FIRMWARE_MODEL_IMAGE"};

 /**
  * @brief Enum to define software acceleration mode
+ * @ingroup ov_runtime_gna_prop_cpp_api
  */
 enum class ExecutionMode {
     AUTO = 0,  //!< Uses Intel GNA if available, otherwise uses software execution mode on CPU.

@@ -103,6 +113,7 @@ inline std::istream& operator>>(std::istream& is, ExecutionMode& execution_mode)

 /**
  * @brief Enum to define HW compile and execution targets
+ * @ingroup ov_runtime_gna_prop_cpp_api
  */
 enum class HWGeneration {
     UNDEFINED = 0,  //!< GNA HW generation is undefined

@@ -143,6 +154,7 @@ inline std::istream& operator>>(std::istream& is, HWGeneration& hw_generation) {
 /**
  * @brief GNA proc_type setting that should be one of AUTO, HW, GNA_HW_WITH_SW_FBACK,
  * GNA_SW_EXACT or SW_FP32
+ * @ingroup ov_runtime_gna_prop_cpp_api
  */
 static constexpr Property<ExecutionMode> execution_mode{"GNA_DEVICE_MODE"};

@@ -153,22 +165,26 @@ static constexpr Property<ExecutionMode> execution_mode{"GNA_DEVICE_MODE"};
  * If HW is not present, use the option corresponding to the latest fully supported GNA HW generation.
  * A fully supported GNA HW generation means it must be supported by both the OV GNA Plugin and the core GNA Library.
  * Currently, the latest supported GNA HW generation corresponds to GNA_3_0.
+ * @ingroup ov_runtime_gna_prop_cpp_api
  */
 static constexpr Property<HWGeneration> execution_target{"GNA_HW_EXECUTION_TARGET"};

 /**
  * @brief The option to override the GNA HW compile target. May be one of GNA_2_0, GNA_3_0.
  * By default the same as execution_target.
+ * @ingroup ov_runtime_gna_prop_cpp_api
  */
 static constexpr Property<HWGeneration> compile_target{"GNA_HW_COMPILE_TARGET"};

 /**
  * @brief if enabled produced minimum memory footprint for compiled model in GNA memory, default value is true
+ * @ingroup ov_runtime_gna_prop_cpp_api
  */
 static constexpr Property<bool> memory_reuse{"GNA_COMPACT_MODE"};

 /**
  * @brief Enum to define PWL design algorithm
+ * @ingroup ov_runtime_gna_prop_cpp_api
  */
 enum class PWLDesignAlgorithm {
     UNDEFINED = 0,  //!< PWL approximation algorithm is undefined

@@ -213,13 +229,15 @@ inline std::istream& operator>>(std::istream& is, PWLDesignAlgorithm& pwl_design
  * If value is UNIFORM_DISTRIBUTION then simple uniform distribution is used to create
  * PWL approximation of activation functions.
  * Uniform distribution usually gives poor approximation with the same number of segments
+ * @ingroup ov_runtime_gna_prop_cpp_api
  */
 static constexpr Property<PWLDesignAlgorithm> pwl_design_algorithm{"GNA_PWL_DESIGN_ALGORITHM"};

 /**
  * @brief The option to allow to specify the maximum error percent that the optimized algorithm finding
  * will be used to find PWL functions.
  * By default (in case of NO value set), 1.0 value is used.
+ * @ingroup ov_runtime_gna_prop_cpp_api
  */
 static constexpr Property<float> pwl_max_error_percent{"GNA_PWL_MAX_ERROR_PERCENT"};
