[exporterhelper] Add queue options to the new exporter helper
This change enables queueing capability for the new exporter helper. For now, it preserves the same user-facing configuration interface as the existing exporter helper. The only difference is that persistence is now optional, since enabling it requires providing marshal and unmarshal functions for the custom request.

Later, more options can be introduced for controlling the queue, such as limits on the count of items or bytes in the queue.
dmitryax committed Aug 31, 2023
1 parent 7030388 commit beabbce
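
To illustrate the new options, here is a minimal usage sketch; the myexporter package and the helper function name are placeholders, while the exporterhelper calls are the ones added by this change:

package myexporter

import (
	"go.opentelemetry.io/collector/exporter/exporterhelper"
)

// memoryQueueOption builds a queue option that can be passed to one of the
// New[Traces|Metrics|Logs]RequestExporter helpers alongside the other options.
func memoryQueueOption() exporterhelper.Option {
	// Defaults from this change: enabled, 10 consumers, the default queue size (1000).
	qCfg := exporterhelper.NewDefaultQueueConfig()
	// NewMemoryQueue returns nil when qCfg.Enabled is false;
	// WithRequestQueue attaches the resulting queue to the exporter.
	return exporterhelper.WithRequestQueue(exporterhelper.NewMemoryQueue(qCfg))
}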
Showing 10 changed files with 234 additions and 13 deletions.
34 changes: 34 additions & 0 deletions .chloggen/exporter-helper-v2.yaml
@@ -0,0 +1,34 @@
# One of 'breaking', 'deprecation', 'new_component', 'enhancement', 'bug_fix'
change_type: enhancement

# The name of the component, or a single word describing the area of concern, (e.g. otlpreceiver)
component: exporter/exporterhelper

# A brief description of the change. Surround your text with quotes ("") if it needs to start with a backtick (`).
note: Add an API for enabling queueing in the new exporter helpers.

# One or more tracking issues or pull requests related to the change
issues: [7874]

# (Optional) One or more lines of additional information to render under the primary note.
# These lines will be padded with 2 spaces and then inserted directly into the document.
# Use pipe (|) for multiline entries.
subtext: |
The following experimental API is introduced in exporter/exporterhelper package:
- `WithRequestQueue`: a new exporter helper option for using a queue.
- `Queue`: an interface for queueing requests.
- `NewMemoryQueue`: a function for creating a new memory queue.
- `NewPersistentQueue`: a function for creating a new persistent queue.
- `QueueConfig`: a configuration for queueing requests, used by the NewMemoryQueue function.
- `NewDefaultQueueConfig`: a function for creating a default QueueConfig.
- `PersistentQueueConfig`: a configuration for queueing requests in persistent storage, used by the NewPersistentQueue function.
- `NewDefaultPersistentQueueConfig`: a function for creating a default PersistentQueueConfig.
All the new APIs are intended to be used by exporters that need to operate over client-provided requests instead of pdata.
# Optional: The change log or logs in which this entry should be included.
# e.g. '[user]' or '[user, api]'
# Include 'user' if the change is relevant to end users.
# Include 'api' if there is a change to a library API.
# Default: '[user]'
change_logs: [api]
2 changes: 1 addition & 1 deletion CHANGELOG-API.md
@@ -21,7 +21,7 @@ If you are looking for user-facing changes, check out [CHANGELOG.md](./CHANGELOG
- `LogsConverter`: an interface for converting plog.Logs to Request.
- `MetricsConverter`: an interface for converting pmetric.Metrics to Request.
- `TracesConverter`: an interface for converting ptrace.Traces to Request.
All the new APIs are intended to be used by exporters that need to operate over client-provided requests instead of pdata.
All the new APIs are intended to be used by exporters that operate over client-provided requests instead of pdata.

- `otlpreceiver`: Export HTTPConfig as part of the API for creating the otlpreceiver configuration. (#8175)
Changes signature of receiver/otlpreceiver/config.go type httpServerSettings to HTTPConfig.
14 changes: 12 additions & 2 deletions exporter/exporterhelper/common.go
@@ -62,7 +62,7 @@ type baseSettings struct {
component.ShutdownFunc
consumerOptions []consumer.Option
TimeoutSettings
queue internal.ProducerConsumerQueue
queue Queue
RetrySettings
requestExporter bool
marshaler internal.RequestMarshaler
@@ -131,7 +131,8 @@ func WithRetry(retrySettings RetrySettings) Option {
func WithQueue(config QueueSettings) Option {
return func(o *baseSettings) {
if o.requestExporter {
panic("queueing is not available for the new request exporters yet")
panic("this option is not available for the new request exporters, " +
"use WithMemoryQueue or WithPersistentQueue instead")
}
if !config.Enabled {
return
@@ -144,6 +145,15 @@ func WithQueue(config QueueSettings) Option {
}
}

// WithRequestQueue enables queueing for an exporter.
// This API is at the early stage of development and may change without backward compatibility
// until https://github.com/open-telemetry/opentelemetry-collector/issues/8122 is resolved.
func WithRequestQueue(queue Queue) Option {
return func(o *baseSettings) {
o.queue = queue
}
}

// WithCapabilities overrides the default Capabilities() function for a Consumer.
// The default is non-mutable data.
// TODO: Verify if we can change the default to be mutable as we do for processors.
2 changes: 2 additions & 0 deletions exporter/exporterhelper/internal/bounded_memory_queue.go
@@ -101,3 +101,5 @@ func (q *boundedMemoryQueue) Capacity() int {
func (q *boundedMemoryQueue) IsPersistent() bool {
return false
}

func (q *boundedMemoryQueue) unexported() {}

2 changes: 2 additions & 0 deletions exporter/exporterhelper/internal/persistent_queue.go
@@ -108,6 +108,8 @@ func (pq *persistentQueue) IsPersistent() bool {
return true
}

func (pq *persistentQueue) unexported() {}

func toStorageClient(ctx context.Context, storageID component.ID, host component.Host, ownerID component.ID, signal component.DataType) (storage.Client, error) {
extension, err := getStorageExtension(host.GetExtensions(), storageID)
if err != nil {
2 changes: 2 additions & 0 deletions exporter/exporterhelper/internal/producer_consumer_queue.go
@@ -37,4 +37,6 @@ type ProducerConsumerQueue interface {
// IsPersistent returns true if the queue is persistent.
// TODO: Do not expose this method if the interface moves to a public package.
IsPersistent() bool

unexported()
}
116 changes: 113 additions & 3 deletions exporter/exporterhelper/queued_retry.go
@@ -27,6 +27,58 @@ const defaultQueueSize = 1000

var errSendingQueueIsFull = errors.New("sending_queue is full")

// Queue defines the queue interface for exporterhelper.
// This API is at the early stage of development and may change without backward compatibility
// until https://github.com/open-telemetry/opentelemetry-collector/issues/8122 is resolved.
type Queue = internal.ProducerConsumerQueue

// NewMemoryQueue creates a new in-memory queue. If config.Enabled is false, it returns nil.
// Should be used with the WithRequestQueue option.
// This API is at the early stage of development and may change without backward compatibility
// until https://github.com/open-telemetry/opentelemetry-collector/issues/8122 is resolved.
func NewMemoryQueue(config QueueConfig) Queue {
if !config.Enabled {
return nil
}
return internal.NewBoundedMemoryQueue(config.QueueSize, config.NumConsumers)

}

// NewPersistentQueue creates a new queue backed by file storage if config.StorageID is not nil.
// If config.StorageID is nil, it creates a new in-memory queue. If config.Enabled is false, it returns nil.
// Should be used with the WithRequestQueue option.
// This API is at the early stage of development and may change without backward compatibility
// until https://github.com/open-telemetry/opentelemetry-collector/issues/8122 is resolved.
func NewPersistentQueue(config PersistentQueueConfig, marshaler RequestMarshaler, unmarshaler RequestUnmarshaler) Queue {
if !config.Enabled {
return nil
}

if config.StorageID == nil {
return internal.NewBoundedMemoryQueue(config.QueueSize, config.NumConsumers)
}

return internal.NewPersistentQueue(
config.QueueSize,
config.NumConsumers,
*config.StorageID,
func(req internal.Request) ([]byte, error) {
r, ok := req.(*request)
if !ok {
return nil, fmt.Errorf("invalid request type: %T", req)
}
return marshaler(r.Request)

},
func(data []byte) (internal.Request, error) {
req, err := unmarshaler(data)
if err != nil {
return nil, err
}
return &request{
Request: req,
baseRequest: baseRequest{ctx: context.Background()},
}, nil

},
)
}

// QueueSettings defines configuration for queueing batches before sending to the consumerSender.
type QueueSettings struct {
// Enabled indicates whether to not enqueue batches before sending to the consumerSender.
@@ -65,20 +117,78 @@ func (qCfg *QueueSettings) Validate() error {
return nil
}

// QueueConfig defines configuration for queueing requests before exporting.
// It's supposed to be used with the new exporter helpers New[Traces|Metrics|Logs]RequestExporter.
// This API is at the early stage of development and may change without backward compatibility
// until https://github.com/open-telemetry/opentelemetry-collector/issues/8122 is resolved.
type QueueConfig struct {
// Enabled indicates whether to enqueue batches before exporting.
Enabled bool `mapstructure:"enabled"`
// NumConsumers is the number of consumers from the queue.
NumConsumers int `mapstructure:"num_consumers"`
// QueueSize is the maximum number of batches allowed in queue at a given time.
// This field is left for backward compatibility with QueueSettings.
// Later, it will be replaced with size fields specified explicitly in terms of items or batches.
QueueSize int `mapstructure:"queue_size"`
}

// NewDefaultQueueConfig returns the default QueueConfig.
// This API is at the early stage of development and may change without backward compatibility
// until https://github.com/open-telemetry/opentelemetry-collector/issues/8122 is resolved.
func NewDefaultQueueConfig() QueueConfig {
return QueueConfig{
Enabled: true,
NumConsumers: 10,
QueueSize: defaultQueueSize,
}
}

// PersistentQueueConfig defines configuration for queueing requests before exporting using persistent storage.
// It's supposed to be used with the new exporter helpers New[Traces|Metrics|Logs]RequestExporter and will replace
// QueueSettings in the future.
// This API is at the early stage of development and may change without backward compatibility
// until https://github.com/open-telemetry/opentelemetry-collector/issues/8122 is resolved.
type PersistentQueueConfig struct {
QueueConfig `mapstructure:",squash"`
// StorageID, if not empty, enables persistent storage and uses the specified component
// as a storage extension for the persistent queue.
StorageID *component.ID `mapstructure:"storage"`
}

// NewDefaultPersistentQueueConfig returns the default PersistentQueueConfig.
// This API is at the early stage of development and may change without backward compatibility
// until https://github.com/open-telemetry/opentelemetry-collector/issues/8122 is resolved.
func NewDefaultPersistentQueueConfig() PersistentQueueConfig {
return PersistentQueueConfig{
QueueConfig: NewDefaultQueueConfig(),
}
}

// Validate checks if the QueueConfig configuration is valid
func (qCfg *QueueConfig) Validate() error {
if !qCfg.Enabled {
return nil
}
if qCfg.QueueSize <= 0 {
return errors.New("queue size must be positive")
}
return nil
}

type queuedRetrySender struct {
fullName string
id component.ID
signal component.DataType
consumerSender requestSender
queue internal.ProducerConsumerQueue
queue Queue
retryStopCh chan struct{}
traceAttribute attribute.KeyValue
logger *zap.Logger
requeuingEnabled bool
}

func newQueuedRetrySender(id component.ID, signal component.DataType, queue internal.ProducerConsumerQueue,
rCfg RetrySettings, nextSender requestSender, logger *zap.Logger) *queuedRetrySender {
func newQueuedRetrySender(id component.ID, signal component.DataType, queue Queue, rCfg RetrySettings,
nextSender requestSender, logger *zap.Logger) *queuedRetrySender {
retryStopCh := make(chan struct{})
sampledLogger := createSampledLogger(logger)
traceAttr := attribute.String(obsmetrics.ExporterKey, id.String())
37 changes: 37 additions & 0 deletions exporter/exporterhelper/queued_retry_test.go
@@ -392,6 +392,18 @@ func TestQueueSettings_Validate(t *testing.T) {
assert.NoError(t, qCfg.Validate())
}

func TestQueueConfig_Validate(t *testing.T) {
qCfg := NewDefaultQueueConfig()
assert.NoError(t, qCfg.Validate())

qCfg.QueueSize = 0
assert.EqualError(t, qCfg.Validate(), "queue size must be positive")

// Confirm Validate doesn't return error with invalid config when feature is disabled
qCfg.Enabled = false
assert.NoError(t, qCfg.Validate())
}

// if requeueing is enabled, we eventually retry even if we failed at first
func TestQueuedRetry_RequeuingEnabled(t *testing.T) {
qCfg := NewDefaultQueueSettings()
@@ -505,6 +517,31 @@ func TestQueuedRetryPersistenceEnabledStorageError(t *testing.T) {
require.Error(t, be.Start(context.Background(), host), "could not get storage client")
}

func TestPersistentQueueRetryPersistenceEnabledStorageError(t *testing.T) {
storageError := errors.New("could not get storage client")
tt, err := obsreporttest.SetupTelemetry(defaultID)
require.NoError(t, err)
t.Cleanup(func() { require.NoError(t, tt.Shutdown(context.Background())) })

qCfg := NewDefaultPersistentQueueConfig()
storageID := component.NewIDWithName("file_storage", "storage")
qCfg.StorageID = &storageID // enable persistence
rCfg := NewDefaultRetrySettings()
set := tt.ToExporterCreateSettings()
bs := newBaseSettings(true, nil, nil, WithRetry(rCfg),
WithRequestQueue(NewPersistentQueue(qCfg, fakeRequestMarshaler, fakeRequestUnmarshaler)))
be, err := newBaseExporter(set, bs, "")
require.NoError(t, err)

var extensions = map[component.ID]component.Component{
storageID: &mockStorageExtension{GetClientError: storageError},
}
host := &mockHost{ext: extensions}

// we fail to start if we get an error creating the storage client
require.Error(t, be.Start(context.Background(), host), "could not get storage client")
}

func TestQueuedRetryPersistentEnabled_shutdown_dataIsRequeued(t *testing.T) {

produceCounter := &atomic.Uint32{}
6 changes: 6 additions & 0 deletions exporter/exporterhelper/request.go
Expand Up @@ -52,3 +52,9 @@ func (req *request) Count() int {
}
return 0
}

// RequestMarshaler is a function that can marshal a Request into bytes.
type RequestMarshaler func(req Request) ([]byte, error)

// RequestUnmarshaler is a function that can unmarshal bytes into a Request.
type RequestUnmarshaler func(data []byte) (Request, error)
32 changes: 25 additions & 7 deletions exporter/exporterhelper/request_test.go
@@ -5,23 +5,25 @@ package exporterhelper

import (
"context"
"encoding/json"
"errors"

"go.opentelemetry.io/collector/pdata/plog"
"go.opentelemetry.io/collector/pdata/pmetric"
"go.opentelemetry.io/collector/pdata/ptrace"
)

type fakeRequest struct {
items int
err error
Items int
Err error
}

func (r fakeRequest) Export(_ context.Context) error {
return r.err
return r.Err
}

func (r fakeRequest) ItemsCount() int {
return r.items
return r.Items
}

type fakeRequestConverter struct {
@@ -32,13 +34,29 @@ type fakeRequestConverter struct {
}

func (c fakeRequestConverter) RequestFromMetrics(_ context.Context, md pmetric.Metrics) (Request, error) {
return fakeRequest{items: md.DataPointCount(), err: c.requestError}, c.metricsError
return fakeRequest{Items: md.DataPointCount(), Err: c.requestError}, c.metricsError
}

func (c fakeRequestConverter) RequestFromTraces(_ context.Context, td ptrace.Traces) (Request, error) {
return fakeRequest{items: td.SpanCount(), err: c.requestError}, c.tracesError
return fakeRequest{Items: td.SpanCount(), Err: c.requestError}, c.tracesError
}

func (c fakeRequestConverter) RequestFromLogs(_ context.Context, ld plog.Logs) (Request, error) {
return fakeRequest{items: ld.LogRecordCount(), err: c.requestError}, c.logsError
return fakeRequest{Items: ld.LogRecordCount(), Err: c.requestError}, c.logsError
}

func fakeRequestMarshaler(req Request) ([]byte, error) {
r, ok := req.(fakeRequest)
if !ok {
return nil, errors.New("invalid request type")
}
return json.Marshal(r)
}

func fakeRequestUnmarshaler(bytes []byte) (Request, error) {
var r fakeRequest
if err := json.Unmarshal(bytes, &r); err != nil {
return nil, err
}
return r, nil
}
