[exporter/elasticsearch] Use exporterhelper/batchsender for reliability #32632

Closed
Commits
144 commits
3cd3903
WIP
carsonip Apr 23, 2024
4341a37
Pass integration test
carsonip Apr 23, 2024
5c04204
Refactor
carsonip Apr 23, 2024
39f0fa1
Unexport BulkIndexerItem
carsonip Apr 23, 2024
bc692f0
Unexport Request
carsonip Apr 23, 2024
c5fd34c
Fix es bulk test
carsonip Apr 23, 2024
7fdf150
Fix logs exporter test
carsonip Apr 23, 2024
97d5c19
Add traces
carsonip Apr 23, 2024
5608755
Log once only
carsonip May 9, 2024
c5cbe79
Merge branch 'main' into exporterhelper-batchsender
carsonip May 13, 2024
889fe7f
gofmt
carsonip May 13, 2024
f71d3b2
Fix lifecycle tests
carsonip May 13, 2024
709ca01
Use PersistentQueueConfig
carsonip May 13, 2024
4474b3b
Fix failing tests
carsonip May 13, 2024
215295e
Missing newline
carsonip May 13, 2024
961ffab
Add FIXME
carsonip May 13, 2024
683dca3
Configure batcher
carsonip May 14, 2024
e46b165
FIXME
carsonip May 14, 2024
ca810d4
Clean up
carsonip May 14, 2024
364e27d
Enable remaining integration tests
carsonip May 14, 2024
946bf2f
Double select to abort early when ctx is done
carsonip May 14, 2024
ea3edd3
Ensure all retries are finished when flush returns
carsonip May 14, 2024
16ece1c
Remove bulk indexer pool background goroutine
carsonip May 16, 2024
9d83826
Disable failing integration test
carsonip May 16, 2024
bf115c2
Add -> add
carsonip May 16, 2024
7258704
Remove bulkindexerpool wg
carsonip May 16, 2024
2958362
Remove flush settings in bulk indexer
carsonip May 16, 2024
6db1c65
Add flush.documents, ignore flush.bytes
carsonip May 16, 2024
ae1c8cb
Merge branch 'main' into exporterhelper-batchsender
carsonip May 16, 2024
332ee78
Warn about flush.bytes removal
carsonip May 16, 2024
7978e89
WIP: bench with persistent queue
carsonip May 16, 2024
33c4bfe
Bench with ecs, worker=1
carsonip May 17, 2024
6bcb999
Rename documents -> min_documents
carsonip May 17, 2024
8aa720b
Add changelog
carsonip May 17, 2024
160de51
Clarify flush.interval
carsonip May 17, 2024
686e8ae
Refactor and fix TODOs
carsonip May 17, 2024
10a2748
Make hack clear
carsonip May 17, 2024
562ebaa
Merge branch 'main' into exporterhelper-batchsender
lahsivjar May 22, 2024
d48f75f
Switch to msgpack and add merge split func
lahsivjar May 29, 2024
6ecce33
Add FIXME
carsonip May 30, 2024
6178e2f
Merge branch 'main' into exporterhelper-batchsender
carsonip Jun 3, 2024
8afdc94
Remove custom request
carsonip Jun 3, 2024
04495ff
Add FIXME for retry_sender
carsonip Jun 3, 2024
33df16b
Merge branch 'main' into exporterhelper-batchsender
carsonip Jun 4, 2024
7c4b5b2
Clean up
carsonip Jun 4, 2024
91ea93d
Add max_documents
carsonip Jun 4, 2024
ec37579
Use errors.New
carsonip Jun 4, 2024
f596602
Respect context in Export
carsonip Jun 4, 2024
32029c2
Change FIXME to TODO
carsonip Jun 4, 2024
babf6ad
Explicitly deprecate flush.bytes
carsonip Jun 4, 2024
454540a
Comment for EnableRetryOnTimeout
carsonip Jun 4, 2024
14e1804
Disable timeout_sender
carsonip Jun 4, 2024
8e15426
Merge branch 'main' into exporterhelper-batchsender
carsonip Jun 4, 2024
7dfea92
Update sending queue default consumers
carsonip Jun 4, 2024
622d6e8
Add persistent queue to readme
carsonip Jun 4, 2024
36bcd9c
Fix num_consumers config test
carsonip Jun 5, 2024
18e9ce4
Remove use of num_workers
carsonip Jun 6, 2024
179430b
Deprecate flush.*, use batcher.*
carsonip Jun 7, 2024
6fc2df9
Remove select in AddBatchAndFlush
carsonip Jun 7, 2024
da8a530
Use semaphore in bulkindexer
carsonip Jun 7, 2024
caa68cf
Push semaphore up from worker to manager
carsonip Jun 7, 2024
428f7d7
Move wg before sem
carsonip Jun 7, 2024
714e26d
Handle retry disabled
carsonip Jun 7, 2024
cc83ff1
Merge branch 'main' into exporterhelper-batchsender
carsonip Jun 7, 2024
99f2b63
Update changelog
carsonip Jun 7, 2024
5f05116
Enable queue sender by default
carsonip Jun 10, 2024
b91a2ad
Remove force enabled batcher
carsonip Jun 10, 2024
3a7c19c
Handle deprecated options properly
carsonip Jun 10, 2024
ca69b64
Update changelog
carsonip Jun 10, 2024
e68b427
Update readme num_workers
carsonip Jun 10, 2024
7312e36
Use sync.Pool for bulk indexer
carsonip Jun 10, 2024
5d428e1
Update description
carsonip Jun 10, 2024
d7fbaa8
Merge branch 'main' into exporterhelper-batchsender
carsonip Jun 10, 2024
27f055d
Better log
carsonip Jun 10, 2024
3a17fa5
Update TODO in integrationtest
carsonip Jun 11, 2024
a354670
Refactor default config
carsonip Jun 11, 2024
f6910bb
Update changelog
carsonip Jun 11, 2024
9313acb
Return err on error adding item to bulk indexer
carsonip Jun 11, 2024
96fa3f8
Inline custom request model to avoid confusion
carsonip Jun 11, 2024
4f591d0
Refactor functions in exporter
carsonip Jun 11, 2024
cf6f599
Remove unused param
carsonip Jun 11, 2024
01e1d2c
Merge branch 'main' into exporterhelper-batchsender
carsonip Jun 12, 2024
0e5c381
Fix merge conflicts
carsonip Jun 12, 2024
f6ad0e2
Handle 0 flushTimeout
carsonip Jun 12, 2024
16b3a79
gofmt
carsonip Jun 12, 2024
b98f7b2
Convert flush.bytes to batcher.min_size_items
carsonip Jun 13, 2024
d17554d
Set default max_size_items
carsonip Jun 13, 2024
77e99c9
changelog: change to deprecation
carsonip Jun 13, 2024
78a8800
Log deprecated value
carsonip Jun 13, 2024
0939b45
Clean up
carsonip Jun 13, 2024
71d84f6
Bench persistent queue
carsonip Jun 13, 2024
4bd68f5
Merge branch 'main' into exporterhelper-batchsender
carsonip Jun 13, 2024
6fb2fc3
[chore][exporter/elasticsearch] Use RunParallel in bench
carsonip Jun 18, 2024
4c2ba65
Use parallel bench
carsonip Jun 18, 2024
1b14f4a
Merge branch 'main' into exporterhelper-batchsender
carsonip Jun 18, 2024
ad23b96
Merge branch 'main' into bench-parallel
carsonip Jun 18, 2024
8d4d4c1
Measure docs/s
carsonip Jun 18, 2024
97494ed
Measure docs/s
carsonip Jun 18, 2024
d9bd55f
Explicitly disable queue
carsonip Jun 18, 2024
0e75f8b
Remove bulk indexer pooling
carsonip Jun 24, 2024
664849f
Merge branch 'main' into exporterhelper-batchsender
carsonip Jun 24, 2024
4006cd9
Fix http request body too large
carsonip Jun 24, 2024
3828a87
Merge branch 'main' into bench-parallel
carsonip Jun 24, 2024
dae3590
Fix http request body too large
carsonip Jun 24, 2024
1d10bbf
Print error
carsonip Jun 25, 2024
ebd99e2
Record bulkReqs/s
carsonip Jun 25, 2024
a64b62b
Use http.StatusBadRequest instead of 400
carsonip Jun 25, 2024
d5a0d16
Merge branch 'main' into bench-parallel
carsonip Jun 25, 2024
0b4acf3
Call next Consume*
carsonip Jun 26, 2024
ae9440d
Merge branch 'main' into bench-parallel
carsonip Jun 26, 2024
8591c12
Merge branch 'main' into exporterhelper-batchsender
carsonip Jun 26, 2024
031c3c8
Bench with different parallelisms
carsonip Jun 26, 2024
8d698f2
Merge branch 'main' into exporterhelper-batchsender
carsonip Jul 1, 2024
eae71c0
Ignore max_size_items for metrics exporter
carsonip Jul 1, 2024
0c39b73
Fix metrics support
carsonip Jul 1, 2024
a0c4c06
Log at info
carsonip Jul 1, 2024
887b06d
Fix tests
carsonip Jul 1, 2024
20c5799
Merge branch 'bench-parallel' into exporterhelper-batchsender
carsonip Jul 1, 2024
18ce06d
Add BenchmarkExporterFlushItems
carsonip Jul 1, 2024
5652cbf
Use slices instead of x/exp/slices
carsonip Jul 2, 2024
4acd29e
Merge branch 'main' into exporterhelper-batchsender
carsonip Jul 2, 2024
6140933
Update exporter/elasticsearchexporter/factory.go
carsonip Jul 2, 2024
5b25bba
Use warn
carsonip Jul 2, 2024
c580f37
Merge branch 'exporterhelper-batchsender' of github.com:carsonip/open…
carsonip Jul 2, 2024
ac900bd
Remove flush_timeout-based bench
carsonip Jul 2, 2024
fd56e5f
Make linter happy
carsonip Jul 2, 2024
96ce9d6
Fix integration test
carsonip Jul 2, 2024
7c24c27
Revert otelcontribcol
carsonip Jul 2, 2024
c62b153
Merge branch 'main' into exporterhelper-batchsender
carsonip Jul 3, 2024
fb6fa02
Make a copy of batcherConfig
carsonip Jul 3, 2024
b860dc0
Add handleDeprecations to metrics
carsonip Jul 3, 2024
9ee882e
Merge branch 'main' into exporterhelper-batchsender
carsonip Jul 4, 2024
e783105
Merge branch 'main' into exporterhelper-batchsender
carsonip Jul 9, 2024
8d61908
Merge branch 'main' into exporterhelper-batchsender
carsonip Jul 9, 2024
6a5a2e3
Add link to batcher settings and mention experimental
carsonip Jul 10, 2024
50e9353
sending_queue.num_consumers vs num_workers
carsonip Jul 10, 2024
e87e797
Link to persistent queue
carsonip Jul 10, 2024
38c01eb
Mention flush.bytes to batcher.min_size_items translation
carsonip Jul 10, 2024
9113d1c
Refactor handleDeprecations
carsonip Jul 10, 2024
1ad29d4
Remove getTimeoutConfig
carsonip Jul 10, 2024
862876e
Stop using type alias
carsonip Jul 10, 2024
7b289f1
Merge branch 'main' into exporterhelper-batchsender
carsonip Jul 10, 2024
d7881b3
Merge branch 'main' into exporterhelper-batchsender
carsonip Jul 12, 2024
be1a7f7
Update exporter/elasticsearchexporter/README.md
carsonip Jul 16, 2024
33 changes: 33 additions & 0 deletions .chloggen/elasticsearchexporter_batchsender.yaml
@@ -0,0 +1,33 @@
# Use this changelog template to create an entry for release notes.

# One of 'breaking', 'deprecation', 'new_component', 'enhancement', 'bug_fix'
change_type: deprecation

# The name of the component, or a single word describing the area of concern, (e.g. filelogreceiver)
component: elasticsearchexporter

# A brief description of the change. Surround your text with quotes ("") if it needs to start with a backtick (`).
note: Improve reliability when used with persistent queue. Deprecate config options `flush.*`, use `batcher.*` instead.

# Mandatory: One or more tracking issues related to the change. You can use the PR number here if no issue exists.
issues: [32377]

# (Optional) One or more lines of additional information to render under the primary note.
# These lines will be padded with 2 spaces and then inserted directly into the document.
# Use pipe (|) for multiline entries.
subtext: |
Move buffering from bulk indexer to batch sender to improve reliability.
Contributor
Bike shed, but I'm mostly interested in whether this is for parity with commit messages, or a preference for how the release notes/changelog read.

I understand the rationale for the imperative form in commit messages ("do this" vs. "this does this"). That said, is it more natural, for a reader of release notes or a changelog, to state the action taking place?

Suggested change
Move buffering from bulk indexer to batch sender to improve reliability.
Moves buffering from bulk indexer to batch sender to improve reliability.

Contributor Author (@carsonip), Jun 11, 2024
I don't mind the bike shed. I'm not a native speaker, so both read identically to me. I use the imperative form in git commit messages, and therefore did the same here.

It would be nice if we could be consistent within this component, and even across components. There is no guideline in CONTRIBUTING on what's preferred. Looking at https://github.com/open-telemetry/opentelemetry-collector-contrib/releases it is ~50/50 at the moment.

I would appreciate your thoughts on how to move forward in this PR and in future PRs.

Contributor
TL;DR: I suggest we follow Go's commit style, and I'm happy to raise a PR to the CONTRIBUTING file if there are a couple of 👍

Personally, I think release notes are different from commit messages, but I think it helps to consolidate the commit style since you noticed it is ~50/50. Release notes I would basically leave until after we agree on commit style ;)

What reads as imperative to me ("do this") sounds different if you add a prefix first. For example, the reason it doesn't sound bad in Go is that they spell it out: they suggest thinking of the prefix "This change modifies Go to _____.", and if that works, using it as the commit's first line (after the prefix of the package affected). You can imagine that in this project we could say "This change modifies the opentelemetry collector exporter/elasticsearch to use exporterhelper/batchsender for reliability." Sounds fine to me!

Here's the Go doc; I suggest we follow it and cite it, i.e. derive our guideline from the Go contribution style: https://go.dev/doc/contribute#commit_messages

Contributor Author
TIL. Yes, I agree on the commit style. It is also close to my usual practice.

With this change, there should be no event loss when used with persistent queue in the event of a collector crash.
Introduce `batcher.*` to configure the batch sender which is now enabled by default.
Option `flush.bytes` is deprecated. Use the new `batcher.min_size_items` option to control the minimum number of items (log records, spans) to trigger a flush. `batcher.min_size_items` will be set to the value of `flush.bytes` / 1000 if `flush.bytes` is non-zero.
Option `flush.interval` is deprecated. Use the new `batcher.flush_timeout` option to control max age of buffer. `batcher.flush_timeout` will be set to the value of `flush.interval` if `flush.interval` is non-zero.
Queue sender `sending_queue.enabled` defaults to `true`.

# If your change doesn't affect end users or the exported elements of any package,
# you should instead start your pull request title with [chore] or use the "Skip Changelog" label.
# Optional: The change log or logs in which this entry should be included.
# e.g. '[user]' or '[user, api]'
# Include 'user' if the change is relevant to end users.
# Include 'api' if there is a change to a library API.
# Default: '[user]'
change_logs: [user]
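To make the `flush.*` to `batcher.*` translation described in the subtext concrete, here is a hedged before/after sketch. The endpoint and the `flush` values are only illustrative (they are the previously documented defaults), and the mapping follows the rules stated above: `batcher.min_size_items` = `flush.bytes` / 1000, and `batcher.flush_timeout` = `flush.interval`.

```yaml
# Legacy configuration (deprecated by this change):
exporters:
  elasticsearch:
    endpoints: ["https://elastic.example.com:9200"]  # illustrative endpoint
    flush:
      bytes: 5000000   # old default: flush once the write buffer reaches ~5 MB
      interval: 30s    # old default: flush at least every 30s

# Roughly equivalent configuration using the new batcher settings:
exporters:
  elasticsearch:
    endpoints: ["https://elastic.example.com:9200"]
    batcher:
      enabled: true
      min_size_items: 5000  # flush.bytes / 1000
      flush_timeout: 30s    # carried over from flush.interval
```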
21 changes: 17 additions & 4 deletions exporter/elasticsearchexporter/README.md
@@ -78,7 +78,20 @@ All other defaults are as defined by [confighttp].

### Queuing

The Elasticsearch exporter supports the common [`sending_queue` settings][exporterhelper]. However, the sending queue is currently disabled by default.
The Elasticsearch exporter supports the common [`sending_queue` settings][exporterhelper]. The sending queue is enabled by default.

Default `num_consumers` is `100`.

When persistent queue is used, there should be no event loss even on collector crashes.
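The sketch below shows one way to combine the enabled-by-default sending queue with a persistent queue; the `file_storage` extension name, its directory, and the endpoint are illustrative assumptions, not values taken from this PR.

```yaml
extensions:
  file_storage:
    directory: /var/lib/otelcol/file_storage  # illustrative path

exporters:
  elasticsearch:
    endpoints: ["http://localhost:9200"]
    sending_queue:
      enabled: true          # default after this change
      num_consumers: 100     # default after this change
      storage: file_storage  # persist the queue so events survive a collector crash

# Remember to list file_storage under service::extensions so the extension is started.
```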

### Batching

The Elasticsearch exporter supports the common `batcher` settings; a configuration sketch is shown after the list below.

- `enabled` (default=true): Enable batching of requests into a single bulk request.
- `min_size_items` (default=5000): Minimum number of log records / spans in the buffer that triggers an immediate flush.
- `max_size_items` (default=10000): Maximum number of log records / spans in a single bulk request.
- `flush_timeout` (default=30s): Maximum age of the oldest item in the buffer ("max age of buffer"); when reached, a flush happens regardless of the buffer's size.
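For example, a configuration sketch that spells out the defaults listed above (the values are shown explicitly only for illustration):

```yaml
exporters:
  elasticsearch:
    endpoints: ["http://localhost:9200"]
    batcher:
      enabled: true
      min_size_items: 5000   # flush as soon as 5000 items are buffered
      max_size_items: 10000  # never put more than 10000 items in one bulk request
      flush_timeout: 30s     # flush at least every 30s, regardless of buffer size
```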

### Elasticsearch document routing

@@ -160,10 +173,10 @@ This can be configured through the following settings:
The Elasticsearch exporter uses the [Elasticsearch Bulk API] for indexing documents.
The behaviour of this bulk indexing can be configured with the following settings:

- `num_workers` (default=runtime.NumCPU()): Number of workers publishing bulk requests concurrently.
- `num_workers` (default=runtime.NumCPU()): Maximum number of concurrent bulk requests.
- `flush`: Event bulk indexer buffer flush settings
- `bytes` (default=5000000): Write buffer flush size limit.
- `interval` (default=30s): Write buffer flush time limit.
- `bytes` (DEPRECATED, use `batcher.min_size_items` instead): Write buffer flush size limit.
- `interval` (DEPRECATED, use `batcher.flush_timeout` instead): Maximum age of the oldest item in the buffer ("max age of buffer"); when reached, a flush happens regardless of the buffer's size.
@andrzej-stencel (Member), Jul 10, 2024
Given that the batcher options are experimental, I don't think we should be deprecating these options at this point. Users would be left with two options: one deprecated and one experimental, which could cause confusion. Users should have at least one non-deprecated, non-experimental batching configuration to use.

What we could do is mention here the advantages of using the batcher options instead of flush.

Contributor Author
It controls the same batching behavior, and the flush settings are converted to batcher settings anyway. It is marked deprecated to make it obvious to users that the batcher settings are the recommended way.

Member
I understand the benefits of using the batch sender (configured via the batcher option) for reliability, and I applaud this addition. I still believe we cannot force users into this new feature, for the following reasons:

  • The batch sender feature is experimental: the configuration options may change.
  • The batch sender feature is not well tested in production. The author of this PR is the best person to confirm this, given the issues they've raised (and fixed! 👏) so far: #9952, #10166, #10306.
  • The batch sender feature currently lacks the option to specify batch size in bytes.

This pull request in its current state not only makes the experimental batch sender the default, but also leaves users with it as the only option, effectively removing the functionality that existed before. If I understand correctly, disabling the batch sender by setting batcher::enabled to false and specifying flush::bytes and flush::interval disables batching altogether, whereas I would expect this configuration to work as in previous versions. Please correct me if I'm wrong @carsonip.

I propose the following changes to this pull request:

  1. Set batcher::enabled to false by default and, in that case, use the current flush settings. This means there is no change in behavior for users upgrading to the new version.
  2. Do not deprecate the flush options.
  3. Describe in the README the benefits of enabling the new batcher options for reliability, but also warn that the feature is experimental and usage in production scenarios is not encouraged.

Contributor Author
If I understand correctly, disabling the batch sender by setting batcher::enabled to false and specifying flush::bytes and flush::interval disables batching altogether, whereas I would expect this configuration to work as in previous versions. Please correct me if I'm wrong @carsonip.

That is incorrect. There is batching baked into the bulk indexer implementation in the current main (see code). The batcher is added and enabled by default to replace the existing batching functionality within the bulk indexer, without a change in behavior.

Set batcher::enabled to false by default

As explained above, defaulting batcher::enabled to false is what would actually be breaking, rather than the opposite.

Do not deprecate the flush options.

I believe this is a separate topic. Marking it deprecated or not doesn't affect the actual behavior. Therefore, I would like to know whether you are concerned about the fact that bytes-based flushing will only be approximated after this PR, or about the fact that the new recommended config is experimental.

enabling the new batcher options for reliability, but also warn that the feature is experimental and usage in production scenarios is not encouraged.

Although the batcher is new, I don't see why an upstream helper is necessarily less preferable than an existing batching feature within the exporter that possibly has lower test coverage.

- `retry`: Elasticsearch bulk request retry settings
- `enabled` (default=true): Enable/Disable request retry on error. Failed requests are retried with exponential backoff.
- `max_requests` (default=3): Number of HTTP request retries.
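As an illustration of the bulk-indexing settings, here is a sketch that combines `num_workers` with the retry options visible in this hunk (the retry list is truncated here, and `num_workers: 4` is an assumption, since the actual default is `runtime.NumCPU()`):

```yaml
exporters:
  elasticsearch:
    endpoints: ["http://localhost:9200"]
    num_workers: 4    # illustrative; defaults to the number of CPUs
    retry:
      enabled: true   # retry failed bulk requests with exponential backoff
      max_requests: 3 # number of HTTP request retries
```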
12 changes: 11 additions & 1 deletion exporter/elasticsearchexporter/config.go
@@ -14,13 +14,19 @@ import (

"go.opentelemetry.io/collector/config/confighttp"
"go.opentelemetry.io/collector/config/configopaque"
"go.opentelemetry.io/collector/exporter/exporterbatcher"
"go.opentelemetry.io/collector/exporter/exporterhelper"
"go.uber.org/zap"
)

// Config defines configuration for Elastic exporter.
type Config struct {
exporterhelper.QueueSettings `mapstructure:"sending_queue"`

// Experimental: This configuration is at the early stage of development and may change without backward compatibility
// until https://github.com/open-telemetry/opentelemetry-collector/issues/8122 is resolved.
BatcherConfig exporterbatcher.Config `mapstructure:"batcher"`

// Endpoints holds the Elasticsearch URLs the exporter should send events to.
//
// This setting is required if CloudID is not set and if the
@@ -69,7 +75,7 @@ type Config struct {
Authentication AuthenticationSettings `mapstructure:",squash"`
Discovery DiscoverySettings `mapstructure:"discover"`
Retry RetrySettings `mapstructure:"retry"`
Flush FlushSettings `mapstructure:"flush"`
Flush FlushSettings `mapstructure:"flush"` // Deprecated: use `batcher` instead.
Mapping MappingsSettings `mapstructure:"mapping"`
LogstashFormat LogstashFormatSettings `mapstructure:"logstash_format"`

@@ -131,9 +137,13 @@ type DiscoverySettings struct {
// all events already serialized into the send-buffer.
type FlushSettings struct {
// Bytes sets the send buffer flushing limit.
//
// Deprecated: Use `batcher.min_size_items` instead.
Bytes int `mapstructure:"bytes"`

// Interval configures the max age of a document in the send buffer.
//
// Deprecated: Use `batcher.flush_timeout` instead.
Interval time.Duration `mapstructure:"interval"`
}

40 changes: 27 additions & 13 deletions exporter/elasticsearchexporter/config_test.go
@@ -16,7 +16,9 @@ import (
"go.opentelemetry.io/collector/config/confighttp"
"go.opentelemetry.io/collector/config/configopaque"
"go.opentelemetry.io/collector/confmap/confmaptest"
"go.opentelemetry.io/collector/exporter/exporterbatcher"
"go.opentelemetry.io/collector/exporter/exporterhelper"
"go.opentelemetry.io/collector/exporter/exporterqueue"

"github.com/open-telemetry/opentelemetry-collector-contrib/exporter/elasticsearchexporter/internal/metadata"
)
@@ -53,9 +55,9 @@ func TestConfig(t *testing.T) {
configFile: "config.yaml",
expected: &Config{
QueueSettings: exporterhelper.QueueSettings{
Enabled: false,
NumConsumers: exporterhelper.NewDefaultQueueSettings().NumConsumers,
QueueSize: exporterhelper.NewDefaultQueueSettings().QueueSize,
Enabled: true,
NumConsumers: 100,
QueueSize: exporterqueue.NewDefaultConfig().QueueSize,
},
Endpoints: []string{"https://elastic.example.com:9200"},
Index: "",
@@ -88,8 +90,11 @@
Discovery: DiscoverySettings{
OnStart: true,
},
Flush: FlushSettings{
Bytes: 10485760,
BatcherConfig: exporterbatcher.Config{
Enabled: true,
FlushTimeout: 5 * time.Second,
MinSizeConfig: exporterbatcher.MinSizeConfig{MinSizeItems: 100},
MaxSizeConfig: exporterbatcher.MaxSizeConfig{MaxSizeItems: 200},
},
Retry: RetrySettings{
Enabled: true,
@@ -108,16 +113,17 @@
PrefixSeparator: "-",
DateFormat: "%Y.%m.%d",
},
NumWorkers: 1,
},
},
{
id: component.NewIDWithName(metadata.Type, "log"),
configFile: "config.yaml",
expected: &Config{
QueueSettings: exporterhelper.QueueSettings{
Enabled: true,
NumConsumers: exporterhelper.NewDefaultQueueSettings().NumConsumers,
QueueSize: exporterhelper.NewDefaultQueueSettings().QueueSize,
Enabled: false,
NumConsumers: 100,
QueueSize: exporterqueue.NewDefaultConfig().QueueSize,
},
Endpoints: []string{"http://localhost:9200"},
Index: "",
@@ -150,8 +156,11 @@
Discovery: DiscoverySettings{
OnStart: true,
},
Flush: FlushSettings{
Bytes: 10485760,
BatcherConfig: exporterbatcher.Config{
Enabled: true,
FlushTimeout: 5 * time.Second,
MinSizeConfig: exporterbatcher.MinSizeConfig{MinSizeItems: 100},
MaxSizeConfig: exporterbatcher.MaxSizeConfig{MaxSizeItems: 200},
},
Retry: RetrySettings{
Enabled: true,
Expand All @@ -170,6 +179,7 @@ func TestConfig(t *testing.T) {
PrefixSeparator: "-",
DateFormat: "%Y.%m.%d",
},
NumWorkers: 1,
},
},
{
@@ -178,7 +188,7 @@
expected: &Config{
QueueSettings: exporterhelper.QueueSettings{
Enabled: true,
NumConsumers: exporterhelper.NewDefaultQueueSettings().NumConsumers,
NumConsumers: 100,
QueueSize: exporterhelper.NewDefaultQueueSettings().QueueSize,
},
Endpoints: []string{"http://localhost:9200"},
@@ -212,8 +222,11 @@
Discovery: DiscoverySettings{
OnStart: true,
},
Flush: FlushSettings{
Bytes: 10485760,
BatcherConfig: exporterbatcher.Config{
Enabled: true,
FlushTimeout: 5 * time.Second,
MinSizeConfig: exporterbatcher.MinSizeConfig{MinSizeItems: 100},
MaxSizeConfig: exporterbatcher.MaxSizeConfig{MaxSizeItems: 200},
},
Retry: RetrySettings{
Enabled: true,
@@ -232,6 +245,7 @@
PrefixSeparator: "-",
DateFormat: "%Y.%m.%d",
},
NumWorkers: 1,
},
},
{
Expand Down
Loading