label
that can uniquely identify them in observability data such as logs.
batching
fields. When doing this the feeds from all child inputs are combined. Some inputs do not support broker based batching and specify this in their documentation.
inputs
will be created this many times.
Type: int
Default: 1
array
object
0
disables count based batching.
Type: int
Default: 0
0
disables size based batching.
Type: int
Default: 0
string
Default: ""
array
url
and headers
fields where data from the previous successfully consumed message (if there was one) can be referenced. This can be used in order to support basic levels of pagination.
url
and headers
fields can be used to reference the previously consumed message, which allows simple pagination.
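As a sketch of how such pagination might look (the endpoint and the metadata key below are hypothetical, used only for illustration), a config could interpolate a cursor taken from the previously consumed message into the next request:

```yaml
input:
  http_client:
    # Hypothetical API; assumes the previous message carries a
    # "next_cursor" metadata entry extracted from the response.
    url: 'https://example.com/api/items?cursor=${! meta("next_cursor") }'
    verb: GET
```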
string
string
Default: "GET"
object
Default: {}
object
array
Default: []
array
Default: []
string
Default: ""
Options: TRACE
, DEBUG
, INFO
, WARN
, ERROR
, FATAL
, "".
object
bool
Default: false
string
Default: ""
string
Default: ""
string
Default: ""
string
Default: ""
object
bool
Default: false
string
Default: ""
string
Default: ""
string
Default: ""
array
Default: []
object
Default: {}
object
bool
Default: false
string
Default: ""
string
Default: ""
object
bool
Default: false
string
Default: ""
string
Default: ""
object
Default: {}
object
Default: {}
object
bool
Default: false
bool
Default: false
local error: tls: no renegotiation
.
Type: bool
Default: false
string
Default: ""
string
Default: ""
cert
and key
, or cert_file
and key_file
should be specified, but not both.
Type: array
Default: []
string
Default: ""
string
Default: ""
string
Default: ""
string
Default: ""
pbeWithMD5AndDES-CBC
algorithm is not supported for the PKCS#8 format. Warning: Since it does not authenticate the ciphertext, it is vulnerable to padding oracle attacks that can let an attacker recover the plaintext.
Type: string
Default: ""
object
array
Default: []
array
Default: []
string
Default: "5s"
string
Default: "1s"
string
Default: "300s"
int
Default: 3
array
Default: [429]
array
Default: []
backoff_on
or drop_on
, regardless of this field.
Type: array
Default: []
string
string
bool
Default: true
object
bool
Default: false
bool
Default: true
false
these messages will instead be deleted. Disabling auto replays can greatly improve memory efficiency of high throughput streams as the original shape of the data can be discarded immediately upon consumption and mutation.
Type: bool
Default: true
/{foo}
, which are added to ingested messages as metadata. A path ending in /
will match against all extensions of that path:
/post
). If a request contains a multipart content-type
header as per rfc1341 then the multiple parts are consumed as a batch of messages, where each body part is a message of the batch.
/post/ws
). Please note that components within a Tyk config will register their respective endpoints in a non-deterministic order. This means that establishing precedence of endpoints that are registered via multiple http_server
inputs or outputs (either within brokers or from cohabiting streams) is not possible in a predictable way.
This ambiguity makes it difficult to guarantee that a wildcard path ending in a slash (/
), which matches against all extensions of that path, does not prevent a more specific path registered by a separate component from matching against requests.
It is therefore recommended that you ensure paths of separate components do not collide unless they are explicitly non-competing.
For example, if you were to deploy two separate http_server
inputs, one with a path /foo/
and the other with a path /foo/bar
, it would not be possible to ensure that the path /foo/
does not swallow requests made to /foo/bar
.
You may specify an optional ws_welcome_message
, which is a static payload to be sent to all clients once a websocket connection is first established.
http_server
input that captures all requests and processes them by switching on that path:
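A minimal sketch of that pattern might look as follows; the wildcard path, the http_server_request_path metadata key, and the handled sub-path are assumptions for illustration, not guarantees of this document:

```yaml
input:
  http_server:
    path: /post/  # trailing slash: matches all extensions of /post/
pipeline:
  processors:
    - switch:
        - check: 'meta("http_server_request_path") == "/post/orders"'
          processors:
            - mapping: 'root = this' # handle orders requests here
```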
http_server
input that mocks an OAuth 2.0 Client Credentials flow server at the endpoint /oauth2_test
:
string
Default: ""
string
Default: "/post"
string
Default: "/post/ws"
string
Default: ""
path
endpoint.
Type: array
Default: ["POST"]
Requires version 3.33.0 or newer
string
Default: "5s"
Type: string
Default: ""
address
.
Type: string
Default: ""
address
.
Type: string
Default: ""
address
.
Type: object
Requires version 3.63.0 or newer
bool
Default: false
array
Default: []
object
string
Default: "200"
object
Default: {"Content-Type":"application/octet-stream"}
object
array
Default: []
array
Default: []
1
and this will force partitions to be processed in lock-step, where a message will only be processed once the prior message is delivered.
Batching messages before processing can be enabled using the batching field, and this batching is performed per-partition such that messages of a batch will always originate from the same partition. This batching mechanism is capable of creating batches of greater size than the checkpoint_limit, in which case the next batch will only be created upon delivery of the current one.
kafka_lag
is the calculated difference between the high water mark offset of the partition at the time of ingestion and the current message offset.
checkpoint_limit
. However, if strict ordered processing is required then this value must be set to 1 in order to process shard messages in lock-step. When doing so it is recommended that you perform batching at this component for performance as it will not be possible to batch lock-stepped messages at the output level.
Failed to connect to kafka: kafka: client has run out of available brokers to talk to (Is your cluster reachable?)
, but the brokers are definitely reachable.
array
foo:0
would consume partition 0 of the topic foo. This syntax supports ranges, e.g. foo:0-10
would consume partitions 0 through to 10 inclusive.
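For example, combining both forms in one config (the broker address is illustrative):

```yaml
input:
  kafka:
    addresses: [ localhost:9092 ]
    topics:
      - foo:0     # partition 0 of topic foo
      - bar:0-10  # partitions 0 through 10 of topic bar
```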
Type: array
Requires version 3.33.0 or newer
string
object
bool
Default: false
bool
Default: false
local error: tls: no renegotiation
.
Type: bool
Default: false
Requires version 3.45.0 or newer
string
Default: ""
string
Default: ""
cert
and key
, or cert_file
and key_file
should be specified, but not both.
Type: array
Default: []
string
Default: ""
string
Default: ""
string
Default: ""
string
Default: ""
pbeWithMD5AndDES-CBC
algorithm is not supported for the PKCS#8 format. Warning: Since it does not authenticate the ciphertext, it is vulnerable to padding oracle attacks that can let an attacker recover the plaintext.
Type: string
Default: ""
object
string
Default: "none"
Option | Summary |
---|---|
OAUTHBEARER | OAuth Bearer based authentication. |
PLAIN | Plain text authentication. NOTE: When using plain text auth it is extremely likely that you’ll also need to enable TLS. |
SCRAM-SHA-256 | Authentication using the SCRAM-SHA-256 mechanism. |
SCRAM-SHA-512 | Authentication using the SCRAM-SHA-512 mechanism. |
none | Default, no SASL authentication. |
string
Default: ""
string
Default: ""
string
Default: ""
Type: string
Default: ""
token_cache
, the key to query the cache with for tokens.
Type: string
Default: ""
string
Default: ""
string
Default: "tyk"
string
Default: ""
bool
Default: true
int
Default: 1024
Requires version 3.33.0 or newer
false
these messages will instead be deleted. Disabling auto replays can greatly improve memory efficiency of high throughput streams as the original shape of the data can be discarded immediately upon consumption and mutation.
Type: bool
Default: true
string
Default: "1s"
string
Default: "100ms"
object
string
Default: "10s"
string
Default: "3s"
string
Default: "60s"
int
Default: 256
bool
Default: false
object
0
disables count based batching.
Type: int
Default: 0
0
disables size based batching.
Type: int
Default: 0
string
Default: ""
array
label
that can uniquely identify them in observability data such as logs.
int
Default: 1
string
Default: "fan_out"
Options: fan_out
, fan_out_fail_fast
, fan_out_sequential
, fan_out_sequential_fail_fast
, round_robin
, greedy
.
array
object
0
disables count based batching.
Type: int
Default: 0
0
disables size based batching.
Type: int
Default: 0
string
Default: ""
array
fan_out
pattern, except that output failures will not be automatically retried. This pattern should be used with caution as busy retry loops could result in unlimited duplicates being introduced into the non-failure outputs.
fan_out_sequential
pattern, except that output failures will not be automatically retried. This pattern should be used with caution as busy retry loops could result in unlimited duplicates being introduced into the non-failure outputs.
false
.
propagate_response
to true
. Only inputs that support synchronous responses are able to make use of these propagated responses.
max_in_flight
.
This output benefits from sending messages as a batch for improved performance. Batches can be formed at both the input and output level.
string
string
Default: "POST"
object
Default: {}
object
array
Default: []
array
Default: []
string
Default: ""
Options: TRACE
, DEBUG
, INFO
, WARN
, ERROR
, FATAL
, "".
object
bool
Default: false
string
Default: ""
string
Default: ""
string
Default: ""
string
Default: ""
object
bool
Default: false
string
Default: ""
string
Default: ""
string
Default: ""
array
Default: []
object
Default: {}
object
bool
Default: false
string
Default: ""
string
Default: ""
object
bool
Default: false
string
Default: ""
string
Default: ""
object
Default: {}
object
Default: {}
object
bool
Default: false
bool
Default: false
local error: tls: no renegotiation
.
Type: bool
Default: false
string
Default: ""
string
Default: ""
cert
and key
, or cert_file
and key_file
should be specified, but not both.
Type: array
Default: []
string
Default: ""
string
Default: ""
string
Default: ""
string
Default: ""
pbeWithMD5AndDES-CBC
algorithm is not supported for the PKCS#8 format. Warning: Since it does not authenticate the ciphertext, it is vulnerable to padding oracle attacks that can let an attacker recover the plaintext.
Type: string
Default: ""
propagate_response
is set to true
.
Type: object
array
Default: []
array
Default: []
string
Default: "5s"
string
Default: "1s"
string
Default: "300s"
int
Default: 3
array
Default: [429]
array
Default: []
backoff_on
or drop_on
, regardless of this field.
Type: array
Default: []
string
bool
Default: false
bool
Default: false
int
Default: 64
object
0
disables count based batching.
Type: int
Default: 0
0
disables size based batching.
Type: int
Default: 0
string
Default: ""
array
array
Default: []
string
Default: ""
string
Default: ""
string
Default: ""
path
, stream_path
and ws_path
. These allow you to consume a single message batch, a continuous stream of line-delimited messages, or a websocket of messages for each request, respectively.
When messages are batched the path
endpoint encodes the batch according to RFC1341.
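Using the component's default endpoints, a minimal sketch of this output could be:

```yaml
output:
  http_server:
    path: /get
    stream_path: /get/stream
    ws_path: /get/ws
```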
Please note, messages are considered delivered as soon as the data is written to the client. There is no concept of at least once delivery on this output.
Please note that components within a Tyk config will register their respective endpoints in a non-deterministic order. This means that establishing precedence of endpoints that are registered via multiple http_server
inputs or outputs (either within brokers or from cohabiting streams) is not possible in a predictable way.
This ambiguity makes it difficult to guarantee that a wildcard path ending in a slash (/
), which matches against all extensions of that path, does not prevent a more specific path registered by a separate component from matching against requests.
It is therefore recommended that you ensure paths of separate components do not collide unless they are explicitly non-competing.
For example, if you were to deploy two separate http_server
inputs, one with a path /foo/
and the other with a path /foo/bar
, it would not be possible to ensure that the path /foo/
does not swallow requests made to /foo/bar
.
string
Default: ""
string
Default: "/get"
string
Default: "/get/stream"
string
Default: "/get/ws"
path
and stream_path
HTTP endpoint.
Type: array
Default: ["GET"]
path
endpoint).
Type: string
Default: "5s"
address
.
Type: string
Default: ""
address
.
Type: string
Default: ""
address
.
Type: object
bool
Default: false
array
Default: []
ack_replicas
determines whether we wait for acknowledgment from all replicas or just a single broker.
Metadata will be added to each message sent as headers (version 0.11+), but can be restricted using the field metadata.
max_in_flight
is set to 1
and that the field retry_as_batch
is set to true
.
You must also ensure that failed batches are never rerouted back to the same output. This can be done by setting the field max_retries
to 0
and backoff.max_elapsed_time
to empty, which will apply back pressure indefinitely until the batch is sent successfully.
However, this also means that manual intervention will eventually be required in cases where the batch cannot be sent due to configuration problems such as an incorrect max_msg_bytes
estimate. A less strict but automated alternative would be to route failed batches to a dead letter queue using a fallback
broker, but this would allow subsequent batches to be delivered in the meantime whilst those failed batches are dealt with.
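Combining those settings, a config aiming for strictly ordered delivery might look like this sketch (broker address and topic are illustrative):

```yaml
output:
  kafka:
    addresses: [ localhost:9092 ]
    topic: foo
    max_in_flight: 1
    retry_as_batch: true
    max_retries: 0
    backoff:
      max_elapsed_time: "" # empty: retry (and apply back pressure) indefinitely
```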
Failed to connect to kafka: kafka: client has run out of available brokers to talk to (Is your cluster reachable?)
, but the brokers are definitely reachable.
max_in_flight
.
This output benefits from sending messages as a batch for improved performance. Batches can be formed at both the input and output level.
array
object
bool
Default: false
bool
Default: false
local error: tls: no renegotiation
.
Type: bool
Default: false
string
Default: ""
string
Default: ""
cert
and key
, or cert_file
and key_file
should be specified, but not both.
Type: array
Default: []
string
Default: ""
string
Default: ""
string
Default: ""
string
Default: ""
pbeWithMD5AndDES-CBC
algorithm is not supported for the PKCS#8 format. Warning: Since it does not authenticate the ciphertext, it is vulnerable to padding oracle attacks that can let an attacker recover the plaintext.
Type: string
Default: ""
object
string
Default: "none"
Option | Summary |
---|---|
OAUTHBEARER | OAuth Bearer based authentication. |
PLAIN | Plain text authentication. NOTE: When using plain text auth it is extremely likely that you’ll also need to enable TLS. |
SCRAM-SHA-256 | Authentication using the SCRAM-SHA-256 mechanism. |
SCRAM-SHA-512 | Authentication using the SCRAM-SHA-512 mechanism. |
none | Default, no SASL authentication. |
string
Default: ""
string
Default: ""
string
Default: ""
access_token
allows you to query a cache
resource to fetch OAUTHBEARER tokens from
Type: string
Default: ""
token_cache
, the key to query the cache with for tokens.
Type: string
Default: ""
string
string
Default: "tyk"
string
string
Default: ""
string
Default: ""
string
Default: "fnv1a_hash"
Options: fnv1a_hash
, murmur2_hash
, random
, round_robin
, manual
.
partitioner
is set to manual
. Must be able to parse as a 32-bit integer.
Type: string
Default: ""
object
bool
Default: false
be >= 1
.
Type: int
Default: -1
int
Default: -1
string
Default: "none"
Options: none
, snappy
, lz4
, gzip
, zstd
.
object
object
array
Default: []
int
Default: 64
IDEMPOTENT_WRITE
permission on CLUSTER
and can be disabled if this permission is not available.
Type: bool
Default: false
bool
Default: false
int
Default: 1000000
string
Default: "5s"
bool
Default: false
object
0
disables count based batching.
Type: int
Default: 0
0
disables size based batching.
Type: int
Default: 0
string
Default: ""
array
int
Default: 0
object
string
Default: "3s"
string
Default: "10s"
0s
) will result in unbounded retries.
Type: string
Default: "30s"
threads
field in the pipeline section determines how many parallel processing threads are created. You can read more about parallel processing in the pipeline guide.
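For example, a pipeline section with an explicit thread count (the mapping shown is purely illustrative):

```yaml
pipeline:
  threads: 4 # -1 (the default) matches the number of logical CPUs
  processors:
    - mapping: 'root = content().uppercase()'
```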
label
that can uniquely identify them in observability data such as logs.
string
Options: to_json
, from_json
.
string
Default: "textual"
Options: textual
, binary
, single
.
string
Default: ""
schema
field.
Type: string
Default: ""
http_server
and http_client
, are capable of extracting a root span from the source of the message (HTTP headers). This is
a work in progress and should eventually expand so that all inputs have a way of doing so.
Other inputs, such as kafka
, can be configured to extract a root span by using the extract_tracing_map
field.
A tracer config section looks like this:
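The fields listed below suggest a Jaeger-style tracer; as an illustrative sketch (exact field placement is an assumption, check your version's reference):

```yaml
tracer:
  jaeger:
    agent_address: localhost:6831
    sampler_type: const
    sampler_param: 1
```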
string
Default: ""
agent_address
.
Type: string
Default: ""
string
Default: "const"
Option | Summary |
---|---|
const | Sample a percentage of traces. 1 or more means all traces are sampled, 0 means no traces are sampled and anything in between means a percentage of traces are sampled. Tuning the sampling rate is recommended for high-volume production workloads. |
float
Default: 1
object
Default: {}
string
array
string
bool
Default: false
array
string
bool
Default: false
object
Default: {}
object
bool
Default: false
float
metrics
, which describes a metrics format and destination. For example, if you wished to push them via the Prometheus protocol you could use this configuration:
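A minimal sketch of such a configuration (exact layout may vary by version):

```yaml
metrics:
  prometheus: {}
```

Metrics would then typically be exposed for scraping via the service's HTTP endpoint.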
prefix
. The default prefix is bento
. The following metrics are emitted with the respective types:
{prefix}_input_count: Number of inputs currently active.
{prefix}_output_count: Number of outputs currently active.
{prefix}_processor_count: Number of processors currently active.
{prefix}_cache_count: Number of caches currently active.
{prefix}_condition_count: Number of conditions currently active.
{prefix}_input_connection_up: 1 if a particular input is connected, 0 if it is not.
{prefix}_output_connection_up: 1 if a particular output is connected, 0 if it is not.
{prefix}_input_running: 1 if a particular input is running, 0 if it is not.
{prefix}_output_running: 1 if a particular output is running, 0 if it is not.
{prefix}_processor_running: 1 if a particular processor is running, 0 if it is not.
{prefix}_cache_running: 1 if a particular cache is running, 0 if it is not.
{prefix}_condition_running: 1 if a particular condition is running, 0 if it is not.
{prefix}_buffer_running: 1 if a particular buffer is running, 0 if it is not.
{prefix}_buffer_available: The number of messages that can be read from a buffer.
{prefix}_input_retry: The number of active retry attempts for a particular input.
{prefix}_output_retry: The number of active retry attempts for a particular output.
{prefix}_processor_retry: The number of active retry attempts for a particular processor.
{prefix}_cache_retry: The number of active retry attempts for a particular cache.
{prefix}_condition_retry: The number of active retry attempts for a particular condition.
{prefix}_buffer_retry: The number of active retry attempts for a particular buffer.
{prefix}_threads_active: The number of processing threads currently active.
{prefix}_input_received: Count of messages received by a particular input.
{prefix}_input_batch_received: Count of batches received by a particular input.
{prefix}_output_sent: Count of messages sent by a particular output.
{prefix}_output_batch_sent: Count of batches sent by a particular output.
{prefix}_processor_processed: Count of messages processed by a particular processor.
{prefix}_processor_batch_processed: Count of batches processed by a particular processor.
{prefix}_processor_dropped: Count of messages dropped by a particular processor.
{prefix}_processor_batch_dropped: Count of batches dropped by a particular processor.
{prefix}_processor_error: Count of errors returned by a particular processor.
{prefix}_processor_batch_error: Count of batch errors returned by a particular processor.
{prefix}_cache_hit: Count of cache key lookups that found a value.
{prefix}_cache_miss: Count of cache key lookups that did not find a value.
{prefix}_cache_added: Count of new cache entries.
{prefix}_cache_err: Count of errors that occurred during a cache operation.
{prefix}_condition_hit: Count of condition checks that passed.
{prefix}_condition_miss: Count of condition checks that failed.
{prefix}_condition_error: Count of errors that occurred during a condition check.
{prefix}_buffer_added: Count of messages added to a particular buffer.
{prefix}_buffer_batch_added: Count of batches added to a particular buffer.
{prefix}_buffer_read: Count of messages read from a particular buffer.
{prefix}_buffer_batch_read: Count of batches read from a particular buffer.
{prefix}_buffer_ack: Count of messages removed from a particular buffer.
{prefix}_buffer_batch_ack: Count of batches removed from a particular buffer.
{prefix}_buffer_nack: Count of messages that failed to be removed from a particular buffer.
{prefix}_buffer_batch_nack: Count of batches that failed to be removed from a particular buffer.
{prefix}_buffer_err: Count of errors that occurred during a buffer operation.
{prefix}_buffer_batch_err: Count of batch errors that occurred during a buffer operation.
{prefix}_input_error: Count of errors that occurred during an input operation.
{prefix}_input_batch_error: Count of batch errors that occurred during an input operation.
{prefix}_output_error: Count of errors that occurred during an output operation.
{prefix}_output_batch_error: Count of batch errors that occurred during an output operation.
{prefix}_resource_cache_error: Count of errors that occurred during a resource cache operation.
{prefix}_resource_condition_error: Count of errors that occurred during a resource condition operation.
{prefix}_resource_input_error: Count of errors that occurred during a resource input operation.
{prefix}_resource_processor_error: Count of errors that occurred during a resource processor operation.
{prefix}_resource_output_error: Count of errors that occurred during a resource output operation.
{prefix}_resource_rate_limit_error: Count of errors that occurred during a resource rate limit operation.
{prefix}_input_latency: Latency of a particular input.
{prefix}_input_batch_latency: Latency of a particular input at the batch level.
{prefix}_output_latency: Latency of a particular output.
{prefix}_output_batch_latency: Latency of a particular output at the batch level.
{prefix}_processor_latency: Latency of a particular processor.
{prefix}_processor_batch_latency: Latency of a particular processor at the batch level.
{prefix}_condition_latency: Latency of a particular condition.
{prefix}_condition_batch_latency: Latency of a particular condition at the batch level.
{prefix}_cache_latency: Latency of a particular cache.
{prefix}_buffer_latency: Latency of a particular buffer.
{prefix}_buffer_batch_latency: Latency of a particular buffer at the batch level.
path: The path of the component within the config.
label: A custom label for the component, which is optional and falls back to the component type.
string
Default: "bento"
string
Default: ""
string
Default: "bento_push"
string
Default: ""
object
bool
Default: false
string
Default: ""
string
Default: ""
string
Default: ""
bool
Default: false
array
Default: [0.000001, 0.00001, 0.0001, 0.001, 0.01, 0.1, 1.0]
batching
configuration block this means it benefits from batching and requires you to specify how you’d like your batches to be formed by configuring a batching policy:
batching
configuration block.
Sometimes you may prefer to create your batches before processing, in which case if your input doesn’t already support a batch policy you can instead use a broker, which also allows you to combine inputs with a single batch policy:
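For example, a broker that combines two inputs under one shared batch policy (addresses and topics are illustrative):

```yaml
input:
  broker:
    inputs:
      - kafka:
          addresses: [ localhost:9092 ]
          topics: [ foo ]
      - kafka:
          addresses: [ localhost:9092 ]
          topics: [ bar ]
    batching:
      count: 10
      period: 1s
```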
batching
that means it supports a batch policy. This is a mechanism that allows you to configure exactly how your batching should work on messages before they are routed to the input or output it’s associated with. Batches are considered complete and will be flushed downstream when either of the following conditions are met:
byte_size
field is non-zero and the total size of the batch in bytes matches or exceeds it (disregarding metadata).
count
field is non-zero and the total number of messages in the batch matches or exceeds it.
period
field is non-empty and the time since the last batch exceeds its value.
foo.bar
would return 21.
The characters ~
(%x7E) and .
(%x2E) have special meaning in Tyk Streams paths. Therefore ~
needs to be encoded as ~0
and .
needs to be encoded as ~1
when these characters appear within a key.
For example, if we had the following JSON structure:
foo~1foo.bar~0bo..baz
would return 22
.
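As a simpler illustrative case (this document and path are hypothetical), given:

```yaml
{"foo.bar": {"baz~qux": 7}}
```

the path foo~1bar.baz~0qux would return 7, since ~1 decodes to a literal dot and ~0 to a literal tilde within each key.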
*
or -
respectively.
For example, if we had the following JSON structure:
foo.2.bar
would return 23
.
*
indicates that the query should return the value of the remaining path from each element of the array.
-
indicates that a new element should be appended to the end of the existing elements; if this character is not the final segment of the path then an object is created.
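An illustrative example of both characters (document and values are hypothetical): given

```yaml
{"foo": [{"bar": 1}, {"bar": 2}]}
```

the query path foo.*.bar would return one value per array element (1 and 2), while setting a value at foo.- would append a new element to the end of the foo array.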
input
and output
, is a pipeline
section. This section describes an array of processors that are to be applied to all messages, and are not bound to any particular input or output.
If you have processors that are heavy on CPU and aren’t specific to a certain input or output they are best suited for the pipeline section. It is advantageous to use the pipeline section as it allows you to set an explicit number of parallel threads of execution:
threads
is set to -1
(the default) it will automatically match the number of logical CPUs available. By default almost all Tyk Streams sources will utilize as many processing threads as have been configured, which makes horizontal scaling easy.