Extra Features

SmallRye Fault Tolerance provides several features that are not present in the MicroProfile Fault Tolerance specification. Their public API is present in the io.smallrye:smallrye-fault-tolerance-api artifact.

Note that these features may have experimental status, marked by the @Experimental annotation.

Circuit Breaker Maintenance

It is sometimes useful to see the circuit breaker status from within the application, or reset it to the initial state. This is possible in two steps:

  1. Give the circuit breaker a name by annotating the guarded method with @CircuitBreakerName:

    @ApplicationScoped
    public class MyService {
        @CircuitBreaker
        @CircuitBreakerName("hello-cb") (1)
        public String hello() {
            ...
        }
    }
    1 The circuit breaker guarding the MyService.hello method is given a name hello-cb.
  2. Inject CircuitBreakerMaintenance and call its methods:

    @ApplicationScoped
    public class Example {
        @Inject
        CircuitBreakerMaintenance maintenance;
    
        public void test() {
            System.out.println("Circuit breaker state: "
                + maintenance.currentState("hello-cb")); (1)
            maintenance.resetAll(); (2)
        }
    }
    1 Obtains current circuit breaker state.
    2 Resets all circuit breakers to the initial state.

The CircuitBreakerMaintenance interface provides 4 methods:

  1. currentState(name): returns current state of given circuit breaker. The return type CircuitBreakerState is an enum with 3 values: CLOSED, OPEN, HALF_OPEN.

  2. onStateChange(name, callback): registers a callback that will be called when given circuit breaker changes state.

  3. reset(name): resets given circuit breaker to the initial state.

  4. resetAll(): resets all circuit breakers in the application to the initial state.

See the javadoc of those methods for more information.
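
For example, onStateChange can be used to observe circuit breaker state transitions. The following is a minimal sketch; the bean and method names are made up for illustration, and the exact callback type is assumed to receive the new CircuitBreakerState, so check the javadoc:

@ApplicationScoped
public class CircuitBreakerWatcher {
    @Inject
    CircuitBreakerMaintenance maintenance;

    public void watch() {
        // assumption: the callback is invoked with the new state of the "hello-cb" circuit breaker
        maintenance.onStateChange("hello-cb",
                newState -> System.out.println("hello-cb is now " + newState));
    }
}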

@AsynchronousNonBlocking

In addition to the MicroProfile Fault Tolerance @Asynchronous annotation, SmallRye Fault Tolerance offers an annotation for asynchronous non-blocking processing: @AsynchronousNonBlocking.

This annotation may only be placed on methods that return CompletionStage, because the Future type can’t really be used for non-blocking processing. It may also be placed on a class, in which case all business methods must return CompletionStage.

As opposed to @Asynchronous, the @AsynchronousNonBlocking annotation does not offload the method execution to another thread. The method is allowed to proceed on the original thread, assuming that it returns quickly and all processing it does is performed in a non-blocking way.

Annotation                 Method return type           Execution
@Asynchronous              Future or CompletionStage    offloaded to a thread pool
@AsynchronousNonBlocking   CompletionStage              proceeds on the original thread

When to Use @Asynchronous

Use @Asynchronous if the method has blocking logic, but you don’t want to block the caller thread. For example:

@ApplicationScoped
public class MyService {
    @Retry
    @Asynchronous (1)
    CompletionStage<String> hello() {
        ...
    }
}
1 Using the @Asynchronous annotation, because the method blocks, and it is necessary to offload its execution to another thread.

The thread pool that is used for offloading method calls is provided by the runtime that integrates SmallRye Fault Tolerance.

When to Use @AsynchronousNonBlocking

Use @AsynchronousNonBlocking if the method does not have blocking logic, and you want the execution to stay on the caller thread. For example:

@ApplicationScoped
public class MyService {
    @Retry
    @AsynchronousNonBlocking (1)
    CompletionStage<String> hello() {
        ...
    }
}
1 Using the @AsynchronousNonBlocking annotation, because the method does not block and offloading execution to another thread is not necessary.

If the guarded method uses @Retry and some delay between retries is configured, only the initial execution is guaranteed to occur on the original thread. Subsequent attempts may be offloaded to an extra thread, so that the original thread is not blocked on the delay.

If the guarded method uses @Bulkhead, the execution is not guaranteed to occur on the original thread. If the execution has to wait in the bulkhead queue, it may later end up on a different thread.

If the original thread is an event loop thread and event loop integration is enabled, then the event loop is always used to execute the guarded method. In such a case, all retry attempts and queued bulkhead executions are guaranteed to happen on the original thread.

How to Combine Them

When these annotations are combined, an annotation on a method has priority over an annotation on a class. For example:

@ApplicationScoped
@AsynchronousNonBlocking
public class MyService {
    @Retry
    CompletionStage<String> hello() { (1)
        ...
    }

    @Retry
    @Asynchronous
    CompletionStage<String> helloBlocking() { (2)
        ...
    }
}
1 Treated as @AsynchronousNonBlocking, based on the class annotation.
2 Treated as @Asynchronous, the method annotation has priority over the class annotation.

It is an error to put both @Asynchronous and @AsynchronousNonBlocking on the same program element.

Recommendation

Use the non-compatible mode. In this mode, methods returning CompletionStage are automatically treated as if they were @AsynchronousNonBlocking. Use @Asynchronous to mark blocking methods for thread offload.

If you can’t use the non-compatible mode, use @AsynchronousNonBlocking or @Asynchronous to mark all asynchronous methods.

In previous releases, SmallRye Fault Tolerance recommended using the @Blocking and @NonBlocking annotations. Using these annotations for fault tolerance purposes is now deprecated. They are still supported, but at some point, SmallRye Fault Tolerance will stop recognizing them.

We also recommend avoiding @Asynchronous methods that return Future, because the only way to obtain the future value is to block.

Additional Asynchronous Types

MicroProfile Fault Tolerance supports asynchronous fault tolerance for methods that return CompletionStage. (The Future type is not truly asynchronous, so we won’t take it into account here.) SmallRye Fault Tolerance adds support for additional asynchronous types:

  • Mutiny: Uni

  • RxJava: Single, Maybe, Completable

These types are treated just like CompletionStage, so everything that works for CompletionStage works for these types as well. Stream-like types (Multi, Observable, Flowable) are not supported, because their semantics can’t be easily expressed in terms of CompletionStage.

For example:

@ApplicationScoped
public class MyService {
    @Retry
    @AsynchronousNonBlocking (1)
    Uni<String> hello() { (2)
        ...
    }
}
1 Using the @AsynchronousNonBlocking annotation described in @AsynchronousNonBlocking, because the method doesn’t block and offloading execution to another thread is not necessary.
2 Returning the Uni type from Mutiny. This shows that whatever works for CompletionStage also works for the other async types.

The implementation internally converts the async types to a CompletionStage and back. This means that to be able to use any particular asynchronous type, the corresponding converter must be present. SmallRye Fault Tolerance provides support libraries for popular asynchronous types, and these support libraries include the corresponding converters.

It is possible that the runtime you use already provides the correct integration. Otherwise, add a dependency to your application:

  • Mutiny: io.smallrye:smallrye-fault-tolerance-mutiny

  • RxJava 3: io.smallrye:smallrye-fault-tolerance-rxjava3

Quarkus

In Quarkus, the Mutiny support library is present by default. You can use fault tolerance on methods that return Uni out of the box.

Backoff Strategies for @Retry

When retrying failed operations, it is often useful to introduce a delay between retry attempts. This delay is also called "backoff". The @Retry annotation in MicroProfile Fault Tolerance supports a single backoff strategy: constant. That is, the delay between all retry attempts is identical (except for a random jitter).

SmallRye Fault Tolerance offers 3 annotations to specify a different backoff strategy:

  • @ExponentialBackoff

  • @FibonacciBackoff

  • @CustomBackoff

One of these annotations may be present on any program element (method or class) that also has the @Retry annotation. For example:

package com.example;

@ApplicationScoped
public class MyService {
    @Retry
    @ExponentialBackoff
    public void hello() {
        ...
    }
}

It is an error to add a backoff annotation to a program element that doesn’t have @Retry (e.g. adding @Retry on a class and @ExponentialBackoff on a method). It is also an error to add more than one of these annotations to the same program element.

When any one of these annotations is present, it modifies the behavior specified by the @Retry annotation. The new behavior is as follows:

For @ExponentialBackoff, the delays between retry attempts grow exponentially, using a defined factor. By default, the factor is 2, so each delay is 2 * the previous delay. For example, if the initial delay (specified by @Retry) is 1 second, then the second delay is 2 seconds, the third delay is 4 seconds, the fourth delay is 8 seconds, etc. It is possible to define a maxDelay, so that this growth has a limit.

For @FibonacciBackoff, the delays between retry attempts grow per the Fibonacci sequence. For example, if the initial delay (specified by @Retry) is 1 second, then the second delay is 2 seconds, the third delay is 3 seconds, the fourth delay is 5 seconds, etc. It is possible to define a maxDelay, so that this growth has a limit.

Both @ExponentialBackoff and @FibonacciBackoff also apply jitter, exactly like plain @Retry.

Also, since @Retry has a default maxDuration of 3 minutes and a default maxRetries of 3, both @ExponentialBackoff and @FibonacciBackoff define a default maxDelay of 1 minute. If maxRetries is set to a much higher value and the guarded method keeps failing, the delay would eventually grow beyond 1 minute; in that case, it is capped at 1 minute. Of course, maxDelay can be configured. If set to 0, there’s no limit, and the delays grow without bounds.
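
For example, a sketch of capping the exponential growth; the values are illustrative, and the maxDelay member is assumed here to be specified in milliseconds by default, so check the annotation's javadoc:

@ApplicationScoped
public class MyService {
    @Retry(maxRetries = 10, delay = 500)
    @ExponentialBackoff(factor = 3, maxDelay = 60_000) // cap the growing delay at 1 minute (assumed milliseconds)
    public void hello() {
        ...
    }
}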

For @CustomBackoff, computing the delays between retry attempts is delegated to a specified implementation of CustomBackoffStrategy. This is an advanced option.
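
A rough sketch of how this might look follows; note that the exact shape of the CustomBackoffStrategy interface (the init and nextDelayInMillis methods shown here) and the value member of @CustomBackoff are assumptions based on the description above, so consult the javadoc before relying on them:

public class MyRandomizedBackoff implements CustomBackoffStrategy {
    private long initialDelay;

    // assumed initialization hook receiving the initial delay configured on @Retry
    public void init(long initialDelayInMillis) {
        this.initialDelay = initialDelayInMillis;
    }

    // assumed per-attempt callback computing the next delay
    public long nextDelayInMillis(Throwable cause) {
        return initialDelay + ThreadLocalRandom.current().nextLong(initialDelay + 1);
    }
}

@Retry
@CustomBackoff(MyRandomizedBackoff.class)
public void hello() {
    ...
}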

For more information about these backoff strategies, see the javadoc of the annotations.

Configuration

These annotations may be configured using the same mechanism as MicroProfile Fault Tolerance annotations. For example, to modify the factor of the @ExponentialBackoff annotation above, you can use:

com.example.MyService/hello/ExponentialBackoff/factor=3

Metrics

These annotations do not have any special metrics. All @Retry metrics are still present and reflect the altered behavior.

Non-compatible Mode

SmallRye Fault Tolerance offers a mode where certain features are improved beyond specification, as described below. This mode is not compatible with the MicroProfile Fault Tolerance specification (and doesn’t necessarily pass the entire TCK).

This mode is disabled by default. To enable it, set the configuration property smallrye.faulttolerance.mp-compatibility to false.
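
For example, in any MicroProfile Config source:

smallrye.faulttolerance.mp-compatibility=false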

Quarkus

In Quarkus, the non-compatible mode is enabled by default. To restore compatibility, add the following to your application.properties:

smallrye.faulttolerance.mp-compatibility=true

Note that the non-compatible mode is available since SmallRye Fault Tolerance 5.2.0 and Quarkus 2.1.0.Final. Previous versions are always compatible.

Determining Asynchrony from Method Signature

In the non-compatible mode, method asynchrony is determined solely from its signature. That is, methods that

  • have some fault tolerance annotation (such as @Retry),

  • return CompletionStage (or some other async type),

always have asynchronous fault tolerance applied.

For example:

@ApplicationScoped
public class MyService {
    @Retry
    CompletionStage<String> hello() { (1)
        ...
    }

    @Retry
    Uni<String> helloMutiny() { (2)
        ...
    }

    @Retry
    @Asynchronous
    CompletionStage<String> helloBlocking() { (3)
        ...
    }
}
1 Executed on the original thread, because the method returns CompletionStage. It is as if the method was annotated @AsynchronousNonBlocking.
2 Executed on the original thread, because the method returns an async type. It is as if the method was annotated @AsynchronousNonBlocking.
3 The explicit @Asynchronous annotation is honored. The method is executed on a thread pool.

Note that the existing annotations still work without a change, both in compatible and non-compatible mode. That is, if a method (or class) is annotated @Asynchronous, execution will be offloaded to a thread pool. If a method (or class) is annotated @AsynchronousNonBlocking, execution will happen on the original thread.

Also note that this doesn’t affect methods returning Future. You still have to annotate them @Asynchronous to make sure they are executed on a thread pool and are guarded properly. As mentioned in the @AsynchronousNonBlocking section, we discourage using such methods, because the only way to obtain the future value is to block.

Inspecting Exception Cause Chains

The @CircuitBreaker, @Fallback and @Retry annotations can be used to specify that certain exceptions should be treated as failures and others as successes. This is limited to inspecting the actual exception that was thrown. However, in many use cases, exceptions are wrapped and the exception the user wants to decide on is only present in the cause chain.

In the non-compatible mode, if the actual thrown exception is neither a known failure nor a known success, SmallRye Fault Tolerance inspects the cause chain. To be specific, when a method guarded by @Fallback throws an exception, the decision process is:

  1. if the exception is assignable to one of the skipOn exceptions, fallback is skipped and the exception is rethrown;

  2. otherwise, if the exception is assignable to one of the applyOn exceptions, fallback is applied;

  3. otherwise, if the cause chain of the exception contains an exception assignable to one of the skipOn exceptions, fallback is skipped and the exception is rethrown;

  4. otherwise, if the cause chain of the exception contains an exception assignable to one of the applyOn exceptions, fallback is applied;

  5. otherwise, the exception is rethrown.

For example, say we have this method:

@Fallback(fallbackMethod = "fallback",
    skipOn = ExpectedOutcomeException.class,
    applyOn = IOException.class)
public Result doSomething() {
    ...
}

public Result fallback() {
    ...
}

If doSomething throws an ExpectedOutcomeException, fallback is skipped and the exception is thrown. If doSomething throws an IOException, fallback is applied. If doSomething throws a WrapperException whose cause is ExpectedOutcomeException, fallback is skipped and the exception is thrown. If doSomething throws a WrapperException whose cause is IOException, fallback is applied.

Comparing with the @Fallback specification, SmallRye Fault Tolerance inserts 2 more steps into the decision process that inspect the cause chain. Note that these steps are executed if and only if the thrown exception matches neither skipOn nor applyOn. If the thrown exception matches either of them, the cause chain is not inspected at all.

Similar behavior applies to @CircuitBreaker and @Retry. All 3 annotations follow the same principle: exceptions considered success have priority over those considered failure.

Fault Tolerance annotation   Exception is first tested against   ...and then against
@Fallback                    skipOn                              applyOn
@CircuitBreaker              skipOn                              failOn
@Retry                       abortOn                             retryOn
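
For illustration, the same principle applied to @Retry, reusing the exception types from the @Fallback example above (a hedged sketch; the types are made up):

@Retry(retryOn = IOException.class, abortOn = ExpectedOutcomeException.class)
public Result doSomething() {
    ...
}

If doSomething throws a WrapperException whose cause is an IOException, the invocation is retried; if the cause is an ExpectedOutcomeException, retrying is aborted and the exception is rethrown. As before, the cause chain is only inspected when the thrown exception itself matches neither abortOn nor retryOn.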

Fallback Method with Exception Parameter

In the non-compatible mode, SmallRye Fault Tolerance supports access to the causing exception in a @Fallback method.

A fallback method, as defined by the MicroProfile Fault Tolerance specification, must have the same parameters as the guarded method. SmallRye Fault Tolerance permits defining one additional parameter, at the end of the parameter list, which must be of an exception type. If such a parameter is defined, the exception that caused the fallback will be supplied in it.

For example:

@ApplicationScoped
public class MyService {
    @Fallback(fallbackMethod = "fallback")
    public String doSomething(String param) {
        ...
    }

    public String fallback(String param, IllegalArgumentException cause) { (1)
        ...
    }
}
1 The fallback method matches the guarded method signature, except for one additional parameter at the end.

All rules of the MicroProfile Fault Tolerance specification related to looking up fallback methods still apply. That is, the return types must match, the parameter types must match (with this one exception), etc.

If the thrown exception is not assignable to the exception parameter type, it is rethrown as if no fallback was declared. In the previous example, if IllegalStateException was thrown, the fallback method would not be called, as IllegalStateException is not a subtype of IllegalArgumentException.

If the guarded method has a vararg parameter and you want to declare a fallback method with an exception parameter, simply replace the vararg syntax with an array type:

@ApplicationScoped
public class MyService {
    @Fallback(fallbackMethod = "fallback")
    public String doSomething(String... params) {
        ...
    }

    public String fallback(String[] params, IllegalArgumentException cause) {
        ...
    }
}

Multiple Fallback Methods with Exception Parameter

It is possible to declare multiple overloads of the fallback method, each with a different type of the exception parameter:

@ApplicationScoped
public class MyService {
    @Fallback(fallbackMethod = "fallback")
    public String doSomething(String param) {
        ...
    }

    public String fallback(String param, IllegalArgumentException cause) {
        ...
    }

    public String fallback(String param, RuntimeException cause) {
        ...
    }
}

In that case, which fallback method is called depends on the type of the thrown exception. The fallback method that declares the most specific supertype of the actual exception is selected.

In the previous example, if IllegalArgumentException was thrown by doSomething, the first fallback method would be called. If IllegalStateException was thrown, the second fallback method would be called.

If the thrown exception is not assignable to the exception parameter type of any fallback method, it is rethrown as if no fallback was declared.

Fallback Methods with and without Exception Parameter

It is possible to declare the fallback method with and without an exception parameter at the same time:

@ApplicationScoped
public class MyService {
    @Fallback(fallbackMethod = "fallback")
    public String doSomething(String param) {
        ...
    }

    public String fallback(String param, IllegalArgumentException cause) {
        ...
    }

    public String fallback(String param, RuntimeException cause) {
        ...
    }

    public String fallback(String param) {
        ...
    }
}

The fallback methods with an exception parameter have precedence. The fallback method without an exception parameter is only called if the thrown exception is not assignable to any declared exception parameter.

Interactions with applyOn / skipOn

The presence or absence of a fallback method with specific exception parameter may seem related to the usage of applyOn / skipOn on the @Fallback annotation, but in fact, it is not. These features are completely independent.

Simply put, the applyOn / skipOn configuration is always evaluated first. A fallback method is only selected and invoked when this configuration indicates that a fallback should apply.

If @Fallback is configured to skip IllegalStateException and IllegalStateException is thrown, no fallback method is invoked. That applies even if a fallback method with a matching exception parameter exists.

For example:

@ApplicationScoped
public class MyService {
    @Fallback(fallbackMethod = "fallback", skipOn = IllegalStateException.class)
    public String doSomething(String param) {
        ...
    }

    public String fallback(String param, IllegalArgumentException cause) {
        ...
    }

    public String fallback(String param, RuntimeException cause) {
        ...
    }

    public String fallback(String param) {
        ...
    }
}

In this case:

  • if doSomething throws IllegalArgumentException, the first fallback method is called;

  • if doSomething throws IllegalStateException, no fallback method is called, because this exception type is skipped;

  • if doSomething throws any other RuntimeException, the second fallback method is called;

  • if doSomething throws any other exception, the last fallback method is called.

Kotlin suspend Functions

SmallRye Fault Tolerance includes support for Kotlin suspending functions. They are treated as Additional Asynchronous Types, even though the internal implementation is more complex than support for Mutiny or RxJava.

For example:

@ApplicationScoped
open class MyService {
    @Retry(maxRetries = 2)
    @Fallback(fallbackMethod = "helloFallback")
    open suspend fun hello(): String { (1)
        delay(100)
        throw IllegalArgumentException()
    }

    private suspend fun helloFallback(): String { (2)
        delay(100)
        return "hello"
    }
}
1 As a suspending function, this method can only be called from another suspending function. It will be guarded by the retry and fallback strategies, as defined using the annotations.
2 Similarly to fallback methods in Java, fallback methods in Kotlin must have the same signature as the guarded method. Since the guarded method is suspending, the fallback method must be suspending.

As mentioned above, suspending functions are treated as async types. This means that for asynchronous fault tolerance to work correctly on suspending functions, they must be determined to be asynchronous. That happens automatically in the non-compatible mode, based on the method signature, but in the strictly compatible mode, the @Asynchronous or @AsynchronousNonBlocking annotation must be present. It is expected that most users will use the Kotlin support in the non-compatible mode, so the example above does not include any such annotation.

To be able to use this, a support library must be present. It is possible that the runtime you use already provides the correct integration. Otherwise, add a dependency to your application: io.smallrye:smallrye-fault-tolerance-kotlin.

Quarkus

In Quarkus, the Kotlin support library is present by default, if you use the Quarkus Kotlin support. You can declare fault tolerance annotations on suspending methods out of the box.

Programmatic API

Suspending functions are currently only supported in the declarative, annotation-based API, as shown in the example above. The Programmatic API of SmallRye Fault Tolerance does not support suspending functions, but other than that, it can of course be used from Kotlin through its Java interop.

Reusable, Preconfigured Fault Tolerance

The declarative, annotation-based API of MicroProfile Fault Tolerance doesn’t allow sharing configuration of fault tolerance strategies across multiple classes. In a single class, the configuration may be shared across all methods by putting the annotations on the class instead of individual methods, but even then, stateful fault tolerance strategies are not shared. Each method has its own bulkhead and/or circuit breaker, which is often not what you want.

The programmatic API of SmallRye Fault Tolerance allows using a single FaultTolerance object to guard multiple disparate actions, which allows reuse and state sharing. It is possible to use a programmatically constructed FaultTolerance object declaratively, using the @ApplyFaultTolerance annotation.

To be able to do that, we need a bean of type FaultTolerance with the @Identifier qualifier:

@ApplicationScoped
public class PreconfiguredFaultTolerance {
    @Produces
    @Identifier("my-fault-tolerance")
    public static final FaultTolerance<String> FT = FaultTolerance.<String>create()
            .withRetry().maxRetries(2).done()
            .withFallback().handler(() -> "fallback").done()
            .build();
}

See the programmatic API documentation for more information about creating the FaultTolerance instance.

It is customary to create the bean by declaring a static producer field, just like in the previous example.

Once we have that, we can apply my-fault-tolerance to synchronous methods that return String:

@ApplicationScoped
public class MyService {
    @ApplyFaultTolerance("my-fault-tolerance")
    public String doSomething() {
        ...
    }
}

It is also possible to create a bean of type FaultTolerance<Object> and apply it to synchronous methods that return many different types. Note that this effectively precludes defining a useful fallback, because fallback can only be defined when the value type is known.

It is also possible to define a bean of type FaultTolerance<CompletionStage<T>> and apply it to asynchronous methods that return CompletionStage<T>. Likewise, it is possible to do this for Additional Asynchronous Types.

Note that you can’t define a synchronous FaultTolerance<T> object and apply it to any asynchronous method. Similarly, you can’t define an asynchronous FaultTolerance<CompletionStage<T>> and apply it to a synchronous method or an asynchronous method with different asynchronous type. This limitation will be lifted in the future.
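
For example, a sketch of an asynchronous variant; it assumes the createAsync() entry point of the programmatic API and a fallback handler that produces a CompletionStage, so see the programmatic API documentation for the exact builder methods:

@ApplicationScoped
public class PreconfiguredAsyncFaultTolerance {
    @Produces
    @Identifier("my-async-fault-tolerance")
    public static final FaultTolerance<CompletionStage<String>> FT = FaultTolerance.<String>createAsync()
            .withRetry().maxRetries(2).done()
            .withFallback().handler(() -> CompletableFuture.completedFuture("fallback")).done()
            .build();
}

Such a bean could then be applied with @ApplyFaultTolerance("my-async-fault-tolerance") to asynchronous methods returning CompletionStage<String>.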

Rate Limit

SmallRye Fault Tolerance includes an additional fault tolerance strategy, not prescribed by the MicroProfile Fault Tolerance specification: rate limit.

Rate limit enforces a maximum number of permitted invocations in a time window of some length. For example, with a rate limit, one can make sure that a method may only be called 50 times per minute. Invocations that would exceed the limit are rejected with an exception of type RateLimitException.

Additionally, it is possible to define minimum spacing between invocations. For example, with minimum spacing of 1 second, if a second invocation happens 500 millis after the first, it is rejected even if the limit wouldn’t be exceeded yet.

Rate limit is superficially similar to a bulkhead (concurrency limit), but is in fact quite different. Bulkhead limits the number of executions happening concurrently at any point in time. Rate limit limits the number of executions in a time window of some length, without considering concurrency.

Rate Limit Usage

A method or a class can be annotated with @RateLimit, which means the method or the methods in the class will apply the rate limit strategy.

The following annotation members control the rate limit behavior:

  • value: the maximum number of invocations permitted in the time window;

  • window and windowUnit: the length of the time window;

  • minSpacing and minSpacingUnit: the minimum spacing between two consecutive invocations;

  • type: the type of time windows used for rate limiting, described below.

The previous example with 50 maximum invocations per minute and minimum spacing of 1 second would look like this:

@RateLimit(value = 50,
        window = 1, windowUnit = ChronoUnit.MINUTES,
        minSpacing = 1, minSpacingUnit = ChronoUnit.SECONDS)
public void doSomething() {
    ...
}

Time Window Type

SmallRye Fault Tolerance supports three types of a time window used for rate limiting: fixed, rolling and smooth.

Fixed time windows are a result of dividing time into non-overlapping intervals of given length. The invocation limit is enforced for each interval independently. This means that short bursts of invocations occurring near the time window boundaries may temporarily exceed the configured rate limit. This kind of rate limiting is also called fixed window rate limiting.

Rolling time windows enforce the limit continuously, instead of dividing time into independent intervals. The invocation limit is enforced for all possible time intervals of given length, regardless of overlap. This is more precise, but requires more memory and may be slower. This kind of rate limiting is also called sliding log rate limiting.

Smooth time windows enforce a uniform distribution of invocations under a rate calculated from the given time window length and limit. If the recent rate of invocations is under the limit, a subsequent burst of invocations is allowed during a shorter time span, but the calculated rate is never exceeded. This kind of rate limiting is also called token bucket or leaky bucket (as a meter) rate limiting, with the additional property that all work units are considered to have the same size.

The type of time window used for rate limiting is configured using the type annotation member:

@RateLimit(value = 50,
        window = 1, windowUnit = ChronoUnit.MINUTES,
        minSpacing = 1, minSpacingUnit = ChronoUnit.SECONDS,
        type = RateLimitType.ROLLING)
public void doSomething() {
    ...
}

Lifecycle

Rate limit needs to maintain some state between invocations, depending on the time window type. It may be the number of recent invocations or the timestamp of the last invocation. This state is a singleton, irrespective of the lifecycle of the bean that uses the @RateLimit annotation.

More specifically, the rate limit state is uniquely identified by the combination of the bean class (java.lang.Class) and the method object (java.lang.reflect.Method) representing the guarded method.

For example, if there’s a guarded method doWork on a bean which is @RequestScoped, each request will have its own instance of the bean, but all invocations of doWork will share the same rate limit state.
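
The following minimal sketch illustrates this; the bean and values are made up for illustration:

@RequestScoped
public class WorkService {
    @RateLimit(value = 50, window = 1, windowUnit = ChronoUnit.MINUTES)
    public void doWork() {
        // each request gets its own WorkService instance, but all invocations
        // of doWork() across all requests count against the same rate limit
        ...
    }
}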

Interactions with Other Annotations

The @RateLimit annotation can be used together with all other fault tolerance annotations. If a method would hypothetically declare all fault tolerance annotations, the fault tolerance strategies would be nested like this:

Fallback(
    Retry(
        CircuitBreaker(
            RateLimit(
                Timeout(
                    Bulkhead(
                        ... the guarded method ...
                    )
                )
            )
        )
    )
)

If @Fallback is used with @RateLimit, the fallback method or handler may be invoked if a RateLimitException is thrown, depending on the fallback configuration.

If @Retry is used with @RateLimit, each retry attempt is processed by the rate limit as an independent invocation. If RateLimitException is thrown, the execution may be retried, depending on how retry is configured.

If @CircuitBreaker is used with @RateLimit, the circuit breaker is checked before enforcing the rate limit. If rate limiting results in RateLimitException, this may be counted as a failure, depending on how the circuit breaker is configured.
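
For example, a sketch of retrying invocations rejected by the rate limit after a delay (the values are illustrative):

@Retry(retryOn = RateLimitException.class, maxRetries = 3, delay = 1, delayUnit = ChronoUnit.SECONDS)
@RateLimit(value = 50, window = 1, windowUnit = ChronoUnit.MINUTES)
public void doSomething() {
    ...
}

Because @Retry wraps @RateLimit in the nesting shown above, a rejected invocation may be retried after the delay, and each retry attempt counts against the rate limit as a new invocation.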

Configuration

The configuration system of MicroProfile Fault Tolerance applies to @RateLimit without a change.

For example, assume the following class:

package com.example;

@ApplicationScoped
public class MyService {
    @RateLimit(value = 50,
            window = 1, windowUnit = ChronoUnit.MINUTES,
            minSpacing = 1, minSpacingUnit = ChronoUnit.SECONDS)
    public void doSomething() {
        ...
    }
}

The rate limit configured for the doSomething method may be reconfigured like this:

com.example.MyService/doSomething/RateLimit/value=150

Metrics

Rate limit exposes metrics to MicroProfile Metrics, following other MicroProfile Fault Tolerance metrics.

Name:        ft.ratelimit.calls.total
Type:        Counter
Unit:        None
Description: The number of times the rate limit logic was run. This will usually be once per method call, but may be zero times if the circuit breaker prevented execution or more than once if the method call is retried.
Tags:
  • method - the fully qualified method name
  • rateLimitResult = [permitted|rejected] - whether the rate limit permitted the method call

Micrometer is of course also supported, as described in Metrics.