java.lang.Object
  ↳ io.reactivex.parallel.ParallelFlowable<T>
Abstract base class for parallel publishers that take an array of Subscribers.

Use from() to start processing a regular Publisher in 'rails'.
Use runOn() to specify, thread-wise, where each 'rail' should run.
Use sequential() to merge the sources back into a single Flowable.

History: 2.0.5 - experimental
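A minimal sketch of that from() / runOn() / sequential() pipeline, assuming RxJava 2.x is on the classpath; the range source, the squaring step and the class name are purely illustrative:

```java
import io.reactivex.Flowable;
import io.reactivex.parallel.ParallelFlowable;
import io.reactivex.schedulers.Schedulers;

public class ParallelFlowableExample {
    public static void main(String[] args) {
        ParallelFlowable.from(Flowable.range(1, 1000)) // one 'rail' per available CPU by default
                .runOn(Schedulers.computation())       // each rail observes its values on its own worker
                .map(v -> v * v)                       // executed concurrently across the rails
                .sequential()                          // merge the rails back into a single Flowable
                .blockingSubscribe(System.out::println);
    }
}
```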
Public Constructors

| Constructor |
|---|
| `ParallelFlowable()` |
Public Methods

| Return type | Method and description |
|---|---|
| `final <C> ParallelFlowable<C>` | `collect(Callable<? extends C> collectionSupplier, BiConsumer<? super C, ? super T> collector)` Collect the elements in each rail into a collection supplied via a collectionSupplier and collected into with a collector action, emitting the collection at the end. |
| `final <U> ParallelFlowable<U>` | `compose(ParallelTransformer<T, U> composer)` Allows composing operators, at assembly time, on top of this ParallelFlowable and returns another ParallelFlowable with composed features. |
| `final <R> ParallelFlowable<R>` | `concatMap(Function<? super T, ? extends Publisher<? extends R>> mapper, int prefetch)` Generates and concatenates Publishers on each 'rail', signalling errors immediately and using the given prefetch amount for generating Publishers upfront. |
| `final <R> ParallelFlowable<R>` | `concatMap(Function<? super T, ? extends Publisher<? extends R>> mapper)` Generates and concatenates Publishers on each 'rail', signalling errors immediately and generating 2 publishers upfront. |
| `final <R> ParallelFlowable<R>` | `concatMapDelayError(Function<? super T, ? extends Publisher<? extends R>> mapper, int prefetch, boolean tillTheEnd)` Generates and concatenates Publishers on each 'rail', optionally delaying errors and using the given prefetch amount for generating Publishers upfront. |
| `final <R> ParallelFlowable<R>` | `concatMapDelayError(Function<? super T, ? extends Publisher<? extends R>> mapper, boolean tillTheEnd)` Generates and concatenates Publishers on each 'rail', optionally delaying errors and generating 2 publishers upfront. |
| `final ParallelFlowable<T>` | `doAfterNext(Consumer<? super T> onAfterNext)` Call the specified consumer with the current element passing through any 'rail' after it has been delivered to downstream within the rail. |
| `final ParallelFlowable<T>` | `doAfterTerminated(Action onAfterTerminate)` Run the specified Action when a 'rail' completes or signals an error. |
| `final ParallelFlowable<T>` | `doOnCancel(Action onCancel)` Run the specified Action when a 'rail' receives a cancellation. |
| `final ParallelFlowable<T>` | `doOnComplete(Action onComplete)` Run the specified Action when a 'rail' completes. |
| `final ParallelFlowable<T>` | `doOnError(Consumer<Throwable> onError)` Call the specified consumer with the exception passing through any 'rail'. |
| `final ParallelFlowable<T>` | `doOnNext(Consumer<? super T> onNext)` Call the specified consumer with the current element passing through any 'rail'. |
| `final ParallelFlowable<T>` | `doOnNext(Consumer<? super T> onNext, ParallelFailureHandling errorHandler)` Call the specified consumer with the current element passing through any 'rail' and handle errors based on the given ParallelFailureHandling enumeration value. |
| `final ParallelFlowable<T>` | `doOnNext(Consumer<? super T> onNext, BiFunction<? super Long, ? super Throwable, ParallelFailureHandling> errorHandler)` Call the specified consumer with the current element passing through any 'rail' and handle errors based on the value returned by the handler function. |
| `final ParallelFlowable<T>` | `doOnRequest(LongConsumer onRequest)` Call the specified consumer with the request amount if any rail receives a request. |
| `final ParallelFlowable<T>` | `doOnSubscribe(Consumer<? super Subscription> onSubscribe)` Call the specified callback when a 'rail' receives a Subscription from its upstream. |
| `final ParallelFlowable<T>` | `filter(Predicate<? super T> predicate, BiFunction<? super Long, ? super Throwable, ParallelFailureHandling> errorHandler)` Filters the source values on each 'rail' and handles errors based on the value returned by the handler function. |
| `final ParallelFlowable<T>` | `filter(Predicate<? super T> predicate)` Filters the source values on each 'rail'. |
| `final ParallelFlowable<T>` | `filter(Predicate<? super T> predicate, ParallelFailureHandling errorHandler)` Filters the source values on each 'rail' and handles errors based on the given ParallelFailureHandling enumeration value. |
| `final <R> ParallelFlowable<R>` | `flatMap(Function<? super T, ? extends Publisher<? extends R>> mapper, boolean delayError)` Generates and flattens Publishers on each 'rail', optionally delaying errors. |
| `final <R> ParallelFlowable<R>` | `flatMap(Function<? super T, ? extends Publisher<? extends R>> mapper)` Generates and flattens Publishers on each 'rail'. |
| `final <R> ParallelFlowable<R>` | `flatMap(Function<? super T, ? extends Publisher<? extends R>> mapper, boolean delayError, int maxConcurrency)` Generates and flattens Publishers on each 'rail', optionally delaying errors and limiting the total number of simultaneous subscriptions to the inner Publishers. |
| `final <R> ParallelFlowable<R>` | `flatMap(Function<? super T, ? extends Publisher<? extends R>> mapper, boolean delayError, int maxConcurrency, int prefetch)` Generates and flattens Publishers on each 'rail', optionally delaying errors, limiting the total number of simultaneous subscriptions to the inner Publishers and using the given prefetch amount for the inner Publishers. |
| `static <T> ParallelFlowable<T>` | `from(Publisher<? extends T> source, int parallelism)` Take a Publisher and prepare to consume it on parallelism number of 'rails' in a round-robin fashion. |
| `static <T> ParallelFlowable<T>` | `from(Publisher<? extends T> source, int parallelism, int prefetch)` Take a Publisher and prepare to consume it on parallelism number of 'rails', possibly ordered and in a round-robin fashion, using a custom prefetch amount and queue for dealing with the source Publisher's values. |
| `static <T> ParallelFlowable<T>` | `from(Publisher<? extends T> source)` Take a Publisher and prepare to consume it on multiple 'rails' (one per available CPU) in a round-robin fashion. |
| `static <T> ParallelFlowable<T>` | `fromArray(Publisher<T>... publishers)` Wraps multiple Publishers into a ParallelFlowable which runs them in parallel and unordered. |
| `final <R> ParallelFlowable<R>` | `map(Function<? super T, ? extends R> mapper, ParallelFailureHandling errorHandler)` Maps the source values on each 'rail' to another value and handles errors based on the given ParallelFailureHandling enumeration value. |
| `final <R> ParallelFlowable<R>` | `map(Function<? super T, ? extends R> mapper, BiFunction<? super Long, ? super Throwable, ParallelFailureHandling> errorHandler)` Maps the source values on each 'rail' to another value and handles errors based on the value returned by the handler function. |
| `final <R> ParallelFlowable<R>` | `map(Function<? super T, ? extends R> mapper)` Maps the source values on each 'rail' to another value. |
| `abstract int` | `parallelism()` Returns the number of expected parallel Subscribers. |
| `final Flowable<T>` | `reduce(BiFunction<T, T, T> reducer)` Reduces all values within a 'rail' and across 'rails' with a reducer function into a single sequential value. |
| `final <R> ParallelFlowable<R>` | `reduce(Callable<R> initialSupplier, BiFunction<R, ? super T, R> reducer)` Reduces all values within a 'rail' to a single value (with a possibly different type) via a reducer function that is initialized on each rail from an initialSupplier value. |
| `final ParallelFlowable<T>` | `runOn(Scheduler scheduler)` Specifies where each 'rail' will observe its incoming values, with no work-stealing and the default prefetch amount. |
| `final ParallelFlowable<T>` | `runOn(Scheduler scheduler, int prefetch)` Specifies where each 'rail' will observe its incoming values, with possible work-stealing and a given prefetch amount. |
| `final Flowable<T>` | `sequential(int prefetch)` Merges the values from each 'rail' in a round-robin or same-order fashion and exposes it as a regular Publisher sequence, running with the given prefetch value for the rails. |
| `final Flowable<T>` | `sequential()` Merges the values from each 'rail' in a round-robin or same-order fashion and exposes it as a regular Publisher sequence, running with the default prefetch value for the rails. |
| `final Flowable<T>` | `sequentialDelayError()` Merges the values from each 'rail' in a round-robin or same-order fashion and exposes it as a regular Flowable sequence, running with the default prefetch value for the rails and delaying errors from all rails till all terminate. |
| `final Flowable<T>` | `sequentialDelayError(int prefetch)` Merges the values from each 'rail' in a round-robin or same-order fashion and exposes it as a regular Publisher sequence, running with the given prefetch value for the rails and delaying errors from all rails till all terminate. |
| `final Flowable<T>` | `sorted(Comparator<? super T> comparator)` Sorts the 'rails' of this ParallelFlowable and returns a Publisher that sequentially picks the smallest next value from the rails. |
| `final Flowable<T>` | `sorted(Comparator<? super T> comparator, int capacityHint)` Sorts the 'rails' of this ParallelFlowable and returns a Publisher that sequentially picks the smallest next value from the rails. |
| `abstract void` | `subscribe(Subscriber<? super T>[] subscribers)` Subscribes an array of Subscribers to this ParallelFlowable and triggers the execution chain for all 'rails'. |
| `final <U> U` | `to(Function<? super ParallelFlowable<T>, U> converter)` Perform a fluent transformation to a value via a converter function which receives this ParallelFlowable. |
| `final Flowable<List<T>>` | `toSortedList(Comparator<? super T> comparator)` Sorts the 'rails' according to the comparator and returns a fully sorted list as a Publisher. |
| `final Flowable<List<T>>` | `toSortedList(Comparator<? super T> comparator, int capacityHint)` Sorts the 'rails' according to the comparator and returns a fully sorted list as a Publisher. |
Protected Methods

| Return type | Method and description |
|---|---|
| `final boolean` | `validate(Subscriber<?>[] subscribers)` Validates the number of subscribers and returns true if their number matches the parallelism level of this ParallelFlowable. |
`collect(Callable<? extends C> collectionSupplier, BiConsumer<? super C, ? super T> collector)`

Collect the elements in each rail into a collection supplied via a collectionSupplier and collected into with a collector action, emitting the collection at the end.

| Parameter | Description |
|---|---|
| collectionSupplier | the supplier of the collection in each rail |
| collector | the collector, taking the per-rail collection and the current item |
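As an illustration, a sketch that collects each rail's items into its own ArrayList; the two-rail setup and input range are arbitrary:

```java
import java.util.ArrayList;
import io.reactivex.Flowable;
import io.reactivex.parallel.ParallelFlowable;

public class CollectExample {
    public static void main(String[] args) {
        ParallelFlowable.from(Flowable.range(1, 10), 2)                 // two rails
                .collect(() -> new ArrayList<Integer>(),                // collectionSupplier: one list per rail
                         (list, item) -> list.add(item))                // collector: add the current item to the rail's list
                .sequential()                                           // each rail emits its list; merge them
                .blockingSubscribe(list -> System.out.println("rail result: " + list));
    }
}
```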
`compose(ParallelTransformer<T, U> composer)`

Allows composing operators, at assembly time, on top of this ParallelFlowable and returns another ParallelFlowable with composed features.

| Parameter | Description |
|---|---|
| composer | the composer function from ParallelFlowable (this) to another ParallelFlowable |
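A sketch of a reusable transformer, assuming ParallelTransformer is a single-method interface that can be written as a lambda; the filter-and-square chain is purely illustrative:

```java
import io.reactivex.Flowable;
import io.reactivex.parallel.ParallelFlowable;
import io.reactivex.parallel.ParallelTransformer;

public class ComposeExample {
    public static void main(String[] args) {
        // An illustrative, reusable "operator chain": keep even values and square them.
        ParallelTransformer<Integer, Integer> evenSquares =
                upstream -> upstream.filter(v -> v % 2 == 0).map(v -> v * v);

        ParallelFlowable.from(Flowable.range(1, 20))
                .compose(evenSquares)   // applied at assembly time to every rail
                .sequential()
                .blockingSubscribe(System.out::println);
    }
}
```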
`concatMap(Function<? super T, ? extends Publisher<? extends R>> mapper, int prefetch)`

Generates and concatenates Publishers on each 'rail', signalling errors immediately and using the given prefetch amount for generating Publishers upfront.

| Parameter | Description |
|---|---|
| mapper | the function to map each rail's value into a Publisher |
| prefetch | the number of items to prefetch from each inner Publisher |

`concatMap(Function<? super T, ? extends Publisher<? extends R>> mapper)`

Generates and concatenates Publishers on each 'rail', signalling errors immediately and generating 2 publishers upfront.

| Parameter | Description |
|---|---|
| mapper | the function to map each rail's value into a Publisher |
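A sketch of concatMap on the rails; each value is mapped to a small inner Flowable whose items are concatenated in order within that rail (the inner source is illustrative):

```java
import io.reactivex.Flowable;
import io.reactivex.parallel.ParallelFlowable;

public class ConcatMapExample {
    public static void main(String[] args) {
        ParallelFlowable.from(Flowable.range(1, 5), 2)
                // On each rail, map a value v to an inner Publisher and concatenate
                // the inner sequences in order within that rail.
                .concatMap(v -> Flowable.just(v * 10, v * 10 + 1))
                .sequential()
                .blockingSubscribe(System.out::println);
    }
}
```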
`concatMapDelayError(Function<? super T, ? extends Publisher<? extends R>> mapper, int prefetch, boolean tillTheEnd)`

Generates and concatenates Publishers on each 'rail', optionally delaying errors and using the given prefetch amount for generating Publishers upfront.

| Parameter | Description |
|---|---|
| mapper | the function to map each rail's value into a Publisher |
| prefetch | the number of items to prefetch from each inner Publisher |
| tillTheEnd | if true, all errors from the upstream and inner Publishers are delayed till all of them terminate; if false, the error is emitted when an inner Publisher terminates |

`concatMapDelayError(Function<? super T, ? extends Publisher<? extends R>> mapper, boolean tillTheEnd)`

Generates and concatenates Publishers on each 'rail', optionally delaying errors and generating 2 publishers upfront.

| Parameter | Description |
|---|---|
| mapper | the function to map each rail's value into a Publisher |
| tillTheEnd | if true, all errors from the upstream and inner Publishers are delayed till all of them terminate; if false, the error is emitted when an inner Publisher terminates |
`doAfterNext(Consumer<? super T> onAfterNext)`

Call the specified consumer with the current element passing through any 'rail' after it has been delivered to downstream within the rail.

| Parameter | Description |
|---|---|
| onAfterNext | the callback |

`doAfterTerminated(Action onAfterTerminate)`

Run the specified Action when a 'rail' completes or signals an error.

| Parameter | Description |
|---|---|
| onAfterTerminate | the callback |

`doOnCancel(Action onCancel)`

Run the specified Action when a 'rail' receives a cancellation.

| Parameter | Description |
|---|---|
| onCancel | the callback |

`doOnComplete(Action onComplete)`

Run the specified Action when a 'rail' completes.

| Parameter | Description |
|---|---|
| onComplete | the callback |

`doOnError(Consumer<Throwable> onError)`

Call the specified consumer with the exception passing through any 'rail'.

| Parameter | Description |
|---|---|
| onError | the callback |

`doOnNext(Consumer<? super T> onNext)`

Call the specified consumer with the current element passing through any 'rail'.

| Parameter | Description |
|---|---|
| onNext | the callback |
`doOnNext(Consumer<? super T> onNext, ParallelFailureHandling errorHandler)`

Call the specified consumer with the current element passing through any 'rail' and handle errors based on the given ParallelFailureHandling enumeration value.

| Parameter | Description |
|---|---|
| onNext | the callback |
| errorHandler | the enumeration that defines how to handle errors thrown from the onNext consumer |

`doOnNext(Consumer<? super T> onNext, BiFunction<? super Long, ? super Throwable, ParallelFailureHandling> errorHandler)`

Call the specified consumer with the current element passing through any 'rail' and handle errors based on the value returned by the handler function.

| Parameter | Description |
|---|---|
| onNext | the callback |
| errorHandler | the function called with the current repeat count and failure Throwable; it should return one of the ParallelFailureHandling enumeration values to indicate how to proceed |
`doOnRequest(LongConsumer onRequest)`

Call the specified consumer with the request amount if any rail receives a request.

| Parameter | Description |
|---|---|
| onRequest | the callback |

`doOnSubscribe(Consumer<? super Subscription> onSubscribe)`

Call the specified callback when a 'rail' receives a Subscription from its upstream.

| Parameter | Description |
|---|---|
| onSubscribe | the callback |
`filter(Predicate<? super T> predicate, BiFunction<? super Long, ? super Throwable, ParallelFailureHandling> errorHandler)`

Filters the source values on each 'rail' and handles errors based on the value returned by the handler function.

Note that the same predicate may be called from multiple threads concurrently.

| Parameter | Description |
|---|---|
| predicate | the function returning true to keep a value or false to drop a value |
| errorHandler | the function called with the current repeat count and failure Throwable; it should return one of the ParallelFailureHandling enumeration values to indicate how to proceed |

`filter(Predicate<? super T> predicate)`

Filters the source values on each 'rail'.

Note that the same predicate may be called from multiple threads concurrently.

| Parameter | Description |
|---|---|
| predicate | the function returning true to keep a value or false to drop a value |
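For example, a sketch keeping only multiples of three on each rail (scheduler and input chosen arbitrarily):

```java
import io.reactivex.Flowable;
import io.reactivex.parallel.ParallelFlowable;
import io.reactivex.schedulers.Schedulers;

public class FilterExample {
    public static void main(String[] args) {
        ParallelFlowable.from(Flowable.range(1, 30))
                .runOn(Schedulers.computation())
                .filter(v -> v % 3 == 0)   // the predicate may run on several rails/threads concurrently
                .sequential()
                .blockingSubscribe(System.out::println);
    }
}
```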
`filter(Predicate<? super T> predicate, ParallelFailureHandling errorHandler)`

Filters the source values on each 'rail' and handles errors based on the given ParallelFailureHandling enumeration value.

Note that the same predicate may be called from multiple threads concurrently.

| Parameter | Description |
|---|---|
| predicate | the function returning true to keep a value or false to drop a value |
| errorHandler | the enumeration that defines how to handle errors thrown from the predicate |
`flatMap(Function<? super T, ? extends Publisher<? extends R>> mapper, boolean delayError)`

Generates and flattens Publishers on each 'rail', optionally delaying errors.

It uses unbounded concurrency along with the default inner prefetch.

| Parameter | Description |
|---|---|
| mapper | the function to map each rail's value into a Publisher |
| delayError | whether the errors from the main and the inner sources should be delayed till everything terminates |

`flatMap(Function<? super T, ? extends Publisher<? extends R>> mapper)`

Generates and flattens Publishers on each 'rail'.

Errors are not delayed; unbounded concurrency is used along with the default inner prefetch.

| Parameter | Description |
|---|---|
| mapper | the function to map each rail's value into a Publisher |
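A sketch of flatMap on the rails; unlike concatMap, the inner Publishers of a rail may interleave (the inner fromArray source is illustrative):

```java
import io.reactivex.Flowable;
import io.reactivex.parallel.ParallelFlowable;
import io.reactivex.schedulers.Schedulers;

public class FlatMapExample {
    public static void main(String[] args) {
        ParallelFlowable.from(Flowable.range(1, 10))
                .runOn(Schedulers.computation())
                // Map each value to an inner Publisher and flatten; the inner
                // sequences of a rail are merged without ordering guarantees.
                .flatMap(v -> Flowable.fromArray(v, -v))
                .sequential()
                .blockingSubscribe(System.out::println);
    }
}
```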
`flatMap(Function<? super T, ? extends Publisher<? extends R>> mapper, boolean delayError, int maxConcurrency)`

Generates and flattens Publishers on each 'rail', optionally delaying errors and limiting the total number of simultaneous subscriptions to the inner Publishers.

It uses the default inner prefetch.

| Parameter | Description |
|---|---|
| mapper | the function to map each rail's value into a Publisher |
| delayError | whether the errors from the main and the inner sources should be delayed till everything terminates |
| maxConcurrency | the maximum number of simultaneous subscriptions to the generated inner Publishers |

`flatMap(Function<? super T, ? extends Publisher<? extends R>> mapper, boolean delayError, int maxConcurrency, int prefetch)`

Generates and flattens Publishers on each 'rail', optionally delaying errors, limiting the total number of simultaneous subscriptions to the inner Publishers and using the given prefetch amount for the inner Publishers.

| Parameter | Description |
|---|---|
| mapper | the function to map each rail's value into a Publisher |
| delayError | whether the errors from the main and the inner sources should be delayed till everything terminates |
| maxConcurrency | the maximum number of simultaneous subscriptions to the generated inner Publishers |
| prefetch | the number of items to prefetch from each inner Publisher |
`from(Publisher<? extends T> source, int parallelism)`

Take a Publisher and prepare to consume it on parallelism number of 'rails' in a round-robin fashion.

| Parameter | Description |
|---|---|
| source | the source Publisher |
| parallelism | the number of parallel rails |
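A sketch that pins the rail count explicitly instead of relying on the number of CPUs (the value 4 is arbitrary):

```java
import io.reactivex.Flowable;
import io.reactivex.parallel.ParallelFlowable;
import io.reactivex.schedulers.Schedulers;

public class FromParallelismExample {
    public static void main(String[] args) {
        // Split the upstream into exactly 4 rails, regardless of the number of CPUs.
        ParallelFlowable<Integer> rails = ParallelFlowable.from(Flowable.range(1, 100), 4);

        System.out.println("rails: " + rails.parallelism());   // prints 4

        rails.runOn(Schedulers.computation())
             .map(v -> v + 1)
             .sequential()
             .blockingSubscribe(System.out::println);
    }
}
```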
`from(Publisher<? extends T> source, int parallelism, int prefetch)`

Take a Publisher and prepare to consume it on parallelism number of 'rails', possibly ordered and in a round-robin fashion, using a custom prefetch amount and queue for dealing with the source Publisher's values.

| Parameter | Description |
|---|---|
| source | the source Publisher |
| parallelism | the number of parallel rails |
| prefetch | the number of values to prefetch from the source |

`from(Publisher<? extends T> source)`

Take a Publisher and prepare to consume it on multiple 'rails' (one per available CPU) in a round-robin fashion.

| Parameter | Description |
|---|---|
| source | the source Publisher |
`fromArray(Publisher<T>... publishers)`

Wraps multiple Publishers into a ParallelFlowable which runs them in parallel and unordered.

| Parameter | Description |
|---|---|
| publishers | the array of publishers |
`map(Function<? super T, ? extends R> mapper, ParallelFailureHandling errorHandler)`

Maps the source values on each 'rail' to another value and handles errors based on the given ParallelFailureHandling enumeration value.

Note that the same mapper function may be called from multiple threads concurrently.

| Parameter | Description |
|---|---|
| mapper | the mapper function turning Ts into Rs |
| errorHandler | the enumeration that defines how to handle errors thrown from the mapper function |
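A sketch of this error-handling variant; it assumes the SKIP constant of ParallelFailureHandling, which drops the offending item and lets the rail continue (the string inputs are illustrative):

```java
import io.reactivex.Flowable;
import io.reactivex.parallel.ParallelFailureHandling;
import io.reactivex.parallel.ParallelFlowable;

public class MapFailureHandlingExample {
    public static void main(String[] args) {
        ParallelFlowable.from(Flowable.just("1", "2", "oops", "4"))
                // If Integer.parseInt throws for an item, SKIP drops that item
                // and the rail keeps processing subsequent values.
                .map(Integer::parseInt, ParallelFailureHandling.SKIP)
                .sequential()
                .blockingSubscribe(System.out::println);   // 1, 2 and 4, in some rail order
    }
}
```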
`map(Function<? super T, ? extends R> mapper, BiFunction<? super Long, ? super Throwable, ParallelFailureHandling> errorHandler)`

Maps the source values on each 'rail' to another value and handles errors based on the value returned by the handler function.

Note that the same mapper function may be called from multiple threads concurrently.

| Parameter | Description |
|---|---|
| mapper | the mapper function turning Ts into Rs |
| errorHandler | the function called with the current repeat count and failure Throwable; it should return one of the ParallelFailureHandling enumeration values to indicate how to proceed |

`map(Function<? super T, ? extends R> mapper)`

Maps the source values on each 'rail' to another value.

Note that the same mapper function may be called from multiple threads concurrently.

| Parameter | Description |
|---|---|
| mapper | the mapper function turning Ts into Rs |
`parallelism()`

Returns the number of expected parallel Subscribers.

`reduce(BiFunction<T, T, T> reducer)`

Reduces all values within a 'rail' and across 'rails' with a reducer function into a single sequential value.

Note that the same reducer function may be called from multiple threads concurrently.

| Parameter | Description |
|---|---|
| reducer | the function to reduce two values into one |
`reduce(Callable<R> initialSupplier, BiFunction<R, ? super T, R> reducer)`

Reduces all values within a 'rail' to a single value (with a possibly different type) via a reducer function that is initialized on each rail from an initialSupplier value.

Note that the same reducer function may be called from multiple threads concurrently.

| Parameter | Description |
|---|---|
| initialSupplier | the supplier for the initial value |
| reducer | the function to reduce a previous output of reduce (or the initial value supplied) with a current source value |
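Two sketches of the reduce overloads: the first folds everything into one sequential total, the second produces one per-rail result starting from a per-rail initial value (inputs and rail counts are illustrative):

```java
import io.reactivex.Flowable;
import io.reactivex.parallel.ParallelFlowable;

public class ReduceExample {
    public static void main(String[] args) {
        // Reduce within each rail and then across rails into a single sequential value.
        ParallelFlowable.from(Flowable.range(1, 100))
                .reduce((a, b) -> a + b)
                .blockingSubscribe(total -> System.out.println("sum = " + total));   // 5050

        // Reduce each rail separately, starting from a per-rail initial value.
        ParallelFlowable.from(Flowable.range(1, 100), 4)
                .reduce(() -> 0, (count, item) -> count + 1)
                .sequential()
                .blockingSubscribe(count -> System.out.println("items on this rail: " + count));
    }
}
```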
`runOn(Scheduler scheduler)`

Specifies where each 'rail' will observe its incoming values, with no work-stealing and the default prefetch amount.

This operator uses the default prefetch size returned by Flowable.bufferSize().

The operator will call Scheduler.createWorker() as many times as this ParallelFlowable's parallelism level is.

No assumptions are made about the Scheduler's parallelism level; if the Scheduler's parallelism level is lower than the ParallelFlowable's, some rails may end up on the same thread/worker.

This operator doesn't require the Scheduler to be trampolining as it does its own built-in trampolining logic.

| Parameter | Description |
|---|---|
| scheduler | the scheduler to use |
`runOn(Scheduler scheduler, int prefetch)`

Specifies where each 'rail' will observe its incoming values, with possible work-stealing and a given prefetch amount.

The operator will call Scheduler.createWorker() as many times as this ParallelFlowable's parallelism level is.

No assumptions are made about the Scheduler's parallelism level; if the Scheduler's parallelism level is lower than the ParallelFlowable's, some rails may end up on the same thread/worker.

This operator doesn't require the Scheduler to be trampolining as it does its own built-in trampolining logic.

| Parameter | Description |
|---|---|
| scheduler | the scheduler to use |
| prefetch | the number of values to request on each 'rail' from the source |
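A sketch of runOn with an explicit Scheduler and a small prefetch, bounding how many items each rail requests from the source at a time (the value 8 is arbitrary):

```java
import io.reactivex.Flowable;
import io.reactivex.parallel.ParallelFlowable;
import io.reactivex.schedulers.Schedulers;

public class RunOnPrefetchExample {
    public static void main(String[] args) {
        ParallelFlowable.from(Flowable.range(1, 1000))
                .runOn(Schedulers.io(), 8)   // each rail gets an io() worker and requests 8 items at a time
                .map(v -> v * 2)
                .sequential()
                .blockingSubscribe(System.out::println);
    }
}
```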
`sequential(int prefetch)`

Merges the values from each 'rail' in a round-robin or same-order fashion and exposes it as a regular Publisher sequence, running with the given prefetch value for the rails.

sequential does not operate by default on a particular Scheduler.

| Parameter | Description |
|---|---|
| prefetch | the prefetch amount to use for each rail |

`sequential()`

Merges the values from each 'rail' in a round-robin or same-order fashion and exposes it as a regular Publisher sequence, running with the default prefetch value for the rails.

This operator uses the default prefetch size returned by Flowable.bufferSize().

sequential does not operate by default on a particular Scheduler.

`sequentialDelayError()`

Merges the values from each 'rail' in a round-robin or same-order fashion and exposes it as a regular Flowable sequence, running with the default prefetch value for the rails and delaying errors from all rails till all terminate.

This operator uses the default prefetch size returned by Flowable.bufferSize().

sequentialDelayError does not operate by default on a particular Scheduler.

`sequentialDelayError(int prefetch)`

Merges the values from each 'rail' in a round-robin or same-order fashion and exposes it as a regular Publisher sequence, running with the given prefetch value for the rails and delaying errors from all rails till all terminate.

sequentialDelayError does not operate by default on a particular Scheduler.

| Parameter | Description |
|---|---|
| prefetch | the prefetch amount to use for each rail |
`sorted(Comparator<? super T> comparator)`

Sorts the 'rails' of this ParallelFlowable and returns a Publisher that sequentially picks the smallest next value from the rails.

This operator requires a finite source ParallelFlowable.

| Parameter | Description |
|---|---|
| comparator | the comparator to use |

`sorted(Comparator<? super T> comparator, int capacityHint)`

Sorts the 'rails' of this ParallelFlowable and returns a Publisher that sequentially picks the smallest next value from the rails.

This operator requires a finite source ParallelFlowable.

| Parameter | Description |
|---|---|
| comparator | the comparator to use |
| capacityHint | the expected number of total elements |
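A sketch of the parallel-sort-then-merge pattern with a finite source; each rail sorts its slice and the merged output comes out in comparator order (the natural-order comparator and inputs are illustrative):

```java
import java.util.Comparator;
import io.reactivex.Flowable;
import io.reactivex.parallel.ParallelFlowable;
import io.reactivex.schedulers.Schedulers;

public class SortedExample {
    public static void main(String[] args) {
        ParallelFlowable.from(Flowable.just(5, 3, 9, 1, 8, 2, 7))   // a finite source is required
                .runOn(Schedulers.computation())
                .sorted(Comparator.<Integer>naturalOrder())         // each rail sorts, then the smallest heads are merged
                .blockingSubscribe(System.out::println);            // 1, 2, 3, 5, 7, 8, 9
    }
}
```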
`subscribe(Subscriber<? super T>[] subscribers)`

Subscribes an array of Subscribers to this ParallelFlowable and triggers the execution chain for all 'rails'.

| Parameter | Description |
|---|---|
| subscribers | the subscribers array to run in parallel; the number of items must be equal to the parallelism level of this ParallelFlowable |

`to(Function<? super ParallelFlowable<T>, U> converter)`

Perform a fluent transformation to a value via a converter function which receives this ParallelFlowable.

| Parameter | Description |
|---|---|
| converter | the converter function from ParallelFlowable to some type |
`toSortedList(Comparator<? super T> comparator)`

Sorts the 'rails' according to the comparator and returns a fully sorted list as a Publisher.

This operator requires a finite source ParallelFlowable.

| Parameter | Description |
|---|---|
| comparator | the comparator to compare elements |

`toSortedList(Comparator<? super T> comparator, int capacityHint)`

Sorts the 'rails' according to the comparator and returns a fully sorted list as a Publisher.

This operator requires a finite source ParallelFlowable.

| Parameter | Description |
|---|---|
| comparator | the comparator to compare elements |
| capacityHint | the expected number of total elements |
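And a sketch of toSortedList, which emits a single fully sorted List rather than a re-merged item stream (the capacity hint of 16 is arbitrary):

```java
import java.util.Comparator;
import io.reactivex.Flowable;
import io.reactivex.parallel.ParallelFlowable;
import io.reactivex.schedulers.Schedulers;

public class ToSortedListExample {
    public static void main(String[] args) {
        ParallelFlowable.from(Flowable.just(4, 2, 9, 1, 6))
                .runOn(Schedulers.computation())
                .toSortedList(Comparator.<Integer>naturalOrder(), 16)   // 16 is just a size hint
                .blockingSubscribe(System.out::println);                // prints [1, 2, 4, 6, 9]
    }
}
```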
`validate(Subscriber<?>[] subscribers)`

Validates the number of subscribers and returns true if their number matches the parallelism level of this ParallelFlowable.

| Parameter | Description |
|---|---|
| subscribers | the array of Subscribers |