Changelog for Oban Pro v1.1.0
🔗 Chain Worker
Chain workers link jobs together to ensure they run in a strict, sequential, FIFO order. Downstream jobs automatically suspend execution until the upstream job is completed. Jobs in a chain only run after the previous job completes successfully, regardless of snoozing or retries.

Through a declarative syntax, chains can partition by `worker`, `args`, or `meta`, and each partition runs sequentially while the queue runs with any level of concurrency:
```elixir
defmodule MyApp.WebhookWorker do
  use Oban.Pro.Workers.Chain, on_cancelled: :halt, by: [:worker, args: :account_id]

  ...
```
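With the partitioning above, jobs that share an `account_id` run one at a time in insertion order, while jobs for other accounts keep flowing through the queue. A minimal usage sketch (the `event` argument is an illustrative assumption):

```elixir
# These two jobs share account_id: 1, so the second waits for the first to complete.
Oban.insert(MyApp.WebhookWorker.new(%{account_id: 1, event: "created"}))
Oban.insert(MyApp.WebhookWorker.new(%{account_id: 1, event: "updated"}))

# A different account_id lands in a separate partition and may run concurrently.
Oban.insert(MyApp.WebhookWorker.new(%{account_id: 2, event: "created"}))
```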
Previously, you could approximate chain behaviour with a global limit of 1, but it lacked guarantees around retries, cancellation, or even scheduled jobs.
See the Chain docs for details and examples of how to customize error handling.
📐 Structured Additions
Structured jobs, as declared with `args_schema`, gain a few new capabilities in this release.

- `:default` — Any field can have a default value, calculated at compilation time and applied at runtime.
- `{:array, :enum}` — Now it's possible to cast and validate the values of a list with an array of enums, e.g. `{:array, :enum}, values: ~w(foo bar baz)a`
- `:term` — Safely encodes any Elixir term as a string for storage, then decodes it back to the original term on load. This is similar to `:any`, but works with terms like tuples or pids that can't usually be serialized. For safety, terms are encoded with the `:safe` option to prevent decoding data that may be used to attack the runtime.
Here's a toy example that demonstrates all three additions:
```elixir
args_schema do
  field :pid, :term, required: true
  field :flags, {:array, :enum}, values: ~w(fast heavy lazy)a
  field :debug, :boolean, default: false
end

def process(job) do
  send(job.args.pid, :pids_work!)
end
```
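A hedged sketch of exercising that schema from a test, assuming the fields above live in a hypothetical `MyApp.TermWorker` and the test module calls `use Oban.Testing, repo: MyApp.Repo`:

```elixir
# :debug is omitted, so the compiled default of false applies, and the pid
# round-trips through args thanks to the :term type.
perform_job(MyApp.TermWorker, %{pid: self(), flags: [:fast, :lazy]})

assert_received :pids_work!
```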
🧰 New assert_enqueue/2 and refute_enqueue/2 helpers
Assert or refute that jobs were enqueued during a function call. These new test helpers receive a function and detect only jobs that were (or weren't) enqueued while running the function.
More specifically, the helpers:
- Ignore any jobs enqueued before the function ran
- Return whatever the function returned when the assertion passes
- Respect the same filtering syntax as other assertion helpers
For example, this combines `assert_enqueue/2` with `perform_job/2` to assert that a job is enqueued by a worker's `perform`:
```elixir
assert_enqueue [worker: MyApp.OtherWorker, queue: :business], fn ->
  perform_job(MyApp.SomeWorker, %{id: account.id})
end
```
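Conversely, `refute_enqueue/2` verifies that no matching job was enqueued while the function ran. A sketch along the same lines (the `skip_notify` argument is purely illustrative):

```elixir
refute_enqueue [worker: MyApp.OtherWorker], fn ->
  perform_job(MyApp.SomeWorker, %{id: account.id, skip_notify: true})
end
```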
v1.1.6 — 2023-11-09
Bug Fixes

[DynamicQueues] Scale running queues after `update/2` changes

Previously, running queues and the persisted record were updated after a `scale` event but not after calling `update/2`. Now either mechanism persists the change and scales running queues.

[DynamicQueues] Correctly persist `limit` changes from `scale_queue` events

A string/atom key mismatch prevented persisting `limit` changes for dynamic queues.

[DynamicCron] Restore overwritten cron validators

The new cron validation tests persisted while the validator changes were dropped due to a bad merge resolution.
v1.1.5 — 2023-10-26
Bug Fixes
[Smart] Safely cast mixed keys when scaling `global_limit`.

A mixture of string and atom keys broke casting scale values in the global limit changeset. Now values are correctly normalized to a single key type and scaling functions as expected.
[Chain] Persist all options passed to the `use` macro.

Some core options like `hold_snooze`, `on_cancelled`, and `on_discarded` weren't whitelisted for storage in `meta`. Now all options except for `by` are stored as expected.

[DynamicCron] Restore expression validation for `insert/1` and `update/2`

Move expression validation back into the Cron schema to guard `insert/1` and `update/2` against invalid expressions.
v1.1.4 — 2023-10-06
Bug Fixes
[Smart] Prevent head of line blocking for partitions
The previous query used a low limit in the windowed query to avoid partitioning all of the available jobs, which could be an extremely slow operation. That low limit caused head-of-line blocking when there were more jobs in a single partition than the limit checked (and it was applied too late to benefit from indexes). The updated query alleviates the blocking issue and is ~12x faster (from 240ms down to 20ms for 200k available jobs in local tests).
[DynamicCron] Allow setting `:meta` as a stored job option

[DynamicQueues] Merge `opts` when upserting queues

Opts provided to DynamicQueues were overwritten rather than merged. The docs implied the desired behavior of merging opts when fields were omitted, but the `replace` operation didn't actually merge them. Now that's fixed and the behavior matches the docs.

[DynamicQueues] Correct `queue_name` and `queue_input` typespecs

The previous version was incomplete, as `queue_name` also supported binaries and `queue_input` expects a list of `queue_opts`.

[Worker] Preserve `Decimal` structs while dumping args to JSON

This adds a clause to handle the final missing native Ecto type, Decimal. Now Decimal structs aren't expanded into maps before dumping structured args.
v1.1.3 — 2023-09-25
Loosen Oban dependency to allow recently released v2.16 without an override.
Enhancements
- [Smart] Update `insert_*` functions to support the new unique `:timestamp` option from Oban v2.16 (sketched below)
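A hedged sketch of the option on a job changeset (the worker and args are assumptions; `:scheduled_at` is one of the timestamps Oban v2.16 accepts alongside the default `:inserted_at`):

```elixir
# Evaluate uniqueness against scheduled_at rather than inserted_at.
%{account_id: 1}
|> MyApp.SomeWorker.new(schedule_in: 60, unique: [period: 300, timestamp: :scheduled_at])
|> Oban.insert()
```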
Bug Fixes
- [Worker] Skip dumping Date or Time when serializing args
v1.1.2 — 2023-09-22
Bug Fixes
[Chain] Inject top-level `wait_*` options into job meta

The Chain worker's `wait_*` options were silently ignored when passed to `use` at the top level. Now they're correctly injected into job meta during construction.

[Batcher] Release batch debounce lock before checking counts
At high load, there was a race condition between checking counts and inserting callback jobs that could result in missing callbacks. Now, debouncing releases the lock before checking to eliminate the gap between checking and releasing.
[Worker] Skip dumping DateTime or NaiveDateTime when serializing args
A recent change to arg dumping converted structs to maps in all cases, even for types that are natively handled by Jason. This changes dumping to skip DateTime and NaiveDateTime structs.
v1.1.1 — 2023-09-20
Bug Fixes
[Smart] Prevent overwriting meta when setting snoozed count
Reading and then setting the meta in snooze_job could overwrite changes made to the job's meta during job execution.
[Batcher] Prevent batcher collisions with ETS based tracking
The previous atomics based approach with limited key-space was prone to collisions with multiple active batches. Now we avoid hash collision entirely by using ETS to track waiting callback checks.
v1.1.0 — 2023-09-19
Enhancements
[Smart] Record snoozes without incrementing attempts

Rather than increasing attempts and max_attempts on snooze, we now roll back the attempt count and increment a `snoozed` counter in `meta`. With this new strategy it's simple to differentiate between "real" attempts and snoozes, and backoff calculation remains accurate.
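As a hedged illustration of the new bookkeeping (the worker, repo, and exact values are assumptions that follow the description above):

```elixir
# A job that returns {:snooze, 60} is rescheduled without consuming an attempt.
{:ok, job} = Oban.insert(MyApp.PollingWorker.new(%{id: 1}))

# ...after the job runs and snoozes...
snoozed = MyApp.Repo.reload!(job)

snoozed.attempt          #=> rolled back, so snoozes don't count as "real" attempts
snoozed.meta["snoozed"]  #=> 1, the snooze recorded separately in meta
```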
[Smart] Cleaner partition syntax to match the `by` format for Chain and Chunk workers.

The newer syntax is more expressive and less verbose. The new syntax is translated into the legacy format, which is also accepted for seamless backward compatibility.
Here's an example of using the newer format:
```diff
- partition: [fields: [:worker, :args], keys: [:account_id]]
+ partition: [:worker, args: :account_id]
```
[Smart] Include producer `uuid` in `check_meta` output for more detailed tracking of producer gossip events.

[Worker] Allow `fetch_recorded/1` from unrecorded workers

The previous implementation required that the calling worker was recorded, rather than the called worker. That forced marking extraneous workers as recorded to access `fetch_recorded/1`.

[Worker] Include caught stacktrace in logging from unhandled hook errors
[Relay] Use a dedicated channel for relay messages
Due to historic restrictions, relay was built to subscribe and publish messages on the `gossip` channel. That channel was also used for true node "gossip" about activity, and it could generate a lot of messages. Now relay messages use a dedicated channel to minimize accumulating notifications in long running processes.
Bug Fixes
[Batch] Bump batch hash key by 1 to maintain atomics range
Atomics have an inclusive range starting at 1, while `phash` has an exclusive range of `range - 1`. That combination would cause batch debouncing to fail when the hashed batch id was 0.

[Batch] Trigger `handle_cancelled` callback after `cancel_*` function calls.

Cancelling jobs that weren't executing, i.e. `scheduled`, failed to trigger the `handle_cancelled` callback job.

[Smart] Reverse xact to uniq check for larger transactions.

Within `insert_all_jobs` the `xact_limit` logic was reversed and unique inserts over the default `xact_limit` were inserted in a single transaction.

[Testing] Set engine to `Smart` in `perform_job/3` function calls to ensure the recorded feature, or anything else that relies on the `Smart` engine, works.

[Testing] Preprocess jobs before processing in `perform_chunk/3`

The `perform_chunk/3` helper didn't preprocess jobs prior to processing them, the way a real chunk worker would. That prevented Pro.Worker features like structured args or inline hooks from functioning.

[Testing] Handle all supported `Oban.Config` opts in testing functions