What is a scheduler?
Someone said to me today, in the context of C++26's new std::execution, "starting from the scheduler, I do not understand how this works or how it is integrated in the sender/receiver paradigm." I decided that expanding on the private answer I gave would make for a good blog post, and it might also help with the dearth of documentation about std::execution, so here we go.
If you have any thoughts, send me an email!
Breaking down the official definition
The official definition of a scheduler is in [exec.async.ops], paragraph 10, which reads:
A scheduler is an abstraction of an execution resource with a uniform, generic interface for scheduling work onto that resource. It is a factory for senders whose asynchronous operations execute value completion operations on an execution agent belonging to the scheduler's associated execution resource. A schedule-expression obtains such a sender from a scheduler. A schedule sender is the result of a schedule expression. On success, an asynchronous operation produced by a schedule sender executes a value completion operation with an empty set of result datums. Multiple schedulers can refer to the same execution resource. A scheduler can be valid or invalid. A scheduler becomes invalid when the execution resource to which it refers becomes invalid, as do any schedule senders obtained from the scheduler, and any operation states obtained from those senders.
Let's take that piece by piece. First, a scheduler is "an abstraction of an execution resource…." The term execution resource is itself defined earlier, in [exec.async.ops] paragraph 1, which reads:
An execution resource is a program entity that manages a (possibly dynamic) set of execution agents ([thread.req.lockable.general]), which it uses to execute parallel work on behalf of callers.
[Example 1: The currently active thread, a system-provided thread pool, and uses of an API associated with an external hardware accelerator are all examples of execution resources. — end example]
Execution resources execute asynchronous operations. An execution resource is either valid or invalid.
I think the embedded example is quite good; the simplest execution resource is an ordinary thread, but there are others, including thread pools, fibers, and "external hardware accelerators", which you might recognize as a category that includes GPUs.
So an execution resource is just a "thing" that can "execute code". It's the answer to "where is my code running?", which is typically "on a CPU" but might also be "on a GPU". And a scheduler is an abstraction of an execution resource; I usually say schedulers are "lightweight handles to execution resources" because they are objects that you pass around as values and they are sort of like tickets that can be redeemed for access to the execution resource they're associated with.
The next part of the sentence defining schedulers says that schedulers have "a uniform, generic interface for scheduling work onto that resource." The meaning of this sentence fragment is given in the rest of paragraph 10, but I'll try to illuminate it with code instead of more prose.
Given an object named sch that is a scheduler, you can "redeem" sch for access to the associated execution resource with the std::execution::schedule algorithm, like so:
#include <execution>
namespace ex = std::execution;
ex::sender auto redeemScheduler(ex::scheduler auto sch) {
    return ex::schedule(sch);
}
In the above, redeemScheduler takes an arbitrary scheduler (note that it takes it by value) and returns a "schedule sender". Senders are yet another kind of handle, this time to some work that could be executed. What kind of potential work is a schedule sender a handle to? A no-op with a valuable side-effect: once started, the no-op completes on the execution resource associated with the scheduler. In other words, a schedule sender is a handle to the ability to do the simplest thing you could possibly do with an execution resource: do literally nothing, but do it on that resource (i.e. on the thread pool, GPU, or whatever). It's with this simplest of building blocks that std::execution gives you the ability to precisely control where your code runs.
Show me the code
A simple example might help:
#include <execution>
#include <print>
#include <thread>
namespace ex = std::execution;
int main() {
    // Ask for a handle to the "parallel scheduler"; this is
    // expected to be a "system thread pool", which might be
    // the Windows Thread Pool on Windows, Grand Central
    // Dispatch on macOS, or a standard library-provided pool
    // on Linux-based systems
    ex::scheduler auto sch = ex::get_parallel_scheduler();

    // Get the thread ID of the thread in the system thread pool
    // that services our request.
    auto [poolThreadId] =
        // sync_wait starts the sender it's given and waits for
        // the result; the return value is an optional<tuple<…>>
        // so we unwrap it with .value() and destructure the
        // result.
        std::this_thread::sync_wait(
            // Construct a schedule sender from sch that will
            // do nothing, but do so on the system thread pool.
            ex::schedule(sch)
            // Send the result of the schedule sender into a
            // continuation function with then; the empty pack
            // of results produced by the schedule sender becomes
            // the empty argument list passed to the following
            // lambda.
            | ex::then([]() noexcept {
                  // Return the ID of the current thread, which
                  // will be the ID of a thread in the thread
                  // pool.
                  return std::this_thread::get_id();
              }))
            .value();

    // For comparison purposes, also capture the ID of the thread
    // running main.
    auto mainThreadId = std::this_thread::get_id();

    std::println("The pool thread had ID {}.", poolThreadId);
    std::println("The main thread has ID {}.", mainThreadId);
}
(You can see a version of the above running on godbolt.org.)
The above code produces output similar to the following:
The pool thread had ID 137816384202304.
The main thread has ID 137816395574208.
As the inline comments hopefully make clear, the above example simply jumps from the main thread in main to a thread in the system thread pool, asks for that thread's ID, and returns the result back as the result of the waiting sync_wait function before printing the two thread IDs to stdout.
Building a scheduler
You might be wondering how a scheduler works under the hood. Let's build a simple one here.
Suppose you have an "executor" interface like the following one that's loosely based on Folly's folly::Executor:
#include <functional>

namespace bigcorp {

struct Executor {
    // Invokes work() on this executor's execution resource
    virtual void add(std::function<void()> work) noexcept = 0;
};

}
Hopefully, you can imagine building all sorts of executors that implement the above interface.
Given the above interface, the following code builds a scheduler that can produce schedule senders for any Executor&. The magic happens near the top of the example in bigcorp::scheduler::operation<…>::start().
#include <concepts>
#include <execution>
#include <type_traits>
#include <utility>

namespace bigcorp {

class scheduler {
    Executor* exec_;

    // Our schedule operation
    template <class Receiver>
    struct operation {
        // Advertise we're an operation state
        using operation_state_concept = std::execution::operation_state_tag;

        operation(Executor* exec, Receiver rcvr) noexcept
            : exec_(exec), rcvr_(std::move(rcvr)) {}

        void start() & noexcept {
            // When start() is called, we invoke exec_->add to put some
            // work on our executor's queue…
            std::function<void()> work = [this]() noexcept {
                // …but we don't actually complete our receiver until we
                // get woken up on the appropriate execution context.
                std::execution::set_value(std::move(rcvr_));
            };
            // start() must be noexcept; if this static assertion fails
            // then we ought to handle the exception by catching it and
            // delivering it to the receiver through set_error, but that
            // complicates what is supposed to be demo code. Note that
            // work must already be a std::function here: moving a
            // std::function into add is noexcept, whereas constructing
            // one from a lambda inside the asserted expression is not.
            static_assert(noexcept(exec_->add(std::move(work))));
            exec_->add(std::move(work));
        }

    private:
        Executor* exec_;
        Receiver rcvr_;
    };

    // Our schedule sender
    struct sender {
        // Advertise that we're a sender
        using sender_concept = std::execution::sender_tag;

        // Advertise that we are infallible and only ever complete
        // by invoking set_value with no arguments.
        template <class Sndr, class... Env>
        static consteval auto get_completion_signatures()
            -> std::execution::completion_signatures<
                   std::execution::set_value_t()> {
            return {};
        }

        // The scheduler concept also requires a schedule sender to
        // advertise where it will complete, via the
        // get_completion_scheduler<set_value_t> query on the sender's
        // environment, so implement that query here.
        struct env {
            Executor* exec_;
            scheduler query(
                std::execution::get_completion_scheduler_t<
                    std::execution::set_value_t>) const noexcept {
                return scheduler(*exec_);
            }
        };
        env get_env() const noexcept { return env{exec_}; }

        // Implement connect so we can create an operation state
        // given a receiver. It's in connect where we learn what
        // our continuation will be.
        template <class Receiver>
        operation<Receiver> connect(Receiver rcvr) const noexcept {
            return operation<Receiver>(exec_, std::move(rcvr));
        }

    private:
        friend scheduler;
        explicit sender(Executor* exec) noexcept : exec_(exec) {}
        Executor* exec_;
    };

public:
    using scheduler_concept = std::execution::scheduler_tag;

    explicit scheduler(Executor& exec) noexcept
        : exec_(&exec) {}

    sender schedule() const noexcept {
        return sender{exec_};
    }

    friend bool operator==(scheduler, scheduler) noexcept = default;
};

template <std::derived_from<Executor> Exec>
class execution_context {
    Exec exec_;

public:
    template <class... T>
        requires std::constructible_from<Exec, T...>
    execution_context(T&&... t)
        noexcept(std::is_nothrow_constructible_v<Exec, T...>)
        : exec_(std::forward<T>(t)...) {}

    scheduler get_scheduler() noexcept {
        return scheduler(exec_);
    }
};

// A simple Executor implementation
struct InlineExecutor : Executor {
    void add(std::function<void()> work) noexcept override {
        work();
    }
};

}
int main() {
    bigcorp::execution_context<bigcorp::InlineExecutor> ctx;

    auto [fortytwo] = std::this_thread::sync_wait(
        std::execution::schedule(ctx.get_scheduler())
        | std::execution::then([]() noexcept { return 42; }))
        .value();

    return fortytwo;
}
(You can see this at godbolt.org, too.)
Edited 2026/04/07: Robert Leahy noticed my final example of building a scheduler wasn't noexcept-correct, and had used the old spelling of the various tag types (e.g. scheduler_t instead of scheduler_tag). I've updated the code based on his corrections.