While Swift Concurrency does a remarkable job of abstracting thread management, iOS developers eventually reach a point where they need to understand what’s happening under the hood. We’ve touched on thread behavior in previous articles, but it’s time to consolidate that knowledge — you are absolutely going to need it for the advanced concepts ahead.
At the end of the day, code execution still happens on threads. Swift Concurrency doesn’t reinvent this fundamental reality (even if Apple loves to introduce its own proprietary naming conventions). We already know that within a single Task, the underlying thread can switch between suspension points (await). But how exactly does the system choose which thread to use?
To answer this, we first need to break down the components that make up the Swift Concurrency execution chain.
The Anatomy of Swift Concurrency
Some of these components will be familiar, some you may have heard of in passing, and others might be entirely new. Let’s structure our knowledge:
- Task & Child Task: The primary units of Swift Concurrency that we, as developers, interact with. This is the environment where asynchronous functions execute. To quote Apple's brilliant analogy: "A task is to asynchronous functions what a thread is to synchronous functions."
- Job: A fragment of a Task representing a set of synchronous instructions executed between suspension points (await). Every Task consists of at least one Job. Jobs are the true, fundamental building blocks of the concurrency system.
- Executor: The "system scheduler" we've briefly mentioned in past articles. The Executor acts as the dispatcher — its sole responsibility is taking Jobs and scheduling them for execution on available threads.
- Cooperative Thread Pool: A system-managed pool of threads, strictly limited to the number of CPU cores on the device. This limitation is a deliberate design choice to prevent thread explosion and costly context switching.
Putting It All Together
If we look at the lifecycle from a high level, the flow is highly logical:
As developers, we spawn a Task. The system then breaks this Task down into individual Jobs. These Jobs are fed into an Executor, which intelligently distributes them across the available threads inside the Cooperative Thread Pool.
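In code, the boundary between Jobs can be sketched like this (a conceptual illustration of where one Job ends and the next begins, not an API):

```swift
Task {
    // Job 1: everything up to the first suspension point runs
    // as one continuous batch of synchronous instructions.
    print("step 1")

    // Potential suspension point: the current Job ends here and
    // the thread can be handed back to the cooperative pool.
    try? await Task.sleep(for: .seconds(1))

    // Job 2: the continuation that the Executor schedules after
    // the task resumes, possibly on a different pool thread.
    print("step 2")
}
```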

As a reminder, a crucial detail is that the number of threads in the cooperative thread pool equals the number of CPU cores on the device.
We already know how to create a Task — we’ve spawned a good dozen of them. But what about Executor and Job? Speaking of executors, there are built-in implementations that operate behind the scenes without our direct involvement (which is logical, otherwise we would have been creating them manually long ago). They fall into the following types:
- Global concurrent executor — the default scheduler that dispatches jobs across threads from the cooperative thread pool. In most cases, it is the one scheduling all our jobs (concurrently). The diagram above illustrates exactly how it works.
- Serial executors — each actor has its own serial executor, which also executes jobs on threads from the cooperative thread pool, but does so sequentially (we will cover actors in more detail later).
- Main Actor executor — a special serial executor that executes jobs strictly on the main thread.
Aside from the default ones, we also have the ability to create our own custom executor. This allows us to look under the hood and see how this entire mechanism works from the inside. Let’s not waste any time and sketch out a naive custom executor implementation.
Naive Executor Implementation
Currently, there are two protocols for implementing custom executors:
- TaskExecutor: For use within a Task hierarchy.
- SerialExecutor: For use within an Actor.
Let’s start by implementing the first one:
// 1
@available(iOS 18.0, *)
final class CustomExecutor: TaskExecutor {
    // 2
    func enqueue(_ job: consuming ExecutorJob) {
        // 3
        job.runSynchronously(on: asUnownedTaskExecutor())
    }

    // 4
    func asUnownedTaskExecutor() -> UnownedTaskExecutor {
        UnownedTaskExecutor(ordinary: self)
    }
}
1. TaskExecutor was introduced recently — the minimum version requirement starts at iOS 18. We won't be able to use it in production for quite a while, but it is quite convenient for observing how the system works under the hood.
2. To conform to the protocol, we must implement the enqueue method, which takes a Job parameter representing a portion of a Task's execution. The consuming keyword also appeared relatively recently. While it is not strictly part of Swift Concurrency, it is worth highlighting what it means. consuming indicates that the function takes exclusive ownership of the object, and the caller loses the ability to use it after passing it to the function. The opposite concept is borrowing. You can learn more about them in the relevant Swift Evolution proposal. What matters to us here is that once the Job is passed into our function, it cannot be used anywhere else outside of it. We acquire exclusive ownership rights.
3. Jobs consist of a set of instructions that need to be executed somewhere. The runSynchronously method is exactly what executes the job. In our naive implementation, we execute it immediately, without any preliminary actions.
4. This method is also required by the protocol to obtain an unowned reference to the current executor. We already utilized it in the previous step, as calling runSynchronously requires passing an unowned reference to the current executor.
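To make the ownership semantics concrete, here is a small sketch with a hypothetical noncopyable `Token` type (names are mine, not from the article; requires Swift 5.9+, since ownership transfer is only compiler-enforced for `~Copyable` values):

```swift
// Hypothetical noncopyable type: the compiler tracks ownership
// of ~Copyable values instead of silently copying them.
struct Token: ~Copyable {
    let id: Int
}

// `consuming`: the function takes exclusive ownership of `token`;
// the caller cannot touch the value afterwards.
func redeem(_ token: consuming Token) -> Int {
    token.id
}

// `borrowing`: the function only reads `token`; ownership stays
// with the caller.
func inspect(_ token: borrowing Token) -> Int {
    token.id
}

let token = Token(id: 42)
print(inspect(token)) // fine: the caller still owns `token`
print(redeem(token))  // ownership moves into `redeem`
// print(inspect(token)) // compile-time error: 'token' used after consume
```

This is exactly the relationship between our executor and the `ExecutorJob` it receives: after `enqueue`, nobody else may touch that job.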
Let’s wrap the method with logs in advance:
@available(iOS 18.0, *)
final class CustomExecutor: TaskExecutor {
    func enqueue(_ job: consuming ExecutorJob) {
        let jobDescription = job.description
        print("\(#function) before job.runSynchronously \(jobDescription)")
        print(Thread.current)
        job.runSynchronously(on: asUnownedTaskExecutor())
        print("\(#function) after job.runSynchronously \(jobDescription)")
        print(Thread.current)
    }

    func asUnownedTaskExecutor() -> UnownedTaskExecutor {
        UnownedTaskExecutor(ordinary: self)
    }
}
Next, we need to somehow use our executor to run a Job within a Task. Up until now, we have been creating Tasks using the default initializer (Task {}), which relies on the Global concurrent executor by default. To assign our custom Executor, we will use an alternative initializer:
print("Before task \(Thread.current)")
// 1
Task.detached(executorPreference: CustomExecutor()) {
    print("Task work item")
}
print("After task \(Thread.current)")
We use detached to avoid inheriting the surrounding context. The standard initializer has a similar parameter (executorPreference). Keep in mind that these initializers are also @available(iOS 18.0), just like the TaskExecutor implementation itself.
Running our code, we see the following:
Before task <_NSMainThread: 0x60000241cac0>{number = 1, name = main}
enqueue(_:) before job.runSynchronously ExecutorJob(id: 1)
<_NSMainThread: 0x60000241cac0>{number = 1, name = main}
Task work item
enqueue(_:) after job.runSynchronously ExecutorJob(id: 1)
<_NSMainThread: 0x60000241cac0>{number = 1, name = main}
After task <_NSMainThread: 0x60000241cac0>{number = 1, name = main}
Let’s analyze the logs line by line:
1. I executed this code on the main thread, and as expected, the logs show that I am on the main thread right before creating the Task.
2. Immediately after the Task is created, we get a log in the console from our executor indicating that it has received its first Job to process.
3. We are still on the exact same thread where the task was created. The initial call to enqueue the job happens synchronously.
4. At this point, our job is executed (the print statement inside the Task is triggered).
5. We see the log confirming that the job's execution is complete.
6. The underlying thread remains unchanged.
7. Finally, we see the log that was placed sequentially after the Task initialization.
In our current example, only a single job was created because there are no suspension points (await calls) inside the task. Let’s fix that and take a look at the updated logs:
print("Before task \(Thread.current)")
Task.detached(executorPreference: CustomExecutor()) {
    print("🟩")
    try await Task.sleep(for: .seconds(1))
    print("🟥")
}
print("After task \(Thread.current)")
// 1
Before task <_NSMainThread: 0x6000037fc000>{number = 1, name = main}
// 2
enqueue(_:) before job.runSynchronously ExecutorJob(id: 1)
<_NSMainThread: 0x6000037fc000>{number = 1, name = main}
🟩
enqueue(_:) after job.runSynchronously ExecutorJob(id: 1)
<_NSMainThread: 0x6000037fc000>{number = 1, name = main}
// 3
After task <_NSMainThread: 0x6000037fc000>{number = 1, name = main}
// 4
enqueue(_:) before job.runSynchronously ExecutorJob(id: 1)
<NSThread: 0x6000037fdf00>{number = 5, name = (null)}
🟥
enqueue(_:) after job.runSynchronously ExecutorJob(id: 1)
<NSThread: 0x6000037fdf00>{number = 5, name = (null)}
Let’s break down these logs step by step:
1. Pre-Task State: The log preceding the Task creation remains unchanged. We are still initiating this work from the main thread.
2. First Job Enqueued: As the Task is created, the first Job is dispatched to our executor to handle the initial block of code (the print statement with the green square).
3. Thread Yielding: Next, we see the log that was placed synchronously after the Task initialization. The Task itself hasn't completed; instead, it has relinquished the thread because it is currently suspended (having reached the sleep call).
4. The Continuation Job: A second Job is then enqueued into the executor. This job represents the continuation after the try await Task.sleep (the suspension point) and executes the print statement with the red square. Notice the thread switch: the underlying thread has changed, and we are no longer executing on the main thread.
This example perfectly illustrates how Swift Concurrency fragments a single Task into discrete Jobs separated by suspension points (await). It also proves that the system can seamlessly switch the underlying thread between these jobs.
⚠️ A Crucial Caveat: it is vital to understand a common misconception. The presence of the await keyword is not a definitive indicator that an additional job will be created, nor does it guarantee that an actual suspension of execution will occur.
For example:
nonisolated func asyncWorkItem() async {
    print(#function)
}

print("Before task \(Thread.current)")
Task.detached(executorPreference: CustomExecutor()) {
    print("🟩")
    // Replaced the Task.sleep call with a call to our asynchronous function.
    await asyncWorkItem()
    print("🟥")
}
print("After task \(Thread.current)")
When we run this code, we’ll see the following result in the console:
Before task <_NSMainThread: 0x600001e84a40>{number = 1, name = main}
enqueue(_:) before job.runSynchronously ExecutorJob(id: 1)
<_NSMainThread: 0x600001e84a40>{number = 1, name = main}
🟩
asyncWorkItem()
🟥
enqueue(_:) after job.runSynchronously ExecutorJob(id: 1)
<_NSMainThread: 0x600001e84a40>{number = 1, name = main}
After task <_NSMainThread: 0x600001e84a40>{number = 1, name = main}
The logs for the green square, the asynchronous function, and the red square all executed within a single Job, despite the explicit await keyword in the middle of the Task.
This happens because our asynchronous function doesn’t actually contain any true suspension points under the hood. Consequently, the Swift Concurrency engine optimizes the execution, determining that it is far more efficient to execute the entire block in a single continuous run rather than incurring the overhead of context switching.
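One way to probe this is to put a genuine suspension point inside the function. Task.yield() is documented to suspend the current task and let the scheduler resume it later, so a modified asyncWorkItem like the sketch below should produce a second enqueue in our executor's logs (worth verifying yourself; the runtime is free to optimize scheduling):

```swift
nonisolated func asyncWorkItem() async {
    // Task.yield() is a real suspension point: the task gives up its
    // thread and asks to be rescheduled, so the executor should now
    // receive a separate continuation Job after this line.
    await Task.yield()
    print(#function)
}
```

Rerunning the previous snippet with this version, the green and red squares should land in different Jobs.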
Binding an Executor to a Specific Queue
The custom Executor we implemented above is, in practice, essentially useless for production code. Aside from printing logs to help us understand the internal plumbing, it doesn’t perform any meaningful scheduling. It simply takes a Job and executes it synchronously on the caller’s thread, effectively blocking it.
We can clearly observe this blocking behavior with a simple example:
print("Before task")
Task.detached {
    print("Task work item")
}
print("After task")
Console output:
Before task
After task
Task work item
In this snippet, we didn’t inject our custom executor. As a result, the Task was handled by the default Global Concurrent Executor, which schedules the Job onto an available thread from the pool rather than executing it synchronously on the spot. This asynchronous dispatch is exactly why we see the body of the Task executing last in our logs.
However, watch what happens when we explicitly assign our CustomExecutor to the Task:
print("Before task")
Task.detached(executorPreference: CustomExecutor()) {
    print("Task work item")
}
print("After task")
Eagle-eyed readers may have already caught this nuance when examining the previous logs. But for those who missed it, here is another opportunity to see this blocking behavior in action:
Console output:
Before task
Task work item
After task
Because our naive executor processes the Job the exact moment it is enqueued (which happens immediately upon Task initialization, originating from the caller’s thread), it forces the body of the Task to execute synchronously.
While this blocking implementation is a fantastic educational tool for understanding the internal wiring, it serves absolutely no practical purpose in a real-world codebase.
Let’s step it up and build a custom executor that actually resembles a production-ready use case:
@available(iOS 18.0, *)
final class QueueExecutor: TaskExecutor {
    private let queue: DispatchQueue

    init(queue: DispatchQueue) {
        self.queue = queue
    }

    func enqueue(_ job: UnownedJob) {
        // Redirect every job to the queue provided in the initializer.
        queue.async {
            print("job.runSynchronously on \(Thread.current)")
            job.runSynchronously(on: self.asUnownedTaskExecutor())
        }
    }

    func asUnownedTaskExecutor() -> UnownedTaskExecutor {
        UnownedTaskExecutor(ordinary: self)
    }
}
Now, every time a new Job is enqueued into our executor, we will redirect its execution to a predetermined, dedicated queue. Here is an example of how we can utilize this approach in practice:
print("Before task \(Thread.current)")
Task.detached(executorPreference: QueueExecutor(queue: .main)) {
    print("🟩")
    try await Task.sleep(for: .seconds(1))
    print("🟥")
}
print("After task \(Thread.current)")
Checking the logs, we will observe that both Jobs executed strictly on the main thread. In essence, we have just built a handcrafted, rudimentary equivalent of the @MainActor.
Before task <_NSMainThread: 0x600001708040>{number = 1, name = main}
After task <_NSMainThread: 0x600001708040>{number = 1, name = main}
job.runSynchronously on <_NSMainThread: 0x600001708040>{number = 1, name = main}
🟩
job.runSynchronously on <_NSMainThread: 0x600001708040>{number = 1, name = main}
🟥
Additionally, unlike the previous executor, the “after task” log is triggered before the actual execution of the Job, mirroring the behavior of the Global Concurrent Executor during a standard Task {} initialization.

We aren’t limited to the main queue. If we inject a global concurrent queue instead, the executor’s behavior will closely mirror the system’s default Global Concurrent Executor:
Task(executorPreference: QueueExecutor(queue: .global())) {
    print("🟩")
    try await Task.sleep(for: .seconds(1))
    print("🟥")
}
Console output:
job.runSynchronously on <NSThread: 0x6000017083c0>{number = 7, name = (null)}
🟩
job.runSynchronously on <NSThread: 0x600001720080>{number = 9, name = (null)}
🟥
Looking at the output, we can observe that the underlying thread changes once again. This happens because each individual Job is now being scheduled onto the concurrent global queue independently. Consequently, the actual thread executing the code can easily differ from one Job to the next.
This is precisely why the underlying thread can change across await suspension points when using the default system Executor. While Apple’s internal scheduling algorithms are vastly more sophisticated than our naive example, the fundamental reason for the “thread hop” remains exactly the same.

Actors and Their Executors
We haven’t yet explored Actors in practice throughout this series of articles, but that shouldn’t stop us from understanding their core concept—and, more importantly, why their Jobs require a dedicated Executor. We will get our hands dirty with the practical application of Actors in the next article, but we need to touch on them here to complete our big-picture understanding of the Executor ecosystem.
First, a brief primer on Actors for those who might be completely new to the concept:
An Actor is a reference type (under the hood, it is essentially a class with strict, compiler-enforced restrictions) designed to protect its mutable state from data races—situations where multiple threads attempt to write to the same memory location simultaneously. In other words, it is an inherently thread-safe object.
An Actor guarantees the safety of its internal state by ensuring that all calls to its methods and accesses to its properties are executed purely sequentially, one after another.
Let’s look at a practical example:
actor SafeArray<T> {
    private var array: [T] = []

    func append(_ element: T) {
        array.append(element)
    }

    func removeLast() -> T {
        array.removeLast()
    }
}
In this example, we encapsulated a standard Swift array within our Actor. This instantly grants it thread safety—a crucial characteristic that standard collections in Swift lack by default. Now, let’s see how we actually interact with it in practice:
let array = SafeArray<Int>()
array.append(1) // error: Call to actor-isolated instance method 'append' in a synchronous nonisolated context
Accessing an Actor’s state from the outside requires the await keyword precisely because the Actor guarantees sequential execution of its isolated methods.
If we could call these methods without await, the execution would happen synchronously—right here on the caller’s thread, at the exact moment of invocation. This would instantly destroy any guarantees of thread safety, as the Actor’s internal state could be simultaneously mutated from a completely different thread.
Therefore, the await keyword signals a potential suspension point. It tells the system that the method might not execute immediately. Instead, the current task may yield its underlying thread back to the system so it can be used to execute other jobs from the queue (since await is non-blocking). The task will only resume once the Actor’s internal executor is free to process our request.
Let’s verify this sequential behavior in action. To do this, we’ll slightly tweak our previous example:
actor SafeArray<T> {
    private var array: [T] = []

    func append(_ element: T) {
        // 1
        print("Will append \(element)")
        // 2
        Thread.sleep(forTimeInterval: 0.1)
        array.append(element)
        // 1
        print("Did append \(element)")
    }
}
I added logs to track the start and completion of the operation. I also simulated a long-running task by intentionally blocking the thread using Thread.sleep.
let array = SafeArray<Int>()

Task {
    // Using TaskGroup to work with the array in parallel
    await withTaskGroup { group in
        for i in 0..<10 {
            group.addTask {
                await array.append(i)
            }
        }
        await group.waitForAll()
    }
}
The console output is:
Will append 0
Did append 0
Will append 1
Did append 1
Will append 2
Did append 2
Will append 3
Did append 3
Will append 4
Did append 4
Will append 5
Did append 5
Will append 6
Did append 6
Will append 8
Did append 8
Will append 9
Did append 9
Will append 7
Did append 7
Analyzing the output, we can verify that all operations are executed strictly sequentially. This is clearly evidenced by the fact that there are no overlapping “will” or “did” logs — the next operation only begins after the previous one has fully completed.
You might notice that the numerical order (from 0 to 9) is not maintained. This is entirely expected behavior. When spawning child tasks within a TaskGroup, the Swift runtime makes no guarantees regarding the order in which they will be scheduled or executed.
So, how exactly is this strict sequential execution achieved? Let’s recall the architectural diagram from the beginning of the article.
As we established, Executors are responsible for scheduling and executing Jobs. It is the Actor’s dedicated Serial Executor that handles all the under-the-hood synchronization mechanics. By default, any Job associated with an Actor is enqueued directly onto its own Serial Executor, bypassing the Global Concurrent Executor entirely.

The default Serial Executor still relies on the Cooperative Thread Pool for execution, but it orchestrates the dispatching to guarantee strict sequential processing.
Just as we did with the standard TaskExecutor, we can override the default behavior and inject our own custom implementation for an Actor’s internal executor. To achieve this, we need to conform to the SerialExecutor protocol. Let’s start once again with a naive implementation:
@available(iOS 18.0, *)
final class CustomSerialExecutor: SerialExecutor {
    func enqueue(_ job: consuming ExecutorJob) {
        job.runSynchronously(on: asUnownedSerialExecutor())
    }

    func asUnownedSerialExecutor() -> UnownedSerialExecutor {
        UnownedSerialExecutor(ordinary: self)
    }
}
The implementation closely mirrors what we built for the TaskExecutor. Note that this API is also restricted to iOS 18 and newer.
To instruct our Actor to utilize this custom executor, we need to apply the following modifications:
// 1
@available(iOS 18.0, *)
actor SafeArray<T> {
    // 2
    private let executor = CustomSerialExecutor()

    // 3
    nonisolated var unownedExecutor: UnownedSerialExecutor {
        executor.asUnownedSerialExecutor()
    }

    private var array: [T] = []

    func append(_ element: T) {
        print("Will append \(element)")
        Thread.sleep(forTimeInterval: 0.1)
        array.append(element)
        print("Did append \(element)")
    }
}
1. We must mark the entire actor as restricted to iOS 18, as the ability to assign custom executors is simply not supported on earlier versions of the OS.
2. Here, we instantiate our custom executor. Storing it as a private property is entirely optional: you could easily inject it from the outside (via dependency injection) or implement it as a shared singleton. However, a word of caution — if you opt for a singleton, multiple actors will be forced to share the exact same executor. This is a critical architectural detail you must account for in your implementation to avoid unintentional bottlenecks.
3. This is where we fulfill the Actor protocol requirement by defining the unownedExecutor property. This step is strictly mandatory; it is the bridge that tells the actor to bypass the default system scheduler and route its Jobs to our custom implementation.

And that covers the complete setup. Let's execute our previous test case with this modified actor and examine the logs:
Will append 4
Will append 9
Will append 0
Will append 3
Will append 2
Will append 8
Will append 7
Will append 1
Will append 5
Will append 6
Did append 0
Did append 4
Did append 6
Something is clearly wrong here. The operations are executing in parallel, and by the fourth append operation, the code completely crashes, throwing an EXC_BAD_ACCESS error.

It doesn’t take much digging to uncover the root cause. Our naive executor completely lacks any actual synchronization logic. It simply executes Jobs immediately upon receiving them—and crucially, it does so synchronously on the caller’s thread.
Because our TaskGroup is firing off child tasks concurrently across multiple threads, these jobs are being fed into our executor simultaneously. (You can easily verify this concurrency by dropping a print(Thread.current) inside the executor’s enqueue method). The result? A classic Data Race leading straight to a crash.
This perfectly demonstrates a fundamental truth of Swift Concurrency: an Actor is only as safe as its underlying executor. Without a robust serialization mechanism, the actor keyword is just syntactic sugar. Stripped of a proper executor, it loses its sole purpose—guaranteed thread safety.
Let’s fix this glaring flaw by falling back on a tried-and-true Apple primitive: a good old serial DispatchQueue.
@available(iOS 18.0, *)
final class CustomSerialExecutor: SerialExecutor {
    private let queue = DispatchQueue(label: "com.example.serialExecutorQueue")

    func enqueue(_ job: UnownedJob) {
        queue.async {
            job.runSynchronously(on: self.asUnownedSerialExecutor())
        }
    }

    func asUnownedSerialExecutor() -> UnownedSerialExecutor {
        UnownedSerialExecutor(ordinary: self)
    }
}
After running the example with this executor implementation, our logs will return to the expected format, and the crashes will disappear.
This example is purely academic. Do not use this implementation in production projects. It is less optimal than the default serial executor for actors. The default one does not create new threads, as it uses threads from the pool, building sequential execution on top of them.
How our CustomSerialExecutor works on the diagram:
How the Executor is Determined
Let’s summarize our discussion about Executors by looking at how they are determined in code. We have an asynchronous function. How do we determine which executor it will be executed on? Let’s start with a simple example and build upon it:
@available(iOS 18.0, *)
func asyncWorkItem(id: Int) async {
    // 1
    withUnsafeCurrentTask { task in
        guard let task else { return }
        // 2
        print(id, task.unownedTaskExecutor ?? globalConcurrentExecutor)
    }
}

// 3
Task {
    await asyncWorkItem(id: 1)
}
1. Using the global function withUnsafeCurrentTask, you can get a reference to the current Task within which the function is executing. This function can also be called inside a synchronous (regular) function; it will return a Task only if the function is being executed within an asynchronous context.
2. We print the current unownedTaskExecutor. It will be non-nil only if we explicitly assign our custom executor for execution. Otherwise, we can assume that the current executor is the globalConcurrentExecutor.
3. The example in its initial form.
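The synchronous-context behavior of withUnsafeCurrentTask is easy to see in a standalone script; here's a small sketch (the helper name is mine, not from the article):

```swift
import Dispatch

// Hypothetical helper: reports whether the caller is currently
// running inside a Task.
func isInsideTask() -> Bool {
    withUnsafeCurrentTask { task in task != nil }
}

// Plain synchronous top-level code has no surrounding Task,
// so the closure receives nil.
print("top level inside task: \(isInsideTask())")

// The same synchronous function, called from within a Task,
// now observes a task context.
let done = DispatchSemaphore(value: 0)
Task {
    print("Task body inside task: \(isInsideTask())")
    done.signal()
}
done.wait() // keep the script alive until the task runs
```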
When running this code, we will see the following in the console:
1 Swift._DefaultGlobalConcurrentExecutor
We didn’t assign our Executor, so the Task executes on the global one. Let’s now assign our Executor:
Task {
    await asyncWorkItem(id: 1)
    // 1
    await withTaskExecutorPreference(CustomExecutor()) {
        await asyncWorkItem(id: 2)
        // 2
        async let workItem: Void = asyncWorkItem(id: 3)
        await workItem
    }
    await asyncWorkItem(id: 4)
}
1. The Executor can be changed "on the fly" within a Task using this function. This is useful if you do not need to assign an Executor for the entire task.
2. Child Tasks also inherit the Executor (the same applies to TaskGroup).
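A sketch of that inheritance, reusing the article's CustomExecutor and asyncWorkItem (assumed to be in scope, inside an async context; the expected output is my assumption to verify against your own logs):

```swift
await withTaskExecutorPreference(CustomExecutor()) {
    // An `async let` child task inherits the executor preference...
    async let a: Void = asyncWorkItem(id: 10)
    await a

    // ...and so do children spawned inside a TaskGroup.
    await withTaskGroup(of: Void.self) { group in
        group.addTask {
            await asyncWorkItem(id: 11)
        }
        await group.waitForAll()
    }
}
```

Both ids should report the custom executor rather than falling back to the global concurrent one.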
Output:
1 Swift._DefaultGlobalConcurrentExecutor
2 UnownedTaskExecutor(executor: (Opaque Value))
3 UnownedTaskExecutor(executor: (Opaque Value))
4 Swift._DefaultGlobalConcurrentExecutor
The second and third IDs executed within our Executor — this is generally logical. But what if we want to do the opposite and reset the custom Executor “on the fly”? We do it like this:
// 1
Task(executorPreference: CustomExecutor()) {
    await asyncWorkItem(id: 1)
    // 2
    await withTaskExecutorPreference(globalConcurrentExecutor) {
        await asyncWorkItem(id: 2)
        async let workItem: Void = asyncWorkItem(id: 3)
        await workItem
    }
    await asyncWorkItem(id: 4)
}
1. We assign our Executor for the execution of the entire Task.
2. To reset it, we simply reassign globalConcurrentExecutor using the same withTaskExecutorPreference function.
Output:
1 UnownedTaskExecutor(executor: (Opaque Value))
2 Swift._DefaultGlobalConcurrentExecutor
3 Swift._DefaultGlobalConcurrentExecutor
4 UnownedTaskExecutor(executor: (Opaque Value))
Until Actors are added, everything is quite simple. If a taskExecutor is assigned, it will schedule the Jobs. If not, the globalConcurrentExecutor will schedule them. Now let’s add Actors:
// 1
@available(iOS 18.0, *)
actor MyActor {
    func asyncWorkItem(id: Int) async {
        withUnsafeCurrentTask { task in
            guard let task else { return }
            print(id, task.unownedTaskExecutor ?? globalConcurrentExecutor)
        }
    }
}

// 2
@available(iOS 18.0, *)
actor MyActorWithCustomExecutor {
    private let executor = CustomSerialExecutor()

    nonisolated var unownedExecutor: UnownedSerialExecutor {
        executor.asUnownedSerialExecutor()
    }

    func asyncWorkItem(id: Int) async {
        withUnsafeCurrentTask { task in
            guard let task else { return }
            print(id, task.unownedTaskExecutor ?? globalConcurrentExecutor)
        }
    }
}
Task {
    await asyncWorkItem(id: 1)
    await withTaskExecutorPreference(CustomExecutor()) {
        await asyncWorkItem(id: 2)
        async let workItem: Void = asyncWorkItem(id: 3)
        await workItem
        // 3
        await MyActor().asyncWorkItem(id: 4)
        await MyActorWithCustomExecutor().asyncWorkItem(id: 5)
    }
    await asyncWorkItem(id: 6)
    // 4
    await MyActor().asyncWorkItem(id: 7)
    await MyActorWithCustomExecutor().asyncWorkItem(id: 8)
}
1. We isolate our function inside the actor.
2. A similar action, but with assigning an executor for the actor.
3. Calling two instances inside the block with a CustomExecutor preference.
4. Calling two instances outside the block with a CustomExecutor preference.
Let’s look at the logs. This time we are interested in IDs: 4, 5, 7, 8.
1 Swift._DefaultGlobalConcurrentExecutor
2 UnownedTaskExecutor(executor: (Opaque Value))
3 UnownedTaskExecutor(executor: (Opaque Value))
4 UnownedTaskExecutor(executor: (Opaque Value))
5 UnownedTaskExecutor(executor: (Opaque Value))
6 Swift._DefaultGlobalConcurrentExecutor
7 Swift._DefaultGlobalConcurrentExecutor
8 Swift._DefaultGlobalConcurrentExecutor
Overall, it makes sense: in 4 and 5 we see the passed executor, and in 7 and 8 we don’t, but there are nuances here. As you remember, actor executors have a different protocol (SerialExecutor), while we, in turn, are passing a TaskExecutor. How does this work?
The mechanism is as follows:
If the actor has its own SerialExecutor assigned, the TaskExecutor is completely ignored. In our example, this is the log with id 5. It printed UnownedTaskExecutor because we are still executing within a Task (and printing the executor stored on it); in fact, that executor is not used at all in this case. Id 8 should raise no questions either, since there is no conflict between executors there.
If we didn’t set either a TaskExecutor or a SerialExecutor for the Actor, the execution will be on the default SerialExecutor. In our case, this is id 7. Again: DefaultGlobalConcurrentExecutor in the logs is the TaskExecutor; it is not the one doing the scheduling for the actor.
The most ambiguous case is id 4. The Actor in this example doesn't have its own SerialExecutor, but a TaskExecutor is passed. In this case, the behavior is quite specific: the default SerialExecutor guarantees sequential execution, while the TaskExecutor determines the thread of execution. As a result, the two executors work in tandem, each handling its own area of responsibility. In other words, passing a TaskExecutor (which does not support sequential execution) will not break the thread safety of our actor (you can verify this with our SafeArray).
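To check that claim, here is a sketch combining the original SafeArray (the version without a custom serial executor) with the QueueExecutor from earlier. The serialization comes from the actor's default SerialExecutor; the TaskExecutor only picks the threads:

```swift
let array = SafeArray<Int>()

// A concurrent-queue TaskExecutor decides *where* the jobs run...
Task(executorPreference: QueueExecutor(queue: .global())) {
    await withTaskGroup(of: Void.self) { group in
        for i in 0..<10 {
            group.addTask {
                // ...but the actor's default SerialExecutor still
                // executes every append one at a time, so the
                // "Will/Did append" logs never interleave.
                await array.append(i)
            }
        }
        await group.waitForAll()
    }
}
```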
In summary, determining the executor for any case can be done using the following diagram:

Conclusion
In this part, we’ve taken a much deeper dive into the inner workings of Swift Concurrency and, hopefully, introduced you to several new under-the-hood primitives. I believe the concept of an Executor has finally shed its “black box” status and become a much more tangible, controllable entity.
The deeper our understanding of these underlying implementations, the better equipped we are to control and predict the behavior of our concurrency tools. And as any experienced developer knows, predictability is absolutely paramount when operating in a complex multithreaded environment.
Useful Links: