Redesign of core / inner working #1

Open
opened 2025-12-24 23:05:13 +01:00 by mai-lapyst · 4 comments
Owner

New Async considerations:

We tried the "Futures" approach, but it has several severe drawbacks when implemented in DLang.

The implementation was based on `class`es that implement an `interface Future`. This was done because only classes allow for (checked) polymorphism, but at the cost of heap-allocated AND GC'd objects, which are mostly short-lived anyway.

Why short-lived? Because the code where they were used (IO) boiled down to `socket.read(buffer).await()`. Since it didn't make sense to do anything else until the buffer was filled (even partially!), we awaited the future directly, and thus the heap-allocated class is a waste of resources; it could more fittingly be a struct that is stack-allocated instead.
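
For illustration, a minimal sketch of the shape this design takes; the names (`ReadFuture`, the method set of `Future`) are hypothetical stand-ins, not the actual ninox.d-async API:

```d
// Hypothetical sketch (not the actual ninox.d-async API) of the tried design:
// a class-based future forces a GC heap allocation per IO call, even though
// the result is awaited immediately at the call site.
interface Future(T)
{
    bool isDone();
    T await(); // would park the calling fiber until the result is ready
}

class ReadFuture : Future!size_t
{
    private size_t bytesRead;
    private bool done;

    bool isDone() { return done; }

    size_t await()
    {
        // real code would suspend the current fiber until the read event fires
        done = true;
        return bytesRead;
    }
}

void main()
{
    // Stand-in for `socket.read(buffer).await()`: one short-lived GC object
    // per read, discarded right after the await.
    Future!size_t f = new ReadFuture();
    auto n = f.await();
}
```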

On the topic of (checked) polymorphism: we *could* use generics, but that would bear the cost of monomorphisation, and the tools D offers for implementing polymorphism via generics are not easily debuggable; though it could be an avenue to pursue in testing.
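
For comparison, a hedged sketch of that generics route (`ValueFuture` is a hypothetical name, not an existing type):

```d
// Hypothetical sketch of the generics route: a templated, stack-allocated
// future. No GC allocation, but every distinct Result type monomorphises into
// its own unrelated struct type, so there is no common base to pass around.
struct ValueFuture(Result)
{
    private Result value;
    private bool ready;

    bool isDone() const { return ready; }

    Result await()
    {
        // real code would suspend the current fiber until `ready` flips
        return value;
    }
}

void main()
{
    // Lives entirely on the stack; ValueFuture!size_t and ValueFuture!string
    // are unrelated types, which is exactly the checked-polymorphism problem.
    ValueFuture!size_t readFut;
    auto n = readFut.await();
}
```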


The most looming question that arises is a more basic one: why do we NEED futures for IO at all, when we (ideally) already know that we can't do anything until the event we're waiting on (data to process) has arrived? Vibe.d, for example, does exactly this: all socket reads are fiber-blocking in nature: they don't block the process/thread itself, but rather the current fiber/task that's running inside the scheduler.

The only time we *would* want to have a future-esque "promise" object is when we're using concurrency/parallelism directly. Think of a problem where we need to start N asynchronous tasks to "do" something, and we need to be sure they have all completed BEFORE we continue; or alternatively, where we want to wait on any one of them and process its result as soon as it's available. This is an entirely different problem category from IO / waiting for events!
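
As a rough analogy, here is that category expressed with `std.parallelism` (thread-based parallelism rather than our event loop, so treat it purely as a shape-of-the-problem sketch, not a proposed design):

```d
// Shape-of-the-problem sketch using std.parallelism (threads, not our event
// loop): handles/futures earn their keep when we fan out N tasks and must
// wait for ALL of them (or the first one) before continuing.
import std.parallelism : task, taskPool;
import std.stdio : writeln;

int square(int x) { return x * x; }

void main()
{
    // Fan out N tasks onto the pool...
    auto tasks = [task!square(1), task!square(2), task!square(3), task!square(4)];
    foreach (t; tasks)
        taskPool.put(t);

    // ...and only here a future/promise-like handle is genuinely needed:
    // block until every task has finished, then continue.
    int sum = 0;
    foreach (t; tasks)
        sum += t.yieldForce();
    writeln("all tasks done, sum = ", sum);
}
```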

Let's look at some other languages/libraries, shall we?

Author
Owner

## Zig

Zig introduces an `Io` interface to pass around. While not entirely our thing here, it carries a very important concept: it splits IO off from asynchronicity. For example, the `Io` interface uses a vtable at its heart that has different functions for IO (e.g. `netRead`) and for asynchronous operations (e.g. `async`). It even has support for wait groups AND cancellation!

There aren't many examples/infos out there, but from what I could gather, IO is blocking by default. If you want non-blocking IO operations, it seems you need to wrap them inside a call to `async()` and `await()` the future. (More testing needed!)

- https://github.com/mk12/zig-server
- https://github.com/ziglang/zig/issues/26056

Zig Io Interface:

```zig
const Io = struct {
  userdata: ?*anyopaque,
  vtable: *const VTable,

  pub fn async(io: Io, function: anytype, args: std.meta.ArgsTuple) Future(...);
  pub fn concurrent(io: Io, function: anytype, args: std.meta.ArgsTuple) !Future(...);
  pub fn cancelRequested(io: Io) bool;
};

const AnyFuture = opaque {};

// Generic!!
fn Future(Result: type) type {
  return struct {
    any_future: ?*AnyFuture,
    result: Result,

    pub fn cancel(f: *@This(), io: Io) Result;
    pub fn await(f: *@This(), io: Io) Result;
  };
}

pub const Group = struct {
  state: usize,
  context: ?*anyopaque,
  token: ?*anyopaque,

  pub fn async(g: *Group, io: Io, function: anytype, args: std.meta.ArgsTuple) void;
  pub fn concurrent(g: *Group, io: Io, function: anytype, args: std.meta.ArgsTuple) !void;
  pub fn wait(g: *Group, io: Io) void;
  pub fn cancel(g: *Group, io: Io) void;
};
```
Author
Owner

## Dlang - Vibe.d

In vibe.d, every IO operation is fiber-blocking in nature; this means doing a `read()` on a socket will yield the calling fiber and park it if not enough data is available at that moment. For asynchronicity, it seems to be expected that you spin up a new task and handle things inside that.
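
A hedged sketch of what that model looks like in use (vibe.d API names from memory; treat the exact signatures as approximate, this is not verified against current vibe-core):

```d
// Sketch of the vibe.d model: read() parks only the calling fiber, while the
// event loop keeps driving other tasks on the same thread.
import vibe.core.core : runTask, runEventLoop;
import vibe.core.net : connectTCP;

void main()
{
    runTask({
        try {
            auto conn = connectTCP("example.org", 80);
            conn.write("GET / HTTP/1.0\r\nHost: example.org\r\n\r\n");

            ubyte[64] buf;
            // fiber-blocking: this task yields until the buffer is filled,
            // without blocking the thread or other tasks.
            conn.read(buf[]);
        } catch (Exception e) {
            // connectTCP/read may throw; newer vibe-core expects nothrow tasks
        }
    });
    runEventLoop();
}
```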

Cancellation is also (somehow) supported, but this needs more testing/reading.

Author
Owner

## Golang

Go does something similar to vibe.d, in that it deploys "blocking" detection: it marks the current worker thread as blocking, moves its tasks out of the way, and then blocks. (At least that's what I've read.) It employs epoll as a notification mechanism, so it could also be that reads on epoll-ed resources only block a goroutine, not the entire process/thread. (More testing?)

Author
Owner

## Trio (???)

I don't really know much here, other than that it's based on structured concurrency. To achieve this, it essentially wraps all IO/might-be-blocking constructs and adds "synchronisation points" where it suspends the current task and goes off to do other work. It might also be a form of "block-fiber-not-thread" when it comes to IO.

Reference: bithero-dlang/ninox.d-async#1