use std::sync::Arc;
use std::sync::atomic::AtomicU32;
use crate::protocol::*;
// -----------------------------------------------------------------------------
// Component
// -----------------------------------------------------------------------------
/// Key to a component. The type system somewhat ensures that only one of
/// these can exist per component. Only with a key may one retrieve the
/// privately-accessible memory of a component. Practically it is just a
/// generational index, like `CompId`.
#[derive(Copy, Clone)]
pub(crate) struct CompKey(CompId);
/// Generational ID of a component
#[derive(Copy, Clone)]
pub(crate) struct CompId {
    pub index: u32,
    pub generation: u32,
}

impl PartialEq for CompId {
    fn eq(&self, other: &Self) -> bool {
        // Note: equality considers only the index; the generation is ignored.
        return self.index.eq(&other.index);
    }
}

impl Eq for CompId {}
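// A small illustration (not part of the runtime): two `CompId`s compare
// equal when they share an index, even across generations; liveness checks
// must compare the generation explicitly.
#[cfg(test)]
mod comp_id_example {
    use super::CompId;

    #[test]
    fn equality_ignores_generation() {
        let old = CompId{ index: 3, generation: 0 };
        let new = CompId{ index: 3, generation: 1 }; // same slot, reused
        assert!(old == new); // equality looks at the index only
        assert_ne!(old.generation, new.generation); // but `old` is stale
    }
}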
/// In-runtime storage of a component
pub(crate) struct RtComp {
}
// -----------------------------------------------------------------------------
// Runtime
// -----------------------------------------------------------------------------
type RuntimeHandle = Arc<Runtime>;
/// Memory that is maintained by "the runtime". In practice it is maintained by
/// multiple schedulers, and this serves as the common interface to that memory.
pub struct Runtime {
    active_elements: AtomicU32, // active components and APIs (i.e. component creators)
}
impl Runtime {
    pub fn new(num_threads: u32, _protocol_description: ProtocolDescription) -> Runtime {
        assert!(num_threads > 0, "need a thread to perform work");
        return Runtime{
            active_elements: AtomicU32::new(0),
        };
    }
}
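// Hedged illustration (these helpers are hypothetical, not part of the
// runtime yet): the usual pattern for such a counter is to increment it when
// a component or API object becomes active and to decrement it when one
// terminates; the caller that retires the last element observes zero and can
// initiate shutdown.
#[cfg(test)]
mod active_elements_example {
    use std::sync::atomic::{AtomicU32, Ordering};

    impl super::Runtime {
        fn on_element_started(&self) {
            self.active_elements.fetch_add(1, Ordering::AcqRel);
        }
        // Returns true for the caller that retired the last active element.
        fn on_element_finished(&self) -> bool {
            self.active_elements.fetch_sub(1, Ordering::AcqRel) == 1
        }
    }

    #[test]
    fn last_element_observes_zero() {
        let rt = super::Runtime{ active_elements: AtomicU32::new(0) };
        rt.on_element_started();
        assert!(rt.on_element_finished());
    }
}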
// -----------------------------------------------------------------------------
// Runtime containers
// -----------------------------------------------------------------------------
/// Component storage. Note that it doesn't need to be polymorphic, but making
/// it so allows us to test it.
// Requirements:
// 1. Performance ("fastness"), in order of importance:
//    1. Access (should be just an index retrieval)
//    2. Creation (because we want to execute code as fast as possible)
//    3. Destruction (because create-and-run is more important than quick dying)
// 2. Somewhat safe, with most of the performance cost paid in the incorrect
//    case.
// 3. Thread-safe. Everyone and their dog will be creating and indexing into
//    the components concurrently.
// 4. Assume low contention.
//
// Some trade-offs:
// We could perhaps make component IDs just a pointer to that component, with
// an atomic counter, managed by the runtime, that holds the number of owners
// (always starting at 1). However, it feels too early to do something like
// that, especially because I would like to do direct messaging. Even though
// sending two u32s is the same as sending a pointer, it feels wrong for now.
//
// So instead we'll have some kind of concurrent store that we can index into.
// This means that it might have to resize. Resizing implies that everyone
// must wait until the resize completes.
//
// Furthermore, it would be nice to reuse slots. That is to say: if we create a
// bunch of components and then destroy a couple of them, then the storage we
// reserved for them should be reusable.
//
// We'll go the somewhat simple route for now (a simplified sketch follows the
// `CompStore` declaration at the bottom of this file):
// 1. Each component will get allocated individually (and we'll define exactly
// what we mean by this sometime later, when we start with the bytecode). This
// way the components are pointer-stable for their lifetime.
// 2. We need to have some array that contains these pointers. We index into
// this array with our IDs.
// 3. When we destroy components we call the destructor on the allocated memory
// and add the index to some kind of freelist. Because only one thread can ever
// create and/or destroy a component we have an imaginary lock on that
// particular component's index. The freelist acts like a concurrent stack
// where we can push/pop. If we ensure that the freelist is the same size as
// the ID array then we can never run out of size.
// 4. At some point the ID array might be full and have to be resized. If we
// ensure that there is only one thread which can ever fill up the array (this
// means we *always* have one slot free, such that we can do a CAS) then we can
// do a pointer-swap on the base pointer of all storage. This takes care of
// resizing due to creation.
//
// However, with the freelist being accessed at the same time, we must make
// sure that we copy the old freelist and the old ID array correctly. While
// we're creating the new array we might still be destroying components. So
// one thread calls a component's destructor (not too bad) and then pushes
// the freed ID onto the freelist stack (which is bad). We can either somehow
// forbid destroying during resizing (which feels ridiculous) or try to be
// smart. Note that destruction might cause later creations as well!
//
// Since threads might have to read a base pointer anyway to arrive at a
// freelist entry or component pointer, we could set it to null and let the
// others spin (or take a mutex?). So then the resizer will notice the
//
struct CompStore {
}
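
// A minimal sketch of the scheme described above, for orientation only. It
// replaces the lock-free freelist stack and the CAS-based resize with a
// single `Mutex`, so it demonstrates the slot/generation/freelist
// bookkeeping, not the intended concurrency strategy. Everything except
// `CompId` and `RtComp` is hypothetical.
#[cfg(test)]
mod comp_store_sketch {
    use std::sync::Mutex;
    use super::{CompId, RtComp};

    struct Slot {
        generation: u32,
        // The Box keeps the component pointer-stable for its lifetime
        // (point 1 above); `None` marks a free slot.
        comp: Option<Box<RtComp>>,
    }

    struct SketchStore {
        inner: Mutex<Inner>, // stand-in for the lock-free design
    }

    struct Inner {
        slots: Vec<Slot>,   // the "ID array" (point 2 above)
        freelist: Vec<u32>, // reusable slot indices (point 3 above)
    }

    impl SketchStore {
        fn new() -> Self {
            SketchStore{ inner: Mutex::new(Inner{ slots: Vec::new(), freelist: Vec::new() }) }
        }

        fn create(&self, comp: RtComp) -> CompId {
            let mut inner = self.inner.lock().unwrap();
            // Prefer a recycled slot; otherwise grow the array (the lock
            // stands in for "everyone waits while resizing").
            if let Some(index) = inner.freelist.pop() {
                let slot = &mut inner.slots[index as usize];
                slot.comp = Some(Box::new(comp));
                CompId{ index, generation: slot.generation }
            } else {
                let index = inner.slots.len() as u32;
                inner.slots.push(Slot{ generation: 0, comp: Some(Box::new(comp)) });
                CompId{ index, generation: 0 }
            }
        }

        fn is_live(&self, id: CompId) -> bool {
            // Access is an index retrieval plus a generation check.
            let inner = self.inner.lock().unwrap();
            let slot = &inner.slots[id.index as usize];
            slot.generation == id.generation && slot.comp.is_some()
        }

        fn destroy(&self, id: CompId) {
            let mut inner = self.inner.lock().unwrap();
            let slot = &mut inner.slots[id.index as usize];
            assert_eq!(slot.generation, id.generation, "stale CompId");
            slot.comp = None;     // dropping the Box runs the destructor
            slot.generation += 1; // invalidate outstanding IDs
            inner.freelist.push(id.index); // make the slot reusable
        }
    }

    #[test]
    fn slot_reuse_bumps_generation() {
        let store = SketchStore::new();
        let first = store.create(RtComp{});
        store.destroy(first);
        let second = store.create(RtComp{});
        assert_eq!(first.index, second.index);           // the slot was recycled
        assert_ne!(first.generation, second.generation); // the old ID is stale
        assert!(store.is_live(second) && !store.is_live(first));
    }
}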