Changeset - 031c9d14adaa
Merge
MH - 4 years ago 2021-07-27 14:55:04
contact@maxhenger.nl
Merge branch 'feat-bytecode'

Adds size/alignment/offset computations to the type system and detects
potentially infinite types. If the type is potentially infinite but
contains a union that can break that type loop, then all other variants
of that union are supposed to be allocated on the heap. If the type
is potentially infinite but cannot be broken up, then we throw the
appropriate error.

The size/alignment/offset computations are not yet employed in the
runtime, but this prepares Reowolf for a proper bytecode/IR implementation.
6 files changed:
docs/types/infinite_types.md
 
new file 100644
 
# Dealing with "Infinite Types" in a Value-Based Language
 

	
 
## The Problem
 

	
 
In many languages it is illegal to construct a `struct` whose members, when expanded, end up containing a field that refers to the struct itself by value. This makes sense for two reasons. Firstly, one could never actually construct such a `struct`: writing out a literal for it would require an infinite amount of typing. Secondly, even if such a struct could be constructed, it would take up an infinite amount of memory. As a practical example: a binary tree can conceptually grow to an infinite size, and such an infinite tree cannot be constructed.
 

	
 
And so, in low-level programming languages one replaces these members with pointers (or something equivalent) in order to deal with both points. The first point no longer holds, because the pointer can be `null` (in the example: this terminates a node in the tree, making it a leaf). Secondly, the size is no longer infinite and becomes calculable, because a pointer always has a fixed size.
 

	
 
In Reowolf we have value-based semantics. To have reasonable cache coherency, quick `struct`/`union` member access, and to avoid memory (de)allocation overhead, we want to put as many values as possible onto a stack-like construct. For this reason we would like to precalculate the size, alignment and offset of each type and its members.
 

	
 
Point one above still stands: if one wishes to construct a type that never terminates, then the compiler should throw an error. Some examples:
 

	
 
```pdl
 
// Simple truly infinite type
 
struct Infinite { 
 
    Infinite member,
 
}
 

	
 
// Infinite type of cycle length 2
 
struct Left { Right other }
 
struct Right { Left other }
 

	
 
// Infinite type of cycle length 3
 
struct A { B b }
 
struct B { C c }
 
struct C { A a }
 

	
 
// Etcetera
 

	
 
// But we can also do this using unions
 
union A { B(B), C(C) }
 
union B { A(A), C(C) }
 
union C { A(A), B(B) }
 
```
 

	
 
If one wishes to express a type whose value causes it to terminate somewhere, the only option in this language is to use `union`s to indicate the optionality of a certain member. For example:
 

	
 
```pdl
 
// One option, allows setting branches individually
 
union Option<T>{ Some(T), None }
 
struct U32TreeExample {
 
    u32 value, 
 
    Option<U32TreeExample> left,
 
    Option<U32TreeExample> right,
 
}
 

	
 
// Another option, must be a well-formed binary tree
 
union U32TreeLink {
 
    Leaf,
 
    Node(U32TreeNode, U32TreeNode)
 
}
 
struct U32TreeNode {
 
    u32 value,
 
    U32TreeLink link,
 
}
 

	
 
// A different kind of data structure: a linked list
 
union List<T> { 
 
    End,
 
    Entry(T, List<T>)
 
}
 
```
 

	
 
These are all valid types, and we should be able to express them, but naively calculating their size runs into trouble: the computation never terminates.
 

	
 
## Onwards to an Algorithm
 

	
 
So far one can see that `struct`s and `union`s are the elements that might constitute an infinite type. All of the other types (user-defined `enum`s and all of the builtins like `u32`, `string`, `s64`, etc.) do not have any members and have a fixed size. Whenever a union occurs in the type, the type might not be truly infinite, because one of the union's variants may terminate the recursion.
 

	
 
Imagine a graph containing all user-defined `struct` and `union` types. The edges in this graph are directional, and indicate that one type is embedded in the other. It is important to construct this graph on a per-member basis: each `struct` member has one edge running towards it, coming from the member's type, while each `union` variant can have multiple edges running towards it, one for each type embedded in that variant. Note that types may refer to themselves.
 

	
 
Having constructed this graph, one can identify the potentially infinite types by looking for loops in the graph. A struct is potentially infinite if one of its members is part of a loop. A union is potentially infinite if all of its variants are part of a loop. A potentially infinite type is truly infinite if the relevant loops consist solely of potentially infinite types. Or, conversely: a potentially infinite type is not actually infinite if the relevant loops contain a union that is not potentially infinite.
 

	
 
More practically: a potentially infinite struct (implying that at least one of its members is part of a type loop) is not infinite if each of its "looped" members is part of a loop which contains a non-infinite union. Likewise, a potentially infinite union (implying that all of its variants are part of a loop) is not actually infinite if, for at least one of its variants that contains a set of loops (because a union can embed multiple types per variant), each of those loops contains a non-infinite union.
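
As an illustration of this loop detection, below is a minimal sketch in Rust (the compiler's implementation language). All type and function names are hypothetical; it only shows how a depth-first traversal over the type graph can collect the unions that sit on a type loop, not the full "potentially infinite" bookkeeping described above.

```rust
#[derive(Clone, Copy, PartialEq)]
enum Kind { Struct, Union }

struct TypeNode {
    kind: Kind,
    // For a struct: one edge per member type. For a union: one edge per type
    // embedded in each of its variants.
    edges: Vec<usize>,
}

#[derive(Clone, Copy, PartialEq)]
enum Mark { Unvisited, InProgress, Done }

/// Returns true if `idx` reaches back into the current traversal, i.e. it lies
/// on a type loop. Unions encountered while walking back from the loop are
/// recorded: those are the candidates for breaking the loop.
fn visit(
    graph: &[TypeNode], marks: &mut [Mark], loop_unions: &mut Vec<usize>, idx: usize,
) -> bool {
    match marks[idx] {
        Mark::Done => return false,      // already fully checked earlier
        Mark::InProgress => return true, // back at a type on our current path: a loop
        Mark::Unvisited => {}
    }
    marks[idx] = Mark::InProgress;
    let mut on_loop = false;
    for &child in &graph[idx].edges {
        if visit(graph, marks, loop_unions, child) {
            on_loop = true;
            if graph[idx].kind == Kind::Union {
                loop_unions.push(idx); // this union lies on a loop
            }
        }
    }
    marks[idx] = Mark::Done;
    on_loop
}
```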
 

	
 
And finally, in human terms: a type is not actually infinite if, given enough time, a programmer can express a literal of that type. So one may type (using the examples above):
 

	
 
```pdl
 
auto thing = List::Entry(3, List::Entry(2, List::Entry(1, List::End)));
 
auto flong = U32TreeExample{
 
    value: 0,
 
    left: Option::Some(U32TreeExample{
 
        value: 1,
 
        left: Option::None,
 
        right: Option::None
 
    }),
 
    right: Option::None
 
};
 
```
 

	
 
But one simply cannot express:
 

	
 
```pdl
 
auto cannot_construct = A::B(B::C(C::A(A::C(C::B( /* onwards, to infinity */ )))));
 
```
 

	
 
Now we arrive at several conditions that lead to a concrete algorithm. Firstly, types whose literals are expressible should be constructable in the language. Secondly, for the performance reasons mentioned above we would like to specify the memory layout of each type: the size and alignment of the type itself, and the offset (with implicit alignment and size) of each of its members. Thirdly, because we are going to have to put pointers somewhere in the aforementioned type loops (necessarily: variably sized values require some kind of allocation somewhere, hence pointers somewhere), these pointers need to be placed consistently throughout the code. This last point is a design decision: we could decide to have some types include pointers to members and other types exclude them, but that would require special conversion functions whenever two values of the same type, but with a different memory layout, are converted into one another, on top of the fact that each type containing a pointer requires a constructor and a destructor. Finally, although this point looks ahead a bit at the implementation, we desire that subsequent compilations of the same files, potentially in a different order, result in the same layout for each of the types.
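
To make the second condition concrete, here is a minimal sketch of the usual C-like layout rules for a struct, assuming every member's size and alignment are already known. The helper names are hypothetical and this is not the compiler's actual code.

```rust
// Round `offset` up to the next multiple of `alignment` (alignment > 0).
fn align_up(offset: usize, alignment: usize) -> usize {
    (offset + alignment - 1) / alignment * alignment
}

/// Compute (size, alignment, member offsets) for a struct whose members have
/// the given (size, alignment) pairs, laid out in declaration order.
fn struct_layout(members: &[(usize, usize)]) -> (usize, usize, Vec<usize>) {
    let mut offset = 0;
    let mut max_align = 1;
    let mut offsets = Vec::with_capacity(members.len());
    for &(size, align) in members {
        offset = align_up(offset, align); // pad so this member starts aligned
        offsets.push(offset);
        offset += size;
        max_align = max_align.max(align);
    }
    // The total size is rounded up to a multiple of the alignment, so that
    // arrays of this struct keep every element properly aligned.
    (align_up(offset, max_align), max_align, offsets)
}
```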
 

	
 
We may observe that only unions may break up potentially infinite types. So for each loop that contains a non-infinite union we need at least one union that is implemented using some kind of pointer. There are multiple ways we can fix the byte size of the union, thereby breaking the type loop:
 

	
 
1. Make each instance of the union a pointer to the data in that union: both the tag and the potentially embedded values. This has the upside of being relatively simple to implement. Furthermore the union itself will just have a fixed size. The downside is that for a tag lookup we have to follow the pointer, generally causing cache misses in the case that we just want to check the tag.
 
   
 
2. Keep the tag in the union value, but dynamically allocate all of the contents. This way the size of a union value becomes a tag and a pointer. On 32-bit systems this is most likely fine; on 64-bit systems (which are the norm, if we're not talking about embedded computers) alignment will generally cause the union size to bloat to 16 bytes. This kind of implementation has two subcases to consider.
 
   
 
    - Allocate as much memory as the maximum union variant. This way changing the union variant will not force a reallocation. Instead we can use up the allocated memory.
 
      
 
    - Allocate as much memory as the variant requires. This way we will not waste any memory, at the cost of having to reallocate when the variant is changed.
 
    
 
3. Keep the tag in the union value, with the same up- and downsides as the previous point. Then dynamically allocate the members of the variants (that is, if we have `union Foo { Bar(CyclicType, u64, u64), Qux }`, then we will just make `CyclicType` a pointer). The obvious downside is that changing variants might cause (de)allocations, and if we have multiple members that cause a loop then we need to allocate each of them separately. The upside is that when we're only accessing the non-pointer members, they're likely already in the cache.
 

	
 
There are many more tiny variants here that are not explicitly stated. I'll document my reasons for choosing the variant I think works best:
 

	
 
- The whole idea of breaking the type cycles is not to allocate an infinite amount of memory, so always allocating the union is naturally out of the question. Furthermore type loops will generally only occur in particular datastructures that are potentially infinite: trees, linked lists, hashmaps, etc.
 
  
 
- The contract when using a union is that the byte size of a union value is the size of the largest member plus the size of the tag (plus padding). So allocating the union just once, with enough size to fit all possible allocated variants, seems like a good idea to me. I suspect that in most cases these unions will express something along the lines of: there is something, or there is nothing.
 

	
 
Finally, as a note: all of this type-loop checking has to be performed per monomorphization. A polymorphic type specification itself does not need to be checked for type loops.
 

	
 
Lastly, we have to consider the point of making sure that multiple compilations, where the AST is constructed in a different order, hence where the types are "discovered" in a different order, will result in the same types. As a rather simple solution, instead of just marking one union in the type loop as pointerlike, all of the unions in the type loops will be implemented using some kind of pointer scheme.
 

	
 
## Concluding on the Algorithm
 

	
 
As an initial conclusion:
 

	
 
- All non-infinite unions in type loops are implemented using dynamic allocation to break the type loops and to prevent potentially infinite types from being truly infinite.
 
- The tag of the union will be stored inline (not heap-allocated) with the union value. Considering that most of the time these kinds of unions will represent an option-like type (there is one more entry in the linked list, there is one more node in the tree, etc.), I think it is nice to avoid cache misses when checking whether a value is present or not.
 
- We cannot heap-allocate every variant (because then we would still end up with an infinite amount of allocated memory), but I also do not currently want to implement very complicated logic in the compiler for handling memory allocations. So I will initially implement the following scheme: all union variants that do not contain any types that are part of a type loop will be stored on the stack, contributing to the byte size of the union. All variants that do contain such types will be fully allocated on the heap in one allocation. The size of the allocated region will be the size of the largest variant that requires dynamic allocation. A sketch of the resulting value layout follows below.
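
A rough sketch of the resulting value representation (hypothetical names, not the compiler's actual layout; in the real scheme the tag itself tells whether the storage bytes hold inline data or a pointer, the nested enum below is only for illustration):

```rust
use std::ptr::NonNull;

// Conceptual layout of a union that takes part in a type loop.
struct UnionValue {
    tag: u32,         // always inline: checking the variant never follows a pointer
    storage: Storage, // payload for the currently active variant
}

enum Storage {
    // Variants without loop-participating types live on the stack. The 16
    // bytes are an assumption standing in for "size of the largest value variant".
    Inline([u8; 16]),
    // Variants with loop-participating types share one heap allocation, sized
    // for the largest such variant.
    Heap(NonNull<u8>),
}
```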
 

	
 
## Rough Specification of the Algorithm
 

	
 
I might be missing some details here; they will probably pop up while I'm implementing this:
 

	
 
- Perform the basic pre-compilation for symbol discovery and definition discovery. This constructs the AST such that we can traverse it to determine what fields are present on each `struct` and what variants are used in each `union`.
 

	
 
- For each type added to the type table (including monomorphs where all of the polymorphic variables are fully specified), we will traverse all of the members and their types. For each type we check whether it was already determined not to be cyclic. If it was already completely checked, then the types we traversed up until this most recently checked type cannot be made cyclic through it (although another member might still make them cyclic). If the type was not completely checked, but was previously encountered in our search for loops, then all of the types from the first encounter of that type onwards are considered part of a type loop.
 

	
 
- Note that this algorithm is recursive in the sense that for structs we need to check each member for type loops, and for unions we need to check each embedded type in each variant. So when we encounter a type loop for a specific set of members, we walk back and mark each of the unions along the way as requiring allocation. If there were no such unions then we immediately return an error. We keep all of the unions that require allocation around in a list.
 
  
 
- While we're doing the above, if we encounter a union that has a member which is not part of any type loop, then we mark that union as non-infinite.
 
   
 
- After all of the above is done, we need to check all of the unions we have encountered. Necessarily, if we started with a particular type and recursively checked all of the embedded types, then we will have discovered *all* type loops. If none of the unions is non-infinite, then we throw an error (which might be kind of hard, because what kind of error do we show in the case of the `union A; union B; union C;` example?). With the algorithm above we now know that if there are type loops, each of them contains at least one union. With the condition that there is at least one non-infinite union, we know that the type is expressible. The constructed list of unions only contains unions that were part of a type loop.
 

	
 
   With that knowledge, to guarantee consistent compilation, we will decide to make each union in each type loop, whether it is infinite or non-infinite, a pointerlike union.
 

	
 
- A pointerlike union's value will consist of a tag, plus reserved space (with the proper alignment) for the largest non-pointerlike variant if that is larger than the size of a pointer on the particular machine. If it is smaller, then we just need a tag and a properly aligned pointer. So the union value will have a fixed size that does not depend on the types that were part of the type cycle. I'll call the variants that were not part of type cycles "value variants", and the variants that *were* part of type cycles "pointer variants".
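
A minimal sketch of that size computation (hypothetical names and constants, assuming a 64-bit target and a 32-bit tag; not the compiler's actual code):

```rust
const TAG_SIZE: usize = 4;   // assumption: 32-bit tag, aligned to its own size
const PTR_SIZE: usize = 8;   // assumption: 64-bit target
const PTR_ALIGN: usize = 8;

fn align_up(v: usize, a: usize) -> usize { (v + a - 1) / a * a }

/// `value_variants` holds the (size, alignment) of every value variant. The
/// payload must at least fit a pointer, because pointer variants store their
/// contents behind a single heap pointer.
fn pointerlike_union_layout(value_variants: &[(usize, usize)]) -> (usize, usize) {
    let mut payload_size = PTR_SIZE;
    let mut payload_align = PTR_ALIGN;
    for &(size, align) in value_variants {
        payload_size = payload_size.max(size);
        payload_align = payload_align.max(align);
    }
    let align = payload_align.max(TAG_SIZE);
    let payload_offset = align_up(TAG_SIZE, payload_align); // payload starts after the tag
    (align_up(payload_offset + payload_size, align), align)
}
```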
 

	
 
Then, during runtime, as an initial implementation (we could be smarter if we knew both the old and the new value, but that would take a lot of static analysis, and we're not yet at that point in the compiler), we check the tags of the old and the new value to see whether they denote pointer variants. If the old and new variants are both pointer variants, or both value variants, then we do not (re)allocate. If we go from a value variant to a pointer variant then we allocate. If we go from a pointer variant to a value variant then we deallocate.
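
The runtime rule itself is tiny; a sketch with a hypothetical predicate that classifies a tag as belonging to a pointer variant:

```rust
/// Decide what a variant change means for the heap region. The predicate is
/// assumed to be derived from the union's layout at compile time.
fn on_variant_change(is_pointer_variant: impl Fn(u32) -> bool, old_tag: u32, new_tag: u32) {
    match (is_pointer_variant(old_tag), is_pointer_variant(new_tag)) {
        (false, true) => { /* allocate the heap region for the pointer variants */ }
        (true, false) => { /* deallocate the heap region */ }
        _ => { /* both value or both pointer variants: keep the existing storage */ }
    }
}
```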
 

	
 
## Some Examples of the Algorithm
 

	
 
I wrote these before I wrote all of the above; I'll keep them here as some concrete examples of the algorithm.
 

	
 
### Truly Infinite Example with Structs
 

	
 
```pdl
 
// Most simple case
 
struct SelfReferential{ SelfReferential field }
 

	
 
// Slightly more complicated case
 
struct DualA { DualB b }
 
struct DualB { DualA a }
 
```
 

	
 
In some sense we would be able to compute the sizes of these data structures: we simply make their fields pointers. Let's assume 64-bit systems for all of these examples. For the first case we would then have that `SelfReferential` has a size of 8 bytes. However, we would always need to allocate its member `field`, so this would still take up an infinite amount of memory. We disallow this case because we would never be able to construct a literal of these kinds of datatypes.
 

	
 
### Truly Infinite Example with Unions
 

	
 
```pdl
 
struct S { UOne one, UTwo two }
 
union UOne { Struct(S), One(UOne), Two(UTwo) }
 
union UTwo { One(UOne), Two(UTwo) } 
 
```
 

	
 
Again, here we have a case where we can compute the sizes: we make the unions have pointerlike contents. So the unions will be 16 bytes (8 bytes for the tag plus padding for pointer alignment, then an 8-byte pointer), and hence the struct will be 32 bytes. But once more we can never construct a literal, and instantiating an element would always take up an infinite amount of memory.
 

	
 
### Slightly Ambiguous Example
 

	
 
With the following example we get one union which is non-infinite.
 

	
 
```pdl
 
struct SEnd { u32 some_value }
 
struct SMiddle {
 
   SEnd this_ends,
 
   UOne union_one,
 
   UTwo union_two,
 
}
 
union UOne { One(UOne), Two(UTwo), Struct(SMiddle) }
 
union UTwo { One(UOne), Two(UTwo), End(u32) }
 
```
 

	
 
If we draw the type graph, we get something like:
 

	
 
```
 
+--------+        +---------+        +-------+        +-------+
 
| struct |        | struct  | <----- | union | <----- | union |
 
| SEnd   | -----> | SMiddle | -----> | UOne  | -----> | UTwo  |
 
+--------+        +---------+        +-------+        +-------+
 
                                      |     ^          |     ^
 
                                      |     |          |     |
 
                                      \-----/          \-----/
 
```
 

	
 
For this case we can actually always construct a literal of `SMiddle`. Viewing the literal as a tree (each node a type instance), we can terminate the tree branches using `UTwo::End`. However, if we were to make only `UTwo` pointerlike, we still could not compute the size of `UOne` (it contains a reference to itself, so it would grow infinitely in size). This hints that we should make `UOne` pointerlike as well.
 

	
 
### Slightly Ambiguous Example, Version 2
 

	
 
Not really ambiguous, but just to show that not all unions which are part of a connected type graph need to be turned into pointerlike unions.
 

	
 
```pdl
 
union Value {
 
    Unsigned(u64),
 
    Signed(s64),
 
    Undefined
 
}
 
struct Node {
 
    Value value,
 
    Children children
 
}
 

	
 
union Children {
 
    One(Node),
 
    Two(Node, Node),
 
    None
 
}
 
```
 

	
 
We get a cycle between `Node` and `Children`, so we would want to implement `Children` as being pointerlike. But we don't have to make `Value` pointerlike. In terms of the described algorithm: `Value` is not part of a type loop, so it is not considered a candidate for being pointerlike.
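
For comparison, the same structure written in Rust (only an analogy, not PDL semantics): the cycle-breaking union stores its nodes behind a pointer, while the plain union stays a value.

```rust
// `Children` breaks the Node <-> Children cycle, so its recursive variants are
// boxed; `Value` contains no recursion and remains a plain value.
enum Value { Unsigned(u64), Signed(i64), Undefined }

struct Node {
    value: Value,
    children: Children,
}

enum Children {
    One(Box<Node>),
    Two(Box<(Node, Node)>),
    None,
}
```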
 

	
 
### Slightly Ambiguous Example, Version 3
 

	
 
This example contains a part of the type graph that is not part of a type loop, so the associated union should not be pointerlike.
 

	
 
```pdl
 
// Parts that are outside of the type loop
 
struct SOutside {
 
    UOutside next,
 
}
 
union UOutside {
 
    Next(SInside)
 
}
 

	
 
// Parts that are inside the type loop
 
struct SInside {
 
    UInside next,
 
}
 
union UInside {
 
    Next(SInside),
 
    NoNext,
 
}
 
```
 

	
 
Here `UInside` should become pointerlike, but `UOutside` should remain a normal value.
 

	
 
### Which Union Breaks the Cycle?
 

	
 
```pdl
 
struct S { UOne one }
 
union UOne { Two(UTwo), Nope }
 
union UTwo { Struct(S), Nope }
 
```
 

	
 
Here we see that we have a type loop to which both unions contribute. We can lay out either `UOne` or `UTwo` as pointerlike; both choices would allow us to calculate the size of the types and make the type expressible. However, we need consistent compilation. For this reason, and this reason only (it would be more efficient to lay out just one of the unions as a pointer), we make both unions pointerlike. Perhaps in the future some other consistent metric can be applied.
 

	
 
### Does it even Matter?
 

	
 
```pdl
 
// Obviously doesn't use `T`
 
enum Something<T> { A, B }
 

	
 
// So this should be fine
 
struct Foo {
 
    Something<Foo> some_member,
 
}
 
```
 

	
 
Here we have a struct `Foo` that relies on `Something<Foo>`, but this shouldn't cause a type loop, because `Foo` doesn't actually appear in `Something`. So whatever algorithm we come up with should resolve member types not by iterating over each individual type definition, but by instantiating a monomorph of that type and then checking the members of that monomorph.
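
A tiny self-contained model of that last point (all names are hypothetical, not the compiler's API): substitute the polymorphic arguments first, and only inspect the members of the resulting monomorph for loops.

```rust
// A toy type representation: builtins, polymorphic variables, and user-defined
// types applied to type arguments.
#[derive(Clone)]
enum Ty {
    U32,
    PolyVar(usize),               // index into the definition's polymorphic variables
    Apply(&'static str, Vec<Ty>), // user-defined type with its type arguments
}

/// Substitute polymorphic variables with the concrete arguments of a monomorph.
fn substitute(ty: &Ty, args: &[Ty]) -> Ty {
    match ty {
        Ty::U32 => Ty::U32,
        Ty::PolyVar(i) => args[*i].clone(),
        Ty::Apply(name, inner) => {
            Ty::Apply(*name, inner.iter().map(|t| substitute(t, args)).collect())
        }
    }
}

fn main() {
    // `Something<T> { A, B }` has no members that mention T at all, so the
    // member list of the monomorph `Something<Foo>` is empty: there is no edge
    // back to `Foo`, and hence no type loop.
    let something_members: Vec<Ty> = Vec::new();
    let foo_args = vec![Ty::Apply("Foo", Vec::new())];
    let monomorph_members: Vec<Ty> =
        something_members.iter().map(|m| substitute(m, &foo_args)).collect();
    assert!(monomorph_members.is_empty());
}
```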
 
\ No newline at end of file
src/protocol/ast.rs
 
@@ -401,206 +401,360 @@ impl ParserTypeVariant {
 
/// implicit, meaning that the user didn't specify the type, but it was set by
 
/// the compiler.
 
#[derive(Debug, Clone)]
 
pub struct ParserTypeElement {
 
    // TODO: @Fix span
 
    pub element_span: InputSpan, // span of this element, not including the child types
 
    pub variant: ParserTypeVariant,
 
}
 

	
 
/// ParserType is a specification of a type during the parsing phase and initial
 
/// linker/validator phase of the compilation process. These types may be
 
/// (partially) inferred or represent literals (e.g. a integer whose bytesize is
 
/// not yet determined).
 
///
 
/// Its contents are the depth-first serialization of the type tree. Each node
 
/// is a type that may accept polymorphic arguments. The polymorphic arguments
 
/// are then the children of the node.
 
#[derive(Debug, Clone)]
 
pub struct ParserType {
 
    pub elements: Vec<ParserTypeElement>,
 
    pub full_span: InputSpan,
 
}
 

	
 
impl ParserType {
 
    pub(crate) fn iter_embedded(&self, parent_idx: usize) -> ParserTypeIter {
 
        ParserTypeIter::new(&self.elements, parent_idx)
 
    }
 
}
 

	
 
/// Iterator over the embedded elements of a specific element.
 
pub struct ParserTypeIter<'a> {
 
    pub elements: &'a [ParserTypeElement],
 
    pub cur_embedded_idx: usize,
 
}
 

	
 
impl<'a> ParserTypeIter<'a> {
 
    fn new(elements: &'a [ParserTypeElement], parent_idx: usize) -> Self {
 
        debug_assert!(parent_idx < elements.len(), "parent index exceeds number of elements in ParserType");
 
        if elements[0].variant.num_embedded() == 0 {
 
            // Parent element does not have any embedded types, place
 
            // `cur_embedded_idx` at end so we will always return `None`
 
            Self{ elements, cur_embedded_idx: elements.len() }
 
        } else {
 
            // Parent element has an embedded type
 
            Self{ elements, cur_embedded_idx: parent_idx + 1 }
 
        }
 
    }
 
}
 

	
 
impl<'a> Iterator for ParserTypeIter<'a> {
 
    type Item = &'a [ParserTypeElement];
 

	
 
    fn next(&mut self) -> Option<Self::Item> {
 
        let elements_len = self.elements.len();
 
        if self.cur_embedded_idx >= elements_len {
 
            return None;
 
        }
 

	
 
        // Seek to the end of the subtree
 
        let mut depth = 1;
 
        let start_element = self.cur_embedded_idx;
 
        while self.cur_embedded_idx < elements_len {
 
            let cur_element = &self.elements[self.cur_embedded_idx];
 
            let depth_change = cur_element.variant.num_embedded() as i32 - 1;
 
            depth += depth_change;
 
            debug_assert!(depth >= 0, "illegally constructed ParserType: {:?}", self.elements);
 

	
 
            self.cur_embedded_idx += 1;
 
            if depth == 0 {
 
                break;
 
            }
 
        }
 

	
 
        debug_assert!(depth == 0, "illegally constructed ParserType: {:?}", self.elements);
 
        return Some(&self.elements[start_element..self.cur_embedded_idx]);
 
    }
 
}
 

	
 
/// ConcreteType is the representation of a type after the type inference and
 
/// checker is finished. These are fully typed.
 
#[derive(Debug, Clone, Copy, Eq, PartialEq)]
 
pub enum ConcreteTypePart {
 
    // Special types (cannot be explicitly constructed by the programmer)
 
    Void,
 
    // Builtin types without nested types
 
    Message,
 
    Bool,
 
    UInt8, UInt16, UInt32, UInt64,
 
    SInt8, SInt16, SInt32, SInt64,
 
    Character, String,
 
    // Builtin types with one nested type
 
    Array,
 
    Slice,
 
    Input,
 
    Output,
 
    // User defined type with any number of nested types
 
    Instance(DefinitionId, u32),
 
    Instance(DefinitionId, u32),    // instance of data type
 
    Function(DefinitionId, u32),    // instance of function
 
    Component(DefinitionId, u32),   // instance of a connector
 
}
 

	
 
impl ConcreteTypePart {
 
    fn num_embedded(&self) -> u32 {
 
        use ConcreteTypePart::*;
 

	
 
        match self {
 
            Void | Message | Bool |
 
            UInt8 | UInt16 | UInt32 | UInt64 |
 
            SInt8 | SInt16 | SInt32 | SInt64 |
 
            Character | String =>
 
                0,
 
            Array | Slice | Input | Output =>
 
                1,
 
            Instance(_, num_embedded) => *num_embedded,
 
            Function(_, num_embedded) => *num_embedded,
 
            Component(_, num_embedded) => *num_embedded,
 
        }
 
    }
 
}
 

	
 
#[derive(Debug, Clone, Eq, PartialEq)]
 
pub struct ConcreteType {
 
    pub(crate) parts: Vec<ConcreteTypePart>
 
}
 

	
 
impl Default for ConcreteType {
 
    fn default() -> Self {
 
        Self{ parts: Vec::new() }
 
    }
 
}
 

	
 
impl ConcreteType {
 
    /// Returns an iterator over the subtrees that are type arguments (e.g. an
 
    /// array element's type, or a polymorphic type's arguments) to the
 
    /// provided parent type (specified by its index in the `parts` array).
 
    pub(crate) fn embedded_iter<'a>(&'a self, parent_part_idx: usize) -> ConcreteTypeIter<'a> {
 
        let num_embedded = self.parts[parent_part_idx].num_embedded();
 
        return ConcreteTypeIter{
 
            concrete: self,
 
            idx_embedded: 0,
 
            num_embedded,
 
            part_idx: parent_part_idx + 1,
 
        }
 
    }
 

	
 
    /// Given the starting position of a type tree, determine the exclusive
 
    /// ending index.
 
    pub(crate) fn subtree_end_idx(&self, start_idx: usize) -> usize {
 
        let mut depth = 1;
 
        let num_parts = self.parts.len();
 
        debug_assert!(start_idx < num_parts);
 

	
 
        for part_idx in start_idx..self.parts.len() {
 
            let depth_change = self.parts[part_idx].num_embedded() as i32 - 1;
 
            depth += depth_change;
 
            debug_assert!(depth >= 0);
 

	
 
            if depth == 0 {
 
                return part_idx + 1;
 
            }
 
        }
 

	
 
        debug_assert!(false, "incorrectly constructed ConcreteType instance");
 
        return 0;
 
    }
 

	
 
    /// Construct a human-readable name for the type. Because this performs
 
    /// a string allocation don't use it for anything else then displaying the
 
    /// type to the user.
 
    pub(crate) fn display_name(&self, heap: &Heap) -> String {
 
        fn display_part(parts: &[ConcreteTypePart], heap: &Heap, mut idx: usize, target: &mut String) -> usize {
 
            use ConcreteTypePart as CTP;
 
            use crate::protocol::parser::token_parsing::*;
 

	
 
            let cur_idx = idx;
 
            idx += 1; // increment by 1, because it always happens
 

	
 
            match parts[cur_idx] {
 
                CTP::Void => { target.push_str("void"); },
 
                CTP::Message => { target.push_str(KW_TYPE_MESSAGE_STR); },
 
                CTP::Bool => { target.push_str(KW_TYPE_BOOL_STR); },
 
                CTP::UInt8 => { target.push_str(KW_TYPE_UINT8_STR); },
 
                CTP::UInt16 => { target.push_str(KW_TYPE_UINT16_STR); },
 
                CTP::UInt32 => { target.push_str(KW_TYPE_UINT32_STR); },
 
                CTP::UInt64 => { target.push_str(KW_TYPE_UINT64_STR); },
 
                CTP::SInt8 => { target.push_str(KW_TYPE_SINT8_STR); },
 
                CTP::SInt16 => { target.push_str(KW_TYPE_SINT16_STR); },
 
                CTP::SInt32 => { target.push_str(KW_TYPE_SINT32_STR); },
 
                CTP::SInt64 => { target.push_str(KW_TYPE_SINT64_STR); },
 
                CTP::Character => { target.push_str(KW_TYPE_CHAR_STR); },
 
                CTP::String => { target.push_str(KW_TYPE_STRING_STR); },
 
                CTP::Array | CTP::Slice => {
 
                    idx = display_part(parts, heap, idx, target);
 
                    target.push_str("[]");
 
                },
 
                CTP::Input => {
 
                    target.push_str(KW_TYPE_IN_PORT_STR);
 
                    target.push('<');
 
                    idx = display_part(parts, heap, idx, target);
 
                    target.push('>');
 
                },
 
                CTP::Output => {
 
                    target.push_str(KW_TYPE_OUT_PORT_STR);
 
                    target.push('<');
 
                    idx = display_part(parts, heap, idx, target);
 
                    target.push('>');
 
                },
 
                CTP::Instance(definition_id, num_poly_args) |
 
                CTP::Function(definition_id, num_poly_args) |
 
                CTP::Component(definition_id, num_poly_args) => {
 
                    let definition = &heap[definition_id];
 
                    target.push_str(definition.identifier().value.as_str());
 

	
 
                    if num_poly_args != 0 {
 
                        target.push('<');
 
                        for poly_arg_idx in 0..num_poly_args {
 
                            if poly_arg_idx != 0 {
 
                                target.push(',');
 
                                idx = display_part(parts, heap, idx, target);
 
                            }
 
                        }
 
                        target.push('>');
 
                    }
 
                }
 
            }
 

	
 
            idx
 
        }
 

	
 
        let mut name = String::with_capacity(128);
 
        let _final_idx = display_part(&self.parts, heap, 0, &mut name);
 
        debug_assert_eq!(_final_idx, self.parts.len());
 

	
 
        return name;
 
    }
 
}
 

	
 
#[derive(Debug)]
 
pub struct ConcreteTypeIter<'a> {
 
    concrete: &'a ConcreteType,
 
    idx_embedded: u32,
 
    num_embedded: u32,
 
    part_idx: usize,
 
}
 

	
 
impl<'a> Iterator for ConcreteTypeIter<'a> {
 
    type Item = &'a [ConcreteTypePart];
 

	
 
    fn next(&mut self) -> Option<Self::Item> {
 
        if self.idx_embedded == self.num_embedded {
 
            return None;
 
        }
 

	
 
        // Retrieve the subtree of interest
 
        let start_idx = self.part_idx;
 
        let end_idx = self.concrete.subtree_end_idx(start_idx);
 

	
 
        self.idx_embedded += 1;
 
        self.part_idx = end_idx;
 

	
 
        return Some(&self.concrete.parts[start_idx..end_idx]);
 
    }
 
}
 

	
 
#[derive(Debug, Clone, Copy)]
 
pub enum Scope {
 
    Definition(DefinitionId),
 
    Regular(BlockStatementId),
 
    Synchronous((SynchronousStatementId, BlockStatementId)),
 
}
 

	
 
impl Scope {
 
    pub fn is_block(&self) -> bool {
 
        match &self {
 
            Scope::Definition(_) => false,
 
            Scope::Regular(_) => true,
 
            Scope::Synchronous(_) => true,
 
        }
 
    }
 
    pub fn to_block(&self) -> BlockStatementId {
 
        match &self {
 
            Scope::Regular(id) => *id,
 
            Scope::Synchronous((_, id)) => *id,
 
            _ => panic!("unable to get BlockStatement from Scope")
 
        }
 
    }
 
}
 

	
 
/// `ScopeNode` is a helper that links scopes in two directions. It doesn't
 
/// actually contain any information associated with the scope, this may be
 
/// found on the AST elements that `Scope` points to.
 
#[derive(Debug, Clone)]
 
pub struct ScopeNode {
 
    pub parent: Scope,
 
    pub nested: Vec<Scope>,
 
}
 

	
 
impl ScopeNode {
 
    pub(crate) fn new_invalid() -> Self {
 
        ScopeNode{
 
            parent: Scope::Definition(DefinitionId::new_invalid()),
 
            nested: Vec::new(),
 
        }
 
    }
 
}
 

	
 
#[derive(Debug, Clone, PartialEq, Eq)]
 
pub enum VariableKind {
 
    Parameter,      // in parameter list of function/component
 
    Local,          // declared in function/component body
 
    Binding,        // may be bound to in a binding expression (determined in validator/linker)
 
}
 

	
 
#[derive(Debug, Clone)]
 
pub struct Variable {
 
    pub this: VariableId,
 
    // Parsing
 
    pub kind: VariableKind,
 
    pub parser_type: ParserType,
 
    pub identifier: Identifier,
 
    // Validator/linker
 
    pub relative_pos_in_block: u32,
 
    pub unique_id_in_scope: i32, // Temporary fix until proper bytecode/asm is generated
 
}
 

	
 
#[derive(Debug, Clone)]
 
pub enum Definition {
 
    Struct(StructDefinition),
 
    Enum(EnumDefinition),
 
    Union(UnionDefinition),
 
    Component(ComponentDefinition),
 
    Function(FunctionDefinition),
 
}
 

	
 
impl Definition {
 
    pub fn is_struct(&self) -> bool {
 
        match self {
 
            Definition::Struct(_) => true,
 
            _ => false
 
        }
 
    }
 
    pub(crate) fn as_struct(&self) -> &StructDefinition {
 
        match self {
 
            Definition::Struct(result) => result,
 
            _ => panic!("Unable to cast 'Definition' to 'StructDefinition'"),
 
        }
 
    }
 
    pub(crate) fn as_struct_mut(&mut self) -> &mut StructDefinition {
 
        match self {
 
            Definition::Struct(result) => result,
 
            _ => panic!("Unable to cast 'Definition' to 'StructDefinition'"),
 
        }
 
    }
 
    pub fn is_enum(&self) -> bool {
 
        match self {
 
            Definition::Enum(_) => true,
 
            _ => false,
 
        }
 
    }
 
    pub(crate) fn as_enum(&self) -> &EnumDefinition {
 
@@ -640,236 +794,230 @@ impl Definition {
 
        }
 
    }
 
    pub(crate) fn as_component(&self) -> &ComponentDefinition {
 
        match self {
 
            Definition::Component(result) => result,
 
            _ => panic!("Unable to cast `Definition` to `Component`"),
 
        }
 
    }
 
    pub(crate) fn as_component_mut(&mut self) -> &mut ComponentDefinition {
 
        match self {
 
            Definition::Component(result) => result,
 
            _ => panic!("Unable to cast `Definition` to `Component`"),
 
        }
 
    }
 
    pub fn is_function(&self) -> bool {
 
        match self {
 
            Definition::Function(_) => true,
 
            _ => false,
 
        }
 
    }
 
    pub(crate) fn as_function(&self) -> &FunctionDefinition {
 
        match self {
 
            Definition::Function(result) => result,
 
            _ => panic!("Unable to cast `Definition` to `Function`"),
 
        }
 
    }
 
    pub(crate) fn as_function_mut(&mut self) -> &mut FunctionDefinition {
 
        match self {
 
            Definition::Function(result) => result,
 
            _ => panic!("Unable to cast `Definition` to `Function`"),
 
        }
 
    }
 
    pub fn parameters(&self) -> &Vec<VariableId> {
 
        match self {
 
            Definition::Component(def) => &def.parameters,
 
            Definition::Function(def) => &def.parameters,
 
            _ => panic!("Called parameters() on {:?}", self)
 
        }
 
    }
 
    pub fn defined_in(&self) -> RootId {
 
        match self {
 
            Definition::Struct(def) => def.defined_in,
 
            Definition::Enum(def) => def.defined_in,
 
            Definition::Union(def) => def.defined_in,
 
            Definition::Component(def) => def.defined_in,
 
            Definition::Function(def) => def.defined_in,
 
        }
 
    }
 
    pub fn identifier(&self) -> &Identifier {
 
        match self {
 
            Definition::Struct(def) => &def.identifier,
 
            Definition::Enum(def) => &def.identifier,
 
            Definition::Union(def) => &def.identifier,
 
            Definition::Component(def) => &def.identifier,
 
            Definition::Function(def) => &def.identifier,
 
        }
 
    }
 
    pub fn poly_vars(&self) -> &Vec<Identifier> {
 
        match self {
 
            Definition::Struct(def) => &def.poly_vars,
 
            Definition::Enum(def) => &def.poly_vars,
 
            Definition::Union(def) => &def.poly_vars,
 
            Definition::Component(def) => &def.poly_vars,
 
            Definition::Function(def) => &def.poly_vars,
 
        }
 
    }
 
}
 

	
 
#[derive(Debug, Clone)]
 
pub struct StructFieldDefinition {
 
    pub span: InputSpan,
 
    pub field: Identifier,
 
    pub parser_type: ParserType,
 
}
 

	
 
#[derive(Debug, Clone)]
 
pub struct StructDefinition {
 
    pub this: StructDefinitionId,
 
    pub defined_in: RootId,
 
    // Symbol scanning
 
    pub span: InputSpan,
 
    pub identifier: Identifier,
 
    pub poly_vars: Vec<Identifier>,
 
    // Parsing
 
    pub fields: Vec<StructFieldDefinition>
 
}
 

	
 
impl StructDefinition {
 
    pub(crate) fn new_empty(
 
        this: StructDefinitionId, defined_in: RootId, span: InputSpan,
 
        identifier: Identifier, poly_vars: Vec<Identifier>
 
    ) -> Self {
 
        Self{ this, defined_in, span, identifier, poly_vars, fields: Vec::new() }
 
    }
 
}
 

	
 
#[derive(Debug, Clone)]
 
#[derive(Debug, Clone, Copy)]
 
pub enum EnumVariantValue {
 
    None,
 
    Integer(i64),
 
}
 

	
 
#[derive(Debug, Clone)]
 
pub struct EnumVariantDefinition {
 
    pub identifier: Identifier,
 
    pub value: EnumVariantValue,
 
}
 

	
 
#[derive(Debug, Clone)]
 
pub struct EnumDefinition {
 
    pub this: EnumDefinitionId,
 
    pub defined_in: RootId,
 
    // Symbol scanning
 
    pub span: InputSpan,
 
    pub identifier: Identifier,
 
    pub poly_vars: Vec<Identifier>,
 
    // Parsing
 
    pub variants: Vec<EnumVariantDefinition>,
 
}
 

	
 
impl EnumDefinition {
 
    pub(crate) fn new_empty(
 
        this: EnumDefinitionId, defined_in: RootId, span: InputSpan,
 
        identifier: Identifier, poly_vars: Vec<Identifier>
 
    ) -> Self {
 
        Self{ this, defined_in, span, identifier, poly_vars, variants: Vec::new() }
 
    }
 
}
 

	
 
#[derive(Debug, Clone)]
 
pub enum UnionVariantValue {
 
    None,
 
    Embedded(Vec<ParserType>),
 
}
 

	
 
#[derive(Debug, Clone)]
 
pub struct UnionVariantDefinition {
 
    pub span: InputSpan,
 
    pub identifier: Identifier,
 
    pub value: UnionVariantValue,
 
    pub value: Vec<ParserType>, // if empty, then union variant does not contain any embedded types
 
}
 

	
 
#[derive(Debug, Clone)]
 
pub struct UnionDefinition {
 
    pub this: UnionDefinitionId,
 
    pub defined_in: RootId,
 
    // Phase 1: symbol scanning
 
    pub span: InputSpan,
 
    pub identifier: Identifier,
 
    pub poly_vars: Vec<Identifier>,
 
    // Phase 2: parsing
 
    pub variants: Vec<UnionVariantDefinition>,
 
}
 

	
 
impl UnionDefinition {
 
    pub(crate) fn new_empty(
 
        this: UnionDefinitionId, defined_in: RootId, span: InputSpan,
 
        identifier: Identifier, poly_vars: Vec<Identifier>
 
    ) -> Self {
 
        Self{ this, defined_in, span, identifier, poly_vars, variants: Vec::new() }
 
    }
 
}
 

	
 
#[derive(Debug, Clone, Copy)]
 
pub enum ComponentVariant {
 
    Primitive,
 
    Composite,
 
}
 

	
 
#[derive(Debug, Clone)]
 
pub struct ComponentDefinition {
 
    pub this: ComponentDefinitionId,
 
    pub defined_in: RootId,
 
    // Symbol scanning
 
    pub span: InputSpan,
 
    pub variant: ComponentVariant,
 
    pub identifier: Identifier,
 
    pub poly_vars: Vec<Identifier>,
 
    // Parsing
 
    pub parameters: Vec<VariableId>,
 
    pub body: BlockStatementId,
 
    // Validation/linking
 
    pub num_expressions_in_body: i32,
 
}
 

	
 
impl ComponentDefinition {
 
    // Used for preallocation during symbol scanning
 
    pub(crate) fn new_empty(
 
        this: ComponentDefinitionId, defined_in: RootId, span: InputSpan,
 
        variant: ComponentVariant, identifier: Identifier, poly_vars: Vec<Identifier>
 
    ) -> Self {
 
        Self{ 
 
            this, defined_in, span, variant, identifier, poly_vars,
 
            parameters: Vec::new(), 
 
            body: BlockStatementId::new_invalid(),
 
            num_expressions_in_body: -1,
 
        }
 
    }
 
}
 

	
 
// Note that we will have function definitions for builtin functions as well. In
 
// that case the span, the identifier span and the body are all invalid.
 
#[derive(Debug, Clone)]
 
pub struct FunctionDefinition {
 
    pub this: FunctionDefinitionId,
 
    pub defined_in: RootId,
 
    // Symbol scanning
 
    pub builtin: bool,
 
    pub span: InputSpan,
 
    pub identifier: Identifier,
 
    pub poly_vars: Vec<Identifier>,
 
    // Parser
 
    pub return_types: Vec<ParserType>,
 
    pub parameters: Vec<VariableId>,
 
    pub body: BlockStatementId,
 
    // Validation/linking
 
    pub num_expressions_in_body: i32,
 
}
 

	
 
impl FunctionDefinition {
 
    pub(crate) fn new_empty(
 
        this: FunctionDefinitionId, defined_in: RootId, span: InputSpan,
 
        identifier: Identifier, poly_vars: Vec<Identifier>
 
    ) -> Self {
 
        Self {
 
            this, defined_in,
 
            builtin: false,
 
            span, identifier, poly_vars,
 
            return_types: Vec::new(),
 
            parameters: Vec::new(),
 
            body: BlockStatementId::new_invalid(),
 
            num_expressions_in_body: -1,
 
        }
 
    }
 
}
 

	
src/protocol/ast_printer.rs
 
@@ -237,206 +237,203 @@ impl ASTWriter {
 

	
 
        match import {
 
            Import::Module(import) => {
 
                self.kv(indent).with_id(PREFIX_IMPORT_ID, import.this.index)
 
                    .with_s_key("ImportModule");
 

	
 
                self.kv(indent2).with_s_key("Name").with_identifier_val(&import.module);
 
                self.kv(indent2).with_s_key("Alias").with_identifier_val(&import.alias);
 
                self.kv(indent2).with_s_key("Target").with_disp_val(&import.module_id.index);
 
            },
 
            Import::Symbols(import) => {
 
                self.kv(indent).with_id(PREFIX_IMPORT_ID, import.this.index)
 
                    .with_s_key("ImportSymbol");
 

	
 
                self.kv(indent2).with_s_key("Name").with_identifier_val(&import.module);
 
                self.kv(indent2).with_s_key("Target").with_disp_val(&import.module_id.index);
 

	
 
                self.kv(indent2).with_s_key("Symbols");
 

	
 
                let indent3 = indent2 + 1;
 
                let indent4 = indent3 + 1;
 
                for symbol in &import.symbols {
 
                    self.kv(indent3).with_s_key("AliasedSymbol");
 
                    self.kv(indent4).with_s_key("Name").with_identifier_val(&symbol.name);
 
                    self.kv(indent4).with_s_key("Alias").with_opt_identifier_val(symbol.alias.as_ref());
 
                    self.kv(indent4).with_s_key("Definition").with_disp_val(&symbol.definition_id.index);
 
                }
 
            }
 
        }
 
    }
 

	
 
    //--------------------------------------------------------------------------
 
    // Top-level definition writing
 
    //--------------------------------------------------------------------------
 

	
 
    fn write_definition(&mut self, heap: &Heap, def_id: DefinitionId, indent: usize) {
 
        self.cur_definition = Some(def_id);
 
        let indent2 = indent + 1;
 
        let indent3 = indent2 + 1;
 
        let indent4 = indent3 + 1;
 

	
 
        match &heap[def_id] {
 
            Definition::Struct(def) => {
 
                self.kv(indent).with_id(PREFIX_STRUCT_ID, def.this.0.index)
 
                    .with_s_key("DefinitionStruct");
 

	
 
                self.kv(indent2).with_s_key("Name").with_identifier_val(&def.identifier);
 
                for poly_var_id in &def.poly_vars {
 
                    self.kv(indent3).with_s_key("PolyVar").with_identifier_val(&poly_var_id);
 
                }
 

	
 
                self.kv(indent2).with_s_key("Fields");
 
                for field in &def.fields {
 
                    self.kv(indent3).with_s_key("Field");
 
                    self.kv(indent4).with_s_key("Name")
 
                        .with_identifier_val(&field.field);
 
                    self.kv(indent4).with_s_key("Type")
 
                        .with_custom_val(|s| write_parser_type(s, heap, &field.parser_type));
 
                }
 
            },
 
            Definition::Enum(def) => {
 
                self.kv(indent).with_id(PREFIX_ENUM_ID, def.this.0.index)
 
                    .with_s_key("DefinitionEnum");
 

	
 
                self.kv(indent2).with_s_key("Name").with_identifier_val(&def.identifier);
 
                for poly_var_id in &def.poly_vars {
 
                    self.kv(indent3).with_s_key("PolyVar").with_identifier_val(&poly_var_id);
 
                }
 

	
 
                self.kv(indent2).with_s_key("Variants");
 
                for variant in &def.variants {
 
                    self.kv(indent3).with_s_key("Variant");
 
                    self.kv(indent4).with_s_key("Name")
 
                        .with_identifier_val(&variant.identifier);
 
                    let variant_value = self.kv(indent4).with_s_key("Value");
 
                    match &variant.value {
 
                        EnumVariantValue::None => variant_value.with_s_val("None"),
 
                        EnumVariantValue::Integer(value) => variant_value.with_disp_val(value),
 
                    };
 
                }
 
            },
 
            Definition::Union(def) => {
 
                self.kv(indent).with_id(PREFIX_UNION_ID, def.this.0.index)
 
                    .with_s_key("DefinitionUnion");
 

	
 
                self.kv(indent2).with_s_key("Name").with_identifier_val(&def.identifier);
 
                for poly_var_id in &def.poly_vars {
 
                    self.kv(indent3).with_s_key("PolyVar").with_identifier_val(&poly_var_id);
 
                }
 

	
 
                self.kv(indent2).with_s_key("Variants");
 
                for variant in &def.variants {
 
                    self.kv(indent3).with_s_key("Variant");
 
                    self.kv(indent4).with_s_key("Name")
 
                        .with_identifier_val(&variant.identifier);
 
                        
 
                    match &variant.value {
 
                        UnionVariantValue::None => {
 
                    if variant.value.is_empty() {
 
                        self.kv(indent4).with_s_key("Value").with_s_val("None");
 
                        }
 
                        UnionVariantValue::Embedded(embedded) => {
 
                    } else {
 
                        self.kv(indent4).with_s_key("Values");
 
                            for embedded in embedded {
 
                        for embedded in &variant.value {
 
                            self.kv(indent4+1).with_s_key("Value")
 
                                .with_custom_val(|v| write_parser_type(v, heap, embedded));
 
                        }
 
                    }
 
                }
 
            }
 
            }
 
            Definition::Function(def) => {
 
                self.kv(indent).with_id(PREFIX_FUNCTION_ID, def.this.0.index)
 
                    .with_s_key("DefinitionFunction");
 

	
 
                self.kv(indent2).with_s_key("Name").with_identifier_val(&def.identifier);
 
                for poly_var_id in &def.poly_vars {
 
                    self.kv(indent3).with_s_key("PolyVar").with_identifier_val(&poly_var_id);
 
                }
 

	
 
                self.kv(indent2).with_s_key("ReturnParserTypes");
 
                for return_type in &def.return_types {
 
                    self.kv(indent3).with_s_key("ReturnParserType")
 
                        .with_custom_val(|s| write_parser_type(s, heap, return_type));
 
                }
 

	
 
                self.kv(indent2).with_s_key("Parameters");
 
                for variable_id in &def.parameters {
 
                    self.write_variable(heap, *variable_id, indent3);
 
                }
 

	
 
                self.kv(indent2).with_s_key("Body");
 
                self.write_stmt(heap, def.body.upcast(), indent3);
 
            },
 
            Definition::Component(def) => {
 
                self.kv(indent).with_id(PREFIX_COMPONENT_ID,def.this.0.index)
 
                    .with_s_key("DefinitionComponent");
 

	
 
                self.kv(indent2).with_s_key("Name").with_identifier_val(&def.identifier);
 
                self.kv(indent2).with_s_key("Variant").with_debug_val(&def.variant);
 

	
 
                self.kv(indent2).with_s_key("PolymorphicVariables");
 
                for poly_var_id in &def.poly_vars {
 
                    self.kv(indent3).with_s_key("PolyVar").with_identifier_val(&poly_var_id);
 
                }
 

	
 
                self.kv(indent2).with_s_key("Parameters");
 
                for variable_id in &def.parameters {
 
                    self.write_variable(heap, *variable_id, indent3)
 
                }
 

	
 
                self.kv(indent2).with_s_key("Body");
 
                self.write_stmt(heap, def.body.upcast(), indent3);
 
            }
 
        }
 
    }
 

	
 
    fn write_stmt(&mut self, heap: &Heap, stmt_id: StatementId, indent: usize) {
 
        let stmt = &heap[stmt_id];
 
        let indent2 = indent + 1;
 
        let indent3 = indent2 + 1;
 

	
 
        match stmt {
 
            Statement::Block(stmt) => {
 
                self.kv(indent).with_id(PREFIX_BLOCK_STMT_ID, stmt.this.0.index)
 
                    .with_s_key("Block");
 
                self.kv(indent2).with_s_key("EndBlockID").with_disp_val(&stmt.end_block.0.index);
 
                self.kv(indent2).with_s_key("FirstUniqueScopeID").with_disp_val(&stmt.first_unique_id_in_scope);
 
                self.kv(indent2).with_s_key("NextUniqueScopeID").with_disp_val(&stmt.next_unique_id_in_scope);
 
                self.kv(indent2).with_s_key("RelativePos").with_disp_val(&stmt.relative_pos_in_parent);
 

	
 
                self.kv(indent2).with_s_key("Statements");
 
                for stmt_id in &stmt.statements {
 
                    self.write_stmt(heap, *stmt_id, indent3);
 
                }
 
            },
 
            Statement::EndBlock(stmt) => {
 
                self.kv(indent).with_id(PREFIX_ENDBLOCK_STMT_ID, stmt.this.0.index)
 
                    .with_s_key("EndBlock");
 
                self.kv(indent2).with_s_key("StartBlockID").with_disp_val(&stmt.start_block.0.index);
 
            }
 
            Statement::Local(stmt) => {
 
                match stmt {
 
                    LocalStatement::Channel(stmt) => {
 
                        self.kv(indent).with_id(PREFIX_CHANNEL_STMT_ID, stmt.this.0.0.index)
 
                            .with_s_key("LocalChannel");
 

	
 
                        self.kv(indent2).with_s_key("From");
 
                        self.write_variable(heap, stmt.from, indent3);
 
                        self.kv(indent2).with_s_key("To");
 
                        self.write_variable(heap, stmt.to, indent3);
 
                        self.kv(indent2).with_s_key("Next").with_disp_val(&stmt.next.index);
 
                    },
 
                    LocalStatement::Memory(stmt) => {
 
                        self.kv(indent).with_id(PREFIX_MEM_STMT_ID, stmt.this.0.0.index)
 
                            .with_s_key("LocalMemory");
 

	
 
                        self.kv(indent2).with_s_key("Variable");
 
                        self.write_variable(heap, stmt.variable, indent3);
 
                        self.kv(indent2).with_s_key("Next").with_disp_val(&stmt.next.index);
 
                    }
 
                }
 
            },
 
            Statement::Labeled(stmt) => {
 
                self.kv(indent).with_id(PREFIX_LABELED_STMT_ID, stmt.this.0.index)
 
                    .with_s_key("Labeled");
 

	
 
@@ -829,118 +826,120 @@ fn write_parser_type(target: &mut String, heap: &Heap, t: &ParserType) {
 
                target.push_str("[]");
 
            },
 
            PTV::Input => {
 
                target.push_str(KW_TYPE_IN_PORT_STR);
 
                target.push('<');
 
                element_idx = write_element(target, heap, t, element_idx + 1);
 
                target.push('>');
 
            },
 
            PTV::Output => {
 
                target.push_str(KW_TYPE_OUT_PORT_STR);
 
                target.push('<');
 
                element_idx = write_element(target, heap, t, element_idx + 1);
 
                target.push('>');
 
            },
 
            PTV::PolymorphicArgument(definition_id, arg_idx) => {
 
                let definition = &heap[*definition_id];
 
                let poly_var = &definition.poly_vars()[*arg_idx as usize].value;
 
                target.push_str(poly_var.as_str());
 
            },
 
            PTV::Definition(definition_id, num_embedded) => {
 
                let definition = &heap[*definition_id];
 
                let definition_ident = definition.identifier().value.as_str();
 
                target.push_str(definition_ident);
 

	
 
                let num_embedded = *num_embedded;
 
                if num_embedded != 0 {
 
                    target.push('<');
 
                    for embedded_idx in 0..num_embedded {
 
                        if embedded_idx != 0 {
 
                            target.push(',');
 
                        }
 
                        element_idx = write_element(target, heap, t, element_idx + 1);
 
                    }
 
                    target.push('>');
 
                }
 
            }
 
        }
 

	
 
        element_idx
 
    }
 

	
 
    write_element(target, heap, t, 0);
 
}
 

	
 
// TODO: @Cleanup, this is littered at three places in the codebase
 
fn write_concrete_type(target: &mut String, heap: &Heap, def_id: DefinitionId, t: &ConcreteType) {
 
    use ConcreteTypePart as CTP;
 

	
 
    fn write_concrete_part(target: &mut String, heap: &Heap, def_id: DefinitionId, t: &ConcreteType, mut idx: usize) -> usize {
 
        if idx >= t.parts.len() {
 
            return idx;
 
        }
 

	
 
        match &t.parts[idx] {
 
            CTP::Void => target.push_str("void"),
 
            CTP::Message => target.push_str("msg"),
 
            CTP::Bool => target.push_str("bool"),
 
            CTP::UInt8 => target.push_str(KW_TYPE_UINT8_STR),
 
            CTP::UInt16 => target.push_str(KW_TYPE_UINT16_STR),
 
            CTP::UInt32 => target.push_str(KW_TYPE_UINT32_STR),
 
            CTP::UInt64 => target.push_str(KW_TYPE_UINT64_STR),
 
            CTP::SInt8 => target.push_str(KW_TYPE_SINT8_STR),
 
            CTP::SInt16 => target.push_str(KW_TYPE_SINT16_STR),
 
            CTP::SInt32 => target.push_str(KW_TYPE_SINT32_STR),
 
            CTP::SInt64 => target.push_str(KW_TYPE_SINT64_STR),
 
            CTP::Character => target.push_str(KW_TYPE_CHAR_STR),
 
            CTP::String => target.push_str(KW_TYPE_STRING_STR),
 
            CTP::Array => {
 
                idx = write_concrete_part(target, heap, def_id, t, idx + 1);
 
                target.push_str("[]");
 
            },
 
            CTP::Slice => {
 
                idx = write_concrete_part(target, heap, def_id, t, idx + 1);
 
                target.push_str("[..]");
 
            }
 
            CTP::Input => {
 
                target.push_str("in<");
 
                idx = write_concrete_part(target, heap, def_id, t, idx + 1);
 
                target.push('>');
 
            },
 
            CTP::Output => {
 
                target.push_str("out<");
 
                idx = write_concrete_part(target, heap, def_id, t, idx + 1);
 
                target.push('>')
 
            },
 
            CTP::Instance(definition_id, num_embedded) => {
 
                let identifier = heap[*definition_id].identifier();
 
                target.push_str(identifier.value.as_str());
 
                target.push('<');
 
                for idx_embedded in 0..*num_embedded {
 
                    if idx_embedded != 0 {
 
                        target.push_str(", ");
 
                    }
 
                    idx = write_concrete_part(target, heap, def_id, t, idx + 1);
 
                }
 
                target.push('>');
 
            },
 
            CTP::Function(_, _) => todo!("AST printer for ConcreteTypePart::Function"),
 
            CTP::Component(_, _) => todo!("AST printer for ConcreteTypePart::Component"),
 
        }
 

	
 
        idx + 1
 
    }
 

	
 
    write_concrete_part(target, heap, def_id, t, 0);
 
}
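// Editor's note (illustrative, not part of the original changeset): `ConcreteType`
// stores a type as a prefix-ordered `parts` vector, which is what the writer above
// walks. A `bool[]`, for example, would be encoded roughly as
//
//     ConcreteType{ parts: vec![ConcreteTypePart::Array, ConcreteTypePart::Bool] }
//
// and printed by first recursing into the element type and then appending "[]".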
 

	
 
fn write_expression_parent(target: &mut String, parent: &ExpressionParent) {
 
    use ExpressionParent as EP;
 

	
 
    *target = match parent {
 
        EP::None => String::from("None"),
 
        EP::If(id) => format!("IfStmt({})", id.0.index),
 
        EP::While(id) => format!("WhileStmt({})", id.0.index),
 
        EP::Return(id) => format!("ReturnStmt({})", id.0.index),
 
        EP::New(id) => format!("NewStmt({})", id.0.index),
 
        EP::ExpressionStmt(id) => format!("ExprStmt({})", id.0.index),
 
        EP::Expression(id, idx) => format!("Expr({}, {})", id.index, idx)
 
    };
 
}
 
\ No newline at end of file
src/protocol/parser/mod.rs
Show inline comments
 
pub(crate) mod symbol_table;
 
pub(crate) mod type_table;
 
pub(crate) mod tokens;
 
pub(crate) mod token_parsing;
 
pub(crate) mod pass_tokenizer;
 
pub(crate) mod pass_symbols;
 
pub(crate) mod pass_imports;
 
pub(crate) mod pass_definitions;
 
pub(crate) mod pass_validation_linking;
 
pub(crate) mod pass_typing;
 
mod visitor;
 

	
 
use tokens::*;
 
use crate::collections::*;
 
use visitor::Visitor;
 
use pass_tokenizer::PassTokenizer;
 
use pass_symbols::PassSymbols;
 
use pass_imports::PassImport;
 
use pass_definitions::PassDefinitions;
 
use pass_validation_linking::PassValidationLinking;
 
use pass_typing::{PassTyping, ResolveQueue};
 
use symbol_table::*;
 
use type_table::TypeTable;
 

	
 
use crate::protocol::ast::*;
 
use crate::protocol::input_source::*;
 

	
 
use crate::protocol::ast_printer::ASTWriter;
 

	
 
#[derive(Debug, PartialEq, Eq, PartialOrd, Ord)]
 
pub enum ModuleCompilationPhase {
 
    Tokenized,              // source is tokenized
 
    SymbolsScanned,         // all definitions are linked to their type class
 
    ImportsResolved,        // all imports are added to the symbol table
 
    DefinitionsParsed,      // produced the AST for the entire module
 
    TypesAddedToTable,      // added all definitions to the type table
 
    ValidatedAndLinked,     // AST is traversed and the required AST nodes have been linked
 
    // When we continue with the compiler:
 
    // Typed,                  // Type inference and checking has been performed
 
}
 

	
 
pub struct Module {
 
    // Buffers
 
    pub source: InputSource,
 
    pub tokens: TokenBuffer,
 
    // Identifiers
 
    pub root_id: RootId,
 
    pub name: Option<(PragmaId, StringRef<'static>)>,
 
    pub version: Option<(PragmaId, i64)>,
 
    pub phase: ModuleCompilationPhase,
 
}
 

	
 
// TODO: This is kind of wrong. Because when we're producing bytecode we would
 
//       like the bytecode itself to not have the notion of the size of a pointer
 
//       type. But until I figure out what we do want I'll just set everything
 
//       to a 64-bit architecture.
 
pub struct TargetArch {
 
    pub array_size_alignment: (usize, usize),
 
    pub slice_size_alignment: (usize, usize),
 
    pub string_size_alignment: (usize, usize),
 
    pub port_size_alignment: (usize, usize),
 
    pub pointer_size_alignment: (usize, usize),
 
}
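// Editorial sketch (not part of the original changeset): the (size, alignment)
// pairs above are the usual inputs to a C-like layout computation. A minimal,
// hypothetical helper showing how such pairs combine into a struct layout:
#[allow(dead_code)]
fn example_struct_layout(fields: &[(usize, usize)]) -> (usize, usize) {
    // Round `offset` up to the next multiple of `alignment` (a power of two)
    fn align_to(offset: usize, alignment: usize) -> usize {
        debug_assert!(alignment.is_power_of_two());
        (offset + alignment - 1) & !(alignment - 1)
    }

    let mut offset = 0;
    let mut max_align = 1;
    for &(size, alignment) in fields {
        offset = align_to(offset, alignment); // this is the field's offset
        offset += size;
        max_align = max_align.max(alignment);
    }
    // Total size is padded to the struct's own alignment, e.g. [(8, 8), (4, 4)]
    // yields (16, 8): the trailing 4 bytes are followed by 4 bytes of padding.
    (align_to(offset, max_align), max_align)
}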
 

	
 
pub struct PassCtx<'a> {
 
    heap: &'a mut Heap,
 
    symbols: &'a mut SymbolTable,
 
    pool: &'a mut StringPool,
 
    arch: &'a TargetArch,
 
}
 

	
 
pub struct Parser {
 
    // Storage of all information created/gathered during compilation.
 
    pub(crate) heap: Heap,
 
    pub(crate) string_pool: StringPool, // Do not deallocate, holds all strings
 
    pub(crate) modules: Vec<Module>,
 
    pub(crate) symbol_table: SymbolTable,
 
    pub(crate) type_table: TypeTable,
 
    // Compiler passes, used as little state machines that keep their memory
 
    // around.
 
    pass_tokenizer: PassTokenizer,
 
    pass_symbols: PassSymbols,
 
    pass_import: PassImport,
 
    pass_definitions: PassDefinitions,
 
    pass_validation: PassValidationLinking,
 
    pass_typing: PassTyping,
 
    // Compiler options
 
    pub write_ast_to: Option<String>,
 
    pub(crate) arch: TargetArch,
 
}
 

	
 
impl Parser {
 
    pub fn new() -> Self {
 
        let mut parser = Parser{
 
            heap: Heap::new(),
 
            string_pool: StringPool::new(),
 
            modules: Vec::new(),
 
            symbol_table: SymbolTable::new(),
 
            type_table: TypeTable::new(),
 
            pass_tokenizer: PassTokenizer::new(),
 
            pass_symbols: PassSymbols::new(),
 
            pass_import: PassImport::new(),
 
            pass_definitions: PassDefinitions::new(),
 
            pass_validation: PassValidationLinking::new(),
 
            pass_typing: PassTyping::new(),
 
            write_ast_to: None,
 
            arch: TargetArch {
 
                array_size_alignment: (3*8, 8), // pointer, length, capacity
 
                slice_size_alignment: (2*8, 8), // pointer, length
 
                string_size_alignment: (3*8, 8), // pointer, length, capacity
 
                port_size_alignment: (3*4, 4), // two u32s: connector + port ID
 
                pointer_size_alignment: (8, 8),
 
            }
 
        };
 

	
 
        parser.symbol_table.insert_scope(None, SymbolScope::Global);
 

	
 
        fn quick_type(variants: &[ParserTypeVariant]) -> ParserType {
 
            let mut t = ParserType{ elements: Vec::with_capacity(variants.len()), full_span: InputSpan::new() };
 
            for variant in variants {
 
                t.elements.push(ParserTypeElement{ element_span: InputSpan::new(), variant: variant.clone() });
 
            }
 
            t
 
        }
 

	
 
        use ParserTypeVariant as PTV;
 
        insert_builtin_function(&mut parser, "get", &["T"], |id| (
 
            vec![
 
                ("input", quick_type(&[PTV::Input, PTV::PolymorphicArgument(id.upcast(), 0)]))
 
            ],
 
            quick_type(&[PTV::PolymorphicArgument(id.upcast(), 0)])
 
        ));
 
        insert_builtin_function(&mut parser, "put", &["T"], |id| (
 
            vec![
 
                ("output", quick_type(&[PTV::Output, PTV::PolymorphicArgument(id.upcast(), 0)])),
 
                ("value", quick_type(&[PTV::PolymorphicArgument(id.upcast(), 0)])),
 
            ],
 
            quick_type(&[PTV::Void])
 
        ));
 
        insert_builtin_function(&mut parser, "fires", &["T"], |id| (
 
            vec![
 
                ("port", quick_type(&[PTV::InputOrOutput, PTV::PolymorphicArgument(id.upcast(), 0)]))
 
            ],
 
            quick_type(&[PTV::Bool])
 
        ));
 
        insert_builtin_function(&mut parser, "create", &["T"], |id| (
 
            vec![
 
                ("length", quick_type(&[PTV::IntegerLike]))
 
            ],
 
            quick_type(&[PTV::ArrayLike, PTV::PolymorphicArgument(id.upcast(), 0)])
 
        ));
 
        insert_builtin_function(&mut parser, "length", &["T"], |id| (
 
            vec![
 
                ("array", quick_type(&[PTV::ArrayLike, PTV::PolymorphicArgument(id.upcast(), 0)]))
 
            ],
 
            quick_type(&[PTV::UInt32]) // TODO: @PtrInt
 
        ));
 
        insert_builtin_function(&mut parser, "assert", &[], |_id| (
 
            vec![
 
                ("condition", quick_type(&[PTV::Bool])),
 
            ],
 
            quick_type(&[PTV::Void])
 
        ));
 

	
 
        parser
 
    }
 

	
 
    pub fn feed(&mut self, mut source: InputSource) -> Result<(), ParseError> {
 
        // TODO: @Optimize
 
        let mut token_buffer = TokenBuffer::new();
 
        self.pass_tokenizer.tokenize(&mut source, &mut token_buffer)?;
 

	
 
        let module = Module{
 
            source,
 
            tokens: token_buffer,
 
            root_id: RootId::new_invalid(),
 
            name: None,
 
            version: None,
 
            phase: ModuleCompilationPhase::Tokenized,
 
        };
 
        self.modules.push(module);
 

	
 
        Ok(())
 
    }
 

	
 
    pub fn parse(&mut self) -> Result<(), ParseError> {
 
        let mut pass_ctx = PassCtx{
 
            heap: &mut self.heap,
 
            symbols: &mut self.symbol_table,
 
            pool: &mut self.string_pool,
 
            arch: &self.arch,
 
        };
 

	
 
        // Advance all modules to the phase where all symbols are scanned
 
        for module_idx in 0..self.modules.len() {
 
            self.pass_symbols.parse(&mut self.modules, module_idx, &mut pass_ctx)?;
 
        }
 

	
 
        // With all symbols scanned, perform further compilation until we can
 
        // add all base types to the type table.
 
        for module_idx in 0..self.modules.len() {
 
            self.pass_import.parse(&mut self.modules, module_idx, &mut pass_ctx)?;
 
            self.pass_definitions.parse(&mut self.modules, module_idx, &mut pass_ctx)?;
 
        }
 

	
 
        // Add every known type to the type table
 
        self.type_table.build_base_types(&mut self.modules, &mut pass_ctx)?;
 

	
 
        // Continue compilation with the remaining phases now that the types
 
        // are all in the type table
 
        for module_idx in 0..self.modules.len() {
 
            let mut ctx = visitor::Ctx{
 
                heap: &mut self.heap,
 
 
                modules: &mut self.modules,
 
                module_idx,
 
                symbols: &mut self.symbol_table,
 
                types: &mut self.type_table,
 
                arch: &self.arch,
 
            };
 
            self.pass_validation.visit_module(&mut ctx)?;
 
        }
 

	
 
        // Perform typechecking on all modules
 
        let mut queue = ResolveQueue::new();
 
 
        for module_idx in 0..self.modules.len() {
 
            let mut ctx = visitor::Ctx{
 
                heap: &mut self.heap,
 
 
                modules: &mut self.modules,
 
                module_idx,
 
                symbols: &mut self.symbol_table,
 
                types: &mut self.type_table,
 
                arch: &self.arch,
 
            };
 
            PassTyping::queue_module_definitions(&mut ctx, &mut queue);
 
        };
 
        while !queue.is_empty() {
 
            let top = queue.pop().unwrap();
 
            let mut ctx = visitor::Ctx{
 
                heap: &mut self.heap,
 
 
                modules: &mut self.modules,
 
                module_idx: top.root_id.index as usize,
 
                symbols: &mut self.symbol_table,
 
                types: &mut self.type_table,
 
                arch: &self.arch,
 
            };
 
            self.pass_typing.handle_module_definition(&mut ctx, &mut queue, top)?;
 
        }
 

	
 
        // Write out desired information
 
        if let Some(filename) = &self.write_ast_to {
 
            let mut writer = ASTWriter::new();
 
            let mut file = std::fs::File::create(std::path::Path::new(filename)).unwrap();
 
            writer.write_ast(&mut file, &self.heap);
 
        }
 

	
 
        Ok(())
 
    }
 
}
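// Editorial sketch (not part of the original changeset): the intended driver
// loop for the parser above. How an `InputSource` is obtained is left out here,
// since its constructors are not shown in this changeset.
#[allow(dead_code)]
fn example_compile(sources: Vec<InputSource>) -> Result<Parser, ParseError> {
    let mut parser = Parser::new();
    for source in sources {
        // Tokenize each module as it is fed to the parser
        parser.feed(source)?;
    }
    // Then run symbol scanning, imports, definitions, validation and typing
    // over all modules at once
    parser.parse()?;
    Ok(parser)
}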
 

	
 
// Note: args and return type need to be a function because we need to know the function ID.
 
fn insert_builtin_function<T: Fn(FunctionDefinitionId) -> (Vec<(&'static str, ParserType)>, ParserType)> (
 
    p: &mut Parser, func_name: &str, polymorphic: &[&str], arg_and_return_fn: T) {
 

	
 
    let mut poly_vars = Vec::with_capacity(polymorphic.len());
 
    for poly_var in polymorphic {
 
        poly_vars.push(Identifier{ span: InputSpan::new(), value: p.string_pool.intern(poly_var.as_bytes()) });
 
    }
 

	
 
    let func_ident_ref = p.string_pool.intern(func_name.as_bytes());
 
    let func_id = p.heap.alloc_function_definition(|this| FunctionDefinition{
 
        this,
 
        defined_in: RootId::new_invalid(),
 
        builtin: true,
 
        span: InputSpan::new(),
 
        identifier: Identifier{ span: InputSpan::new(), value: func_ident_ref.clone() },
 
        poly_vars,
 
        return_types: Vec::new(),
 
        parameters: Vec::new(),
 
        body: BlockStatementId::new_invalid(),
 
        num_expressions_in_body: -1,
 
    });
 

	
 
    let (args, ret) = arg_and_return_fn(func_id);
 

	
 
    let mut parameters = Vec::with_capacity(args.len());
 
    for (arg_name, arg_type) in args {
 
        let identifier = Identifier{ span: InputSpan::new(), value: p.string_pool.intern(arg_name.as_bytes()) };
 
        let param_id = p.heap.alloc_variable(|this| Variable{
 
            this,
 
            kind: VariableKind::Parameter,
 
            parser_type: arg_type.clone(),
 
            identifier,
 
            relative_pos_in_block: 0,
 
            unique_id_in_scope: 0
 
        });
 
        parameters.push(param_id);
 
    }
 

	
 
    let func = &mut p.heap[func_id];
 
    func.parameters = parameters;
 
    func.return_types.push(ret);
 

	
 
    p.symbol_table.insert_symbol(SymbolScope::Global, Symbol{
 
        name: func_ident_ref,
 
        variant: SymbolVariant::Definition(SymbolDefinition{
 
            defined_in_module: RootId::new_invalid(),
 
            defined_in_scope: SymbolScope::Global,
 
            definition_span: InputSpan::new(),
 
            identifier_span: InputSpan::new(),
 
            imported_at: None,
 
            class: DefinitionClass::Function,
 
            definition_id: func_id.upcast(),
 
        })
 
    }).unwrap();
 
}
 
\ No newline at end of file
src/protocol/parser/pass_definitions.rs
Show inline comments
 
@@ -129,196 +129,196 @@ impl PassDefinitions {
 
                let start_pos = iter.last_valid_pos();
 
                let parser_type = consume_parser_type(
 
                    source, iter, &ctx.symbols, &ctx.heap, poly_vars, module_scope,
 
                    definition_id, false, 0
 
                )?;
 
                let field = consume_ident_interned(source, iter, ctx)?;
 
                Ok(StructFieldDefinition{
 
                    span: InputSpan::from_positions(start_pos, field.span.end),
 
                    field, parser_type
 
                })
 
            },
 
            &mut fields_section, "a struct field", "a list of struct fields", None
 
        )?;
 

	
 
        // Transfer to preallocated definition
 
        let struct_def = ctx.heap[definition_id].as_struct_mut();
 
        struct_def.fields = fields_section.into_vec();
 

	
 
        Ok(())
 
    }
 

	
 
    fn visit_enum_definition(
 
        &mut self, module: &Module, iter: &mut TokenIter, ctx: &mut PassCtx
 
    ) -> Result<(), ParseError> {
 
        consume_exact_ident(&module.source, iter, KW_ENUM)?;
 
        let (ident_text, _) = consume_ident(&module.source, iter)?;
 

	
 
        // Retrieve preallocated DefinitionId
 
        let module_scope = SymbolScope::Module(module.root_id);
 
        let definition_id = ctx.symbols.get_symbol_by_name_defined_in_scope(module_scope, ident_text)
 
            .unwrap().variant.as_definition().definition_id;
 
        self.cur_definition = definition_id;
 

	
 
        // Parse enum definition
 
        consume_polymorphic_vars_spilled(&module.source, iter, ctx)?;
 

	
 
        let mut enum_section = self.enum_variants.start_section();
 
        consume_comma_separated(
 
            TokenKind::OpenCurly, TokenKind::CloseCurly, &module.source, iter, ctx,
 
            |source, iter, ctx| {
 
                let identifier = consume_ident_interned(source, iter, ctx)?;
 
                let value = if iter.next() == Some(TokenKind::Equal) {
 
                    iter.consume();
 
                    let (variant_number, _) = consume_integer_literal(source, iter, &mut self.buffer)?;
 
                    EnumVariantValue::Integer(variant_number as i64) // TODO: @int
 
                } else {
 
                    EnumVariantValue::None
 
                };
 
                Ok(EnumVariantDefinition{ identifier, value })
 
            },
 
            &mut enum_section, "an enum variant", "a list of enum variants", None
 
        )?;
 

	
 
        // Transfer to definition
 
        let enum_def = ctx.heap[definition_id].as_enum_mut();
 
        enum_def.variants = enum_section.into_vec();
 

	
 
        Ok(())
 
    }
 

	
 
    fn visit_union_definition(
 
        &mut self, module: &Module, iter: &mut TokenIter, ctx: &mut PassCtx
 
    ) -> Result<(), ParseError> {
 
        consume_exact_ident(&module.source, iter, KW_UNION)?;
 
        let (ident_text, _) = consume_ident(&module.source, iter)?;
 

	
 
        // Retrieve preallocated DefinitionId
 
        let module_scope = SymbolScope::Module(module.root_id);
 
        let definition_id = ctx.symbols.get_symbol_by_name_defined_in_scope(module_scope, ident_text)
 
            .unwrap().variant.as_definition().definition_id;
 
        self.cur_definition = definition_id;
 

	
 
        // Parse union definition
 
        consume_polymorphic_vars_spilled(&module.source, iter, ctx)?;
 

	
 
        let mut variants_section = self.union_variants.start_section();
 
        consume_comma_separated(
 
            TokenKind::OpenCurly, TokenKind::CloseCurly, &module.source, iter, ctx,
 
            |source, iter, ctx| {
 
                let identifier = consume_ident_interned(source, iter, ctx)?;
 
                let mut close_pos = identifier.span.end;
 

	
 
                let mut types_section = self.parser_types.start_section();
 

	
 
                let has_embedded = maybe_consume_comma_separated(
 
                    TokenKind::OpenParen, TokenKind::CloseParen, source, iter, ctx,
 
                    |source, iter, ctx| {
 
                        let poly_vars = ctx.heap[definition_id].poly_vars();
 
                        consume_parser_type(
 
                            source, iter, &ctx.symbols, &ctx.heap, poly_vars,
 
                            module_scope, definition_id, false, 0
 
                        )
 
                    },
 
                    &mut types_section, "an embedded type", Some(&mut close_pos)
 
                )?;
 
                let value = if has_embedded {
 
 
                    types_section.into_vec()
 
                } else {
 
                    types_section.forget();
 
 
                    Vec::new()
 
                };
 

	
 
                Ok(UnionVariantDefinition{
 
                    span: InputSpan::from_positions(identifier.span.begin, close_pos),
 
                    identifier,
 
                    value
 
                })
 
            },
 
            &mut variants_section, "a union variant", "a list of union variants", None
 
        )?;
 

	
 
        // Transfer to AST
 
        let union_def = ctx.heap[definition_id].as_union_mut();
 
        union_def.variants = variants_section.into_vec();
 

	
 
        Ok(())
 
    }
 

	
 
    fn visit_function_definition(
 
        &mut self, module: &Module, iter: &mut TokenIter, ctx: &mut PassCtx
 
    ) -> Result<(), ParseError> {
 
        // Retrieve function name
 
        consume_exact_ident(&module.source, iter, KW_FUNCTION)?;
 
        let (ident_text, _) = consume_ident(&module.source, iter)?;
 

	
 
        // Retrieve preallocated DefinitionId
 
        let module_scope = SymbolScope::Module(module.root_id);
 
        let definition_id = ctx.symbols.get_symbol_by_name_defined_in_scope(module_scope, ident_text)
 
            .unwrap().variant.as_definition().definition_id;
 
        self.cur_definition = definition_id;
 

	
 
        consume_polymorphic_vars_spilled(&module.source, iter, ctx)?;
 

	
 
        // Parse function's argument list
 
        let mut parameter_section = self.variables.start_section();
 
        consume_parameter_list(
 
            &module.source, iter, ctx, &mut parameter_section, module_scope, definition_id
 
        )?;
 
        let parameters = parameter_section.into_vec();
 

	
 
        // Consume return types
 
        consume_token(&module.source, iter, TokenKind::ArrowRight)?;
 
        let mut return_types = self.parser_types.start_section();
 
        let mut open_curly_pos = iter.last_valid_pos(); // bogus value
 
        consume_comma_separated_until(
 
            TokenKind::OpenCurly, &module.source, iter, ctx,
 
            |source, iter, ctx| {
 
                let poly_vars = ctx.heap[definition_id].poly_vars();
 
                consume_parser_type(source, iter, &ctx.symbols, &ctx.heap, poly_vars, module_scope, definition_id, false, 0)
 
            },
 
            &mut return_types, "a return type", Some(&mut open_curly_pos)
 
        )?;
 
        let return_types = return_types.into_vec();
 

	
 
        // TODO: @ReturnValues
 
        match return_types.len() {
 
            0 => return Err(ParseError::new_error_str_at_pos(&module.source, open_curly_pos, "expected a return type")),
 
            1 => {},
 
            _ => return Err(ParseError::new_error_str_at_pos(&module.source, open_curly_pos, "multiple return types are not (yet) allowed")),
 
        }
 

	
 
        // Consume block
 
        let body = self.consume_block_statement_without_leading_curly(module, iter, ctx, open_curly_pos)?;
 

	
 
        // Assign everything in the preallocated AST node
 
        let function = ctx.heap[definition_id].as_function_mut();
 
        function.return_types = return_types;
 
        function.parameters = parameters;
 
        function.body = body;
 

	
 
        Ok(())
 
    }
 

	
 
    fn visit_component_definition(
 
        &mut self, module: &Module, iter: &mut TokenIter, ctx: &mut PassCtx
 
    ) -> Result<(), ParseError> {
 
        // Consume component variant and name
 
        let (_variant_text, _) = consume_any_ident(&module.source, iter)?;
 
        debug_assert!(_variant_text == KW_PRIMITIVE || _variant_text == KW_COMPOSITE);
 
        let (ident_text, _) = consume_ident(&module.source, iter)?;
 

	
 
        // Retrieve preallocated definition
 
        let module_scope = SymbolScope::Module(module.root_id);
 
        let definition_id = ctx.symbols.get_symbol_by_name_defined_in_scope(module_scope, ident_text)
 
            .unwrap().variant.as_definition().definition_id;
 
        self.cur_definition = definition_id;
 

	
 
        consume_polymorphic_vars_spilled(&module.source, iter, ctx)?;
 

	
 
        // Parse component's argument list
 
        let mut parameter_section = self.variables.start_section();
 
        consume_parameter_list(
 
            &module.source, iter, ctx, &mut parameter_section, module_scope, definition_id
 
        )?;
 
        let parameters = parameter_section.into_vec();
 

	
src/protocol/parser/pass_typing.rs
Show inline comments
 
@@ -466,200 +466,201 @@ impl InferenceType {
 
        let mut modified = false;
 
        let mut depth = 1;
 

	
 
        while depth > 0 {
 
            let to_infer_part = &to_infer.parts[to_infer_idx];
 
            let template_part = &template[template_idx];
 

	
 
            if to_infer_part == template_part {
 
                let depth_change = to_infer_part.depth_change();
 
                depth += depth_change;
 
                debug_assert_eq!(depth_change, template_part.depth_change());
 
                to_infer_idx += 1;
 
                template_idx += 1;
 
                continue;
 
            }
 
            if to_infer_part.is_marker() { to_infer_idx += 1; continue; }
 
            if template_part.is_marker() { template_idx += 1; continue; }
 

	
 
            // Types are not equal and not markers. So check if we can infer 
 
            // anything
 
            if let Some(depth_change) = Self::infer_part_for_single_type(
 
                to_infer, &mut to_infer_idx, template, &mut template_idx
 
            ) {
 
                depth += depth_change;
 
                modified = true;
 
                continue;
 
            }
 

	
 
            if !forced_template {
 
                // We cannot infer anything, but the template may still be
 
                // compatible with the type we're inferring
 
                if let Some(depth_change) = Self::check_part_for_single_type(
 
                    template, &mut template_idx, &to_infer.parts, &mut to_infer_idx
 
                ) {
 
                    depth += depth_change;
 
                    continue;
 
                }
 
            }
 

	
 
            return SingleInferenceResult::Incompatible
 
        }
 

	
 
        if modified {
 
            to_infer.recompute_is_done();
 
            return SingleInferenceResult::Modified;
 
        } else {
 
            return SingleInferenceResult::Unmodified;
 
        }
 
    }
 

	
 
    /// Checks if both types are compatible, doesn't perform any inference
 
    fn check_subtrees(
 
        type_parts_a: &[InferenceTypePart], start_idx_a: usize,
 
        type_parts_b: &[InferenceTypePart], start_idx_b: usize
 
    ) -> bool {
 
        let mut depth = 1;
 
        let mut idx_a = start_idx_a;
 
        let mut idx_b = start_idx_b;
 

	
 
        while depth > 0 {
 
            let part_a = &type_parts_a[idx_a];
 
            let part_b = &type_parts_b[idx_b];
 

	
 
            if part_a == part_b {
 
                let depth_change = part_a.depth_change();
 
                depth += depth_change;
 
                debug_assert_eq!(depth_change, part_b.depth_change());
 
                idx_a += 1;
 
                idx_b += 1;
 
                continue;
 
            }
 
            
 
            if part_a.is_marker() { idx_a += 1; continue; }
 
            if part_b.is_marker() { idx_b += 1; continue; }
 

	
 
            if let Some(depth_change) = Self::check_part_for_single_type(
 
                type_parts_a, &mut idx_a, type_parts_b, &mut idx_b
 
            ) {
 
                depth += depth_change;
 
                continue;
 
            }
 
            if let Some(depth_change) = Self::check_part_for_single_type(
 
                type_parts_b, &mut idx_b, type_parts_a, &mut idx_a
 
            ) {
 
                depth += depth_change;
 
                continue;
 
            }
 

	
 
            return false;
 
        }
 

	
 
        true
 
    }
 

	
 
    /// Performs the conversion of the inference type into a concrete type.
 
    /// By calling this function you must make sure that no unspecified types
 
 
    /// (e.g. Unknown or IntegerLike) exist in the type. Will not clear or check
 
    /// if the supplied `ConcreteType` is empty, will simply append to the parts
 
    /// vector.
 
    fn write_concrete_type(&self, concrete_type: &mut ConcreteType) {
 
        use InferenceTypePart as ITP;
 
        use ConcreteTypePart as CTP;
 

	
 
        // Make sure inference type is specified but concrete type is not yet specified
 
        debug_assert!(!self.parts.is_empty());
 
        debug_assert!(concrete_type.parts.is_empty());
 
        concrete_type.parts.reserve(self.parts.len());
 

	
 
        let mut idx = 0;
 
        while idx < self.parts.len() {
 
            let part = &self.parts[idx];
 
            let converted_part = match part {
 
                ITP::Marker(_) => {
 
                    // Markers are removed when writing to the concrete type.
 
                    idx += 1;
 
                    continue;
 
                },
 
                ITP::Unknown | ITP::NumberLike |
 
                ITP::IntegerLike | ITP::ArrayLike | ITP::PortLike => {
 
                    // Should not happen if type inferencing works correctly: we
 
                    // should have returned a programmer-readable error or have
 
                    // inferred all types.
 
                    unreachable!("attempted to convert inference type part {:?} into concrete type", part);
 
                },
 
                ITP::Void => CTP::Void,
 
                ITP::Message => CTP::Message,
 
                ITP::Bool => CTP::Bool,
 
                ITP::UInt8 => CTP::UInt8,
 
                ITP::UInt16 => CTP::UInt16,
 
                ITP::UInt32 => CTP::UInt32,
 
                ITP::UInt64 => CTP::UInt64,
 
                ITP::SInt8 => CTP::SInt8,
 
                ITP::SInt16 => CTP::SInt16,
 
                ITP::SInt32 => CTP::SInt32,
 
                ITP::SInt64 => CTP::SInt64,
 
                ITP::Character => CTP::Character,
 
                ITP::String => {
 
                    // Inferred type has a 'char' subtype to simplify array
 
                    // checking, we remove it here.
 
                    debug_assert_eq!(self.parts[idx + 1], InferenceTypePart::Character);
 
                    idx += 1;
 
                    CTP::String
 
                },
 
                ITP::Array => CTP::Array,
 
                ITP::Slice => CTP::Slice,
 
                ITP::Input => CTP::Input,
 
                ITP::Output => CTP::Output,
 
                ITP::Instance(id, num) => CTP::Instance(*id, *num),
 
            };
 

	
 
            concrete_type.parts.push(converted_part);
 
            idx += 1;
 
        }
 
    }
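    // Editor's note (illustrative, not part of the original changeset): the
    // conversion above copies the prefix-ordered parts one-to-one, except that
    // markers are dropped and the bookkeeping `char` element behind a string is
    // skipped. For instance, the inference parts
    //
    //     [ITP::Marker(0), ITP::Array, ITP::UInt32]
    //
    // come out as the concrete parts [CTP::Array, CTP::UInt32].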
 

	
 
    /// Writes a human-readable version of the type to a string. This is used
 
    /// to display error messages
 
    fn write_display_name(
 
        buffer: &mut String, heap: &Heap, parts: &[InferenceTypePart], mut idx: usize
 
    ) -> usize {
 
        use InferenceTypePart as ITP;
 

	
 
        match &parts[idx] {
 
            ITP::Marker(_marker_idx) => {
 
                if debug_log_enabled!() {
 
                    buffer.push_str(&format!("{{Marker:{}}}", *_marker_idx));
 
                }
 
                idx = Self::write_display_name(buffer, heap, parts, idx + 1);
 
            },
 
            ITP::Unknown => buffer.push_str("?"),
 
            ITP::NumberLike => buffer.push_str("numberlike"),
 
            ITP::IntegerLike => buffer.push_str("integerlike"),
 
            ITP::ArrayLike => {
 
                idx = Self::write_display_name(buffer, heap, parts, idx + 1);
 
                buffer.push_str("[?]");
 
            },
 
            ITP::PortLike => {
 
                buffer.push_str("portlike<");
 
                idx = Self::write_display_name(buffer, heap, parts, idx + 1);
 
                buffer.push('>');
 
            }
 
            ITP::Void => buffer.push_str("void"),
 
            ITP::Bool => buffer.push_str(KW_TYPE_BOOL_STR),
 
            ITP::UInt8 => buffer.push_str(KW_TYPE_UINT8_STR),
 
            ITP::UInt16 => buffer.push_str(KW_TYPE_UINT16_STR),
 
            ITP::UInt32 => buffer.push_str(KW_TYPE_UINT32_STR),
 
            ITP::UInt64 => buffer.push_str(KW_TYPE_UINT64_STR),
 
            ITP::SInt8 => buffer.push_str(KW_TYPE_SINT8_STR),
 
            ITP::SInt16 => buffer.push_str(KW_TYPE_SINT16_STR),
 
            ITP::SInt32 => buffer.push_str(KW_TYPE_SINT32_STR),
 
            ITP::SInt64 => buffer.push_str(KW_TYPE_SINT64_STR),
 
            ITP::Character => buffer.push_str(KW_TYPE_CHAR_STR),
 
            ITP::String => {
 
                buffer.push_str(KW_TYPE_STRING_STR);
 
                idx += 1; // skip the 'char' subtype
 
            },
 
            ITP::Message => {
 
                buffer.push_str(KW_TYPE_MESSAGE_STR);
 
                buffer.push('<');
 
                idx = Self::write_display_name(buffer, heap, parts, idx + 1);
 
                buffer.push('>');
 
            },
 
@@ -714,348 +715,363 @@ impl InferenceType {
 
        Self::partial_display_name(heap, &self.parts)
 
    }
 
}
 

	
 
impl Default for InferenceType {
 
    fn default() -> Self {
 
        Self{
 
            has_marker: false,
 
            is_done: false,
 
            parts: Vec::new(),
 
        }
 
    }
 
}
 

	
 
/// Iterator over the subtrees that follow a marker in an `InferenceType`
 
/// instance. Returns immutable slices over the internal parts
 
struct InferenceTypeMarkerIter<'a> {
 
    parts: &'a [InferenceTypePart],
 
    idx: usize,
 
}
 

	
 
impl<'a> InferenceTypeMarkerIter<'a> {
 
    fn new(parts: &'a [InferenceTypePart]) -> Self {
 
        Self{ parts, idx: 0 }
 
    }
 
}
 

	
 
impl<'a> Iterator for InferenceTypeMarkerIter<'a> {
 
    type Item = (u32, &'a [InferenceTypePart]);
 

	
 
    fn next(&mut self) -> Option<Self::Item> {
 
        // Iterate until we find a marker
 
        while self.idx < self.parts.len() {
 
            if let InferenceTypePart::Marker(marker) = self.parts[self.idx] {
 
                // Found a marker, find the subtree end
 
                let start_idx = self.idx + 1;
 
                let end_idx = InferenceType::find_subtree_end_idx(self.parts, start_idx);
 

	
 
                // Modify internal index, then return items
 
                self.idx = end_idx;
 
                return Some((marker, &self.parts[start_idx..end_idx]));
 
            }
 

	
 
            self.idx += 1;
 
        }
 

	
 
        None
 
    }
 
}
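// Editor's note (illustrative, not part of the original changeset): typical use
// of the iterator above is to visit every marked subtree of an inference type:
//
//     for (marker, subtree) in InferenceTypeMarkerIter::new(&inference_type.parts) {
//         // `marker` identifies e.g. a polymorphic variable, and `subtree` is
//         // the slice of parts describing what has been inferred for it so far.
//     }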
 

	
 
#[derive(Debug, PartialEq, Eq)]
 
enum DualInferenceResult {
 
    Neither,        // neither argument is clarified
 
    First,          // first argument is clarified using the second one
 
    Second,         // second argument is clarified using the first one
 
    Both,           // both arguments are clarified
 
    Incompatible,   // types are incompatible: programmer error
 
}
 

	
 
impl DualInferenceResult {
 
    fn modified_lhs(&self) -> bool {
 
        match self {
 
            DualInferenceResult::First | DualInferenceResult::Both => true,
 
            _ => false
 
        }
 
    }
 
    fn modified_rhs(&self) -> bool {
 
        match self {
 
            DualInferenceResult::Second | DualInferenceResult::Both => true,
 
            _ => false
 
        }
 
    }
 
}
 

	
 
#[derive(Debug, PartialEq, Eq)]
 
enum SingleInferenceResult {
 
    Unmodified,
 
    Modified,
 
    Incompatible
 
}
 

	
 
enum DefinitionType{
 
    Component(ComponentDefinitionId),
 
    Function(FunctionDefinitionId),
 
}
 

	
 
impl DefinitionType {
 
    fn definition_id(&self) -> DefinitionId {
 
        match self {
 
            DefinitionType::Component(v) => v.upcast(),
 
            DefinitionType::Function(v) => v.upcast(),
 
        }
 
    }
 
}
 

	
 
pub(crate) struct ResolveQueueElement {
 
    // Note that using the `definition_id` and the `monomorph_idx` one may
 
    // query the type table for the full procedure type, thereby retrieving
 
    // the polymorphic arguments to the procedure.
 
    pub(crate) root_id: RootId,
 
    pub(crate) definition_id: DefinitionId,
 
    pub(crate) monomorph_types: Vec<ConcreteType>,
 
    pub(crate) reserved_monomorph_idx: i32,
 
}
 

	
 
impl PartialEq for ResolveQueueElement {
 
    fn eq(&self, other: &Self) -> bool {
 
        return
 
            self.root_id == other.root_id &&
 
            self.definition_id == other.definition_id &&
 
            self.monomorph_types == other.monomorph_types;
 
    }
 
}
 
impl Eq for ResolveQueueElement {}
 

	
 
pub(crate) type ResolveQueue = Vec<ResolveQueueElement>;
 

	
 
#[derive(Clone)]
 
struct InferenceExpression {
 
    expr_type: InferenceType,       // result type from expression
 
    expr_id: ExpressionId,          // expression that is evaluated
 
    field_or_monomorph_idx: i32,    // index of field, or index of monomorph array in type table
 
    extra_data_idx: i32,     // index of extra data needed for inference
 
}
 

	
 
impl Default for InferenceExpression {
 
    fn default() -> Self {
 
        Self{
 
            expr_type: InferenceType::default(),
 
            expr_id: ExpressionId::new_invalid(),
 
            field_or_monomorph_idx: -1,
 
            extra_data_idx: -1,
 
        }
 
    }
 
}
 

	
 
/// This particular visitor will recurse depth-first into the AST and ensures
 
/// that all expressions have the appropriate types.
 
pub(crate) struct PassTyping {
 
    // Current definition we're typechecking.
 
    reserved_idx: i32,
 
    definition_type: DefinitionType,
 
    poly_vars: Vec<ConcreteType>,
 

	
 
    // Buffers for iteration over substatements and subexpressions
 
    stmt_buffer: Vec<StatementId>,
 
    expr_buffer: Vec<ExpressionId>,
 

	
 
    // Mapping from parser type to inferred type. We attempt to continue to
 
    // specify these types until we're stuck or we've fully determined the type.
 
    var_types: HashMap<VariableId, VarData>,            // types of variables
 
    expr_types: Vec<InferenceExpression>,                     // will be transferred to type table at end
 
    extra_data: Vec<ExtraData>,       // data for polymorph inference
 
    // Keeping track of which expressions need to be reinferred because the
 
    // expressions they're linked to made progress on an associated type
 
    expr_queued: DequeSet<i32>,
 
}
 

	
 
// TODO: @Rename, this is used for a lot of type inferencing. It seems like
 
//  there is a different underlying architecture waiting to surface.
 
struct ExtraData {
 
    expr_id: ExpressionId, // the expression with which this data is associated
 
    definition_id: DefinitionId, // the definition, only used for user feedback
 
    /// Progression of polymorphic variables (if any)
 
    poly_vars: Vec<InferenceType>,
 
    /// Progression of types of call arguments or struct members
 
    embedded: Vec<InferenceType>,
 
    returned: InferenceType,
 
}
 

	
 
impl Default for ExtraData {
 
    fn default() -> Self {
 
        Self{
 
            expr_id: ExpressionId::new_invalid(),
 
            definition_id: DefinitionId::new_invalid(),
 
            poly_vars: Vec::new(),
 
            embedded: Vec::new(),
 
            returned: InferenceType::default(),
 
        }
 
    }
 
}
 

	
 
struct VarData {
 
    /// Type of the variable
 
    var_type: InferenceType,
 
    /// VariableExpressions that use the variable
 
    used_at: Vec<ExpressionId>,
 
    /// For channel statements we link to the other variable such that when one
 
    /// channel's interior type is resolved, we can also resolve the other one.
 
    linked_var: Option<VariableId>,
 
}
 

	
 
impl VarData {
 
    fn new_channel(var_type: InferenceType, other_port: VariableId) -> Self {
 
        Self{ var_type, used_at: Vec::new(), linked_var: Some(other_port) }
 
    }
 
    fn new_local(var_type: InferenceType) -> Self {
 
        Self{ var_type, used_at: Vec::new(), linked_var: None }
 
    }
 
}
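// Editor's note (illustrative, not part of the original changeset): for a
// channel statement that introduces two ports, the endpoints would be stored as
// a linked pair, e.g. `VarData::new_channel(in_type, out_var_id)` for one end
// and `VarData::new_channel(out_type, in_var_id)` for the other (the names here
// are hypothetical), so progress on one port's payload type can be mirrored
// onto the other.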
 

	
 
impl PassTyping {
 
    pub(crate) fn new() -> Self {
 
        PassTyping {
 
            reserved_idx: -1,
 
            definition_type: DefinitionType::Function(FunctionDefinitionId::new_invalid()),
 
            poly_vars: Vec::new(),
 
            stmt_buffer: Vec::with_capacity(STMT_BUFFER_INIT_CAPACITY),
 
            expr_buffer: Vec::with_capacity(EXPR_BUFFER_INIT_CAPACITY),
 
            var_types: HashMap::new(),
 
            expr_types: Vec::new(),
 
            extra_data: Vec::new(),
 
            expr_queued: DequeSet::new(),
 
        }
 
    }
 

	
 
    // TODO: @cleanup Unsure about this, maybe a pattern will arise after
 
    //  a while.
 
    pub(crate) fn queue_module_definitions(ctx: &mut Ctx, queue: &mut ResolveQueue) {
 
 
        debug_assert_eq!(ctx.module().phase, ModuleCompilationPhase::ValidatedAndLinked);
 
        let root_id = ctx.module().root_id;
 
        let root = &ctx.heap.protocol_descriptions[root_id];
 
        for definition_id in &root.definitions {
 
            let definition = &ctx.heap[*definition_id];
 

	
 
 
            let first_concrete_part = match definition {
 
                Definition::Function(definition) => {
 
                    if definition.poly_vars.is_empty() {
 
                        Some(ConcreteTypePart::Function(*definition_id, 0))
 
                    } else {
 
                        None
 
                    }
 
                }
 
                Definition::Component(definition) => {
 
                    if definition.poly_vars.is_empty() {
 
                        Some(ConcreteTypePart::Component(*definition_id, 0))
 
                    } else {
 
                        None
 
                    }
 
                },
 
                Definition::Enum(_) | Definition::Struct(_) | Definition::Union(_) => None,
 
            };
 

	
 
 
            if let Some(first_concrete_part) = first_concrete_part {
 
                let concrete_type = ConcreteType{ parts: vec![first_concrete_part] };
 
                let reserved_idx = ctx.types.reserve_procedure_monomorph_index(definition_id, concrete_type);
 
                queue.push(ResolveQueueElement{
 
                    root_id,
 
                    definition_id: *definition_id,
 
                    monomorph_types: Vec::new(),
 
                    reserved_monomorph_idx: reserved_idx,
 
                })
 
            }
 
        }
 
    }
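    // Editor's note (illustrative, not part of the original changeset): for a
    // non-polymorphic function `foo` the loop above reserves a monomorph entry
    // keyed by the single-part type `ConcreteType{ parts: vec![Function(foo_id, 0)] }`.
    // Polymorphic procedures are skipped here; they are only queued once a
    // concrete call to them is encountered in `resolve_types` below.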
 

	
 
    pub(crate) fn handle_module_definition(
 
        &mut self, ctx: &mut Ctx, queue: &mut ResolveQueue, element: ResolveQueueElement
 
    ) -> VisitorResult {
 
        // Visit the definition
 
 
        self.reset();
 
        debug_assert_eq!(ctx.module().root_id, element.root_id);
 
        debug_assert!(self.poly_vars.is_empty());
 

	
 
        // Prepare for visiting the definition
 
        self.reserved_idx = element.reserved_monomorph_idx;
 
 

	
 
        // Retrieve any polymorphic arguments from the reserved entry in the type table
 
        let proc_base = ctx.types.get_base_definition(&element.definition_id).unwrap();
 
        if proc_base.is_polymorph {
 
            let proc_monos = proc_base.definition.procedure_monomorphs();
 
            let proc_mono = &(*proc_monos)[element.reserved_monomorph_idx as usize];
 

	
 
            for poly_arg in proc_mono.concrete_type.embedded_iter(0) {
 
                self.poly_vars.push(ConcreteType{ parts: Vec::from(poly_arg) });
 
            }
 
        }
 

	
 
        // Visit the definition, setting up the type resolving process, then
 
        // (attempt to) resolve all types
 
        self.visit_definition(ctx, element.definition_id)?;
 
        self.resolve_types(ctx, queue)?;
 
        Ok(())
 
    }
 

	
 
    fn reset(&mut self) {
 
        self.reserved_idx = -1;
 
        self.definition_type = DefinitionType::Function(FunctionDefinitionId::new_invalid());
 
        self.poly_vars.clear();
 
        self.stmt_buffer.clear();
 
        self.expr_buffer.clear();
 
        self.var_types.clear();
 
        self.expr_types.clear();
 
        self.extra_data.clear();
 
        self.expr_queued.clear();
 
    }
 
}
 

	
 
impl Visitor for PassTyping {
 
    // Definitions
 

	
 
    fn visit_component_definition(&mut self, ctx: &mut Ctx, id: ComponentDefinitionId) -> VisitorResult {
 
        self.definition_type = DefinitionType::Component(id);
 

	
 
        let comp_def = &ctx.heap[id];
 
        debug_assert_eq!(comp_def.poly_vars.len(), self.poly_vars.len(), "component polyvars do not match imposed polyvars");
 

	
 
        debug_log!("{}", "-".repeat(50));
 
        debug_log!("Visiting component '{}': {}", comp_def.identifier.value.as_str(), id.0.index);
 
        debug_log!("{}", "-".repeat(50));
 

	
 
        // Reserve data for expression types
 
        debug_assert!(self.expr_types.is_empty());
 
        self.expr_types.resize(comp_def.num_expressions_in_body as usize, Default::default());
 

	
 
        // Visit parameters
 
        for param_id in comp_def.parameters.clone() {
 
            let param = &ctx.heap[param_id];
 
            let var_type = self.determine_inference_type_from_parser_type_elements(&param.parser_type.elements, true);
 
            debug_assert!(var_type.is_done, "expected component arguments to be concrete types");
 
            self.var_types.insert(param_id, VarData::new_local(var_type));
 
        }
 

	
 
        // Visit the body and all of its expressions
 
        let body_stmt_id = ctx.heap[id].body;
 
        self.visit_block_stmt(ctx, body_stmt_id)
 
    }
 

	
 
    fn visit_function_definition(&mut self, ctx: &mut Ctx, id: FunctionDefinitionId) -> VisitorResult {
 
        self.definition_type = DefinitionType::Function(id);
 

	
 
        let func_def = &ctx.heap[id];
 
        debug_assert_eq!(func_def.poly_vars.len(), self.poly_vars.len(), "function polyvars do not match imposed polyvars");
 

	
 
        debug_log!("{}", "-".repeat(50));
 
        debug_log!("Visiting function '{}': {}", func_def.identifier.value.as_str(), id.0.index);
 
        if debug_log_enabled!() {
 
            debug_log!("Polymorphic variables:");
 
            for (_idx, poly_var) in self.poly_vars.iter().enumerate() {
 
                let mut infer_type_parts = Vec::new();
 
                Self::determine_inference_type_from_concrete_type(
 
                    &mut infer_type_parts, &poly_var.parts
 
                );
 
                let _infer_type = InferenceType::new(false, true, infer_type_parts);
 
                debug_log!(" - [{:03}] {:?}", _idx, _infer_type.display_name(&ctx.heap));
 
            }
 
        }
 
        debug_log!("{}", "-".repeat(50));
 

	
 
        // Reserve data for expression types
 
        debug_assert!(self.expr_types.is_empty());
 
        self.expr_types.resize(func_def.num_expressions_in_body as usize, Default::default());
 

	
 
        // Visit parameters
 
        for param_id in func_def.parameters.clone() {
 
            let param = &ctx.heap[param_id];
 
            let var_type = self.determine_inference_type_from_parser_type_elements(&param.parser_type.elements, true);
 
            debug_assert!(var_type.is_done, "expected function arguments to be concrete types");
 
            self.var_types.insert(param_id, VarData::new_local(var_type));
 
        }
 

	
 
        // Visit all of the expressions within the body
 
        let body_stmt_id = ctx.heap[id].body;
 
        self.visit_block_stmt(ctx, body_stmt_id)
 
    }
 

	
 
    // Statements
 

	
 
    fn visit_block_stmt(&mut self, ctx: &mut Ctx, id: BlockStatementId) -> VisitorResult {
 
        // Transfer statements for traversal
 
        let block = &ctx.heap[id];
 

	
 
        for stmt_id in block.statements.clone() {
 
            self.visit_stmt(ctx, stmt_id)?;
 
        }
 

	
 
        Ok(())
 
@@ -1298,322 +1314,341 @@ impl Visitor for PassTyping {
 
                let expr_ids = literal.values.clone();
 
                self.insert_initial_union_polymorph_data(ctx, id);
 

	
 
                for expr_id in expr_ids {
 
                    self.visit_expr(ctx, expr_id)?;
 
                }
 
            },
 
            Literal::Array(expressions) => {
 
                // TODO: @performance
 
                let expr_ids = expressions.clone();
 
                for expr_id in expr_ids {
 
                    self.visit_expr(ctx, expr_id)?;
 
                }
 
            }
 
        }
 

	
 
        self.progress_literal_expr(ctx, id)
 
    }
 

	
 
    fn visit_cast_expr(&mut self, ctx: &mut Ctx, id: CastExpressionId) -> VisitorResult {
 
        let upcast_id = id.upcast();
 
        self.insert_initial_expr_inference_type(ctx, upcast_id)?;
 

	
 
        let cast_expr = &ctx.heap[id];
 
        let subject_expr_id = cast_expr.subject;
 

	
 
        self.visit_expr(ctx, subject_expr_id)?;
 

	
 
        self.progress_cast_expr(ctx, id)
 
    }
 

	
 
    fn visit_call_expr(&mut self, ctx: &mut Ctx, id: CallExpressionId) -> VisitorResult {
 
        let upcast_id = id.upcast();
 
        self.insert_initial_expr_inference_type(ctx, upcast_id)?;
 
        self.insert_initial_call_polymorph_data(ctx, id);
 

	
 
        // By default we set the polymorph idx for calls to 0. If the call ends
 
        // up not being a polymorphic one, then we will select the default
 
        // expression types in the type table
 
        let call_expr = &ctx.heap[id];
 
        self.expr_types[call_expr.unique_id_in_definition as usize].field_or_monomorph_idx = 0;
 

	
 
        // Visit all arguments
 
        for arg_expr_id in call_expr.arguments.clone() { // TODO: @Performance
 
            self.visit_expr(ctx, arg_expr_id)?;
 
        }
 

	
 
        self.progress_call_expr(ctx, id)
 
    }
 

	
 
    fn visit_variable_expr(&mut self, ctx: &mut Ctx, id: VariableExpressionId) -> VisitorResult {
 
        let upcast_id = id.upcast();
 
        self.insert_initial_expr_inference_type(ctx, upcast_id)?;
 

	
 
        let var_expr = &ctx.heap[id];
 
        debug_assert!(var_expr.declaration.is_some());
 

	
 
        // Not pretty: if a binding expression, then this is the first time we
 
        // encounter the variable, so we still need to insert the variable data.
 
        let declaration = &ctx.heap[var_expr.declaration.unwrap()];
 
        if !self.var_types.contains_key(&declaration.this)  {
 
            debug_assert!(declaration.kind == VariableKind::Binding);
 
            let var_type = self.determine_inference_type_from_parser_type_elements(
 
                &declaration.parser_type.elements, true
 
            );
 
            self.var_types.insert(declaration.this, VarData{
 
                var_type,
 
                used_at: vec![upcast_id],
 
                linked_var: None
 
            });
 
        } else {
 
            let var_data = self.var_types.get_mut(&declaration.this).unwrap();
 
            var_data.used_at.push(upcast_id);
 
        }
 

	
 
        self.progress_variable_expr(ctx, id)
 
    }
 
}
 

	
 
impl PassTyping {
 
    #[allow(dead_code)] // used when debug flag at the top of this file is true.
 
    fn debug_get_display_name(&self, ctx: &Ctx, expr_id: ExpressionId) -> String {
 
        let expr_idx = ctx.heap[expr_id].get_unique_id_in_definition();
 
        let expr_type = &self.expr_types[expr_idx as usize].expr_type;
 
        expr_type.display_name(&ctx.heap)
 
    }
 

	
 
    fn resolve_types(&mut self, ctx: &mut Ctx, queue: &mut ResolveQueue) -> Result<(), ParseError> {
 
        // Keep inferring until we can no longer make any progress
 
        while !self.expr_queued.is_empty() {
 
            let next_expr_idx = self.expr_queued.pop_front().unwrap();
 
            self.progress_expr(ctx, next_expr_idx)?;
 
        }
 

	
 
        // Helper for transferring polymorphic variables to concrete types and
 
        // checking if they're completely specified
 
 
        fn inference_type_to_concrete_type(
 
            ctx: &Ctx, expr_id: ExpressionId, inference: &Vec<InferenceType>,
 
            first_concrete_part: ConcreteTypePart,
 
        ) -> Result<ConcreteType, ParseError> {
 
            // Prepare storage vector
 
            let mut num_inference_parts = 0;
 
            for inference_type in inference {
 
                num_inference_parts += inference_type.parts.len();
 
            }
 

	
 
            let mut concrete_type = ConcreteType{
 
                parts: Vec::with_capacity(1 + num_inference_parts),
 
            };
 
            concrete_type.parts.push(first_concrete_part);
 

	
 
            // Go through all polymorphic arguments and add them to the concrete
 
            // types.
 
            for (poly_idx, poly_type) in inference.iter().enumerate() {
 
                if !poly_type.is_done {
 
                    let expr = &ctx.heap[expr_id];
 
                    let definition = match expr {
 
                        Expression::Call(expr) => expr.definition,
 
                        Expression::Literal(expr) => match &expr.value {
 
                            Literal::Enum(lit) => lit.definition,
 
                            Literal::Union(lit) => lit.definition,
 
                            Literal::Struct(lit) => lit.definition,
 
                            _ => unreachable!()
 
                        },
 
                        _ => unreachable!(),
 
                    };
 
                    let poly_vars = ctx.heap[definition].poly_vars();
 
                    return Err(ParseError::new_error_at_span(
 
 
                        &ctx.module().source, expr.operation_span(), format!(
 
                            "could not fully infer the type of polymorphic variable '{}' of this expression (got '{}')",
 
                            poly_vars[poly_idx].value.as_str(), poly_type.display_name(&ctx.heap)
 
                        )
 
                    ));
 
                }
 

	
 
                poly_type.write_concrete_type(&mut concrete_type);
 
            }
 

	
 
 
            Ok(concrete_type)
 
        }
 

	
 
        // Inference is now done. But we may still have uninferred types. So we
 
        // check for these.
 
        for (infer_expr_idx, infer_expr) in self.expr_types.iter_mut().enumerate() {
 
            let expr_type = &mut infer_expr.expr_type;
 
            if !expr_type.is_done {
 
                // Auto-infer numberlike/integerlike types to a regular int
 
                if expr_type.parts.len() == 1 && expr_type.parts[0] == InferenceTypePart::IntegerLike {
 
                    expr_type.parts[0] = InferenceTypePart::SInt32;
 
                    self.expr_queued.push_back(infer_expr_idx as i32);
 
                } else {
 
                    let expr = &ctx.heap[infer_expr.expr_id];
 
                    return Err(ParseError::new_error_at_span(
 
 
                        &ctx.module().source, expr.full_span(), format!(
 
                            "could not fully infer the type of this expression (got '{}')",
 
                            expr_type.display_name(&ctx.heap)
 
                        )
 
                    ));
 
                }
 
            }
 

	
 
            // Expression is fine, check if any extra data is attached
 
            if infer_expr.extra_data_idx < 0 { continue; }
 

	
 
            // Extra data is attached, perform typechecking and transfer
 
            // resolved information to the expression
 
            let extra_data = &self.extra_data[infer_expr.extra_data_idx as usize];
 
            if extra_data.poly_vars.is_empty() { continue; }
 

	
 
            // Note that only call and literal expressions need full inference.
 
            // Select expressions also use `extra_data`, but only for temporary
 
            // storage of the struct type whose field it is selecting.
 
            match &ctx.heap[extra_data.expr_id] {
 
                Expression::Call(expr) => {
 
 
                    // Check if it is not a builtin function. If not, then
 
                    // construct the first part of the concrete type.
 
                    let first_concrete_part = if expr.method == Method::UserFunction {
 
                        ConcreteTypePart::Function(expr.definition, extra_data.poly_vars.len() as u32)
 
                    } else if expr.method == Method::UserComponent {
 
                        ConcreteTypePart::Component(expr.definition, extra_data.poly_vars.len() as u32)
 
                    } else {
 
                        // Builtin function
 
                        continue;
 
 
                    };
 

	
 
                    let definition_id = expr.definition;
 
 
                    let concrete_type = inference_type_to_concrete_type(
 
                        ctx, extra_data.expr_id, &extra_data.poly_vars, first_concrete_part
 
                    )?;
 

	
 
                    match ctx.types.get_procedure_monomorph_index(&definition_id, &poly_types) {
 
                    match ctx.types.get_procedure_monomorph_index(&definition_id, &concrete_type) {
 
                        Some(reserved_idx) => {
 
                            // Already typechecked, or already put into the resolve queue
 
                            infer_expr.field_or_monomorph_idx = reserved_idx;
 
                        },
 
                        None => {
 
                            // Not typechecked yet, so add an entry in the queue
 
                            let reserved_idx = ctx.types.reserve_procedure_monomorph_index(&definition_id, Some(poly_types.clone()));
 
                            let reserved_idx = ctx.types.reserve_procedure_monomorph_index(&definition_id, concrete_type);
 
                            infer_expr.field_or_monomorph_idx = reserved_idx;
 
                            queue.push(ResolveQueueElement{
 
                                root_id: ctx.heap[definition_id].defined_in(),
 
                                definition_id,
 
                                monomorph_types: poly_types,
 
                                reserved_monomorph_idx: reserved_idx,
 
                            });
 
                        }
 
                    }
 
                },
 
                Expression::Literal(expr) => {
 
                    let definition_id = match &expr.value {
 
                        Literal::Enum(lit) => lit.definition,
 
                        Literal::Union(lit) => lit.definition,
 
                        Literal::Struct(lit) => lit.definition,
 
                        _ => unreachable!(),
 
                    };
 

	
 
                    let poly_types = poly_inference_to_concrete_type(ctx, extra_data.expr_id, &extra_data.poly_vars)?;
 
                    let mono_index = ctx.types.add_data_monomorph(&definition_id, poly_types);
 
                    let first_concrete_part = ConcreteTypePart::Instance(definition_id, extra_data.poly_vars.len() as u32);
 
                    let concrete_type = inference_type_to_concrete_type(
 
                        ctx, extra_data.expr_id, &extra_data.poly_vars, first_concrete_part
 
                    )?;
 
                    let mono_index = ctx.types.add_data_monomorph(ctx.modules, ctx.heap, ctx.arch, definition_id, concrete_type)?;
 
                    infer_expr.field_or_monomorph_idx = mono_index;
 
                },
 
                Expression::Select(_) => {
 
                    debug_assert!(infer_expr.field_or_monomorph_idx >= 0);
 
                },
 
                _ => {
 
                    unreachable!("handling extra data for expression {:?}", &ctx.heap[extra_data.expr_id]);
 
                }
 
            }
 
        }
 

	
 
        // If we did any implicit type forcing, then our queue isn't empty
 
        // anymore
 
        while !self.expr_queued.is_empty() {
 
            let expr_idx = self.expr_queued.pop_back().unwrap();
 
            self.progress_expr(ctx, expr_idx)?;
 
        }
 

	
 
        // Every expression checked, and new monomorphs are queued. Transfer the
 
        // expression information to the type table.
 
        let definition_id = match &self.definition_type {
 
            DefinitionType::Component(id) => id.upcast(),
 
            DefinitionType::Function(id) => id.upcast(),
 
        };
 

	
 
        let target = ctx.types.get_procedure_expression_data_mut(&definition_id, self.reserved_idx);
 
        debug_assert!(target.poly_args == self.poly_vars);
 
        debug_assert!(target.expr_data.is_empty()); // makes sure we never queue something twice
 

	
 
        target.expr_data.reserve(self.expr_types.len());
 
        for infer_expr in self.expr_types.iter() {
 
            let mut concrete = ConcreteType::default();
 
            infer_expr.expr_type.write_concrete_type(&mut concrete);
 
            target.expr_data.push(MonomorphExpression{
 
                expr_type: concrete,
 
                field_or_monomorph_idx: infer_expr.field_or_monomorph_idx
 
            });
 
        }
 

	
 
        Ok(())
 
    }
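
    // Illustrative, standalone sketch of the integer-defaulting rule applied
    // earlier in the function above: an expression whose type is still only
    // "integer-like" once inference has settled is forced to s32 and
    // re-queued. `SketchPart` is hypothetical and far simpler than the real
    // `InferenceTypePart`.
    fn sketch_default_integerlike() {
        #[derive(Debug, PartialEq)]
        enum SketchPart { IntegerLike, SInt32, UInt8 }

        fn force_to_s32(parts: &mut Vec<SketchPart>) -> bool {
            if parts.len() == 1 && parts[0] == SketchPart::IntegerLike {
                parts[0] = SketchPart::SInt32;
                return true; // caller re-queues this expression for another pass
            }
            false
        }

        let mut unconstrained = vec![SketchPart::IntegerLike];
        assert!(force_to_s32(&mut unconstrained));
        assert_eq!(unconstrained, vec![SketchPart::SInt32]);

        let mut already_known = vec![SketchPart::UInt8];
        assert!(!force_to_s32(&mut already_known)); // concrete types are left alone
    }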
 

	
 
    fn progress_expr(&mut self, ctx: &mut Ctx, idx: i32) -> Result<(), ParseError> {
 
        let id = self.expr_types[idx as usize].expr_id; // TODO: @Temp
 
        match &ctx.heap[id] {
 
            Expression::Assignment(expr) => {
 
                let id = expr.this;
 
                self.progress_assignment_expr(ctx, id)
 
            },
 
            Expression::Binding(expr) => {
 
                let id = expr.this;
 
                self.progress_binding_expr(ctx, id)
 
            },
 
            Expression::Conditional(expr) => {
 
                let id = expr.this;
 
                self.progress_conditional_expr(ctx, id)
 
            },
 
            Expression::Binary(expr) => {
 
                let id = expr.this;
 
                self.progress_binary_expr(ctx, id)
 
            },
 
            Expression::Unary(expr) => {
 
                let id = expr.this;
 
                self.progress_unary_expr(ctx, id)
 
            },
 
            Expression::Indexing(expr) => {
 
                let id = expr.this;
 
                self.progress_indexing_expr(ctx, id)
 
            },
 
            Expression::Slicing(expr) => {
 
                let id = expr.this;
 
                self.progress_slicing_expr(ctx, id)
 
            },
 
            Expression::Select(expr) => {
 
                let id = expr.this;
 
                self.progress_select_expr(ctx, id)
 
            },
 
            Expression::Literal(expr) => {
 
                let id = expr.this;
 
                self.progress_literal_expr(ctx, id)
 
            },
 
            Expression::Cast(expr) => {
 
                let id = expr.this;
 
                self.progress_cast_expr(ctx, id)
 
            },
 
            Expression::Call(expr) => {
 
                let id = expr.this;
 
                self.progress_call_expr(ctx, id)
 
            },
 
            Expression::Variable(expr) => {
 
                let id = expr.this;
 
                self.progress_variable_expr(ctx, id)
 
            }
 
        }
 
    }
 

	
 
    fn progress_assignment_expr(&mut self, ctx: &mut Ctx, id: AssignmentExpressionId) -> Result<(), ParseError> {
 
        use AssignmentOperator as AO;
 

	
 
        let upcast_id = id.upcast();
 

	
 
        let expr = &ctx.heap[id];
 
        let arg1_expr_id = expr.left;
 
        let arg2_expr_id = expr.right;
 

	
 
        debug_log!("Assignment expr '{:?}': {}", expr.operation, upcast_id.index);
 
        debug_log!(" * Before:");
 
        debug_log!("   - Arg1 type: {}", self.debug_get_display_name(ctx, arg1_expr_id));
 
        debug_log!("   - Arg2 type: {}", self.debug_get_display_name(ctx, arg2_expr_id));
 
        debug_log!("   - Expr type: {}", self.debug_get_display_name(ctx, upcast_id));
 

	
 
        // Assignment does not return anything (it operates like a statement)
 
        let progress_expr = self.apply_forced_constraint(ctx, upcast_id, &VOID_TEMPLATE)?;
 

	
 
        // Apply forced constraint to LHS value
 
        let progress_forced = match expr.operation {
 
            AO::Set =>
 
                false,
 
            AO::Concatenated =>
 
                self.apply_template_constraint(ctx, arg1_expr_id, &ARRAYLIKE_TEMPLATE)?,
 
            AO::Multiplied | AO::Divided | AO::Added | AO::Subtracted =>
 
                self.apply_template_constraint(ctx, arg1_expr_id, &NUMBERLIKE_TEMPLATE)?,
 
            AO::Remained | AO::ShiftedLeft | AO::ShiftedRight |
 
@@ -1900,233 +1935,233 @@ impl PassTyping {
 
        let progress_subject_base = self.apply_template_constraint(ctx, subject_id, &ARRAYLIKE_TEMPLATE)?;
 
        let progress_idx_base = self.apply_template_constraint(ctx, from_id, &INTEGERLIKE_TEMPLATE)?;
 
        let (progress_from, progress_to) = self.apply_equal2_constraint(ctx, upcast_id, from_id, 0, to_id, 0)?;
 

	
 
        let (progress_expr, progress_subject) = match self.type_is_certainly_or_certainly_not_string(ctx, subject_id) {
 
            (true, _) => {
 
                // Certainly a string
 
                (self.apply_forced_constraint(ctx, upcast_id, &STRING_TEMPLATE)?, false)
 
            },
 
            (_, true) => {
 
                // Certainly not a string
 
                let progress_expr_base = self.apply_template_constraint(ctx, upcast_id, &SLICE_TEMPLATE)?;
 
                let (progress_expr, progress_subject) =
 
                    self.apply_equal2_constraint(ctx, upcast_id, upcast_id, 1, subject_id, 1)?;
 

	
 
                (progress_expr_base || progress_expr, progress_subject)
 
            },
 
            _ => {
 
                // Could be anything, at least attempt to progress subtype
 
                let progress_expr_base = self.apply_template_constraint(ctx, upcast_id, &ARRAYLIKE_TEMPLATE)?;
 
                let (progress_expr, progress_subject) =
 
                    self.apply_equal2_constraint(ctx, upcast_id, upcast_id, 1, subject_id, 1)?;
 

	
 
                (progress_expr_base || progress_expr, progress_subject)
 
            }
 
        };
 

	
 
        debug_log!(" * After:");
 
        debug_log!("   - Subject type [{}]: {}", progress_subject_base || progress_subject, self.debug_get_display_name(ctx, subject_id));
 
        debug_log!("   - FromIdx type [{}]: {}", progress_idx_base || progress_from, self.debug_get_display_name(ctx, from_id));
 
        debug_log!("   - ToIdx   type [{}]: {}", progress_idx_base || progress_to, self.debug_get_display_name(ctx, to_id));
 
        debug_log!("   - Expr    type [{}]: {}", progress_expr, self.debug_get_display_name(ctx, upcast_id));
 

	
 
        if progress_expr { self.queue_expr_parent(ctx, upcast_id); }
 
        if progress_subject_base || progress_subject { self.queue_expr(ctx, subject_id); }
 
        if progress_idx_base || progress_from { self.queue_expr(ctx, from_id); }
 
        if progress_idx_base || progress_to { self.queue_expr(ctx, to_id); }
 

	
 
        Ok(())
 
    }
 

	
 
    fn progress_select_expr(&mut self, ctx: &mut Ctx, id: SelectExpressionId) -> Result<(), ParseError> {
 
        let upcast_id = id.upcast();
 
        
 
        debug_log!("Select expr: {}", upcast_id.index);
 
        debug_log!(" * Before:");
 
        debug_log!("   - Subject type: {}", self.debug_get_display_name(ctx, ctx.heap[id].subject));
 
        debug_log!("   - Expr    type: {}", self.debug_get_display_name(ctx, upcast_id));
 

	
 
        let subject_id = ctx.heap[id].subject;
 
        let subject_expr_idx = ctx.heap[subject_id].get_unique_id_in_definition();
 
        let select_expr = &ctx.heap[id];
 
        let expr_idx = select_expr.unique_id_in_definition;
 

	
 
        let infer_expr = &self.expr_types[expr_idx as usize];
 
        let extra_idx = infer_expr.extra_data_idx;
 

	
 
        fn determine_inference_type_instance<'a>(types: &'a TypeTable, infer_type: &InferenceType) -> Result<Option<&'a DefinedType>, ()> {
 
            for part in &infer_type.parts {
 
                if part.is_marker() || !part.is_concrete() {
 
                    continue;
 
                }
 

	
 
                // Part is concrete, check if it is an instance of something
 
                if let InferenceTypePart::Instance(definition_id, _num_sub) = part {
 
                    // Look up the type definition and ensure the specified field
 
                    // name exists on the struct
 
                    let definition = types.get_base_definition(definition_id);
 
                    debug_assert!(definition.is_some());
 
                    let definition = definition.unwrap();
 

	
 
                    return Ok(Some(definition))
 
                } else {
 
                    // Expected an instance of something
 
                    return Err(())
 
                }
 
            }
 

	
 
            // Nothing is concrete yet
 
            Ok(None)
 
        }
 

	
 
        if infer_expr.field_or_monomorph_idx < 0 {
 
            // We don't know the field or the definition it is pointing to yet
 
            // Not yet known, check if we can determine it
 
            let subject_type = &self.expr_types[subject_expr_idx as usize].expr_type;
 
            let type_def = determine_inference_type_instance(&ctx.types, subject_type);
 

	
 
            match type_def {
 
                Ok(Some(type_def)) => {
 
                    // Subject type is known, check if it is a
 
                    // struct and the field exists on the struct
 
                    let struct_def = if let DefinedTypeVariant::Struct(struct_def) = &type_def.definition {
 
                        struct_def
 
                    } else {
 
                        return Err(ParseError::new_error_at_span(
 
                            &ctx.module.source, select_expr.field_name.span, format!(
 
                            &ctx.module().source, select_expr.field_name.span, format!(
 
                                "Can only apply field access to structs, got a subject of type '{}'",
 
                                subject_type.display_name(&ctx.heap)
 
                            )
 
                        ));
 
                    };
 

	
 
                    let mut struct_def_id = None;
 

	
 
                    for (field_def_idx, field_def) in struct_def.fields.iter().enumerate() {
 
                        if field_def.identifier == select_expr.field_name {
 
                            // Set field definition and index
 
                            let infer_expr = &mut self.expr_types[expr_idx as usize];
 
                            infer_expr.field_or_monomorph_idx = field_def_idx as i32;
 
                            struct_def_id = Some(type_def.ast_definition);
 
                            break;
 
                        }
 
                    }
 

	
 
                    if struct_def_id.is_none() {
 
                        let ast_struct_def = ctx.heap[type_def.ast_definition].as_struct();
 
                        return Err(ParseError::new_error_at_span(
 
                            &ctx.module.source, select_expr.field_name.span, format!(
 
                            &ctx.module().source, select_expr.field_name.span, format!(
 
                                "this field does not exist on the struct '{}'",
 
                                ast_struct_def.identifier.value.as_str()
 
                            )
 
                        ))
 
                    }
 

	
 
                    // Encountered definition and field index for the
 
                    // first time
 
                    self.insert_initial_select_polymorph_data(ctx, id, struct_def_id.unwrap());
 
                },
 
                Ok(None) => {
 
                    // Type of subject is not yet known, so we
 
                    // cannot make any progress yet
 
                    return Ok(())
 
                },
 
                Err(()) => {
 
                    return Err(ParseError::new_error_at_span(
 
                        &ctx.module.source, select_expr.field_name.span, format!(
 
                        &ctx.module().source, select_expr.field_name.span, format!(
 
                            "Can only apply field access to structs, got a subject of type '{}'",
 
                            subject_type.display_name(&ctx.heap)
 
                        )
 
                    ));
 
                }
 
            }
 
        }
 

	
 
        // If here then field index is known, and the referenced struct type
 
        // information is inserted into `extra_data`. Check to see if we can
 
        // do some mutual inference.
 
        let poly_data = &mut self.extra_data[extra_idx as usize];
 
        let mut poly_progress = HashSet::new();
 

	
 
        // Apply to struct's type
 
        let signature_type: *mut _ = &mut poly_data.embedded[0];
 
        let subject_type: *mut _ = &mut self.expr_types[subject_expr_idx as usize].expr_type;
 

	
 
        let (_, progress_subject) = Self::apply_equal2_signature_constraint(
 
            ctx, upcast_id, Some(subject_id), poly_data, &mut poly_progress,
 
            signature_type, 0, subject_type, 0
 
        )?;
 

	
 
        if progress_subject {
 
            self.expr_queued.push_back(subject_expr_idx);
 
        }
 

	
 
        // Apply to field's type
 
        let signature_type: *mut _ = &mut poly_data.returned;
 
        let expr_type: *mut _ = &mut self.expr_types[expr_idx as usize].expr_type;
 

	
 
        let (_, progress_expr) = Self::apply_equal2_signature_constraint(
 
            ctx, upcast_id, None, poly_data, &mut poly_progress,
 
            signature_type, 0, expr_type, 0
 
        )?;
 

	
 
        if progress_expr {
 
            if let Some(parent_id) = ctx.heap[upcast_id].parent_expr_id() {
 
                let parent_idx = ctx.heap[parent_id].get_unique_id_in_definition();
 
                self.expr_queued.push_back(parent_idx);
 
            }
 
        }
 

	
 
        // Reapply progress in polymorphic variables to struct's type
 
        let signature_type: *mut _ = &mut poly_data.embedded[0];
 
        let subject_type: *mut _ = &mut self.expr_types[subject_expr_idx as usize].expr_type;
 

	
 
        let progress_subject = Self::apply_equal2_polyvar_constraint(
 
            poly_data, &poly_progress, signature_type, subject_type
 
        );
 

	
 
        let signature_type: *mut _ = &mut poly_data.returned;
 
        let expr_type: *mut _ = &mut self.expr_types[expr_idx as usize].expr_type;
 

	
 
        let progress_expr = Self::apply_equal2_polyvar_constraint(
 
            poly_data, &poly_progress, signature_type, expr_type
 
        );
 

	
 
        if progress_subject { self.queue_expr(ctx, subject_id); }
 
        if progress_expr { self.queue_expr_parent(ctx, upcast_id); }
 

	
 
        debug_log!(" * After:");
 
        debug_log!("   - Subject type [{}]: {}", progress_subject, self.debug_get_display_name(ctx, subject_id));
 
        debug_log!("   - Expr    type [{}]: {}", progress_expr, self.debug_get_display_name(ctx, upcast_id));
 

	
 
        Ok(())
 
    }
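
    // Illustrative, standalone sketch of the field lookup performed above: the
    // selected field name is matched against the struct definition's fields,
    // the field's index is stored, and an unknown field is reported as an
    // error. The types below are hypothetical stand-ins for the real AST and
    // type-table entries.
    fn sketch_select_field_lookup() {
        struct SketchField { name: &'static str }
        struct SketchStruct { fields: Vec<SketchField> }

        fn find_field(def: &SketchStruct, name: &str) -> Result<i32, String> {
            for (idx, field) in def.fields.iter().enumerate() {
                if field.name == name {
                    return Ok(idx as i32); // stored in `field_or_monomorph_idx`
                }
            }
            Err(format!("this field does not exist on the struct (field '{}')", name))
        }

        let def = SketchStruct {
            fields: vec![SketchField { name: "left" }, SketchField { name: "right" }],
        };
        assert_eq!(find_field(&def, "right"), Ok(1));
        assert!(find_field(&def, "up").is_err());
    }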
 

	
 
    fn progress_literal_expr(&mut self, ctx: &mut Ctx, id: LiteralExpressionId) -> Result<(), ParseError> {
 
        let upcast_id = id.upcast();
 
        let expr = &ctx.heap[id];
 
        let expr_idx = expr.unique_id_in_definition;
 
        let extra_idx = self.expr_types[expr_idx as usize].extra_data_idx;
 

	
 
        debug_log!("Literal expr: {}", upcast_id.index);
 
        debug_log!(" * Before:");
 
        debug_log!("   - Expr type: {}", self.debug_get_display_name(ctx, upcast_id));
 

	
 
        let progress_expr = match &expr.value {
 
            Literal::Null => {
 
                self.apply_template_constraint(ctx, upcast_id, &MESSAGE_TEMPLATE)?
 
            },
 
            Literal::Integer(_) => {
 
                self.apply_template_constraint(ctx, upcast_id, &INTEGERLIKE_TEMPLATE)?
 
            },
 
            Literal::True | Literal::False => {
 
                self.apply_forced_constraint(ctx, upcast_id, &BOOL_TEMPLATE)?
 
            },
 
            Literal::Character(_) => {
 
                self.apply_forced_constraint(ctx, upcast_id, &CHARACTER_TEMPLATE)?
 
            },
 
            Literal::String(_) => {
 
                self.apply_forced_constraint(ctx, upcast_id, &STRING_TEMPLATE)?
 
            },
 
            Literal::Struct(data) => {
 
                let extra = &mut self.extra_data[extra_idx as usize];
 
@@ -2366,565 +2401,565 @@ impl PassTyping {
 
                    }
 
                }
 

	
 
                // And the output should be an array of the element types
 
                let mut progress_expr = self.apply_template_constraint(ctx, upcast_id, &ARRAY_TEMPLATE)?;
 
                if !expr_elements.is_empty() {
 
                    let first_arg_id = expr_elements[0];
 
                    let (inner_expr_progress, arg_progress) = self.apply_equal2_constraint(
 
                        ctx, upcast_id, upcast_id, 1, first_arg_id, 0
 
                    )?;
 

	
 
                    progress_expr = progress_expr || inner_expr_progress;
 

	
 
                    // Note that if the array type progressed the type of the arguments,
 
                    // then we should enqueue this progression function again
 
                    // TODO: @fix Make apply_equal_n accept a start idx as well
 
                    if arg_progress { self.queue_expr(ctx, upcast_id); }
 
                }
 

	
 
                debug_log!(" * After:");
 
                debug_log!("   - Expr type [{}]: {}", progress_expr, self.debug_get_display_name(ctx, upcast_id));
 

	
 
                progress_expr
 
            },
 
        };
 

	
 
        debug_log!(" * After:");
 
        debug_log!("   - Expr type: {}", self.debug_get_display_name(ctx, upcast_id));
 

	
 
        if progress_expr { self.queue_expr_parent(ctx, upcast_id); }
 

	
 
        Ok(())
 
    }
 

	
 
    fn progress_cast_expr(&mut self, ctx: &mut Ctx, id: CastExpressionId) -> Result<(), ParseError> {
 
        let upcast_id = id.upcast();
 
        let expr = &ctx.heap[id];
 
        let expr_idx = expr.unique_id_in_definition;
 

	
 
        debug_log!("Casting expr: {}", upcast_id.index);
 
        debug_log!(" * Before:");
 
        debug_log!("   - Expr type:    {}", self.debug_get_display_name(ctx, upcast_id));
 
        debug_log!("   - Subject type: {}", self.debug_get_display_name(ctx, expr.subject));
 

	
 
        // The cast expression might have its output type fixed by the
 
        // programmer, so apply that type to the output. Apart from that,

        // casting acts as a blocker for two-way inference, so we just have to

        // wait until we know whether the cast is valid.
 
        // TODO: Another thing that has to be updated the moment the type
 
        //  inferencer is fully index/job-based
 
        let infer_type = self.determine_inference_type_from_parser_type_elements(&expr.to_type.elements, true);
 
        let expr_progress = self.apply_template_constraint(ctx, upcast_id, &infer_type.parts)?;
 

	
 
        if expr_progress {
 
            self.queue_expr_parent(ctx, upcast_id);
 
        }
 

	
 
        // Check if the two types are compatible
 
        debug_log!(" * After:");
 
        debug_log!("   - Expr type [{}]: {}", expr_progress, self.debug_get_display_name(ctx, upcast_id));
 
        debug_log!("   - Note that the subject type can never be inferred");
 
        debug_log!(" * Decision:");
 

	
 
        let subject_idx = ctx.heap[expr.subject].get_unique_id_in_definition();
 
        let expr_type = &self.expr_types[expr_idx as usize].expr_type;
 
        let subject_type = &self.expr_types[subject_idx as usize].expr_type;
 
        if !expr_type.is_done || !subject_type.is_done {
 
            // Not yet done
 
            debug_log!("   - Casting is valid: unknown as the types are not yet complete");
 
            return Ok(())
 
        }
 

	
 
        // Valid casts: (bool, integer, character) can always be cast to one
 
        // another. A cast from a type to itself is also valid.
 
        fn is_bool_int_or_char(parts: &[InferenceTypePart]) -> bool {
 
            return parts.len() == 1 && (
 
                parts[0] == InferenceTypePart::Bool ||
 
                parts[0] == InferenceTypePart::Character ||
 
                parts[0].is_concrete_integer()
 
            );
 
        }
 

	
 
        let is_valid = if is_bool_int_or_char(&expr_type.parts) && is_bool_int_or_char(&subject_type.parts) {
 
            true
 
        } else if expr_type.parts == subject_type.parts {
 
            true
 
        } else {
 
            false
 
        };
 

	
 
        debug_log!("   - Casting is valid: {}", is_valid);
 

	
 
        if !is_valid {
 
            let cast_expr = &ctx.heap[id];
 
            let subject_expr = &ctx.heap[cast_expr.subject];
 
            return Err(ParseError::new_error_str_at_span(
 
                &ctx.module.source, cast_expr.full_span, "invalid casting operation"
 
                &ctx.module().source, cast_expr.full_span, "invalid casting operation"
 
            ).with_info_at_span(
 
                &ctx.module.source, subject_expr.full_span(), format!(
 
                &ctx.module().source, subject_expr.full_span(), format!(
 
                    "cannot cast the argument type '{}' to the cast type '{}'",
 
                    subject_type.display_name(&ctx.heap),
 
                    expr_type.display_name(&ctx.heap)
 
                )
 
            ));
 
        }
 

	
 
        Ok(())
 
    }
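
    // Illustrative, standalone sketch of the cast-validity rule above: bool,
    // character and the integer types may be cast to one another, and any type
    // may be cast to itself; everything else is rejected. `SketchType` is a
    // hypothetical simplification of the inferred type parts.
    fn sketch_cast_validity() {
        #[derive(Clone, Copy, PartialEq)]
        enum SketchType { Bool, Character, UInt8, SInt32, StringType, ArrayOfU8 }

        fn is_bool_int_or_char(t: SketchType) -> bool {
            matches!(
                t,
                SketchType::Bool | SketchType::Character | SketchType::UInt8 | SketchType::SInt32
            )
        }

        fn cast_is_valid(subject: SketchType, target: SketchType) -> bool {
            (is_bool_int_or_char(subject) && is_bool_int_or_char(target)) || subject == target
        }

        assert!(cast_is_valid(SketchType::Character, SketchType::UInt8));
        assert!(cast_is_valid(SketchType::Bool, SketchType::SInt32));
        assert!(cast_is_valid(SketchType::StringType, SketchType::StringType));
        assert!(!cast_is_valid(SketchType::ArrayOfU8, SketchType::SInt32));
    }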
 

	
 
    // TODO: @cleanup, see how this can be cleaned up once I implement
 
    //  polymorphic struct/enum/union literals. These likely follow the same
 
    //  pattern as here.
 
    fn progress_call_expr(&mut self, ctx: &mut Ctx, id: CallExpressionId) -> Result<(), ParseError> {
 
        let upcast_id = id.upcast();
 
        let expr = &ctx.heap[id];
 
        let expr_idx = expr.unique_id_in_definition;
 
        let extra_idx = self.expr_types[expr_idx as usize].extra_data_idx;
 

	
 
        debug_log!("Call expr '{}': {}", ctx.heap[expr.definition].identifier().value.as_str(), upcast_id.index);
 
        debug_log!(" * Before:");
 
        debug_log!("   - Expr type: {}", self.debug_get_display_name(ctx, upcast_id));
 
        debug_log!(" * During (inferring types from arguments and return type):");
 

	
 
        let extra = &mut self.extra_data[extra_idx as usize];
 

	
 
        // Check if we can make progress using the arguments and/or return types
 
        // while keeping track of the polyvars we've extended
 
        let mut poly_progress = HashSet::new();
 
        debug_assert_eq!(extra.embedded.len(), expr.arguments.len());
 

	
 
        for (call_arg_idx, arg_id) in expr.arguments.clone().into_iter().enumerate() {
 
            let arg_expr_idx = ctx.heap[arg_id].get_unique_id_in_definition();
 
            let signature_type: *mut _ = &mut extra.embedded[call_arg_idx];
 
            let argument_type: *mut _ = &mut self.expr_types[arg_expr_idx as usize].expr_type;
 
            let (_, progress_arg) = Self::apply_equal2_signature_constraint(
 
                ctx, upcast_id, Some(arg_id), extra, &mut poly_progress,
 
                signature_type, 0, argument_type, 0
 
            )?;
 

	
 
            debug_log!(
 
                "   - Arg {} type | sig: {}, arg: {}", call_arg_idx,
 
                unsafe{&*signature_type}.display_name(&ctx.heap), 
 
                unsafe{&*argument_type}.display_name(&ctx.heap));
 

	
 
            if progress_arg {
 
                // Progressed argument expression
 
                self.expr_queued.push_back(arg_expr_idx);
 
            }
 
        }
 

	
 
        // Do the same for the return type
 
        let signature_type: *mut _ = &mut extra.returned;
 
        let expr_type: *mut _ = &mut self.expr_types[expr_idx as usize].expr_type;
 
        let (_, progress_expr) = Self::apply_equal2_signature_constraint(
 
            ctx, upcast_id, None, extra, &mut poly_progress,
 
            signature_type, 0, expr_type, 0
 
        )?;
 

	
 
        debug_log!(
 
            "   - Ret type | sig: {}, expr: {}", 
 
            unsafe{&*signature_type}.display_name(&ctx.heap), 
 
            unsafe{&*expr_type}.display_name(&ctx.heap)
 
        );
 

	
 
        if progress_expr {
 
            // TODO: @cleanup, cannot call utility self.queue_parent thingo
 
            if let Some(parent_id) = ctx.heap[upcast_id].parent_expr_id() {
 
                let parent_idx = ctx.heap[parent_id].get_unique_id_in_definition();
 
                self.expr_queued.push_back(parent_idx);
 
            }
 
        }
 

	
 
        // If we did not have an error in the polymorph inference above, then
 
        // reapplying the polymorph type to each argument type and the return
 
        // type should always succeed.
 
        debug_log!(" * During (reinferring from progressed polyvars):");
 
        for (_poly_idx, _poly_var) in extra.poly_vars.iter().enumerate() {
 
            debug_log!("   - Poly {} | sig: {}", _poly_idx, _poly_var.display_name(&ctx.heap));
 
        }
 
        // TODO: @performance If the algorithm is changed to be more "on demand
 
        //  argument re-evaluation", instead of "all-argument re-evaluation",
 
        //  then this is no longer true
 
        for arg_idx in 0..extra.embedded.len() {
 
            let signature_type: *mut _ = &mut extra.embedded[arg_idx];
 
            let arg_expr_id = expr.arguments[arg_idx];
 
            let arg_expr_idx = ctx.heap[arg_expr_id].get_unique_id_in_definition();
 
            let arg_type: *mut _ = &mut self.expr_types[arg_expr_idx as usize].expr_type;
 
            
 
            let progress_arg = Self::apply_equal2_polyvar_constraint(
 
                extra, &poly_progress,
 
                signature_type, arg_type
 
            );
 
            
 
            debug_log!(
 
                "   - Arg {} type | sig: {}, arg: {}", arg_idx, 
 
                unsafe{&*signature_type}.display_name(&ctx.heap), 
 
                unsafe{&*arg_type}.display_name(&ctx.heap)
 
            );
 
            if progress_arg {
 
                self.expr_queued.push_back(arg_expr_idx);
 
            }
 
        }
 

	
 
        // Once more for the return type
 
        let signature_type: *mut _ = &mut extra.returned;
 
        let ret_type: *mut _ = &mut self.expr_types[expr_idx as usize].expr_type;
 

	
 
        let progress_ret = Self::apply_equal2_polyvar_constraint(
 
            extra, &poly_progress, signature_type, ret_type
 
        );
 
        debug_log!(
 
            "   - Ret type | sig: {}, arg: {}", 
 
            unsafe{&*signature_type}.display_name(&ctx.heap), 
 
            unsafe{&*ret_type}.display_name(&ctx.heap)
 
        );
 
        if progress_ret {
 
            self.queue_expr_parent(ctx, upcast_id);
 
        }
 

	
 
        debug_log!(" * After:");
 
        debug_log!("   - Expr type: {}", self.debug_get_display_name(ctx, upcast_id));
 

	
 
        Ok(())
 
    }
 

	
 
    fn progress_variable_expr(&mut self, ctx: &mut Ctx, id: VariableExpressionId) -> Result<(), ParseError> {
 
        let upcast_id = id.upcast();
 
        let var_expr = &ctx.heap[id];
 
        let var_expr_idx = var_expr.unique_id_in_definition;
 
        let var_id = var_expr.declaration.unwrap();
 

	
 
        debug_log!("Variable expr '{}': {}", ctx.heap[var_id].identifier.value.as_str(), upcast_id.index);
 
        debug_log!(" * Before:");
 
        debug_log!("   - Var  type: {}", self.var_types.get(&var_id).unwrap().var_type.display_name(&ctx.heap));
 
        debug_log!("   - Expr type: {}", self.debug_get_display_name(ctx, upcast_id));
 

	
 
        // Retrieve shared variable type and expression type and apply inference
 
        let var_data = self.var_types.get_mut(&var_id).unwrap();
 
        let expr_type = &mut self.expr_types[var_expr_idx as usize].expr_type;
 

	
 
        let infer_res = unsafe{ InferenceType::infer_subtrees_for_both_types(
 
            &mut var_data.var_type as *mut _, 0, expr_type, 0
 
        ) };
 
        if infer_res == DualInferenceResult::Incompatible {
 
            let var_decl = &ctx.heap[var_id];
 
            return Err(ParseError::new_error_at_span(
 
                &ctx.module.source, var_decl.identifier.span, format!(
 
                &ctx.module().source, var_decl.identifier.span, format!(
 
                    "Conflicting types for this variable, previously assigned the type '{}'",
 
                    var_data.var_type.display_name(&ctx.heap)
 
                )
 
            ).with_info_at_span(
 
                &ctx.module.source, var_expr.identifier.span, format!(
 
                &ctx.module().source, var_expr.identifier.span, format!(
 
                    "But inferred to have incompatible type '{}' here",
 
                    expr_type.display_name(&ctx.heap)
 
                )
 
            ))
 
        }
 

	
 
        let progress_var = infer_res.modified_lhs();
 
        let progress_expr = infer_res.modified_rhs();
 

	
 
        if progress_var {
 
            // Let other variable expressions using this type progress as well
 
            for other_expr in var_data.used_at.iter() {
 
                if *other_expr != upcast_id {
 
                    let other_expr_idx = ctx.heap[*other_expr].get_unique_id_in_definition();
 
                    self.expr_queued.push_back(other_expr_idx);
 
                }
 
            }
 

	
 
            // Let a linked port know that our type has updated
 
            if let Some(linked_id) = var_data.linked_var {
 
                // Only perform one-way inference to prevent updating our own

                // type; updating it here would otherwise lead to an

                // inconsistency in the type inference algorithm.
 
                let var_type: *mut _ = &mut var_data.var_type;
 
                let link_data = self.var_types.get_mut(&linked_id).unwrap();
 

	
 
                debug_assert!(
 
                    unsafe{&*var_type}.parts[0] == InferenceTypePart::Input ||
 
                    unsafe{&*var_type}.parts[0] == InferenceTypePart::Output
 
                );
 
                debug_assert!(
 
                    link_data.var_type.parts[0] == InferenceTypePart::Input ||
 
                    link_data.var_type.parts[0] == InferenceTypePart::Output
 
                );
 
                match InferenceType::infer_subtree_for_single_type(&mut link_data.var_type, 1, &unsafe{&*var_type}.parts, 1, false) {
 
                    SingleInferenceResult::Modified => {
 
                        for other_expr in &link_data.used_at {
 
                            let other_expr_idx = ctx.heap[*other_expr].get_unique_id_in_definition();
 
                            self.expr_queued.push_back(other_expr_idx);
 
                        }
 
                    },
 
                    SingleInferenceResult::Unmodified => {},
 
                    SingleInferenceResult::Incompatible => {
 
                        let var_data = self.var_types.get(&var_id).unwrap();
 
                        let link_data = self.var_types.get(&linked_id).unwrap();
 
                        let var_decl = &ctx.heap[var_id];
 
                        let link_decl = &ctx.heap[linked_id];
 

	
 
                        return Err(ParseError::new_error_at_span(
 
                            &ctx.module.source, var_decl.identifier.span, format!(
 
                            &ctx.module().source, var_decl.identifier.span, format!(
 
                                "Conflicting types for this variable, assigned the type '{}'",
 
                                var_data.var_type.display_name(&ctx.heap)
 
                            )
 
                        ).with_info_at_span(
 
                            &ctx.module.source, link_decl.identifier.span, format!(
 
                            &ctx.module().source, link_decl.identifier.span, format!(
 
                                "Because it is incompatible with this variable, assigned the type '{}'",
 
                                link_data.var_type.display_name(&ctx.heap)
 
                            )
 
                        ));
 
                    }
 
                }
 
            }
 
        }
 
        if progress_expr { self.queue_expr_parent(ctx, upcast_id); }
 

	
 
        debug_log!(" * After:");
 
        debug_log!("   - Var  type [{}]: {}", progress_var, self.var_types.get(&var_id).unwrap().var_type.display_name(&ctx.heap));
 
        debug_log!("   - Expr type [{}]: {}", progress_expr, self.debug_get_display_name(ctx, upcast_id));
 

	
 

	
 
        Ok(())
 
    }
 

	
 
    fn queue_expr_parent(&mut self, ctx: &Ctx, expr_id: ExpressionId) {
 
        if let ExpressionParent::Expression(parent_expr_id, _) = &ctx.heap[expr_id].parent() {
 
            let expr_idx = ctx.heap[*parent_expr_id].get_unique_id_in_definition();
 
            self.expr_queued.push_back(expr_idx);
 
        }
 
    }
 

	
 
    fn queue_expr(&mut self, ctx: &Ctx, expr_id: ExpressionId) {
 
        let expr_idx = ctx.heap[expr_id].get_unique_id_in_definition();
 
        self.expr_queued.push_back(expr_idx);
 
    }
 

	
 

	
 
    // first returned bool means "certainly a string", the second "certainly not a string"
 
    fn type_is_certainly_or_certainly_not_string(&self, ctx: &Ctx, expr_id: ExpressionId) -> (bool, bool) {
 
        let expr_idx = ctx.heap[expr_id].get_unique_id_in_definition();
 
        let expr_type = &self.expr_types[expr_idx as usize].expr_type;
 
        if expr_type.is_done {
 
            if expr_type.parts[0] == InferenceTypePart::String {
 
                return (true, false);
 
            } else {
 
                return (false, true);
 
            }
 
        }
 

	
 
        (false, false)
 
    }
 

	
 
    /// Applies a template type constraint: the type associated with the
 
    /// supplied expression will be molded into the provided `template`. The

    /// constraint is also considered satisfied if the template could have been

    /// molded into the expression type instead. Hence the template may be fully specified (e.g.
 
    /// a bool) or contain "inference" variables (e.g. an array of T)
 
    fn apply_template_constraint(
 
        &mut self, ctx: &Ctx, expr_id: ExpressionId, template: &[InferenceTypePart]
 
    ) -> Result<bool, ParseError> {
 
        let expr_idx = ctx.heap[expr_id].get_unique_id_in_definition(); // TODO: @Temp
 
        let expr_type = &mut self.expr_types[expr_idx as usize].expr_type;
 
        match InferenceType::infer_subtree_for_single_type(expr_type, 0, template, 0, false) {
 
            SingleInferenceResult::Modified => Ok(true),
 
            SingleInferenceResult::Unmodified => Ok(false),
 
            SingleInferenceResult::Incompatible => Err(
 
                self.construct_template_type_error(ctx, expr_id, template)
 
            )
 
        }
 
    }
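
    // Illustrative, standalone sketch of the template constraint above: the
    // expression's (possibly partially unknown) type is molded onto a template
    // that may itself contain unknown slots, and the result is "modified",
    // "unmodified" or "incompatible". The flat part list is a hypothetical,
    // simplified stand-in for the real `InferenceType`; the "forced" variant
    // below differs only in that just the expression side may change.
    fn sketch_template_constraint() {
        #[derive(Clone, Copy, PartialEq, Debug)]
        enum P { Unknown, Array, Bool, UInt32 }

        #[derive(PartialEq, Debug)]
        enum Outcome { Modified, Unmodified, Incompatible }

        fn apply_template(expr: &mut Vec<P>, template: &[P]) -> Outcome {
            if expr.len() != template.len() {
                return Outcome::Incompatible; // the real code walks subtrees instead
            }
            let mut modified = false;
            for (e, t) in expr.iter_mut().zip(template) {
                match (*e, *t) {
                    (a, b) if a == b => {},
                    (P::Unknown, b) => { *e = b; modified = true; }, // mold the expression
                    (_, P::Unknown) => {},                           // template side is laxer
                    _ => return Outcome::Incompatible,
                }
            }
            if modified { Outcome::Modified } else { Outcome::Unmodified }
        }

        // "something of u32" molded by an ARRAYLIKE-style template [Array, Unknown]
        let mut expr = vec![P::Unknown, P::UInt32];
        assert_eq!(apply_template(&mut expr, &[P::Array, P::Unknown]), Outcome::Modified);
        assert_eq!(expr, vec![P::Array, P::UInt32]);
        // a bool-shaped template is incompatible with what is now known
        assert_eq!(apply_template(&mut expr, &[P::Bool, P::Unknown]), Outcome::Incompatible);
    }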
 

	
 
    fn apply_template_constraint_to_types(
 
        to_infer: *mut InferenceType, to_infer_start_idx: usize,
 
        template: &[InferenceTypePart], template_start_idx: usize
 
    ) -> Result<bool, ()> {
 
        match InferenceType::infer_subtree_for_single_type(
 
            unsafe{ &mut *to_infer }, to_infer_start_idx,
 
            template, template_start_idx, false
 
        ) {
 
            SingleInferenceResult::Modified => Ok(true),
 
            SingleInferenceResult::Unmodified => Ok(false),
 
            SingleInferenceResult::Incompatible => Err(()),
 
        }
 
    }
 

	
 
    /// Applies a forced constraint: the supplied expression's type MUST be
 
    /// inferred from the template, the other way around is considered invalid.
 
    fn apply_forced_constraint(
 
        &mut self, ctx: &Ctx, expr_id: ExpressionId, template: &[InferenceTypePart]
 
    ) -> Result<bool, ParseError> {
 
        let expr_idx = ctx.heap[expr_id].get_unique_id_in_definition();
 
        let expr_type = &mut self.expr_types[expr_idx as usize].expr_type;
 
        match InferenceType::infer_subtree_for_single_type(expr_type, 0, template, 0, true) {
 
            SingleInferenceResult::Modified => Ok(true),
 
            SingleInferenceResult::Unmodified => Ok(false),
 
            SingleInferenceResult::Incompatible => Err(
 
                self.construct_template_type_error(ctx, expr_id, template)
 
            )
 
        }
 
    }
 

	
 
    /// Applies a type constraint that expects the two provided types to be
 
    /// equal. We attempt to make progress in inferring the types. If the call
 
    /// is successful then the composition of all types is made equal.
 
    /// The "parent" `expr_id` is provided to construct errors.
 
    fn apply_equal2_constraint(
 
        &mut self, ctx: &Ctx, expr_id: ExpressionId,
 
        arg1_id: ExpressionId, arg1_start_idx: usize,
 
        arg2_id: ExpressionId, arg2_start_idx: usize
 
    ) -> Result<(bool, bool), ParseError> {
 
        let arg1_expr_idx = ctx.heap[arg1_id].get_unique_id_in_definition(); // TODO: @Temp
 
        let arg2_expr_idx = ctx.heap[arg2_id].get_unique_id_in_definition();
 
        let arg1_type: *mut _ = &mut self.expr_types[arg1_expr_idx as usize].expr_type;
 
        let arg2_type: *mut _ = &mut self.expr_types[arg2_expr_idx as usize].expr_type;
 

	
 
        let infer_res = unsafe{ InferenceType::infer_subtrees_for_both_types(
 
            arg1_type, arg1_start_idx,
 
            arg2_type, arg2_start_idx
 
        ) };
 
        if infer_res == DualInferenceResult::Incompatible {
 
            return Err(self.construct_arg_type_error(ctx, expr_id, arg1_id, arg2_id));
 
        }
 

	
 
        Ok((infer_res.modified_lhs(), infer_res.modified_rhs()))
 
    }
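
    // Illustrative, standalone sketch of the equal2 constraint above: two
    // expression types are unified in both directions, and the caller is told
    // which of the two sides actually changed so that only those expressions
    // are re-queued. The flat part list is again a hypothetical simplification
    // of the real `InferenceType`.
    fn sketch_equal2_constraint() {
        #[derive(Clone, Copy, PartialEq, Debug)]
        enum P { Unknown, Bool, UInt32 }

        // Returns Ok((lhs_modified, rhs_modified)), or Err(()) when incompatible.
        fn infer_both(lhs: &mut [P], rhs: &mut [P]) -> Result<(bool, bool), ()> {
            if lhs.len() != rhs.len() { return Err(()); }
            let (mut lhs_mod, mut rhs_mod) = (false, false);
            for (a, b) in lhs.iter_mut().zip(rhs.iter_mut()) {
                match (*a, *b) {
                    (x, y) if x == y => {},
                    (P::Unknown, y) => { *a = y; lhs_mod = true; },
                    (x, P::Unknown) => { *b = x; rhs_mod = true; },
                    _ => return Err(()),
                }
            }
            Ok((lhs_mod, rhs_mod))
        }

        // e.g. for indexing: the element type flows from the array side into the result
        let mut array_elem = [P::UInt32];
        let mut index_result = [P::Unknown];
        assert_eq!(infer_both(&mut array_elem, &mut index_result), Ok((false, true)));
        assert_eq!(index_result, [P::UInt32]);
        // while two already-concrete, different types are rejected
        assert!(infer_both(&mut [P::Bool], &mut [P::UInt32]).is_err());
    }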
 

	
 
    /// Applies an equal2 constraint between a signature type (e.g. a function
 
    /// argument or struct field) and an expression whose type should match that
 
    /// expression. If we make progress on the signature, then we try to see if
 
    /// any of the embedded polymorphic types can be progressed.
 
    ///
 
    /// `outer_expr_id` is the main expression we're progressing (e.g. a 
 
    /// function call), while `expr_id` is the embedded expression we're 
 
    /// matching against the signature. `expression_type` and 
 
    /// `expression_start_idx` belong to `expr_id`.
 
    fn apply_equal2_signature_constraint(
 
        ctx: &Ctx, outer_expr_id: ExpressionId, expr_id: Option<ExpressionId>,
 
        polymorph_data: &mut ExtraData, polymorph_progress: &mut HashSet<u32>,
 
        signature_type: *mut InferenceType, signature_start_idx: usize,
 
        expression_type: *mut InferenceType, expression_start_idx: usize
 
    ) -> Result<(bool, bool), ParseError> {
 
        // Safety: all pointers distinct
 

	
 
        // Infer the signature and expression type
 
        let infer_res = unsafe { 
 
            InferenceType::infer_subtrees_for_both_types(
 
                signature_type, signature_start_idx,
 
                expression_type, expression_start_idx
 
            ) 
 
        };
 

	
 
        if infer_res == DualInferenceResult::Incompatible {
 
            // TODO: Check if I still need to use this
 
            let outer_span = ctx.heap[outer_expr_id].full_span();
 
            let (span_name, span) = match expr_id {
 
                Some(expr_id) => ("argument's", ctx.heap[expr_id].full_span()),
 
                None => ("type's", outer_span)
 
            };
 
            let (signature_display_type, expression_display_type) = unsafe { (
 
                (&*signature_type).display_name(&ctx.heap),
 
                (&*expression_type).display_name(&ctx.heap)
 
            ) };
 

	
 
            return Err(ParseError::new_error_str_at_span(
 
                &ctx.module.source, outer_span,
 
                &ctx.module().source, outer_span,
 
                "failed to fully resolve the types of this expression"
 
            ).with_info_at_span(
 
                &ctx.module.source, span, format!(
 
                &ctx.module().source, span, format!(
 
                    "because the {} signature has been resolved to '{}', but the expression has been resolved to '{}'",
 
                    span_name, signature_display_type, expression_display_type
 
                )
 
            ));
 
        }
 

	
 
        // Try to see if we can progress any of the polymorphic variables
 
        let progress_sig = infer_res.modified_lhs();
 
        let progress_expr = infer_res.modified_rhs();
 

	
 
        if progress_sig {
 
            let signature_type = unsafe{&mut *signature_type};
 
            debug_assert!(
 
                signature_type.has_marker,
 
                "made progress on signature type, but it doesn't have a marker"
 
            );
 
            for (poly_idx, poly_section) in signature_type.marker_iter() {
 
                let polymorph_type = &mut polymorph_data.poly_vars[poly_idx as usize];
 
                match Self::apply_template_constraint_to_types(
 
                    polymorph_type, 0, poly_section, 0
 
                ) {
 
                    Ok(true) => { polymorph_progress.insert(poly_idx); },
 
                    Ok(false) => {},
 
                    Err(()) => { return Err(Self::construct_poly_arg_error(ctx, polymorph_data, outer_expr_id))}
 
                }
 
            }
 
        }
 
        Ok((progress_sig, progress_expr))
 
    }
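
    // Illustrative, standalone sketch of the signature constraint above: the
    // signature of a call contains marked slots for its polymorphic variables,
    // and progressing those slots against argument types records what the
    // variables must be. Two arguments implying different values for the same
    // variable is exactly the situation `construct_poly_arg_error` reports.
    // The representation here is hypothetical and much flatter than the real
    // marker-based `InferenceType`.
    fn sketch_signature_constraint() {
        #[derive(Clone, Copy, PartialEq, Debug)]
        enum Concrete { Bool, UInt32 }

        #[derive(Clone, Copy)]
        enum SigPart { Fixed(Concrete), Poly(usize) } // Poly(i) ~ Marker(i) + Unknown

        fn match_arguments(
            sig: &[SigPart], args: &[Concrete], poly_vars: &mut [Option<Concrete>],
        ) -> Result<(), String> {
            for (slot, arg) in sig.iter().zip(args) {
                match *slot {
                    SigPart::Fixed(expected) if expected == *arg => {},
                    SigPart::Fixed(expected) => {
                        return Err(format!("expected {:?}, got {:?}", expected, arg));
                    },
                    SigPart::Poly(idx) => match poly_vars[idx] {
                        None => poly_vars[idx] = Some(*arg),
                        Some(prev) if prev == *arg => {},
                        Some(prev) => return Err(format!(
                            "polymorphic variable inferred as both {:?} and {:?}", prev, arg
                        )),
                    },
                }
            }
            Ok(())
        }

        // func foo<T>(T a, T b, bool flag) called as foo(5, true, false): conflict on T
        let signature = [SigPart::Poly(0), SigPart::Poly(0), SigPart::Fixed(Concrete::Bool)];
        let arguments = [Concrete::UInt32, Concrete::Bool, Concrete::Bool];
        let mut poly_vars: Vec<Option<Concrete>> = vec![None];
        assert!(match_arguments(&signature, &arguments, &mut poly_vars).is_err());
    }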
 

	
 
    /// Applies equal2 constraints on the signature type for each of the 
 
    /// polymorphic variables. If the signature type is progressed then we 
 
    /// progress the expression type as well.
 
    ///
 
    /// This function assumes that the polymorphic variables have already been
 
    /// progressed as far as possible by calling 
 
    /// `apply_equal2_signature_constraint`. As such, we expect to not encounter
 
    /// any errors.
 
    ///
 
    /// This function returns true if the expression's type has been progressed
 
    fn apply_equal2_polyvar_constraint(
 
        polymorph_data: &ExtraData, _polymorph_progress: &HashSet<u32>,
 
        signature_type: *mut InferenceType, expr_type: *mut InferenceType
 
    ) -> bool {
 
        // Safety: all pointers should be distinct
 
        //         polymorph_data containers may not be modified
 
        let signature_type = unsafe{&mut *signature_type};
 
        let expr_type = unsafe{&mut *expr_type};
 

	
 
        // Iterate through markers in signature type to try and make progress
 
        // on the polymorphic variable        
 
        let mut seek_idx = 0;
 
        let mut modified_sig = false;
 
        
 
        while let Some((poly_idx, start_idx)) = signature_type.find_marker(seek_idx) {
 
            let end_idx = InferenceType::find_subtree_end_idx(&signature_type.parts, start_idx);
 
            // if polymorph_progress.contains(&poly_idx) {
 
                // Need to match subtrees
 
                let polymorph_type = &polymorph_data.poly_vars[poly_idx as usize];
 
                let modified_at_marker = Self::apply_template_constraint_to_types(
 
                    signature_type, start_idx, 
 
                    &polymorph_type.parts, 0
 
                ).expect("no failure when applying polyvar constraints");
 

	
 
                modified_sig = modified_sig || modified_at_marker;
 
            // }
 

	
 
            seek_idx = end_idx;
 
        }
 

	
 
        // If we made any progress on the signature's type, then we also need to
 
        // apply it to the expression that is supposed to match the signature.
 
        if modified_sig {
 
            match InferenceType::infer_subtree_for_single_type(
 
                expr_type, 0, &signature_type.parts, 0, true
 
            ) {
 
                SingleInferenceResult::Modified => true,
 
                SingleInferenceResult::Unmodified => false,
 
                SingleInferenceResult::Incompatible =>
 
                    unreachable!("encountered failure while reapplying modified signature to expression after polyvar inference")
 
            }
 
        } else {
 
            false
 
        }
 
    }
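
    // Illustrative, standalone sketch of the polyvar re-application above: once
    // a polymorphic variable has been resolved (e.g. from another argument),
    // every marked slot in a signature is overwritten with that value, and the
    // updated signature is then pushed one-way onto the matching expression's
    // type. The slot-based representation is hypothetical and simplified.
    fn sketch_polyvar_reapplication() {
        #[derive(Clone, Copy, PartialEq, Debug)]
        enum Slot { Unknown, Known(u32) } // Known(n) stands in for some concrete type

        // sig_markers: None = fixed slot, Some(idx) = marker for polymorphic variable idx
        fn reapply(
            sig_markers: &[Option<usize>], sig_values: &mut [Slot],
            poly_values: &[Slot], expr: &mut [Slot],
        ) -> bool {
            let mut expr_modified = false;
            for i in 0..sig_markers.len() {
                if let Some(poly_idx) = sig_markers[i] {
                    if let Slot::Known(v) = poly_values[poly_idx] {
                        sig_values[i] = Slot::Known(v);     // progress the signature...
                        if expr[i] == Slot::Unknown {
                            expr[i] = Slot::Known(v);       // ...and then the expression
                            expr_modified = true;
                        }
                    }
                }
            }
            expr_modified // true => the expression gets re-queued
        }

        let sig_markers: [Option<usize>; 2] = [Some(0), None];
        let mut sig_values = [Slot::Unknown, Slot::Known(7)];
        let poly_values = [Slot::Known(3)];                  // T was resolved elsewhere
        let mut expr = [Slot::Unknown, Slot::Known(7)];
        assert!(reapply(&sig_markers, &mut sig_values, &poly_values, &mut expr));
        assert_eq!(expr, [Slot::Known(3), Slot::Known(7)]);
    }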
 

	
 
    /// Applies a type constraint that expects all three provided types to be
 
    /// equal. In case we can make progress in inferring the types then we
 
    /// attempt to do so. If the call is successful then the composition of all
 
    /// types is made equal.
 
    fn apply_equal3_constraint(
 
        &mut self, ctx: &Ctx, expr_id: ExpressionId,
 
        arg1_id: ExpressionId, arg2_id: ExpressionId,
 
        start_idx: usize
 
    ) -> Result<(bool, bool, bool), ParseError> {
 
        // Safety: all pointers are unique
 
@@ -3406,493 +3441,498 @@ impl PassTyping {
 
    /// may be called with two kinds of intentions:
 
    /// 1. To resolve a ParserType within the body of a function, or on
 
    ///     polymorphic arguments to calls/instantiations within that body. This
 
    ///     means that the polymorphic variables are known and can be replaced
 
    ///     with the monomorph we're instantiating.
 
    /// 2. To resolve a ParserType on a called function's definition or on
 
    ///     an instantiated datatype's members. This means that the polymorphic
 
    ///     arguments inside those ParserTypes refer to the polymorphic
 
    ///     variables in the called/instantiated type's definition.
 
    /// In the second case we place InferenceTypePart::Marker instances such
 
    /// that we can perform type inference on the polymorphic variables.
 
    fn determine_inference_type_from_parser_type_elements(
 
        &mut self, elements: &[ParserTypeElement],
 
        use_definitions_known_poly_args: bool
 
    ) -> InferenceType {
 
        use ParserTypeVariant as PTV;
 
        use InferenceTypePart as ITP;
 

	
 
        let mut infer_type = Vec::with_capacity(elements.len());
 
        let mut has_inferred = false;
 
        let mut has_markers = false;
 

	
 
        for element in elements {
 
            match &element.variant {
 
                // Compiler-only types
 
                PTV::Void => { infer_type.push(ITP::Void); },
 
                PTV::InputOrOutput => { infer_type.push(ITP::PortLike); has_inferred = true },
 
                PTV::ArrayLike => { infer_type.push(ITP::ArrayLike); has_inferred = true },
 
                PTV::IntegerLike => { infer_type.push(ITP::IntegerLike); has_inferred = true },
 
                // Builtins
 
                PTV::Message => {
 
                    // TODO: @types Remove the Message -> Byte hack at some point...
 
                    infer_type.push(ITP::Message);
 
                    infer_type.push(ITP::UInt8);
 
                },
 
                PTV::Bool => { infer_type.push(ITP::Bool); },
 
                PTV::UInt8 => { infer_type.push(ITP::UInt8); },
 
                PTV::UInt16 => { infer_type.push(ITP::UInt16); },
 
                PTV::UInt32 => { infer_type.push(ITP::UInt32); },
 
                PTV::UInt64 => { infer_type.push(ITP::UInt64); },
 
                PTV::SInt8 => { infer_type.push(ITP::SInt8); },
 
                PTV::SInt16 => { infer_type.push(ITP::SInt16); },
 
                PTV::SInt32 => { infer_type.push(ITP::SInt32); },
 
                PTV::SInt64 => { infer_type.push(ITP::SInt64); },
 
                PTV::Character => { infer_type.push(ITP::Character); },
 
                PTV::String => {
 
                    infer_type.push(ITP::String);
 
                    infer_type.push(ITP::Character);
 
                },
 
                // Special markers
 
                PTV::IntegerLiteral => { unreachable!("integer literal type on variable type"); },
 
                PTV::Inferred => {
 
                    infer_type.push(ITP::Unknown);
 
                    has_inferred = true;
 
                },
 
                // With nested types
 
                PTV::Array => { infer_type.push(ITP::Array); },
 
                PTV::Input => { infer_type.push(ITP::Input); },
 
                PTV::Output => { infer_type.push(ITP::Output); },
 
                PTV::PolymorphicArgument(belongs_to_definition, poly_arg_idx) => {
 
                    let poly_arg_idx = *poly_arg_idx;
 
                    if use_definitions_known_poly_args {
 
                        // Refers to polymorphic argument on procedure we're currently processing.
 
                        // This argument is already known.
 
                        debug_assert_eq!(*belongs_to_definition, self.definition_type.definition_id());
 
                        debug_assert!((poly_arg_idx as usize) < self.poly_vars.len());
 

	
 
                        Self::determine_inference_type_from_concrete_type(
 
                            &mut infer_type, &self.poly_vars[poly_arg_idx as usize].parts
 
                        );
 
                    } else {
 
                        // Polymorphic argument has to be inferred
 
                        has_markers = true;
 
                        has_inferred = true;
 
                        infer_type.push(ITP::Marker(poly_arg_idx));
 
                        infer_type.push(ITP::Unknown)
 
                    }
 
                },
 
                PTV::Definition(definition_id, num_embedded) => {
 
                    infer_type.push(ITP::Instance(*definition_id, *num_embedded));
 
                }
 
            }
 
        }
 

	
 
        InferenceType::new(has_markers, !has_inferred, infer_type)
 
    }
 

	
 
    /// Determines the inference type from an already concrete type. Applies the
 
    /// various type "hacks" inside the type inferencer.
 
    fn determine_inference_type_from_concrete_type(parser_type: &mut Vec<InferenceTypePart>, concrete_type: &[ConcreteTypePart]) {
 
        use InferenceTypePart as ITP;
 
        use ConcreteTypePart as CTP;
 

	
 
        for concrete_part in concrete_type {
 
            match concrete_part {
 
                CTP::Void => parser_type.push(ITP::Void),
 
                CTP::Message => parser_type.push(ITP::Message),
 
                CTP::Message => {
 
                    parser_type.push(ITP::Message);
 
                    parser_type.push(ITP::UInt8)
 
                },
 
                CTP::Bool => parser_type.push(ITP::Bool),
 
                CTP::UInt8 => parser_type.push(ITP::UInt8),
 
                CTP::UInt16 => parser_type.push(ITP::UInt16),
 
                CTP::UInt32 => parser_type.push(ITP::UInt32),
 
                CTP::UInt64 => parser_type.push(ITP::UInt64),
 
                CTP::SInt8 => parser_type.push(ITP::SInt8),
 
                CTP::SInt16 => parser_type.push(ITP::SInt16),
 
                CTP::SInt32 => parser_type.push(ITP::SInt32),
 
                CTP::SInt64 => parser_type.push(ITP::SInt64),
 
                CTP::Character => parser_type.push(ITP::Character),
 
                CTP::String => {
 
                    parser_type.push(ITP::String);
 
                    parser_type.push(ITP::Character)
 
                },
 
                CTP::Array => parser_type.push(ITP::Array),
 
                CTP::Slice => parser_type.push(ITP::Slice),
 
                CTP::Input => parser_type.push(ITP::Input),
 
                CTP::Output => parser_type.push(ITP::Output),
 
                CTP::Instance(id, num) => parser_type.push(ITP::Instance(*id, *num)),
 
                CTP::Function(_, _) => unreachable!("function type during concrete to inference type conversion"),
 
                CTP::Component(_, _) => unreachable!("component type during concrete to inference type conversion"),
 
            }
 
        }
 
    }
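
    // Illustrative, standalone sketch of the conversion "hacks" above: when a
    // concrete type is turned back into an inference type, a message expands
    // into a message of bytes and a string into a sequence of characters,
    // mirroring how the parser-type conversion pushes an extra UInt8/Character
    // part. The enums are hypothetical subsets of the real part types.
    fn sketch_concrete_to_inference_hacks() {
        enum CP { Message, String, Bool }                      // concrete parts (subset)
        #[derive(Clone, Copy, PartialEq, Debug)]
        enum IP { Message, UInt8, String, Character, Bool }    // inference parts (subset)

        fn convert(concrete: &[CP]) -> Vec<IP> {
            let mut out = Vec::new();
            for part in concrete {
                match part {
                    CP::Message => { out.push(IP::Message); out.push(IP::UInt8); },
                    CP::String => { out.push(IP::String); out.push(IP::Character); },
                    CP::Bool => out.push(IP::Bool),
                }
            }
            out
        }

        assert_eq!(convert(&[CP::Message]), vec![IP::Message, IP::UInt8]);
        assert_eq!(convert(&[CP::String, CP::Bool]), vec![IP::String, IP::Character, IP::Bool]);
    }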
 

	
 
    /// Construct an error when an expression's type does not match. This
 
    /// happens if we infer the expression type from its arguments (e.g. the
 
    /// expression type of an addition operator is the type of its arguments),

    /// but the expression type was already set by our parent (e.g. an

    /// "if statement" or a "logical not" always expects a boolean).
 
    fn construct_expr_type_error(
 
        &self, ctx: &Ctx, expr_id: ExpressionId, arg_id: ExpressionId
 
    ) -> ParseError {
 
        // TODO: Expand and provide more meaningful information for humans
 
        let expr = &ctx.heap[expr_id];
 
        let arg_expr = &ctx.heap[arg_id];
 
        let expr_idx = expr.get_unique_id_in_definition();
 
        let arg_expr_idx = arg_expr.get_unique_id_in_definition();
 
        let expr_type = &self.expr_types[expr_idx as usize].expr_type;
 
        let arg_type = &self.expr_types[arg_expr_idx as usize].expr_type;
 

	
 
        return ParseError::new_error_at_span(
 
            &ctx.module.source, expr.operation_span(), format!(
 
            &ctx.module().source, expr.operation_span(), format!(
 
                "incompatible types: this expression expected a '{}'",
 
                expr_type.display_name(&ctx.heap)
 
            )
 
        ).with_info_at_span(
 
            &ctx.module.source, arg_expr.full_span(), format!(
 
            &ctx.module().source, arg_expr.full_span(), format!(
 
                "but this expression yields a '{}'",
 
                arg_type.display_name(&ctx.heap)
 
            )
 
        )
 
    }
 

	
 
    fn construct_arg_type_error(
 
        &self, ctx: &Ctx, expr_id: ExpressionId,
 
        arg1_id: ExpressionId, arg2_id: ExpressionId
 
    ) -> ParseError {
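        // This error is reported when two arguments of the same expression
        // (e.g. the operands of a binary operator) ended up with incompatible
        // inferred types.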
 
        let expr = &ctx.heap[expr_id];
 
        let arg1 = &ctx.heap[arg1_id];
 
        let arg2 = &ctx.heap[arg2_id];
 

	
 
        let arg1_idx = arg1.get_unique_id_in_definition();
 
        let arg1_type = &self.expr_types[arg1_idx as usize].expr_type;
 
        let arg2_idx = arg2.get_unique_id_in_definition();
 
        let arg2_type = &self.expr_types[arg2_idx as usize].expr_type;
 

	
 
        return ParseError::new_error_str_at_span(
 
            &ctx.module().source, expr.operation_span(),
 
            "incompatible types: cannot apply this expression"
 
        ).with_info_at_span(
 
            &ctx.module().source, arg1.full_span(), format!(
 
                "Because this expression has type '{}'",
 
                arg1_type.display_name(&ctx.heap)
 
            )
 
        ).with_info_at_span(
 
            &ctx.module().source, arg2.full_span(), format!(
 
                "But this expression has type '{}'",
 
                arg2_type.display_name(&ctx.heap)
 
            )
 
        )
 
    }
 

	
 
    fn construct_template_type_error(
 
        &self, ctx: &Ctx, expr_id: ExpressionId, template: &[InferenceTypePart]
 
    ) -> ParseError {
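        // Reports that the expression's inferred type does not match the
        // "template" type imposed by the expression itself, passed in as a
        // slice of inference type parts.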
 
        let expr = &ctx.heap[expr_id];
 
        let expr_idx = expr.get_unique_id_in_definition();
 
        let expr_type = &self.expr_types[expr_idx as usize].expr_type;
 

	
 
        return ParseError::new_error_at_span(
 
            &ctx.module().source, expr.full_span(), format!(
 
                "incompatible types: got a '{}' but expected a '{}'",
 
                expr_type.display_name(&ctx.heap), 
 
                InferenceType::partial_display_name(&ctx.heap, template)
 
            )
 
        )
 
    }
 

	
 
    /// Constructs a human-interpretable error in the case that type inference
 
    /// of a polymorphic variable of a function call or literal construction
 
    /// has failed. This may only be caused by a pair of inference types (which may
 
    /// come from arguments or the return type) having two different inferred 
 
    /// values for that polymorphic variable.
 
    ///
 
    /// So we find this pair and construct the error using it.
 
    ///
 
    /// We assume that the expression is a function call, a literal (struct,
 
    /// enum or union), or a struct field select, and that an actual error has
 
    /// occurred.
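    ///
    /// Illustrative (hypothetical) PDL declaration: both fields use the same
    /// polymorphic variable `T`, so constructing a `Pair` literal whose two
    /// fields infer different types for `T` would trigger this error. Exact
    /// literal syntax is omitted here; only the declaration is sketched.
    ///
    /// ```pdl
    /// struct Pair<T> {
    ///     T first,
    ///     T second,
    /// }
    /// ```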
 
    fn construct_poly_arg_error(
 
        ctx: &Ctx, poly_data: &ExtraData, expr_id: ExpressionId
 
    ) -> ParseError {
 
        // Helper function to check for polymorph mismatch between two inference
 
        // types.
 
        fn has_poly_mismatch<'a>(type_a: &'a InferenceType, type_b: &'a InferenceType) -> Option<(u32, &'a [InferenceTypePart], &'a [InferenceTypePart])> {
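            // Returns the polymorphic variable marker plus the two conflicting
            // sections when the same marker appears in both types with
            // incompatible subtrees, or None when everything matches.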
 
            if !type_a.has_marker || !type_b.has_marker {
 
                return None
 
            }
 

	
 
            for (marker_a, section_a) in type_a.marker_iter() {
 
                for (marker_b, section_b) in type_b.marker_iter() {
 
                    if marker_a != marker_b {
 
                        // Not the same polymorphic variable
 
                        continue;
 
                    }
 

	
 
                    if !InferenceType::check_subtrees(section_a, 0, section_b, 0) {
 
                        // Not compatible
 
                        return Some((marker_a, section_a, section_b))
 
                    }
 
                }
 
            }
 

	
 
            None
 
        }
 

	
 
        // Helper function to check for polymorph mismatch between an inference
 
        // type and the polymorphic variables in the poly_data struct.
 
        fn has_explicit_poly_mismatch<'a>(
 
            poly_vars: &'a [InferenceType], arg: &'a InferenceType
 
        ) -> Option<(u32, &'a [InferenceTypePart], &'a [InferenceTypePart])> {
 
            for (marker, section) in arg.marker_iter() {
 
                debug_assert!((marker as usize) < poly_vars.len());
 
                let poly_section = &poly_vars[marker as usize].parts;
 
                if !InferenceType::check_subtrees(poly_section, 0, section, 0) {
 
                    return Some((marker, poly_section, section))
 
                }
 
            }
 

	
 
            None
 
        }
 

	
 
        // Helper function to retrieve the polymorphic variable name and the definition name
 
        fn get_poly_var_and_definition_name<'a>(ctx: &'a Ctx, poly_var_idx: u32, definition_id: DefinitionId) -> (&'a str, &'a str) {
 
            let definition = &ctx.heap[definition_id];
 
            let poly_var = definition.poly_vars()[poly_var_idx as usize].value.as_str();
 
            let func_name = definition.identifier().value.as_str();
 

	
 
            (poly_var, func_name)
 
        }
 

	
 
        // Helper function to construct initial error
 
        fn construct_main_error(ctx: &Ctx, poly_data: &ExtraData, poly_var_idx: u32, expr: &Expression) -> ParseError {
 
            match expr {
 
                Expression::Call(expr) => {
 
                    let (poly_var, func_name) = get_poly_var_and_definition_name(ctx, poly_var_idx, poly_data.definition_id);
 
                    return ParseError::new_error_at_span(
 
                        &ctx.module().source, expr.func_span, format!(
 
                            "Conflicting type for polymorphic variable '{}' of '{}'",
 
                            poly_var, func_name
 
                        )
 
                    )
 
                },
 
                Expression::Literal(expr) => {
 
                    let (poly_var, type_name) = get_poly_var_and_definition_name(ctx, poly_var_idx, poly_data.definition_id);
 
                    return ParseError::new_error_at_span(
 
                        &ctx.module().source, expr.span, format!(
 
                            "Conflicting type for polymorphic variable '{}' of instantiation of '{}'",
 
                            poly_var, type_name
 
                        )
 
                    );
 
                },
 
                Expression::Select(expr) => {
 
                    let (poly_var, struct_name) = get_poly_var_and_definition_name(ctx, poly_var_idx, poly_data.definition_id);
 
                    return ParseError::new_error_at_span(
 
                        &ctx.module().source, expr.full_span, format!(
 
                            "Conflicting type for polymorphic variable '{}' while accessing field '{}' of '{}'",
 
                            poly_var, expr.field_name.value.as_str(), struct_name
 
                        )
 
                    )
 
                }
 
                _ => unreachable!("called construct_poly_arg_error without an expected expression, got: {:?}", expr)
 
            }
 
        }
 

	
 
        // Actual checking
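        // We look for the conflicting pair in a fixed order: first the return
        // type against itself, then every pair of arguments (and each argument
        // against the return type), and finally the arguments and the return
        // type against any explicitly specified polymorphic variables.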
 
        let expr = &ctx.heap[expr_id];
 
        let (expr_args, expr_return_name) = match expr {
 
            Expression::Call(expr) => 
 
                (
 
                    expr.arguments.clone(),
 
                    "return type"
 
                ),
 
            Expression::Literal(expr) => {
 
                let expressions = match &expr.value {
 
                    Literal::Struct(v) => v.fields.iter()
 
                        .map(|f| f.value)
 
                        .collect(),
 
                    Literal::Enum(_) => Vec::new(),
 
                    Literal::Union(v) => v.values.clone(),
 
                    _ => unreachable!()
 
                };
 

	
 
                ( expressions, "literal" )
 
            },
 
            Expression::Select(expr) =>
 
                // Select expression uses the polymorphic variables of the 
 
                // struct it is accessing, so get the subject expression.
 
                (
 
                    vec![expr.subject],
 
                    "selected field"
 
                ),
 
            _ => unreachable!(),
 
        };
 

	
 
        // - check return type with itself
 
        if let Some((poly_idx, section_a, section_b)) = has_poly_mismatch(
 
            &poly_data.returned, &poly_data.returned
 
        ) {
 
            return construct_main_error(ctx, poly_data, poly_idx, expr)
 
                .with_info_at_span(
 
                    &ctx.module().source, expr.full_span(), format!(
 
                        "The {} inferred the conflicting types '{}' and '{}'",
 
                        expr_return_name,
 
                        InferenceType::partial_display_name(&ctx.heap, section_a),
 
                        InferenceType::partial_display_name(&ctx.heap, section_b)
 
                    )
 
                );
 
        }
 

	
 
        // - check arguments with each other argument and with return type
 
        for (arg_a_idx, arg_a) in poly_data.embedded.iter().enumerate() {
 
            for (arg_b_idx, arg_b) in poly_data.embedded.iter().enumerate() {
 
                if arg_b_idx > arg_a_idx {
 
                    break;
 
                }
 

	
 
                if let Some((poly_idx, section_a, section_b)) = has_poly_mismatch(&arg_a, &arg_b) {
 
                    let error = construct_main_error(ctx, poly_data, poly_idx, expr);
 
                    if arg_a_idx == arg_b_idx {
 
                        // Same argument
 
                        let arg = &ctx.heap[expr_args[arg_a_idx]];
 
                        return error.with_info_at_span(
 
                            &ctx.module().source, arg.full_span(), format!(
 
                                "This argument inferred the conflicting types '{}' and '{}'",
 
                                InferenceType::partial_display_name(&ctx.heap, section_a),
 
                                InferenceType::partial_display_name(&ctx.heap, section_b)
 
                            )
 
                        );
 
                    } else {
 
                        let arg_a = &ctx.heap[expr_args[arg_a_idx]];
 
                        let arg_b = &ctx.heap[expr_args[arg_b_idx]];
 
                        return error.with_info_at_span(
 
                            &ctx.module().source, arg_a.full_span(), format!(
 
                                "This argument inferred it to '{}'",
 
                                InferenceType::partial_display_name(&ctx.heap, section_a)
 
                            )
 
                        ).with_info_at_span(
 
                            &ctx.module().source, arg_b.full_span(), format!(
 
                                "While this argument inferred it to '{}'",
 
                                InferenceType::partial_display_name(&ctx.heap, section_b)
 
                            )
 
                        )
 
                    }
 
                }
 
            }
 

	
 
            // Check with return type
 
            if let Some((poly_idx, section_arg, section_ret)) = has_poly_mismatch(arg_a, &poly_data.returned) {
 
                let arg = &ctx.heap[expr_args[arg_a_idx]];
 
                return construct_main_error(ctx, poly_data, poly_idx, expr)
 
                    .with_info_at_span(
 
                        &ctx.module().source, arg.full_span(), format!(
 
                            "This argument inferred it to '{}'",
 
                            InferenceType::partial_display_name(&ctx.heap, section_arg)
 
                        )
 
                    )
 
                    .with_info_at_span(
 
                        &ctx.module().source, expr.full_span(), format!(
 
                            "While the {} inferred it to '{}'",
 
                            expr_return_name,
 
                            InferenceType::partial_display_name(&ctx.heap, section_ret)
 
                        )
 
                    );
 
            }
 
        }
 

	
 
        // Now check against the explicitly specified polymorphic variables (if
 
        // any).
 
        for (arg_idx, arg) in poly_data.embedded.iter().enumerate() {
 
            if let Some((poly_idx, poly_section, arg_section)) = has_explicit_poly_mismatch(&poly_data.poly_vars, arg) {
 
                let arg = &ctx.heap[expr_args[arg_idx]];
 
                return construct_main_error(ctx, poly_data, poly_idx, expr)
 
                    .with_info_at_span(
 
                        &ctx.module().source, arg.full_span(), format!(
 
                            "The polymorphic variable has type '{}' (which might have been partially inferred) while the argument inferred it to '{}'",
 
                            InferenceType::partial_display_name(&ctx.heap, poly_section),
 
                            InferenceType::partial_display_name(&ctx.heap, arg_section)
 
                        )
 
                    );
 
            }
 
        }
 

	
 
        if let Some((poly_idx, poly_section, ret_section)) = has_explicit_poly_mismatch(&poly_data.poly_vars, &poly_data.returned) {
 
            return construct_main_error(ctx, poly_data, poly_idx, expr)
 
                .with_info_at_span(
 
                    &ctx.module().source, expr.full_span(), format!(
 
                        "The polymorphic variable has type '{}' (which might have been partially inferred) while the {} inferred it to '{}'",
 
                        InferenceType::partial_display_name(&ctx.heap, poly_section),
 
                        expr_return_name,
 
                        InferenceType::partial_display_name(&ctx.heap, ret_section)
 
                    )
 
                )
 
        }
 

	
 
        unreachable!("construct_poly_arg_error without actual error found?")
 
    }
 
}
 

	
 
#[cfg(test)]
 
mod tests {
 
    use super::*;
 
    use crate::protocol::arena::Id;
 
    use InferenceTypePart as ITP;
 
    use InferenceType as IT;
 

	
 
    #[test]
 
    fn test_single_part_inference() {
 
        // lhs argument inferred from rhs
 
        let pairs = [
 
            (ITP::NumberLike, ITP::UInt8),
 
            (ITP::IntegerLike, ITP::SInt32),
 
            (ITP::Unknown, ITP::UInt64),
 
            (ITP::Unknown, ITP::Bool)
 
        ];
 
        for (lhs, rhs) in pairs.iter() {
 
            // Using infer-both
 
            let mut lhs_type = IT::new(false, false, vec![lhs.clone()]);
 
            let mut rhs_type = IT::new(false, true, vec![rhs.clone()]);
 
            let result = unsafe{ IT::infer_subtrees_for_both_types(
 
                &mut lhs_type, 0, &mut rhs_type, 0
 
            ) };
 
            assert_eq!(DualInferenceResult::First, result);
 
            assert_eq!(lhs_type.parts, rhs_type.parts);
 

	
 
            // Using infer-single
 
            let mut lhs_type = IT::new(false, false, vec![lhs.clone()]);
 
            let rhs_type = IT::new(false, true, vec![rhs.clone()]);
 
            let result = IT::infer_subtree_for_single_type(
 
                &mut lhs_type, 0, &rhs_type.parts, 0, false
 
            );
 
            assert_eq!(SingleInferenceResult::Modified, result);
 
            assert_eq!(lhs_type.parts, rhs_type.parts);
 
        }
 
    }
 

	
 
    #[test]
 
    fn test_multi_part_inference() {
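        // Each pair checks that a partially known lhs type (e.g. "array-like
        // of number-like") is fully inferred from the concrete rhs type (e.g.
        // "slice of s8"), both through dual and through single inference.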
 
        let pairs = [
 
            (vec![ITP::ArrayLike, ITP::NumberLike], vec![ITP::Slice, ITP::SInt8]),
 
            (vec![ITP::Unknown], vec![ITP::Input, ITP::Array, ITP::String, ITP::Character]),
 
            (vec![ITP::PortLike, ITP::SInt32], vec![ITP::Input, ITP::SInt32]),
 
            (vec![ITP::Unknown], vec![ITP::Output, ITP::SInt32]),
 
            (
 
                vec![ITP::Instance(Id::new(0), 2), ITP::Input, ITP::Unknown, ITP::Output, ITP::Unknown],
 
                vec![ITP::Instance(Id::new(0), 2), ITP::Input, ITP::Array, ITP::SInt32, ITP::Output, ITP::SInt32]
 
            )
 
        ];
 

	
 
        for (lhs, rhs) in pairs.iter() {
 
            let mut lhs_type = IT::new(false, false, lhs.clone());
 
            let mut rhs_type = IT::new(false, true, rhs.clone());
 
            let result = unsafe{ IT::infer_subtrees_for_both_types(
 
                &mut lhs_type, 0, &mut rhs_type, 0
 
            ) };
 
            assert_eq!(DualInferenceResult::First, result);
 
            assert_eq!(lhs_type.parts, rhs_type.parts);
 

	
 
            let mut lhs_type = IT::new(false, false, lhs.clone());
 
            let rhs_type = IT::new(false, true, rhs.clone());
 
            let result = IT::infer_subtree_for_single_type(
 
                &mut lhs_type, 0, &rhs_type.parts, 0, false
 
            );
 
            assert_eq!(SingleInferenceResult::Modified, result);
 
            assert_eq!(lhs_type.parts, rhs_type.parts)
 
        }
 
    }
 
}
 
\ No newline at end of file

Changeset was too big and was cut off...
