Devirtualization and Static Polymorphism

david.alvarezrosa.com

37 points

dalvrosa

7 hours ago


16 comments

gignico 4 hours ago

> Under the hood, a virtual table (vtable) is created for each class, and a pointer (vptr) to the vtable is added to each instance.

Coming from C++ I assumed this was the only way, but Rust has an interesting approach where individual objects pay no per-instance cost because virtual dispatch is handled by fat pointers. You carry the `vptr` around in a fat pointer (`&dyn MyTrait`) only when you actually need dynamic dispatch, not in every instance.

  • cataphract 3 hours ago

    There have been type-erasure libraries in C++ for a longish time that allow choosing between inline vtables and inline storage. It's definitely been a widely talked-about technique for at least 10 years (I see talks about Dyno from 2017).

  • anon291 an hour ago

    This is the standard type class approach. Haskell does the same thing.

  • dalvrosa 4 hours ago

    Good point, thanks for sharing!

hinkley 4 hours ago

I wonder if I still have the link.

One of the papers I had bookmarked when toying with my own language design was by someone who had worked out how to make interface dispatch as fast as or faster than vtables, by using perfect hashing and treating the vtable as a hash table instead of a list.

You can also, when inlining a polymorphic call, put a conditional block in that bounces back to full dispatch if the call occasionally doesn’t match the common case. The problem with polymorphic inlining though is that it quickly resembles the exact sort of code we delete and replace with polymorphic dispatch:

    if (typeof arg1 === "string") {
    } else if (typeof arg1 === …) {
    } else if (…) {
    } else if (…) {
    } else {
    }
  • anitil 4 hours ago

    I've been thinking through what features I'd want in a language if I were designing one myself, and one of my desires is to have exhaustive matches on enums (which could be made of any primitive type) and sum types. The ability to generate perfect hashes at compile time is one of the things that falls out nicely from that.

  • dalvrosa 4 hours ago

    Nice one, TIL

    One caveat with "hash vtables" is that you only really see a performance win when the interface has a lot of specializations.

    • hinkley 2 hours ago

      As I just mentioned in another reply, the problem they were trying to solve was hierarchies where it makes sense for a group of types to be constructed by the combination of two or three narrowly scoped interfaces.

      For instance, if you treat some collections as read only, you can define comprehensions across them with a single implementation. But that means the mutators have to be contained in another type, which a subset will implement, and may have covariant inputs.

  • akoboldfrying 3 hours ago

    > using the vtable as a hash table instead of a list.

    Could you explain this a bit more? The word "list" makes me think you might be thinking that virtual method lookup iterates over each element of the vtable, doing comparisons until it finds a match -- but I'm certain that this is not how virtual method invocation works in C++. The vtable is constructed at compile time and is already the simplest possible "perfect hashtable": a short, dense array with each virtual method mapping to a function pointer at a statically known index.
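    Concretely (a hand-written sketch of what the compiler effectively emits, not actual generated code), a virtual call is one vptr load, one indexed load at a statically known slot, and one indirect call:

    ```cpp
    #include <cstdio>

    // Roughly what `obj->f()` lowers to for a class with virtual methods.
    using Fn = void (*)(void* self);

    struct VTable { Fn slots[2]; };   // dense array, indices fixed at compile time

    struct Object { const VTable* vptr; int x; };

    void obj_f(void* self) { std::printf("f: %d\n", static_cast<Object*>(self)->x); }
    void obj_g(void* self) { std::printf("g: %d\n", static_cast<Object*>(self)->x); }

    const VTable object_vt = {{ obj_f, obj_g }};

    int main() {
        Object o{&object_vt, 42};
        // `o.f()` lowers to: load vptr, index slot 0, call indirectly.
        o.vptr->slots[0](&o);   // prints "f: 42"
        return 0;
    }
    ```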

    • hinkley 2 hours ago

      The problem they were trying to solve was multiple inheritance, and by nominal type not by code reuse. So interfaces, basically.

      So these guys essentially assigned a hash code to every function of every interface, and then instead of dispatching via `obj.vtable[12]` you would do modular arithmetic, `x = signature.hash % len(obj.vtable)`, and call that slot.

      I believe this was sometime around 2005-2008 and they found that it was fast enough on hardware of that era to be usable.
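      A minimal sketch of the scheme as I understand it from this description (the hash values and names are made up, and chosen by hand so they don't collide; a real implementation would search for a perfect hash per class):

      ```cpp
      #include <cstdio>

      // Hash-based interface dispatch: every interface method gets a global
      // hash code, and each class stores its implementations in a table
      // indexed by `hash % table_size` instead of by a fixed slot number.
      using Fn = void (*)(void*);

      constexpr unsigned HASH_READ  = 7;   // hypothetical per-method hash codes
      constexpr unsigned HASH_WRITE = 12;

      struct ITable { Fn slots[4]; };      // 7 % 4 == 3, 12 % 4 == 0: no collision

      void do_read(void*)  { std::puts("read");  }
      void do_write(void*) { std::puts("write"); }

      const ITable my_itable = {{ do_write, nullptr, nullptr, do_read }};

      // Dispatch: modular arithmetic instead of a statically known index.
      void call(const ITable& t, unsigned hash, void* self) {
          t.slots[hash % 4](self);
      }

      int main() {
          call(my_itable, HASH_READ, nullptr);   // prints "read"
          call(my_itable, HASH_WRITE, nullptr);  // prints "write"
          return 0;
      }
      ```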

  • anon291 an hour ago

    That's because that type of code is actually better performing than the dynamic dispatch.

    There's absolutely nothing wrong with this code. It's just that it's not as extensible.

    It's a 'closed world' representation where the code assumes it knows about every possibility. This makes extension more difficult.

    The code itself is extraordinarily good and performant.

pjmlp 5 hours ago

Nice overview, though it misses other kinds of dispatch.

With concepts, templates and compile-time execution, there is no need for CRTP, and on top of that you get better error messages about which methods are being dispatched to.

  • dalvrosa 4 hours ago

    Fair. New C++ standards are providing great tools for compile-time everything.

    But CRTP is still widely used in low-latency environments :)

TimorousBestie 4 hours ago

Good article, rare to see simple explanations of intricate C++ ideas.