According to this page, LLVM-MOS seems to be pretty soundly beaten by Oscar64 on the performance of generated code.
https://thred.github.io/c-bench-64/
I think the ideal compiler for the 6502, and maybe for any of the memory-poor 8-bit systems, would be one that supported both native code generation where speed is needed and virtual machine code for compactness. Ideally it would also support inline assembler.
The LLVM-MOS approach of reserving part of the zero page as registers is a good start, but given how valuable zero page is, it would also be useful to be able to designate individual static/global variables as zero page or not.
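For what it's worth, LLVM-MOS can already express something close to this through linker sections, if I remember right; the exact section names below are from memory, so treat them as an assumption and check the llvm-mos-sdk docs:

    /* Hedged sketch: steering one global into zero page while another
       stays in ordinary RAM. The ".zp.*" section name follows what I
       recall of the llvm-mos-sdk linker scripts; verify before use. */
    __attribute__((section(".zp.bss")))
    unsigned char hot_counter;     /* zero page: smaller, faster access */

    unsigned int cold_table[64];   /* ordinary RAM: not worth 128 zp bytes */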
I've implemented Atari 2600 library support for both LLVM-MOS and CC65, but there are too many compromises to make it suitable for writing a game.
The lack of RAM is a major factor; stack usage must be kept to a minimum, and you can forget about any kind of heap. RAM can be extended with a special mapper, but due to the lack of an R/W pin on the cartridge port, reads and writes use different address ranges, and C can't handle this without a hacky macro solution.
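Concretely, the macro workaround looks something like this; the addresses assume a Superchip-style 128-byte scheme with the write window below the read window, so treat the specifics as illustrative:

    /* Hedged sketch: extra cart RAM with separate write and read windows
       (Superchip-style: write at $F000-$F07F, read at $F080-$F0FF).
       A plain C variable can't live here, because the same byte has two
       addresses depending on the access direction. */
    #define XRAM_W ((volatile unsigned char *)0xF000)        /* write window */
    #define XRAM_R ((volatile const unsigned char *)0xF080)  /* read window  */

    #define XRAM_SET(i, v) (XRAM_W[(i)] = (v))
    #define XRAM_GET(i)    (XRAM_R[(i)])

    /* usage: XRAM_SET(5, XRAM_GET(5) + 1); */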
Not to mention the timing constraints of 2600 display kernels, page-crossing limitations, bank switching, inefficient pointer chasing, etc. etc. My intuition is you'd need an SMT solver to write a language that compiles for this system without needing inline assembly.
A very simple BASIC compiled pretty well! It did feature inline assembly, and I agree with you that this is a necessary feature, especially on the 2600!
See Batari Basic
AIUI, Oscar64 does not aim to implement a standard C/C++ compiler as LLVM does, so the LLVM-MOS approach is still very much worthwhile. You can help by figuring out which relevant optimizations LLVM-MOS seems to be missing compared to SOTA (compiled or human-written) 6502 code, and filing issues.
We already know what the main remaining issue is - LLVM-MOS's register allocator is far from optimal for the 6502 architecture. mysterymath is slowly working on what may become a more suitable allocator.
I feel like no amount of optimizations will close the gap - it's an intractable problem.
It's the performance of the generated code, not the performance of the compiler.
Aztec C had both native and interpreted code generation, back in the day.
> I think the ideal compiler for the 6502, and maybe for any of the memory-poor 8-bit systems, would be one that supported both native code generation where speed is needed and virtual machine code for compactness.
Threaded code might be a worthwhile middle-of-the-road approach that spans the range between the "native" and "pure VM interpreter" extremes.
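As a rough illustration of the shape (portable C rather than 6502 code, and a call-threaded simplification of the real thing):

    #include <stdio.h>

    /* Sketch of threaded-code dispatch: the "program" is a list of
       handler addresses, so decode cost is one indirect jump per op.
       On a 6502 the dispatch loop would be a few instructions through
       a zero-page pointer; the C only models the structure. */
    typedef void (*op_fn)(void);

    static int acc;
    static void op_inc(void)   { acc++; }
    static void op_dbl(void)   { acc *= 2; }
    static void op_print(void) { printf("%d\n", acc); }

    int main(void) {
        const op_fn program[] = { op_inc, op_inc, op_dbl, op_print, 0 };
        for (const op_fn *ip = program; *ip; ip++)
            (*ip)();   /* one indirect call per "instruction" */
        return 0;
    }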
With regard to code size in this comparison, someone associated with llvm-mos remarked that some factors are: their libc is written in C and tries to be multi-platform friendly, stdio takes up space, the division functions are large, and their float support is not asm-optimized.
I wasn't really thinking of the binary sizes presented in the benchmarks, but more in general. 6502 assembler is compact enough if you are manipulating bytes, but not if you are manipulating 16-bit pointers or doing things like array indexing, which is where a 16-bit virtual machine (with zero page registers?) would help. Obviously there is a trade-off between speed and memory size, but on a 6502 target both are an issue and it'd be useful to be able to choose - perhaps VM by default and native code for "fast" procedures or code sections.
A lot of the C library outside of math isn't going to be speed-critical - things like IO and the heap, for example - and there could also be dual versions to choose from if needed. Especially for retrocomputing, the IO devices themselves were so slow that software overhead is less important.
More often than not, the slow IO devices were coupled with optimized speed-critical code, due to cost savings or hardware simplification. A heap rarely works well on a 6502 machine - there are no 16-bit stack pointers and it's just slower than doing without. However, I tend to agree that a middle-ground 16-bit virtual machine is a great idea. The first one I ever saw was Sweet16 by Woz.
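For anyone who hasn't seen Sweet16, its core idea - sixteen 16-bit registers kept in zero page, driven by compact byte codes - can be modelled in a few lines of C. The encoding below is invented for illustration, not Sweet16's actual one:

    #include <stdint.h>
    #include <stdio.h>

    /* Toy model of a Sweet16-style VM: 16-bit registers (zero page on a
       real 6502) and byte-coded ops. Real Sweet16 packs the register
       number into the opcode byte; this sketch keeps them separate. */
    enum { SET, ADD, HALT };   /* SET r, lo, hi ; ADD rdst, rsrc */

    static uint16_t reg[16];

    static void run(const uint8_t *pc) {
        for (;;) {
            switch (*pc++) {
            case SET: { uint8_t r = *pc++;
                        reg[r]  = *pc++;
                        reg[r] |= (uint16_t)*pc++ << 8; break; }
            case ADD: { uint8_t d = *pc++; reg[d] += reg[*pc++]; break; }
            case HALT: return;
            }
        }
    }

    int main(void) {
        static const uint8_t prog[] = { SET, 0, 0x34, 0x12,  /* R0 = 0x1234 */
                                        SET, 1, 0x01, 0x00,  /* R1 = 1      */
                                        ADD, 0, 1,           /* R0 += R1    */
                                        HALT };
        run(prog);
        printf("R0 = 0x%04X\n", reg[0]);   /* prints R0 = 0x1235 */
        return 0;
    }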
I agree about the heap - too much overhead to be a great approach on such a constrained target, but of course the standard library for C has to include it all the same.
Memory is better allocated in a more customized, application-specific way, such as with an arena allocator, or by just avoiding dynamic allocation altogether if possible.
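A bump-style arena is about as small as an allocator gets; a minimal sketch (names and sizes are arbitrary):

    /* Minimal arena ("bump") allocator: allocation is a pointer bump,
       and "free" is resetting the whole arena at once, e.g. per level. */
    static unsigned char arena[2048];
    static unsigned int arena_top;

    void *arena_alloc(unsigned int n) {
        if (n > sizeof arena - arena_top)
            return 0;                     /* out of space */
        void *p = &arena[arena_top];
        arena_top += n;
        return p;
    }

    void arena_reset(void) {
        arena_top = 0;
    }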
I was co-author of Acorn's ISO-Pascal system for the 6502-based BBC Micro (16KB or 32KB of RAM) back in the day, and one part I was proud of was a pretty full-featured (for the time) code editor that was included, written in 4KB of heavily optimized assembler. The memory allocation scheme was simply to take ownership of all free RAM and keep the text before the cursor at one end of memory and the text after the cursor at the other end. As you typed and entered new text, it was just appended to the "before cursor" block, with no text movement or memory allocation needed.
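In modern terms that's essentially a gap (split) buffer; a minimal C sketch of the scheme, with buffer size and names my own:

    /* Text before the cursor grows up from the low end, text after the
       cursor grows down from the high end; the gap between is free space.
       Inserting at the cursor is a single append, with no copying. */
    #define BUF_SIZE 4096
    static char buf[BUF_SIZE];
    static unsigned int before;   /* bytes used at the low end  */
    static unsigned int after;    /* bytes used at the high end */

    void insert_char(char c) {
        if (before + after < BUF_SIZE)
            buf[before++] = c;            /* append to "before cursor" block */
    }

    void cursor_left(void) {              /* move one char across the gap */
        if (before > 0) {
            after++;
            buf[BUF_SIZE - after] = buf[--before];
        }
    }

    void cursor_right(void) {
        if (after > 0) {
            buf[before++] = buf[BUF_SIZE - after];
            after--;
        }
    }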
If it runs fast on an Apple I, it will run fine on the rest.