The mechanism for assigning inputs to a specified variable is clumsy and uses
up binary space. Always using A is much simpler and doesn't seem very limiting
to me. I do it this way because there are many more "input" commands I'd like to add.
That's my mega-commit you've all been waiting for.
The code for the shell shares more routines with userspace apps than with kernel
units, because, well, its behavior is that of a userspace app, not a device
driver.
This created a weird situation with libraries and jump tables: some routines
belonging to the `kernel/` directory felt out of place there.
And then comes `apps/basic`, which will likely share even more code with the
shell. I could see myself creating huge jump tables to reuse code from the
shell. It didn't feel right.
Moreover, we'll probably want basic-like apps to optionally replace the shell.
So here I am with this huge change in the project structure. I haven't tested
all recipes on hardware yet; I'll do that later. I might have broken some...
But now, the structure feels better and the line between what belongs to
`kernel` and what belongs to `apps` feels clearer.
Instead of expecting a `USER_CODE` symbol to be set, we expect `.org` to be
set in all userspace glue code. This gives us more flexibility in how we
manage that.
Moreover, instead of making `USER_RAMSTART` mandatory, we make it default to
the end of the binary, which is adequate in the majority of cases.
Will be useful for my upcoming mega-commit... :)
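To illustrate the convention, a minimal glue file header might look like the
sketch below. This is a made-up example, not an actual recipe's glue code; the
addresses and the entry label are invented.

.org	0x8600			; load address chosen by the recipe
.equ	USER_RAMSTART	0x9000	; now optional; defaults to the end of the binary
	jp	appStart	; hypothetical app entry point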
Most of the register fiddling routines (which are now the only thing contained
in core.asm) are used by almost all userspace apps, often in inner loops.
That makes the penalty of using jump tables for them a bit too high.
Moreover, it burdens the jump tables needlessly.
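To illustrate that penalty (the symbols below are made up for the example):
every routine exported through a jump table needs a 3-byte jp entry in the
table, and every call to it goes through that extra jp.

; somewhere in the kernel-provided jump table
JUMP_ADDHL:
	jp	addHL		; 3 bytes per exported routine

; at every userspace call site
	call	JUMP_ADDHL	; lands on the jp above: +10 cycles per call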
Because this unit is very small (now that string routines are out), it makes
sense to always include it in binaries.
There's a lot of looping through that table. At first, I wanted to add some
bisecting, but the 16-bit additions and multiplications involved made the idea
a bit less appealing. I went with a very basic, hardcoded index, which should
speed things up quite a bit at a minimal complexity cost.
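As a sketch of what such a hardcoded index can look like (the labels and split
points below are invented, not the real table layout):

; A = first (uppercase) letter of the mnemonic being looked up.
; Returns in HL the row where the linear scan should start.
instrTblStart:
	ld	hl, instrTBl	; letters before 'I': scan from the top
	cp	'I'
	ret	c
	ld	hl, instrTBlI	; label placed at the first 'I' row
	cp	'R'
	ret	c
	ld	hl, instrTBlR	; label placed at the first 'R' row
	ret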
There are a couple of bit fiddling instructions that didn't have their
IX/IY variants implemented yet; without this commit, each of them
would have required a special routine. Not anymore.
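For reference, these are the kinds of forms that now assemble without
per-instruction special casing (operands chosen arbitrarily):

	bit	7, (ix+3)
	set	0, (iy-2)
	res	5, (ix+0)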
The goal is to avoid mixing those routines with "character devices"
(acia, vdp, kbd), which aren't block devices and whose routines have
different expectations.
This is a first step to fixing #64.
* Optimised parsing functions and other minor optimisations
UnsetZ has been reduced by a byte, and between 17 and 28 cycles are saved depending on branching. Since the branch depends on a being 0, the slower path should rarely be taken, so it should save 28 cycles most of the time. Including the initial call, the old version was 60 cycles, so this should be nearly twice as fast.
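For context, the new shape is presumably something like this (reconstructed
from the numbers above, not copied from the source); the slow path only runs
when a is 0:

; Unsets the Z flag. A is preserved.
unsetZ:
	or	a	; a non-zero? then Z is already unset
	ret	nz	; common case: 4+11 cycles on top of the 17-cycle call
	cp	1	; a is 0, and 0-1 is non-zero, so Z gets unset
	ret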
fmtHex has been reduced by 4 bytes, and by between 3 and 8 cycles depending on branching.
fmtHexPair had a redundant "and" removed, saving two bytes and seven cycles.
parseHex has been reduced by 7 bytes. With so much branching, it's hard to say whether it's faster, but it should be: there are fewer operations, and conditional returns are now used, which are a cycle faster than conditional jumps. I think there's more to improve here, but I haven't come up with anything yet.
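The cycle difference comes down to the exit style; schematically (this is not
the actual routine, just the two shapes being compared):

	jr	c, .error	; old style: 2 bytes, 12 cycles taken, 7 not taken
	ret	c		; new style: 1 byte, 11 cycles taken, 5 not taken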
* Major parsing optimisations
Totally reworked both parseDecimal and parseDecimalDigit
parseDecimalDigit no longer exists, as it could be replaced by an inline alternative in the 4 places it appeared. This saves one byte overall: the inline version is 4 bytes, 1 byte more than a call, and removing the function saved 5 bytes. It has also gone from 35-52 cycles (35 on error, so we'd expect 52 cycles to be the common case unless someone's really bad at programming) down to 14 cycles, so it's 2-3 times faster.
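The inline version is presumably the usual add/sub constant trick, along these
lines (4 bytes, 14 cycles; carry ends up set when the character wasn't a digit):

	; a = character to check
	add	a, 0xc6		; 0xff-'9': maps '0'-'9' onto 0xf6-0xff
	sub	0xf6		; 0xff-9: maps those onto 0-9 and sets carry
				; for anything that wasn't a digit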
parseDecimal has been reduced by a byte, and the main loop is now just about twice as fast, but with increased overhead. To put this into perspective, ignoring error cases: decimals of length 1 are 1.20x faster, length 2 is 1.41x faster, length 3 is 1.51x, length 4 is 1.57x, and length 5 and above at least 1.48x (more if there are leading zeroes or it's not the worst-case scenario).
I believe there is still room for improvement, since the first iteration could nearly be replaced with "ld l, c" (0*10=0), but when I tried this I could either add a zero check to the main loop, costing around 40 cycles and 10 bytes, or add 20 bytes to the overhead, and I don't think either option is worth it.
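For reference, the heart of such a loop is a shift-and-add multiply by 10; the
register allocation below is illustrative, not the actual code:

	; hl = hl*10 + a, where a holds the next digit's value (0-9).
	; Clobbers de.
	add	hl, hl		; hl * 2
	ld	d, h
	ld	e, l		; de = hl * 2
	add	hl, hl		; hl * 4
	add	hl, hl		; hl * 8
	add	hl, de		; hl * 10
	ld	d, 0
	ld	e, a
	add	hl, de		; add the new digit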
* Inlined parseDecimalDigit
See previous commit, and /lib/parse.asm, for details
* Fixed tabs and spacing
* Fixed tabs and spacing
* Better explanation and layout
* Corrected error in comments, and a new parseHex
5 bytes saved in parseHex. Again, it's hard to say what that does to speed; the shortest path is probably a little slower, but non-error cases should be around 9 cycles faster for decimal and 18 cycles faster for hex, as there are now only two conditional returns and no complement-carries.
* Fixed the new parseHex
I accidentally did `add 0xe9` without specifying `a`
* Commented the use of daa
I made the comments surrounding my use of daa much clearer, so what's being done there isn't quite so mystical.
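For the curious, the daa-based conversion is presumably the classic
nibble-to-ASCII trick below (written from memory, not copied from fmtHex):

	; a = value 0-15; returns the corresponding ASCII hex digit in a.
	and	0x0f
	add	a, 0x90		; 0-9 become 0x90-0x99, 10-15 become 0x9a-0x9f
	daa			; 0x9a-0x9f wrap to 0x00-0x05 with carry set
	adc	a, 0x40		; 0-9 become 0xd0-0xd9, 10-15 become 0x41-0x46
	daa			; 0xd0-0xd9 adjust to '0'-'9'; 0x41-0x46 are 'A'-'F'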
* Removed skip leading zeroes, added skip first multiply
Now, instead of skipping leading zeroes, the first digit is loaded directly into hl without first multiplying by 10. This means the first loop iteration is skipped in the overhead, making the routine 2-3 times faster overall, and it is now faster for the more common fewer-digit cases too. The number of bytes is exactly the same, and the inner loop is slightly faster as well, thanks to no longer needing to load a into c.
To be more precise about the speed increase over the current code: decimals of length 1 are 3.18x faster, length 2 is 2.50x faster, length 3 is 2.31x, length 4 is 2.22x, and length 5 and above at least 2.03x. In terms of cycles, this is around 100+(132*length) cycles saved per decimal.
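In other words (a sketch only, not the actual register allocation):

	; a = value of the first digit (0-9)
	ld	l, a
	ld	h, 0
	; ...then for each following digit: hl = hl*10 + digit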
* Fixed erroring out for all numbers >0x1999
I fixed the errors for numbers >0x1999. Sadly, it is now 6 bytes bigger, so 5 bytes larger than the original, but the speed increases should still hold.
* Fixed more errors, clearer choice of constants
* Clearer choice of constants
* Moved and indented comment about fmtHex's method
* Marked inlined parseDecimalDigit uses
* Renamed .error, removed trailing whitespace, more verbose comments.
* addHL and subHL affect flags, and are smaller
Most importantly, addHL and subHL now affect the flags as you would expect from a 16-bit addition/subtraction. This seems like preferable behaviour, though I realise any code relying on the flags not being affected would break. One byte is saved in addHL and two bytes in subHL. Due to the branching nature of the original code, it's difficult to compare speeds: subHL is either 1 or 6 cycles faster depending on branching, and addHL is between 1 cycle slower and 3 cycles faster. If the chance of a carry is 50%, addHL is expected to be a cycle faster, but for a chance of carry below 25% (so a < 0x40) it will be up to a cycle slower.
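For what it's worth, one straightforward way to get carry to behave like a
true 16-bit addition is to route the add through a register pair; this is only
an illustration of the principle, not necessarily what addHL does now:

	; hl <- hl + a; carry reflects 16-bit overflow. Clobbers de.
	ld	d, 0
	ld	e, a
	add	hl, de		; "add hl, ss" sets carry like a 16-bit add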
* Update core.asm
* Reworked one use of addHL
By essentially inlining both addHL and cpHLDE, 100 cycles are saved, and since the registers no longer need preserving, a byte is saved too.
* Corrected spelling error in comment
* Reworked second use of addHL
43 cycles saved, and no more addHL in critical loops. No bytes saved or used.
* Fixed tabs and spacing, and made a comment clearer.
* Clearer comments
* Adopted push/pop notation
I've tested RAM usage when self-assembling, and it wasn't as high
as I thought. zasm's defaults now use less than 0x1800 bytes of RAM,
making it possible, theoretically for now, for a Sega Master System
to assemble Collapse OS from within itself.
Yup, that's ultimately why I made this whole big zasm refactoring
in the previous commits: to allow for this.
But also, zasm is in much better shape now...
I'm about to split the global registry in two (labels and consts)
and the previous state of registry selection made things murky.
Now it's much better.
The dual scratchpad thing doesn't work. Things become very
complicated when it's time to write the contents back to the file:
we overwrite our own contents and end up with garbage.
This hard-binds ed to the filesystem (I liked the idea of working
only with blockdevs, though...), but it's necessary for the
upcoming `w` command. We need some way of telling the destination
we're writing to to truncate itself.
This only has meaning in the filesystem, but it's necessary to
let the file know that its registered file size has possibly
shrunk.
I thought of alternatives that would have allowed me to keep ed
blkdev-centered, but they were all too hackish for my taste.
Hence this new hard bind on files.