The JavaScript implementation of automerge sorts actor IDs
lexicographically when encoding changes, whereas we were sorting actor
IDs in the order they appear in the change we're encoding. This meant
that the indices we assigned to operations in the encoded change
differed from those the JavaScript implementation assigns, resulting in
mismatched head errors because the hashes we produced did not match
those of the JavaScript implementation.
This change fixes the issue by sorting actor IDs lexicographically. We
make a pass over the operations in the change before encoding to collect
the actor IDs and sort them. This means we no longer need to pass a
mutable `Vec<ActorId>` to the various encode functions, which cleans
things up a little.
Fixes #240
This resulted in one failing test, which was due to the pending_changes
we report for a patch being incorrectly calculated from the missing
dependencies. I've added a test for this failure and fixed it; the
interop tests now pass.
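A hedged sketch of the distinction involved, with illustrative types
(the real structures are richer): pending_changes should count the
changes queued behind missing dependencies, not the missing dependency
hashes themselves.

```rust
#[derive(Clone, Copy, PartialEq, Eq)]
struct ChangeHash([u8; 32]);

struct Change {
    deps: Vec<ChangeHash>,
    // ... remaining fields elided
}

struct Backend {
    // Changes received but not yet applied because a dependency is missing.
    queue: Vec<Change>,
}

impl Backend {
    fn pending_changes(&self) -> usize {
        // One count per queued change: a single missing dependency hash
        // can block several queued changes, so the two counts differ.
        self.queue.len()
    }
}
```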
* Flatten object type
* Use separate construct functions
* Use separate gen_*_diff functions
* Remove maptype and seqtype from Diffs
* Preallocate ops in new_map_or_table (see the sketch after this list)
* More preallocations
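The preallocation items follow the usual pattern of reserving capacity
when the final length is known up front; a sketch with illustrative
types, not the crate's real signatures:

```rust
struct Op {
    key: String,
    // ... remaining fields elided
}

// One op is produced per key, so the final length is known: reserve it
// once instead of letting the Vec reallocate as it grows.
fn new_map_or_table(keys: &[String]) -> Vec<Op> {
    let mut ops = Vec::with_capacity(keys.len());
    for key in keys {
        ops.push(Op { key: key.clone() });
    }
    ops
}
```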
It wasn't really uncompressed, as we have both compressed and
uncompressed changes in the backend; it is just not yet encoded into
the binary format. The module separation (protocol vs backend) should
help with the distinction.
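A sketch of the distinction the rename draws, with illustrative names
and fields (the ChangeBytes enum is the one referenced in the fixes
below): a change that has not been encoded into the binary format is a
different thing from encoded bytes that happen not to be compressed.

```rust
mod protocol {
    // A plain struct, not yet serialized into the binary change format.
    pub struct Change {
        pub actor_id: Vec<u8>,
        pub seq: u64,
        // ... ops, deps, etc. elided
    }
}

mod backend {
    // Binary-format bytes: both variants are encoded, only one is compressed.
    pub enum ChangeBytes {
        Compressed(Vec<u8>),
        Uncompressed(Vec<u8>),
    }
}
```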
* Fix a panic when indexing the bytes
* Fix leb failing to read enough bytes
* Fix another panic out of bounds
* Use get rather than checking (see the sketch after this list)
* Check addition with arbitrary val
* Add backend load fuzzing (sketch below)
* Handle no ops sub
* Fix another index out of bounds
* Ensure that ChangeBytes::compressed contains the original compressed bytes, fixes #95
* Fix clippy
* Move bytes into decompress check
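Several of the items above replace panicking operations with fallible
ones; a sketch of the pattern, with an illustrative error type:

```rust
#[derive(Debug)]
enum DecodeError {
    Truncated,
    Overflow,
}

fn read_slice(bytes: &[u8], start: usize, len: usize) -> Result<&[u8], DecodeError> {
    // checked_add instead of start + len, which can wrap on fuzzer-generated values.
    let end = start.checked_add(len).ok_or(DecodeError::Overflow)?;
    // get instead of &bytes[start..end], which panics when out of bounds.
    bytes.get(start..end).ok_or(DecodeError::Truncated)
}
```

And a sketch of what the load fuzzing might look like as a cargo-fuzz
target (this assumes a Backend::load constructor taking the raw bytes;
the actual target may differ):

```rust
// fuzz/fuzz_targets/load.rs
#![no_main]
use libfuzzer_sys::fuzz_target;

fuzz_target!(|data: &[u8]| {
    // Loading arbitrary bytes must return an error, never panic.
    let _ = automerge_backend::Backend::load(data.to_vec());
});
```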
Co-authored-by: Andrew Jeffery <dev@jeffas.io>
This brings the DiffEdit types more in line with the TypeScript ones
and fixes the names. It also changes the append_edit function to use
match guards so we can take the edit by value.
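A minimal sketch of the match-guard approach, with a simplified
DiffEdit (the real enum has more variants and fields): matching on
`(edits.last_mut(), edit)` lets the guard inspect the previous edit
together with the new one, while the fallback arm still owns `edit` and
can push it by value, with no clone needed.

```rust
enum DiffEdit {
    Remove { index: usize, count: usize },
    Insert { index: usize, value: String },
}

fn append_edit(edits: &mut Vec<DiffEdit>, edit: DiffEdit) {
    match (edits.last_mut(), edit) {
        // Guard: a removal at the same index as the previous removal
        // extends it instead of appending a new edit.
        (
            Some(DiffEdit::Remove { index, count }),
            DiffEdit::Remove { index: new_index, count: new_count },
        ) if *index == new_index => *count += new_count,
        // Every other case: move the edit into the vec.
        (_, edit) => edits.push(edit),
    }
}
```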