The JavaScript implementation of automerge sorts actor IDs
lexicographically when encoding changes, whereas we were sorting actor
IDs in the order they appear in the change being encoded. This meant
that the index we assigned to operations in the encoded change differed
from the one the JavaScript implementation assigns, resulting in
mismatched head errors because the hashes we produced did not match
those of the JavaScript implementation.
This change fixes the issue by sorting actor IDs lexicographically. We
make a pass over the operations in the change before encoding to collect
the actor IDs and sort them. This means we no longer need to pass a
mutable `Vec<ActorId>` to the various encode functions, which cleans
things up a little.
Fixes #240
The last commit added a `-o` parameter to `automerge export` for
consistency with `automerge merge`. This commit makes it possible to
omit this parameter.
`merge` loads documents either from a list of paths or from standard
input, merges them and emits the merged document to standard output or a
given path.
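The path-or-stdin input handling described above might look roughly
like this. Names such as `read_inputs` are illustrative, not the actual
CLI code:

```rust
use std::fs;
use std::io::{self, Read};

/// Read each document's bytes from the given paths, or from standard
/// input when no paths are supplied (a sketch of the `merge` input
/// handling, not the real implementation).
fn read_inputs(paths: &[String]) -> io::Result<Vec<Vec<u8>>> {
    if paths.is_empty() {
        // No paths: treat stdin as a single document.
        let mut buf = Vec::new();
        io::stdin().read_to_end(&mut buf)?;
        Ok(vec![buf])
    } else {
        paths.iter().map(fs::read).collect()
    }
}
```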
It wasn't really "uncompressed", as the backend holds both compressed
and uncompressed changes; it is just not encoded into the binary
format.
The module separation (protocol vs backend) should help with the
distinction.
* cli: wip Add import command
* cli: wip Save bytes to out file
* cli: Update `export` for reader/writer interface
* cli: Update import for reader/writer interface
* cli: Add `atty` to check if stdin/out is a TTY
* cli: Require file path if not streaming in or out
* cli: Align naming of the binary changes file, whether in or out
* cli: Small documentation fixes
* cli: Allow specifying an input file for import
* cli: Add `duct` crate for testing
* cli: comment-out println that was showing up in output files
* cli: Add basic CLI tests for import, export, and import -> export
* cli: EOF NL
* cli: Remove a few redundant calls to clone
* cli: Move duct to dev-dependencies
* Remove debug message
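The TTY check behind the `atty` and "require file path if not
streaming" commits can be sketched as follows. The names here are
illustrative, and the standard library's `IsTerminal` trait (Rust
1.70+) stands in for the `atty` crate used by the commits:

```rust
use std::io::{self, IsTerminal};

/// Decide whether the user must supply an explicit file path:
/// streaming is only allowed when the stream is not a terminal, so a
/// TTY with no path given is an error (illustrative, not the CLI's
/// actual logic).
fn path_required(stream_is_tty: bool, path_given: bool) -> bool {
    stream_is_tty && !path_given
}

fn check_output(path_given: bool) -> Result<(), String> {
    // Refuse to dump binary changes straight to an interactive terminal.
    if path_required(io::stdout().is_terminal(), path_given) {
        return Err("output file path required when stdout is a TTY".into());
    }
    Ok(())
}
```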