The JS package is now written in TypeScript, so we no longer need to manually
maintain an index.d.ts file. Generate the index.d.ts file from source
and ship it with the JS package.
This reduces the size of the generated WASM bundle to around 800kb.
Unfortunately wasm-pack doesn't allow us to use arbitrary profiles when
building, and the optimization level has to be set at the workspace root -
consequently this flag is set for all packages in the workspace. This
shouldn't really be an issue, as our dependents in the Rust world will be
setting their own optimization flags anyway.
JS packaging is complicated and testing it manually is irritating. Add a
tool in `automerge-js/e2e` which stands up a local NPM registry and
publishes the various packages to that registry for use in automated and
manual tests. Update the test script in `scripts/ci/js_tests` to run the
tests using this tool.
By moving to wasm-bindgen's `bundler` target rather than using the `web`
target we remove the need for an async initialization step on the
automerge-wasm package. This means that the automerge-js package can now
depend directly on automerge-wasm and perform initialization itself,
thus making automerge-js a drop-in replacement for the `automerge` JS
package (hopefully).
We bump the version of automerge-wasm accordingly.
Sync messages encode changes as length-prefixed byte arrays. We were
calculating the length using the uncompressed bytes of a change but
encoding the bytes of the change using the (possibly) compressed bytes.
This meant that if a change was large enough to compress then it would
fail to decode. Switch to using uncompressed bytes in sync messages.
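A minimal sketch of the invariant at play (the helpers below are illustrative, not automerge's actual code): the length prefix must count exactly the bytes that follow it.

```rust
// Illustrative sketch: whichever representation of the change is written,
// the length prefix must be computed from those same bytes.
fn encode_length_prefixed(out: &mut Vec<u8>, change_bytes: &[u8]) {
    write_uleb128(out, change_bytes.len() as u64); // length of the bytes below
    out.extend_from_slice(change_bytes); // bug: compressed bytes were written
                                         // here while the length counted the
                                         // uncompressed ones
}

// Unsigned LEB128, the variable-length integer encoding automerge uses.
fn write_uleb128(out: &mut Vec<u8>, mut n: u64) {
    loop {
        let mut byte = (n & 0x7f) as u8;
        n >>= 7;
        if n != 0 {
            byte |= 0x80;
        }
        out.push(byte);
        if n == 0 {
            break;
        }
    }
}
```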
The logic for loading compressed document chunks has a check that the
`max_op` of a change is valid. This check was overly strict: it required
that the max op be strictly larger than the max op of the previous
change, which rejects valid documents containing changes with no ops in
them, in which case the max op can be equal to the max op of the
previous change. Loosen the logic to allow empty changes.
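A sketch of the loosened check, assuming we know each change's op count (names are mine, not the real internals):

```rust
// Illustrative: an empty change allocates no op ids, so its max_op may
// legitimately equal the previous change's max_op.
fn max_op_is_valid(prev_max_op: u64, max_op: u64, num_ops: u64) -> bool {
    if num_ops == 0 {
        max_op >= prev_max_op // was: max_op > prev_max_op, rejecting this case
    } else {
        max_op > prev_max_op
    }
}
```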
The logic for reconstructing changes from the compressed document format
records operations which set a key in an object so that it can later
reconstruct delete operations from the successor list of the document
format operations. The logic to do this was only recording set
operations and not `make*` operations. This meant that delete operations
targeting `make*` operations could not be loaded correctly.
Correctly record `make*` operations for later use in constructing delete
operations.
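A sketch of the fix in miniature, with a hypothetical op type enum standing in for the real document format internals:

```rust
// Hypothetical stand-in for the document format op types.
#[allow(dead_code)]
enum OpType {
    Set,
    MakeMap,
    MakeList,
    MakeText,
    Delete,
}

// An op must be recorded if a later delete can target it via the successor
// list: that is true of `set` ops *and* of all the `make*` ops.
fn must_record(op: &OpType) -> bool {
    matches!(
        op,
        OpType::Set | OpType::MakeMap | OpType::MakeList | OpType::MakeText
    )
}
```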
Somehow the `devDependencies` for `automerge-js` depended on the
released `automerge-wasm` package, rather than the local version, which
meant that the JS tests were not actually testing the current
implementation. Depend on the local `automerge-wasm` package to fix
this.
Occasionally one needs to debug problems in a document with a large
number of objects. In this case it is unhelpful to print a graphviz of
the whole opset because there are too many objects. Add a
`Option<Vec<ObjId>>` argument to `OpSet::visualise` to filter the
objects which are visualised.
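A self-contained sketch of the new call shape, with `ObjId` as a stand-in for the real type: `None` preserves the old behaviour, `Some` restricts the output.

```rust
type ObjId = u64; // stand-in for automerge's ObjId

// Render only the objects in `filter`, or everything when filter is None.
fn visualise(all_objects: &[ObjId], filter: Option<Vec<ObjId>>) -> String {
    all_objects
        .iter()
        .filter(|o| filter.as_ref().map_or(true, |f| f.contains(*o)))
        .map(|o| format!("obj_{o} [shape=box];\n"))
        .collect()
}
```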
The compressed document format includes, at the end of the document
chunk, the indices of the heads of the document. Older versions of the
JavaScript implementation do not include these indices, so we allow them
to be omitted when decoding.
Whilst we're here, add some tracing::trace logs to make it easier to
understand where parsing is failing.
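A sketch of the relaxed tail parse (framing heavily simplified; the real code decodes LEB128 indices, and the `tracing` crate is assumed as a dependency):

```rust
// If the chunk ends before the head indices, treat them as absent rather
// than failing: older JS writers simply did not emit them.
fn parse_trailing_heads(remaining: &[u8]) -> Option<&[u8]> {
    tracing::trace!(bytes_left = remaining.len(), "parsing trailing head indices");
    if remaining.is_empty() {
        None // omitted: the heads can be recomputed from the changes
    } else {
        Some(remaining) // real code: decode the LEB128 head indices from here
    }
}
```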
The latest clippy (0.1.65 for me) added a lint which checks for types
that implement `PartialEq` and could also implement `Eq`
(`derive_partial_eq_without_eq`). Add a `derive(Eq)` in a bunch of
places to satisfy this lint.
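For example (the type shown is illustrative):

```rust
// clippy::derive_partial_eq_without_eq: all fields are Eq, so the type can be too.
#[derive(PartialEq, Eq)] // previously just #[derive(PartialEq)]
struct ChunkType(u8);
```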
Expose `automerge::AutoCommit::with_actor()` through `AMcreate()`.
Add notes to clarify the purpose of `AMfreeStack()`, `AMpop()`,
`AMpush()`, `AMpushCallback()`, and `AMresultStack` in the source files.
Normalize the header include statement within the documentation.
Limit `AMpush()` usage within the quickstart example to variable
assignment.
Add an option for toggling the "storage-v2" feature flag in a Cargo
invocation.
Correct the `AMunknownValue` struct misnomer.
Ease the rebasing of changes to the `AMvalue` struct declaration against
pending upstream changes to it.
Now that all crates support the storage-v2 feature flag of the automerge
crate, we update CI to run tests with `--workspace --all-features`.
For some use cases the overhead of compressed columns in the document
format is not worth it. Add `Automerge::save_nocompress` to save without
compressing columns.
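A hedged usage sketch, assuming `save_nocompress` mirrors `save`'s signature:

```rust
use automerge::Automerge;

// Trade a larger byte output for lower CPU cost on hot save paths.
fn fast_snapshot(doc: &mut Automerge) -> Vec<u8> {
    doc.save_nocompress()
}
```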
This is achieved by liberal use of feature flags. Main additions are:
* Build the OpSet more efficiently when loading from compressed
document storage using a DocObserver as implemented in
`automerge::op_tree::load`
* Reimplement the parsing logic in the various types in
  `automerge::sync`
There are numerous other small changes required to get the types to line
up.
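A compact sketch of the feature-gating pattern this relies on (function bodies are placeholders):

```rust
// Both implementations coexist in the tree; the flag picks one at compile time.
#[cfg(feature = "storage-v2")]
fn load_document(bytes: &[u8]) -> Result<usize, String> {
    // new path: build the OpSet via the DocObserver in automerge::op_tree::load
    Ok(bytes.len())
}

#[cfg(not(feature = "storage-v2"))]
fn load_document(bytes: &[u8]) -> Result<usize, String> {
    // existing path, unchanged
    Ok(bytes.len())
}
```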
It is useful to be able to generate a serde value representation of an
automerge document (e.g. a `serde_json::Value`). We can do this without
an intermediate type by iterating over the keys of the document
recursively. Add `autoserde::AutoSerde` to implement this.
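A hedged usage sketch, assuming `AutoSerde` implements `From<&Automerge>` (serde_json is just one possible backend):

```rust
use automerge::{autoserde::AutoSerde, Automerge};

// AutoSerde borrows the document and implements serde::Serialize by walking
// its keys recursively, so no intermediate value tree is built.
fn to_json(doc: &Automerge) -> serde_json::Result<String> {
    serde_json::to_string(&AutoSerde::from(doc))
}
```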
Implement parsing the binary format using the new parser library and the
new encoding types. This is superior to the previous parsing
implementation in that invalid data should never cause panics, and it
exposes an interface to construct an OpSet from a saved document much
more efficiently.
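A minimal sketch of the parsing style (the error type and helpers are mine): parsers return the remaining input and an error instead of panicking on malformed data.

```rust
#[derive(Debug)]
enum ParseError {
    UnexpectedEof,
    BadMagic,
}

// Take exactly `n` bytes, returning the rest of the input alongside them.
fn take_n(n: usize, input: &[u8]) -> Result<(&[u8], &[u8]), ParseError> {
    if input.len() < n {
        return Err(ParseError::UnexpectedEof);
    }
    let (head, rest) = input.split_at(n);
    Ok((rest, head))
}

// Every automerge chunk starts with the same four magic bytes.
fn parse_magic(input: &[u8]) -> Result<&[u8], ParseError> {
    const MAGIC: [u8; 4] = [0x85, 0x6f, 0x4a, 0x83];
    let (rest, magic) = take_n(4, input)?;
    if magic == MAGIC.as_slice() {
        Ok(rest)
    } else {
        Err(ParseError::BadMagic)
    }
}
```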
The representation of changes in storage-v2 is different to the existing
representation, so add accessor methods for the fields of `Change` and
make all accesses go through them. This allows the change representation
in storage-v2 to be a drop-in replacement.
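A sketch of the accessor pattern (the fields shown are a subset and the names are illustrative):

```rust
pub struct ActorId(Vec<u8>);

pub struct Change {
    actor: ActorId, // private: callers cannot depend on the layout
    seq: u64,
}

impl Change {
    // All access goes through methods, so storage-v2 can change the
    // underlying representation without touching call sites.
    pub fn actor_id(&self) -> &ActorId {
        &self.actor
    }

    pub fn seq(&self) -> u64 {
        self.seq
    }
}
```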
The existing implementation of the columnar format elides a lot of error
handling (by converting `Err` to `None`) and doesn't allow writing to a
single chunk of memory when encoding. Implement a new set of encoding and
decoding primitives which handle errors more robustly and allow us to
use a single chunk of memory when reading and writing.
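A sketch of the single-buffer idea (LEB128 shown as the column encoding; names are illustrative): each encoder appends to one shared buffer and reports the range it wrote.

```rust
use std::ops::Range;

// Encode a column of integers into `out`, returning the byte range written
// so the chunk header can reference it. No per-column allocation is needed.
fn encode_uint_column(out: &mut Vec<u8>, values: &[u64]) -> Range<usize> {
    let start = out.len();
    for &v in values {
        let mut n = v;
        loop {
            let mut byte = (n & 0x7f) as u8;
            n >>= 7;
            if n != 0 {
                byte |= 0x80;
            }
            out.push(byte);
            if n == 0 {
                break;
            }
        }
    }
    start..out.len()
}
```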