Compare commits

462 commits

Author SHA1 Message Date
alexjg
cb409b6ffe
docs: timestamp -> time in automerge.change examples (#548) 2023-03-09 18:10:23 +00:00
Conrad Irwin
b34b46fa16
smaller automerge c (#545)
* Fix automerge-c tests on mac

* Generate significantly smaller automerge-c builds

This cuts the size of libautomerge_core.a from 25MB to 1.6MB on macOS
and from 53MB to 2.7MB on Linux.

As a side-effect of setting codegen-units = 1 for all release builds, the
optimized wasm files are also 100KB smaller.
2023-03-09 15:09:43 +00:00
Conrad Irwin
7b747b8341
Error instead of corrupt large op counters (#543)
Since b78211ca6, OpIds have been silently truncated to 2**32. This
causes corruption when an op id overflows.

This change converts the silent error to a panic, and guards against the
panic on the codepath found by the fuzzer.
2023-03-07 16:49:04 +00:00
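
A minimal sketch of the guard's shape, with hypothetical names (the real change lives in automerge's op id handling): the silent truncation becomes a fallible conversion.

```rust
// Hypothetical sketch: a silently truncating `as u32` cast becomes a
// checked conversion, so an op counter past 2^32 is a hard error
// instead of silent corruption.
fn op_counter_to_u32(counter: u64) -> Result<u32, String> {
    u32::try_from(counter)
        .map_err(|_| format!("op counter {} exceeds 2^32", counter))
}
```
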
Conrad Irwin
2c1970f664
Fix panic on invalid action (#541)
We make the validation on parsing operations in the encoded changes stricter to avoid a possible panic when applying changes.
2023-03-04 12:09:08 +00:00
christine betts
63b761c0d1
Suppress clippy warning in parse.rs + bump toolchain (#542)
* Fix rust error in parse.rs
* Bump toolchain to 1.67.0
2023-03-03 22:42:40 +00:00
Conrad Irwin
44fa7ac416
Don't panic on missing deps of change chunks (#538)
* Fix doubly-reported ops in load of change chunks

Since c3c04128f5, observers have been
called twice when calling Automerge::load() with change chunks.

* Better handle change chunks with missing deps

Before this change Automerge::load would panic if you passed a change
chunk that was missing a dependency, or multiple change chunks not in
strict dependency order. After this change these cases will error
instead.
2023-02-27 20:12:09 +00:00
Jason Kankiewicz
8de2fa9bd4
C API 2 (#530)
The AMvalue union, AMlistItem struct, AMmapItem struct, and AMobjItem struct are gone, replaced by the AMitem struct.

The AMchangeHashes, AMchanges, AMlistItems, AMmapItems, AMobjItems, AMstrs, and AMsyncHaves iterators are gone, replaced by the AMitems iterator.

The AMitem struct is opaque, getting and setting values is now achieved exclusively through function calls.

The AMitemsNext(), AMitemsPrev(), and AMresultItem() functions return a pointer to an AMitem struct so you ultimately get the same thing whether you're iterating over a sequence or calling AMmapGet() or AMlistGet().

Calling AMitemResult() on an AMitem struct will produce a new AMresult struct referencing its storage so now the AMresult struct for an iterator can be subsequently freed without affecting the AMitem structs that were filtered out of it.

The storage for a set of AMitem structs can be recombined into a single AMresult struct by passing pointers to their corresponding AMresult structs to AMresultCat().

For C/C++ programmers, I've added AMstrCmp(), AMstrdup(), AM{idxType,objType,status,valType}ToString() and AM{idxType,objType,status,valType}FromString(). It's also now possible to pass arbitrary parameters through AMstack{Item,Items,Result}() to a callback function.
2023-02-25 18:47:00 +00:00
Philip Schatz
407faefa6e
A few setup fixes (#529)
* include deno in dependencies

* install javascript dependencies

* remove redundant operation
2023-02-15 09:23:02 +00:00
Alex Good
1425af43cd @automerge/automerge@2.0.2 2023-02-15 00:06:23 +00:00
Alex Good
c92d042c87 @automerge/automerge-wasm@0.1.24 and @automerge/automerge@2.0.2-alpha.2 2023-02-14 17:59:23 +00:00
Alex Good
9271b20cf5 Correct logic when skip = B and fix formatting
A few tests were failing which exposed the fact that if skip equals `B` (the
fan-out factor of the OpTree) then we set `skip = None`, which causes us
to attempt to return `Skip` in a non-root node. I ported the failing
test from JS to Rust and fixed the problem.

I also fixed the formatting issues.
2023-02-14 17:21:59 +00:00
Orion Henry
5e82dbc3c8 rework how skip works to push the logic into node 2023-02-14 17:21:59 +00:00
Conrad Irwin
2cd7427f35 Use our leb128 parser for values
This ensures that values in automerge documents are encoded correctly,
and that no extra data is smuggled in any LEB fields.
2023-02-09 15:46:22 +00:00
Alex Good
11f063cbfe
Remove nightly from CI 2023-02-09 11:06:24 +00:00
Alex Good
a24d536d16 Move automerge::SequenceTree to automerge_wasm::SequenceTree
The `SequenceTree` is only ever used in `automerge_wasm` so move it
there.
2023-02-05 11:08:33 +00:00
Alex Good
c5fde2802f @automerge/automerge-wasm@0.1.24 and @automerge/automerge@2.0.2-alpha.1 2023-02-03 16:31:46 +00:00
Alex Good
13a775ed9a Speed up loading by generating clocks on demand
Context: currently we store a mapping from ChangeHash -> Clock, where
`Clock` is the set of (ActorId, (Sequence number, max Op)) pairs derived
from the given change and its dependencies. This clock is used to
determine what operations are visible at a given set of heads.

Problem: populating this mapping for documents with large histories
containing many actors can be very slow as for each change we have to
allocate and merge a bunch of hashmaps.

Solution: instead of creating the clocks on load, create an adjacency
list based representation of the change graph and then derive the clock
from this graph when it is needed. Traversing even large graphs is still
almost as fast as looking up the clock in a hashmap.
2023-02-03 16:15:15 +00:00
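
A simplified sketch of the idea, under assumed field names (the real structure lives in automerge's change graph): store the graph as an adjacency list and fold a clock over the ancestors of the requested heads.

```rust
use std::collections::HashMap;

// Assumed layout: one entry per change, indexed by position.
struct ChangeGraph {
    actors: Vec<usize>,    // actor index for each change
    seqs: Vec<u64>,        // sequence number for each change
    max_ops: Vec<u64>,     // max op counter for each change
    deps: Vec<Vec<usize>>, // indices of each change's dependencies
}

impl ChangeGraph {
    /// Derive the clock for `heads`: per actor, the (seq, max_op) of the
    /// latest ancestor change by that actor.
    fn clock(&self, heads: &[usize]) -> HashMap<usize, (u64, u64)> {
        let mut clock = HashMap::new();
        let mut seen = vec![false; self.seqs.len()];
        let mut stack: Vec<usize> = heads.to_vec();
        while let Some(idx) = stack.pop() {
            if seen[idx] {
                continue;
            }
            seen[idx] = true;
            let entry = clock.entry(self.actors[idx]).or_insert((0, 0));
            if self.seqs[idx] > entry.0 {
                *entry = (self.seqs[idx], self.max_ops[idx]);
            }
            stack.extend(self.deps[idx].iter().copied());
        }
        clock
    }
}
```
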
Alex Good
1e33c9d9e0 Use Automerge::load instead of load_incremental if empty
Problem: when running the sync protocol for a new document the API
requires that the user create an empty document and then call
`receive_sync_message` on that document. This results in the OpObserver
for the new document being called with every single op in the document
history. For documents with a large history this can be extremely time
consuming, but the OpObserver doesn't need to know about all the hidden
states.

Solution: Modify `Automerge::load_with` and
`Automerge::apply_changes_with` to check if the document is empty before
applying changes. If the document _is_ empty then we don't call the
observer for every change, but instead use
`automerge::observe_current_state` to notify the observer of the new
state once all the changes have been applied.
2023-02-03 10:01:12 +00:00
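
The shape of the fast path, as a sketch over closures rather than the real method signatures:

```rust
/// Conceptual sketch: observe incrementally for a non-empty document,
/// but for an empty document apply everything unobserved and then make
/// a single pass over the final visible state.
fn load_with_observer<C>(
    doc_is_empty: bool,
    changes: Vec<C>,
    mut apply: impl FnMut(C),             // apply one change, no observer
    mut apply_observed: impl FnMut(C),    // apply one change, notifying the observer
    observe_current_state: impl FnOnce(), // one pass over the visible state
) {
    if doc_is_empty {
        for change in changes {
            apply(change);
        }
        observe_current_state();
    } else {
        for change in changes {
            apply_observed(change);
        }
    }
}
```
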
Alex Good
c3c04128f5 Only observe the current state on load
Problem: When loading a document whilst passing an `OpObserver` we call
the OpObserver for every change in the loaded document. This slows down
the loading process for two reasons: 1) we have to make a call to the
observer for every op 2) we cannot just stream the ops into the OpSet in
topological order but must instead buffer them to pass to the observer.

Solution: Construct the OpSet first, then only traverse the visible ops
in the OpSet, calling the observer. For documents with a deep history
this results in vastly fewer calls to the observer and also allows us to
construct the OpSet much more quickly. It is slightly different
semantically because the observer never gets notified of changes which
are not visible, but that shouldn't matter to most observers.
2023-02-03 10:01:12 +00:00
Alex Good
da55dfac7a refactor: make fields of Automerge private
The fields of `automerge::Automerge` were crate public, which made it
hard to change the structure of `Automerge` with confidence. Make all
fields private and put them behind accessors where necessary to allow
for easy internal changes.
2023-02-03 10:01:12 +00:00
alexjg
9195e9cb76
Fix deny errors (#518)
* Ignore deny errors on duplicate windows-sys

* Delete spurious lockfile in automerge-cli
2023-02-02 15:02:53 +00:00
dependabot[bot]
f8d5a8ea98
Bump json5 from 1.0.1 to 1.0.2 in /javascript/examples/create-react-app (#487)
Bumps [json5](https://github.com/json5/json5) from 1.0.1 to 1.0.2 in javascript/examples/create-react-app.
2023-02-01 09:15:54 +00:00
alexjg
2a9652e642
typescript: Hide API type and make SyncState opaque (#514) 2023-02-01 09:15:00 +00:00
Conrad Irwin
a6959e70e8
More robust leb128 parsing (#515)
Before this change i64 decoding did not work for negative numbers (not a
real problem because it is only used for the timestamp of a change),
and both u64 and i64 would allow overlong LEB encodings.
2023-01-31 17:54:54 +00:00
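
A sketch of what a strict unsigned decoder looks like (automerge's actual parser differs in detail): truncated input, values over 64 bits, and overlong encodings are all rejected.

```rust
#[derive(Debug, PartialEq)]
enum LebError {
    Incomplete,
    Overlong,
    TooLarge,
}

/// Strict unsigned LEB128 decode: returns the value and bytes consumed.
fn decode_u64(bytes: &[u8]) -> Result<(u64, usize), LebError> {
    let mut result: u64 = 0;
    for (i, &b) in bytes.iter().enumerate() {
        if i == 9 && b > 1 {
            return Err(LebError::TooLarge); // only one significant bit may remain
        }
        result |= ((b & 0x7f) as u64) << (7 * i);
        if b & 0x80 == 0 {
            if b == 0 && i > 0 {
                return Err(LebError::Overlong); // final byte adds nothing
            }
            return Ok((result, i + 1));
        }
    }
    Err(LebError::Incomplete)
}
```
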
alexjg
de5af2fffa
automerge-rs 0.3.0 and automerge-test 0.2.0 (#512) 2023-01-30 19:58:35 +00:00
alexjg
08801ab580
automerge-rs: Introduce ReadDoc and SyncDoc traits and add documentation (#511)
The Rust API has so far grown somewhat organically driven by the needs of the
javascript implementation. This has led to an API which is quite awkward and
unfamiliar to Rust programmers. Additionally there is no documentation to speak
of. This commit is the first movement towards cleaning things up a bit. We touch
a lot of files but the changes are all very mechanical. We introduce a few
traits to abstract over the common operations between `Automerge` and
`AutoCommit`, and add a whole bunch of documentation.

* Add a `ReadDoc` trait to describe methods which read value from a document.
  make `Transactable` extend `ReadDoc`
* Add a `SyncDoc` trait to describe methods necessary for synchronizing
  documents.
* Put the `SyncDoc` implementation for `AutoCommit` behind `AutoCommit::sync` to
  ensure that any open transactions are closed before taking part in the sync
  protocol
* Split `OpObserver` into two traits: `OpObserver` + `BranchableObserver`.
  `BranchableObserver` captures the methods which are only needed for observing
  transactions.
* Add a whole bunch of documentation.

The main changes Rust users will need to make are:

* Import the `ReadDoc` trait wherever you are using the methods which have been
  moved to it. Optionally change concrete parameters on functions to `ReadDoc`
  constraints.
* Likewise import the `SyncDoc` trait wherever you are doing synchronisation
  work
* If you are using the `AutoCommit::*_sync_message` methods you will need to add
  a call to `AutoCommit::sync()` first. E.g. `doc.generate_sync_message` becomes
  `doc.sync().generate_sync_message`
* If you have an implementation of `OpObserver` which you are using in an
  `AutoCommit` then split it into an implementation of `OpObserver` and
  `BranchableObserver`
2023-01-30 19:37:03 +00:00
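
For the sync case, the migration looks roughly like this (a sketch against the APIs described above):

```rust
use automerge::{sync::{State, SyncDoc}, AutoCommit};

fn send_message(doc: &mut AutoCommit, state: &mut State) {
    // Before: doc.generate_sync_message(state)
    // After: go through sync(), which ensures any open transaction is
    // closed before the document takes part in the protocol.
    if let Some(message) = doc.sync().generate_sync_message(state) {
        // ... encode and send `message` to the peer ...
        let _ = message;
    }
}
```
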
alexjg
89a0866272
@automerge/automerge@2.0.1 (#510) 2023-01-28 21:22:45 +00:00
Alex Good
9b6a3c8691
Update README 2023-01-28 09:32:21 +00:00
alexjg
58a7a06b75
@automerge/automerge-wasm@0.1.23 and @automerge/automerge@2.0.1-alpha.6 (#509) 2023-01-27 20:27:11 +00:00
alexjg
f428fe0169
Improve typescript types (#508) 2023-01-27 17:23:13 +00:00
Conrad Irwin
931ee7e77b
Add Fuzz Testing (#498)
* Add fuzz testing for document load

* Fix fuzz crashers and add to test suite
2023-01-25 16:03:05 +00:00
alexjg
819767cc33
fix: use saturating_sub when updating cached text width (#505)
Problem: In `automerge::query::Index::change_vis` we use `-=` to
subtract the width of an operation which is being hidden from the text
widths which we store on the index of each node in the optree. This
index represents the width of all the visible text operations in this
node and below. This was causing an integer underflow error when
encountering some list operations. More specifically, when a
`ScalarValue::Str` in a list was made invisible by a later operation
which contained a _shorter_ string, the width subtracted from the indexed
text widths could be larger than the current index.

Solution: use `saturating_sub` instead. This is technically papering
over the problem because really the width should never go below zero,
but the text widths are only relevant for text objects where the
existing logic works as advertised because we don't have a `set`
operation for text indices. A more robust solution would be to track the
type of the Index (and consequently of the `OpTree`) at the type level,
but time is limited and problems are infinite.

Also, add a lengthy description of the reason we are using
`saturating_sub` so that when I read it in about a month I don't have
to redo the painful debugging process that got me to this commit.
2023-01-23 19:19:55 +00:00
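
The pattern of the fix, with hypothetical field names:

```rust
// Sketch: an unsigned width counter must not be driven below zero.
struct Index {
    text_width: usize,
}

impl Index {
    fn change_vis(&mut self, hidden_op_width: usize) {
        // Before: self.text_width -= hidden_op_width; // can underflow
        self.text_width = self.text_width.saturating_sub(hidden_op_width);
    }
}
```
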
Alex Currie-Clark
78adbc4ff9
Update patch types (#499)
* Update `Patch` types

* Clarify that the splice patch applies to text

* Add Splice patch type to exports

* Add new patches to javascript
2023-01-23 17:02:02 +00:00
Andrew Jeffery
1f7b109dcd
Add From<SmolStr> for ScalarValue::Str (#506) 2023-01-23 17:01:41 +00:00
Conrad Irwin
98e755106f
Fix and simplify lebsize calculations (#503)
Before this change numbits_i64() was incorrect for every value of the
form 0 - 2^x. This only manifested in a visible error if x%7 == 6 (so
for -64, -8192, etc.) at which point `lebsize` would return a value one
too large, causing a panic in commit().
2023-01-23 11:01:05 +00:00
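
One correct way to size a signed LEB128 value, as a sketch (not necessarily the exact code that landed): fold the sign into the magnitude, count significant bits plus a sign bit, and divide by seven rounding up.

```rust
/// Number of bytes needed to encode `v` as signed LEB128.
fn lebsize(v: i64) -> u64 {
    // `v ^ (v >> 63)` folds the sign away: 0 -> 0, -64 -> 63, etc.
    let numbits = 64 - (v ^ (v >> 63)).leading_zeros() as u64 + 1; // +1 sign bit
    (numbits + 6) / 7 // 7 payload bits per byte, rounded up
}

fn main() {
    assert_eq!(lebsize(0), 1);
    assert_eq!(lebsize(-64), 1);   // the case the old bit count got wrong
    assert_eq!(lebsize(-8192), 2); // likewise (-2^13)
    assert_eq!(lebsize(64), 2);
}
```
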
alexjg
6b0ee6da2e
Bump js to 2.0.1-alpha.5 and automerge-wasm to 0.1.22 (#497) 2023-01-19 22:15:06 +00:00
alexjg
9b44a75f69
fix: don't panic when generating parents for hidden objects (#500)
Problem: the `OpSet::export_key` method uses `query::ElemIdPos` to
determine the index of sequence elements when exporting a key. This
query returned `None` for invisible elements. The `Parents` iterator
which is used to generate paths to objects in patches in
`automerge-wasm` used `export_key`. The end result is that applying a
remote change which deletes an object in a sequence would panic as it
tries to generate a path for an invisible object.

Solution: modify `query::ElemIdPos` to include invisible objects. This
does mean that the path generated will refer to the previous visible
object in the sequence as its index, but this is probably fine as for
an invisible object the path shouldn't be used anyway.

While we're here also change the return value of `OpSet::export_key` to
an `Option` and make `query::Index::ops` private as obeisance to the
Lady of the Golden Blade.
2023-01-19 21:11:36 +00:00
alexjg
d8baa116e7
automerge-rs: Add ExId::to_bytes (#491)
The `ExId` structure has some internal details which make lookups for
object IDs which were produced by the document doing the looking up
faster. These internal details are quite specific to the implementation
so we don't want to expose them as a public API. On the other hand, we
need to be able to serialize `ExId`s so that FFI clients can hold on to
them without referencing memory which is owned by the document (ahem,
looking at you Java).

Introduce `ExId::to_bytes` and `TryFrom<&[u8]> for ExId` implementing a
canonical serialization which includes a version tag, giving us
compatibility options if we decide to change the implementation.
2023-01-19 17:02:47 +00:00
alexjg
5629a7bec4
Various CI script fixes (#501)
Some of the scripts in scripts/ci were not reliably detecting the path
they were operating in. Additionally the deno_tests script was not
correctly picking up the ROOT_MODULE environment variable. Add more
robust path handling and fix the deno_tests script.
2023-01-19 15:38:27 +00:00
alexjg
964ae2bd81
Fix SeekOpWithPatch on optrees where only internal nodes have visible ops (#496)
In #480 we fixed an issue where `SeekOp` calculated an incorrect
insertion index on optrees where the only visible ops were on internal
nodes. We forgot to port this fix to `SeekOpWithPatch`, which has almost
the same logic just with additional work done in order to notify an
`OpObserver` of changes. Add a test and fix to `SeekOpWithPatch`
2023-01-14 11:27:48 +00:00
Alex Good
d8df1707d9
Update rust toolchain for "linux" step 2023-01-14 11:06:58 +00:00
Alex Currie-Clark
681a3f1f3f
Add github action to deploy deno package 2023-01-13 10:33:47 +00:00
Alex Good
22e9915fac automerge-wasm: publish release build in Github Action 2023-01-12 12:42:19 +00:00
Alex Good
2d8df12522
re-enable version check for WASM release 2023-01-12 11:35:48 +00:00
Alex Good
f073dbf701
use setup-node prior to attempting to publish in release action 2023-01-12 11:04:22 +00:00
Alex Good
5c02445bee
Bump automerge-wasm, again
In order to re-trigger the release action we are testing, we bump the
version which was de-bumped in the last commit.
2023-01-12 10:39:11 +00:00
Alex Good
3ef60747f4
Roll back automerge-wasm to test release action
The release action we are working on conditionally executes based on the
version of `automerge-wasm` in the previous commit. We need to trigger
it even though the version has not changed so we roll back the version
in this commit and the commit immediately following this will bump it
again.
2023-01-12 10:37:11 +00:00
Alex Good
d12bd3bb06
correctly call npm publish in release action 2023-01-12 10:27:03 +00:00
Alex Good
a0d698dc8e
Version bump js and wasm
js: 2.0.1-alpha.3
wasm: 0.1.20
2023-01-12 09:55:12 +00:00
Alex Currie-Clark
93a257896e Release action: Fix for check that WASM version has been updated before publishing 2023-01-12 09:44:48 +00:00
Alex Currie-Clark
9c3d0976c8 Add workflow to generate a deno.land and npm release when pushing a new automerge-wasm version to #main 2023-01-11 17:19:24 +00:00
Orion Henry
1ca1cc38ef
Merge pull request #484 from automerge/text2-compat
Text2 compat
2023-01-10 09:16:22 -08:00
Alex Good
0e7fb6cc10
javascript: Add @packageDocumentation TSDoc
Instead of using the `--readme` argument to `typedoc` use the
`@packageDocumentation` TSDoc tag to include the readme text in the
typedoc output.
2023-01-10 15:02:56 +00:00
Alex Good
d1220b9dd0
javascript: Use glob to list files in package.json
We have been listing all the files to be included in the distributed
package in package.json:files. This is tedious and error prone. We
change to using globs instead, to do this without also including the
test and src files when outputting declarations we add a new typescript
config file for the declaration generation which excludes tests.
2023-01-10 12:52:21 +00:00
Alex Good
6c0d102032
automerge-js: Add backwards compatibility text layer
The new text features are faster and more ergonomic but not backwards
compatible. In order to make them backwards compatible re-expose the
original functionality and move the new API under a `future` export.
This allows users to interoperably use both implementations.
2023-01-10 12:52:21 +00:00
Alex Good
5763210b07
wasm: Allow a choice of text representations
The wasm codebase assumed that clients want to represent text as a
string of characters. This is faster, but in order to enable backwards
compatibility we add a `TextRepresentation` argument to
`automerge_wasm::Automerge::new` to allow clients to choose between a
`string` or `Array<any>` representation. The `automerge_wasm::Observer`
will consult this setting to determine what kind of diffs to generate.
2023-01-10 12:52:19 +00:00
Alex Good
18a3f61704 Update rust toolchain to 1.66 2023-01-10 12:51:56 +00:00
Alex Currie-Clark
0306ade939 Update action name on IncPatch type 2023-01-06 15:23:41 +00:00
Alex Good
1e7dcdedec automerge-js: Add prettier
It's Christmas, everyone is on holiday, it's time to change every single
file in the repository!
2022-12-22 17:33:14 +00:00
Alex Good
8a645bb193 js: Enable typescript for the JS tests
The tsconfig.json was setup to not include the JS tests. Update the
config to include the tests when checking typescript and fix all the
consequent errors. None of this is semantically meaningful _except_ for
a few incorrect usages of the API which were leading to flaky tests.
Hooray for types!
2022-12-22 11:48:06 +00:00
Alex Good
4de0756bb4 Correctly handle ops on optree node boundaries
The `SeekOp` query can produce incorrect results when the optree it is
searching only has visible ops on the internal nodes. Add some tests to
demonstrate the issue as well as a fix.
2022-12-20 20:38:29 +00:00
Alex Good
d678280b57 automerge-cli: Add an examine-sync command
This is useful when receiving sync messages that behave in unexpected
ways.
2022-12-19 16:30:14 +00:00
Alex Good
f682db3039 automerge-cli: Add a flag to skip verifying heads 2022-12-19 16:30:14 +00:00
Alex Good
6da93b6adc Correctly implement colored json
My quickly thrown together implementation had some mistakes in it which
meant that the JSON produced was malformed.
2022-12-19 16:30:14 +00:00
Alex Good
0f90fe4d02 Add a method for loading a document without verifying heads
This is primarily useful when debugging documents which have been
corrupted somehow so you would like to see the ops even if you can't
trust them. Note that this is _not_ currently useful for performance
reasons as the hash graph is still constructed, just not verified.
2022-12-19 16:30:14 +00:00
alexjg
8aff1296b9
automerge-cli: remove a bunch of bad dependencies (#478)
Automerge CLI depends transitively (via an old version of `clap` and
via `colored_json`) on `atty` and `ansi_term`. These crates are both
marked as unmaintained and this generates irritating `cargo deny`
messages. To avoid this, implement colored JSON ourselves using the
`termcolor` crate - colored JSON is pretty mechanical. Also update
criterion and cbindgen dependencies and ignore the criterion tree in
deny.toml as we only ever use it in benchmarks.

All that's left now is a warning about atty in cbindgen; we'll just have
to wait for cbindgen to fix that. It's a build-time dependency anyway, so
it's not really an issue.
2022-12-14 18:06:19 +00:00
Conrad Irwin
6dad2b7df1
Don't panic on invalid gzip stream (#477)
* Don't panic on invalid gzip stream

Before this change automerge-rs would panic if the gzip data in
a raw column was invalid; after this change the error is propagated
to the caller correctly.
2022-12-14 17:34:22 +00:00
patryk
e75ca2a834
Update README.md (Update Slack invite link) (#475)
Slack invite link updated to the one used on the website, as the current one returns "This link is no longer active".
2022-12-14 11:41:21 +00:00
Orion Henry
3229548fc7
update js dependencies and some lint errors (#474) 2022-12-11 21:26:00 +00:00
Orion Henry
a96f77c96b
Merge pull request #458 from automerge/dependabot/npm_and_yarn/javascript/examples/create-react-app/loader-utils-2.0.4
Bump loader-utils from 2.0.2 to 2.0.4 in /javascript/examples/create-react-app
2022-12-11 11:36:38 -08:00
Orion Henry
b78211ca65
change opid to (u32,u32) - 10% performance uptick (#473) 2022-12-11 18:56:20 +00:00
Orion Henry
1222fc0df1
rewrite opnode to store usize instead of Op (#471) 2022-12-10 10:36:05 +00:00
Orion Henry
2db9e78f2a
Text v2. JS Api now uses text by default (#462) 2022-12-09 23:48:07 +00:00
Conrad Irwin
b05c9e83a4
Use AMbyteSpan for AM{list,map}PutBytes (#464)
* Use AMbyteSpan for byte values

Before this change there was an inconsistency between AMmapPutString
(which took an AMbyteSpan) and AMmapPutBytes (which took a pointer +
length).

Either is fine, but we should do the same in both places. I chose this
path to make it clear that the value passed in was an automerge value,
and to be symmetric with AMvalue.bytes when you do an AMmapGet().

I did not update other APIs (like load) that take a pointer + length, as
that is idiomatic usage for C, and these functions are not operating on
byte values stored in automerge.
2022-12-09 16:11:23 +00:00
Conrad Irwin
c3932e6267
Improve docs for building automerge-c on a mac (#465)
* More detailed instructions in README

I struggled to get the project to build for a while when first getting
started, so I have added some instructions, and also some usage
instructions for automerge-c that show more clearly what is happening
without `AMpush()`.
2022-12-09 13:46:23 +00:00
Alex Good
becc301877
automerge-wasm@0.1.19 & automerge-js@2.0.1-alpha.2 2022-12-02 15:10:24 +00:00
Alex Good
0ab6a770d8 wasm: improve error messages
The error messages produced by various conversions in `automerge-wasm`
were quite uninformative - often consisting of just returning the
offending value with no description of the problem. The logic of these
error messages was often hard to trace due to the use of `JsValue` to
represent both error conditions and valid values - evidenced by most of
the public functions of `automerge-wasm` having return types of
`Result<JsValue, JsValue>`. Change these return types to mention
specific errors, thus enlisting the compiler's help in ensuring that
specific error messages are emitted.
2022-12-02 14:42:55 +00:00
Alex Currie-Clark
2826f4f08c
automerge-wasm: Add deno as a target 2022-12-02 14:42:13 +00:00
Alex Good
de16adbcc5 Explicitly create empty changes
Transactions with no ops in them are generally undesirable. They take up
space in the change log but do nothing else. They are not useless
though, it may occasionally be necessary to create an empty change in
order to list all the current heads of the document as dependencies of the
empty change.

The current API makes no distinction between empty changes and non-empty
changes. If the user calls `Transaction::commit` a change is created
regardless of whether there are ops to commit. To provide a more useful
API modify `commit` so that if there is a no-op transaction then no
changes are created, but provide explicit methods to create an empty
change via `Transaction::empty_change`, `Automerge::empty_change` and
`Autocommit::empty_change`. Also make these APIs available in Javascript
and C.
2022-12-02 12:12:54 +00:00
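
A hypothetical usage sketch (exact signatures in the crate may differ):

```rust
use automerge::{transaction::Transactable, AutoCommit, ROOT};

fn example() -> Result<(), automerge::AutomergeError> {
    let mut doc = AutoCommit::new();
    doc.put(ROOT, "key", "value")?;
    doc.commit(); // the transaction had an op, so a change is created

    // Committing a no-op transaction no longer creates a change. To
    // deliberately record one - e.g. to gather all current heads as the
    // dependencies of a single change - use the explicit method:
    let _hash = doc.empty_change(None); // assumed optional commit options
    Ok(())
}
```
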
Alex Good
ea5688e418 rust: Make fields of Transaction and TransactionInner private
It's tricky to modify these structs with the fields public as every
change requires scanning the codebase for references to make sure you're
not breaking any invariants. Make the fields private to ease
development.
2022-12-02 12:12:54 +00:00
Alex Good
149f870102 rust: Remove Default constraint from OpObserver 2022-12-02 12:12:54 +00:00
Andrew Jeffery
e0b2bc995a
Update nix flake and add formatter and dead code check (#466)
* Add formatter for flake

* Update flake inputs

* Remove unused vars in flake

* Add deadnix check and fixup devshells naming
2022-11-30 12:57:59 +00:00
Orion Henry
aaddb3c9ea fix error message 2022-11-28 15:43:27 -06:00
Orion Henry
2400d67755
Merge pull request #457 from jkankiewicz/return_NUL_string_as_bytes
Prevent panic when string contains a null character.
2022-11-28 12:34:45 -08:00
Jason Kankiewicz
d3885a3443 Hard-coded automerge-c's initial independent
version number to "0.0.1" for @alexjg.
2022-11-28 00:08:33 -08:00
Jason Kankiewicz
f8428896bd Added a test case for a map key containing NUL
('\0') based on #455.
2022-11-27 23:52:47 -08:00
Jason Kankiewicz
fb0c69cc52 Updated the quickstart example to work with
`AMbyteSpan` values instead of `*const libc::c_char` values.
2022-11-27 23:52:47 -08:00
Jason Kankiewicz
edbb33522d Replaced the C string (*const libc::c_char)
value of the `AMresult::Error` variant with a UTF-8 string view
(`AMbyteSpan`).
2022-11-27 23:52:47 -08:00
Jason Kankiewicz
625f48f33a Fixed clippy violations. 2022-11-27 23:52:47 -08:00
Jason Kankiewicz
7c9f927136 Fixed code formatting violations. 2022-11-27 23:52:47 -08:00
Jason Kankiewicz
b60c310f5c Changed Default::default() calls to be through
the trait.
2022-11-27 23:52:47 -08:00
Jason Kankiewicz
3dd954d5b7 Moved the to_obj_id macro in with AMobjId. 2022-11-27 23:52:47 -08:00
Jason Kankiewicz
3e2e697504 Replaced C string (*const libc::c_char) values
with UTF-8 string view (`AMbyteSpan`) values, except for the
`AMresult::Error` variant.
Added `AMstr()` for creating an `AMbyteSpan` from a C string.
2022-11-27 23:52:47 -08:00
Jason Kankiewicz
a324b02005 Added automerge::AutomergeError::InvalidActorId.
Added `automerge::AutomergeError::InvalidCharacter`.
Alphabetized the `automerge::AutomergeError` variants.
2022-11-27 23:52:47 -08:00
Alex Good
d26cb0c0cb
rust:automerge-test:0.1.0 2022-11-27 16:54:41 +00:00
Alex Good
ed108ba6fc
rust:automerge:0.2.0 2022-11-27 16:44:26 +00:00
Alex Good
484a5bac4f
rust: Add Transactable::base_heads
Sometimes it is necessary to query the heads of a document at the time a
transaction started without having a mutable reference to the
transactable. Add `Transactable::base_heads` to do this.
2022-11-27 16:39:02 +00:00
Alex Good
01350c2b3f
automerge-wasm@0.1.18 and automerge@2.0.1-alpha.1 2022-11-22 19:37:01 +00:00
alexjg
22d60987f6
Dont send duplicate sync messages (#460)
The API of Automerge::generate_sync_message requires that the user keep
track of in flight messages themselves if they want to avoid sending
duplicate messages. To avoid this add a flag to `automerge::sync::State`
to track if there are any in flight messages and return `None` from
`generate_sync_message` if there are.
2022-11-22 18:29:06 +00:00
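
Roughly the shape of that flag, as a simplified sketch with placeholder types:

```rust
// Placeholder message type; the real one is automerge::sync::Message.
struct Message;

struct SyncState {
    in_flight: bool,
    // ... shared heads, bloom filter state, etc.
}

fn generate_sync_message(state: &mut SyncState, have_new_data: bool) -> Option<Message> {
    if state.in_flight && !have_new_data {
        return None; // an equivalent message is already in flight
    }
    state.in_flight = true;
    Some(Message)
}

fn receive_sync_message(state: &mut SyncState /* , msg: Message */) {
    state.in_flight = false; // the peer replied; sending is useful again
}
```
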
Alex Good
bbf729e1d6
@automerge/automerge 2.0.0 2022-11-22 12:13:42 +00:00
Orion Henry
ca25ed0ca0
automerge-wasm: Use a SequenceTree in the OpObserver
Generating patches to text objects (a la the edit-trace benchmark) was
very slow due to appending to the back of a Vec. Use the SequenceTree
(effectively a B-tree) instead so as to speed up sequence patch
generation.
2022-11-22 12:13:42 +00:00
Alex Good
03b3da203d
@automerge/automerge-wasm 0.1.16 2022-11-22 00:02:28 +00:00
Alex Good
e713c35d21
Fix some typescript errors 2022-11-21 18:26:28 +00:00
dependabot[bot]
92c044eadb
Bump loader-utils in /javascript/examples/create-react-app
Bumps [loader-utils](https://github.com/webpack/loader-utils) from 2.0.2 to 2.0.4.
- [Release notes](https://github.com/webpack/loader-utils/releases)
- [Changelog](https://github.com/webpack/loader-utils/blob/v2.0.4/CHANGELOG.md)
- [Commits](https://github.com/webpack/loader-utils/compare/v2.0.2...v2.0.4)

---
updated-dependencies:
- dependency-name: loader-utils
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-11-16 13:35:34 +00:00
Jason Kankiewicz
a7656b999b
Add AMobjObjType() (#454)
automerge-c: Add AMobjObjType()
2022-11-07 23:10:53 +00:00
Alex Good
05093071ce
rust/automerge-test: add From<f64> for RealizedObject 2022-11-07 12:08:12 +00:00
Alex Good
bcab3b6e47 Move automerge/tests::helpers to crate automerge-test
The assert_doc and assert_obj macros in automerge/tests::helpers are
useful for writing tests for any application working with automerge
documents. Typically however, you only want these utilities in tests so
rather than packaging them in the main `automerge` crate move them to a
new crate (in the spirit of `tokio_test`)
2022-11-06 19:52:21 +00:00
Alex Good
b53584bec0
Ritual obeisance before the altar of clippy 2022-11-05 22:48:43 +00:00
Orion Henry
91f313bb83 revert compiler flags to max opt 2022-11-04 18:02:32 +00:00
tosti007
6bbed76f0f Update uuid dependency to v1.2.1 2022-11-01 11:39:24 +00:00
Alex Good
bba4fe2c36
@automerge/automerge@2.0.0-beta.4 2022-10-28 11:31:51 +01:00
Alex Good
61aaa52718 Allow changing a cloned document
The logic for `clone`, which was updated to support cloning a viewed
document, inadvertently left the heads of the cloned document state in
place, which meant that cloned documents could not be `change`d. Set
state.heads to undefined when cloning to allow changing them.
2022-10-27 19:20:41 +01:00
Alex Good
20d543d28d
@automerge/automerge@2.0.0-beta.3 2022-10-26 14:14:01 +01:00
Alex Good
5adb6952e9
@automerge/automerge@2.0.0-beta.2 and @automerge/automerge-wasm@0.1.15 2022-10-26 14:03:12 +01:00
Orion Henry
3705212747
js: Add Automerge.clone(_, heads) and Automerge.view
Sometimes you need a cheap copy of a document at a given set of heads
just so you can see what has changed. Cloning the document to do this is
quite expensive when you don't need a writable copy. Add automerge.view
to allow a cheap read only copy of a document at a given set of heads
and add an additional heads argument to clone for when you do want a
writable copy.
2022-10-26 14:01:11 +01:00
Orion Henry
d7d2916acb tiny change that might remove a bloom filter false positive error 2022-10-21 15:15:30 -05:00
Alex Good
3482e06b15
javascript 2.0.0-beta1 2022-10-18 19:43:46 +01:00
Orion Henry
59289f67b1 consolidate inserts and deletes more aggressively into a single splice 2022-10-18 13:29:56 +01:00
Alex Good
a4a3dd9ed3
Fix docs CI 2022-10-18 13:08:08 +01:00
Alex Good
ac6eeb8711
Another attempt at fixing cmake build CI 2022-10-18 12:46:22 +01:00
Alex Good
20adff0071
Fix cmake CI
The cmake CI seemed to reference a few nonexistent targets for docs and
tests. Remove the doc generation step and point the test CI script at
the generated test program.
2022-10-18 11:56:37 +01:00
Alex Good
6bb611e4b3
Update CI to rust 1.64.0 2022-10-18 11:49:46 +01:00
Alex Good
e8309495ce
Update cargo deny to point at rust subdirectory 2022-10-18 11:28:56 +01:00
Orion Henry
4755c5bf5e
Merge pull request #444 from automerge/freeze
make freeze work recursively
2022-10-17 16:31:52 -07:00
Orion Henry
38205fbcc2 enableFreeze() instead of implicit freeze 2022-10-17 17:35:34 -05:00
Orion Henry
a2704bac4b Merge remote-tracking branch 'origin/main' into f2 2022-10-17 16:32:23 -05:00
Orion Henry
ac90f8f028 Merge remote-tracking branch 'origin/freeze' into f2 2022-10-17 16:21:35 -05:00
Orion Henry
c602e9e7ed update build to match directory restructuring 2022-10-17 16:20:25 -05:00
Alex Good
1c6da6f9a3
Add JS worker config to Vite app example
Vite apps which use SharedWorker or WebWorker require additional
configuration to get WebAssembly imports to work effectively; add these
to the example.
2022-10-17 01:09:13 +01:00
Alex Good
24dcf8270a
Add typedoc comments to the entire public JS API 2022-10-17 00:41:06 +01:00
Alex Good
e189ec9ca8
Add some READMEs to the javascript directory 2022-10-16 20:01:49 +01:00
Alex Good
96f15c6e00
Update main README to reflect new repo layout 2022-10-16 20:01:45 +01:00
Alex Good
8e131922e7
Move wrappers/javascript -> javascript
Continuing our theme of treating all languages equally, move
wrappers/javascript to javascript. Automerge libraries for new languages
should be built at this top level if possible.
2022-10-16 19:55:54 +01:00
Alex Good
dd3c6d1303
Move rust workspace into ./rust
After some discussion with PVH I realise that the repo structure in the
last reorg was very rust-centric. In an attempt to put each language on
a level footing move the rust code and project files into ./rust
2022-10-16 19:55:51 +01:00
Orion Henry
5ce3a556a9 weak_refs 2022-10-16 19:55:25 +01:00
Orion Henry
dd5edafa9d make freeze work recursively 2022-10-15 21:21:18 -05:00
Alex Good
cd2997e63f
@automerge/automerge@2.0.0-alpha.5 and @automerge/automerge-wasm@0.1.10 2022-10-13 23:13:09 +01:00
Orion Henry
f0f036eb89
add loadIncremental to js 2022-10-13 23:03:01 +01:00
Alex Good
ee0c3ef3ac javascript: Make getObjectId tolerate non object arguments
Fixes #433. `getObjectId` was previously throwing an error if passed
something which was not an object. In the process of fixing this I
simplified the logic of `getObjectId` by modifying automerge-wasm to not
set the OBJECT_ID hidden property on objects which are not maps, lists,
or text - it was previously setting this property on anything which was
a JS object, including `Date` and `Uint8Array`.
2022-10-13 21:37:37 +01:00
Orion Henry
e6d1828c12
Merge pull request #440 from automerge/repo-reorg
Repo reorg
2022-10-12 10:07:02 -07:00
Alex Good
4c17fd9c00
Update README
We're making this project the primary implementation of automerge.
Update the README to provide more context and signpost other resources.
2022-10-12 16:25:43 +01:00
Alex Good
660678d038
remove unneeded files 2022-10-12 16:25:43 +01:00
Alex Good
a7a4bd42f1
Move automerge-js -> wrappers/javascript
Whilst we only have one wrapper library, we anticipate more.
Furthermore, the naming of the `wrappers` directory makes it clear what
the role of the JS codebase is.
2022-10-12 16:25:43 +01:00
Alex Good
352a0127c7
Move all rust code into crates/*
For larger rust projects it's common to put all rust code in a directory
called `crates`. This helps in general by reducing the number of
directories in the top level but it's particularly helpful for us
because some directories _do not_ contain Rust code. In particular
`automerge-js`. Move rust code into `/crates` to make the repo easier
to navigate.
2022-10-12 16:25:38 +01:00
Alex Good
ed0da24020
Track whether a transaction is observed in types
With the `OpObserver` moving to the transaction rather than being passed
in to the `Transaction::commit` method we have needed to add a way to
get the observer back out of the transaction (via
`Transaction::observer` and `AutoCommit::observer`). This `Observer`
type is then used to handle patch generation logic. However, there are
cases where we might not want an `OpObserver` and in these cases we can
execute various things fast - so we need to have something like an
`Option<OpObserver>`. In order to track the presence or otherwise of the
observer at the type level introduce
`automerge::transaction::observation`, which is a type level `Option`.
This allows us to efficiently choose the right code paths whilst
maintaining correct types for `Transaction::observer` and
`AutoCommit::observer`
2022-10-12 16:11:23 +01:00
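
A greatly simplified sketch of such a type-level Option:

```rust
pub trait Observation {
    type Obs;
    fn observer(&mut self) -> Option<&mut Self::Obs>;
}

pub struct Observed<O>(pub O);
pub struct UnObserved;

impl<O> Observation for Observed<O> {
    type Obs = O;
    fn observer(&mut self) -> Option<&mut O> {
        Some(&mut self.0)
    }
}

impl Observation for UnObserved {
    type Obs = ();
    fn observer(&mut self) -> Option<&mut ()> {
        None // statically known: observation code paths can compile away
    }
}

// A Transaction parameterized over Obs: Observation can skip patch
// generation entirely for UnObserved, while Transaction::observer
// keeps a well-typed signature.
```
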
Orion Henry
3989cac405
Merge pull request #439 from automerge/type-patchcallback
Add TypeScript type for PatchCallback
2022-10-10 14:25:43 -07:00
Alex Good
2d072d81fb
Add TypeScript type for PatchCallback 2022-10-10 21:19:39 +01:00
Alex Good
430d842343
Update vite.config.js in Vite Example README 2022-10-10 14:14:38 +01:00
Alex Good
dff0fc2b21
Remove automerge-wasm devDependency
This dependency was added in a PR which is no longer relevant as we've
switched to depending directly on `@automerge/automerge-wasm` and
testing by running a local NPM registry.
2022-10-10 13:05:10 +01:00
Orion Henry
9e1fe65a64
Merge pull request #429 from automerge/actually-run-js-tests
Use the local automerge-wasm in automerge-js tests
2022-10-06 15:41:07 -07:00
Orion Henry
3d5fe83e2b
Merge branch 'main' into actually-run-js-tests 2022-10-06 15:41:01 -07:00
Alex Good
ba328992ff
bump @automerge/automerge-wasm and @automerge/automerge versions 2022-10-06 22:53:21 +01:00
Orion Henry
23a07699e2
typescript fixes 2022-10-06 22:42:33 +01:00
Orion Henry
238d05a0e3
move automerge-js onto the applyPatches model 2022-10-06 22:42:31 +01:00
Orion Henry
7a6dfcc289
The patch interface needs an accurate path per patch op
For the path to be accurate it needs to be calculated at the moment of op
insert, not at commit. This is because the path may contain list indexes in
parent objects that could be changed by inserts and deletes later in the
transaction.

The primary change was adding op_observer to the transaction object and
removing it from commit options. The beginnings of a wasm-level
`applyPatch` system are laid out here.
2022-10-06 22:41:37 +01:00
Alex Good
92145e6131
@automerge/automerge-wasm 0.1.8 2022-10-05 00:55:10 +01:00
Alex Good
2012f5c6e4
Fix some typescript bugs, automerge-js 2.0.0-alpha.3 2022-10-05 00:52:36 +01:00
Alex Good
fb4d1f4361
Ship generated typescript types correctly
Generated typescript types were being shipped in the `dist/cjs` and `dist/mjs`
directories but are referenced at the top level in package.json. Add a
step to generate `*.d.ts` files in the top level `dist/*.d.ts`.
2022-10-04 22:54:19 +01:00
Alex Good
74af537800
Rename automerge and automerge-wasm packages
In an attempt to make our package naming more understandable we move all
our packages to a single NPM scope. `automerge` ->
`@automerge/automerge` and `automerge-wasm` ->
`@automerge/automerge-wasm`.
2022-10-04 22:05:56 +01:00
Alex Good
29f2c9945e query::Prop: don't scan past end of OpTree
The logic in `query::Prop` works by first doing a binary search in the
OpTree for the node where the key we are looking for starts, and then
proceeding from this point forwards skipping over nodes which contain
only invisible ops. This logic was incorrect if the start index returned
by the binary search was in the last child of the optree and the last
child only contains invisible ops. In this case the index returned by
the query would be greater than the length of the optree.

Clamp the index returned by the query to the total length of the opset.
2022-10-04 17:25:56 +01:00
Alex Good
d6a8d41e0a Update JS README 2022-10-04 17:23:37 +01:00
Alex Good
b6c375efb9 Fix a few small typescript complaints 2022-10-04 17:23:37 +01:00
Alex Good
16f2272b5b Generate index.d.ts from source
The JS package is now written in typescript so we don't need to manually
maintain an index.d.ts file. Generate the index.d.ts file from source
and ship it with the JS package.
2022-10-04 17:23:37 +01:00
Alex Good
da51492327 build both nodejs and bundler packages in yarn build 2022-10-04 17:23:37 +01:00
Alex Good
577bda3e7f update wasm-bindgen 2022-10-04 17:23:37 +01:00
Alex Good
20dc0fb54e Set optimization levels to 'Z' for release profile
This reduces the size of the WASM bundle which is generated to around
800KB. Unfortunately wasm-pack doesn't allow us to use arbitrary
profiles when building and the optimization level has to be set at the
workspace root - consequently this flag is set for all packages in the
workspace. This shouldn't be an issue really as all our dependents in
the Rust world will be setting their own optimization flags anyway.
2022-10-04 17:23:37 +01:00
Alex Good
4f03cd2a37 Add an e2e testing tool for the JS packaging
JS packaging is complicated and testing it manually is irritating. Add a
tool in `automerge-js/e2e` which stands up a local NPM registry and
publishes the various packages to that registry for use in automated and
manual tests. Update the test script in `scripts/ci/js_tests` to run the
tests using this tool
2022-10-04 17:23:37 +01:00
Alex Good
7825da3ab9 Add examples of using automerge with bundlers 2022-10-04 17:23:37 +01:00
Alex Good
8557ce0b69 Rename automerge-js to automerge
Now that automerge-js is ready to go we rename it to `automerge` and
set the version to `2.0.0-alpha.1`.
2022-10-04 17:23:37 +01:00
Alex Good
a9e23308ce Remove async automerge-wasm wrapper
By moving to wasm-bindgen's `bundler` target rather than using the `web`
target we remove the need for an async initialization step on the
automerge-wasm package. This means that the automerge-js package can now
depend directly on automerge-wasm and perform initialization itself,
thus making automerge-js a drop in replacement for the `automerge` JS
package (hopefully).

We bump the versions of automerge-wasm
2022-10-04 17:23:37 +01:00
Alex Good
837c07b23a
Correctly encode compressed changes in sync messages
Sync messages encode changes as length prefixed byte arrays. We were
calculating the length using the uncompressed bytes of a change but
encoding the bytes of the change using the (possibly) compressed bytes.
This meant that if a change was large enough to compress then it would
fail to decode. Switch to using uncompressed bytes in sync messages.
2022-10-02 18:59:41 +01:00
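
The invariant, as a small self-contained sketch: the length prefix and the payload must be computed from the same byte representation.

```rust
fn leb128_encode(out: &mut Vec<u8>, mut v: u64) {
    loop {
        let mut byte = (v & 0x7f) as u8;
        v >>= 7;
        if v != 0 {
            byte |= 0x80; // continuation bit
        }
        out.push(byte);
        if v == 0 {
            break;
        }
    }
}

/// `raw_bytes` stands in for whichever accessor yields the uncompressed
/// bytes of a change; the bug was prefixing with one representation's
/// length while writing the other representation's bytes.
fn encode_change(out: &mut Vec<u8>, raw_bytes: &[u8]) {
    leb128_encode(out, raw_bytes.len() as u64);
    out.extend_from_slice(raw_bytes);
}
```
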
Alex Good
3d59e61cd6
Allow empty changes when loading document format
The logic for loading compressed document chunks has a check that the
`max_op` of a change is valid. This check was overly strict in that it
checked that the max op was strictly larger than the max op of the
previous change - this rejects valid documents which contain changes
with no ops in them, in which case the max op can be equal to the max op
of the previous change. Loosen the logic to allow empty changes.
2022-09-30 19:00:48 +01:00
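
The loosened check, sketched with hypothetical names:

```rust
// Before: `max_op <= prev_max_op` was an error, which also rejected
// empty changes (no ops, so max_op == prev_max_op). After: only a
// decrease is invalid.
fn check_max_op(prev_max_op: u64, max_op: u64) -> Result<(), &'static str> {
    if max_op < prev_max_op {
        return Err("max_op went backwards");
    }
    Ok(())
}
```
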
Alex Good
e57548f6e2
Fix broken encode/decode change
Previous ceremonies to appease clippy resulted in the
encodeChange/decodeChange wasm functions being slightly broken. Here we
fix them.
2022-09-29 15:49:31 -05:00
Alex Good
c7e370a1df
Appease clippy 2022-09-28 17:18:37 -05:00
Alex Good
427002caf3 Correctly load documents with deleted objects
The logic for reconstructing changes from the compressed document format
records operations which set a key in an object so that it can later
reconstruct delete operations from the successor list of the document
format operations. The logic to do this was only recording set
operations and not `make*` operations. This meant that delete operations
targeting `make*` operations could not be loaded correctly.

Correctly record `make*` operations for later use in constructing delete
operations.
2022-09-12 12:38:57 +01:00
Alex Good
fc9cb17b34
Use the local automerge-wasm in automerge-js tests
Somehow the `devDependencies` for `automerge-js` depended on the
released `automerge-wasm` package, rather than the local version, which
means that the JS tests are not actually testing the current
implementation. Depend on the local `automerge-wasm` package to fix
this.
2022-09-08 16:27:30 +01:00
Alex Good
f586c82557 OpSet::visualise: add argument to filter by obj ID
Occasionally one needs to debug problems in a document with a large
number of objects. In this case it is unhelpful to print a graphviz of
the whole opset because there are too many objects. Add a
`Option<Vec<ObjId>>` argument to `OpSet::visualise` to filter the
objects which are visualised.
2022-09-08 12:48:53 +01:00
+merlan #flirora
649b75deb1 Correct documentation for AutoSerde 2022-09-05 21:11:13 +01:00
Alex Good
eba7038bd2 Allow for empty head indices when decoding doc
The compressed document format includes at the end of the document chunk
the indices of the heads of the document. Older versions of the
javascript implementation do not include these indices so we allow them
to be omitted when decoding.

Whilst we're here add some tracing::trace logs to make it easier to
understand where parsing is failing.
2022-09-02 14:59:51 +01:00
Alex Good
dd69f6f7b4
Add readme field to automerge/Cargo.toml 2022-09-01 12:27:34 +01:00
Alex Good
e295a55b41 Add #[derive(Eq)] to satisfy clippy
The latest clippy (0.1.65 for me) added a lint which checks for types
that implement `PartialEq` and could implement `Eq`
(`derive_partial_eq_without_eq`). Add a `derive(Eq)` in a bunch of
places to satisfy this lint.
2022-09-01 12:24:00 +01:00
Orion Henry
c2ed212dbc
Merge pull request #422 from automerge/fix-transaction-put-doc
Update docs for Transaction::put
2022-08-29 13:35:42 -05:00
Orion Henry
1817e98ec9
Merge pull request #418 from jkankiewicz/normalize_C_API_header_include
Expose `Vec<automerge::Change>` initialization and `automerge::AutoCommit::with_actor()` to the C API
2022-08-29 13:35:01 -05:00
Alex Good
a0eb4218d8
Update docs for Transaction::put
Fixes #420
2022-08-27 11:59:14 +01:00
Orion Henry
9879fd9342 copy pasta typo fix 2022-08-26 14:19:28 -05:00
Orion Henry
59bde120ee automerge-js adding trace to out of date errors 2022-08-26 14:17:56 -05:00
Jason Kankiewicz
22f720c465 Emphasize that an AMbyteSpan is only a view onto
the memory that it references.
2022-08-25 13:51:15 -07:00
Orion Henry
e6cd366aa0 automerge-js 0.1.12 2022-08-24 19:12:47 -05:00
Orion Henry
6d05cbd9e3 fix indexOf 2022-08-23 12:13:32 -05:00
Peter van Hardenberg
43bdd60904 the fields in a doc are not docs themselves 2022-08-23 09:31:09 -07:00
Orion Henry
363ad7d59a automerge-js ts fixes 2022-08-23 11:12:22 -05:00
Jason Kankiewicz
7da1832b52 Fix documentation bug caused by missing /. 2022-08-23 06:04:22 -07:00
Jason Kankiewicz
5e37ebfed0 Add AMchangesInit() for @rkuhn in #411.
Expose `automerge::AutoCommit::with_actor()` through `AMcreate()`.
Add notes to clarify the purpose of `AMfreeStack()`, `AMpop()`,
`AMpush()`, `AMpushCallback()`, and `AMresultStack`.
2022-08-23 05:34:45 -07:00
Jason Kankiewicz
1ed67a7658 Add missing documentation for the AMvalue.unknown
variant, the `AMunknownValue.bytes` member and the
`AMunknownValue.type_code` member.
2022-08-22 23:31:55 -07:00
Jason Kankiewicz
3ddde2fff2 Normalize the header include statement for all C
source files.
Normalize the header include statement within the documentation.
Limit `AMpush()` usage within the quickstart example to variable
assignment.
2022-08-22 22:28:23 -07:00
Orion Henry
b4705691c2
Merge pull request #355 from automerge/storage-v2
Storage v2
2022-08-22 18:18:50 -05:00
Alex Good
9ac8827219
Remove storage-v2 feature flag
Signed-off-by: Alex Good <alex@memoryandthought.me>
2022-08-22 21:21:21 +01:00
Alex Good
9c86c09aaa
Rename Change::compressed_bytes -> Change::bytes 2022-08-22 21:18:11 +01:00
Jason Kankiewicz
632da04d60
Add the -DFEATURE_FLAG_STORAGE_V2 CMake option
for toggling the "storage-v2" feature flag in a Cargo invocation.
Correct the `AMunknownValue` struct misnomer.
Ease the rebasing of changes to the `AMvalue` struct declaration with
pending upstream changes to same.
2022-08-22 21:18:07 +01:00
Alex Good
8f2d4a494f
Test entire workspace for storage-v2 in CI
Now that all crates support the storage-v2 feature flag of the automerge
crate we update CI to run tests for '--workspace --all-features'

Signed-off-by: Alex Good <alex@memoryandthought.me>
2022-08-22 21:16:48 +01:00
Alex Good
db4cb52750
Add a storage-v2 feature flag to edit-trace
Signed-off-by: Alex Good <alex@memoryandthought.me>
2022-08-22 21:16:48 +01:00
Alex Good
fc94d43e53
Expose storage-v2 in automerge-wasm
Signed-off-by: Alex Good <alex@memoryandthought.me>
2022-08-22 21:16:47 +01:00
Alex Good
d53d107076
Expose storage-v2 in automerge-c
Signed-off-by: Alex Good <alex@memoryandthought.me>
2022-08-22 21:16:47 +01:00
Alex Good
63dca26fe2
Additional tests for storage-v2
Various tests were required to cover edge cases in the new storage-v2
implementation.

Signed-off-by: Alex Good <alex@memoryandthought.me>
2022-08-22 21:16:47 +01:00
Alex Good
252a7eb8a5
Add automerge::Automerge::save_nocompress
For some usecases the overhead of compressed columns in the document
format is not worth it. Add `Automerge::save_nocompress` to save without
compressing columns.

Signed-off-by: Alex Good <alex@memoryandthought.me>
2022-08-22 21:16:47 +01:00
Alex Good
34e919a4c8
Plumb in storage-v2
This is achieved by liberal use of feature flags. Main additions are:

* Build the OpSet more efficiently when loading from compressed
  document storage using a DocObserver as implemented in
  `automerge::op_tree::load`
* Reimplement the parsing logic in the various types in
  `automerge::sync`

There are numerous other small changes required to get the types to line
up.

Signed-off-by: Alex Good <alex@memoryandthought.me>
2022-08-22 21:16:47 +01:00
Alex Good
fc7657bcc6
Add a wrapper to implement Serialize for Automerge
It is useful to be able to generate a serialized representation of an
automerge document via `serde`. We can do this without an intermediate
type by iterating over the keys of the document recursively. Add
`autoserde::AutoSerde` to implement this.

Signed-off-by: Alex Good <alex@memoryandthought.me>
2022-08-22 21:16:47 +01:00
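
A usage sketch, assuming `AutoSerde` is re-exported at the crate root and `serde_json` is available:

```rust
use automerge::{AutoSerde, Automerge};

/// AutoSerde wraps a document reference and implements serde::Serialize
/// by walking the document's keys recursively.
fn to_json(doc: &Automerge) -> serde_json::Result<String> {
    serde_json::to_string(&AutoSerde::from(doc))
}
```
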
Alex Good
771733deac
Implement storage-v2
Implement parsing the binary format using the new parser library and the
new encoding types. This is superior to the previous parsing
implementation in that invalid data should never cause panics and it
exposes an interface to construct an OpSet from a saved document much
more efficiently.

Signed-off-by: Alex Good <alex@memoryandthought.me>
2022-08-22 21:16:47 +01:00
Alex Good
3a3df45b85
Access change fields through field accessors
The representation of changes in storage-v2 is different to the existing
representation so add accessor methods to the fields of `Change` and
make all accesses go through them. This allows the change representation
in storage-v2 to be a drop-in.

Signed-off-by: Alex Good <alex@memoryandthought.me>
2022-08-22 21:16:42 +01:00
Orion Henry
d28767e689 automerge-js v0.1.10 2022-08-22 15:13:08 -05:00
Alex Good
de997e2c50
Reimplement columnar decoding types
The existing implementation of the columnar format elides a lot of error
handling (by converting `Err` to `None`) and doesn't allow writing to a
single chunk of memory when encoding. Implement a new set of encoding and
decoding primitives which handle errors more robustly and allow us to
use a single chunk of memory when reading and writing.

Signed-off-by: Alex Good <alex@memoryandthought.me>
2022-08-22 21:06:35 +01:00
Alex Good
782f351322
Add types to convert between different Op types
Op IDs in the OpSet are represented using an index into a set of actor
IDs. This is efficient but requires conversion when reading and
writing from storage (where the set of actors might be different from
ths in the OpSet). Add a trait for converting between different
representations of an OpID.

Signed-off-by: Alex Good <alex@memoryandthought.me>
2022-08-22 21:06:35 +01:00
Alex Good
e1295b9daa
Add a simple parser combinator library
We have parsing needs which are slightly more complex than just reading
stuff from a buffer, but not complex enough to justify a dependency on a
parsing library. Implement a simple parser combinator library for use in
parsing the binary storage format.

Signed-off-by: Alex Good <alex@memoryandthought.me>
2022-08-22 21:06:35 +01:00
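
The flavour of such a library, as a minimal self-contained sketch (not the actual API):

```rust
#[derive(Debug, PartialEq)]
enum ParseError {
    Incomplete,
}

/// A parser returns the unconsumed input alongside the parsed value.
type ParseResult<'a, O> = Result<(&'a [u8], O), ParseError>;

/// Build a parser that consumes exactly `n` bytes.
fn take<'a>(n: usize) -> impl Fn(&'a [u8]) -> ParseResult<'a, &'a [u8]> {
    move |input| {
        if input.len() < n {
            return Err(ParseError::Incomplete);
        }
        let (chunk, rest) = input.split_at(n);
        Ok((rest, chunk))
    }
}

fn main() {
    let data: [u8; 6] = [0x85, 0x6f, 0x4a, 0x83, 0x01, 0x02];
    let magic = take(4);
    let (rest, header) = magic(&data).unwrap();
    assert_eq!(header, &[0x85, 0x6f, 0x4a, 0x83][..]); // automerge magic bytes
    assert_eq!(rest, &[0x01, 0x02][..]);
}
```
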
Alex Good
d785c319b8
Add ScalarValue::Unknown
The columnar storage format allows for values which we do not know the
type of. In order that we can handle these types in a forward-compatible
way we add ScalarValue::Unknown.

Signed-off-by: Alex Good <alex@memoryandthought.me>
2022-08-22 21:04:19 +01:00
Orion Henry
88f8976d0a automerge-js 0.1.9 2022-08-22 14:58:13 -05:00
Alex Good
56563a4a60
Add a storage-v2 feature flag
The new storage implementation is sufficiently large a change that it
warrants a period of testing. To facilitate testing the new and old
implementations side by side we slightly abuse cargo's feature flags and
add a storage-v2 feature which enables the new storage and disables the
old storage.

Note that this commit doesn't use `--all-features` when building the
workspace in scripts/ci/build-test. This will be rectified in a later
commit once the storage-v2 feature is integrated into the other crates
in the workspace.

Signed-off-by: Alex Good <alex@memoryandthought.me>
2022-08-20 17:36:26 +01:00
Orion Henry
ff90327b52
Merge pull request #414 from jkankiewicz/port_WASM_basic_tests_to_C
Port the WASM API's basic tests to C
2022-08-11 19:00:11 -05:00
Orion Henry
ece66396e8
Merge pull request #417 from tombh/readme-update
Readme updates
2022-08-11 18:58:44 -05:00
Orion Henry
d1a926bcbe fix ownKeys bug in automerge-js 2022-08-11 18:49:42 -05:00
Orion Henry
1a955e1f0d fix some typescript errors - depricate default export of the wasm package 2022-08-11 18:24:21 -05:00
Thomas Buckley-Houston
f89e9ad9cc
Readme updates 2022-08-10 08:46:05 -04:00
Jason Kankiewicz
bc28faee71 Replace NULL with std::ptr::null() within the
safety notes for @alexjg in #414.
2022-08-07 20:04:49 -07:00
Jason Kankiewicz
50981acc5a Replace to_del!() and to_pos!() with
`to_index!()` for @alexjg in #414.
2022-08-07 19:37:48 -07:00
Jason Kankiewicz
7ec17b26a9 Replace `From<&AMvalue<'_>> for Result<
am::ScalarValue, am::AutomergeError>` with `TryFrom<&AMvalue<'_>> for
am::ScalarValue` for @alexjg in #414.
2022-08-07 19:24:47 -07:00
Jason Kankiewicz
825342cbb1 Remove reflexive struct reference from a Doxygen
variable declaration.
2022-08-07 08:07:00 -07:00
Jason Kankiewicz
04d0175113 Add missing past-the-end checks to the unit tests
for `AMmapRange()`.
2022-08-06 16:20:35 -07:00
Jason Kankiewicz
14bd8fbe97 Port the WASM API's basic unit tests to C.
Weave the original TypeScript code into the C ports of the WASM API's
sync tests.
Fix misnomers in the WASM API's basic and sync unit tests.
Fix misspellings in the WASM API's basic and sync unit tests.
2022-08-06 16:18:59 -07:00
Jason Kankiewicz
d48e366272 Fix some documentation content bugs.
Fix some documentation formatting bugs.
2022-08-06 15:56:21 -07:00
Jason Kankiewicz
4217019cbc Expose automerge::AutoCommit::get_all() as AMlistGetAll() and AMmapGetAll().
Add symbolic last index specification to `AMlist{Delete,Get,Increment}()`.
Add symbolic last index specification to `AMlistPut{Bool,Bytes,Counter,
F64,Int,Null,Object,Str,Timestamp,Uint}()`.
Prevent `doc::utils::to_str(NULL)` from segfaulting.
Fix some documentation content bugs.
Fix some documentation formatting bugs.
2022-08-06 15:47:53 -07:00
Jason Kankiewicz
eeb75f74f4 Fix AMstrsCmp().
Fix some documentation content bugs.
Fix some documentation formatting bugs.
2022-08-06 15:07:48 -07:00
Jason Kankiewicz
a22afdd70d Expose automerge::AutoCommit::get_change_by_hash()
as `AMgetChangeByHash()`.
Add the `AM_CHANGE_HASH_SIZE` macro define constant for
`AMgetChangeByHash()`.
Replace the literal `32` with the `automerge::types::HASH_SIZE` constant.
Expose `automerge::AutoCommit::splice()` as `AMsplice()`.
Add the `automerge::error::AutomergeError::InvalidValueType` variant for
`AMsplice()`.
Add push functionality to `AMspliceText()`.
Fix some documentation content bugs.
Fix some documentation formatting bugs.
2022-08-06 15:04:46 -07:00
Orion Henry
5e8f4caed6
Merge pull request #392 from rf-/rf-fix-export-default-syntax
Fix TypeScript syntax error in `automerge-wasm` definitions
2022-08-03 11:01:11 -05:00
Orion Henry
5c6f375f99
Merge pull request #410 from jkankiewicz/add_range_functions_to_C_API
Add range functions to C API
2022-08-03 10:47:44 -05:00
Jason Kankiewicz
3a556c5991 Expose Autocommit::fork_at().
Rename `AMdup()` to `AMclone()` to match the WASM API.
Rename `AMgetActor()` to `AMgetActorId()` to match the WASM API.
Rename `AMsetActor()` to `AMsetActorId()` to match the WASM API.
2022-08-01 07:02:30 -07:00
Orion Henry
1bc5fbb81e
Merge pull request #413 from jkankiewicz/remove_original_C_API_files
Remove original C API files
2022-07-29 16:06:45 -05:00
Jason Kankiewicz
69de8187a5 Update the build system with the added and
renamed source files.
Defer `BTreeMap` creation until necessary for  `AMresult::Changes`.
Add `AMvalueEqual()` to enable direct comparison of two `AMvalue` structs regardless of their respective variants.
2022-07-25 01:41:52 -07:00
Jason Kankiewicz
877744d40b Add equality comparison to the AM* types from
which it was missing.
Add equality comparison to `automerge::sync::message`.
Defer `std::ffi::CString` creation until necessary.
2022-07-25 01:33:50 -07:00
Jason Kankiewicz
14b55c4a73 Fix a bug with the iterators when they pass their
initial positions in reverse.
Rename `AMstrings` to `AMstrs` for consistency with the `AMvalue.str`
field.
2022-07-25 01:23:26 -07:00
Jason Kankiewicz
23fbb4917a Replace _INCLUDED with _H as the suffix for
include guards in C headers like the one generated by cbindgen.
2022-07-25 01:04:35 -07:00
Jason Kankiewicz
877dbbfce8 Simplify the unit tests with AMresultStack et al.
2022-07-25 01:00:50 -07:00
Jason Kankiewicz
a22bcb916b Promoted ResultStack/StackNode from the
quickstart example up to the library as `AMresultStack` so that it can
appear in the README.md and be used to simplify the unit tests.
Promoted `free_results()` to `AMfreeStack()` and `push()` to `AMpush()`.
Added `AMpop()` because no stack should be without one.
2022-07-25 00:50:40 -07:00
Jason Kankiewicz
42ab1639db Add heads argument to AMmapGet() to expose
`automerge::AutoCommit::get_at()`.
Add `AMmapRange()` to expose `automerge::AutoCommit::map_range()` and
`automerge::AutoCommit::map_range_at()`.
Add `AMmapItems` for `AMlistRange()`.
Add `AMmapItem` for `AMmapItems`.
2022-07-25 00:11:00 -07:00
Jason Kankiewicz
eba18d1ad6 Add heads argument to AMlistGet() to expose
`automerge::AutoCommit::get_at()`.
Add `AMlistRange()` to expose `automerge::AutoCommit::list_range()` and
`automerge::AutoCommit::list_range_at()`.
Add `AMlistItems` for `AMlistRange()`.
Add `AMlistItem` for `AMlistItems`.
2022-07-24 22:41:32 -07:00
Jason Kankiewicz
ee68645f31 Add AMfork() to expose `automerge::AutoCommit::
fork()`.
Add `AMobjValues()` to expose `automerge::AutoCommit::values()` and
`automerge::AutoCommit::values_at()`.
Add `AMobjIdActorId()`, `AMobjIdCounter()`, and `AMobjIdIndex()` to expose `automerge::ObjId::Id` fields.
Change `AMactorId` to reference an `automerge::ActorId` instead of
owning one for `AMobjIdActorId()`.
Add `AMactorIdCmp()` for `AMobjIdActorId()` comparison.
Add `AMobjItems` for `AMobjValues()`.
Add `AMobjItem` for `AMobjItems`.
Add `AMobjIdEqual()` for property comparison.
Rename `to_doc!()` to `to_doc_mut!()` and `to_doc_const!()` to `to_doc!()`
for consistency with the Rust standard library.
2022-07-24 22:23:54 -07:00
Jason Kankiewicz
cc19a37f01 Remove the makefile for the original C
API to prevent confusion.
2022-07-23 08:48:19 -07:00
Jason Kankiewicz
15c9adf965 Remove the obsolete test suite for the original
C API to prevent confusion.
2022-07-23 08:47:21 -07:00
Jason Kankiewicz
52a558ee4d Cease writing a pristine copy of the generated
header file into the root of the C API's source directory to prevent
confusion.
2022-07-23 08:44:41 -07:00
Alex Good
668b7b86ca Add license for unicode-idents
`unicode-idents` distributes some data tables from unicode.org which
require an additional license. This doesn't affect our licensing because
we don't distribute the data files - just the generated code. Explicitly
allow the Unicode-DFS-2016 license for unicode-idents.
2022-07-18 10:50:32 +01:00
Alex Good
d71a734e49 Add OpIds to enforce ordering of Op::succ and Op::pred
The ordering of opids in the successor and predecessors of an op is
relevant when encoding because inconsistent ordering changes the
hashgraph. This means we must maintain the invariant that opids are
encoded in ascending lamport order. We have been maintaining this
invariant in the encoding implementation - however, this is not ideal
because it requires allocating for every op in the change when we commit
a transaction.

Add `types::OpIds` and use it in place of `Vec<OpId>` for `Op::succ` and
`Op::pred`. `OpIds` maintains the invariant that the IDs it contains
must be ordered with respect to some comparator function - which is
always `OpSetMetadata::lamport_cmp`. Remove the sorting of opids in
SuccEncoder::append.
2022-07-17 20:58:47 +01:00
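The ordering invariant described here is easy to picture with a small sketch. The following is illustrative only; the `OpId` and `OpIds` shapes below are simplified stand-ins for automerge's internal types, and `lamport_cmp` stands in for `OpSetMetadata::lamport_cmp`:

```rust
use std::cmp::Ordering;

// Simplified stand-in for automerge's OpId (counter + actor index).
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct OpId {
    counter: u64,
    actor: usize,
}

// A Vec<OpId> that only admits insertions through a comparator, so the
// ascending-order invariant holds by construction rather than by re-sorting
// every op at encode time.
#[derive(Debug, Default)]
struct OpIds(Vec<OpId>);

impl OpIds {
    fn add<F: Fn(&OpId, &OpId) -> Ordering>(&mut self, id: OpId, cmp: F) {
        let pos = self
            .0
            .binary_search_by(|probe| cmp(probe, &id))
            .unwrap_or_else(|p| p);
        self.0.insert(pos, id);
    }
}

// Lamport-style comparison: order by counter, break ties by actor.
fn lamport_cmp(a: &OpId, b: &OpId) -> Ordering {
    a.counter.cmp(&b.counter).then(a.actor.cmp(&b.actor))
}

fn main() {
    let mut succ = OpIds::default();
    succ.add(OpId { counter: 3, actor: 1 }, lamport_cmp);
    succ.add(OpId { counter: 2, actor: 0 }, lamport_cmp);
    succ.add(OpId { counter: 3, actor: 0 }, lamport_cmp);
    assert_eq!(succ.0[0], OpId { counter: 2, actor: 0 });
}
```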
Andrew Jeffery
97575d3a90
Merge pull request #408 from jeffa5/automerge-description
publish: Add description to automerge crate
2022-07-14 19:01:09 +01:00
Andrew Jeffery
359376b3db publish: Add description to automerge crate
Came up as a warning in a dry-run publish.
2022-07-14 18:33:00 +01:00
Andrew Jeffery
5452aa4e4d
Merge pull request #406 from jeffa5/docs-ci
ci: Rename docs script to rust-docs and build cmake docs in CI
2022-07-13 19:47:42 +01:00
Andrew Jeffery
8c93d498b3 ci: Rename docs script to rust-docs and build cmake docs in CI 2022-07-13 18:25:25 +01:00
Adel Salakh
f14a61e581 Sort successors in SuccEncoder
Makes SuccEncoder sort successors in Lamport clock order.
Such an ordering is expected by automerge js when loading documents;
otherwise some documents fail to load with an "operation IDs are not in
ascending order" error.
2022-07-13 11:25:12 +01:00
Andrew Jeffery
65c478981c
Merge pull request #403 from jeffa5/parents-error
Change parents to return result if objid is not an object
2022-07-12 21:19:31 +01:00
Andrew Jeffery
75fb4f0f0c
Merge pull request #404 from jeffa5/trim-deps
Clean up automerge dependencies
2022-07-12 19:26:48 +01:00
Andrew Jeffery
be439892a4 Clean up automerge dependencies 2022-07-12 19:09:47 +01:00
Andrew Jeffery
6ea5982c16 Change parents to return result if objid is not an object
It's easy to call parents with the id of a scalar, expecting it to first
resolve the scalar's parent object, but that is not implemented. Finding
the parent object of a scalar id would mean searching every object for
the OpId, which could get too expensive when lots of objects are around.
This may be reconsidered later, but the result is still useful to
distinguish an id that doesn't exist in the document from one that has
no parents.
2022-07-12 18:36:47 +01:00
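A hedged sketch of the API shape this describes; the types and names below are illustrative stand-ins, not the real automerge API. Returning a `Result` lets callers distinguish "this id is not an object" from "this object has no parents":

```rust
// Illustrative stand-ins only.
#[derive(Debug)]
enum AutomergeError {
    InvalidObjId(String),
}

struct Parents; // stand-in for the real parent iterator

struct Doc;

impl Doc {
    fn parents(&self, objid: &str) -> Result<Parents, AutomergeError> {
        // An id naming a scalar, or nothing at all, is an error rather
        // than a silently empty iterator.
        if objid != "_root" {
            return Err(AutomergeError::InvalidObjId(objid.to_string()));
        }
        Ok(Parents) // the root object exists but has no parents
    }
}

fn main() {
    let doc = Doc;
    assert!(doc.parents("_root").is_ok());
    assert!(doc.parents("1@abc").is_err());
}
```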
Andrew Jeffery
0cd515526d
Merge pull request #402 from jeffa5/fix-cmake-docs
Don't build tests for docs
2022-07-12 10:17:43 +01:00
Andrew Jeffery
246ed4afab Test building docs on PRs 2022-07-12 10:12:07 +01:00
Andrew Jeffery
0a86a4d92c Don't build tests for docs
The test `CMakeLists.txt` brings in cmocka but we don't actually need to
build the tests to get the docs. This just makes the cmake docs script
tell cmake not to build docs.
2022-07-12 09:59:03 +01:00
Andrew Jeffery
1d3263c002
Merge pull request #397 from jeffa5/edit-trace-improvements
Fixup js edit-trace script and documentation bits
2022-07-07 09:47:43 +01:00
Andrew Jeffery
7e8cbf510a Add links to projects 2022-07-07 09:40:18 +01:00
Andrew Jeffery
c49ba5ea98 Fixup js edit-trace script and documentation bits 2022-07-07 09:25:45 +01:00
Andrew Jeffery
fe4071316d Add docs workflow status badge to README 2022-07-07 09:24:57 +01:00
Orion Henry
1a6f56f7e6
Merge pull request #393 from jkankiewicz/add_AMkeys_to_C_API
Add AMkeys() to the C API
2022-06-21 20:24:44 +02:00
Orion Henry
d5ca0947c0 minor update on js wrapper 2022-06-21 13:40:15 -04:00
Jason Kankiewicz
e5a8b67b11 Added AMspliceText().
Added `AMtext()`.
Replaced `*mut` function arguments with `*const`
function arguments where possible.
Added "out" directions to the documentation for
out function parameters.
2022-06-20 23:15:25 -07:00
Jason Kankiewicz
aeb8db556c Added "out" directions to the documentation for
out function parameters.
2022-06-20 23:11:03 -07:00
Jason Kankiewicz
eb462cb228 Made free_results() reset the stack pointer. 2022-06-20 15:55:31 -07:00
Jason Kankiewicz
0cbacaebb6 Simplified the AMstrings struct to directly
reference `std::ffi::CString` values.
Switched the `AMresult` struct to store a `Vec<CString>` instead of a
`Vec<String>`.
2022-06-20 14:35:30 -07:00
Jason Kankiewicz
bf4988dcca Fixed AM{change_hashes,changes,haves,strings}Prev(). 2022-06-20 13:50:05 -07:00
Jason Kankiewicz
770c064978 Made cosmetic changes to the quickstart example. 2022-06-20 13:45:32 -07:00
Jason Kankiewicz
db0333fc5a Added AM_ROOT usage to the documentation.
Renamed the `value` argument of `AM{list,map}PutBytes()` to `src` for
consistency with standard `memcpy()`.
2022-06-20 02:16:33 -07:00
Jason Kankiewicz
7bdf726ce1 Sublimated memory management in the quickstart
example.
2022-06-20 02:07:33 -07:00
Jason Kankiewicz
47c5277406 Added AMkeys().
Removed `AMobjSizeAt()`.
Added an optional `AMchangeHashes` argument to `AMobjSize()`.
Replaced the term "length" with "size" in the
documentation.
2022-06-20 01:53:31 -07:00
Jason Kankiewicz
ea8bd32cc1 Added the AMstrings type. 2022-06-20 01:38:32 -07:00
Jason Kankiewicz
be130560f0 Added a check for a 0 increment in the iterator
types.
Improved the documentation for the `detail` field in the iterator types.
2022-06-20 01:34:36 -07:00
Jason Kankiewicz
103d729bd1 Replaced the term "length" with "size" in the
documentation.
2022-06-20 01:31:08 -07:00
Jason Kankiewicz
7b30c84a4c Added AMchangeHashesInit(). 2022-06-20 01:17:20 -07:00
Jason Kankiewicz
39db64e5d9 Publicized the AMbyteSpan fields. 2022-06-20 01:11:30 -07:00
Jason Kankiewicz
32baae1a31 Hoisted InvalidChangeHashSlice into the
`Automerge` namespace.
2022-06-20 01:09:50 -07:00
Ryan Fitzgerald
88073c0cf4 Fix TypeScript syntax error in automerge-wasm definitions
I'm not sure if there are some configurations under which this works,
but I get

    index.d.ts:2:21 - error TS1005: ';' expected.

    2 export default from "automerge-types"
                          ~~~~~~~~~~~~~~~~~

both in my project that depends on `automerge-wasm` and when I run `tsc`
in this repo.

It seems like `export default from` is still a Stage 1 proposal, so I
wouldn't expect it to be supported by TS, although I couldn't really
find hard evidence one way or the other. It does seem like this syntax
should be exactly equivalent based on the proposal doc though.
2022-06-17 20:11:26 -07:00
Orion Henry
f5e9e3537d v0.1.4 2022-06-16 17:50:46 -04:00
Orion Henry
44b6709a60 add getBackend to automerge-js 2022-06-16 17:49:32 -04:00
Orion Henry
1610f6d6a6
Merge pull request #391 from jkankiewicz/expose_ActorId_to_C_API
Add `AMactorId` to the C API
2022-06-16 21:57:56 +02:00
Orion Henry
40b32566f4
Merge pull request #390 from jkankiewicz/make_C_API_testing_explicit
Make C API testing explicit
2022-06-16 21:56:26 +02:00
Orion Henry
3a4af9a719
Merge pull request #371 from automerge/typescript
Convert automerge-js to typescript
2022-06-16 21:52:22 +02:00
Jason Kankiewicz
400b8acdff Switched the AMactorId unit test suite to group
setup/teardown.
Removed superfluous group state from the `AMactorIdInit()` test.
2022-06-14 23:16:45 -07:00
Jason Kankiewicz
2f37d194ba Asserted that the string forms of two random
`AMactorId` structs are unequal.
2022-06-14 23:04:18 -07:00
Orion Henry
ceecef3b87 update list of read methods in c readme 2022-06-14 21:28:10 -04:00
Jason Kankiewicz
6de9ff620d Moved hex_to_bytes() so that it could be shared
by the unit test suites for `AMactorId` and `AMdoc` functions.
2022-06-14 00:52:06 -07:00
Jason Kankiewicz
84fa83a3f0 Added AMactorId.
Updated `AMchangeActorId()`.
Updated `AMsetActor()`.
Removed `AMgetActorHex()`.
Removed `AMsetActorHex()`.
2022-06-14 00:49:20 -07:00
Jason Kankiewicz
ac3709e670 Hoisted InvalidActorId into the automerge
namespace.
2022-06-14 00:38:55 -07:00
Jason Kankiewicz
71d8a7e717 Removed the superfluous AutomergeError::HexDecode
variant.
2022-06-14 00:37:42 -07:00
Jason Kankiewicz
bdedafa021 Decouple the "test_automerge" build target from
the "ALL" target.
2022-06-13 12:01:54 -07:00
Jason Kankiewicz
efa0a5624a Removed renamed unit test suite source files. 2022-06-11 21:04:36 -07:00
Jason Kankiewicz
4efe9a4f68 Replaced "cmake -E make_directory" invocation with
"mkdir -p" invocation for consistency with the other CI scripts.
2022-06-11 21:03:26 -07:00
Jason Kankiewicz
4f7843e007 Removed CMocka from the "docs" CI workflow's list
of dependencies.
2022-06-11 20:57:28 -07:00
Jason Kankiewicz
30dd3da578 Updated the CMake build CI script to build the
"test_automerge" target explicitly.
2022-06-11 20:55:44 -07:00
Jason Kankiewicz
6668f79a6e Decouple the "test_automerge" build target from
the "ALL" target.
2022-06-11 20:53:17 -07:00
Orion Henry
0c9e77b644 added a test to ensure we don't break counter serialization 2022-06-09 12:45:20 +02:00
Orion Henry
d6bce697a5 normalize edit trace 2022-06-09 12:42:43 +02:00
Orion Henry
22117f4997
Merge pull request #387 from jeromegn/counter-ser-current
Serialize Counter with its current value instead of start value
2022-06-09 03:42:28 -07:00
Jerome Gravel-Niquet
b20d04b0f2
serialize Counter with its current value instead of start value 2022-06-08 14:00:03 -04:00
Orion Henry
d5c07f22af
Merge pull request #385 from jkankiewicz/add_some_functions_from_README
Add some functions from the README.md file
2022-06-07 06:33:44 -07:00
Jason Kankiewicz
bfa85050b8 Fix Rust code formatting violations. 2022-06-07 00:29:58 -07:00
Jason Kankiewicz
1c78aab5f0 Fixed the AMsyncStateDecode() documentation. 2022-06-07 00:23:41 -07:00
Jason Kankiewicz
ad7dd07cf7 Simplify the names of the unit test suites' source
files.
2022-06-07 00:21:22 -07:00
Jason Kankiewicz
2e84c6e9ef Added AMlistIncrement(). 2022-06-07 00:15:37 -07:00
Jason Kankiewicz
0ecb9e7dce Added AMmapIncrement(). 2022-06-07 00:14:42 -07:00
Jason Kankiewicz
99ab5b4ed7 Added AMgetChangesAdded().
Added `AMpendingOps()`.
Added `AMrollback()`.
Added `AMsaveIncremental()`.
Fixed the `AMmerge()` documentation.
2022-06-07 00:14:11 -07:00
Andrew Jeffery
7439a49e37 Fix automerge-c html nesting 2022-06-06 19:49:18 +01:00
Andrew Jeffery
7a9786a146 Fix index.html location 2022-06-06 19:35:50 +01:00
Andrew Jeffery
82fe420a10 Use cmocka dev instead of lib 2022-06-06 19:11:07 +01:00
Andrew Jeffery
7d2be219ac Update cmocka to be libcmocka0 for install 2022-06-06 19:05:02 +01:00
Andrew Jeffery
00ab853813 Add cmake docs deps 2022-06-06 18:40:25 +01:00
Andrew Jeffery
97ef4fe7cd
Merge pull request #384 from jeffa5/serve-c-docs
Build c docs in CI
2022-06-06 18:31:48 +01:00
Andrew Jeffery
5c1cbc8eeb Build c docs in CI 2022-06-06 18:21:14 +01:00
Orion Henry
cf264f3bf4
Merge pull request #382 from jkankiewicz/obfuscate_iterator_fields
Remove artificial iteration from the C API
2022-06-06 06:41:45 -07:00
Jason Kankiewicz
8222ec1705 Move the AMsyncHaves.ptr field into the
`sync::haves::Detail` struct.
Change `AMsyncHavesAdvance()`, `AMsyncHavesNext()` and `AMsyncHavesPrev()`
to interpret their `n` argument relatively instead of absolutely.
Renamed `AMsyncHavesReverse()` to `AMsyncHavesReversed()`.
Updated the C API's documentation for the `AMsyncHaves` struct.
2022-06-05 14:41:48 -07:00
Jason Kankiewicz
74632a0512 Move the AMchanges.ptr field into the
`changes::Detail` struct.
Change `AMchangesAdvance()`, `AMchangesNext()` and `AMchangesPrev()` to
interpret their `n` argument relatively instead of absolutely.
Renamed `AMchangesReverse()` to `AMchangesReversed()`.
Updated the C API's documentation for the `AMchanges` struct.
2022-06-05 14:37:32 -07:00
Jason Kankiewicz
7e1ae60bdc Move the AMchangeHashes.ptr field into the
`change_hashes::Detail` struct.
Change `AMchangeHashesAdvance()`, `AMchangeHashesNext()` and
`AMchangeHashesPrev()` to interpret their `n` argument relatively
instead of absolutely.
Renamed `AMchangeHashesReverse()` to `AMchangeHashesReversed()`.
Updated the C API's documentation for the `AMchangeHashes` struct.
2022-06-05 14:32:55 -07:00
Jason Kankiewicz
92d6fff22f Compensate for the removal of the AMchanges.ptr
member.
2022-06-05 14:28:33 -07:00
Jason Kankiewicz
92f3efd6e0 Removed the 0 argument from AMresultValue()
calls.
2022-06-04 22:31:15 -07:00
Jason Kankiewicz
31fe8dbb36 Renamed the AMresult::Scalars variant to
`AMresult::Value`.
Removed the `Vec` wrapping the 0th field of an `AMresult::Value`.
Removed the `index` argument from `AMresultValue()`.
2022-06-04 22:24:02 -07:00
Jason Kankiewicz
d4d1b64cf4 Compensate for cbindgen issue #252. 2022-06-04 19:18:47 -07:00
Jason Kankiewicz
92b1216101 Obfuscated most implementation details of the
`AMsyncHaves` struct.
Added `AMsyncHavesReverse()`.
2022-06-04 19:14:31 -07:00
Jason Kankiewicz
1990f29c60 Obfuscated most implementation details of the
`AMChanges` struct.
Added `AMchangesReverse()`.
2022-06-04 19:13:22 -07:00
Jason Kankiewicz
b38be0750b Obfuscated most implementation details of the
`AMChangeHashes` struct.
Added `AMchangeHashesReverse()`.
2022-06-04 18:51:57 -07:00
Orion Henry
3866e9066f
Merge pull request #381 from jkankiewicz/unify_C_API_results
Simplify management of memory allocated by C API calls
2022-06-02 10:14:55 -07:00
Orion Henry
51554e7793
Merge pull request #377 from jeffa5/more-sync-opt
Some more sync optimisations
2022-06-02 10:14:44 -07:00
Jason Kankiewicz
afddf7d508 Fix "fmt" script violations.
Fix "lint" script violations.
2022-06-01 23:34:28 -07:00
Jason Kankiewicz
ca383f03e4 Wrapped all newly-allocated values in an AMresult struct.
Removed `AMfree()`.
Renamed `AMresultFree()` to `AMfree()`.
Removed type names from brief descriptions.
2022-06-01 23:10:23 -07:00
Orion Henry
de25e8f7c8
Merge pull request #380 from jkankiewicz/add_syncing_to_C_API
Add syncing to C API
2022-06-01 13:46:55 -07:00
Orion Henry
27dfa4ca27 missed some bugs related to the wasm api change 2022-06-01 16:31:18 -04:00
Orion Henry
9a0dd24714 fmt / tests 2022-06-01 08:08:01 -04:00
Orion Henry
8ce10dab69 some api changes/tweaks - basic js package 2022-05-31 13:49:18 -04:00
Jason Kankiewicz
fbdb5da508 Ported 17 synchronization unit test cases from JS
to C.
2022-05-30 23:17:44 -07:00
Jason Kankiewicz
cdcd5156db Added the synchronization unit test suite to the
CTest suite.
2022-05-30 23:16:14 -07:00
Jason Kankiewicz
d08eeeed61 Renamed AMfreeDoc() to AMFree(). 2022-05-30 23:15:20 -07:00
Jason Kankiewicz
472b5dc348 Added the synchronization unit test suite to the
CTest suite.
2022-05-30 23:14:38 -07:00
Jason Kankiewicz
846b96bc9a Renamed AMfreeResult() to AMresultFree(). 2022-05-30 23:11:56 -07:00
Jason Kankiewicz
4cb7481a1b Moved the AMsyncState struct into its own
source file.
Added `AMsyncStateDecode()`.
Added `AMsyncStateEncode()`.
Added `AMsyncStateEqual()`.
Added `AMsyncStateSharedHeads()`.
Added `AMsyncStateLastSentHeads()`.
Added `AMsyncStateTheirHaves()`.
Added `AMsyncStateTheirHeads()`.
Added `AMsyncStateTheirNeeds()`.
2022-05-30 23:07:55 -07:00
Jason Kankiewicz
3c11946c16 Moved the AMsyncMessage struct into its own
source file.
Added `AMsyncMessageChanges()`.
Added `AMsyncMessageDecode()`.
Added `AMsyncMessageEncode()`.
Added `AMsyncMessageHaves()`.
Added `AMsyncMessageHeads()`.
Added `AMsyncMessageNeeds()`.
2022-05-30 22:58:45 -07:00
Jason Kankiewicz
c5d3d1b0a0 Added the AMsyncHaves struct.
Added `AMsyncHavesAdvance()`.
Added `AMsyncHavesNext()`.
Added `AMsyncHavesPrev()`.
Added `AMsyncHavesSize()`.
2022-05-30 22:55:34 -07:00
Jason Kankiewicz
be3c7d6233 Added the AMsyncHave struct.
Added `AMsyncHaveLastSync()`.
2022-05-30 22:54:02 -07:00
Jason Kankiewicz
9213d43850 Grouped some common macros and functions into
their own source file.
2022-05-30 22:53:09 -07:00
Jason Kankiewicz
18ee9b71e0 Grouped the AMmap*() functions into their own
source file.
2022-05-30 22:52:02 -07:00
Jason Kankiewicz
a9912d4b9f Grouped the AMlist*() functions into their own
source file.
2022-05-30 22:51:41 -07:00
Jason Kankiewicz
d9bf29e8fd Grouped AMsyncMessage and AMsyncState into
separate source files.
2022-05-30 22:50:26 -07:00
Jason Kankiewicz
546b6ccbbd Moved AMobjId into its own source file.
Added the `AMvalue::SyncState` variant.
Enabled `AMchange` structs to be lazily created.
Added the `AMresult::SyncState` variant.
Added an `Option<&automerge::Change>` conversion for `AMresult`.
Added a `Result<automerge::Change, automerge::DecodingError>` conversion
for `AMresult`.
Added a `Result<automerge::sync::Message, automerge::DecodingError>`
conversion for `AMresult`.
Added a `Result<automerge::sync::State, automerge::DecodingError>`
conversion for `AMresult`.
Moved `AMerrorMessage()` and `AMresult*()` into the source file for
`AMresult`.
2022-05-30 22:49:23 -07:00
Jason Kankiewicz
bb0b023c9a Moved AMobjId into its own source file. 2022-05-30 22:37:22 -07:00
Jason Kankiewicz
c3554199f3 Grouped related AM*() functions into separate source files. 2022-05-30 22:36:26 -07:00
Jason Kankiewicz
e56fe64a18 Added AMapplyChanges().
Fixed `AMdup()`.
Added `AMequal()`.
Renamed `AMfreeDoc()` to `AMfree()`.
Added `AMgetHeads()`.
Added `AMgetMissingDeps()`.
Added `AMgetLastLocalChange()`.
2022-05-30 22:34:01 -07:00
Jason Kankiewicz
007253d6ae Updated the file dependencies of the CMake custom
command for Cargo.
2022-05-30 22:27:14 -07:00
Jason Kankiewicz
e8f1f07f21 Changed AMchanges to lazily create AMchange structs.
Renamed `AMadvanceChanges()` to `AMchangesAdvance()`.
Added `AMchangesEqual()`.
Renamed `AMnextChange()` to `AMchangesNext()`.
Renamed `AMprevChange()` to `AMchangesPrev()`.
2022-05-30 22:24:53 -07:00
Jason Kankiewicz
3ad979a178 Added AMchangeActorId().
Added `AMchangeCompress()`.
Added `AMchangeDeps()`.
Added `AMchangeExtraBytes()`.
Added `AMchangeFromBytes()`.
Added `AMchangeHash()`.
Added `AMchangeIsEmpty()`.
Added `AMchangeMaxOp()`.
Added `AMchangeMessage()`.
Added `AMchangeSeq()`.
Added `AMchangeSize()`.
Added `AMchangeStartOp()`.
Added `AMchangeTime()`.
Added `AMchangeRawBytes()`.
Added `AMchangeLoadDocument()`.
2022-05-30 22:19:54 -07:00
Jason Kankiewicz
fb0ea2c7a4 Renamed AMadvanceChangeHashes() to AMchangeHashesAdvance().
Added `AMchangeHashesCmp()`.
Renamed `AMnextChangeHash()` to `AMchangeHashesNext()`.
2022-05-30 22:12:03 -07:00
Jason Kankiewicz
a31a65033f Renamed AMfreeResult() to AMresultFree().
Renamed `AMfreeDoc()` to `AMfree()`.
Renamed `AMnextChange()` to `AMchangesNext()`.
Renamed `AMgetMessage()` to `AMchangeMessage()`.
2022-05-30 22:08:27 -07:00
Jason Kankiewicz
5765fea771 Renamed AMfreeResult() to AMresultFree().
Remove the `&AMchange` conversion for `AMbyteSpan`.
Add a `&automerge::ActorId` conversion for `AMbyteSpan`.
Remove the `&Vec<u8>` conversion for `AMbyteSpan`.
Add a `&[u8]` conversion for `AMbyteSpan`.
2022-05-30 22:06:22 -07:00
Jason Kankiewicz
4bed03f008 Added the AMsyncMessage struct.
Added the `AMsyncState` struct.
Added the `AMfreeSyncState()` function.
Added the `AMgenerateSyncMessage()` function.
Added the `AMinitSyncState()` function.
Added the `AMreceiveSyncMessage()` function.
2022-05-30 08:22:17 -07:00
Orion Henry
210c6d2045 move types to their own package 2022-05-27 10:23:51 -07:00
Andrew Jeffery
a569611d83 Use clock_at for filter_changes 2022-05-26 19:03:09 +01:00
Andrew Jeffery
03a635a926 Extend last_sync_hashes 2022-05-26 19:03:09 +01:00
Andrew Jeffery
97a5144d59 Reduce the amount of shuffling data for changes_to_send 2022-05-26 19:03:09 +01:00
Andrew Jeffery
03289510d6 Remove cloning their_have in sync 2022-05-26 19:03:09 +01:00
Andrew Jeffery
b1712cb0c6
Merge pull request #379 from jeffa5/apply-changes-iter
Update autocommit's apply_changes to take an iterator
2022-05-26 13:11:51 +01:00
Andrew Jeffery
dae6509e13 Update autocommit's apply_changes to take an iterator 2022-05-26 09:02:59 +01:00
Andrew Jeffery
587adf7418 Add Eq to ObjType 2022-05-24 09:48:55 +01:00
Orion Henry
df8cae8a2b README 2022-05-23 19:25:23 +02:00
Orion Henry
3a44ccd52d clean up lint, simplify package, hand write an index.d.ts 2022-05-23 19:04:31 +02:00
Orion Henry
07f5678a2b linting in wasm 2022-05-22 13:54:59 -04:00
Orion Henry
d638a41a6c record type 2022-05-22 13:53:11 -04:00
Orion Henry
bd35361354 fixed typescript errors, pull wasm dep (mostly) out 2022-05-22 13:53:11 -04:00
Scott Trinh
d2fba6bf04 Use an UnknownObject type alias 2022-05-22 13:53:11 -04:00
Orion Henry
fd02585d2a removed a bunch of lint errors 2022-05-22 13:53:11 -04:00
Orion Henry
515a2eb94b removing some ts errors 2022-05-22 13:53:11 -04:00
Orion Henry
5e1bdb79ed eslint --fix 2022-05-22 13:53:11 -04:00
Orion Henry
1cf8f80ba4 pull wasm out of deps 2022-05-22 13:53:11 -04:00
Orion Henry
226bbeb023 tslint to eslint 2022-05-22 13:53:11 -04:00
Orion Henry
1eec70f116 example webpack for js 2022-05-22 13:53:11 -04:00
Orion Henry
4f898b67b3 able to build npm package 2022-05-22 13:53:11 -04:00
Orion Henry
551f6e1343 convert automerge-js to typescript 2022-05-22 13:53:11 -04:00
Orion Henry
c353abfe4e
Merge pull request #375 from jeffa5/get-changes-opt
Get changes opt
2022-05-22 10:30:24 -07:00
Orion Henry
f0abcf0605
Merge pull request #376 from jeffa5/api-interoperability
Use BTreeSet for sync::State to allow deriving Hash
2022-05-22 10:30:07 -07:00
Andrew Jeffery
2c1a71e143 Use expect for getting clock 2022-05-20 18:01:46 +01:00
Andrew Jeffery
8b1c3c73cd Use BTreeSet for sync::State to allow deriving Hash 2022-05-20 16:13:10 +01:00
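The motivation here is a standard-library detail: `BTreeSet<T>` implements `Hash` when `T: Hash`, while `HashSet` does not, so a struct holding a `BTreeSet` can simply `#[derive(Hash)]`. A minimal illustration with a stand-in struct rather than the real `sync::State`:

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::BTreeSet;
use std::hash::{Hash, Hasher};

// Stand-in for a sync state; with a HashSet field this derive would not
// compile, because HashSet does not implement Hash.
#[derive(Hash, PartialEq, Eq, Debug, Default)]
struct SyncState {
    shared_heads: BTreeSet<[u8; 32]>,
}

fn main() {
    let mut state = SyncState::default();
    state.shared_heads.insert([0u8; 32]);
    let mut hasher = DefaultHasher::new();
    state.hash(&mut hasher);
    println!("hash = {:x}", hasher.finish());
}
```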
Andrew Jeffery
3a8e833187 Document num_ops on change 2022-05-20 10:05:08 +01:00
Andrew Jeffery
1355a024a7 Use actor_index to get state in update_history 2022-05-20 10:05:08 +01:00
Andrew Jeffery
e5b527e17d Remove old functions 2022-05-20 10:05:08 +01:00
Andrew Jeffery
4b344ac308 Add sync benchmark 2022-05-20 10:05:08 +01:00
Andrew Jeffery
36857e0f6b Store seq in clock to remove binary_search_by_key 2022-05-20 10:05:08 +01:00
Andrew Jeffery
b7c50e47b9 Just use get_changes_clock 2022-05-20 10:05:08 +01:00
Andrew Jeffery
16f1304345 Fix wasm test calling getChanges with wrong heads 2022-05-20 10:05:08 +01:00
Andrew Jeffery
933bf5ee07 Return an error when getting clock for missing hash 2022-05-20 10:05:08 +01:00
Andrew Jeffery
c2765885fd Maintain incremental clocks 2022-05-20 10:05:08 +01:00
Andrew Jeffery
5e088ee9e0 Document clock module and add merge function 2022-05-20 10:05:08 +01:00
Andrew Jeffery
1b34892585 Add num_ops to change to quickly get the len 2022-05-20 10:05:08 +01:00
Andrew Jeffery
0de37d292d Sort change results from clock search 2022-05-20 10:05:08 +01:00
Andrew Jeffery
b9a6b3129f Add method to get changes by clock 2022-05-20 10:05:08 +01:00
Andrew Jeffery
11fbde47bb Use HASH_SIZE const in ChangeHash definition 2022-05-20 10:04:32 +01:00
Andrew Jeffery
70021556c0
Merge pull request #373 from jeffa5/sync-opt
Sync opt
2022-05-19 13:42:10 +01:00
Andrew Jeffery
e8e42b2d16 Remove need to collect hashes when building bloom filter 2022-05-19 10:41:23 +01:00
Andrew Jeffery
6bce8bf4fd Use vec with capacity when calculating bloom probes 2022-05-19 10:40:44 +01:00
Orion Henry
c7429abbf5
Merge pull request #369 from automerge/webpack
Webpack
2022-05-17 10:28:12 -07:00
Orion Henry
24fa61c11d
Merge pull request #370 from jeffa5/opt-seek-op
Optimise seek op and seek op with patch
2022-05-17 10:27:58 -07:00
Andrew Jeffery
d89669fcaa Add apply benchmarks 2022-05-16 23:13:35 +01:00
Andrew Jeffery
43c4ce76fb Optimise seek op with patch 2022-05-16 23:07:45 +01:00
Andrew Jeffery
531e434bf6 Optimise seek op 2022-05-16 22:45:41 +01:00
Orion Henry
e1f3ecfcf5 typescript implicit any 2022-05-16 15:09:55 -04:00
Orion Henry
409189e36a
Merge pull request #368 from jeromegn/rollback-no-actors
Don't remove last actor when there are none
2022-05-16 08:39:03 -07:00
Orion Henry
81dd1a56eb add start script - split up outputs 2022-05-16 11:33:08 -04:00
Jerome Gravel-Niquet
7acb9ed0e2
don't remove last actor when there are none 2022-05-16 10:56:10 -04:00
Orion Henry
d01e7ceb0e add webpack example and move into wasm folder 2022-05-15 11:53:55 -04:00
Orion Henry
aa5a03a0c4 webpack example config 2022-05-15 11:53:04 -04:00
Orion Henry
f6eca5eec6
Merge pull request #362 from jeffa5/range-rev
Add tests and fixes for double ended map range iterator
2022-05-12 09:02:05 -07:00
Orion Henry
b17c86e36e
Merge pull request #365 from automerge/opset-iter-nth
Implement OpTreeIter::nth correctly
2022-05-12 09:00:29 -07:00
Andrew Jeffery
f373deba6b Add length assertion 2022-05-11 21:15:50 +01:00
Andrew Jeffery
8f71ac30a4 Add index info to op_tree panic message 2022-05-11 20:26:39 +01:00
Alex Good
4e431c00a1
Implement OpTreeIter::nth correctly
The previous implementation of nth was incorrect: it returned the nth
element of the optree but did not update the internal state of the
iterator, so subsequent calls to `next()` did not resume after the nth
element. This commit fixes that.

Signed-off-by: Alex Good <alex@memoryandthought.me>
2022-05-09 23:11:18 +01:00
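A minimal sketch of the bug class being fixed, using a toy slice iterator rather than the real `OpTreeIter`: `nth` must advance the iterator's own cursor, not just peek at the nth element, so that a later `next()` resumes past it.

```rust
struct SliceIter<'a, T> {
    items: &'a [T],
    pos: usize,
}

impl<'a, T> Iterator for SliceIter<'a, T> {
    type Item = &'a T;

    fn next(&mut self) -> Option<&'a T> {
        let item = self.items.get(self.pos)?;
        self.pos += 1;
        Some(item)
    }

    // Correct nth: skip n elements *and* consume the one returned, by
    // updating the same cursor that next() reads.
    fn nth(&mut self, n: usize) -> Option<&'a T> {
        self.pos = self.pos.saturating_add(n);
        self.next()
    }
}

fn main() {
    let data = [10, 20, 30, 40];
    let mut it = SliceIter { items: &data, pos: 0 };
    assert_eq!(it.nth(1), Some(&20));
    assert_eq!(it.next(), Some(&30)); // resumes after the nth element
}
```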
Alex Good
004d1a0cf2
Update CI toolchain to 1.60 2022-05-09 22:53:39 +01:00
Orion Henry
d6a6b34e99
Merge pull request #364 from jkankiewicz/improve_symmetry
C API symmetry improvements
2022-05-07 11:03:34 -04:00
Jason Kankiewicz
fdd3880bd3 Renamed AMalloc() to AMcreate().
Renamed `AMload()` to `AMloadIncremental()`.
Added the `AMload()` function.
2022-05-07 09:55:05 -05:00
Orion Henry
f0da2d2348
Merge pull request #361 from jkankiewicz/quickstart_error_reporting
Improve error reporting in C quickstart example.
2022-05-06 11:18:01 -04:00
Jason Kankiewicz
b56464c2e7 Switched to C comment delimiting. 2022-05-06 04:59:47 -05:00
Jason Kankiewicz
bb3d75604a Improved the documentation slightly. 2022-05-06 04:51:44 -05:00
Jason Kankiewicz
eb3155e49b Sorted main() to the top. Documented test(). 2022-05-06 04:50:02 -05:00
Andrew Jeffery
28a61f2dcd Add tests and fixes for double ended map range iterator 2022-05-06 09:49:00 +01:00
Jason Kankiewicz
944e5d8001 Trap and report all errors. 2022-05-05 21:21:46 -05:00
Andrew Jeffery
7d5eaa0b7f Move automerge unit tests to new file for clarity 2022-05-05 14:58:22 +01:00
Andrew Jeffery
5b15a04516 Some tidies 2022-05-05 14:52:01 +01:00
Orion Henry
dc441a1a61
Merge pull request #360 from jkankiewicz/add_quickstart
Add a port of Rust's quickstart to the C API
2022-05-04 10:28:14 -04:00
Orion Henry
3f746a0dc3
Merge pull request #358 from jeffa5/msrv
Use an MSRV in CI
2022-05-04 10:23:58 -04:00
Orion Henry
c43f672924
Merge pull request #356 from automerge/values_range_fix
fixed panic in doc.values() - fixed concurrency bugs in range
2022-05-04 10:22:46 -04:00
Orion Henry
fb8f3e5d4e fixme: performance 2022-05-04 10:09:50 -04:00
Orion Henry
54042bcf96 and unimplemented double ended iterator 2022-05-04 09:50:27 -04:00
Jason Kankiewicz
729752dac2 De-emphasized the AMload() call's result. 2022-05-04 08:27:15 -05:00
Jason Kankiewicz
3cf990eabf Fixed some minor inconsistencies in quickstart.c. 2022-05-04 07:45:05 -05:00
Jason Kankiewicz
069c33a13e Moved the AMbyteSpan struct into its own source
file.
Added the `AMchangeHashes` struct.
Added the `AMchange` and `AMchanges` structs.
Tied the lifetime of an `AMobjId` struct to the `AMresult` struct that
it's returned through so that it can be used to reach equivalent objects
within multiple `AMdoc` structs.
Removed the `AMfreeObjId()` function.
Renamed `AMallocDoc()` to `AMalloc()`.
Added the `AMcommit()` function.
Added the `AMgetChangeHash()` function.
Added the `AMgetChanges()` function.
Added the `AMgetMessage()` function.
Added the `AMlistDelete()` function.
Added the `AMlistPutBool()` function.
Added the `AMmapDelete()` function.
Added the `AMmapPutBool()` function.
Added the `AMobjSizeAt()` function.
Added the `AMsave()` function.
Renamed the `AMvalue::Nothing` variant to `AMvalue::Void`.
Changed all `AMobjId` struct function arguments to be immutable.
2022-05-04 01:04:43 -05:00
Jason Kankiewicz
58e0ce5efb Renamed the AMvalue::Nothing variant to AMvalue::Void.
Tied the lifetime of an `AMobjId` struct to the `AMresult` struct that
it's returned through so that it can be used to reach equivalent objects
within multiple `AMdoc` structs.
Added test cases for the `AMlistPutBool()` function.
Added a test case for the `AMmapPutBool()` function.
2022-05-04 01:04:43 -05:00
Jason Kankiewicz
c6e7f993fd Moved the AMbyteSpan struct into its own source
file.
Added the `AMchangeHashes` struct.
Added the `AMchange` and `AMchanges` structs.
Added `ChangeHashes` and `Changes` variants to the `AMresult` struct.
Renamed the `AMvalue::Nothing` variant to `AMvalue::Void`.
Tied the lifetime of an `AMobjId` struct to the `AMresult` struct that
it's returned through so that it can be used to reach equivalent objects
within multiple `AMdoc` structs.
Consolidated the `AMresult` struct's related trait implementations.
2022-05-04 01:04:43 -05:00
Jason Kankiewicz
30b220d9b7 Added a port of the Rust quickstart example. 2022-05-04 01:04:43 -05:00
Jason Kankiewicz
bf6ee85c58 Added the time_t header. 2022-05-04 01:04:43 -05:00
Orion Henry
a728b8216b range -> map_range(), added list_range() values() works on both 2022-05-03 19:27:51 -04:00
Andrew Jeffery
0aab13a990 Set rust-version in cargo.tomls 2022-05-02 21:18:00 +01:00
Andrew Jeffery
3ec1127b50 Try 1.57.0 as msrv 2022-05-02 21:18:00 +01:00
Orion Henry
291557a019
Merge pull request #350 from jeffa5/opt-prop
Optimise prop query
2022-05-02 14:15:53 -04:00
Orion Henry
cc4b8399b1
Merge pull request #357 from automerge/faster-opset-iterator
Make the OpSet iterator faster
2022-05-02 14:15:06 -04:00
Orion Henry
bcdc8a2752 fmt 2022-05-02 13:32:59 -04:00
Orion Henry
0d3eb07f3f fix key/elemid bug and rename range to map_range 2022-05-02 13:30:59 -04:00
Alex Good
7f4460f200
Make the OpSet iterator faster
The opset iterator was using `OpTreeInternal::get(index)` to fetch each
successive element of the OpSet. This is pretty slow. We make this much
faster by implementing an iterator which is aware of the internal
structure of the OpTree.

This speeds up the save benchmark by about 10%.

Signed-off-by: Alex Good <alex@memoryandthought.me>
2022-05-01 00:07:39 +01:00
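A toy illustration of why this helps; the real `OpTree` is a B-tree, and the two-level structure below is only a stand-in. Fetching element `i` via `get(i)` re-descends from the root on every call, while a structure-aware iterator keeps its place in the current node and walks forward:

```rust
struct TwoLevelTree<T> {
    leaves: Vec<Vec<T>>, // stand-in for B-tree nodes
}

impl<T> TwoLevelTree<T> {
    // Slow path: scan down from the root for every lookup.
    fn get(&self, mut index: usize) -> Option<&T> {
        for leaf in &self.leaves {
            if index < leaf.len() {
                return Some(&leaf[index]);
            }
            index -= leaf.len();
        }
        None
    }

    // Fast path: an iterator that remembers which leaf it is in.
    fn iter(&self) -> impl Iterator<Item = &T> + '_ {
        self.leaves.iter().flat_map(|leaf| leaf.iter())
    }
}

fn main() {
    let tree = TwoLevelTree {
        leaves: vec![vec![1, 2], vec![3], vec![4, 5]],
    };
    // Equivalent results, very different costs on large trees.
    let slow: Vec<i32> = (0..5).map(|i| *tree.get(i).unwrap()).collect();
    let fast: Vec<i32> = tree.iter().copied().collect();
    assert_eq!(slow, fast);
}
```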
Orion Henry
9e6044c128 fixed panic in doc.values() - fixed concurrency bugs in range 2022-04-29 15:11:07 -04:00
Andrew Jeffery
6bf03e006c Add ability to skip in tree searches 2022-04-28 14:14:03 +01:00
Andrew Jeffery
8baacb281b Add save and load map benchmarks 2022-04-28 14:14:03 +01:00
Andrew Jeffery
7de0cff2c9 Rework benchmarks to be in a group 2022-04-28 14:14:03 +01:00
Andrew Jeffery
c38b49609f Remove clone from update
The cloning of the op was eating up a significant part of the increment
operation's time. This makes it zero-clone and just extracts the fields
needed.
2022-04-28 14:14:03 +01:00
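A before/after sketch of the idea, with an illustrative `Op` struct rather than automerge's own: cloning a whole op to read or update one field pays for a deep copy of its payload, while mutating in place touches only the field that changes.

```rust
#[derive(Clone)]
struct Op {
    value: i64,
    payload: Vec<u8>, // cloning this is the expensive part
}

// Before: a needless deep copy per op.
fn increment_cloning(ops: &mut [Op], delta: i64) {
    for i in 0..ops.len() {
        let op = ops[i].clone();
        ops[i].value = op.value + delta;
    }
}

// After: zero-clone, update only the needed field.
fn increment_zero_clone(ops: &mut [Op], delta: i64) {
    for op in ops.iter_mut() {
        op.value += delta;
    }
}

fn main() {
    let mut ops = vec![Op { value: 1, payload: vec![0; 1024] }];
    increment_cloning(&mut ops, 2);
    increment_zero_clone(&mut ops, 2);
    assert_eq!(ops[0].value, 5);
}
```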
Andrew Jeffery
db280c3d1d prop: Skip over nodes 2022-04-28 14:14:03 +01:00
Andrew Jeffery
7dfe311aae Store keys as well as elemids in visible index 2022-04-28 14:14:03 +01:00
Andrew Jeffery
bb4727ac34 Skip empty nodes in prop query 2022-04-28 14:14:03 +01:00
Andrew Jeffery
bdacaa1703 Use treequery rather than repeated gets 2022-04-28 14:14:03 +01:00
Andrew Jeffery
a388ffbf19 Add some benches 2022-04-28 14:14:03 +01:00
498 changed files with 67014 additions and 21476 deletions


@@ -14,7 +14,8 @@ jobs:
       - uses: actions-rs/toolchain@v1
         with:
           profile: minimal
-          toolchain: stable
+          toolchain: 1.67.0
+          default: true
           components: rustfmt
       - uses: Swatinem/rust-cache@v1
       - run: ./scripts/ci/fmt
@@ -27,7 +28,8 @@ jobs:
       - uses: actions-rs/toolchain@v1
         with:
           profile: minimal
-          toolchain: stable
+          toolchain: 1.67.0
+          default: true
           components: clippy
       - uses: Swatinem/rust-cache@v1
       - run: ./scripts/ci/lint
@@ -40,9 +42,14 @@ jobs:
       - uses: actions-rs/toolchain@v1
         with:
           profile: minimal
-          toolchain: stable
+          toolchain: 1.67.0
+          default: true
       - uses: Swatinem/rust-cache@v1
-      - run: ./scripts/ci/docs
+      - name: Build rust docs
+        run: ./scripts/ci/rust-docs
+        shell: bash
+      - name: Install doxygen
+        run: sudo apt-get install -y doxygen
         shell: bash
   cargo-deny:
@@ -57,23 +64,50 @@ jobs:
       - uses: actions/checkout@v2
       - uses: EmbarkStudios/cargo-deny-action@v1
         with:
+          arguments: '--manifest-path ./rust/Cargo.toml'
           command: check ${{ matrix.checks }}
   wasm_tests:
     runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@v2
-      - name: Install wasm-pack
-        run: curl https://rustwasm.github.io/wasm-pack/installer/init.sh -sSf | sh
+      - name: Install wasm-bindgen-cli
+        run: cargo install wasm-bindgen-cli wasm-opt
+      - name: Install wasm32 target
+        run: rustup target add wasm32-unknown-unknown
       - name: run tests
         run: ./scripts/ci/wasm_tests
+  deno_tests:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v2
+      - uses: denoland/setup-deno@v1
+        with:
+          deno-version: v1.x
+      - name: Install wasm-bindgen-cli
+        run: cargo install wasm-bindgen-cli wasm-opt
+      - name: Install wasm32 target
+        run: rustup target add wasm32-unknown-unknown
+      - name: run tests
+        run: ./scripts/ci/deno_tests
+  js_fmt:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v2
+      - name: install
+        run: yarn global add prettier
+      - name: format
+        run: prettier -c javascript/.prettierrc javascript
   js_tests:
     runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@v2
-      - name: Install wasm-pack
-        run: curl https://rustwasm.github.io/wasm-pack/installer/init.sh -sSf | sh
+      - name: Install wasm-bindgen-cli
+        run: cargo install wasm-bindgen-cli wasm-opt
+      - name: Install wasm32 target
+        run: rustup target add wasm32-unknown-unknown
       - name: run tests
         run: ./scripts/ci/js_tests
@@ -84,7 +118,8 @@ jobs:
       - uses: actions-rs/toolchain@v1
         with:
           profile: minimal
-          toolchain: stable
+          toolchain: nightly-2023-01-26
+          default: true
       - uses: Swatinem/rust-cache@v1
       - name: Install CMocka
         run: sudo apt-get install -y libcmocka-dev
@@ -92,6 +127,8 @@
         uses: jwlawson/actions-setup-cmake@v1.12
         with:
           cmake-version: latest
+      - name: Install rust-src
+        run: rustup component add rust-src
       - name: Build and test C bindings
         run: ./scripts/ci/cmake-build Release Static
         shell: bash
@@ -101,15 +138,14 @@ jobs:
     strategy:
       matrix:
         toolchain:
-          - stable
-          - nightly
-    continue-on-error: ${{ matrix.toolchain == 'nightly' }}
+          - 1.67.0
     steps:
       - uses: actions/checkout@v2
       - uses: actions-rs/toolchain@v1
         with:
           profile: minimal
           toolchain: ${{ matrix.toolchain }}
+          default: true
       - uses: Swatinem/rust-cache@v1
       - run: ./scripts/ci/build-test
         shell: bash
@@ -121,7 +157,8 @@ jobs:
       - uses: actions-rs/toolchain@v1
         with:
           profile: minimal
-          toolchain: stable
+          toolchain: 1.67.0
+          default: true
       - uses: Swatinem/rust-cache@v1
       - run: ./scripts/ci/build-test
         shell: bash
@@ -133,8 +170,8 @@ jobs:
       - uses: actions-rs/toolchain@v1
         with:
           profile: minimal
-          toolchain: stable
+          toolchain: 1.67.0
+          default: true
       - uses: Swatinem/rust-cache@v1
       - run: ./scripts/ci/build-test
         shell: bash


@@ -23,22 +23,30 @@ jobs:
         uses: Swatinem/rust-cache@v1
       - name: Clean docs dir
+        run: rm -rf docs
+        shell: bash
+      - name: Clean Rust docs dir
         uses: actions-rs/cargo@v1
         with:
           command: clean
-          args: --doc
+          args: --manifest-path ./rust/Cargo.toml --doc
-      - name: Build docs
+      - name: Build Rust docs
         uses: actions-rs/cargo@v1
         with:
           command: doc
-          args: --workspace --all-features --no-deps
+          args: --manifest-path ./rust/Cargo.toml --workspace --all-features --no-deps
+      - name: Move Rust docs
+        run: mkdir -p docs && mv rust/target/doc/* docs/.
+        shell: bash
       - name: Configure root page
-        run: echo '<meta http-equiv="refresh" content="0; url=automerge">' > target/doc/index.html
+        run: echo '<meta http-equiv="refresh" content="0; url=automerge">' > docs/index.html
       - name: Deploy docs
         uses: peaceiris/actions-gh-pages@v3
         with:
           github_token: ${{ secrets.GITHUB_TOKEN }}
-          publish_dir: ./target/doc
+          publish_dir: ./docs

.github/workflows/release.yaml (new file)

@@ -0,0 +1,214 @@
name: Release
on:
push:
branches:
- main
jobs:
check_if_wasm_version_upgraded:
name: Check if WASM version has been upgraded
runs-on: ubuntu-latest
outputs:
wasm_version: ${{ steps.version-updated.outputs.current-package-version }}
wasm_has_updated: ${{ steps.version-updated.outputs.has-updated }}
steps:
- uses: JiPaix/package-json-updated-action@v1.0.5
id: version-updated
with:
path: rust/automerge-wasm/package.json
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
publish-wasm:
name: Publish WASM package
runs-on: ubuntu-latest
needs:
- check_if_wasm_version_upgraded
# We create release only if the version in the package.json has been upgraded
if: needs.check_if_wasm_version_upgraded.outputs.wasm_has_updated == 'true'
steps:
- uses: actions/setup-node@v3
with:
node-version: '16.x'
registry-url: 'https://registry.npmjs.org'
- uses: denoland/setup-deno@v1
- uses: actions/checkout@v3
with:
fetch-depth: 0
ref: ${{ github.ref }}
- name: Get rid of local github workflows
run: rm -r .github/workflows
- name: Remove tmp_branch if it exists
run: git push origin :tmp_branch || true
- run: git checkout -b tmp_branch
- name: Install wasm-bindgen-cli
run: cargo install wasm-bindgen-cli wasm-opt
- name: Install wasm32 target
run: rustup target add wasm32-unknown-unknown
- name: run wasm js tests
id: wasm_js_tests
run: ./scripts/ci/wasm_tests
- name: run wasm deno tests
id: wasm_deno_tests
run: ./scripts/ci/deno_tests
- name: build release
id: build_release
run: |
npm --prefix $GITHUB_WORKSPACE/rust/automerge-wasm run release
- name: Collate deno release files
if: steps.wasm_js_tests.outcome == 'success' && steps.wasm_deno_tests.outcome == 'success'
run: |
mkdir $GITHUB_WORKSPACE/deno_wasm_dist
cp $GITHUB_WORKSPACE/rust/automerge-wasm/deno/* $GITHUB_WORKSPACE/deno_wasm_dist
cp $GITHUB_WORKSPACE/rust/automerge-wasm/index.d.ts $GITHUB_WORKSPACE/deno_wasm_dist
cp $GITHUB_WORKSPACE/rust/automerge-wasm/README.md $GITHUB_WORKSPACE/deno_wasm_dist
cp $GITHUB_WORKSPACE/rust/automerge-wasm/LICENSE $GITHUB_WORKSPACE/deno_wasm_dist
sed -i '1i /// <reference types="./index.d.ts" />' $GITHUB_WORKSPACE/deno_wasm_dist/automerge_wasm.js
- name: Create npm release
if: steps.wasm_js_tests.outcome == 'success' && steps.wasm_deno_tests.outcome == 'success'
run: |
if [ "$(npm --prefix $GITHUB_WORKSPACE/rust/automerge-wasm show . version)" = "$VERSION" ]; then
echo "This version is already published"
exit 0
fi
EXTRA_ARGS="--access public"
if [[ $VERSION == *"alpha."* ]] || [[ $VERSION == *"beta."* ]] || [[ $VERSION == *"rc."* ]]; then
echo "Is pre-release version"
EXTRA_ARGS="$EXTRA_ARGS --tag next"
fi
if [ "$NODE_AUTH_TOKEN" = "" ]; then
echo "Can't publish on NPM, You need a NPM_TOKEN secret."
false
fi
npm publish $GITHUB_WORKSPACE/rust/automerge-wasm $EXTRA_ARGS
env:
NODE_AUTH_TOKEN: ${{secrets.NPM_TOKEN}}
VERSION: ${{ needs.check_if_wasm_version_upgraded.outputs.wasm_version }}
- name: Commit wasm deno release files
run: |
git config --global user.name "actions"
git config --global user.email actions@github.com
git add $GITHUB_WORKSPACE/deno_wasm_dist
git commit -am "Add deno release files"
git push origin tmp_branch
- name: Tag wasm release
if: steps.wasm_js_tests.outcome == 'success' && steps.wasm_deno_tests.outcome == 'success'
uses: softprops/action-gh-release@v1
with:
name: Automerge Wasm v${{ needs.check_if_wasm_version_upgraded.outputs.wasm_version }}
tag_name: js/automerge-wasm-${{ needs.check_if_wasm_version_upgraded.outputs.wasm_version }}
target_commitish: tmp_branch
generate_release_notes: false
draft: false
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Remove tmp_branch
run: git push origin :tmp_branch
check_if_js_version_upgraded:
name: Check if JS version has been upgraded
runs-on: ubuntu-latest
outputs:
js_version: ${{ steps.version-updated.outputs.current-package-version }}
js_has_updated: ${{ steps.version-updated.outputs.has-updated }}
steps:
- uses: JiPaix/package-json-updated-action@v1.0.5
id: version-updated
with:
path: javascript/package.json
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
publish-js:
name: Publish JS package
runs-on: ubuntu-latest
needs:
- check_if_js_version_upgraded
- check_if_wasm_version_upgraded
- publish-wasm
# We create release only if the version in the package.json has been upgraded and after the WASM release
if: |
(always() && ! cancelled()) &&
(needs.publish-wasm.result == 'success' || needs.publish-wasm.result == 'skipped') &&
needs.check_if_js_version_upgraded.outputs.js_has_updated == 'true'
steps:
- uses: actions/setup-node@v3
with:
node-version: '16.x'
registry-url: 'https://registry.npmjs.org'
- uses: denoland/setup-deno@v1
- uses: actions/checkout@v3
with:
fetch-depth: 0
ref: ${{ github.ref }}
- name: Get rid of local github workflows
run: rm -r .github/workflows
- name: Remove js_tmp_branch if it exists
run: git push origin :js_tmp_branch || true
- run: git checkout -b js_tmp_branch
- name: check js formatting
run: |
yarn global add prettier
prettier -c javascript/.prettierrc javascript
- name: run js tests
id: js_tests
run: |
cargo install wasm-bindgen-cli wasm-opt
rustup target add wasm32-unknown-unknown
./scripts/ci/js_tests
- name: build js release
id: build_release
run: |
npm --prefix $GITHUB_WORKSPACE/javascript run build
- name: build js deno release
id: build_deno_release
run: |
VERSION=$WASM_VERSION npm --prefix $GITHUB_WORKSPACE/javascript run deno:build
env:
WASM_VERSION: ${{ needs.check_if_wasm_version_upgraded.outputs.wasm_version }}
- name: run deno tests
id: deno_tests
run: |
npm --prefix $GITHUB_WORKSPACE/javascript run deno:test
- name: Collate deno release files
if: steps.js_tests.outcome == 'success' && steps.deno_tests.outcome == 'success'
run: |
mkdir $GITHUB_WORKSPACE/deno_js_dist
cp $GITHUB_WORKSPACE/javascript/deno_dist/* $GITHUB_WORKSPACE/deno_js_dist
- name: Create npm release
if: steps.js_tests.outcome == 'success' && steps.deno_tests.outcome == 'success'
run: |
if [ "$(npm --prefix $GITHUB_WORKSPACE/javascript show . version)" = "$VERSION" ]; then
echo "This version is already published"
exit 0
fi
EXTRA_ARGS="--access public"
if [[ $VERSION == *"alpha."* ]] || [[ $VERSION == *"beta."* ]] || [[ $VERSION == *"rc."* ]]; then
echo "Is pre-release version"
EXTRA_ARGS="$EXTRA_ARGS --tag next"
fi
if [ "$NODE_AUTH_TOKEN" = "" ]; then
echo "Can't publish on NPM, You need a NPM_TOKEN secret."
false
fi
npm publish $GITHUB_WORKSPACE/javascript $EXTRA_ARGS
env:
NODE_AUTH_TOKEN: ${{secrets.NPM_TOKEN}}
VERSION: ${{ needs.check_if_js_version_upgraded.outputs.js_version }}
- name: Commit js deno release files
run: |
git config --global user.name "actions"
git config --global user.email actions@github.com
git add $GITHUB_WORKSPACE/deno_js_dist
git commit -am "Add deno js release files"
git push origin js_tmp_branch
- name: Tag JS release
if: steps.js_tests.outcome == 'success' && steps.deno_tests.outcome == 'success'
uses: softprops/action-gh-release@v1
with:
name: Automerge v${{ needs.check_if_js_version_upgraded.outputs.js_version }}
tag_name: js/automerge-${{ needs.check_if_js_version_upgraded.outputs.js_version }}
target_commitish: js_tmp_branch
generate_release_notes: false
draft: false
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Remove js_tmp_branch
run: git push origin :js_tmp_branch

.gitignore

@@ -1,5 +1,6 @@
-/target
 /.direnv
 perf.*
 /Cargo.lock
 build/
+.vim/*
+/target

(deleted file)

@@ -1,13 +0,0 @@

rust:
	cd automerge && cargo test

wasm:
	cd automerge-wasm && yarn
	cd automerge-wasm && yarn build
	cd automerge-wasm && yarn test
	cd automerge-wasm && yarn link

js: wasm
	cd automerge-js && yarn
	cd automerge-js && yarn link "automerge-wasm"
	cd automerge-js && yarn test

199
README.md
View file

@ -1,110 +1,147 @@
# Automerge RS # Automerge
<img src='./img/sign.svg' width='500' alt='Automerge logo' /> <img src='./img/sign.svg' width='500' alt='Automerge logo' />
[![homepage](https://img.shields.io/badge/homepage-published-informational)](https://automerge.org/) [![homepage](https://img.shields.io/badge/homepage-published-informational)](https://automerge.org/)
[![main docs](https://img.shields.io/badge/docs-main-informational)](https://automerge.org/automerge-rs/automerge/) [![main docs](https://img.shields.io/badge/docs-main-informational)](https://automerge.org/automerge-rs/automerge/)
[![ci](https://github.com/automerge/automerge-rs/actions/workflows/ci.yaml/badge.svg)](https://github.com/automerge/automerge-rs/actions/workflows/ci.yaml) [![ci](https://github.com/automerge/automerge-rs/actions/workflows/ci.yaml/badge.svg)](https://github.com/automerge/automerge-rs/actions/workflows/ci.yaml)
[![docs](https://github.com/automerge/automerge-rs/actions/workflows/docs.yaml/badge.svg)](https://github.com/automerge/automerge-rs/actions/workflows/docs.yaml)
This is a rust implementation of the [Automerge](https://github.com/automerge/automerge) file format and network protocol. Automerge is a library which provides fast implementations of several different
CRDTs, a compact compression format for these CRDTs, and a sync protocol for
efficiently transmitting those changes over the network. The objective of the
project is to support [local-first](https://www.inkandswitch.com/local-first/) applications in the same way that relational
databases support server applications - by providing mechanisms for persistence
which allow application developers to avoid thinking about hard distributed
computing problems. Automerge aims to be PostgreSQL for your local-first app.
If you are looking for the origional `automerge-rs` project that can be used as a wasm backend to the javascript implementation, it can be found [here](https://github.com/automerge/automerge-rs/tree/automerge-1.0). If you're looking for documentation on the JavaScript implementation take a look
at https://automerge.org/docs/hello/. There are other implementations in both
Rust and C, but they are earlier and don't have documentation yet. You can find
them in `rust/automerge` and `rust/automerge-c` if you are comfortable
reading the code and tests to figure out how to use them.
If you're familiar with CRDTs and interested in the design of Automerge in
particular take a look at https://automerge.org/docs/how-it-works/backend/
Finally, if you want to talk to us about this project please [join the
Slack](https://join.slack.com/t/automerge/shared_invite/zt-e4p3760n-kKh7r3KRH1YwwNfiZM8ktw)
## Status ## Status
This project has 4 components: This project is formed of a core Rust implementation which is exposed via FFI in
javascript+WASM, C, and soon other languages. Alex
([@alexjg](https://github.com/alexjg/)]) is working full time on maintaining
automerge, other members of Ink and Switch are also contributing time and there
are several other maintainers. The focus is currently on shipping the new JS
package. We expect to be iterating the API and adding new features over the next
six months so there will likely be several major version bumps in all packages
in that time.
1. _automerge_ - a rust implementation of the library. This project is the most mature and being used in a handful of small applications. In general we try and respect semver.
2. _automerge-wasm_ - a js/wasm interface to the underlying rust library. This api is generally mature and in use in a handful of projects as well.
3. _automerge-js_ - this is a javascript library using the wasm interface to export the same public api of the primary automerge project. Currently this project passes all of automerge's tests but has not been used in any real project or packaged as an NPM. Alpha testers welcome.
4. _automerge-c_ - this is a c library intended to be an ffi integration point for all other languages. It is currently a work in progress and not yet ready for any testing.
## How? ### JavaScript
The current iteration of automerge-rs is complicated to work with because it A stable release of the javascript package is currently available as
adopts the frontend/backend split architecture of the JS implementation. This `@automerge/automerge@2.0.0` where. pre-release verisions of the `2.0.1` are
architecture was necessary due to basic operations on the automerge opset being available as `2.0.1-alpha.n`. `2.0.1*` packages are also available for Deno at
too slow to perform on the UI thread. Recently @orionz has been able to improve https://deno.land/x/automerge
the performance to the point where the split is no longer necessary. This means
we can adopt a much simpler mutable API.
The architecture is now built around the `OpTree`. This is a data structure ### Rust
which supports efficiently inserting new operations and realising values of
existing operations. Most interactions with the `OpTree` are in the form of
implementations of `TreeQuery` - a trait which can be used to traverse the
optree and producing state of some kind. User facing operations are exposed on
an `Automerge` object, under the covers these operations typically instantiate
some `TreeQuery` and run it over the `OpTree`.
## Development The rust codebase is currently oriented around producing a performant backend
for the Javascript wrapper and as such the API for Rust code is low level and
not well documented. We will be returning to this over the next few months but
for now you will need to be comfortable reading the tests and asking questions
to figure out how to use it. If you are looking to build rust applications which
use automerge you may want to look into
[autosurgeon](https://github.com/alexjg/autosurgeon)
Please feel free to open issues and pull requests. ## Repository Organisation
### Running CI - `./rust` - the rust rust implementation and also the Rust components of
platform specific wrappers (e.g. `automerge-wasm` for the WASM API or
`automerge-c` for the C FFI bindings)
- `./javascript` - The javascript library which uses `automerge-wasm`
internally but presents a more idiomatic javascript interface
- `./scripts` - scripts which are useful to maintenance of the repository.
This includes the scripts which are run in CI.
- `./img` - static assets for use in `.md` files
The steps CI will run are all defined in `./scripts/ci`. Obviously CI will run ## Building
everything when you submit a PR, but if you want to run everything locally
before you push you can run `./scripts/ci/run` to run everything.
### Running the JS tests To build this codebase you will need:
You will need to have [node](https://nodejs.org/en/), [yarn](https://yarnpkg.com/getting-started/install), [rust](https://rustup.rs/) and [wasm-pack](https://rustwasm.github.io/wasm-pack/installer/) installed. - `rust`
- `node`
- `yarn`
- `cmake`
- `cmocka`
To build and test the rust library: You will also need to install the following with `cargo install`
```shell - `wasm-bindgen-cli`
$ cd automerge - `wasm-opt`
$ cargo test - `cargo-deny`
And ensure you have added the `wasm32-unknown-unknown` target for rust cross-compilation.
The various subprojects (the rust code, the wrapper projects) have their own
build instructions, but to run the tests that will be run in CI you can run
`./scripts/ci/run`.
### For macOS

These instructions worked to build locally on macOS 13.1 (arm64) as of
Nov 29th 2022.

```bash
# clone the repo
git clone https://github.com/automerge/automerge-rs
cd automerge-rs
# install rustup
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
# install homebrew
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
# install cmake, node, cmocka
brew install cmake node cmocka
# install yarn
npm install --global yarn
# install javascript dependencies
yarn --cwd ./javascript
# install rust dependencies
cargo install wasm-bindgen-cli wasm-opt cargo-deny
# get nightly rust to produce optimized automerge-c builds
rustup toolchain install nightly
rustup component add rust-src --toolchain nightly
# add wasm target in addition to current architecture
rustup target add wasm32-unknown-unknown
# Run ci script
./scripts/ci/run
```

If your build fails to find `cmocka.h` you may need to teach it about homebrew's
installation location:

```
export CPATH=/opt/homebrew/include
export LIBRARY_PATH=/opt/homebrew/lib
./scripts/ci/run
```

## Development

Please feel free to open issues and pull requests.

### Running CI

The steps CI will run are all defined in `./scripts/ci`. Obviously CI will run
everything when you submit a PR, but if you want to run everything locally
before you push you can run `./scripts/ci/run` to run everything.

### Running the JS tests

You will need to have [node](https://nodejs.org/en/), [yarn](https://yarnpkg.com/getting-started/install), [rust](https://rustup.rs/) and [wasm-pack](https://rustwasm.github.io/wasm-pack/installer/) installed.

To build and test the rust library:

```shell
$ cd automerge
$ cargo test
```

To build and test the wasm library:

```shell
## setup
$ cd automerge-wasm
$ yarn
## building or testing
$ yarn build
$ yarn test
## without this the js library won't automatically use changes
$ yarn link
## cutting a release or doing benchmarking
$ yarn release
```
To test the js library (this is where most of the tests reside):

```shell
## setup
$ cd automerge-js
$ yarn
$ yarn link "automerge-wasm"
## testing
$ yarn test
```

And finally, to build and test the C bindings with CMake:

```shell
## setup
$ cd automerge-c
$ mkdir -p build
$ cd build
$ cmake -S .. -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=OFF
## building and testing
$ cmake --build .
```

To add debugging symbols, replace `Release` with `Debug`.
To build a shared library instead of a static one, replace `OFF` with `ON`.

The C bindings can be built and tested on any platform for which CMake is
available but the steps for doing so vary across platforms and are too numerous
to list here.

## Benchmarking

The `edit-trace` folder has the main code for running the edit trace benchmarking.

## Contributing

Please try and split your changes up into relatively independent commits which
change one subsystem at a time and add good commit messages which describe what
the change is and why you're making it (err on the side of longer commit
messages). `git blame` should give future maintainers a good idea of why
something is the way it is.
TODO.md
@@ -1,32 +0,0 @@
### next steps:
1. C API
2. port rust command line tool
3. fast load
### ergonomics:
1. value() -> () or something that into's a value
### automerge:
1. single pass (fast) load
2. micro-patches / bare bones observation API / fully hydrated documents
### future:
1. handle columns with unknown data in and out
2. branches with different indexes
### Peritext
1. add mark / remove mark -- type, start/end elemid (inclusive,exclusive)
2. track any formatting ops that start or end on a character
3. ops right before the character, ops right after that character
4. query a single character - character, plus marks that start or end on that character
what is its current formatting,
what are the ops that include that in their span,
None = same as last time, Set( bold, italic ),
keep these on index
5. op probably belongs with the start character - possibly packed at the beginning or end of the list
### maybe:
1. tables
### no:
1. cursors
@@ -1,3 +0,0 @@
automerge
automerge.h
automerge.o
@@ -1,135 +0,0 @@
cmake_minimum_required(VERSION 3.18 FATAL_ERROR)
set(CMAKE_MODULE_PATH "${CMAKE_SOURCE_DIR}/cmake")
# Parse the library name, project name and project version out of Cargo's TOML file.
set(CARGO_LIB_SECTION OFF)
set(LIBRARY_NAME "")
set(CARGO_PKG_SECTION OFF)
set(CARGO_PKG_NAME "")
set(CARGO_PKG_VERSION "")
file(READ Cargo.toml TOML_STRING)
string(REPLACE ";" "\\\\;" TOML_STRING "${TOML_STRING}")
string(REPLACE "\n" ";" TOML_LINES "${TOML_STRING}")
foreach(TOML_LINE IN ITEMS ${TOML_LINES})
string(REGEX MATCH "^\\[(lib|package)\\]$" _ ${TOML_LINE})
if(CMAKE_MATCH_1 STREQUAL "lib")
set(CARGO_LIB_SECTION ON)
set(CARGO_PKG_SECTION OFF)
elseif(CMAKE_MATCH_1 STREQUAL "package")
set(CARGO_LIB_SECTION OFF)
set(CARGO_PKG_SECTION ON)
endif()
string(REGEX MATCH "^name += +\"([^\"]+)\"$" _ ${TOML_LINE})
if(CMAKE_MATCH_1 AND (CARGO_LIB_SECTION AND NOT CARGO_PKG_SECTION))
set(LIBRARY_NAME "${CMAKE_MATCH_1}")
elseif(CMAKE_MATCH_1 AND (NOT CARGO_LIB_SECTION AND CARGO_PKG_SECTION))
set(CARGO_PKG_NAME "${CMAKE_MATCH_1}")
endif()
string(REGEX MATCH "^version += +\"([^\"]+)\"$" _ ${TOML_LINE})
if(CMAKE_MATCH_1 AND CARGO_PKG_SECTION)
set(CARGO_PKG_VERSION "${CMAKE_MATCH_1}")
endif()
if(LIBRARY_NAME AND (CARGO_PKG_NAME AND CARGO_PKG_VERSION))
break()
endif()
endforeach()
project(${CARGO_PKG_NAME} VERSION ${CARGO_PKG_VERSION} LANGUAGES C DESCRIPTION "C bindings for the Automerge Rust backend.")
include(CTest)
option(BUILD_SHARED_LIBS "Enable the choice of a shared or static library.")
include(CMakePackageConfigHelpers)
include(GNUInstallDirs)
string(MAKE_C_IDENTIFIER ${PROJECT_NAME} SYMBOL_PREFIX)
string(TOUPPER ${SYMBOL_PREFIX} SYMBOL_PREFIX)
set(CARGO_TARGET_DIR "${CMAKE_CURRENT_BINARY_DIR}/Cargo/target")
add_subdirectory(src)
# Generate and install the configuration header.
math(EXPR INTEGER_PROJECT_VERSION_MAJOR "${PROJECT_VERSION_MAJOR} * 100000")
math(EXPR INTEGER_PROJECT_VERSION_MINOR "${PROJECT_VERSION_MINOR} * 100")
math(EXPR INTEGER_PROJECT_VERSION_PATCH "${PROJECT_VERSION_PATCH}")
math(EXPR INTEGER_PROJECT_VERSION "${INTEGER_PROJECT_VERSION_MAJOR} + ${INTEGER_PROJECT_VERSION_MINOR} + ${INTEGER_PROJECT_VERSION_PATCH}")
configure_file(
${CMAKE_MODULE_PATH}/config.h.in
config.h
@ONLY
NEWLINE_STYLE LF
)
install(
FILES ${CMAKE_BINARY_DIR}/config.h
DESTINATION ${CMAKE_INSTALL_INCLUDEDIR}/${PROJECT_NAME}
)
if(BUILD_TESTING)
add_subdirectory(test)
enable_testing()
endif()
# Generate and install .cmake files
set(PROJECT_CONFIG_NAME "${PROJECT_NAME}-config")
set(PROJECT_CONFIG_VERSION_NAME "${PROJECT_CONFIG_NAME}-version")
write_basic_package_version_file(
${CMAKE_CURRENT_BINARY_DIR}/${PROJECT_CONFIG_VERSION_NAME}.cmake
VERSION ${PROJECT_VERSION}
COMPATIBILITY ExactVersion
)
# The namespace label starts with the title-cased library name.
string(SUBSTRING ${LIBRARY_NAME} 0 1 NS_FIRST)
string(SUBSTRING ${LIBRARY_NAME} 1 -1 NS_REST)
string(TOUPPER ${NS_FIRST} NS_FIRST)
string(TOLOWER ${NS_REST} NS_REST)
string(CONCAT NAMESPACE ${NS_FIRST} ${NS_REST} "::")
# \note CMake doesn't automate the exporting of an imported library's targets
# so the package configuration script must do it.
configure_package_config_file(
${CMAKE_MODULE_PATH}/${PROJECT_CONFIG_NAME}.cmake.in
${CMAKE_CURRENT_BINARY_DIR}/${PROJECT_CONFIG_NAME}.cmake
INSTALL_DESTINATION ${CMAKE_INSTALL_LIBDIR}/cmake/${PROJECT_NAME}
)
install(
FILES
${CMAKE_CURRENT_BINARY_DIR}/${PROJECT_CONFIG_NAME}.cmake
${CMAKE_CURRENT_BINARY_DIR}/${PROJECT_CONFIG_VERSION_NAME}.cmake
DESTINATION
${CMAKE_INSTALL_LIBDIR}/cmake/${PROJECT_NAME}
)
@@ -1,30 +0,0 @@
CC=gcc
CFLAGS=-I.
DEPS=automerge.h
LIBS=-lpthread -ldl -lm
LDIR=../target/release
LIB=../target/release/libautomerge.a
DEBUG_LIB=../target/debug/libautomerge.a
all: $(DEBUG_LIB) automerge
debug: LDIR=../target/debug
debug: automerge $(DEBUG_LIB)
automerge: automerge.o $(LDIR)/libautomerge.a
$(CC) -o $@ automerge.o $(LDIR)/libautomerge.a $(LIBS) -L$(LDIR)
$(DEBUG_LIB): src/*.rs
cargo build
$(LIB): src/*.rs
cargo build --release
%.o: %.c $(DEPS)
$(CC) -c -o $@ $< $(CFLAGS)
.PHONY: clean
clean:
rm -f *.o automerge $(LIB) $(DEBUG_LIB)
@@ -1,95 +0,0 @@
## Methods we need to support
### Basic management
1. `AMcreate()`
1. `AMclone(doc)`
1. `AMfree(doc)`
1. `AMconfig(doc, key, val)` // set actor
1. `actor = get_actor(doc)`
### Transactions
1. `AMpendingOps(doc)`
1. `AMcommit(doc, message, time)`
1. `AMrollback(doc)`
### Write
1. `AMset{Map|List}(doc, obj, prop, value)`
1. `AMinsert(doc, obj, index, value)`
1. `AMpush(doc, obj, value)`
1. `AMdel{Map|List}(doc, obj, prop)`
1. `AMinc{Map|List}(doc, obj, prop, value)`
1. `AMspliceText(doc, obj, start, num_del, text)`
### Read
1. `AMkeys(doc, obj, heads)`
1. `AMlength(doc, obj, heads)`
1. `AMvalues(doc, obj, heads)`
1. `AMtext(doc, obj, heads)`
### Sync
1. `AMgenerateSyncMessage(doc, state)`
1. `AMreceiveSyncMessage(doc, state, message)`
1. `AMinitSyncState()`
### Save / Load
1. `AMload(data)`
1. `AMloadIncremental(doc, data)`
1. `AMsave(doc)`
1. `AMsaveIncremental(doc)`
### Low Level Access
1. `AMapplyChanges(doc, changes)`
1. `AMgetChanges(doc, deps)`
1. `AMgetChangesAdded(doc1, doc2)`
1. `AMgetHeads(doc)`
1. `AMgetLastLocalChange(doc)`
1. `AMgetMissingDeps(doc, heads)`
### Encode/Decode
1. `AMencodeChange(change)`
1. `AMdecodeChange(change)`
1. `AMencodeSyncMessage(change)`
1. `AMdecodeSyncMessage(change)`
1. `AMencodeSyncState(change)`
1. `AMdecodeSyncState(change)`
## Open Question - Memory management

Most of these calls return one or more items of arbitrary length. Doing memory management in C is tricky. This is my proposed solution...

```
// returns one or zero opids
n = automerge_set(doc, "_root", "hello", datatype, value);
if (n) {
    automerge_pop(doc, &obj, len);
}

// returns n values
n = automerge_values(doc, "_root", "hello");
for (i = 0; i < n; i++) {
    automerge_pop_value(doc, &value, &datatype, len);
}
```

There would be one pop method per object type. Users allocate and free the buffers. Multiple return values would result in multiple pops. Too-small buffers would cause an error and allow a retry.
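As a sketch of how that too-small-buffer retry could look from the caller's side; `pop_value()` below is a hypothetical stand-in for the proposed `automerge_pop_value()`, and returning the required size on failure is just one possible way to signal the error:

```
/* Hypothetical sketch only: pop_value() stands in for the proposed
 * automerge_pop_value(); it is not a real automerge API. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Stand-in for a value queued inside the document handle. */
static const char* queued = "hello world";

/* Copies the pending value into buf. Returns 0 on success, or the
 * required size when cap is too small, allowing the caller to retry. */
size_t pop_value(char* buf, size_t cap) {
    size_t needed = strlen(queued) + 1;
    if (cap < needed) {
        return needed; /* too small: report how much is required */
    }
    memcpy(buf, queued, needed);
    return 0; /* success */
}

int main(void) {
    size_t cap = 4; /* deliberately too small for "hello world" */
    char* buf = malloc(cap);
    size_t needed = pop_value(buf, cap);
    if (needed > 0) { /* error: grow the buffer and retry */
        buf = realloc(buf, needed);
        needed = pop_value(buf, needed);
    }
    if (needed == 0) {
        printf("popped: %s\n", buf);
    }
    free(buf);
    return 0;
}
```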
### Formats

Actors - We could do (bytes, len) or a hex-encoded string?
ObjIds - We could do flat bytes of the ExId struct, but let's use human-readable strings for now - the struct would be faster but opaque.
Heads - Might as well make it a flat buffer `(n, hash, hash, ...)`.
Changes - Put them all in a flat concatenated buffer.
Encode/Decode - to json strings?
@@ -1,36 +0,0 @@
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <assert.h>
#include "automerge.h"
#define MAX_BUFF_SIZE 4096
int main() {
int n = 0;
int data_type = 0;
char buff[MAX_BUFF_SIZE];
char obj[MAX_BUFF_SIZE];
AMresult* res = NULL;
printf("begin\n");
AMdoc* doc = AMcreate();
printf("AMconfig()...");
AMconfig(doc, "actor", "aabbcc");
printf("pass!\n");
printf("AMmapSetStr()...\n");
res = AMmapSetStr(doc, NULL, "string", "hello world");
if (AMresultStatus(res) != AM_STATUS_COMMAND_OK)
{
printf("AMmapSet() failed: %s\n", AMerrorMessage(res));
return 1;
}
AMclear(res);
printf("pass!\n");
AMdestroy(doc);
printf("end\n");
}
@@ -1,25 +0,0 @@
extern crate cbindgen;
use std::{env, path::PathBuf};
fn main() {
let crate_dir = PathBuf::from(
env::var("CARGO_MANIFEST_DIR").expect("CARGO_MANIFEST_DIR env var is not defined"),
);
let config = cbindgen::Config::from_file("cbindgen.toml")
.expect("Unable to find cbindgen.toml configuration file");
// let mut config: cbindgen::Config = Default::default();
// config.language = cbindgen::Language::C;
if let Ok(writer) = cbindgen::generate_with_config(&crate_dir, config) {
writer.write_to_file(crate_dir.join("automerge.h"));
// Also write the generated header into the target directory when
// specified (necessary for an out-of-source build a la CMake).
if let Ok(target_dir) = env::var("CARGO_TARGET_DIR") {
writer.write_to_file(PathBuf::from(target_dir).join("automerge.h"));
}
}
}
@@ -1,39 +0,0 @@
after_includes = """\n
/**
* \\defgroup enumerations Public Enumerations
Symbolic names for integer constants.
*/
/**
* \\memberof AMdoc
* \\def AM_ROOT
* \\brief The root object of an `AMdoc` struct.
*/
#define AM_ROOT NULL
"""
autogen_warning = "/* Warning, this file is autogenerated by cbindgen. Don't modify this manually. */"
documentation = true
documentation_style = "doxy"
header = """
/** \\file
* All constants, functions and types in the Automerge library's C API.
*/
"""
include_guard = "automerge_h"
includes = []
language = "C"
line_length = 140
no_includes = true
style = "both"
sys_includes = ["stdbool.h", "stddef.h", "stdint.h"]
usize_is_size_t = true
[enum]
derive_const_casts = true
enum_class = true
must_use = "MUST_USE_ENUM"
prefix_with_name = true
rename_variants = "ScreamingSnakeCase"
[export]
item_types = ["enums", "structs", "opaque", "constants", "functions"]
@@ -1,14 +0,0 @@
#ifndef @SYMBOL_PREFIX@_CONFIG_INCLUDED
#define @SYMBOL_PREFIX@_CONFIG_INCLUDED
/* This header is auto-generated by CMake. */
#define @SYMBOL_PREFIX@_VERSION @INTEGER_PROJECT_VERSION@
#define @SYMBOL_PREFIX@_MAJOR_VERSION (@SYMBOL_PREFIX@_VERSION / 100000)
#define @SYMBOL_PREFIX@_MINOR_VERSION ((@SYMBOL_PREFIX@_VERSION / 100) % 1000)
#define @SYMBOL_PREFIX@_PATCH_VERSION (@SYMBOL_PREFIX@_VERSION % 100)
#endif /* @SYMBOL_PREFIX@_CONFIG_INCLUDED */
@@ -1,220 +0,0 @@
cmake_minimum_required(VERSION 3.18 FATAL_ERROR)
find_program (
CARGO_CMD
"cargo"
PATHS "$ENV{CARGO_HOME}/bin"
DOC "The Cargo command"
)
if(NOT CARGO_CMD)
message(FATAL_ERROR "Cargo (Rust package manager) not found! Install it and/or set the CARGO_HOME environment variable.")
endif()
string(TOLOWER "${CMAKE_BUILD_TYPE}" BUILD_TYPE_LOWER)
if(BUILD_TYPE_LOWER STREQUAL debug)
set(CARGO_BUILD_TYPE "debug")
set(CARGO_FLAG "")
else()
set(CARGO_BUILD_TYPE "release")
set(CARGO_FLAG "--release")
endif()
set(CARGO_CURRENT_BINARY_DIR "${CARGO_TARGET_DIR}/${CARGO_BUILD_TYPE}")
set(
CARGO_OUTPUT
${CARGO_TARGET_DIR}/${LIBRARY_NAME}.h
${CARGO_CURRENT_BINARY_DIR}/${CMAKE_SHARED_LIBRARY_PREFIX}${LIBRARY_NAME}${CMAKE_SHARED_LIBRARY_SUFFIX}
${CARGO_CURRENT_BINARY_DIR}/${CMAKE_STATIC_LIBRARY_PREFIX}${LIBRARY_NAME}${CMAKE_STATIC_LIBRARY_SUFFIX}
)
if(WIN32)
# \note The basename of an import library output by Cargo is the filename
# of its corresponding shared library.
list(APPEND CARGO_OUTPUT ${CARGO_CURRENT_BINARY_DIR}/${CMAKE_SHARED_LIBRARY_PREFIX}${LIBRARY_NAME}${CMAKE_SHARED_LIBRARY_SUFFIX}${CMAKE_STATIC_LIBRARY_SUFFIX})
endif()
add_custom_command(
OUTPUT ${CARGO_OUTPUT}
COMMAND
# \note cbindgen won't regenerate its output header file after it's
# been removed but it will after its configuration file has been
# updated.
${CMAKE_COMMAND} -DCONDITION=NOT_EXISTS -P ${CMAKE_SOURCE_DIR}/cmake/file_touch.cmake -- ${CARGO_TARGET_DIR}/${LIBRARY_NAME}.h ${CMAKE_SOURCE_DIR}/cbindgen.toml
COMMAND
${CMAKE_COMMAND} -E env CARGO_TARGET_DIR=${CARGO_TARGET_DIR} ${CARGO_CMD} build ${CARGO_FLAG}
MAIN_DEPENDENCY
lib.rs
DEPENDS
doc.rs
result.rs
utils.rs
${CMAKE_SOURCE_DIR}/build.rs
${CMAKE_SOURCE_DIR}/Cargo.toml
${CMAKE_SOURCE_DIR}/cbindgen.toml
WORKING_DIRECTORY
${CMAKE_SOURCE_DIR}
COMMENT
"Producing the library artifacts with Cargo..."
VERBATIM
)
add_custom_target(
${LIBRARY_NAME}_artifacts
DEPENDS ${CARGO_OUTPUT}
)
# \note cbindgen's naming behavior isn't fully configurable.
add_custom_command(
TARGET ${LIBRARY_NAME}_artifacts
POST_BUILD
COMMAND
# Compensate for cbindgen's variant struct naming.
${CMAKE_COMMAND} -DMATCH_REGEX=AM\([^_]+_[^_]+\)_Body -DREPLACE_EXPR=AM\\1 -P ${CMAKE_SOURCE_DIR}/cmake/file_regex_replace.cmake -- ${CARGO_TARGET_DIR}/${LIBRARY_NAME}.h
COMMAND
# Compensate for cbindgen's union tag enum type naming.
${CMAKE_COMMAND} -DMATCH_REGEX=AM\([^_]+\)_Tag -DREPLACE_EXPR=AM\\1Variant -P ${CMAKE_SOURCE_DIR}/cmake/file_regex_replace.cmake -- ${CARGO_TARGET_DIR}/${LIBRARY_NAME}.h
COMMAND
# Compensate for cbindgen's translation of consecutive uppercase letters to "ScreamingSnakeCase".
${CMAKE_COMMAND} -DMATCH_REGEX=A_M\([^_]+\)_ -DREPLACE_EXPR=AM_\\1_ -P ${CMAKE_SOURCE_DIR}/cmake/file_regex_replace.cmake -- ${CARGO_TARGET_DIR}/${LIBRARY_NAME}.h
WORKING_DIRECTORY
${CMAKE_SOURCE_DIR}
COMMENT
"Compensating for hard-coded cbindgen naming behaviors..."
VERBATIM
)
if(BUILD_SHARED_LIBS)
if(WIN32)
set(LIBRARY_DESTINATION "${CMAKE_INSTALL_BINDIR}")
else()
set(LIBRARY_DESTINATION "${CMAKE_INSTALL_LIBDIR}")
endif()
set(LIBRARY_DEFINE_SYMBOL "${SYMBOL_PREFIX}_EXPORTS")
# \note The basename of an import library output by Cargo is the filename
# of its corresponding shared library.
set(LIBRARY_IMPLIB "${CARGO_CURRENT_BINARY_DIR}/${CMAKE_SHARED_LIBRARY_PREFIX}${LIBRARY_NAME}${CMAKE_SHARED_LIBRARY_SUFFIX}${CMAKE_STATIC_LIBRARY_SUFFIX}")
set(LIBRARY_LOCATION "${CARGO_CURRENT_BINARY_DIR}/${CMAKE_SHARED_LIBRARY_PREFIX}${LIBRARY_NAME}${CMAKE_SHARED_LIBRARY_SUFFIX}")
set(LIBRARY_NO_SONAME "${WIN32}")
set(LIBRARY_SONAME "${CMAKE_SHARED_LIBRARY_PREFIX}${LIBRARY_NAME}${CMAKE_${CMAKE_BUILD_TYPE}_POSTFIX}${CMAKE_SHARED_LIBRARY_SUFFIX}")
set(LIBRARY_TYPE "SHARED")
else()
set(LIBRARY_DEFINE_SYMBOL "")
set(LIBRARY_DESTINATION "${CMAKE_INSTALL_LIBDIR}")
set(LIBRARY_IMPLIB "")
set(LIBRARY_LOCATION "${CARGO_CURRENT_BINARY_DIR}/${CMAKE_STATIC_LIBRARY_PREFIX}${LIBRARY_NAME}${CMAKE_STATIC_LIBRARY_SUFFIX}")
set(LIBRARY_NO_SONAME "TRUE")
set(LIBRARY_SONAME "")
set(LIBRARY_TYPE "STATIC")
endif()
add_library(${LIBRARY_NAME} ${LIBRARY_TYPE} IMPORTED GLOBAL)
set_target_properties(
${LIBRARY_NAME}
PROPERTIES
# \note Cargo writes a debug build into a nested directory instead of
# decorating its name.
DEBUG_POSTFIX ""
DEFINE_SYMBOL "${LIBRARY_DEFINE_SYMBOL}"
IMPORTED_IMPLIB "${LIBRARY_IMPLIB}"
IMPORTED_LOCATION "${LIBRARY_LOCATION}"
IMPORTED_NO_SONAME "${LIBRARY_NO_SONAME}"
IMPORTED_SONAME "${LIBRARY_SONAME}"
LINKER_LANGUAGE C
PUBLIC_HEADER "${CARGO_TARGET_DIR}/${LIBRARY_NAME}.h"
SOVERSION "${PROJECT_VERSION_MAJOR}"
VERSION "${PROJECT_VERSION}"
# \note Cargo exports all of the symbols automatically.
WINDOWS_EXPORT_ALL_SYMBOLS "TRUE"
)
target_compile_definitions(${LIBRARY_NAME} INTERFACE $<TARGET_PROPERTY:${LIBRARY_NAME},DEFINE_SYMBOL>)
target_include_directories(
${LIBRARY_NAME}
INTERFACE
"$<INSTALL_INTERFACE:${CMAKE_INSTALL_INCLUDEDIR}/${PROJECT_NAME}>"
)
set(CMAKE_THREAD_PREFER_PTHREAD TRUE)
set(THREADS_PREFER_PTHREAD_FLAG TRUE)
find_package(Threads REQUIRED)
set(LIBRARY_DEPENDENCIES Threads::Threads ${CMAKE_DL_LIBS})
if(WIN32)
list(APPEND LIBRARY_DEPENDENCIES Bcrypt userenv ws2_32)
else()
list(APPEND LIBRARY_DEPENDENCIES m)
endif()
target_link_libraries(${LIBRARY_NAME} INTERFACE ${LIBRARY_DEPENDENCIES})
install(
FILES $<TARGET_PROPERTY:${LIBRARY_NAME},IMPORTED_IMPLIB>
TYPE LIB
# \note The basename of an import library output by Cargo is the filename
# of its corresponding shared library.
RENAME "${CMAKE_STATIC_LIBRARY_PREFIX}${LIBRARY_NAME}${CMAKE_${CMAKE_BUILD_TYPE}_POSTFIX}${CMAKE_STATIC_LIBRARY_SUFFIX}"
OPTIONAL
)
set(LIBRARY_FILE_NAME "${CMAKE_${LIBRARY_TYPE}_LIBRARY_PREFIX}${LIBRARY_NAME}${CMAKE_${CMAKE_BUILD_TYPE}_POSTFIX}${CMAKE_${LIBRARY_TYPE}_LIBRARY_SUFFIX}")
install(
FILES $<TARGET_PROPERTY:${LIBRARY_NAME},IMPORTED_LOCATION>
RENAME "${LIBRARY_FILE_NAME}"
DESTINATION ${LIBRARY_DESTINATION}
)
install(
FILES $<TARGET_PROPERTY:${LIBRARY_NAME},PUBLIC_HEADER>
DESTINATION ${CMAKE_INSTALL_INCLUDEDIR}/${PROJECT_NAME}
)
find_package(Doxygen OPTIONAL_COMPONENTS dot)
if(DOXYGEN_FOUND)
set(DOXYGEN_GENERATE_LATEX YES)
set(DOXYGEN_PDF_HYPERLINKS YES)
set(DOXYGEN_PROJECT_LOGO "${CMAKE_SOURCE_DIR}/img/brandmark.png")
set(DOXYGEN_SORT_BRIEF_DOCS YES)
set(DOXYGEN_USE_MDFILE_AS_MAINPAGE "${CMAKE_SOURCE_DIR}/README.md")
doxygen_add_docs(
${LIBRARY_NAME}_docs
"${CARGO_TARGET_DIR}/${LIBRARY_NAME}.h"
"${CMAKE_SOURCE_DIR}/README.md"
USE_STAMP_FILE
WORKING_DIRECTORY ${CMAKE_SOURCE_DIR}
COMMENT "Producing documentation with Doxygen..."
)
# \note A Doxygen input file isn't a file-level dependency so the Doxygen
# command must instead depend upon a target that outputs the file or
# it will just output an error message when it can't be found.
add_dependencies(${LIBRARY_NAME}_docs ${LIBRARY_NAME}_artifacts)
endif()
@@ -1,85 +0,0 @@
use automerge as am;
use std::collections::BTreeSet;
use std::ops::{Deref, DerefMut};
use crate::result::AMobjId;
use automerge::transaction::Transactable;
/// \struct AMdoc
/// \brief A JSON-like CRDT.
#[derive(Clone)]
pub struct AMdoc {
body: am::AutoCommit,
obj_ids: BTreeSet<AMobjId>,
}
impl AMdoc {
pub fn new(body: am::AutoCommit) -> Self {
Self {
body,
obj_ids: BTreeSet::new(),
}
}
pub fn insert_object(
&mut self,
obj: &am::ObjId,
index: usize,
value: am::ObjType,
) -> Result<&AMobjId, am::AutomergeError> {
match self.body.insert_object(obj, index, value) {
Ok(ex_id) => {
let obj_id = AMobjId::new(ex_id);
self.obj_ids.insert(obj_id.clone());
match self.obj_ids.get(&obj_id) {
Some(obj_id) => Ok(obj_id),
None => Err(am::AutomergeError::Fail),
}
}
Err(e) => Err(e),
}
}
pub fn put_object<O: AsRef<am::ObjId>, P: Into<am::Prop>>(
&mut self,
obj: O,
prop: P,
value: am::ObjType,
) -> Result<&AMobjId, am::AutomergeError> {
match self.body.put_object(obj, prop, value) {
Ok(ex_id) => {
let obj_id = AMobjId::new(ex_id);
self.obj_ids.insert(obj_id.clone());
match self.obj_ids.get(&obj_id) {
Some(obj_id) => Ok(obj_id),
None => Err(am::AutomergeError::Fail),
}
}
Err(e) => Err(e),
}
}
pub fn drop_obj_id(&mut self, obj_id: &AMobjId) -> bool {
self.obj_ids.remove(obj_id)
}
}
impl Deref for AMdoc {
type Target = am::AutoCommit;
fn deref(&self) -> &Self::Target {
&self.body
}
}
impl DerefMut for AMdoc {
fn deref_mut(&mut self) -> &mut Self::Target {
&mut self.body
}
}
impl From<AMdoc> for *mut AMdoc {
fn from(b: AMdoc) -> Self {
Box::into_raw(Box::new(b))
}
}
File diff suppressed because it is too large

@@ -1,212 +0,0 @@
use automerge as am;
use std::ffi::CString;
use std::ops::Deref;
/// \struct AMobjId
/// \brief An object's unique identifier.
#[derive(Clone, Eq, Ord, PartialEq, PartialOrd)]
pub struct AMobjId(am::ObjId);
impl AMobjId {
pub fn new(obj_id: am::ObjId) -> Self {
Self(obj_id)
}
}
impl AsRef<am::ObjId> for AMobjId {
fn as_ref(&self) -> &am::ObjId {
&self.0
}
}
impl Deref for AMobjId {
type Target = am::ObjId;
fn deref(&self) -> &Self::Target {
&self.0
}
}
/// \memberof AMvalue
/// \struct AMbyteSpan
/// \brief A contiguous sequence of bytes.
///
#[repr(C)]
pub struct AMbyteSpan {
/// A pointer to the byte at position zero.
/// \warning \p src is only valid until the `AMfreeResult()` function is called
/// on the `AMresult` struct hosting the array of bytes to which
/// it points.
src: *const u8,
/// The number of bytes in the sequence.
count: usize,
}
impl From<&Vec<u8>> for AMbyteSpan {
fn from(v: &Vec<u8>) -> Self {
AMbyteSpan {
src: (*v).as_ptr(),
count: (*v).len(),
}
}
}
impl From<&mut am::ActorId> for AMbyteSpan {
fn from(actor: &mut am::ActorId) -> Self {
let slice = actor.to_bytes();
AMbyteSpan {
src: slice.as_ptr(),
count: slice.len(),
}
}
}
/// \struct AMvalue
/// \brief A discriminated union of value type variants for an `AMresult` struct.
///
/// \enum AMvalueVariant
/// \brief A value type discriminant.
///
/// \var AMvalue::tag
/// The variant discriminator of an `AMvalue` struct.
///
/// \var AMvalue::actor_id
/// An actor ID as an `AMbyteSpan` struct.
///
/// \var AMvalue::boolean
/// A boolean.
///
/// \var AMvalue::bytes
/// An array of bytes as an `AMbyteSpan` struct.
///
/// \var AMvalue::counter
/// A CRDT counter.
///
/// \var AMvalue::f64
/// A 64-bit float.
///
/// \var AMvalue::change_hash
/// A change hash as an `AMbyteSpan` struct.
///
/// \var AMvalue::int_
/// A 64-bit signed integer.
///
/// \var AMvalue::obj_id
/// An object identifier.
///
/// \var AMvalue::str
/// A UTF-8 string.
///
/// \var AMvalue::timestamp
/// A Lamport timestamp.
///
/// \var AMvalue::uint
/// A 64-bit unsigned integer.
#[repr(C)]
pub enum AMvalue<'a> {
/// An actor ID variant.
ActorId(AMbyteSpan),
/// A boolean variant.
Boolean(libc::c_char),
/// An array of bytes variant.
Bytes(AMbyteSpan),
/*
/// A changes variant.
Changes(_),
*/
/// A CRDT counter variant.
Counter(i64),
/// A 64-bit float variant.
F64(f64),
/// A change hash variant.
ChangeHash(AMbyteSpan),
/// A 64-bit signed integer variant.
Int(i64),
/*
/// A keys variant.
Keys(_),
*/
/// A nothing variant.
Nothing,
/// A null variant.
Null,
/// An object identifier variant.
ObjId(&'a AMobjId),
/// A UTF-8 string variant.
Str(*const libc::c_char),
/// A Lamport timestamp variant.
Timestamp(i64),
/*
/// A transaction variant.
Transaction(_),
*/
/// A 64-bit unsigned integer variant.
Uint(u64),
}
/// \struct AMresult
/// \brief A discriminated union of result variants.
///
pub enum AMresult<'a> {
ActorId(am::ActorId),
Changes(Vec<am::Change>),
Error(CString),
ObjId(&'a AMobjId),
Nothing,
Scalars(Vec<am::Value<'static>>, Option<CString>),
}
impl<'a> AMresult<'a> {
pub(crate) fn err(s: &str) -> Self {
AMresult::Error(CString::new(s).unwrap())
}
}
impl<'a> From<Result<am::ActorId, am::AutomergeError>> for AMresult<'a> {
fn from(maybe: Result<am::ActorId, am::AutomergeError>) -> Self {
match maybe {
Ok(actor_id) => AMresult::ActorId(actor_id),
Err(e) => AMresult::Error(CString::new(e.to_string()).unwrap()),
}
}
}
impl<'a> From<Result<&'a AMobjId, am::AutomergeError>> for AMresult<'a> {
fn from(maybe: Result<&'a AMobjId, am::AutomergeError>) -> Self {
match maybe {
Ok(obj_id) => AMresult::ObjId(obj_id),
Err(e) => AMresult::Error(CString::new(e.to_string()).unwrap()),
}
}
}
impl<'a> From<Result<(), am::AutomergeError>> for AMresult<'a> {
fn from(maybe: Result<(), am::AutomergeError>) -> Self {
match maybe {
Ok(()) => AMresult::Nothing,
Err(e) => AMresult::Error(CString::new(e.to_string()).unwrap()),
}
}
}
impl<'a> From<Result<Option<(am::Value<'static>, am::ObjId)>, am::AutomergeError>>
for AMresult<'a>
{
fn from(maybe: Result<Option<(am::Value<'static>, am::ObjId)>, am::AutomergeError>) -> Self {
match maybe {
// \todo Ensure that it's alright to ignore the `am::ObjId` value.
Ok(Some((value, _))) => AMresult::Scalars(vec![value], None),
Ok(None) => AMresult::Nothing,
Err(e) => AMresult::Error(CString::new(e.to_string()).unwrap()),
}
}
}
impl<'a> From<Result<am::Value<'static>, am::AutomergeError>> for AMresult<'a> {
fn from(maybe: Result<am::Value<'static>, am::AutomergeError>) -> Self {
match maybe {
Ok(value) => AMresult::Scalars(vec![value], None),
Err(e) => AMresult::Error(CString::new(e.to_string()).unwrap()),
}
}
}
@@ -1,7 +0,0 @@
use crate::AMresult;
impl<'a> From<AMresult<'a>> for *mut AMresult<'a> {
fn from(b: AMresult<'a>) -> Self {
Box::into_raw(Box::new(b))
}
}
@@ -1,51 +0,0 @@
cmake_minimum_required(VERSION 3.18 FATAL_ERROR)
find_package(cmocka REQUIRED)
add_executable(
test_${LIBRARY_NAME}
group_state.c
amdoc_property_tests.c
amlistput_tests.c
ammapput_tests.c
macro_utils.c
main.c
)
set_target_properties(test_${LIBRARY_NAME} PROPERTIES LINKER_LANGUAGE C)
# \note An imported library's INTERFACE_INCLUDE_DIRECTORIES property can't
# contain a non-existent path so its build-time include directory
# must be specified for all of its dependent targets instead.
target_include_directories(
test_${LIBRARY_NAME}
PRIVATE "$<BUILD_INTERFACE:${CARGO_TARGET_DIR}>"
)
target_link_libraries(test_${LIBRARY_NAME} PRIVATE cmocka ${LIBRARY_NAME})
add_dependencies(test_${LIBRARY_NAME} ${LIBRARY_NAME}_artifacts)
if(BUILD_SHARED_LIBS AND WIN32)
add_custom_command(
TARGET test_${LIBRARY_NAME}
POST_BUILD
COMMAND ${CMAKE_COMMAND} -E copy_if_different
${CARGO_CURRENT_BINARY_DIR}/${CMAKE_SHARED_LIBRARY_PREFIX}${LIBRARY_NAME}${CMAKE_${CMAKE_BUILD_TYPE}_POSTFIX}${CMAKE_SHARED_LIBRARY_SUFFIX}
${CMAKE_CURRENT_BINARY_DIR}
COMMENT "Copying the DLL built by Cargo into the test directory..."
VERBATIM
)
endif()
add_test(NAME test_${LIBRARY_NAME} COMMAND test_${LIBRARY_NAME})
add_custom_command(
TARGET test_${LIBRARY_NAME}
POST_BUILD
COMMAND
${CMAKE_CTEST_COMMAND} --config $<CONFIG> --output-on-failure
COMMENT
"Running the test(s)..."
VERBATIM
)
@@ -1,110 +0,0 @@
#include <setjmp.h>
#include <stdarg.h>
#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
/* third-party */
#include <cmocka.h>
/* local */
#include "group_state.h"
typedef struct {
GroupState* group_state;
char const* actor_id_str;
uint8_t* actor_id_bytes;
size_t actor_id_size;
} TestState;
static void hex_to_bytes(char const* hex_str, uint8_t* bytes, size_t const count) {
unsigned int byte;
char const* next = hex_str;
for (size_t index = 0; *next && index != count; next += 2, ++index) {
if (sscanf(next, "%02x", &byte) == 1) {
bytes[index] = (uint8_t)byte;
}
}
}
static int setup(void** state) {
TestState* test_state = calloc(1, sizeof(TestState));
group_setup((void**)&test_state->group_state);
test_state->actor_id_str = "000102030405060708090a0b0c0d0e0f";
test_state->actor_id_size = strlen(test_state->actor_id_str) / 2;
test_state->actor_id_bytes = malloc(test_state->actor_id_size);
hex_to_bytes(test_state->actor_id_str, test_state->actor_id_bytes, test_state->actor_id_size);
*state = test_state;
return 0;
}
static int teardown(void** state) {
TestState* test_state = *state;
group_teardown((void**)&test_state->group_state);
free(test_state->actor_id_bytes);
free(test_state);
return 0;
}
static void test_AMputActor(void **state) {
TestState* test_state = *state;
GroupState* group_state = test_state->group_state;
AMresult* res = AMsetActor(
group_state->doc,
test_state->actor_id_bytes,
test_state->actor_id_size
);
if (AMresultStatus(res) != AM_STATUS_OK) {
fail_msg("%s", AMerrorMessage(res));
}
assert_int_equal(AMresultSize(res), 0);
AMvalue value = AMresultValue(res, 0);
assert_int_equal(value.tag, AM_VALUE_NOTHING);
AMfreeResult(res);
res = AMgetActor(group_state->doc);
if (AMresultStatus(res) != AM_STATUS_OK) {
fail_msg("%s", AMerrorMessage(res));
}
assert_int_equal(AMresultSize(res), 1);
value = AMresultValue(res, 0);
assert_int_equal(value.tag, AM_VALUE_ACTOR_ID);
assert_int_equal(value.actor_id.count, test_state->actor_id_size);
assert_memory_equal(value.actor_id.src, test_state->actor_id_bytes, value.actor_id.count);
AMfreeResult(res);
}
static void test_AMputActorHex(void **state) {
TestState* test_state = *state;
GroupState* group_state = test_state->group_state;
AMresult* res = AMsetActorHex(
group_state->doc,
test_state->actor_id_str
);
if (AMresultStatus(res) != AM_STATUS_OK) {
fail_msg("%s", AMerrorMessage(res));
}
assert_int_equal(AMresultSize(res), 0);
AMvalue value = AMresultValue(res, 0);
assert_int_equal(value.tag, AM_VALUE_NOTHING);
AMfreeResult(res);
res = AMgetActorHex(group_state->doc);
if (AMresultStatus(res) != AM_STATUS_OK) {
fail_msg("%s", AMerrorMessage(res));
}
assert_int_equal(AMresultSize(res), 1);
value = AMresultValue(res, 0);
assert_int_equal(value.tag, AM_VALUE_STR);
assert_int_equal(strlen(value.str), test_state->actor_id_size * 2);
assert_string_equal(value.str, test_state->actor_id_str);
AMfreeResult(res);
}
int run_AMdoc_property_tests(void) {
const struct CMUnitTest tests[] = {
cmocka_unit_test_setup_teardown(test_AMputActor, setup, teardown),
cmocka_unit_test_setup_teardown(test_AMputActorHex, setup, teardown),
};
return cmocka_run_group_tests(tests, NULL, NULL);
}
@@ -1,235 +0,0 @@
#include <float.h>
#include <limits.h>
#include <setjmp.h>
#include <stdarg.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>
/* third-party */
#include <cmocka.h>
/* local */
#include "group_state.h"
#include "macro_utils.h"
#define test_AMlistPut(suffix, mode) test_AMlistPut ## suffix ## _ ## mode
#define static_void_test_AMlistPut(suffix, mode, member, scalar_value) \
static void test_AMlistPut ## suffix ## _ ## mode(void **state) { \
GroupState* group_state = *state; \
AMresult* res = AMlistPut ## suffix( \
group_state->doc, AM_ROOT, 0, !strcmp(#mode, "insert"), scalar_value \
); \
if (AMresultStatus(res) != AM_STATUS_OK) { \
fail_msg("%s", AMerrorMessage(res)); \
} \
assert_int_equal(AMresultSize(res), 0); \
AMvalue value = AMresultValue(res, 0); \
assert_int_equal(value.tag, AM_VALUE_NOTHING); \
AMfreeResult(res); \
res = AMlistGet(group_state->doc, AM_ROOT, 0); \
if (AMresultStatus(res) != AM_STATUS_OK) { \
fail_msg("%s", AMerrorMessage(res)); \
} \
assert_int_equal(AMresultSize(res), 1); \
value = AMresultValue(res, 0); \
assert_int_equal(value.tag, AMvalue_discriminant(#suffix)); \
assert_true(value.member == scalar_value); \
AMfreeResult(res); \
}
#define test_AMlistPutBytes(mode) test_AMlistPutBytes ## _ ## mode
#define static_void_test_AMlistPutBytes(mode, bytes_value) \
static void test_AMlistPutBytes_ ## mode(void **state) { \
static size_t const BYTES_SIZE = sizeof(bytes_value) / sizeof(uint8_t); \
\
GroupState* group_state = *state; \
AMresult* res = AMlistPutBytes( \
group_state->doc, \
AM_ROOT, \
0, \
!strcmp(#mode, "insert"), \
bytes_value, \
BYTES_SIZE \
); \
if (AMresultStatus(res) != AM_STATUS_OK) { \
fail_msg("%s", AMerrorMessage(res)); \
} \
assert_int_equal(AMresultSize(res), 0); \
AMvalue value = AMresultValue(res, 0); \
assert_int_equal(value.tag, AM_VALUE_NOTHING); \
AMfreeResult(res); \
res = AMlistGet(group_state->doc, AM_ROOT, 0); \
if (AMresultStatus(res) != AM_STATUS_OK) { \
fail_msg("%s", AMerrorMessage(res)); \
} \
assert_int_equal(AMresultSize(res), 1); \
value = AMresultValue(res, 0); \
assert_int_equal(value.tag, AM_VALUE_BYTES); \
assert_int_equal(value.bytes.count, BYTES_SIZE); \
assert_memory_equal(value.bytes.src, bytes_value, BYTES_SIZE); \
AMfreeResult(res); \
}
#define test_AMlistPutNull(mode) test_AMlistPutNull_ ## mode
#define static_void_test_AMlistPutNull(mode) \
static void test_AMlistPutNull_ ## mode(void **state) { \
GroupState* group_state = *state; \
AMresult* res = AMlistPutNull( \
group_state->doc, AM_ROOT, 0, !strcmp(#mode, "insert")); \
if (AMresultStatus(res) != AM_STATUS_OK) { \
fail_msg("%s", AMerrorMessage(res)); \
} \
assert_int_equal(AMresultSize(res), 0); \
AMvalue value = AMresultValue(res, 0); \
assert_int_equal(value.tag, AM_VALUE_NOTHING); \
AMfreeResult(res); \
res = AMlistGet(group_state->doc, AM_ROOT, 0); \
if (AMresultStatus(res) != AM_STATUS_OK) { \
fail_msg("%s", AMerrorMessage(res)); \
} \
assert_int_equal(AMresultSize(res), 1); \
value = AMresultValue(res, 0); \
assert_int_equal(value.tag, AM_VALUE_NULL); \
AMfreeResult(res); \
}
#define test_AMlistPutObject(label, mode) test_AMlistPutObject_ ## label ## _ ## mode
#define static_void_test_AMlistPutObject(label, mode) \
static void test_AMlistPutObject_ ## label ## _ ## mode(void **state) { \
GroupState* group_state = *state; \
AMresult* res = AMlistPutObject( \
group_state->doc, \
AM_ROOT, \
0, \
!strcmp(#mode, "insert"), \
AMobjType_tag(#label) \
); \
if (AMresultStatus(res) != AM_STATUS_OK) { \
fail_msg("%s", AMerrorMessage(res)); \
} \
assert_int_equal(AMresultSize(res), 1); \
AMvalue value = AMresultValue(res, 0); \
assert_int_equal(value.tag, AM_VALUE_OBJ_ID); \
/** \
* \note The `AMresult` struct can be deallocated immediately when its \
* value is a pointer to an opaque struct because its lifetime \
* is tied to the `AMdoc` struct instead. \
*/ \
AMfreeResult(res); \
assert_non_null(value.obj_id); \
assert_int_equal(AMobjSize(group_state->doc, value.obj_id), 0); \
AMfreeObjId(group_state->doc, value.obj_id); \
}
#define test_AMlistPutStr(mode) test_AMlistPutStr ## _ ## mode
#define static_void_test_AMlistPutStr(mode, str_value) \
static void test_AMlistPutStr_ ## mode(void **state) { \
static size_t const STR_LEN = strlen(str_value); \
\
GroupState* group_state = *state; \
AMresult* res = AMlistPutStr( \
group_state->doc, \
AM_ROOT, \
0, \
!strcmp(#mode, "insert"), \
str_value \
); \
if (AMresultStatus(res) != AM_STATUS_OK) { \
fail_msg("%s", AMerrorMessage(res)); \
} \
assert_int_equal(AMresultSize(res), 0); \
AMvalue value = AMresultValue(res, 0); \
assert_int_equal(value.tag, AM_VALUE_NOTHING); \
AMfreeResult(res); \
res = AMlistGet(group_state->doc, AM_ROOT, 0); \
if (AMresultStatus(res) != AM_STATUS_OK) { \
fail_msg("%s", AMerrorMessage(res)); \
} \
assert_int_equal(AMresultSize(res), 1); \
value = AMresultValue(res, 0); \
assert_int_equal(value.tag, AM_VALUE_STR); \
assert_int_equal(strlen(value.str), STR_LEN); \
assert_memory_equal(value.str, str_value, STR_LEN + 1); \
AMfreeResult(res); \
}
static uint8_t const BYTES_VALUE[] = {INT8_MIN, INT8_MAX / 2, INT8_MAX};
static_void_test_AMlistPutBytes(insert, BYTES_VALUE)
static_void_test_AMlistPutBytes(update, BYTES_VALUE)
static_void_test_AMlistPut(Counter, insert, counter, INT64_MAX)
static_void_test_AMlistPut(Counter, update, counter, INT64_MAX)
static_void_test_AMlistPut(F64, insert, f64, DBL_MAX)
static_void_test_AMlistPut(F64, update, f64, DBL_MAX)
static_void_test_AMlistPut(Int, insert, int_, INT64_MAX)
static_void_test_AMlistPut(Int, update, int_, INT64_MAX)
static_void_test_AMlistPutNull(insert)
static_void_test_AMlistPutNull(update)
static_void_test_AMlistPutObject(List, insert)
static_void_test_AMlistPutObject(List, update)
static_void_test_AMlistPutObject(Map, insert)
static_void_test_AMlistPutObject(Map, update)
static_void_test_AMlistPutObject(Text, insert)
static_void_test_AMlistPutObject(Text, update)
static_void_test_AMlistPutStr(insert, "Hello, world!")
static_void_test_AMlistPutStr(update, "Hello, world!")
static_void_test_AMlistPut(Timestamp, insert, timestamp, INT64_MAX)
static_void_test_AMlistPut(Timestamp, update, timestamp, INT64_MAX)
static_void_test_AMlistPut(Uint, insert, uint, UINT64_MAX)
static_void_test_AMlistPut(Uint, update, uint, UINT64_MAX)
int run_AMlistPut_tests(void) {
const struct CMUnitTest tests[] = {
cmocka_unit_test(test_AMlistPutBytes(insert)),
cmocka_unit_test(test_AMlistPutBytes(update)),
cmocka_unit_test(test_AMlistPut(Counter, insert)),
cmocka_unit_test(test_AMlistPut(Counter, update)),
cmocka_unit_test(test_AMlistPut(F64, insert)),
cmocka_unit_test(test_AMlistPut(F64, update)),
cmocka_unit_test(test_AMlistPut(Int, insert)),
cmocka_unit_test(test_AMlistPut(Int, update)),
cmocka_unit_test(test_AMlistPutNull(insert)),
cmocka_unit_test(test_AMlistPutNull(update)),
cmocka_unit_test(test_AMlistPutObject(List, insert)),
cmocka_unit_test(test_AMlistPutObject(List, update)),
cmocka_unit_test(test_AMlistPutObject(Map, insert)),
cmocka_unit_test(test_AMlistPutObject(Map, update)),
cmocka_unit_test(test_AMlistPutObject(Text, insert)),
cmocka_unit_test(test_AMlistPutObject(Text, update)),
cmocka_unit_test(test_AMlistPutStr(insert)),
cmocka_unit_test(test_AMlistPutStr(update)),
cmocka_unit_test(test_AMlistPut(Timestamp, insert)),
cmocka_unit_test(test_AMlistPut(Timestamp, update)),
cmocka_unit_test(test_AMlistPut(Uint, insert)),
cmocka_unit_test(test_AMlistPut(Uint, update)),
};
return cmocka_run_group_tests(tests, group_setup, group_teardown);
}
@@ -1,190 +0,0 @@
#include <float.h>
#include <limits.h>
#include <setjmp.h>
#include <stdarg.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>
/* third-party */
#include <cmocka.h>
/* local */
#include "group_state.h"
#include "macro_utils.h"
#define test_AMmapPut(suffix) test_AMmapPut ## suffix
#define static_void_test_AMmapPut(suffix, member, scalar_value) \
static void test_AMmapPut ## suffix(void **state) { \
GroupState* group_state = *state; \
AMresult* res = AMmapPut ## suffix( \
group_state->doc, \
AM_ROOT, \
#suffix, \
scalar_value \
); \
if (AMresultStatus(res) != AM_STATUS_OK) { \
fail_msg("%s", AMerrorMessage(res)); \
} \
assert_int_equal(AMresultSize(res), 0); \
AMvalue value = AMresultValue(res, 0); \
assert_int_equal(value.tag, AM_VALUE_NOTHING); \
AMfreeResult(res); \
res = AMmapGet(group_state->doc, AM_ROOT, #suffix); \
if (AMresultStatus(res) != AM_STATUS_OK) { \
fail_msg("%s", AMerrorMessage(res)); \
} \
assert_int_equal(AMresultSize(res), 1); \
value = AMresultValue(res, 0); \
assert_int_equal(value.tag, AMvalue_discriminant(#suffix)); \
assert_true(value.member == scalar_value); \
AMfreeResult(res); \
}
#define test_AMmapPutObject(label) test_AMmapPutObject_ ## label
#define static_void_test_AMmapPutObject(label) \
static void test_AMmapPutObject_ ## label(void **state) { \
GroupState* group_state = *state; \
AMresult* res = AMmapPutObject( \
group_state->doc, \
AM_ROOT, \
#label, \
AMobjType_tag(#label) \
); \
if (AMresultStatus(res) != AM_STATUS_OK) { \
fail_msg("%s", AMerrorMessage(res)); \
} \
assert_int_equal(AMresultSize(res), 1); \
AMvalue value = AMresultValue(res, 0); \
assert_int_equal(value.tag, AM_VALUE_OBJ_ID); \
/** \
* \note The `AMresult` struct can be deallocated immediately when its \
* value is a pointer to an opaque struct because its lifetime \
* is tied to the `AMdoc` struct instead. \
*/ \
AMfreeResult(res); \
assert_non_null(value.obj_id); \
assert_int_equal(AMobjSize(group_state->doc, value.obj_id), 0); \
AMfreeObjId(group_state->doc, value.obj_id); \
}
static void test_AMmapPutBytes(void **state) {
static char const* const KEY = "Bytes";
static uint8_t const BYTES_VALUE[] = {INT8_MIN, INT8_MAX / 2, INT8_MAX};
static size_t const BYTES_SIZE = sizeof(BYTES_VALUE) / sizeof(uint8_t);
GroupState* group_state = *state;
AMresult* res = AMmapPutBytes(
group_state->doc,
AM_ROOT,
KEY,
BYTES_VALUE,
BYTES_SIZE
);
if (AMresultStatus(res) != AM_STATUS_OK) {
fail_msg("%s", AMerrorMessage(res));
}
assert_int_equal(AMresultSize(res), 0);
AMvalue value = AMresultValue(res, 0);
assert_int_equal(value.tag, AM_VALUE_NOTHING);
AMfreeResult(res);
res = AMmapGet(group_state->doc, AM_ROOT, KEY);
if (AMresultStatus(res) != AM_STATUS_OK) {
fail_msg("%s", AMerrorMessage(res));
}
assert_int_equal(AMresultSize(res), 1);
value = AMresultValue(res, 0);
assert_int_equal(value.tag, AM_VALUE_BYTES);
assert_int_equal(value.bytes.count, BYTES_SIZE);
assert_memory_equal(value.bytes.src, BYTES_VALUE, BYTES_SIZE);
AMfreeResult(res);
}
static_void_test_AMmapPut(Counter, counter, INT64_MAX)
static_void_test_AMmapPut(F64, f64, DBL_MAX)
static_void_test_AMmapPut(Int, int_, INT64_MAX)
static void test_AMmapPutNull(void **state) {
static char const* const KEY = "Null";
GroupState* group_state = *state;
AMresult* res = AMmapPutNull(group_state->doc, AM_ROOT, KEY);
if (AMresultStatus(res) != AM_STATUS_OK) {
fail_msg("%s", AMerrorMessage(res));
}
assert_int_equal(AMresultSize(res), 0);
AMvalue value = AMresultValue(res, 0);
assert_int_equal(value.tag, AM_VALUE_NOTHING);
AMfreeResult(res);
res = AMmapGet(group_state->doc, AM_ROOT, KEY);
if (AMresultStatus(res) != AM_STATUS_OK) {
fail_msg("%s", AMerrorMessage(res));
}
assert_int_equal(AMresultSize(res), 1);
value = AMresultValue(res, 0);
assert_int_equal(value.tag, AM_VALUE_NULL);
AMfreeResult(res);
}
static_void_test_AMmapPutObject(List)
static_void_test_AMmapPutObject(Map)
static_void_test_AMmapPutObject(Text)
static void test_AMmapPutStr(void **state) {
static char const* const KEY = "Str";
static char const* const STR_VALUE = "Hello, world!";
size_t const STR_LEN = strlen(STR_VALUE);
GroupState* group_state = *state;
AMresult* res = AMmapPutStr(
group_state->doc,
AM_ROOT,
KEY,
STR_VALUE
);
if (AMresultStatus(res) != AM_STATUS_OK) {
fail_msg("%s", AMerrorMessage(res));
}
assert_int_equal(AMresultSize(res), 0);
AMvalue value = AMresultValue(res, 0);
assert_int_equal(value.tag, AM_VALUE_NOTHING);
AMfreeResult(res);
res = AMmapGet(group_state->doc, AM_ROOT, KEY);
if (AMresultStatus(res) != AM_STATUS_OK) {
fail_msg("%s", AMerrorMessage(res));
}
assert_int_equal(AMresultSize(res), 1);
value = AMresultValue(res, 0);
assert_int_equal(value.tag, AM_VALUE_STR);
assert_int_equal(strlen(value.str), STR_LEN);
assert_memory_equal(value.str, STR_VALUE, STR_LEN + 1);
AMfreeResult(res);
}
static_void_test_AMmapPut(Timestamp, timestamp, INT64_MAX)
static_void_test_AMmapPut(Uint, uint, UINT64_MAX)
int run_AMmapPut_tests(void) {
const struct CMUnitTest tests[] = {
cmocka_unit_test(test_AMmapPutBytes),
cmocka_unit_test(test_AMmapPut(Counter)),
cmocka_unit_test(test_AMmapPut(F64)),
cmocka_unit_test(test_AMmapPut(Int)),
cmocka_unit_test(test_AMmapPutNull),
cmocka_unit_test(test_AMmapPutObject(List)),
cmocka_unit_test(test_AMmapPutObject(Map)),
cmocka_unit_test(test_AMmapPutObject(Text)),
cmocka_unit_test(test_AMmapPutStr),
cmocka_unit_test(test_AMmapPut(Timestamp)),
cmocka_unit_test(test_AMmapPut(Uint)),
};
return cmocka_run_group_tests(tests, group_setup, group_teardown);
}
@@ -1,18 +0,0 @@
#include <stdlib.h>
/* local */
#include "group_state.h"
int group_setup(void** state) {
GroupState* group_state = calloc(1, sizeof(GroupState));
group_state->doc = AMallocDoc();
*state = group_state;
return 0;
}
int group_teardown(void** state) {
GroupState* group_state = *state;
AMfreeDoc(group_state->doc);
free(group_state);
return 0;
}
@@ -1,15 +0,0 @@
#ifndef GROUP_STATE_INCLUDED
#define GROUP_STATE_INCLUDED
/* local */
#include "automerge.h"
typedef struct {
AMdoc* doc;
} GroupState;
int group_setup(void** state);
int group_teardown(void** state);
#endif
@@ -1,23 +0,0 @@
#include <string.h>
/* local */
#include "macro_utils.h"
AMvalueVariant AMvalue_discriminant(char const* suffix) {
if (!strcmp(suffix, "Bytes")) return AM_VALUE_BYTES;
else if (!strcmp(suffix, "Counter")) return AM_VALUE_COUNTER;
else if (!strcmp(suffix, "F64")) return AM_VALUE_F64;
else if (!strcmp(suffix, "Int")) return AM_VALUE_INT;
else if (!strcmp(suffix, "Null")) return AM_VALUE_NULL;
else if (!strcmp(suffix, "Str")) return AM_VALUE_STR;
else if (!strcmp(suffix, "Timestamp")) return AM_VALUE_TIMESTAMP;
else if (!strcmp(suffix, "Uint")) return AM_VALUE_UINT;
else return AM_VALUE_NOTHING;
}
AMobjType AMobjType_tag(char const* obj_type_label) {
if (!strcmp(obj_type_label, "List")) return AM_OBJ_TYPE_LIST;
else if (!strcmp(obj_type_label, "Map")) return AM_OBJ_TYPE_MAP;
else if (!strcmp(obj_type_label, "Text")) return AM_OBJ_TYPE_TEXT;
else return 0;
}
@@ -1,23 +0,0 @@
#ifndef MACRO_UTILS_INCLUDED
#define MACRO_UTILS_INCLUDED
/* local */
#include "automerge.h"
/**
* \brief Gets the `AMvalue` discriminant corresponding to a function name suffix.
*
* \param[in] suffix A string.
* \return An `AMvalue` variant discriminant enum tag.
*/
AMvalueVariant AMvalue_discriminant(char const* suffix);
/**
 * \brief Gets the `AMobjType` tag corresponding to an object type label.
*
* \param[in] obj_type_label A string.
* \return An `AMobjType` enum tag.
*/
AMobjType AMobjType_tag(char const* obj_type_label);
#endif
@@ -1,21 +0,0 @@
#include <stdarg.h>
#include <stddef.h>
#include <setjmp.h>
#include <stdint.h>
/* third-party */
#include <cmocka.h>
extern int run_AMdoc_property_tests(void);
extern int run_AMlistPut_tests(void);
extern int run_AMmapPut_tests(void);
int main(void) {
return (
run_AMdoc_property_tests() +
run_AMlistPut_tests() +
run_AMmapPut_tests()
);
}
automerge-cli/Cargo.lock (generated)
@@ -1,857 +0,0 @@
# This file is automatically @generated by Cargo.
# It is not intended for manual editing.
version = 3
[[package]]
name = "adler"
version = "1.0.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f26201604c87b1e01bd3d98f8d5d9a8fcbb815e8cedb41ffccbeb4bf593a35fe"
[[package]]
name = "ansi_term"
version = "0.12.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d52a9bb7ec0cf484c551830a7ce27bd20d67eac647e1befb56b0be4ee39a55d2"
dependencies = [
"winapi",
]
[[package]]
name = "anyhow"
version = "1.0.55"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "159bb86af3a200e19a068f4224eae4c8bb2d0fa054c7e5d1cacd5cef95e684cd"
[[package]]
name = "atty"
version = "0.2.14"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d9b39be18770d11421cdb1b9947a45dd3f37e93092cbf377614828a319d5fee8"
dependencies = [
"hermit-abi",
"libc",
"winapi",
]
[[package]]
name = "autocfg"
version = "1.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d468802bab17cbc0cc575e9b053f41e72aa36bfa6b7f55e3529ffa43161b97fa"
[[package]]
name = "automerge"
version = "0.1.0"
dependencies = [
"flate2",
"fxhash",
"hex",
"itertools",
"js-sys",
"leb128",
"nonzero_ext",
"rand",
"serde",
"sha2",
"smol_str",
"thiserror",
"tinyvec",
"tracing",
"unicode-segmentation",
"uuid",
"wasm-bindgen",
"web-sys",
]
[[package]]
name = "automerge-cli"
version = "0.1.0"
dependencies = [
"anyhow",
"atty",
"automerge",
"clap",
"colored_json",
"combine",
"duct",
"maplit",
"serde_json",
"thiserror",
"tracing-subscriber",
]
[[package]]
name = "bitflags"
version = "1.3.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bef38d45163c2f1dde094a7dfd33ccf595c92905c8f8f4fdc18d06fb1037718a"
[[package]]
name = "block-buffer"
version = "0.10.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0bf7fe51849ea569fd452f37822f606a5cabb684dc918707a0193fd4664ff324"
dependencies = [
"generic-array",
]
[[package]]
name = "bumpalo"
version = "3.9.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a4a45a46ab1f2412e53d3a0ade76ffad2025804294569aae387231a0cd6e0899"
[[package]]
name = "byteorder"
version = "1.4.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "14c189c53d098945499cdfa7ecc63567cf3886b3332b312a5b4585d8d3a6a610"
[[package]]
name = "bytes"
version = "1.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c4872d67bab6358e59559027aa3b9157c53d9358c51423c17554809a8858e0f8"
[[package]]
name = "cfg-if"
version = "1.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "baf1de4339761588bc0619e3cbc0120ee582ebb74b53b4efbf79117bd2da40fd"
[[package]]
name = "clap"
version = "3.1.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ced1892c55c910c1219e98d6fc8d71f6bddba7905866ce740066d8bfea859312"
dependencies = [
"atty",
"bitflags",
"clap_derive",
"indexmap",
"lazy_static",
"os_str_bytes",
"strsim",
"termcolor",
"textwrap",
]
[[package]]
name = "clap_derive"
version = "3.1.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "da95d038ede1a964ce99f49cbe27a7fb538d1da595e4b4f70b8c8f338d17bf16"
dependencies = [
"heck",
"proc-macro-error",
"proc-macro2",
"quote",
"syn",
]
[[package]]
name = "colored_json"
version = "2.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1fd32eb54d016e203b7c2600e3a7802c75843a92e38ccc4869aefeca21771a64"
dependencies = [
"ansi_term",
"atty",
"libc",
"serde",
"serde_json",
]
[[package]]
name = "combine"
version = "4.6.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "50b727aacc797f9fc28e355d21f34709ac4fc9adecfe470ad07b8f4464f53062"
dependencies = [
"bytes",
"memchr",
]
[[package]]
name = "cpufeatures"
version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "95059428f66df56b63431fdb4e1947ed2190586af5c5a8a8b71122bdf5a7f469"
dependencies = [
"libc",
]
[[package]]
name = "crc32fast"
version = "1.3.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b540bd8bc810d3885c6ea91e2018302f68baba2129ab3e88f32389ee9370880d"
dependencies = [
"cfg-if",
]
[[package]]
name = "crypto-common"
version = "0.1.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "57952ca27b5e3606ff4dd79b0020231aaf9d6aa76dc05fd30137538c50bd3ce8"
dependencies = [
"generic-array",
"typenum",
]
[[package]]
name = "digest"
version = "0.10.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f2fb860ca6fafa5552fb6d0e816a69c8e49f0908bf524e30a90d97c85892d506"
dependencies = [
"block-buffer",
"crypto-common",
]
[[package]]
name = "duct"
version = "0.13.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0fc6a0a59ed0888e0041cf708e66357b7ae1a82f1c67247e1f93b5e0818f7d8d"
dependencies = [
"libc",
"once_cell",
"os_pipe",
"shared_child",
]
[[package]]
name = "either"
version = "1.6.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e78d4f1cc4ae33bbfc157ed5d5a5ef3bc29227303d595861deb238fcec4e9457"
[[package]]
name = "flate2"
version = "1.0.22"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1e6988e897c1c9c485f43b47a529cef42fde0547f9d8d41a7062518f1d8fc53f"
dependencies = [
"cfg-if",
"crc32fast",
"libc",
"miniz_oxide",
]
[[package]]
name = "fxhash"
version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c31b6d751ae2c7f11320402d34e41349dd1016f8d5d45e48c4312bc8625af50c"
dependencies = [
"byteorder",
]
[[package]]
name = "generic-array"
version = "0.14.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fd48d33ec7f05fbfa152300fdad764757cbded343c1aa1cff2fbaf4134851803"
dependencies = [
"typenum",
"version_check",
]
[[package]]
name = "getrandom"
version = "0.2.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d39cd93900197114fa1fcb7ae84ca742095eed9442088988ae74fa744e930e77"
dependencies = [
"cfg-if",
"js-sys",
"libc",
"wasi",
"wasm-bindgen",
]
[[package]]
name = "hashbrown"
version = "0.11.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ab5ef0d4909ef3724cc8cce6ccc8572c5c817592e9285f5464f8e86f8bd3726e"
[[package]]
name = "heck"
version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2540771e65fc8cb83cd6e8a237f70c319bd5c29f78ed1084ba5d50eeac86f7f9"
[[package]]
name = "hermit-abi"
version = "0.1.19"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "62b467343b94ba476dcb2500d242dadbb39557df889310ac77c5d99100aaac33"
dependencies = [
"libc",
]
[[package]]
name = "hex"
version = "0.4.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7f24254aa9a54b5c858eaee2f5bccdb46aaf0e486a595ed5fd8f86ba55232a70"
[[package]]
name = "indexmap"
version = "1.8.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "282a6247722caba404c065016bbfa522806e51714c34f5dfc3e4a3a46fcb4223"
dependencies = [
"autocfg",
"hashbrown",
]
[[package]]
name = "itertools"
version = "0.10.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a9a9d19fa1e79b6215ff29b9d6880b706147f16e9b1dbb1e4e5947b5b02bc5e3"
dependencies = [
"either",
]
[[package]]
name = "itoa"
version = "1.0.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1aab8fc367588b89dcee83ab0fd66b72b50b72fa1904d7095045ace2b0c81c35"
[[package]]
name = "js-sys"
version = "0.3.56"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a38fc24e30fd564ce974c02bf1d337caddff65be6cc4735a1f7eab22a7440f04"
dependencies = [
"wasm-bindgen",
]
[[package]]
name = "lazy_static"
version = "1.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e2abad23fbc42b3700f2f279844dc832adb2b2eb069b2df918f455c4e18cc646"
[[package]]
name = "leb128"
version = "0.2.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "884e2677b40cc8c339eaefcb701c32ef1fd2493d71118dc0ca4b6a736c93bd67"
[[package]]
name = "libc"
version = "0.2.119"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1bf2e165bb3457c8e098ea76f3e3bc9db55f87aa90d52d0e6be741470916aaa4"
[[package]]
name = "log"
version = "0.4.14"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "51b9bbe6c47d51fc3e1a9b945965946b4c44142ab8792c50835a980d362c2710"
dependencies = [
"cfg-if",
]
[[package]]
name = "maplit"
version = "1.0.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3e2e65a1a2e43cfcb47a895c4c8b10d1f4a61097f9f254f183aee60cad9c651d"
[[package]]
name = "memchr"
version = "2.4.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "308cc39be01b73d0d18f82a0e7b2a3df85245f84af96fdddc5d202d27e47b86a"
[[package]]
name = "miniz_oxide"
version = "0.4.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a92518e98c078586bc6c934028adcca4c92a53d6a958196de835170a01d84e4b"
dependencies = [
"adler",
"autocfg",
]
[[package]]
name = "nonzero_ext"
version = "0.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "44a1290799eababa63ea60af0cbc3f03363e328e58f32fb0294798ed3e85f444"
[[package]]
name = "once_cell"
version = "1.9.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "da32515d9f6e6e489d7bc9d84c71b060db7247dc035bbe44eac88cf87486d8d5"
[[package]]
name = "os_pipe"
version = "0.9.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fb233f06c2307e1f5ce2ecad9f8121cffbbee2c95428f44ea85222e460d0d213"
dependencies = [
"libc",
"winapi",
]
[[package]]
name = "os_str_bytes"
version = "6.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8e22443d1643a904602595ba1cd8f7d896afe56d26712531c5ff73a15b2fbf64"
dependencies = [
"memchr",
]
[[package]]
name = "pin-project-lite"
version = "0.2.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e280fbe77cc62c91527259e9442153f4688736748d24660126286329742b4c6c"
[[package]]
name = "ppv-lite86"
version = "0.2.16"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "eb9f9e6e233e5c4a35559a617bf40a4ec447db2e84c20b55a6f83167b7e57872"
[[package]]
name = "proc-macro-error"
version = "1.0.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "da25490ff9892aab3fcf7c36f08cfb902dd3e71ca0f9f9517bea02a73a5ce38c"
dependencies = [
"proc-macro-error-attr",
"proc-macro2",
"quote",
"syn",
"version_check",
]
[[package]]
name = "proc-macro-error-attr"
version = "1.0.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a1be40180e52ecc98ad80b184934baf3d0d29f979574e439af5a55274b35f869"
dependencies = [
"proc-macro2",
"quote",
"version_check",
]
[[package]]
name = "proc-macro2"
version = "1.0.36"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c7342d5883fbccae1cc37a2353b09c87c9b0f3afd73f5fb9bba687a1f733b029"
dependencies = [
"unicode-xid",
]
[[package]]
name = "quote"
version = "1.0.15"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "864d3e96a899863136fc6e99f3d7cae289dafe43bf2c5ac19b70df7210c0a145"
dependencies = [
"proc-macro2",
]
[[package]]
name = "rand"
version = "0.8.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "34af8d1a0e25924bc5b7c43c079c942339d8f0a8b57c39049bef581b46327404"
dependencies = [
"libc",
"rand_chacha",
"rand_core",
]
[[package]]
name = "rand_chacha"
version = "0.3.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e6c10a63a0fa32252be49d21e7709d4d4baf8d231c2dbce1eaa8141b9b127d88"
dependencies = [
"ppv-lite86",
"rand_core",
]
[[package]]
name = "rand_core"
version = "0.6.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d34f1408f55294453790c48b2f1ebbb1c5b4b7563eb1f418bcfcfdbb06ebb4e7"
dependencies = [
"getrandom",
]
[[package]]
name = "ryu"
version = "1.0.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "73b4b750c782965c211b42f022f59af1fbceabdd026623714f104152f1ec149f"
[[package]]
name = "serde"
version = "1.0.136"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ce31e24b01e1e524df96f1c2fdd054405f8d7376249a5110886fb4b658484789"
dependencies = [
"serde_derive",
]
[[package]]
name = "serde_derive"
version = "1.0.136"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "08597e7152fcd306f41838ed3e37be9eaeed2b61c42e2117266a554fab4662f9"
dependencies = [
"proc-macro2",
"quote",
"syn",
]
[[package]]
name = "serde_json"
version = "1.0.79"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8e8d9fa5c3b304765ce1fd9c4c8a3de2c8db365a5b91be52f186efc675681d95"
dependencies = [
"itoa",
"ryu",
"serde",
]
[[package]]
name = "sha2"
version = "0.10.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "55deaec60f81eefe3cce0dc50bda92d6d8e88f2a27df7c5033b42afeb1ed2676"
dependencies = [
"cfg-if",
"cpufeatures",
"digest",
]
[[package]]
name = "sharded-slab"
version = "0.1.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "900fba806f70c630b0a382d0d825e17a0f19fcd059a2ade1ff237bcddf446b31"
dependencies = [
"lazy_static",
]
[[package]]
name = "shared_child"
version = "0.3.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6be9f7d5565b1483af3e72975e2dee33879b3b86bd48c0929fccf6585d79e65a"
dependencies = [
"libc",
"winapi",
]
[[package]]
name = "smallvec"
version = "1.8.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f2dd574626839106c320a323308629dcb1acfc96e32a8cba364ddc61ac23ee83"
[[package]]
name = "smol_str"
version = "0.1.21"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "61d15c83e300cce35b7c8cd39ff567c1ef42dde6d4a1a38dbdbf9a59902261bd"
dependencies = [
"serde",
]
[[package]]
name = "strsim"
version = "0.10.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "73473c0e59e6d5812c5dfe2a064a6444949f089e20eec9a2e5506596494e4623"
[[package]]
name = "syn"
version = "1.0.86"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8a65b3f4ffa0092e9887669db0eae07941f023991ab58ea44da8fe8e2d511c6b"
dependencies = [
"proc-macro2",
"quote",
"unicode-xid",
]
[[package]]
name = "termcolor"
version = "1.1.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bab24d30b911b2376f3a13cc2cd443142f0c81dda04c118693e35b3835757755"
dependencies = [
"winapi-util",
]
[[package]]
name = "textwrap"
version = "0.15.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b1141d4d61095b28419e22cb0bbf02755f5e54e0526f97f1e3d1d160e60885fb"
[[package]]
name = "thiserror"
version = "1.0.30"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "854babe52e4df1653706b98fcfc05843010039b406875930a70e4d9644e5c417"
dependencies = [
"thiserror-impl",
]
[[package]]
name = "thiserror-impl"
version = "1.0.30"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "aa32fd3f627f367fe16f893e2597ae3c05020f8bba2666a4e6ea73d377e5714b"
dependencies = [
"proc-macro2",
"quote",
"syn",
]
[[package]]
name = "thread_local"
version = "1.1.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5516c27b78311c50bf42c071425c560ac799b11c30b31f87e3081965fe5e0180"
dependencies = [
"once_cell",
]
[[package]]
name = "tinyvec"
version = "1.5.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2c1c1d5a42b6245520c249549ec267180beaffcc0615401ac8e31853d4b6d8d2"
dependencies = [
"tinyvec_macros",
]
[[package]]
name = "tinyvec_macros"
version = "0.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cda74da7e1a664f795bb1f8a87ec406fb89a02522cf6e50620d016add6dbbf5c"
[[package]]
name = "tracing"
version = "0.1.31"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f6c650a8ef0cd2dd93736f033d21cbd1224c5a967aa0c258d00fcf7dafef9b9f"
dependencies = [
"cfg-if",
"log",
"pin-project-lite",
"tracing-attributes",
"tracing-core",
]
[[package]]
name = "tracing-attributes"
version = "0.1.19"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8276d9a4a3a558d7b7ad5303ad50b53d58264641b82914b7ada36bd762e7a716"
dependencies = [
"proc-macro2",
"quote",
"syn",
]
[[package]]
name = "tracing-core"
version = "0.1.22"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "03cfcb51380632a72d3111cb8d3447a8d908e577d31beeac006f836383d29a23"
dependencies = [
"lazy_static",
"valuable",
]
[[package]]
name = "tracing-log"
version = "0.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a6923477a48e41c1951f1999ef8bb5a3023eb723ceadafe78ffb65dc366761e3"
dependencies = [
"lazy_static",
"log",
"tracing-core",
]
[[package]]
name = "tracing-subscriber"
version = "0.3.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9e0ab7bdc962035a87fba73f3acca9b8a8d0034c2e6f60b84aeaaddddc155dce"
dependencies = [
"ansi_term",
"sharded-slab",
"smallvec",
"thread_local",
"tracing-core",
"tracing-log",
]
[[package]]
name = "typenum"
version = "1.15.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "dcf81ac59edc17cc8697ff311e8f5ef2d99fcbd9817b34cec66f90b6c3dfd987"
[[package]]
name = "unicode-segmentation"
version = "1.9.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7e8820f5d777f6224dc4be3632222971ac30164d4a258d595640799554ebfd99"
[[package]]
name = "unicode-xid"
version = "0.2.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8ccb82d61f80a663efe1f787a51b16b5a51e3314d6ac365b08639f52387b33f3"
[[package]]
name = "uuid"
version = "0.8.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bc5cf98d8186244414c848017f0e2676b3fcb46807f6668a97dfe67359a3c4b7"
dependencies = [
"getrandom",
"serde",
]
[[package]]
name = "valuable"
version = "0.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "830b7e5d4d90034032940e4ace0d9a9a057e7a45cd94e6c007832e39edb82f6d"
[[package]]
name = "version_check"
version = "0.9.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "49874b5167b65d7193b8aba1567f5c7d93d001cafc34600cee003eda787e483f"
[[package]]
name = "wasi"
version = "0.10.2+wasi-snapshot-preview1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fd6fbd9a79829dd1ad0cc20627bf1ed606756a7f77edff7b66b7064f9cb327c6"
[[package]]
name = "wasm-bindgen"
version = "0.2.79"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "25f1af7423d8588a3d840681122e72e6a24ddbcb3f0ec385cac0d12d24256c06"
dependencies = [
"cfg-if",
"wasm-bindgen-macro",
]
[[package]]
name = "wasm-bindgen-backend"
version = "0.2.79"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8b21c0df030f5a177f3cba22e9bc4322695ec43e7257d865302900290bcdedca"
dependencies = [
"bumpalo",
"lazy_static",
"log",
"proc-macro2",
"quote",
"syn",
"wasm-bindgen-shared",
]
[[package]]
name = "wasm-bindgen-macro"
version = "0.2.79"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2f4203d69e40a52ee523b2529a773d5ffc1dc0071801c87b3d270b471b80ed01"
dependencies = [
"quote",
"wasm-bindgen-macro-support",
]
[[package]]
name = "wasm-bindgen-macro-support"
version = "0.2.79"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bfa8a30d46208db204854cadbb5d4baf5fcf8071ba5bf48190c3e59937962ebc"
dependencies = [
"proc-macro2",
"quote",
"syn",
"wasm-bindgen-backend",
"wasm-bindgen-shared",
]
[[package]]
name = "wasm-bindgen-shared"
version = "0.2.79"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3d958d035c4438e28c70e4321a2911302f10135ce78a9c7834c0cab4123d06a2"
[[package]]
name = "web-sys"
version = "0.3.56"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c060b319f29dd25724f09a2ba1418f142f539b2be99fbf4d2d5a8f7330afb8eb"
dependencies = [
"js-sys",
"wasm-bindgen",
]
[[package]]
name = "winapi"
version = "0.3.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5c839a674fcd7a98952e593242ea400abe93992746761e38641405d28b00f419"
dependencies = [
"winapi-i686-pc-windows-gnu",
"winapi-x86_64-pc-windows-gnu",
]
[[package]]
name = "winapi-i686-pc-windows-gnu"
version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ac3b87c63620426dd9b991e5ce0329eff545bccbbb34f3be09ff6fb6ab51b7b6"
[[package]]
name = "winapi-util"
version = "0.1.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "70ec6ce85bb158151cae5e5c87f95a8e97d2c0c4b001223f33a334e3ce5de178"
dependencies = [
"winapi",
]
[[package]]
name = "winapi-x86_64-pc-windows-gnu"
version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "712e227841d057c1ee1cd2fb22fa7e5a5461ae8e48fa2ca79ec42cfc1931183f"


@@ -1,2 +0,0 @@
/node_modules
/yarn.lock


@@ -1,18 +0,0 @@
{
"name": "automerge-js",
"version": "0.1.0",
"main": "src/index.js",
"license": "MIT",
"scripts": {
"test": "mocha --bail --full-trace"
},
"devDependencies": {
"mocha": "^9.1.1"
},
"dependencies": {
"automerge-wasm": "file:../automerge-wasm",
"fast-sha256": "^1.3.0",
"pako": "^2.0.4",
"uuid": "^8.3"
}
}


@@ -1,18 +0,0 @@
// Properties of the document root object
//const OPTIONS = Symbol('_options') // object containing options passed to init()
//const CACHE = Symbol('_cache') // map from objectId to immutable object
const STATE = Symbol('_state') // object containing metadata about current state (e.g. sequence numbers)
const HEADS = Symbol('_heads') // the heads of the document at the time it was last materialized
const OBJECT_ID = Symbol('_objectId') // the object ID of the current object (string)
const READ_ONLY = Symbol('_readOnly') // whether the document may be mutated (boolean)
const FROZEN = Symbol('_frozen') // whether the document is an outdated snapshot (boolean)
// Properties of all Automerge objects
//const OBJECT_ID = Symbol('_objectId') // the object ID of the current object (string)
//const CONFLICTS = Symbol('_conflicts') // map or list (depending on object type) of conflicts
//const CHANGE = Symbol('_change') // the context object on proxy objects used in change callback
//const ELEM_IDS = Symbol('_elemIds') // list containing the element ID of each list element
module.exports = {
STATE, HEADS, OBJECT_ID, READ_ONLY, FROZEN
}

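These symbols are how the rest of the package attaches metadata to otherwise plain document objects without colliding with user keys. A minimal sketch of reading them back, assuming the index.js API defined below:

const { STATE, OBJECT_ID, READ_ONLY } = require('./constants')
const Automerge = require('./index') // hypothetical entry point; adjust to how the package is consumed
let doc = Automerge.init()
doc[OBJECT_ID] // '_root' -- the root map's object ID
doc[READ_ONLY] // true outside of a change callback
doc[STATE]     // the underlying automerge-wasm document handle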

@@ -1,372 +0,0 @@
const AutomergeWASM = require("automerge-wasm")
const uuid = require('./uuid')
let { rootProxy, listProxy, textProxy, mapProxy } = require("./proxies")
let { Counter } = require("./counter")
let { Text } = require("./text")
let { Int, Uint, Float64 } = require("./numbers")
let { STATE, HEADS, OBJECT_ID, READ_ONLY, FROZEN } = require("./constants")
let { isObject } = require("./common") // used by equals() below; assumes the same src/common module that text.js requires
function init(actor) {
if (typeof actor != 'string') {
actor = null
}
const state = AutomergeWASM.create(actor)
return rootProxy(state, true);
}
function clone(doc) {
const state = doc[STATE].clone()
return rootProxy(state, true);
}
function free(doc) {
return doc[STATE].free()
}
function from(data, actor) {
let doc1 = init(actor)
let doc2 = change(doc1, (d) => Object.assign(d, data))
return doc2
}
function change(doc, options, callback) {
if (callback === undefined) {
// FIXME implement options
callback = options
options = {}
}
if (typeof options === "string") {
options = { message: options }
}
if (doc === undefined || doc[STATE] === undefined || doc[OBJECT_ID] !== "_root") {
throw new RangeError("must be the document root");
}
if (doc[FROZEN] === true) {
throw new RangeError("Attempting to use an outdated Automerge document")
}
if (!!doc[HEADS] === true) {
throw new RangeError("Attempting to change an out of date document");
}
if (doc[READ_ONLY] === false) {
throw new RangeError("Calls to Automerge.change cannot be nested")
}
const state = doc[STATE]
const heads = state.getHeads()
try {
doc[HEADS] = heads
doc[FROZEN] = true
let root = rootProxy(state);
callback(root)
if (state.pendingOps() === 0) {
doc[FROZEN] = false
doc[HEADS] = undefined
return doc
} else {
state.commit(options.message, options.time)
return rootProxy(state, true);
}
} catch (e) {
//console.log("ERROR: ",e)
doc[FROZEN] = false
doc[HEADS] = undefined
state.rollback()
throw e
}
}
function emptyChange(doc, options) {
if (options === undefined) {
options = {}
}
if (typeof options === "string") {
options = { message: options }
}
if (doc === undefined || doc[STATE] === undefined || doc[OBJECT_ID] !== "_root") {
throw new RangeError("must be the document root");
}
if (doc[FROZEN] === true) {
throw new RangeError("Attempting to use an outdated Automerge document")
}
if (doc[READ_ONLY] === false) {
throw new RangeError("Calls to Automerge.change cannot be nested")
}
const state = doc[STATE]
state.commit(options.message, options.time)
return rootProxy(state, true);
}
function load(data, actor) {
const state = AutomergeWASM.load(data, actor)
return rootProxy(state, true);
}
function save(doc) {
const state = doc[STATE]
return state.save()
}
function merge(local, remote) {
if (!!local[HEADS] === true) {
throw new RangeError("Attempting to change an out of date document");
}
const localState = local[STATE]
const heads = localState.getHeads()
const remoteState = remote[STATE]
const changes = localState.getChangesAdded(remoteState)
localState.applyChanges(changes)
local[HEADS] = heads
return rootProxy(localState, true)
}
function getActorId(doc) {
const state = doc[STATE]
return state.getActorId()
}
function conflictAt(context, objectId, prop) {
let values = context.getAll(objectId, prop)
if (values.length <= 1) {
return
}
let result = {}
for (const conflict of values) {
const datatype = conflict[0]
const value = conflict[1]
switch (datatype) {
case "map":
result[value] = mapProxy(context, value, [ prop ], true)
break;
case "list":
result[value] = listProxy(context, value, [ prop ], true)
break;
case "text":
result[value] = textProxy(context, value, [ prop ], true)
break;
//case "table":
//case "cursor":
case "str":
case "uint":
case "int":
case "f64":
case "boolean":
case "bytes":
case "null":
result[conflict[2]] = value
break;
case "counter":
result[conflict[2]] = new Counter(value)
break;
case "timestamp":
result[conflict[2]] = new Date(value)
break;
default:
throw RangeError(`datatype ${datatype} unimplemented`)
}
}
return result
}
function getConflicts(doc, prop) {
const state = doc[STATE]
const objectId = doc[OBJECT_ID]
return conflictAt(state, objectId, prop)
}
function getLastLocalChange(doc) {
const state = doc[STATE]
try {
return state.getLastLocalChange()
} catch (e) {
return
}
}
function getObjectId(doc) {
return doc[OBJECT_ID]
}
function getChanges(oldState, newState) {
const o = oldState[STATE]
const n = newState[STATE]
const heads = oldState[HEADS]
return n.getChanges(heads || o.getHeads())
}
function getAllChanges(doc) {
const state = doc[STATE]
return state.getChanges([])
}
function applyChanges(doc, changes) {
if (doc === undefined || doc[STATE] === undefined || doc[OBJECT_ID] !== "_root") {
throw new RangeError("must be the document root");
}
if (doc[FROZEN] === true) {
throw new RangeError("Attempting to use an outdated Automerge document")
}
if (doc[READ_ONLY] === false) {
throw new RangeError("Calls to Automerge.change cannot be nested")
}
const state = doc[STATE]
const heads = state.getHeads()
state.applyChanges(changes)
doc[HEADS] = heads
return [rootProxy(state, true)];
}
function getHistory(doc) {
const actor = getActorId(doc)
const history = getAllChanges(doc)
return history.map((change, index) => ({
get change () {
return decodeChange(change)
},
get snapshot () {
const [state] = applyChanges(init(), history.slice(0, index + 1))
return state
}
})
)
}
function equals(val1, val2) {
if (!isObject(val1) || !isObject(val2)) return val1 === val2
const keys1 = Object.keys(val1).sort(), keys2 = Object.keys(val2).sort()
if (keys1.length !== keys2.length) return false
for (let i = 0; i < keys1.length; i++) {
if (keys1[i] !== keys2[i]) return false
if (!equals(val1[keys1[i]], val2[keys2[i]])) return false
}
return true
}
function encodeSyncMessage(msg) {
return AutomergeWASM.encodeSyncMessage(msg)
}
function decodeSyncMessage(msg) {
return AutomergeWASM.decodeSyncMessage(msg)
}
function encodeSyncState(state) {
return AutomergeWASM.encodeSyncState(AutomergeWASM.importSyncState(state))
}
function decodeSyncState(state) {
return AutomergeWASM.exportSyncState(AutomergeWASM.decodeSyncState(state))
}
function generateSyncMessage(doc, inState) {
const state = doc[STATE]
const syncState = AutomergeWASM.importSyncState(inState)
const message = state.generateSyncMessage(syncState)
const outState = AutomergeWASM.exportSyncState(syncState)
return [ outState, message ]
}
function receiveSyncMessage(doc, inState, message) {
const syncState = AutomergeWASM.importSyncState(inState)
if (doc === undefined || doc[STATE] === undefined || doc[OBJECT_ID] !== "_root") {
throw new RangeError("must be the document root");
}
if (doc[FROZEN] === true) {
throw new RangeError("Attempting to use an outdated Automerge document")
}
if (!!doc[HEADS] === true) {
throw new RangeError("Attempting to change an out of date document");
}
if (doc[READ_ONLY] === false) {
throw new RangeError("Calls to Automerge.change cannot be nested")
}
const state = doc[STATE]
const heads = state.getHeads()
state.receiveSyncMessage(syncState, message)
const outState = AutomergeWASM.exportSyncState(syncState)
doc[HEADS] = heads
return [rootProxy(state, true), outState, null];
}
function initSyncState() {
return AutomergeWASM.exportSyncState(AutomergeWASM.initSyncState())
}
function encodeChange(change) {
return AutomergeWASM.encodeChange(change)
}
function decodeChange(data) {
return AutomergeWASM.decodeChange(data)
}
function getMissingDeps(doc, heads) {
const state = doc[STATE]
return state.getMissingDeps(heads)
}
function getHeads(doc) {
const state = doc[STATE]
return doc[HEADS] || state.getHeads()
}
function dump(doc) {
const state = doc[STATE]
state.dump()
}
function toJS(doc) {
if (typeof doc === "object") {
if (doc instanceof Uint8Array) {
return doc
}
if (doc === null) {
return doc
}
if (doc instanceof Array) {
return doc.map((a) => toJS(a))
}
if (doc instanceof Text) {
return doc.map((a) => toJS(a))
}
let tmp = {}
for (const index in doc) {
tmp[index] = toJS(doc[index])
}
return tmp
} else {
return doc
}
}
module.exports = {
init, from, change, emptyChange, clone, free,
load, save, merge, getChanges, getAllChanges, applyChanges,
getLastLocalChange, getObjectId, getActorId, getConflicts,
encodeChange, decodeChange, equals, getHistory, getHeads, uuid,
generateSyncMessage, receiveSyncMessage, initSyncState,
decodeSyncMessage, encodeSyncMessage, decodeSyncState, encodeSyncState,
getMissingDeps,
dump, Text, Counter, Int, Uint, Float64, toJS,
}
// deprecated
// Frontend, setDefaultBackend, Backend
// more...
/*
for (let name of ['getObjectId', 'getObjectById',
'setActorId',
'Text', 'Table', 'Counter', 'Observable' ]) {
module.exports[name] = Frontend[name]
}
*/

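Taken together, the exports above implement the familiar immutable-document API on top of automerge-wasm. A minimal round-trip sketch, assuming the package is consumed under its package.json name automerge-js:

const Automerge = require('automerge-js')
let doc = Automerge.init()
doc = Automerge.change(doc, 'greeting', (d) => {
d.hello = 'world'
})
const bytes = Automerge.save(doc)  // Uint8Array containing the whole document
const copy = Automerge.load(bytes) // rebuilds an equivalent document
// copy.hello === 'world', and Automerge.getAllChanges(copy) holds one change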

@@ -1,33 +0,0 @@
// Convenience classes to allow users to strictly specify the number type they want
class Int {
constructor(value) {
if (!(Number.isInteger(value) && value <= Number.MAX_SAFE_INTEGER && value >= Number.MIN_SAFE_INTEGER)) {
throw new RangeError(`Value ${value} cannot be an integer`)
}
this.value = value
Object.freeze(this)
}
}
class Uint {
constructor(value) {
if (!(Number.isInteger(value) && value <= Number.MAX_SAFE_INTEGER && value >= 0)) {
throw new RangeError(`Value ${value} cannot be a uint`)
}
this.value = value
Object.freeze(this)
}
}
class Float64 {
constructor(value) {
if (typeof value !== 'number') {
throw new RangeError(`Value ${value} cannot be a float64`)
}
this.value = value || 0.0
Object.freeze(this)
}
}
module.exports = { Int, Uint, Float64 }

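Plain JavaScript numbers are ambiguous between the backend's int, uint, and f64 datatypes (import_value() in proxies.js guesses "int" for integers and "f64" otherwise), so these wrappers let a caller pin the type explicitly. A small sketch using the change() API from index.js:

const { Int, Uint, Float64 } = require('./numbers')
let doc = Automerge.change(Automerge.init(), (d) => {
d.count = new Int(-1)     // recorded as "int"
d.size = new Uint(42)     // recorded as "uint"
d.ratio = new Float64(2)  // recorded as "f64" even though 2 is an integer
})
// new Uint(-1) or new Int(2 ** 60) would throw a RangeError at construction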

@@ -1,617 +0,0 @@
const AutomergeWASM = require("automerge-wasm")
const { Int, Uint, Float64 } = require("./numbers");
const { Counter, getWriteableCounter } = require("./counter");
const { Text } = require("./text");
const { STATE, HEADS, FROZEN, OBJECT_ID, READ_ONLY } = require("./constants")
function parseListIndex(key) {
if (typeof key === 'string' && /^[0-9]+$/.test(key)) key = parseInt(key, 10)
if (typeof key !== 'number') {
// throw new TypeError('A list index must be a number, but you passed ' + JSON.stringify(key))
return key
}
if (key < 0 || isNaN(key) || key === Infinity || key === -Infinity) {
throw new RangeError('A list index must be positive, but you passed ' + key)
}
return key
}
function valueAt(target, prop) {
const { context, objectId, path, readonly, heads} = target
let value = context.get(objectId, prop, heads)
if (value === undefined) {
return
}
const datatype = value[0]
const val = value[1]
switch (datatype) {
case undefined: return;
case "map": return mapProxy(context, val, [ ... path, prop ], readonly, heads);
case "list": return listProxy(context, val, [ ... path, prop ], readonly, heads);
case "text": return textProxy(context, val, [ ... path, prop ], readonly, heads);
//case "table":
//case "cursor":
case "str": return val;
case "uint": return val;
case "int": return val;
case "f64": return val;
case "boolean": return val;
case "null": return null;
case "bytes": return val;
case "timestamp": return val;
case "counter": {
if (readonly) {
return new Counter(val);
} else {
return getWriteableCounter(val, context, path, objectId, prop)
}
}
default:
throw RangeError(`datatype ${datatype} unimplemented`)
}
}
function import_value(value) {
switch (typeof value) {
case 'object':
if (value == null) {
return [ null, "null"]
} else if (value instanceof Uint) {
return [ value.value, "uint" ]
} else if (value instanceof Int) {
return [ value.value, "int" ]
} else if (value instanceof Float64) {
return [ value.value, "f64" ]
} else if (value instanceof Counter) {
return [ value.value, "counter" ]
} else if (value instanceof Date) {
return [ value.getTime(), "timestamp" ]
} else if (value instanceof Uint8Array) {
return [ value, "bytes" ]
} else if (value instanceof Array) {
return [ value, "list" ]
} else if (value instanceof Text) {
return [ value, "text" ]
} else if (value[OBJECT_ID]) {
throw new RangeError('Cannot create a reference to an existing document object')
} else {
return [ value, "map" ]
}
break;
case 'boolean':
return [ value, "boolean" ]
case 'number':
if (Number.isInteger(value)) {
return [ value, "int" ]
} else {
return [ value, "f64" ]
}
break;
case 'string':
return [ value ]
break;
default:
throw new RangeError(`Unsupported type of value: ${typeof value}`)
}
}
const MapHandler = {
get (target, key) {
const { context, objectId, path, readonly, frozen, heads, cache } = target
if (key === Symbol.toStringTag) { return target[Symbol.toStringTag] }
if (key === OBJECT_ID) return objectId
if (key === READ_ONLY) return readonly
if (key === FROZEN) return frozen
if (key === HEADS) return heads
if (key === STATE) return context;
if (!cache[key]) {
cache[key] = valueAt(target, key)
}
return cache[key]
},
set (target, key, val) {
let { context, objectId, path, readonly, frozen} = target
target.cache = {} // reset cache on set
if (val && val[OBJECT_ID]) {
throw new RangeError('Cannot create a reference to an existing document object')
}
if (key === FROZEN) {
target.frozen = val
return
}
if (key === HEADS) {
target.heads = val
return
}
let [ value, datatype ] = import_value(val)
if (frozen) {
throw new RangeError("Attempting to use an outdated Automerge document")
}
if (readonly) {
throw new RangeError(`Object property "${key}" cannot be modified`)
}
switch (datatype) {
case "list":
const list = context.putObject(objectId, key, [])
const proxyList = listProxy(context, list, [ ... path, key ], readonly );
for (let i = 0; i < value.length; i++) {
proxyList[i] = value[i]
}
break;
case "text":
const text = context.putObject(objectId, key, "", "text")
const proxyText = textProxy(context, text, [ ... path, key ], readonly );
for (let i = 0; i < value.length; i++) {
proxyText[i] = value.get(i)
}
break;
case "map":
const map = context.putObject(objectId, key, {})
const proxyMap = mapProxy(context, map, [ ... path, key ], readonly );
for (const key in value) {
proxyMap[key] = value[key]
}
break;
default:
context.put(objectId, key, value, datatype)
}
return true
},
deleteProperty (target, key) {
const { context, objectId, path, readonly, frozen } = target
target.cache = {} // reset cache on delete
if (readonly) {
throw new RangeError(`Object property "${key}" cannot be modified`)
}
context.delete(objectId, key)
return true
},
has (target, key) {
const value = this.get(target, key)
return value !== undefined
},
getOwnPropertyDescriptor (target, key) {
const { context, objectId } = target
const value = this.get(target, key)
if (typeof value !== 'undefined') {
return {
configurable: true, enumerable: true, value
}
}
},
ownKeys (target) {
const { context, objectId, heads} = target
return context.keys(objectId, heads)
},
}
const ListHandler = {
get (target, index) {
const {context, objectId, path, readonly, frozen, heads } = target
index = parseListIndex(index)
if (index === Symbol.hasInstance) { return (instance) => { return Array.isArray(instance) } }
if (index === Symbol.toStringTag) { return target[Symbol.toStringTag] }
if (index === OBJECT_ID) return objectId
if (index === READ_ONLY) return readonly
if (index === FROZEN) return frozen
if (index === HEADS) return heads
if (index === STATE) return context;
if (index === 'length') return context.length(objectId, heads);
if (index === Symbol.iterator) {
let i = 0;
return function *() {
// FIXME - ugly
let value = valueAt(target, i)
while (value !== undefined) {
yield value
i += 1
value = valueAt(target, i)
}
}
}
if (typeof index === 'number') {
return valueAt(target, index)
} else {
return listMethods(target)[index]
}
},
set (target, index, val) {
let {context, objectId, path, readonly, frozen } = target
index = parseListIndex(index)
if (val && val[OBJECT_ID]) {
throw new RangeError('Cannot create a reference to an existing document object')
}
if (index === FROZEN) {
target.frozen = val
return
}
if (index === HEADS) {
target.heads = val
return
}
if (typeof index == "string") {
throw new RangeError('list index must be a number')
}
const [ value, datatype] = import_value(val)
if (frozen) {
throw new RangeError("Attempting to use an outdated Automerge document")
}
if (readonly) {
throw new RangeError(`Object property "${index}" cannot be modified`)
}
switch (datatype) {
case "list":
let list
if (index >= context.length(objectId)) {
list = context.insertObject(objectId, index, [])
} else {
list = context.putObject(objectId, index, [])
}
const proxyList = listProxy(context, list, [ ... path, index ], readonly);
proxyList.splice(0,0,...value)
break;
case "text":
let text
if (index >= context.length(objectId)) {
text = context.insertObject(objectId, index, "", "text")
} else {
text = context.putObject(objectId, index, "", "text")
}
const proxyText = textProxy(context, text, [ ... path, index ], readonly);
proxyText.splice(0,0,...value)
break;
case "map":
let map
if (index >= context.length(objectId)) {
map = context.insertObject(objectId, index, {})
} else {
map = context.putObject(objectId, index, {})
}
const proxyMap = mapProxy(context, map, [ ... path, index ], readonly);
for (const key in value) {
proxyMap[key] = value[key]
}
break;
default:
if (index >= context.length(objectId)) {
context.insert(objectId, index, value, datatype)
} else {
context.put(objectId, index, value, datatype)
}
}
return true
},
deleteProperty (target, index) {
const {context, objectId} = target
index = parseListIndex(index)
const elem = context.get(objectId, index)
if (elem && elem[0] == "counter") {
throw new TypeError('Unsupported operation: deleting a counter from a list')
}
context.delete(objectId, index)
return true
},
has (target, index) {
const {context, objectId, heads} = target
index = parseListIndex(index)
if (typeof index === 'number') {
return index < context.length(objectId, heads)
}
return index === 'length'
},
getOwnPropertyDescriptor (target, index) {
const {context, objectId, path, readonly, frozen, heads} = target
if (index === 'length') return {writable: true, value: context.length(objectId, heads) }
if (index === OBJECT_ID) return {configurable: false, enumerable: false, value: objectId}
index = parseListIndex(index)
let value = valueAt(target, index)
return { configurable: true, enumerable: true, value }
},
getPrototypeOf(target) { return Object.getPrototypeOf([]) },
ownKeys (target) {
const {context, objectId, heads } = target
let keys = []
// uncommenting this causes assert.deepEqual() to fail when comparing to a pojo array
// but not uncommenting it causes for (i in list) {} to not enumerate values properly
//for (let i = 0; i < target.context.length(objectId, heads); i++) { keys.push(i.toString()) }
keys.push("length");
return keys
}
}
const TextHandler = Object.assign({}, ListHandler, {
get (target, index) {
// FIXME this is a one line change from ListHandler.get()
const {context, objectId, path, readonly, frozen, heads } = target
index = parseListIndex(index)
if (index === Symbol.toStringTag) { return target[Symbol.toStringTag] }
if (index === Symbol.hasInstance) { return (instance) => { return Array.isArray(instance) } }
if (index === OBJECT_ID) return objectId
if (index === READ_ONLY) return readonly
if (index === FROZEN) return frozen
if (index === HEADS) return heads
if (index === STATE) return context;
if (index === 'length') return context.length(objectId, heads);
if (index === Symbol.iterator) {
let i = 0;
return function *() {
let value = valueAt(target, i)
while (value !== undefined) {
yield value
i += 1
value = valueAt(target, i)
}
}
}
if (typeof index === 'number') {
return valueAt(target, index)
} else {
return textMethods(target)[index] || listMethods(target)[index]
}
},
getPrototypeOf(target) {
return Object.getPrototypeOf(new Text())
},
})
function mapProxy(context, objectId, path, readonly, heads) {
return new Proxy({context, objectId, path, readonly: !!readonly, frozen: false, heads, cache: {}}, MapHandler)
}
function listProxy(context, objectId, path, readonly, heads) {
let target = []
Object.assign(target, {context, objectId, path, readonly: !!readonly, frozen: false, heads, cache: {}})
return new Proxy(target, ListHandler)
}
function textProxy(context, objectId, path, readonly, heads) {
let target = []
Object.assign(target, {context, objectId, path, readonly: !!readonly, frozen: false, heads, cache: {}})
return new Proxy(target, TextHandler)
}
function rootProxy(context, readonly) {
return mapProxy(context, "_root", [], readonly)
}
function listMethods(target) {
const {context, objectId, path, readonly, frozen, heads} = target
const methods = {
deleteAt(index, numDelete) {
if (typeof numDelete === 'number') {
context.splice(objectId, index, numDelete)
} else {
context.delete(objectId, index)
}
return this
},
fill(val, start, end) {
// FIXME
let list = context.getObject(objectId)
let [value, datatype] = import_value(val)
for (let index = parseListIndex(start || 0); index < parseListIndex(end || list.length); index++) {
context.put(objectId, index, value, datatype)
}
return this
},
indexOf(o, start = 0) {
// FIXME
const id = o[OBJECT_ID]
if (id) {
const list = context.getObject(objectId)
for (let index = start; index < list.length; index++) {
if (list[index][OBJECT_ID] === id) {
return index
}
}
return -1
} else {
return context.indexOf(objectId, o, start)
}
},
insertAt(index, ...values) {
this.splice(index, 0, ...values)
return this
},
pop() {
let length = context.length(objectId)
if (length == 0) {
return undefined
}
let last = valueAt(target, length - 1)
context.delete(objectId, length - 1)
return last
},
push(...values) {
let len = context.length(objectId)
this.splice(len, 0, ...values)
return context.length(objectId)
},
shift() {
if (context.length(objectId) == 0) return
const first = valueAt(target, 0)
context.delete(objectId, 0)
return first
},
splice(index, del, ...vals) {
index = parseListIndex(index)
del = parseListIndex(del)
for (let val of vals) {
if (val && val[OBJECT_ID]) {
throw new RangeError('Cannot create a reference to an existing document object')
}
}
if (frozen) {
throw new RangeError("Attempting to use an outdated Automerge document")
}
if (readonly) {
throw new RangeError("Sequence object cannot be modified outside of a change block")
}
let result = []
for (let i = 0; i < del; i++) {
let value = valueAt(target, index)
result.push(value)
context.delete(objectId, index)
}
const values = vals.map((val) => import_value(val))
for (let [value,datatype] of values) {
switch (datatype) {
case "list":
const list = context.insertObject(objectId, index, [])
const proxyList = listProxy(context, list, [ ... path, index ], readonly);
proxyList.splice(0,0,...value)
break;
case "text":
const text = context.insertObject(objectId, index, "", "text")
const proxyText = textProxy(context, text, [ ... path, index ], readonly);
proxyText.splice(0,0,...value)
break;
case "map":
const map = context.insertObject(objectId, index, {})
const proxyMap = mapProxy(context, map, [ ... path, index ], readonly);
for (const key in value) {
proxyMap[key] = value[key]
}
break;
default:
context.insert(objectId, index, value, datatype)
}
index += 1
}
return result
},
unshift(...values) {
this.splice(0, 0, ...values)
return context.length(objectId)
},
entries() {
let i = 0;
const iterator = {
next: () => {
let value = valueAt(target, i)
if (value === undefined) {
return { value: undefined, done: true }
} else {
return { value: [ i, value ], done: false }
}
}
}
return iterator
},
keys() {
let i = 0;
let len = context.length(objectId, heads)
const iterator = {
next: () => {
let value = undefined
if (i < len) { value = i; i++ }
return { value, done: value === undefined }
}
}
return iterator
},
values() {
let i = 0;
const iterator = {
next: () => {
let value = valueAt(target, i)
if (value === undefined) {
return { value: undefined, done: true }
} else {
return { value, done: false }
}
}
}
return iterator
}
}
// Read-only methods that can delegate to the JavaScript built-in implementations
// FIXME - super slow
for (let method of ['concat', 'every', 'filter', 'find', 'findIndex', 'forEach', 'includes',
'join', 'lastIndexOf', 'map', 'reduce', 'reduceRight',
'slice', 'some', 'toLocaleString', 'toString']) {
methods[method] = (...args) => {
const list = []
while (true) {
let value = valueAt(target, list.length)
if (value == undefined) {
break
}
list.push(value)
}
return list[method](...args)
}
}
return methods
}
function textMethods(target) {
const {context, objectId, path, readonly, frozen, heads } = target
const methods = {
set (index, value) {
return this[index] = value
},
get (index) {
return this[index]
},
toString () {
return context.text(objectId, heads).replace(/\uFFFC/g, '')
},
toSpans () {
let spans = []
let chars = ''
let length = this.length
for (let i = 0; i < length; i++) {
const value = this[i]
if (typeof value === 'string') {
chars += value
} else {
if (chars.length > 0) {
spans.push(chars)
chars = ''
}
spans.push(value)
}
}
if (chars.length > 0) {
spans.push(chars)
}
return spans
},
toJSON () {
return this.toString()
}
}
return methods
}
module.exports = { rootProxy, textProxy, listProxy, mapProxy, MapHandler, ListHandler, TextHandler }

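The handlers above are only reachable through the factory functions at the end of the file. A minimal sketch of driving them directly against an automerge-wasm document, bypassing index.js:

const AutomergeWASM = require('automerge-wasm')
const { rootProxy } = require('./proxies')
const state = AutomergeWASM.create()
const root = rootProxy(state, false) // writable proxy over the '_root' map
root.items = [1, 2, 3]               // MapHandler.set: putObject() then per-element list writes
root.items.push(4)                   // listMethods.push -> splice() -> context.insert()
// root.items.length === 4, served by ListHandler.get('length') via context.length()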

@@ -1,132 +0,0 @@
const { OBJECT_ID } = require('./constants')
const { isObject } = require('../src/common')
class Text {
constructor (text) {
const instance = Object.create(Text.prototype)
if (typeof text === 'string') {
instance.elems = [...text]
} else if (Array.isArray(text)) {
instance.elems = text
} else if (text === undefined) {
instance.elems = []
} else {
throw new TypeError(`Unsupported initial value for Text: ${text}`)
}
return instance
}
get length () {
return this.elems.length
}
get (index) {
return this.elems[index]
}
getElemId (index) {
return undefined
}
/**
* Iterates over the text elements character by character, including any
* inline objects.
*/
[Symbol.iterator] () {
let elems = this.elems, index = -1
return {
next () {
index += 1
if (index < elems.length) {
return {done: false, value: elems[index]}
} else {
return {done: true}
}
}
}
}
/**
* Returns the content of the Text object as a simple string, ignoring any
* non-character elements.
*/
toString() {
// Concatting to a string is faster than creating an array and then
// .join()ing for small (<100KB) arrays.
// https://jsperf.com/join-vs-loop-w-type-test
let str = ''
for (const elem of this.elems) {
if (typeof elem === 'string') str += elem
}
return str
}
/**
* Returns the content of the Text object as a sequence of strings,
* interleaved with non-character elements.
*
* For example, the value ['a', 'b', {x: 3}, 'c', 'd'] has spans:
* => ['ab', {x: 3}, 'cd']
*/
toSpans() {
let spans = []
let chars = ''
for (const elem of this.elems) {
if (typeof elem === 'string') {
chars += elem
} else {
if (chars.length > 0) {
spans.push(chars)
chars = ''
}
spans.push(elem)
}
}
if (chars.length > 0) {
spans.push(chars)
}
return spans
}
/**
* Returns the content of the Text object as a simple string, so that the
* JSON serialization of an Automerge document represents text nicely.
*/
toJSON() {
return this.toString()
}
/**
* Updates the list item at position `index` to a new value `value`.
*/
set (index, value) {
this.elems[index] = value
}
/**
* Inserts new list items `values` starting at position `index`.
*/
insertAt(index, ...values) {
this.elems.splice(index, 0, ... values)
}
/**
* Deletes `numDelete` list items starting at position `index`.
* if `numDelete` is not given, one item is deleted.
*/
deleteAt(index, numDelete = 1) {
this.elems.splice(index, numDelete)
}
}
// Read-only methods that can delegate to the JavaScript built-in array
for (let method of ['concat', 'every', 'filter', 'find', 'findIndex', 'forEach', 'includes',
'indexOf', 'join', 'lastIndexOf', 'map', 'reduce', 'reduceRight',
'slice', 'some', 'toLocaleString']) {
Text.prototype[method] = function (...args) {
const array = [...this]
return array[method](...args)
}
}
module.exports = { Text }

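The span behaviour documented above is easiest to see with a value that mixes characters and an inline object:

const { Text } = require('./text')
const t = new Text('ab')
t.insertAt(2, { x: 3 }) // a non-character element
t.insertAt(3, 'c', 'd')
t.toString()            // 'abcd' -- object elements are skipped
t.toSpans()             // ['ab', { x: 3 }, 'cd']
JSON.stringify({ t })   // '{"t":"abcd"}', via toJSON()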

@@ -1,16 +0,0 @@
const { v4: uuid } = require('uuid')
function defaultFactory() {
return uuid().replace(/-/g, '')
}
let factory = defaultFactory
function makeUuid() {
return factory()
}
makeUuid.setFactory = newFactory => { factory = newFactory }
makeUuid.reset = () => { factory = defaultFactory }
module.exports = makeUuid

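The factory indirection exists mainly so tests can make IDs deterministic:

const uuid = require('./uuid')
uuid()                            // 32 hex characters: a v4 UUID with the dashes stripped
uuid.setFactory(() => 'constant') // hypothetical fixed factory for tests
uuid()                            // 'constant'
uuid.reset()                      // restore the default v4-based factory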

@@ -1,164 +0,0 @@
const assert = require('assert')
const util = require('util')
const Automerge = require('..')
describe('Automerge', () => {
describe('basics', () => {
it('should init clone and free', () => {
let doc1 = Automerge.init()
let doc2 = Automerge.clone(doc1);
})
it('handle basic set and read on root object', () => {
let doc1 = Automerge.init()
let doc2 = Automerge.change(doc1, (d) => {
d.hello = "world"
d.big = "little"
d.zip = "zop"
d.app = "dap"
assert.deepEqual(d, { hello: "world", big: "little", zip: "zop", app: "dap" })
})
assert.deepEqual(doc2, { hello: "world", big: "little", zip: "zop", app: "dap" })
})
it('handle basic sets over many changes', () => {
let doc1 = Automerge.init()
let timestamp = new Date();
let counter = new Automerge.Counter(100);
let bytes = new Uint8Array([10,11,12]);
let doc2 = Automerge.change(doc1, (d) => {
d.hello = "world"
})
let doc3 = Automerge.change(doc2, (d) => {
d.counter1 = counter
})
let doc4 = Automerge.change(doc3, (d) => {
d.timestamp1 = timestamp
})
let doc5 = Automerge.change(doc4, (d) => {
d.app = null
})
let doc6 = Automerge.change(doc5, (d) => {
d.bytes1 = bytes
})
let doc7 = Automerge.change(doc6, (d) => {
d.uint = new Automerge.Uint(1)
d.int = new Automerge.Int(-1)
d.float64 = new Automerge.Float64(5.5)
d.number1 = 100
d.number2 = -45.67
d.true = true
d.false = false
})
assert.deepEqual(doc7, { hello: "world", true: true, false: false, int: -1, uint: 1, float64: 5.5, number1: 100, number2: -45.67, counter1: counter, timestamp1: timestamp, bytes1: bytes, app: null })
let changes = Automerge.getAllChanges(doc7)
let t1 = Automerge.init()
;let [t2] = Automerge.applyChanges(t1, changes)
assert.deepEqual(doc7,t2)
})
it('handle overwrites to values', () => {
let doc1 = Automerge.init()
let doc2 = Automerge.change(doc1, (d) => {
d.hello = "world1"
})
let doc3 = Automerge.change(doc2, (d) => {
d.hello = "world2"
})
let doc4 = Automerge.change(doc3, (d) => {
d.hello = "world3"
})
let doc5 = Automerge.change(doc4, (d) => {
d.hello = "world4"
})
assert.deepEqual(doc5, { hello: "world4" } )
})
it('handle set with object value', () => {
let doc1 = Automerge.init()
let doc2 = Automerge.change(doc1, (d) => {
d.subobj = { hello: "world", subsubobj: { zip: "zop" } }
})
assert.deepEqual(doc2, { subobj: { hello: "world", subsubobj: { zip: "zop" } } })
})
it('handle simple list creation', () => {
let doc1 = Automerge.init()
let doc2 = Automerge.change(doc1, (d) => d.list = [])
assert.deepEqual(doc2, { list: []})
})
it('handle simple lists', () => {
let doc1 = Automerge.init()
let doc2 = Automerge.change(doc1, (d) => {
d.list = [ 1, 2, 3 ]
})
assert.deepEqual(doc2.list.length, 3)
assert.deepEqual(doc2.list[0], 1)
assert.deepEqual(doc2.list[1], 2)
assert.deepEqual(doc2.list[2], 3)
assert.deepEqual(doc2, { list: [1,2,3] })
// assert.deepStrictEqual(Automerge.toJS(doc2), { list: [1,2,3] })
let doc3 = Automerge.change(doc2, (d) => {
d.list[1] = "a"
})
assert.deepEqual(doc3.list.length, 3)
assert.deepEqual(doc3.list[0], 1)
assert.deepEqual(doc3.list[1], "a")
assert.deepEqual(doc3.list[2], 3)
assert.deepEqual(doc3, { list: [1,"a",3] })
})
it('handle simple lists over applyChanges', () => {
let doc1 = Automerge.init()
let doc2 = Automerge.change(doc1, (d) => {
d.list = [ 1, 2, 3 ]
})
let changes = Automerge.getChanges(doc1, doc2)
let docB1 = Automerge.init()
;let [docB2] = Automerge.applyChanges(docB1, changes)
assert.deepEqual(docB2, doc2);
})
it('handle text', () => {
let doc1 = Automerge.init()
let doc2 = Automerge.change(doc1, (d) => {
d.list = new Automerge.Text("hello")
d.list.insertAt(2,"Z")
})
let changes = Automerge.getChanges(doc1, doc2)
let docB1 = Automerge.init()
;let [docB2] = Automerge.applyChanges(docB1, changes)
assert.deepEqual(docB2, doc2);
})
it('have many list methods', () => {
let doc1 = Automerge.from({ list: [1,2,3] })
assert.deepEqual(doc1, { list: [1,2,3] });
let doc2 = Automerge.change(doc1, (d) => {
d.list.splice(1,1,9,10)
})
assert.deepEqual(doc2, { list: [1,9,10,3] });
let doc3 = Automerge.change(doc2, (d) => {
d.list.push(11,12)
})
assert.deepEqual(doc3, { list: [1,9,10,3,11,12] });
let doc4 = Automerge.change(doc3, (d) => {
d.list.unshift(2,2)
})
assert.deepEqual(doc4, { list: [2,2,1,9,10,3,11,12] });
let doc5 = Automerge.change(doc4, (d) => {
d.list.shift()
})
assert.deepEqual(doc5, { list: [2,1,9,10,3,11,12] });
let doc6 = Automerge.change(doc5, (d) => {
d.list.insertAt(3,100,101)
})
assert.deepEqual(doc6, { list: [2,1,9,100,101,10,3,11,12] });
})
})
})


@@ -1,97 +0,0 @@
const assert = require('assert')
const { checkEncoded } = require('./helpers')
const Automerge = require('..')
const { encodeChange, decodeChange } = Automerge
describe('change encoding', () => {
it('should encode text edits', () => {
/*
const change1 = {actor: 'aaaa', seq: 1, startOp: 1, time: 9, message: '', deps: [], ops: [
{action: 'makeText', obj: '_root', key: 'text', insert: false, pred: []},
{action: 'set', obj: '1@aaaa', elemId: '_head', insert: true, value: 'h', pred: []},
{action: 'del', obj: '1@aaaa', elemId: '2@aaaa', insert: false, pred: ['2@aaaa']},
{action: 'set', obj: '1@aaaa', elemId: '_head', insert: true, value: 'H', pred: []},
{action: 'set', obj: '1@aaaa', elemId: '4@aaaa', insert: true, value: 'i', pred: []}
]}
*/
const change1 = {actor: 'aaaa', seq: 1, startOp: 1, time: 9, message: null, deps: [], ops: [
{action: 'makeText', obj: '_root', key: 'text', pred: []},
{action: 'set', obj: '1@aaaa', elemId: '_head', insert: true, value: 'h', pred: []},
{action: 'del', obj: '1@aaaa', elemId: '2@aaaa', pred: ['2@aaaa']},
{action: 'set', obj: '1@aaaa', elemId: '_head', insert: true, value: 'H', pred: []},
{action: 'set', obj: '1@aaaa', elemId: '4@aaaa', insert: true, value: 'i', pred: []}
]}
checkEncoded(encodeChange(change1), [
0x85, 0x6f, 0x4a, 0x83, // magic bytes
0xe2, 0xbd, 0xfb, 0xf5, // checksum
1, 94, 0, 2, 0xaa, 0xaa, // chunkType: change, length, deps, actor 'aaaa'
1, 1, 9, 0, 0, // seq, startOp, time, message, actor list
12, 0x01, 4, 0x02, 4, // column count, objActor, objCtr
0x11, 8, 0x13, 7, 0x15, 8, // keyActor, keyCtr, keyStr
0x34, 4, 0x42, 6, // insert, action
0x56, 6, 0x57, 3, // valLen, valRaw
0x70, 6, 0x71, 2, 0x73, 2, // predNum, predActor, predCtr
0, 1, 4, 0, // objActor column: null, 0, 0, 0, 0
0, 1, 4, 1, // objCtr column: null, 1, 1, 1, 1
0, 2, 0x7f, 0, 0, 1, 0x7f, 0, // keyActor column: null, null, 0, null, 0
0, 1, 0x7c, 0, 2, 0x7e, 4, // keyCtr column: null, 0, 2, 0, 4
0x7f, 4, 0x74, 0x65, 0x78, 0x74, 0, 4, // keyStr column: 'text', null, null, null, null
1, 1, 1, 2, // insert column: false, true, false, true, true
0x7d, 4, 1, 3, 2, 1, // action column: makeText, set, del, set, set
0x7d, 0, 0x16, 0, 2, 0x16, // valLen column: 0, 0x16, 0, 0x16, 0x16
0x68, 0x48, 0x69, // valRaw column: 'h', 'H', 'i'
2, 0, 0x7f, 1, 2, 0, // predNum column: 0, 0, 1, 0, 0
0x7f, 0, // predActor column: 0
0x7f, 2 // predCtr column: 2
])
const decoded = decodeChange(encodeChange(change1))
assert.deepStrictEqual(decoded, Object.assign({hash: decoded.hash}, change1))
})
// FIXME - skipping this b/c it was never implemented in the rust impl and isn't trivial
/*
it.skip('should require strict ordering of preds', () => {
const change = new Uint8Array([
133, 111, 74, 131, 31, 229, 112, 44, 1, 105, 1, 58, 30, 190, 100, 253, 180, 180, 66, 49, 126,
81, 142, 10, 3, 35, 140, 189, 231, 34, 145, 57, 66, 23, 224, 149, 64, 97, 88, 140, 168, 194,
229, 4, 244, 209, 58, 138, 67, 140, 1, 152, 236, 250, 2, 0, 1, 4, 55, 234, 66, 242, 8, 21, 11,
52, 1, 66, 2, 86, 3, 87, 10, 112, 2, 113, 3, 115, 4, 127, 9, 99, 111, 109, 109, 111, 110, 86,
97, 114, 1, 127, 1, 127, 166, 1, 52, 48, 57, 49, 52, 57, 52, 53, 56, 50, 127, 2, 126, 0, 1,
126, 139, 1, 0
])
assert.throws(() => { decodeChange(change) }, /operation IDs are not in ascending order/)
})
*/
describe('with trailing bytes', () => {
let change = new Uint8Array([
0x85, 0x6f, 0x4a, 0x83, // magic bytes
0xb2, 0x98, 0x9e, 0xa9, // checksum
1, 61, 0, 2, 0x12, 0x34, // chunkType: change, length, deps, actor '1234'
1, 1, 252, 250, 220, 255, 5, // seq, startOp, time
14, 73, 110, 105, 116, 105, 97, 108, 105, 122, 97, 116, 105, 111, 110, // message: 'Initialization'
0, 6, // actor list, column count
0x15, 3, 0x34, 1, 0x42, 2, // keyStr, insert, action
0x56, 2, 0x57, 1, 0x70, 2, // valLen, valRaw, predNum
0x7f, 1, 0x78, // keyStr: 'x'
1, // insert: false
0x7f, 1, // action: set
0x7f, 19, // valLen: 1 byte of type uint
1, // valRaw: 1
0x7f, 0, // predNum: 0
0, 1, 2, 3, 4, 5, 6, 7, 8, 9 // 10 trailing bytes
])
it('should allow decoding and re-encoding', () => {
// NOTE: This calls the JavaScript encoding and decoding functions, even when the WebAssembly
// backend is loaded. Should the wasm backend export its own functions for testing?
checkEncoded(change, encodeChange(decodeChange(change)))
})
it('should be preserved in document encoding', () => {
const [doc] = Automerge.applyChanges(Automerge.init(), [change])
const [reconstructed] = Automerge.getAllChanges(Automerge.load(Automerge.save(doc)))
checkEncoded(change, reconstructed)
})
})
})

File diff suppressed because it is too large.


@@ -1,697 +0,0 @@
const assert = require('assert')
const Automerge = require('..')
const { assertEqualsOneOf } = require('./helpers')
function attributeStateToAttributes(accumulatedAttributes) {
const attributes = {}
Object.entries(accumulatedAttributes).forEach(([key, values]) => {
if (values.length && values[0] !== null) {
attributes[key] = values[0]
}
})
return attributes
}
function isEquivalent(a, b) {
const aProps = Object.getOwnPropertyNames(a)
const bProps = Object.getOwnPropertyNames(b)
if (aProps.length != bProps.length) {
return false
}
for (let i = 0; i < aProps.length; i++) {
const propName = aProps[i]
if (a[propName] !== b[propName]) {
return false
}
}
return true
}
function isControlMarker(pseudoCharacter) {
return typeof pseudoCharacter === 'object' && pseudoCharacter.attributes
}
function opFrom(text, attributes) {
let op = { insert: text }
if (Object.keys(attributes).length > 0) {
op.attributes = attributes
}
return op
}
function accumulateAttributes(span, accumulatedAttributes) {
Object.entries(span).forEach(([key, value]) => {
if (!accumulatedAttributes[key]) {
accumulatedAttributes[key] = []
}
if (value === null) {
if (accumulatedAttributes[key].length === 0 || accumulatedAttributes[key] === null) {
accumulatedAttributes[key].unshift(null)
} else {
accumulatedAttributes[key].shift()
}
} else {
if (accumulatedAttributes[key][0] === null) {
accumulatedAttributes[key].shift()
} else {
accumulatedAttributes[key].unshift(value)
}
}
})
return accumulatedAttributes
}
function automergeTextToDeltaDoc(text) {
let ops = []
let controlState = {}
let currentString = ""
let attributes = {}
text.toSpans().forEach((span) => {
if (isControlMarker(span)) {
controlState = accumulateAttributes(span.attributes, controlState)
} else {
let next = attributeStateToAttributes(controlState)
// if the next span has the same calculated attributes as the current span
// don't bother outputting it as a separate span, just let it ride
if (typeof span === 'string' && isEquivalent(next, attributes)) {
currentString = currentString + span
return
}
if (currentString) {
ops.push(opFrom(currentString, attributes))
}
// If we've got a string, we might be able to concatenate it to another
// same-attributed-string, so remember it and go to the next iteration.
if (typeof span === 'string') {
currentString = span
attributes = next
} else {
// otherwise we have an embed "character" and should output it immediately.
// embeds are always one-"character" in length.
ops.push(opFrom(span, next))
currentString = ''
attributes = {}
}
}
})
// at the end, flush any accumulated string out
if (currentString) {
ops.push(opFrom(currentString, attributes))
}
return ops
}
function inverseAttributes(attributes) {
let invertedAttributes = {}
Object.keys(attributes).forEach((key) => {
invertedAttributes[key] = null
})
return invertedAttributes
}
function applyDeleteOp(text, offset, op) {
let length = op.delete
while (length > 0) {
if (isControlMarker(text.get(offset))) {
offset += 1
} else {
// we need to not delete control characters, but we do delete embed characters
text.deleteAt(offset, 1)
length -= 1
}
}
return [text, offset]
}
function applyRetainOp(text, offset, op) {
let length = op.retain
if (op.attributes) {
text.insertAt(offset, { attributes: op.attributes })
offset += 1
}
while (length > 0) {
const char = text.get(offset)
offset += 1
if (!isControlMarker(char)) {
length -= 1
}
}
if (op.attributes) {
text.insertAt(offset, { attributes: inverseAttributes(op.attributes) })
offset += 1
}
return [text, offset]
}
function applyInsertOp(text, offset, op) {
let originalOffset = offset
if (typeof op.insert === 'string') {
text.insertAt(offset, ...op.insert.split(''))
offset += op.insert.length
} else {
// we have an embed or something similar
text.insertAt(offset, op.insert)
offset += 1
}
if (op.attributes) {
text.insertAt(originalOffset, { attributes: op.attributes })
offset += 1
}
if (op.attributes) {
text.insertAt(offset, { attributes: inverseAttributes(op.attributes) })
offset += 1
}
return [text, offset]
}
// XXX: uhhhhh, why can't I pass in text?
function applyDeltaDocToAutomergeText(delta, doc) {
let offset = 0
delta.forEach(op => {
if (op.retain) {
[, offset] = applyRetainOp(doc.text, offset, op)
} else if (op.delete) {
[, offset] = applyDeleteOp(doc.text, offset, op)
} else if (op.insert) {
[, offset] = applyInsertOp(doc.text, offset, op)
}
})
}
describe('Automerge.Text', () => {
let s1, s2
beforeEach(() => {
s1 = Automerge.change(Automerge.init(), doc => doc.text = new Automerge.Text())
s2 = Automerge.merge(Automerge.init(), s1)
})
it('should support insertion', () => {
s1 = Automerge.change(s1, doc => doc.text.insertAt(0, 'a'))
assert.strictEqual(s1.text.length, 1)
assert.strictEqual(s1.text.get(0), 'a')
assert.strictEqual(s1.text.toString(), 'a')
//assert.strictEqual(s1.text.getElemId(0), `2@${Automerge.getActorId(s1)}`)
})
it('should support deletion', () => {
s1 = Automerge.change(s1, doc => doc.text.insertAt(0, 'a', 'b', 'c'))
s1 = Automerge.change(s1, doc => doc.text.deleteAt(1, 1))
assert.strictEqual(s1.text.length, 2)
assert.strictEqual(s1.text.get(0), 'a')
assert.strictEqual(s1.text.get(1), 'c')
assert.strictEqual(s1.text.toString(), 'ac')
})
it("should support implicit and explicit deletion", () => {
s1 = Automerge.change(s1, doc => doc.text.insertAt(0, "a", "b", "c"))
s1 = Automerge.change(s1, doc => doc.text.deleteAt(1))
s1 = Automerge.change(s1, doc => doc.text.deleteAt(1, 0))
assert.strictEqual(s1.text.length, 2)
assert.strictEqual(s1.text.get(0), "a")
assert.strictEqual(s1.text.get(1), "c")
assert.strictEqual(s1.text.toString(), "ac")
})
it('should handle concurrent insertion', () => {
s1 = Automerge.change(s1, doc => doc.text.insertAt(0, 'a', 'b', 'c'))
s2 = Automerge.change(s2, doc => doc.text.insertAt(0, 'x', 'y', 'z'))
s1 = Automerge.merge(s1, s2)
assert.strictEqual(s1.text.length, 6)
assertEqualsOneOf(s1.text.toString(), 'abcxyz', 'xyzabc')
assertEqualsOneOf(s1.text.join(''), 'abcxyz', 'xyzabc')
})
it('should handle text and other ops in the same change', () => {
s1 = Automerge.change(s1, doc => {
doc.foo = 'bar'
doc.text.insertAt(0, 'a')
})
assert.strictEqual(s1.foo, 'bar')
assert.strictEqual(s1.text.toString(), 'a')
assert.strictEqual(s1.text.join(''), 'a')
})
it('should serialize to JSON as a simple string', () => {
s1 = Automerge.change(s1, doc => doc.text.insertAt(0, 'a', '"', 'b'))
assert.strictEqual(JSON.stringify(s1), '{"text":"a\\"b"}')
})
it('should allow modification before an object is assigned to a document', () => {
s1 = Automerge.change(Automerge.init(), doc => {
const text = new Automerge.Text()
text.insertAt(0, 'a', 'b', 'c', 'd')
text.deleteAt(2)
doc.text = text
assert.strictEqual(doc.text.toString(), 'abd')
assert.strictEqual(doc.text.join(''), 'abd')
})
assert.strictEqual(s1.text.toString(), 'abd')
assert.strictEqual(s1.text.join(''), 'abd')
})
it('should allow modification after an object is assigned to a document', () => {
s1 = Automerge.change(Automerge.init(), doc => {
const text = new Automerge.Text()
doc.text = text
doc.text.insertAt(0, 'a', 'b', 'c', 'd')
doc.text.deleteAt(2)
assert.strictEqual(doc.text.toString(), 'abd')
assert.strictEqual(doc.text.join(''), 'abd')
})
assert.strictEqual(s1.text.join(''), 'abd')
})
it('should not allow modification outside of a change callback', () => {
assert.throws(() => s1.text.insertAt(0, 'a'), /object cannot be modified outside of a change block/)
})
describe('with initial value', () => {
it('should accept a string as initial value', () => {
let s1 = Automerge.change(Automerge.init(), doc => doc.text = new Automerge.Text('init'))
assert.strictEqual(s1.text.length, 4)
assert.strictEqual(s1.text.get(0), 'i')
assert.strictEqual(s1.text.get(1), 'n')
assert.strictEqual(s1.text.get(2), 'i')
assert.strictEqual(s1.text.get(3), 't')
assert.strictEqual(s1.text.toString(), 'init')
})
it('should accept an array as initial value', () => {
let s1 = Automerge.change(Automerge.init(), doc => doc.text = new Automerge.Text(['i', 'n', 'i', 't']))
assert.strictEqual(s1.text.length, 4)
assert.strictEqual(s1.text.get(0), 'i')
assert.strictEqual(s1.text.get(1), 'n')
assert.strictEqual(s1.text.get(2), 'i')
assert.strictEqual(s1.text.get(3), 't')
assert.strictEqual(s1.text.toString(), 'init')
})
it('should initialize text in Automerge.from()', () => {
let s1 = Automerge.from({text: new Automerge.Text('init')})
assert.strictEqual(s1.text.length, 4)
assert.strictEqual(s1.text.get(0), 'i')
assert.strictEqual(s1.text.get(1), 'n')
assert.strictEqual(s1.text.get(2), 'i')
assert.strictEqual(s1.text.get(3), 't')
assert.strictEqual(s1.text.toString(), 'init')
})
it('should encode the initial value as a change', () => {
const s1 = Automerge.from({text: new Automerge.Text('init')})
const changes = Automerge.getAllChanges(s1)
assert.strictEqual(changes.length, 1)
const [s2] = Automerge.applyChanges(Automerge.init(), changes)
assert.strictEqual(s2.text instanceof Automerge.Text, true)
assert.strictEqual(s2.text.toString(), 'init')
assert.strictEqual(s2.text.join(''), 'init')
})
it('should allow immediate access to the value', () => {
Automerge.change(Automerge.init(), doc => {
const text = new Automerge.Text('init')
assert.strictEqual(text.length, 4)
assert.strictEqual(text.get(0), 'i')
assert.strictEqual(text.toString(), 'init')
doc.text = text
assert.strictEqual(doc.text.length, 4)
assert.strictEqual(doc.text.get(0), 'i')
assert.strictEqual(doc.text.toString(), 'init')
})
})
it('should allow pre-assignment modification of the initial value', () => {
let s1 = Automerge.change(Automerge.init(), doc => {
const text = new Automerge.Text('init')
text.deleteAt(3)
assert.strictEqual(text.join(''), 'ini')
doc.text = text
assert.strictEqual(doc.text.join(''), 'ini')
assert.strictEqual(doc.text.toString(), 'ini')
})
assert.strictEqual(s1.text.toString(), 'ini')
assert.strictEqual(s1.text.join(''), 'ini')
})
it('should allow post-assignment modification of the initial value', () => {
let s1 = Automerge.change(Automerge.init(), doc => {
const text = new Automerge.Text('init')
doc.text = text
doc.text.deleteAt(0)
doc.text.insertAt(0, 'I')
assert.strictEqual(doc.text.join(''), 'Init')
assert.strictEqual(doc.text.toString(), 'Init')
})
assert.strictEqual(s1.text.join(''), 'Init')
assert.strictEqual(s1.text.toString(), 'Init')
})
})
describe('non-textual control characters', () => {
let s1
beforeEach(() => {
s1 = Automerge.change(Automerge.init(), doc => {
doc.text = new Automerge.Text()
doc.text.insertAt(0, 'a')
doc.text.insertAt(1, { attribute: 'bold' })
})
})
it('should allow fetching non-textual characters', () => {
assert.deepEqual(s1.text.get(1), { attribute: 'bold' })
//assert.strictEqual(s1.text.getElemId(1), `3@${Automerge.getActorId(s1)}`)
})
it('should include control characters in string length', () => {
assert.strictEqual(s1.text.length, 2)
assert.strictEqual(s1.text.get(0), 'a')
})
it('should exclude control characters from toString()', () => {
assert.strictEqual(s1.text.toString(), 'a')
})
it('should allow control characters to be updated', () => {
const s2 = Automerge.change(s1, doc => doc.text.get(1).attribute = 'italic')
const s3 = Automerge.load(Automerge.save(s2))
assert.strictEqual(s1.text.get(1).attribute, 'bold')
assert.strictEqual(s2.text.get(1).attribute, 'italic')
assert.strictEqual(s3.text.get(1).attribute, 'italic')
})
describe('spans interface to Text', () => {
it('should return a simple string as a single span', () => {
let s1 = Automerge.change(Automerge.init(), doc => {
doc.text = new Automerge.Text('hello world')
})
assert.deepEqual(s1.text.toSpans(), ['hello world'])
})
it('should return an empty string as an empty array', () => {
let s1 = Automerge.change(Automerge.init(), doc => {
doc.text = new Automerge.Text()
})
assert.deepEqual(s1.text.toSpans(), [])
})
it('should split a span at a control character', () => {
let s1 = Automerge.change(Automerge.init(), doc => {
doc.text = new Automerge.Text('hello world')
doc.text.insertAt(5, { attributes: { bold: true } })
})
assert.deepEqual(s1.text.toSpans(),
['hello', { attributes: { bold: true } }, ' world'])
})
it('should allow consecutive control characters', () => {
let s1 = Automerge.change(Automerge.init(), doc => {
doc.text = new Automerge.Text('hello world')
doc.text.insertAt(5, { attributes: { bold: true } })
doc.text.insertAt(6, { attributes: { italic: true } })
})
assert.deepEqual(s1.text.toSpans(),
['hello',
{ attributes: { bold: true } },
{ attributes: { italic: true } },
' world'
])
})
it('should allow non-consecutive control characters', () => {
let s1 = Automerge.change(Automerge.init(), doc => {
doc.text = new Automerge.Text('hello world')
doc.text.insertAt(5, { attributes: { bold: true } })
doc.text.insertAt(12, { attributes: { italic: true } })
})
assert.deepEqual(s1.text.toSpans(),
['hello',
{ attributes: { bold: true } },
' world',
{ attributes: { italic: true } }
])
})
it('should be convertable into a Quill delta', () => {
let s1 = Automerge.change(Automerge.init(), doc => {
doc.text = new Automerge.Text('Gandalf the Grey')
doc.text.insertAt(0, { attributes: { bold: true } })
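// each control character already inserted shifts later indexes by one, hence the "+ 1" and "+ 2" offsets below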
doc.text.insertAt(7 + 1, { attributes: { bold: null } })
doc.text.insertAt(12 + 2, { attributes: { color: '#cccccc' } })
})
let deltaDoc = automergeTextToDeltaDoc(s1.text)
// From https://quilljs.com/docs/delta/
let expectedDoc = [
{ insert: 'Gandalf', attributes: { bold: true } },
{ insert: ' the ' },
{ insert: 'Grey', attributes: { color: '#cccccc' } }
]
assert.deepEqual(deltaDoc, expectedDoc)
})
it('should support embeds', () => {
let s1 = Automerge.change(Automerge.init(), doc => {
doc.text = new Automerge.Text('')
doc.text.insertAt(0, { attributes: { link: 'https://quilljs.com' } })
doc.text.insertAt(1, {
image: 'https://quilljs.com/assets/images/icon.png'
})
doc.text.insertAt(2, { attributes: { link: null } })
})
let deltaDoc = automergeTextToDeltaDoc(s1.text)
// From https://quilljs.com/docs/delta/
let expectedDoc = [{
// An image link
insert: {
image: 'https://quilljs.com/assets/images/icon.png'
},
attributes: {
link: 'https://quilljs.com'
}
}]
assert.deepEqual(deltaDoc, expectedDoc)
})
it('should handle concurrent overlapping spans', () => {
let s1 = Automerge.change(Automerge.init(), doc => {
doc.text = new Automerge.Text('Gandalf the Grey')
})
let s2 = Automerge.merge(Automerge.init(), s1)
let s3 = Automerge.change(s1, doc => {
doc.text.insertAt(8, { attributes: { bold: true } })
doc.text.insertAt(16 + 1, { attributes: { bold: null } })
})
let s4 = Automerge.change(s2, doc => {
doc.text.insertAt(0, { attributes: { bold: true } })
doc.text.insertAt(11 + 1, { attributes: { bold: null } })
})
let merged = Automerge.merge(s3, s4)
let deltaDoc = automergeTextToDeltaDoc(merged.text)
// From https://quilljs.com/docs/delta/
let expectedDoc = [
{ insert: 'Gandalf the Grey', attributes: { bold: true } },
]
assert.deepEqual(deltaDoc, expectedDoc)
})
it('should handle debolding spans', () => {
let s1 = Automerge.change(Automerge.init(), doc => {
doc.text = new Automerge.Text('Gandalf the Grey')
})
let s2 = Automerge.merge(Automerge.init(), s1)
let s3 = Automerge.change(s1, doc => {
doc.text.insertAt(0, { attributes: { bold: true } })
doc.text.insertAt(16 + 1, { attributes: { bold: null } })
})
let s4 = Automerge.change(s2, doc => {
doc.text.insertAt(8, { attributes: { bold: null } })
doc.text.insertAt(11 + 1, { attributes: { bold: true } })
})
let merged = Automerge.merge(s3, s4)
let deltaDoc = automergeTextToDeltaDoc(merged.text)
// From https://quilljs.com/docs/delta/
let expectedDoc = [
{ insert: 'Gandalf ', attributes: { bold: true } },
{ insert: 'the' },
{ insert: ' Grey', attributes: { bold: true } },
]
assert.deepEqual(deltaDoc, expectedDoc)
})
// xxx: how would this work for colors?
it('should handle destyling across destyled spans', () => {
let s1 = Automerge.change(Automerge.init(), doc => {
doc.text = new Automerge.Text('Gandalf the Grey')
})
let s2 = Automerge.merge(Automerge.init(), s1)
let s3 = Automerge.change(s1, doc => {
doc.text.insertAt(0, { attributes: { bold: true } })
doc.text.insertAt(16 + 1, { attributes: { bold: null } })
})
let s4 = Automerge.change(s2, doc => {
doc.text.insertAt(8, { attributes: { bold: null } })
doc.text.insertAt(11 + 1, { attributes: { bold: true } })
})
let merged = Automerge.merge(s3, s4)
let final = Automerge.change(merged, doc => {
doc.text.insertAt(3 + 1, { attributes: { bold: null } })
doc.text.insertAt(doc.text.length, { attributes: { bold: true } })
})
let deltaDoc = automergeTextToDeltaDoc(final.text)
// From https://quilljs.com/docs/delta/
let expectedDoc = [
{ insert: 'Gan', attributes: { bold: true } },
{ insert: 'dalf the Grey' },
]
assert.deepEqual(deltaDoc, expectedDoc)
})
it('should apply an insert', () => {
let s1 = Automerge.change(Automerge.init(), doc => {
doc.text = new Automerge.Text('Hello world')
})
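// keep "Hello " (retain 6), splice in "reader", then drop "world" (delete 5)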
const delta = [
{ retain: 6 },
{ insert: 'reader' },
{ delete: 5 }
]
let s2 = Automerge.change(s1, doc => {
applyDeltaDocToAutomergeText(delta, doc)
})
assert.strictEqual(s2.text.join(''), 'Hello reader')
})
it('should apply an insert with control characters', () => {
let s1 = Automerge.change(Automerge.init(), doc => {
doc.text = new Automerge.Text('Hello world')
})
const delta = [
{ retain: 6 },
{ insert: 'reader', attributes: { bold: true } },
{ delete: 5 },
{ insert: '!' }
]
let s2 = Automerge.change(s1, doc => {
applyDeltaDocToAutomergeText(delta, doc)
})
assert.strictEqual(s2.text.toString(), 'Hello reader!')
assert.deepEqual(s2.text.toSpans(), [
"Hello ",
{ attributes: { bold: true } },
"reader",
{ attributes: { bold: null } },
"!"
])
})
it('should account for control characters in retain/delete lengths', () => {
let s1 = Automerge.change(Automerge.init(), doc => {
doc.text = new Automerge.Text('Hello world')
doc.text.insertAt(4, { attributes: { color: '#ccc' } })
doc.text.insertAt(10, { attributes: { color: '#f00' } })
})
const delta = [
{ retain: 6 },
{ insert: 'reader', attributes: { bold: true } },
{ delete: 5 },
{ insert: '!' }
]
let s2 = Automerge.change(s1, doc => {
applyDeltaDocToAutomergeText(delta, doc)
})
assert.strictEqual(s2.text.toString(), 'Hello reader!')
assert.deepEqual(s2.text.toSpans(), [
"Hell",
{ attributes: { color: '#ccc'} },
"o ",
{ attributes: { bold: true } },
"reader",
{ attributes: { bold: null } },
{ attributes: { color: '#f00'} },
"!"
])
})
it('should support embeds', () => {
let s1 = Automerge.change(Automerge.init(), doc => {
doc.text = new Automerge.Text('')
})
let deltaDoc = [{
// An image link
insert: {
image: 'https://quilljs.com/assets/images/icon.png'
},
attributes: {
link: 'https://quilljs.com'
}
}]
let s2 = Automerge.change(s1, doc => {
applyDeltaDocToAutomergeText(deltaDoc, doc)
})
assert.deepEqual(s2.text.toSpans(), [
{ attributes: { link: 'https://quilljs.com' } },
{ image: 'https://quilljs.com/assets/images/icon.png'},
{ attributes: { link: null } },
])
})
})
})
it('should support unicode when creating text', () => {
s1 = Automerge.from({
text: new Automerge.Text('🐦')
})
assert.strictEqual(s1.text.get(0), '🐦')
})
})


@@ -1,32 +0,0 @@
const assert = require('assert')
const Automerge = require('..')
const uuid = Automerge.uuid
describe('uuid', () => {
afterEach(() => {
uuid.reset()
})
describe('default implementation', () => {
it('generates unique values', () => {
assert.notEqual(uuid(), uuid())
})
})
describe('custom implementation', () => {
let counter
function customUuid() {
return `custom-uuid-${counter++}`
}
before(() => uuid.setFactory(customUuid))
beforeEach(() => counter = 0)
it('invokes the custom factory', () => {
assert.equal(uuid(), 'custom-uuid-0')
assert.equal(uuid(), 'custom-uuid-1')
})
})
})


@@ -1,164 +0,0 @@
export type Actor = string;
export type ObjID = string;
export type Change = Uint8Array;
export type SyncMessage = Uint8Array;
export type Prop = string | number;
export type Hash = string;
export type Heads = Hash[];
export type Value = string | number | boolean | null | Date | Uint8Array
export type ObjType = string | Array | Object
export type FullValue =
["str", string] |
["int", number] |
["uint", number] |
["f64", number] |
["boolean", boolean] |
["timestamp", Date] |
["counter", number] |
["bytes", Uint8Array] |
["null", Uint8Array] |
["map", ObjID] |
["list", ObjID] |
["text", ObjID] |
["table", ObjID]
export enum ObjTypeName {
list = "list",
map = "map",
table = "table",
text = "text",
}
export type Datatype =
"boolean" |
"str" |
"int" |
"uint" |
"f64" |
"null" |
"timestamp" |
"counter" |
"bytes" |
"map" |
"text" |
"list";
export type DecodedSyncMessage = {
heads: Heads,
need: Heads,
have: any[]
changes: Change[]
}
export type DecodedChange = {
actor: Actor,
seq: number
startOp: number,
time: number,
message: string | null,
deps: Heads,
hash: Hash,
ops: Op[]
}
export type Op = {
action: string,
obj: ObjID,
key: string,
value?: string | number | boolean,
datatype?: string,
pred: string[],
}
export type Patch = {
obj: ObjID
action: 'assign' | 'insert' | 'delete'
key: Prop
value: Value
datatype: Datatype
conflict: boolean
}
export function create(actor?: Actor): Automerge;
export function load(data: Uint8Array, actor?: Actor): Automerge;
export function encodeChange(change: DecodedChange): Change;
export function decodeChange(change: Change): DecodedChange;
export function initSyncState(): SyncState;
export function encodeSyncMessage(message: DecodedSyncMessage): SyncMessage;
export function decodeSyncMessage(msg: SyncMessage): DecodedSyncMessage;
export function encodeSyncState(state: SyncState): Uint8Array;
export function decodeSyncState(data: Uint8Array): SyncState;
export class Automerge {
// change state
put(obj: ObjID, prop: Prop, value: Value, datatype?: Datatype): undefined;
putObject(obj: ObjID, prop: Prop, value: ObjType): ObjID;
insert(obj: ObjID, index: number, value: Value, datatype?: Datatype): undefined;
insertObject(obj: ObjID, index: number, value: ObjType): ObjID;
push(obj: ObjID, value: Value, datatype?: Datatype): undefined;
pushObject(obj: ObjID, value: ObjType): ObjID;
splice(obj: ObjID, start: number, delete_count: number, text?: string | Array<Value>): ObjID[] | undefined;
increment(obj: ObjID, prop: Prop, value: number): void;
delete(obj: ObjID, prop: Prop): void;
// returns a single value - if there is a conflict return the winner
get(obj: ObjID, prop: any, heads?: Heads): FullValue | null;
// return all values in case of a conflict
getAll(obj: ObjID, arg: any, heads?: Heads): FullValue[];
keys(obj: ObjID, heads?: Heads): string[];
text(obj: ObjID, heads?: Heads): string;
length(obj: ObjID, heads?: Heads): number;
materialize(obj?: ObjID, heads?: Heads): any;
// transactions
commit(message?: string, time?: number): Hash;
merge(other: Automerge): Heads;
getActorId(): Actor;
pendingOps(): number;
rollback(): number;
// patches
enablePatches(enable: boolean): void;
popPatches(): Patch[];
// save and load to local store
save(): Uint8Array;
saveIncremental(): Uint8Array;
loadIncremental(data: Uint8Array): number;
// sync over network
receiveSyncMessage(state: SyncState, message: SyncMessage): void;
generateSyncMessage(state: SyncState): SyncMessage | null;
// low level change functions
applyChanges(changes: Change[]): void;
getChanges(have_deps: Heads): Change[];
getChangeByHash(hash: Hash): Change | null;
getChangesAdded(other: Automerge): Change[];
getHeads(): Heads;
getLastLocalChange(): Change;
getMissingDeps(heads?: Heads): Heads;
// memory management
free(): void;
clone(actor?: string): Automerge;
fork(actor?: string): Automerge;
forkAt(heads: Heads, actor?: string): Automerge;
// dump internal state to console.log
dump(): void;
// dump internal state to a JS object
toJS(): any;
}
export class SyncState {
free(): void;
clone(): SyncState;
lastSentHeads: any;
sentHashes: any;
readonly sharedHeads: any;
}
export default function init (): Promise<any>;
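For orientation, a minimal usage sketch of the bindings declared above. This is a hedged example, not part of the diff: the require path assumes the nodejs build of the automerge-wasm package described below, and "_root" is the string form of the root ObjID that the import logic in lib.rs accepts.
const wasm = require('automerge-wasm')          // the web build needs `await init()` first
const doc = wasm.create()                       // optionally pass a hex-encoded actor id
doc.put('_root', 'title', 'hello', 'str')       // the datatype hint is optional
const tags = doc.putObject('_root', 'tags', []) // nested objects come back as an ObjID
doc.insert(tags, 0, 'crdt')
doc.commit('initial change')                    // returns the change hash
const saved = doc.save()                        // Uint8Array snapshot
const copy = wasm.load(saved)
copy.get('_root', 'title')                      // => ['str', 'hello']
doc.free(); copy.free()                         // wasm memory is freed manually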


@@ -1,6 +0,0 @@
let wasm = require("./bindgen")
module.exports = wasm
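// re-export loadDoc under the name "load" to match the typings, then hide the original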
module.exports.load = module.exports.loadDoc
delete module.exports.loadDoc
Object.defineProperty(module.exports, "__esModule", { value: true });
module.exports.default = () => (new Promise((resolve,reject) => { resolve() }))


@@ -1,49 +0,0 @@
{
"collaborators": [
"Orion Henry <orion@inkandswitch.com>",
"Alex Good <alex@memoryandthought.me>",
"Martin Kleppmann"
],
"name": "automerge-wasm",
"description": "wasm-bindgen bindings to the automerge rust implementation",
"homepage": "https://github.com/automerge/automerge-rs/tree/main/automerge-wasm",
"repository": "github:automerge/automerge-rs",
"version": "0.1.2",
"license": "MIT",
"files": [
"README.md",
"LICENSE",
"package.json",
"index.d.ts",
"nodejs/index.js",
"nodejs/bindgen.js",
"nodejs/bindgen_bg.wasm",
"web/index.js",
"web/bindgen.js",
"web/bindgen_bg.wasm"
],
"types": "index.d.ts",
"module": "./web/index.js",
"main": "./nodejs/index.js",
"scripts": {
"build": "cross-env PROFILE=dev TARGET=nodejs yarn target",
"release": "cross-env PROFILE=release yarn buildall",
"buildall": "cross-env TARGET=nodejs yarn target && cross-env TARGET=web yarn target",
"target": "rimraf ./$TARGET && wasm-pack build --target $TARGET --$PROFILE --out-name bindgen -d $TARGET && cp $TARGET-index.js $TARGET/index.js",
"test": "ts-mocha -p tsconfig.json --type-check --bail --full-trace test/*.ts"
},
"dependencies": {},
"devDependencies": {
"@types/expect": "^24.3.0",
"@types/jest": "^27.4.0",
"@types/mocha": "^9.1.0",
"@types/node": "^17.0.13",
"cross-env": "^7.0.3",
"fast-sha256": "^1.3.0",
"mocha": "^9.1.3",
"pako": "^2.0.4",
"rimraf": "^3.0.2",
"ts-mocha": "^9.0.2",
"typescript": "^4.5.5"
}
}


@@ -1,433 +0,0 @@
use automerge as am;
use automerge::transaction::Transactable;
use automerge::{Change, ChangeHash, Prop};
use js_sys::{Array, Object, Reflect, Uint8Array};
use std::collections::HashSet;
use std::fmt::Display;
use wasm_bindgen::prelude::*;
use wasm_bindgen::JsCast;
use crate::{ObjId, ScalarValue, Value};
pub(crate) struct JS(pub(crate) JsValue);
pub(crate) struct AR(pub(crate) Array);
impl From<AR> for JsValue {
fn from(ar: AR) -> Self {
ar.0.into()
}
}
impl From<JS> for JsValue {
fn from(js: JS) -> Self {
js.0
}
}
impl From<am::sync::State> for JS {
fn from(state: am::sync::State) -> Self {
let shared_heads: JS = state.shared_heads.into();
let last_sent_heads: JS = state.last_sent_heads.into();
let their_heads: JS = state.their_heads.into();
let their_need: JS = state.their_need.into();
let sent_hashes: JS = state.sent_hashes.into();
let their_have = if let Some(have) = &state.their_have {
JsValue::from(AR::from(have.as_slice()).0)
} else {
JsValue::null()
};
let result: JsValue = Object::new().into();
// we can unwrap here b/c we made the object and know it's not frozen
Reflect::set(&result, &"sharedHeads".into(), &shared_heads.0).unwrap();
Reflect::set(&result, &"lastSentHeads".into(), &last_sent_heads.0).unwrap();
Reflect::set(&result, &"theirHeads".into(), &their_heads.0).unwrap();
Reflect::set(&result, &"theirNeed".into(), &their_need.0).unwrap();
Reflect::set(&result, &"theirHave".into(), &their_have).unwrap();
Reflect::set(&result, &"sentHashes".into(), &sent_hashes.0).unwrap();
JS(result)
}
}
impl From<Vec<ChangeHash>> for JS {
fn from(heads: Vec<ChangeHash>) -> Self {
let heads: Array = heads
.iter()
.map(|h| JsValue::from_str(&h.to_string()))
.collect();
JS(heads.into())
}
}
impl From<HashSet<ChangeHash>> for JS {
fn from(heads: HashSet<ChangeHash>) -> Self {
let result: JsValue = Object::new().into();
for key in &heads {
Reflect::set(&result, &key.to_string().into(), &true.into()).unwrap();
}
JS(result)
}
}
impl From<Option<Vec<ChangeHash>>> for JS {
fn from(heads: Option<Vec<ChangeHash>>) -> Self {
if let Some(v) = heads {
let v: Array = v
.iter()
.map(|h| JsValue::from_str(&h.to_string()))
.collect();
JS(v.into())
} else {
JS(JsValue::null())
}
}
}
impl TryFrom<JS> for HashSet<ChangeHash> {
type Error = JsValue;
fn try_from(value: JS) -> Result<Self, Self::Error> {
let mut result = HashSet::new();
for key in Reflect::own_keys(&value.0)?.iter() {
if let Some(true) = Reflect::get(&value.0, &key)?.as_bool() {
result.insert(key.into_serde().map_err(to_js_err)?);
}
}
Ok(result)
}
}
impl TryFrom<JS> for Vec<ChangeHash> {
type Error = JsValue;
fn try_from(value: JS) -> Result<Self, Self::Error> {
let value = value.0.dyn_into::<Array>()?;
let value: Result<Vec<ChangeHash>, _> = value.iter().map(|j| j.into_serde()).collect();
let value = value.map_err(to_js_err)?;
Ok(value)
}
}
impl From<JS> for Option<Vec<ChangeHash>> {
fn from(value: JS) -> Self {
let value = value.0.dyn_into::<Array>().ok()?;
let value: Result<Vec<ChangeHash>, _> = value.iter().map(|j| j.into_serde()).collect();
let value = value.ok()?;
Some(value)
}
}
impl TryFrom<JS> for Vec<Change> {
type Error = JsValue;
fn try_from(value: JS) -> Result<Self, Self::Error> {
let value = value.0.dyn_into::<Array>()?;
let changes: Result<Vec<Uint8Array>, _> = value.iter().map(|j| j.dyn_into()).collect();
let changes = changes?;
let changes: Result<Vec<Change>, _> = changes
.iter()
.map(|a| Change::try_from(a.to_vec()))
.collect();
let changes = changes.map_err(to_js_err)?;
Ok(changes)
}
}
impl TryFrom<JS> for am::sync::State {
type Error = JsValue;
fn try_from(value: JS) -> Result<Self, Self::Error> {
let value = value.0;
let shared_heads = js_get(&value, "sharedHeads")?.try_into()?;
let last_sent_heads = js_get(&value, "lastSentHeads")?.try_into()?;
let their_heads = js_get(&value, "theirHeads")?.into();
let their_need = js_get(&value, "theirNeed")?.into();
let their_have = js_get(&value, "theirHave")?.try_into()?;
let sent_hashes = js_get(&value, "sentHashes")?.try_into()?;
Ok(am::sync::State {
shared_heads,
last_sent_heads,
their_heads,
their_need,
their_have,
sent_hashes,
})
}
}
impl TryFrom<JS> for Option<Vec<am::sync::Have>> {
type Error = JsValue;
fn try_from(value: JS) -> Result<Self, Self::Error> {
if value.0.is_null() {
Ok(None)
} else {
Ok(Some(value.try_into()?))
}
}
}
impl TryFrom<JS> for Vec<am::sync::Have> {
type Error = JsValue;
fn try_from(value: JS) -> Result<Self, Self::Error> {
let value = value.0.dyn_into::<Array>()?;
let have: Result<Vec<am::sync::Have>, JsValue> = value
.iter()
.map(|s| {
let last_sync = js_get(&s, "lastSync")?.try_into()?;
let bloom = js_get(&s, "bloom")?.try_into()?;
Ok(am::sync::Have { last_sync, bloom })
})
.collect();
let have = have?;
Ok(have)
}
}
impl TryFrom<JS> for am::sync::BloomFilter {
type Error = JsValue;
fn try_from(value: JS) -> Result<Self, Self::Error> {
let value: Uint8Array = value.0.dyn_into()?;
let value = value.to_vec();
let value = value.as_slice().try_into().map_err(to_js_err)?;
Ok(value)
}
}
impl From<&[ChangeHash]> for AR {
fn from(value: &[ChangeHash]) -> Self {
AR(value
.iter()
.map(|h| JsValue::from_str(&hex::encode(&h.0)))
.collect())
}
}
impl From<&[Change]> for AR {
fn from(value: &[Change]) -> Self {
let changes: Array = value
.iter()
.map(|c| Uint8Array::from(c.raw_bytes()))
.collect();
AR(changes)
}
}
impl From<&[am::sync::Have]> for AR {
fn from(value: &[am::sync::Have]) -> Self {
AR(value
.iter()
.map(|have| {
let last_sync: Array = have
.last_sync
.iter()
.map(|h| JsValue::from_str(&hex::encode(&h.0)))
.collect();
// FIXME - the clone and the unwrap here shouldn't be needed - look at into_bytes()
let bloom = Uint8Array::from(have.bloom.to_bytes().as_slice());
let obj: JsValue = Object::new().into();
// we can unwrap here b/c we created the object and know it's not frozen
Reflect::set(&obj, &"lastSync".into(), &last_sync.into()).unwrap();
Reflect::set(&obj, &"bloom".into(), &bloom.into()).unwrap();
obj
})
.collect())
}
}
pub(crate) fn to_js_err<T: Display>(err: T) -> JsValue {
js_sys::Error::new(&std::format!("{}", err)).into()
}
pub(crate) fn js_get<J: Into<JsValue>>(obj: J, prop: &str) -> Result<JS, JsValue> {
Ok(JS(Reflect::get(&obj.into(), &prop.into())?))
}
pub(crate) fn js_set<V: Into<JsValue>>(obj: &JsValue, prop: &str, val: V) -> Result<bool, JsValue> {
Reflect::set(obj, &prop.into(), &val.into())
}
pub(crate) fn to_prop(p: JsValue) -> Result<Prop, JsValue> {
if let Some(s) = p.as_string() {
Ok(Prop::Map(s))
} else if let Some(n) = p.as_f64() {
Ok(Prop::Seq(n as usize))
} else {
Err(to_js_err("prop must me a string or number"))
}
}
pub(crate) fn to_objtype(
value: &JsValue,
datatype: &Option<String>,
) -> Option<(am::ObjType, Vec<(Prop, JsValue)>)> {
match datatype.as_deref() {
Some("map") => {
let map = value.clone().dyn_into::<js_sys::Object>().ok()?;
// FIXME unwrap
let map = js_sys::Object::keys(&map)
.iter()
.zip(js_sys::Object::values(&map).iter())
.map(|(key, val)| (key.as_string().unwrap().into(), val))
.collect();
Some((am::ObjType::Map, map))
}
Some("list") => {
let list = value.clone().dyn_into::<js_sys::Array>().ok()?;
let list = list
.iter()
.enumerate()
.map(|(i, e)| (i.into(), e))
.collect();
Some((am::ObjType::List, list))
}
Some("text") => {
let text = value.as_string()?;
let text = text
.chars()
.enumerate()
.map(|(i, ch)| (i.into(), ch.to_string().into()))
.collect();
Some((am::ObjType::Text, text))
}
Some(_) => None,
None => {
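// no datatype hint: infer a list from an Array, a map from an Object, and text from a string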
if let Ok(list) = value.clone().dyn_into::<js_sys::Array>() {
let list = list
.iter()
.enumerate()
.map(|(i, e)| (i.into(), e))
.collect();
Some((am::ObjType::List, list))
} else if let Ok(map) = value.clone().dyn_into::<js_sys::Object>() {
// FIXME unwrap
let map = js_sys::Object::keys(&map)
.iter()
.zip(js_sys::Object::values(&map).iter())
.map(|(key, val)| (key.as_string().unwrap().into(), val))
.collect();
Some((am::ObjType::Map, map))
} else if let Some(text) = value.as_string() {
let text = text
.chars()
.enumerate()
.map(|(i, ch)| (i.into(), ch.to_string().into()))
.collect();
Some((am::ObjType::Text, text))
} else {
None
}
}
}
}
pub(crate) fn get_heads(heads: Option<Array>) -> Option<Vec<ChangeHash>> {
let heads = heads?;
let heads: Result<Vec<ChangeHash>, _> = heads.iter().map(|j| j.into_serde()).collect();
heads.ok()
}
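// The *_to_js helpers below recursively materialize a document object as plain JS values; text objects render as strings.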
pub(crate) fn map_to_js(doc: &am::AutoCommit, obj: &ObjId) -> JsValue {
let keys = doc.keys(obj);
let map = Object::new();
for k in keys {
let val = doc.get(obj, &k);
match val {
Ok(Some((Value::Object(o), exid)))
if o == am::ObjType::Map || o == am::ObjType::Table =>
{
Reflect::set(&map, &k.into(), &map_to_js(doc, &exid)).unwrap();
}
Ok(Some((Value::Object(o), exid))) if o == am::ObjType::List => {
Reflect::set(&map, &k.into(), &list_to_js(doc, &exid)).unwrap();
}
Ok(Some((Value::Object(o), exid))) if o == am::ObjType::Text => {
Reflect::set(&map, &k.into(), &doc.text(&exid).unwrap().into()).unwrap();
}
Ok(Some((Value::Scalar(v), _))) => {
Reflect::set(&map, &k.into(), &ScalarValue(v).into()).unwrap();
}
_ => (),
};
}
map.into()
}
pub(crate) fn map_to_js_at(doc: &am::AutoCommit, obj: &ObjId, heads: &[ChangeHash]) -> JsValue {
let keys = doc.keys(obj);
let map = Object::new();
for k in keys {
let val = doc.get_at(obj, &k, heads);
match val {
Ok(Some((Value::Object(o), exid)))
if o == am::ObjType::Map || o == am::ObjType::Table =>
{
Reflect::set(&map, &k.into(), &map_to_js_at(doc, &exid, heads)).unwrap();
}
Ok(Some((Value::Object(o), exid))) if o == am::ObjType::List => {
Reflect::set(&map, &k.into(), &list_to_js_at(doc, &exid, heads)).unwrap();
}
Ok(Some((Value::Object(o), exid))) if o == am::ObjType::Text => {
Reflect::set(&map, &k.into(), &doc.text_at(&exid, heads).unwrap().into()).unwrap();
}
Ok(Some((Value::Scalar(v), _))) => {
Reflect::set(&map, &k.into(), &ScalarValue(v).into()).unwrap();
}
_ => (),
};
}
map.into()
}
pub(crate) fn list_to_js(doc: &am::AutoCommit, obj: &ObjId) -> JsValue {
let len = doc.length(obj);
let array = Array::new();
for i in 0..len {
let val = doc.get(obj, i as usize);
match val {
Ok(Some((Value::Object(o), exid)))
if o == am::ObjType::Map || o == am::ObjType::Table =>
{
array.push(&map_to_js(doc, &exid));
}
Ok(Some((Value::Object(o), exid))) if o == am::ObjType::List => {
array.push(&list_to_js(doc, &exid));
}
Ok(Some((Value::Object(o), exid))) if o == am::ObjType::Text => {
array.push(&doc.text(&exid).unwrap().into());
}
Ok(Some((Value::Scalar(v), _))) => {
array.push(&ScalarValue(v).into());
}
_ => (),
};
}
array.into()
}
pub(crate) fn list_to_js_at(doc: &am::AutoCommit, obj: &ObjId, heads: &[ChangeHash]) -> JsValue {
let len = doc.length(obj);
let array = Array::new();
for i in 0..len {
let val = doc.get_at(obj, i as usize, heads);
match val {
Ok(Some((Value::Object(o), exid)))
if o == am::ObjType::Map || o == am::ObjType::Table =>
{
array.push(&map_to_js_at(doc, &exid, heads));
}
Ok(Some((Value::Object(o), exid))) if o == am::ObjType::List => {
array.push(&list_to_js_at(doc, &exid, heads));
}
Ok(Some((Value::Object(o), exid))) if o == am::ObjType::Text => {
array.push(&doc.text_at(exid, heads).unwrap().into());
}
Ok(Some((Value::Scalar(v), _))) => {
array.push(&ScalarValue(v).into());
}
_ => (),
};
}
array.into()
}


@@ -1,919 +0,0 @@
#![doc(
html_logo_url = "https://raw.githubusercontent.com/automerge/automerge-rs/main/img/brandmark.svg",
html_favicon_url = "https:///raw.githubusercontent.com/automerge/automerge-rs/main/img/favicon.ico"
)]
#![warn(
missing_debug_implementations,
// missing_docs, // TODO: add documentation!
rust_2021_compatibility,
rust_2018_idioms,
unreachable_pub,
bad_style,
const_err,
dead_code,
improper_ctypes,
non_shorthand_field_patterns,
no_mangle_generic_items,
overflowing_literals,
path_statements,
patterns_in_fns_without_body,
private_in_public,
unconditional_recursion,
unused,
unused_allocation,
unused_comparisons,
unused_parens,
while_true
)]
#![allow(clippy::unused_unit)]
use am::transaction::CommitOptions;
use am::transaction::Transactable;
use am::ApplyOptions;
use automerge as am;
use automerge::Patch;
use automerge::VecOpObserver;
use automerge::{Change, ObjId, Prop, Value, ROOT};
use js_sys::{Array, Object, Uint8Array};
use std::convert::TryInto;
use wasm_bindgen::prelude::*;
use wasm_bindgen::JsCast;
mod interop;
mod sync;
mod value;
use interop::{
get_heads, js_get, js_set, list_to_js, list_to_js_at, map_to_js, map_to_js_at, to_js_err,
to_objtype, to_prop, AR, JS,
};
use sync::SyncState;
use value::{datatype, ScalarValue};
#[allow(unused_macros)]
macro_rules! log {
( $( $t:tt )* ) => {
web_sys::console::log_1(&format!( $( $t )* ).into());
};
}
#[cfg(feature = "wee_alloc")]
#[global_allocator]
static ALLOC: wee_alloc::WeeAlloc = wee_alloc::WeeAlloc::INIT;
#[wasm_bindgen]
#[derive(Debug)]
pub struct Automerge {
doc: automerge::AutoCommit,
observer: Option<VecOpObserver>,
}
#[wasm_bindgen]
impl Automerge {
pub fn new(actor: Option<String>) -> Result<Automerge, JsValue> {
let mut automerge = automerge::AutoCommit::new();
if let Some(a) = actor {
let a = automerge::ActorId::from(hex::decode(a).map_err(to_js_err)?.to_vec());
automerge.set_actor(a);
}
Ok(Automerge {
doc: automerge,
observer: None,
})
}
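// Commit any pending ops (routed through the op observer, if one is set) so that
// the reads, saves and merges below always see a fully committed document.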
fn ensure_transaction_closed(&mut self) {
if self.doc.pending_ops() > 0 {
let mut opts = CommitOptions::default();
if let Some(observer) = self.observer.as_mut() {
opts.set_op_observer(observer);
}
self.doc.commit_with(opts);
}
}
#[allow(clippy::should_implement_trait)]
pub fn clone(&mut self, actor: Option<String>) -> Result<Automerge, JsValue> {
self.ensure_transaction_closed();
let mut automerge = Automerge {
doc: self.doc.clone(),
observer: None,
};
if let Some(s) = actor {
let actor = automerge::ActorId::from(hex::decode(s).map_err(to_js_err)?.to_vec());
automerge.doc.set_actor(actor);
}
Ok(automerge)
}
pub fn fork(&mut self, actor: Option<String>) -> Result<Automerge, JsValue> {
self.ensure_transaction_closed();
let mut automerge = Automerge {
doc: self.doc.fork(),
observer: None,
};
if let Some(s) = actor {
let actor = automerge::ActorId::from(hex::decode(s).map_err(to_js_err)?.to_vec());
automerge.doc.set_actor(actor);
}
Ok(automerge)
}
#[wasm_bindgen(js_name = forkAt)]
pub fn fork_at(&mut self, heads: JsValue, actor: Option<String>) -> Result<Automerge, JsValue> {
let deps: Vec<_> = JS(heads).try_into()?;
let mut automerge = Automerge {
doc: self.doc.fork_at(&deps)?,
observer: None,
};
if let Some(s) = actor {
let actor = automerge::ActorId::from(hex::decode(s).map_err(to_js_err)?.to_vec());
automerge.doc.set_actor(actor);
}
Ok(automerge)
}
pub fn free(self) {}
#[wasm_bindgen(js_name = pendingOps)]
pub fn pending_ops(&self) -> JsValue {
(self.doc.pending_ops() as u32).into()
}
pub fn commit(&mut self, message: Option<String>, time: Option<f64>) -> JsValue {
let mut commit_opts = CommitOptions::default();
if let Some(message) = message {
commit_opts.set_message(message);
}
if let Some(time) = time {
commit_opts.set_time(time as i64);
}
if let Some(observer) = self.observer.as_mut() {
commit_opts.set_op_observer(observer);
}
let hash = self.doc.commit_with(commit_opts);
JsValue::from_str(&hex::encode(&hash.0))
}
pub fn merge(&mut self, other: &mut Automerge) -> Result<Array, JsValue> {
self.ensure_transaction_closed();
let options = if let Some(observer) = self.observer.as_mut() {
ApplyOptions::default().with_op_observer(observer)
} else {
ApplyOptions::default()
};
let heads = self.doc.merge_with(&mut other.doc, options)?;
let heads: Array = heads
.iter()
.map(|h| JsValue::from_str(&hex::encode(&h.0)))
.collect();
Ok(heads)
}
pub fn rollback(&mut self) -> f64 {
self.doc.rollback() as f64
}
pub fn keys(&self, obj: JsValue, heads: Option<Array>) -> Result<Array, JsValue> {
let obj = self.import(obj)?;
let result = if let Some(heads) = get_heads(heads) {
self.doc
.keys_at(&obj, &heads)
.map(|s| JsValue::from_str(&s))
.collect()
} else {
self.doc.keys(&obj).map(|s| JsValue::from_str(&s)).collect()
};
Ok(result)
}
pub fn text(&self, obj: JsValue, heads: Option<Array>) -> Result<String, JsValue> {
let obj = self.import(obj)?;
if let Some(heads) = get_heads(heads) {
Ok(self.doc.text_at(&obj, &heads)?)
} else {
Ok(self.doc.text(&obj)?)
}
}
pub fn splice(
&mut self,
obj: JsValue,
start: f64,
delete_count: f64,
text: JsValue,
) -> Result<(), JsValue> {
let obj = self.import(obj)?;
let start = start as usize;
let delete_count = delete_count as usize;
let mut vals = vec![];
if let Some(t) = text.as_string() {
self.doc.splice_text(&obj, start, delete_count, &t)?;
} else {
if let Ok(array) = text.dyn_into::<Array>() {
for i in array.iter() {
let value = self
.import_scalar(&i, &None)
.ok_or_else(|| to_js_err("expected scalar"))?;
vals.push(value);
}
}
self.doc
.splice(&obj, start, delete_count, vals.into_iter())?;
}
Ok(())
}
pub fn push(&mut self, obj: JsValue, value: JsValue, datatype: JsValue) -> Result<(), JsValue> {
let obj = self.import(obj)?;
let value = self
.import_scalar(&value, &datatype.as_string())
.ok_or_else(|| to_js_err("invalid scalar value"))?;
let index = self.doc.length(&obj);
self.doc.insert(&obj, index, value)?;
Ok(())
}
#[wasm_bindgen(js_name = pushObject)]
pub fn push_object(&mut self, obj: JsValue, value: JsValue) -> Result<Option<String>, JsValue> {
let obj = self.import(obj)?;
let (value, subvals) =
to_objtype(&value, &None).ok_or_else(|| to_js_err("expected object"))?;
let index = self.doc.length(&obj);
let opid = self.doc.insert_object(&obj, index, value)?;
self.subset(&opid, subvals)?;
Ok(opid.to_string().into())
}
pub fn insert(
&mut self,
obj: JsValue,
index: f64,
value: JsValue,
datatype: JsValue,
) -> Result<(), JsValue> {
let obj = self.import(obj)?;
let index = index as f64;
let value = self
.import_scalar(&value, &datatype.as_string())
.ok_or_else(|| to_js_err("expected scalar value"))?;
self.doc.insert(&obj, index as usize, value)?;
Ok(())
}
#[wasm_bindgen(js_name = insertObject)]
pub fn insert_object(
&mut self,
obj: JsValue,
index: f64,
value: JsValue,
) -> Result<Option<String>, JsValue> {
let obj = self.import(obj)?;
let index = index as f64;
let (value, subvals) =
to_objtype(&value, &None).ok_or_else(|| to_js_err("expected object"))?;
let opid = self.doc.insert_object(&obj, index as usize, value)?;
self.subset(&opid, subvals)?;
Ok(opid.to_string().into())
}
pub fn put(
&mut self,
obj: JsValue,
prop: JsValue,
value: JsValue,
datatype: JsValue,
) -> Result<(), JsValue> {
let obj = self.import(obj)?;
let prop = self.import_prop(prop)?;
let value = self
.import_scalar(&value, &datatype.as_string())
.ok_or_else(|| to_js_err("expected scalar value"))?;
self.doc.put(&obj, prop, value)?;
Ok(())
}
#[wasm_bindgen(js_name = putObject)]
pub fn put_object(
&mut self,
obj: JsValue,
prop: JsValue,
value: JsValue,
) -> Result<JsValue, JsValue> {
let obj = self.import(obj)?;
let prop = self.import_prop(prop)?;
let (value, subvals) =
to_objtype(&value, &None).ok_or_else(|| to_js_err("expected object"))?;
let opid = self.doc.put_object(&obj, prop, value)?;
self.subset(&opid, subvals)?;
Ok(opid.to_string().into())
}
fn subset(&mut self, obj: &am::ObjId, vals: Vec<(am::Prop, JsValue)>) -> Result<(), JsValue> {
for (p, v) in vals {
let (value, subvals) = self.import_value(&v, None)?;
//let opid = self.0.set(id, p, value)?;
let opid = match (p, value) {
(Prop::Map(s), Value::Object(objtype)) => {
Some(self.doc.put_object(obj, s, objtype)?)
}
(Prop::Map(s), Value::Scalar(scalar)) => {
self.doc.put(obj, s, scalar.into_owned())?;
None
}
(Prop::Seq(i), Value::Object(objtype)) => {
Some(self.doc.insert_object(obj, i, objtype)?)
}
(Prop::Seq(i), Value::Scalar(scalar)) => {
self.doc.insert(obj, i, scalar.into_owned())?;
None
}
};
if let Some(opid) = opid {
self.subset(&opid, subvals)?;
}
}
Ok(())
}
pub fn increment(
&mut self,
obj: JsValue,
prop: JsValue,
value: JsValue,
) -> Result<(), JsValue> {
let obj = self.import(obj)?;
let prop = self.import_prop(prop)?;
let value: f64 = value
.as_f64()
.ok_or_else(|| to_js_err("increment needs a numeric value"))?;
self.doc.increment(&obj, prop, value as i64)?;
Ok(())
}
#[wasm_bindgen(js_name = get)]
pub fn get(
&self,
obj: JsValue,
prop: JsValue,
heads: Option<Array>,
) -> Result<Option<Array>, JsValue> {
let obj = self.import(obj)?;
let result = Array::new();
let prop = to_prop(prop);
let heads = get_heads(heads);
if let Ok(prop) = prop {
let value = if let Some(h) = heads {
self.doc.get_at(&obj, prop, &h)?
} else {
self.doc.get(&obj, prop)?
};
match value {
Some((Value::Object(obj_type), obj_id)) => {
result.push(&obj_type.to_string().into());
result.push(&obj_id.to_string().into());
Ok(Some(result))
}
Some((Value::Scalar(value), _)) => {
result.push(&datatype(&value).into());
result.push(&ScalarValue(value).into());
Ok(Some(result))
}
None => Ok(None),
}
} else {
Ok(None)
}
}
#[wasm_bindgen(js_name = getAll)]
pub fn get_all(
&self,
obj: JsValue,
arg: JsValue,
heads: Option<Array>,
) -> Result<Array, JsValue> {
let obj = self.import(obj)?;
let result = Array::new();
let prop = to_prop(arg);
if let Ok(prop) = prop {
let values = if let Some(heads) = get_heads(heads) {
self.doc.get_all_at(&obj, prop, &heads)
} else {
self.doc.get_all(&obj, prop)
}
.map_err(to_js_err)?;
for value in values {
match value {
(Value::Object(obj_type), obj_id) => {
let sub = Array::new();
sub.push(&obj_type.to_string().into());
sub.push(&obj_id.to_string().into());
result.push(&sub.into());
}
(Value::Scalar(value), id) => {
let sub = Array::new();
sub.push(&datatype(&value).into());
sub.push(&ScalarValue(value).into());
sub.push(&id.to_string().into());
result.push(&sub.into());
}
}
}
}
Ok(result)
}
#[wasm_bindgen(js_name = enablePatches)]
pub fn enable_patches(&mut self, enable: JsValue) -> Result<(), JsValue> {
let enable = enable
.as_bool()
.ok_or_else(|| to_js_err("expected boolean"))?;
if enable {
if self.observer.is_none() {
self.observer = Some(VecOpObserver::default());
}
} else {
self.observer = None;
}
Ok(())
}
#[wasm_bindgen(js_name = popPatches)]
pub fn pop_patches(&mut self) -> Result<Array, JsValue> {
// transactions send out observer updates as they occur, not waiting for them to be
// committed.
// If we pop the patches then we won't be able to revert them.
self.ensure_transaction_closed();
let patches = self
.observer
.as_mut()
.map_or_else(Vec::new, |o| o.take_patches());
let result = Array::new();
for p in patches {
let patch = Object::new();
match p {
Patch::Put {
obj,
key,
value,
conflict,
} => {
js_set(&patch, "action", "put")?;
js_set(&patch, "obj", obj.to_string())?;
js_set(&patch, "key", key)?;
match value {
(Value::Object(obj_type), obj_id) => {
js_set(&patch, "datatype", obj_type.to_string())?;
js_set(&patch, "value", obj_id.to_string())?;
}
(Value::Scalar(value), _) => {
js_set(&patch, "datatype", datatype(&value))?;
js_set(&patch, "value", ScalarValue(value))?;
}
};
js_set(&patch, "conflict", conflict)?;
}
Patch::Insert { obj, index, value } => {
js_set(&patch, "action", "insert")?;
js_set(&patch, "obj", obj.to_string())?;
js_set(&patch, "key", index as f64)?;
match value {
(Value::Object(obj_type), obj_id) => {
js_set(&patch, "datatype", obj_type.to_string())?;
js_set(&patch, "value", obj_id.to_string())?;
}
(Value::Scalar(value), _) => {
js_set(&patch, "datatype", datatype(&value))?;
js_set(&patch, "value", ScalarValue(value))?;
}
};
}
Patch::Increment { obj, key, value } => {
js_set(&patch, "action", "increment")?;
js_set(&patch, "obj", obj.to_string())?;
js_set(&patch, "key", key)?;
js_set(&patch, "value", value.0)?;
}
Patch::Delete { obj, key } => {
js_set(&patch, "action", "delete")?;
js_set(&patch, "obj", obj.to_string())?;
js_set(&patch, "key", key)?;
}
}
result.push(&patch);
}
Ok(result)
}
pub fn length(&self, obj: JsValue, heads: Option<Array>) -> Result<f64, JsValue> {
let obj = self.import(obj)?;
if let Some(heads) = get_heads(heads) {
Ok(self.doc.length_at(&obj, &heads) as f64)
} else {
Ok(self.doc.length(&obj) as f64)
}
}
pub fn delete(&mut self, obj: JsValue, prop: JsValue) -> Result<(), JsValue> {
let obj = self.import(obj)?;
let prop = to_prop(prop)?;
self.doc.delete(&obj, prop).map_err(to_js_err)?;
Ok(())
}
pub fn save(&mut self) -> Uint8Array {
self.ensure_transaction_closed();
Uint8Array::from(self.doc.save().as_slice())
}
#[wasm_bindgen(js_name = saveIncremental)]
pub fn save_incremental(&mut self) -> Uint8Array {
self.ensure_transaction_closed();
let bytes = self.doc.save_incremental();
Uint8Array::from(bytes.as_slice())
}
#[wasm_bindgen(js_name = loadIncremental)]
pub fn load_incremental(&mut self, data: Uint8Array) -> Result<f64, JsValue> {
self.ensure_transaction_closed();
let data = data.to_vec();
let options = if let Some(observer) = self.observer.as_mut() {
ApplyOptions::default().with_op_observer(observer)
} else {
ApplyOptions::default()
};
let len = self
.doc
.load_incremental_with(&data, options)
.map_err(to_js_err)?;
Ok(len as f64)
}
#[wasm_bindgen(js_name = applyChanges)]
pub fn apply_changes(&mut self, changes: JsValue) -> Result<(), JsValue> {
self.ensure_transaction_closed();
let changes: Vec<_> = JS(changes).try_into()?;
let options = if let Some(observer) = self.observer.as_mut() {
ApplyOptions::default().with_op_observer(observer)
} else {
ApplyOptions::default()
};
self.doc
.apply_changes_with(changes, options)
.map_err(to_js_err)?;
Ok(())
}
#[wasm_bindgen(js_name = getChanges)]
pub fn get_changes(&mut self, have_deps: JsValue) -> Result<Array, JsValue> {
self.ensure_transaction_closed();
let deps: Vec<_> = JS(have_deps).try_into()?;
let changes = self.doc.get_changes(&deps);
let changes: Array = changes
.iter()
.map(|c| Uint8Array::from(c.raw_bytes()))
.collect();
Ok(changes)
}
#[wasm_bindgen(js_name = getChangeByHash)]
pub fn get_change_by_hash(&mut self, hash: JsValue) -> Result<JsValue, JsValue> {
self.ensure_transaction_closed();
let hash = hash.into_serde().map_err(to_js_err)?;
let change = self.doc.get_change_by_hash(&hash);
if let Some(c) = change {
Ok(Uint8Array::from(c.raw_bytes()).into())
} else {
Ok(JsValue::null())
}
}
#[wasm_bindgen(js_name = getChangesAdded)]
pub fn get_changes_added(&mut self, other: &mut Automerge) -> Result<Array, JsValue> {
self.ensure_transaction_closed();
let changes = self.doc.get_changes_added(&mut other.doc);
let changes: Array = changes
.iter()
.map(|c| Uint8Array::from(c.raw_bytes()))
.collect();
Ok(changes)
}
#[wasm_bindgen(js_name = getHeads)]
pub fn get_heads(&mut self) -> Array {
self.ensure_transaction_closed();
let heads = self.doc.get_heads();
let heads: Array = heads
.iter()
.map(|h| JsValue::from_str(&hex::encode(&h.0)))
.collect();
heads
}
#[wasm_bindgen(js_name = getActorId)]
pub fn get_actor_id(&self) -> String {
let actor = self.doc.get_actor();
actor.to_string()
}
#[wasm_bindgen(js_name = getLastLocalChange)]
pub fn get_last_local_change(&mut self) -> Result<Uint8Array, JsValue> {
self.ensure_transaction_closed();
if let Some(change) = self.doc.get_last_local_change() {
Ok(Uint8Array::from(change.raw_bytes()))
} else {
Err(to_js_err("no local changes"))
}
}
pub fn dump(&mut self) {
self.ensure_transaction_closed();
self.doc.dump()
}
#[wasm_bindgen(js_name = getMissingDeps)]
pub fn get_missing_deps(&mut self, heads: Option<Array>) -> Result<Array, JsValue> {
self.ensure_transaction_closed();
let heads = get_heads(heads).unwrap_or_default();
let deps = self.doc.get_missing_deps(&heads);
let deps: Array = deps
.iter()
.map(|h| JsValue::from_str(&hex::encode(&h.0)))
.collect();
Ok(deps)
}
#[wasm_bindgen(js_name = receiveSyncMessage)]
pub fn receive_sync_message(
&mut self,
state: &mut SyncState,
message: Uint8Array,
) -> Result<(), JsValue> {
self.ensure_transaction_closed();
let message = message.to_vec();
let message = am::sync::Message::decode(message.as_slice()).map_err(to_js_err)?;
let options = if let Some(observer) = self.observer.as_mut() {
ApplyOptions::default().with_op_observer(observer)
} else {
ApplyOptions::default()
};
self.doc
.receive_sync_message_with(&mut state.0, message, options)
.map_err(to_js_err)?;
Ok(())
}
#[wasm_bindgen(js_name = generateSyncMessage)]
pub fn generate_sync_message(&mut self, state: &mut SyncState) -> Result<JsValue, JsValue> {
self.ensure_transaction_closed();
if let Some(message) = self.doc.generate_sync_message(&mut state.0) {
Ok(Uint8Array::from(message.encode().as_slice()).into())
} else {
Ok(JsValue::null())
}
}
#[wasm_bindgen(js_name = toJS)]
pub fn to_js(&self) -> JsValue {
map_to_js(&self.doc, &ROOT)
}
pub fn materialize(&self, obj: JsValue, heads: Option<Array>) -> Result<JsValue, JsValue> {
let obj = self.import(obj).unwrap_or(ROOT);
let heads = get_heads(heads);
if let Some(heads) = heads {
match self.doc.object_type(&obj) {
Some(am::ObjType::Map) => Ok(map_to_js_at(&self.doc, &obj, heads.as_slice())),
Some(am::ObjType::List) => Ok(list_to_js_at(&self.doc, &obj, heads.as_slice())),
Some(am::ObjType::Text) => Ok(self.doc.text_at(&obj, heads.as_slice())?.into()),
Some(am::ObjType::Table) => Ok(map_to_js_at(&self.doc, &obj, heads.as_slice())),
None => Err(to_js_err(format!("invalid obj {}", obj))),
}
} else {
match self.doc.object_type(&obj) {
Some(am::ObjType::Map) => Ok(map_to_js(&self.doc, &obj)),
Some(am::ObjType::List) => Ok(list_to_js(&self.doc, &obj)),
Some(am::ObjType::Text) => Ok(self.doc.text(&obj)?.into()),
Some(am::ObjType::Table) => Ok(map_to_js(&self.doc, &obj)),
None => Err(to_js_err(format!("invalid obj {}", obj))),
}
}
}
fn import(&self, id: JsValue) -> Result<ObjId, JsValue> {
if let Some(s) = id.as_string() {
if let Some(post) = s.strip_prefix('/') {
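// a leading '/' denotes a path such as "/config/0/name", resolved segment by segment from ROOT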
let mut obj = ROOT;
let mut is_map = true;
let parts = post.split('/');
for prop in parts {
if prop.is_empty() {
break;
}
let val = if is_map {
self.doc.get(obj, prop)?
} else {
self.doc.get(obj, am::Prop::Seq(prop.parse().unwrap()))?
};
match val {
Some((am::Value::Object(am::ObjType::Map), id)) => {
is_map = true;
obj = id;
}
Some((am::Value::Object(am::ObjType::Table), id)) => {
is_map = true;
obj = id;
}
Some((am::Value::Object(_), id)) => {
is_map = false;
obj = id;
}
None => return Err(to_js_err(format!("invalid path '{}'", s))),
_ => return Err(to_js_err(format!("path '{}' is not an object", s))),
};
}
Ok(obj)
} else {
Ok(self.doc.import(&s)?)
}
} else {
Err(to_js_err("invalid objid"))
}
}
fn import_prop(&self, prop: JsValue) -> Result<Prop, JsValue> {
if let Some(s) = prop.as_string() {
Ok(s.into())
} else if let Some(n) = prop.as_f64() {
Ok((n as usize).into())
} else {
Err(to_js_err(format!("invalid prop {:?}", prop)))
}
}
fn import_scalar(&self, value: &JsValue, datatype: &Option<String>) -> Option<am::ScalarValue> {
match datatype.as_deref() {
Some("boolean") => value.as_bool().map(am::ScalarValue::Boolean),
Some("int") => value.as_f64().map(|v| am::ScalarValue::Int(v as i64)),
Some("uint") => value.as_f64().map(|v| am::ScalarValue::Uint(v as u64)),
Some("str") => value.as_string().map(|v| am::ScalarValue::Str(v.into())),
Some("f64") => value.as_f64().map(am::ScalarValue::F64),
Some("bytes") => Some(am::ScalarValue::Bytes(
value.clone().dyn_into::<Uint8Array>().unwrap().to_vec(),
)),
Some("counter") => value.as_f64().map(|v| am::ScalarValue::counter(v as i64)),
Some("timestamp") => {
if let Some(v) = value.as_f64() {
Some(am::ScalarValue::Timestamp(v as i64))
} else if let Ok(d) = value.clone().dyn_into::<js_sys::Date>() {
Some(am::ScalarValue::Timestamp(d.get_time() as i64))
} else {
None
}
}
Some("null") => Some(am::ScalarValue::Null),
Some(_) => None,
None => {
if value.is_null() {
Some(am::ScalarValue::Null)
} else if let Some(b) = value.as_bool() {
Some(am::ScalarValue::Boolean(b))
} else if let Some(s) = value.as_string() {
Some(am::ScalarValue::Str(s.into()))
} else if let Some(n) = value.as_f64() {
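// every JS number arrives as an f64; store whole numbers as Int and the rest as F64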
if (n.round() - n).abs() < f64::EPSILON {
Some(am::ScalarValue::Int(n as i64))
} else {
Some(am::ScalarValue::F64(n))
}
} else if let Ok(d) = value.clone().dyn_into::<js_sys::Date>() {
Some(am::ScalarValue::Timestamp(d.get_time() as i64))
} else if let Ok(o) = &value.clone().dyn_into::<Uint8Array>() {
Some(am::ScalarValue::Bytes(o.to_vec()))
} else {
None
}
}
}
}
fn import_value(
&self,
value: &JsValue,
datatype: Option<String>,
) -> Result<(Value<'static>, Vec<(Prop, JsValue)>), JsValue> {
match self.import_scalar(value, &datatype) {
Some(val) => Ok((val.into(), vec![])),
None => {
if let Some((o, subvals)) = to_objtype(value, &datatype) {
Ok((o.into(), subvals))
} else {
web_sys::console::log_2(&"Invalid value".into(), value);
Err(to_js_err("invalid value"))
}
}
}
}
}
#[wasm_bindgen(js_name = create)]
pub fn init(actor: Option<String>) -> Result<Automerge, JsValue> {
console_error_panic_hook::set_once();
Automerge::new(actor)
}
#[wasm_bindgen(js_name = loadDoc)]
pub fn load(data: Uint8Array, actor: Option<String>) -> Result<Automerge, JsValue> {
let data = data.to_vec();
let observer = None;
let options = ApplyOptions::<()>::default();
let mut automerge = am::AutoCommit::load_with(&data, options).map_err(to_js_err)?;
if let Some(s) = actor {
let actor = automerge::ActorId::from(hex::decode(s).map_err(to_js_err)?.to_vec());
automerge.set_actor(actor);
}
Ok(Automerge {
doc: automerge,
observer,
})
}
#[wasm_bindgen(js_name = encodeChange)]
pub fn encode_change(change: JsValue) -> Result<Uint8Array, JsValue> {
let change: am::ExpandedChange = change.into_serde().map_err(to_js_err)?;
let change: Change = change.into();
Ok(Uint8Array::from(change.raw_bytes()))
}
#[wasm_bindgen(js_name = decodeChange)]
pub fn decode_change(change: Uint8Array) -> Result<JsValue, JsValue> {
let change = Change::from_bytes(change.to_vec()).map_err(to_js_err)?;
let change: am::ExpandedChange = change.decode();
JsValue::from_serde(&change).map_err(to_js_err)
}
#[wasm_bindgen(js_name = initSyncState)]
pub fn init_sync_state() -> SyncState {
SyncState(am::sync::State::new())
}
// needed for compatibility with the automerge-js API
#[wasm_bindgen(js_name = importSyncState)]
pub fn import_sync_state(state: JsValue) -> Result<SyncState, JsValue> {
Ok(SyncState(JS(state).try_into()?))
}
// needed for compatibility with the automerge-js API
#[wasm_bindgen(js_name = exportSyncState)]
pub fn export_sync_state(state: SyncState) -> JsValue {
JS::from(state.0).into()
}
#[wasm_bindgen(js_name = encodeSyncMessage)]
pub fn encode_sync_message(message: JsValue) -> Result<Uint8Array, JsValue> {
let heads = js_get(&message, "heads")?.try_into()?;
let need = js_get(&message, "need")?.try_into()?;
let changes = js_get(&message, "changes")?.try_into()?;
let have = js_get(&message, "have")?.try_into()?;
Ok(Uint8Array::from(
am::sync::Message {
heads,
need,
have,
changes,
}
.encode()
.as_slice(),
))
}
#[wasm_bindgen(js_name = decodeSyncMessage)]
pub fn decode_sync_message(msg: Uint8Array) -> Result<JsValue, JsValue> {
let data = msg.to_vec();
let msg = am::sync::Message::decode(&data).map_err(to_js_err)?;
let heads = AR::from(msg.heads.as_slice());
let need = AR::from(msg.need.as_slice());
let changes = AR::from(msg.changes.as_slice());
let have = AR::from(msg.have.as_slice());
let obj = Object::new().into();
js_set(&obj, "heads", heads)?;
js_set(&obj, "need", need)?;
js_set(&obj, "have", have)?;
js_set(&obj, "changes", changes)?;
Ok(obj)
}
#[wasm_bindgen(js_name = encodeSyncState)]
pub fn encode_sync_state(state: SyncState) -> Result<Uint8Array, JsValue> {
let state = state.0;
Ok(Uint8Array::from(state.encode().as_slice()))
}
#[wasm_bindgen(js_name = decodeSyncState)]
pub fn decode_sync_state(data: Uint8Array) -> Result<SyncState, JsValue> {
SyncState::decode(data)
}


@@ -1,38 +0,0 @@
use std::borrow::Cow;
use automerge as am;
use js_sys::Uint8Array;
use wasm_bindgen::prelude::*;
#[derive(Debug)]
pub struct ScalarValue<'a>(pub(crate) Cow<'a, am::ScalarValue>);
impl<'a> From<ScalarValue<'a>> for JsValue {
fn from(val: ScalarValue<'a>) -> Self {
match &*val.0 {
am::ScalarValue::Bytes(v) => Uint8Array::from(v.as_slice()).into(),
am::ScalarValue::Str(v) => v.to_string().into(),
am::ScalarValue::Int(v) => (*v as f64).into(),
am::ScalarValue::Uint(v) => (*v as f64).into(),
am::ScalarValue::F64(v) => (*v).into(),
am::ScalarValue::Counter(v) => (f64::from(v)).into(),
am::ScalarValue::Timestamp(v) => js_sys::Date::new(&(*v as f64).into()).into(),
am::ScalarValue::Boolean(v) => (*v).into(),
am::ScalarValue::Null => JsValue::null(),
}
}
}
pub(crate) fn datatype(s: &am::ScalarValue) -> String {
match s {
am::ScalarValue::Bytes(_) => "bytes".into(),
am::ScalarValue::Str(_) => "str".into(),
am::ScalarValue::Int(_) => "int".into(),
am::ScalarValue::Uint(_) => "uint".into(),
am::ScalarValue::F64(_) => "f64".into(),
am::ScalarValue::Counter(_) => "counter".into(),
am::ScalarValue::Timestamp(_) => "timestamp".into(),
am::ScalarValue::Boolean(_) => "boolean".into(),
am::ScalarValue::Null => "null".into(),
}
}

File diff suppressed because it is too large


@@ -1,13 +0,0 @@
export {
loadDoc as load,
create,
encodeChange,
decodeChange,
initSyncState,
encodeSyncMessage,
decodeSyncMessage,
encodeSyncState,
decodeSyncState,
} from "./bindgen.js"
import init from "./bindgen.js"
export default init;


@@ -1,48 +0,0 @@
use automerge::{transaction::Transactable, Automerge, ROOT};
use criterion::{criterion_group, criterion_main, Criterion};
fn query_single(doc: &Automerge, rounds: u32) {
for _ in 0..rounds {
// repeatedly get the last key
doc.get(ROOT, (rounds - 1).to_string()).unwrap();
}
}
fn query_range(doc: &Automerge, rounds: u32) {
for i in 0..rounds {
doc.get(ROOT, i.to_string()).unwrap();
}
}
fn put_doc(doc: &mut Automerge, rounds: u32) {
for i in 0..rounds {
let mut tx = doc.transaction();
tx.put(ROOT, i.to_string(), "value").unwrap();
tx.commit();
}
}
fn bench(c: &mut Criterion) {
let mut group = c.benchmark_group("map");
let rounds = 10_000;
let mut doc = Automerge::new();
put_doc(&mut doc, rounds);
group.bench_function("query single", |b| b.iter(|| query_single(&doc, rounds)));
group.bench_function("query range", |b| b.iter(|| query_range(&doc, rounds)));
group.bench_function("put", |b| {
b.iter_batched(
Automerge::new,
|mut doc| put_doc(&mut doc, rounds),
criterion::BatchSize::LargeInput,
)
});
group.finish();
}
criterion_group!(benches, bench);
criterion_main!(benches);
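Assuming the usual criterion setup (a [[bench]] target with harness = false in the crate's Cargo.toml, which this diff does not show), this file runs under cargo bench and reports timings for the three registered benchmarks.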


@@ -1,482 +0,0 @@
use std::ops::RangeBounds;
use crate::exid::ExId;
use crate::op_observer::OpObserver;
use crate::transaction::{CommitOptions, Transactable};
use crate::{
sync, ApplyOptions, Keys, KeysAt, ObjType, Parents, Range, RangeAt, ScalarValue, Values,
ValuesAt,
};
use crate::{
transaction::TransactionInner, ActorId, Automerge, AutomergeError, Change, ChangeHash, Prop,
Value,
};
/// An automerge document that automatically manages transactions.
#[derive(Debug, Clone)]
pub struct AutoCommit {
doc: Automerge,
transaction: Option<TransactionInner>,
}
impl Default for AutoCommit {
fn default() -> Self {
Self::new()
}
}
impl AutoCommit {
pub fn new() -> Self {
Self {
doc: Automerge::new(),
transaction: None,
}
}
/// Get the inner document.
#[doc(hidden)]
pub fn document(&mut self) -> &Automerge {
self.ensure_transaction_closed();
&self.doc
}
pub fn with_actor(mut self, actor: ActorId) -> Self {
self.ensure_transaction_closed();
self.doc.set_actor(actor);
self
}
pub fn set_actor(&mut self, actor: ActorId) -> &mut Self {
self.ensure_transaction_closed();
self.doc.set_actor(actor);
self
}
pub fn get_actor(&self) -> &ActorId {
self.doc.get_actor()
}
fn ensure_transaction_open(&mut self) {
if self.transaction.is_none() {
self.transaction = Some(self.doc.transaction_inner());
}
}
pub fn fork(&mut self) -> Self {
self.ensure_transaction_closed();
Self {
doc: self.doc.fork(),
transaction: self.transaction.clone(),
}
}
pub fn fork_at(&mut self, heads: &[ChangeHash]) -> Result<Self, AutomergeError> {
self.ensure_transaction_closed();
Ok(Self {
doc: self.doc.fork_at(heads)?,
transaction: self.transaction.clone(),
})
}
fn ensure_transaction_closed(&mut self) {
if let Some(tx) = self.transaction.take() {
tx.commit::<()>(&mut self.doc, None, None, None);
}
}
pub fn load(data: &[u8]) -> Result<Self, AutomergeError> {
let doc = Automerge::load(data)?;
Ok(Self {
doc,
transaction: None,
})
}
pub fn load_with<Obs: OpObserver>(
data: &[u8],
options: ApplyOptions<'_, Obs>,
) -> Result<Self, AutomergeError> {
let doc = Automerge::load_with(data, options)?;
Ok(Self {
doc,
transaction: None,
})
}
pub fn load_incremental(&mut self, data: &[u8]) -> Result<usize, AutomergeError> {
self.ensure_transaction_closed();
self.doc.load_incremental(data)
}
pub fn load_incremental_with<'a, Obs: OpObserver>(
&mut self,
data: &[u8],
options: ApplyOptions<'a, Obs>,
) -> Result<usize, AutomergeError> {
self.ensure_transaction_closed();
self.doc.load_incremental_with(data, options)
}
pub fn apply_changes(&mut self, changes: Vec<Change>) -> Result<(), AutomergeError> {
self.ensure_transaction_closed();
self.doc.apply_changes(changes)
}
pub fn apply_changes_with<Obs: OpObserver>(
&mut self,
changes: Vec<Change>,
options: ApplyOptions<'_, Obs>,
) -> Result<(), AutomergeError> {
self.ensure_transaction_closed();
self.doc.apply_changes_with(changes, options)
}
/// Takes all the changes in `other` which are not in `self` and applies them
pub fn merge(&mut self, other: &mut Self) -> Result<Vec<ChangeHash>, AutomergeError> {
self.ensure_transaction_closed();
other.ensure_transaction_closed();
self.doc.merge(&mut other.doc)
}
/// Takes all the changes in `other` which are not in `self` and applies them
pub fn merge_with<'a, Obs: OpObserver>(
&mut self,
other: &mut Self,
options: ApplyOptions<'a, Obs>,
) -> Result<Vec<ChangeHash>, AutomergeError> {
self.ensure_transaction_closed();
other.ensure_transaction_closed();
self.doc.merge_with(&mut other.doc, options)
}
pub fn save(&mut self) -> Vec<u8> {
self.ensure_transaction_closed();
self.doc.save()
}
// should this return an empty vec instead of None?
pub fn save_incremental(&mut self) -> Vec<u8> {
self.ensure_transaction_closed();
self.doc.save_incremental()
}
pub fn get_missing_deps(&mut self, heads: &[ChangeHash]) -> Vec<ChangeHash> {
self.ensure_transaction_closed();
self.doc.get_missing_deps(heads)
}
pub fn get_last_local_change(&mut self) -> Option<&Change> {
self.ensure_transaction_closed();
self.doc.get_last_local_change()
}
pub fn get_changes(&mut self, have_deps: &[ChangeHash]) -> Vec<&Change> {
self.ensure_transaction_closed();
self.doc.get_changes(have_deps)
}
pub fn get_change_by_hash(&mut self, hash: &ChangeHash) -> Option<&Change> {
self.ensure_transaction_closed();
self.doc.get_change_by_hash(hash)
}
pub fn get_changes_added<'a>(&mut self, other: &'a mut Self) -> Vec<&'a Change> {
self.ensure_transaction_closed();
other.ensure_transaction_closed();
self.doc.get_changes_added(&other.doc)
}
pub fn import(&self, s: &str) -> Result<ExId, AutomergeError> {
self.doc.import(s)
}
pub fn dump(&mut self) {
self.ensure_transaction_closed();
self.doc.dump()
}
pub fn generate_sync_message(&mut self, sync_state: &mut sync::State) -> Option<sync::Message> {
self.ensure_transaction_closed();
self.doc.generate_sync_message(sync_state)
}
pub fn receive_sync_message(
&mut self,
sync_state: &mut sync::State,
message: sync::Message,
) -> Result<(), AutomergeError> {
self.ensure_transaction_closed();
self.doc.receive_sync_message(sync_state, message)
}
pub fn receive_sync_message_with<'a, Obs: OpObserver>(
&mut self,
sync_state: &mut sync::State,
message: sync::Message,
options: ApplyOptions<'a, Obs>,
) -> Result<(), AutomergeError> {
self.ensure_transaction_closed();
self.doc
.receive_sync_message_with(sync_state, message, options)
}
#[cfg(feature = "optree-visualisation")]
pub fn visualise_optree(&self) -> String {
self.doc.visualise_optree()
}
/// Get the current heads of the document.
///
/// This closes the transaction first, if one is in progress.
pub fn get_heads(&mut self) -> Vec<ChangeHash> {
self.ensure_transaction_closed();
self.doc.get_heads()
}
pub fn commit(&mut self) -> ChangeHash {
self.commit_with::<()>(CommitOptions::default())
}
/// Commit the current operations with some options.
///
/// ```
/// # use automerge::transaction::CommitOptions;
/// # use automerge::transaction::Transactable;
/// # use automerge::ROOT;
/// # use automerge::AutoCommit;
/// # use automerge::ObjType;
/// # use std::time::SystemTime;
/// let mut doc = AutoCommit::new();
/// doc.put_object(&ROOT, "todos", ObjType::List).unwrap();
/// let now = SystemTime::now().duration_since(SystemTime::UNIX_EPOCH).unwrap().as_secs() as
/// i64;
/// doc.commit_with::<()>(CommitOptions::default().with_message("Create todos list").with_time(now));
/// ```
pub fn commit_with<Obs: OpObserver>(&mut self, options: CommitOptions<'_, Obs>) -> ChangeHash {
// ensure that even no changes triggers a change
self.ensure_transaction_open();
let tx = self.transaction.take().unwrap();
tx.commit(
&mut self.doc,
options.message,
options.time,
options.op_observer,
)
}
pub fn rollback(&mut self) -> usize {
self.transaction
.take()
.map(|tx| tx.rollback(&mut self.doc))
.unwrap_or(0)
}
}
impl Transactable for AutoCommit {
fn pending_ops(&self) -> usize {
self.transaction
.as_ref()
.map(|t| t.pending_ops())
.unwrap_or(0)
}
// KeysAt::()
// LenAt::()
// PropAt::()
// NthAt::()
fn keys<O: AsRef<ExId>>(&self, obj: O) -> Keys<'_, '_> {
self.doc.keys(obj)
}
fn keys_at<O: AsRef<ExId>>(&self, obj: O, heads: &[ChangeHash]) -> KeysAt<'_, '_> {
self.doc.keys_at(obj, heads)
}
fn range<O: AsRef<ExId>, R: RangeBounds<String>>(&self, obj: O, range: R) -> Range<'_, R> {
self.doc.range(obj, range)
}
fn range_at<O: AsRef<ExId>, R: RangeBounds<String>>(
&self,
obj: O,
range: R,
heads: &[ChangeHash],
) -> RangeAt<'_, R> {
self.doc.range_at(obj, range, heads)
}
fn values<O: AsRef<ExId>>(&self, obj: O) -> Values<'_> {
self.doc.values(obj)
}
fn values_at<O: AsRef<ExId>>(&self, obj: O, heads: &[ChangeHash]) -> ValuesAt<'_> {
self.doc.values_at(obj, heads)
}
fn length<O: AsRef<ExId>>(&self, obj: O) -> usize {
self.doc.length(obj)
}
fn length_at<O: AsRef<ExId>>(&self, obj: O, heads: &[ChangeHash]) -> usize {
self.doc.length_at(obj, heads)
}
fn object_type<O: AsRef<ExId>>(&self, obj: O) -> Option<ObjType> {
self.doc.object_type(obj)
}
// set(obj, prop, value) - value can be scalar or objtype
// del(obj, prop)
// inc(obj, prop, value)
// insert(obj, index, value)
/// Set the value of property `P` to value `V` in object `obj`.
///
/// # Returns
///
/// The opid of the operation which was created, or None if this operation doesn't change the
/// document or create a new object.
///
/// # Errors
///
/// This will return an error if
/// - The object does not exist
/// - The key is the wrong type for the object
/// - The key does not exist in the object
fn put<O: AsRef<ExId>, P: Into<Prop>, V: Into<ScalarValue>>(
&mut self,
obj: O,
prop: P,
value: V,
) -> Result<(), AutomergeError> {
self.ensure_transaction_open();
let tx = self.transaction.as_mut().unwrap();
tx.put(&mut self.doc, obj.as_ref(), prop, value)
}
fn put_object<O: AsRef<ExId>, P: Into<Prop>>(
&mut self,
obj: O,
prop: P,
value: ObjType,
) -> Result<ExId, AutomergeError> {
self.ensure_transaction_open();
let tx = self.transaction.as_mut().unwrap();
tx.put_object(&mut self.doc, obj.as_ref(), prop, value)
}
fn insert<O: AsRef<ExId>, V: Into<ScalarValue>>(
&mut self,
obj: O,
index: usize,
value: V,
) -> Result<(), AutomergeError> {
self.ensure_transaction_open();
let tx = self.transaction.as_mut().unwrap();
tx.insert(&mut self.doc, obj.as_ref(), index, value)
}
fn insert_object<O: AsRef<ExId>>(
&mut self,
obj: O,
index: usize,
value: ObjType,
) -> Result<ExId, AutomergeError> {
self.ensure_transaction_open();
let tx = self.transaction.as_mut().unwrap();
tx.insert_object(&mut self.doc, obj.as_ref(), index, value)
}
fn increment<O: AsRef<ExId>, P: Into<Prop>>(
&mut self,
obj: O,
prop: P,
value: i64,
) -> Result<(), AutomergeError> {
self.ensure_transaction_open();
let tx = self.transaction.as_mut().unwrap();
tx.increment(&mut self.doc, obj.as_ref(), prop, value)
}
fn delete<O: AsRef<ExId>, P: Into<Prop>>(
&mut self,
obj: O,
prop: P,
) -> Result<(), AutomergeError> {
self.ensure_transaction_open();
let tx = self.transaction.as_mut().unwrap();
tx.delete(&mut self.doc, obj.as_ref(), prop)
}
/// Splice new elements into the given sequence. Returns a vector of the OpIds used to insert
/// the new elements
fn splice<O: AsRef<ExId>, V: IntoIterator<Item = ScalarValue>>(
&mut self,
obj: O,
pos: usize,
del: usize,
vals: V,
) -> Result<(), AutomergeError> {
self.ensure_transaction_open();
let tx = self.transaction.as_mut().unwrap();
tx.splice(&mut self.doc, obj.as_ref(), pos, del, vals)
}
fn text<O: AsRef<ExId>>(&self, obj: O) -> Result<String, AutomergeError> {
self.doc.text(obj)
}
fn text_at<O: AsRef<ExId>>(
&self,
obj: O,
heads: &[ChangeHash],
) -> Result<String, AutomergeError> {
self.doc.text_at(obj, heads)
}
// TODO - I need to return these OpId's here **only** to get
// the legacy conflicts format of { [opid]: value }
// Something better?
fn get<O: AsRef<ExId>, P: Into<Prop>>(
&self,
obj: O,
prop: P,
) -> Result<Option<(Value<'_>, ExId)>, AutomergeError> {
self.doc.get(obj, prop)
}
fn get_at<O: AsRef<ExId>, P: Into<Prop>>(
&self,
obj: O,
prop: P,
heads: &[ChangeHash],
) -> Result<Option<(Value<'_>, ExId)>, AutomergeError> {
self.doc.get_at(obj, prop, heads)
}
fn get_all<O: AsRef<ExId>, P: Into<Prop>>(
&self,
obj: O,
prop: P,
) -> Result<Vec<(Value<'_>, ExId)>, AutomergeError> {
self.doc.get_all(obj, prop)
}
fn get_all_at<O: AsRef<ExId>, P: Into<Prop>>(
&self,
obj: O,
prop: P,
heads: &[ChangeHash],
) -> Result<Vec<(Value<'_>, ExId)>, AutomergeError> {
self.doc.get_all_at(obj, prop, heads)
}
fn parent_object<O: AsRef<ExId>>(&self, obj: O) -> Option<(ExId, Prop)> {
self.doc.parent_object(obj)
}
fn parents(&self, obj: ExId) -> Parents<'_> {
self.doc.parents(obj)
}
}

File diff suppressed because it is too large Load diff

View file

@ -1,997 +0,0 @@
use crate::columnar::{
ChangeEncoder, ChangeIterator, ColumnEncoder, DepsIterator, DocChange, DocOp, DocOpEncoder,
DocOpIterator, OperationIterator, COLUMN_TYPE_DEFLATE,
};
use crate::decoding;
use crate::decoding::{Decodable, InvalidChangeError};
use crate::encoding::{Encodable, DEFLATE_MIN_SIZE};
use crate::error::AutomergeError;
use crate::indexed_cache::IndexedCache;
use crate::legacy as amp;
use crate::transaction::TransactionInner;
use crate::types;
use crate::types::{ActorId, ElemId, Key, ObjId, Op, OpId, OpType};
use core::ops::Range;
use flate2::{
bufread::{DeflateDecoder, DeflateEncoder},
Compression,
};
use itertools::Itertools;
use sha2::Digest;
use sha2::Sha256;
use std::collections::{HashMap, HashSet};
use std::convert::TryInto;
use std::fmt::Debug;
use std::io::{Read, Write};
use std::num::NonZeroU64;
use tracing::instrument;
const MAGIC_BYTES: [u8; 4] = [0x85, 0x6f, 0x4a, 0x83];
const PREAMBLE_BYTES: usize = 8;
const HEADER_BYTES: usize = PREAMBLE_BYTES + 1;
const HASH_BYTES: usize = 32;
const BLOCK_TYPE_DOC: u8 = 0;
const BLOCK_TYPE_CHANGE: u8 = 1;
const BLOCK_TYPE_DEFLATE: u8 = 2;
const CHUNK_START: usize = 8;
const HASH_RANGE: Range<usize> = 4..8;
pub(crate) fn encode_document<'a, 'b>(
heads: Vec<amp::ChangeHash>,
changes: impl Iterator<Item = &'a Change>,
doc_ops: impl Iterator<Item = (&'b ObjId, &'b Op)>,
actors_index: &IndexedCache<ActorId>,
props: &'a [String],
) -> Vec<u8> {
let mut bytes: Vec<u8> = Vec::new();
let actors_map = actors_index.encode_index();
let actors = actors_index.sorted();
/*
// this assumes that all actor_ids referenced are seen in changes.actor_id which is true
// so long as we have a full history
let mut actors: Vec<_> = changes
.iter()
.map(|c| &c.actor)
.unique()
.sorted()
.cloned()
.collect();
*/
let (change_bytes, change_info) = ChangeEncoder::encode_changes(changes, &actors);
//let doc_ops = group_doc_ops(changes, &actors);
let (ops_bytes, ops_info) = DocOpEncoder::encode_doc_ops(doc_ops, &actors_map, props);
bytes.extend(MAGIC_BYTES);
bytes.extend([0, 0, 0, 0]); // we dont know the hash yet so fill in a fake
bytes.push(BLOCK_TYPE_DOC);
let mut chunk = Vec::new();
actors.len().encode_vec(&mut chunk);
for a in actors.into_iter() {
a.to_bytes().encode_vec(&mut chunk);
}
heads.len().encode_vec(&mut chunk);
for head in heads.iter() {
chunk.write_all(&head.0).unwrap();
}
chunk.extend(change_info);
chunk.extend(ops_info);
chunk.extend(change_bytes);
chunk.extend(ops_bytes);
leb128::write::unsigned(&mut bytes, chunk.len() as u64).unwrap();
bytes.extend(&chunk);
let hash_result = Sha256::digest(&bytes[CHUNK_START..bytes.len()]);
bytes.splice(HASH_RANGE, hash_result[0..4].iter().copied());
bytes
}
/// When encoding a change we take all the actor IDs referenced by a change and place them in an
/// array. The array has the actor who authored the change as the first element and all remaining
/// actors (i.e. those referenced in object IDs in the target of an operation or in the `pred` of
/// an operation) lexicographically ordered following the change author.
fn actor_ids_in_change(change: &amp::Change) -> Vec<amp::ActorId> {
let mut other_ids: Vec<&amp::ActorId> = change
.operations
.iter()
.flat_map(opids_in_operation)
.filter(|a| *a != &change.actor_id)
.unique()
.collect();
other_ids.sort();
// Now prepend the change actor
std::iter::once(&change.actor_id)
.chain(other_ids.into_iter())
.cloned()
.collect()
}
fn opids_in_operation(op: &amp::Op) -> impl Iterator<Item = &amp::ActorId> {
let obj_actor_id = match &op.obj {
amp::ObjectId::Root => None,
amp::ObjectId::Id(opid) => Some(opid.actor()),
};
let pred_ids = op.pred.iter().map(amp::OpId::actor);
let key_actor = match &op.key {
amp::Key::Seq(amp::ElementId::Id(i)) => Some(i.actor()),
_ => None,
};
obj_actor_id
.into_iter()
.chain(key_actor.into_iter())
.chain(pred_ids)
}
impl From<amp::Change> for Change {
fn from(value: amp::Change) -> Self {
encode(&value)
}
}
impl From<&amp::Change> for Change {
fn from(value: &amp::Change) -> Self {
encode(value)
}
}
fn encode(change: &amp::Change) -> Change {
let mut deps = change.deps.clone();
deps.sort_unstable();
let mut chunk = encode_chunk(change, &deps);
let mut bytes = Vec::with_capacity(MAGIC_BYTES.len() + 4 + chunk.bytes.len());
bytes.extend(&MAGIC_BYTES);
bytes.extend(vec![0, 0, 0, 0]); // we dont know the hash yet so fill in a fake
bytes.push(BLOCK_TYPE_CHANGE);
leb128::write::unsigned(&mut bytes, chunk.bytes.len() as u64).unwrap();
let body_start = bytes.len();
increment_range(&mut chunk.body, bytes.len());
increment_range(&mut chunk.message, bytes.len());
increment_range(&mut chunk.extra_bytes, bytes.len());
increment_range_map(&mut chunk.ops, bytes.len());
bytes.extend(&chunk.bytes);
let hash_result = Sha256::digest(&bytes[CHUNK_START..bytes.len()]);
let hash: amp::ChangeHash = hash_result[..].try_into().unwrap();
bytes.splice(HASH_RANGE, hash_result[0..4].iter().copied());
// any time I make changes to the encoder decoder its a good idea
// to run it through a round trip to detect errors the tests might not
// catch
// let c0 = Change::from_bytes(bytes.clone()).unwrap();
// std::assert_eq!(c1, c0);
// perhaps we should add something like this to the test suite
let bytes = ChangeBytes::Uncompressed(bytes);
Change {
bytes,
body_start,
hash,
seq: change.seq,
start_op: change.start_op,
time: change.time,
actors: chunk.actors,
message: chunk.message,
deps,
ops: chunk.ops,
extra_bytes: chunk.extra_bytes,
}
}
struct ChunkIntermediate {
bytes: Vec<u8>,
body: Range<usize>,
actors: Vec<ActorId>,
message: Range<usize>,
ops: HashMap<u32, Range<usize>>,
extra_bytes: Range<usize>,
}
fn encode_chunk(change: &amp::Change, deps: &[amp::ChangeHash]) -> ChunkIntermediate {
let mut bytes = Vec::new();
// All these unwraps are okay because we're writing to an in memory buffer so io erros should
// not happen
// encode deps
deps.len().encode(&mut bytes).unwrap();
for hash in deps.iter() {
bytes.write_all(&hash.0).unwrap();
}
let actors = actor_ids_in_change(change);
change.actor_id.to_bytes().encode(&mut bytes).unwrap();
// encode seq, start_op, time, message
change.seq.encode(&mut bytes).unwrap();
change.start_op.encode(&mut bytes).unwrap();
change.time.encode(&mut bytes).unwrap();
let message = bytes.len() + 1;
change.message.encode(&mut bytes).unwrap();
let message = message..bytes.len();
// encode ops into a side buffer - collect all other actors
let (ops_buf, mut ops) = ColumnEncoder::encode_ops(&change.operations, &actors);
// encode all other actors
actors[1..].encode(&mut bytes).unwrap();
// now we know how many bytes ops are offset by so we can adjust the ranges
increment_range_map(&mut ops, bytes.len());
// write out the ops
bytes.write_all(&ops_buf).unwrap();
// write out the extra bytes
let extra_bytes = bytes.len()..(bytes.len() + change.extra_bytes.len());
bytes.write_all(&change.extra_bytes).unwrap();
let body = 0..bytes.len();
ChunkIntermediate {
bytes,
body,
actors,
message,
ops,
extra_bytes,
}
}
#[derive(PartialEq, Debug, Clone)]
enum ChangeBytes {
Compressed {
compressed: Vec<u8>,
uncompressed: Vec<u8>,
},
Uncompressed(Vec<u8>),
}
impl ChangeBytes {
fn uncompressed(&self) -> &[u8] {
match self {
ChangeBytes::Compressed { uncompressed, .. } => &uncompressed[..],
ChangeBytes::Uncompressed(b) => &b[..],
}
}
fn compress(&mut self, body_start: usize) {
match self {
ChangeBytes::Compressed { .. } => {}
ChangeBytes::Uncompressed(uncompressed) => {
if uncompressed.len() > DEFLATE_MIN_SIZE {
let mut result = Vec::with_capacity(uncompressed.len());
result.extend(&uncompressed[0..8]);
result.push(BLOCK_TYPE_DEFLATE);
let mut deflater =
DeflateEncoder::new(&uncompressed[body_start..], Compression::default());
let mut deflated = Vec::new();
let deflated_len = deflater.read_to_end(&mut deflated).unwrap();
leb128::write::unsigned(&mut result, deflated_len as u64).unwrap();
result.extend(&deflated[..]);
*self = ChangeBytes::Compressed {
compressed: result,
uncompressed: std::mem::take(uncompressed),
}
}
}
}
}
fn raw(&self) -> &[u8] {
match self {
ChangeBytes::Compressed { compressed, .. } => &compressed[..],
ChangeBytes::Uncompressed(b) => &b[..],
}
}
}
/// A change represents a group of operations performed by an actor.
#[derive(PartialEq, Debug, Clone)]
pub struct Change {
bytes: ChangeBytes,
body_start: usize,
/// Hash of this change.
pub hash: amp::ChangeHash,
/// The index of this change in the changes from this actor.
pub seq: u64,
/// The start operation index. Starts at 1.
pub start_op: NonZeroU64,
/// The time that this change was committed.
pub time: i64,
/// The message of this change.
message: Range<usize>,
/// The actors referenced in this change.
actors: Vec<ActorId>,
/// The dependencies of this change.
pub deps: Vec<amp::ChangeHash>,
ops: HashMap<u32, Range<usize>>,
extra_bytes: Range<usize>,
}
impl Change {
pub fn actor_id(&self) -> &ActorId {
&self.actors[0]
}
#[instrument(level = "debug", skip(bytes))]
pub fn load_document(bytes: &[u8]) -> Result<Vec<Change>, AutomergeError> {
load_blocks(bytes)
}
pub fn from_bytes(bytes: Vec<u8>) -> Result<Change, decoding::Error> {
Change::try_from(bytes)
}
pub fn is_empty(&self) -> bool {
self.len() == 0
}
pub fn len(&self) -> usize {
// TODO - this could be a lot more efficient
self.iter_ops().count()
}
pub fn max_op(&self) -> u64 {
self.start_op.get() + (self.len() as u64) - 1
}
pub fn message(&self) -> Option<String> {
let m = &self.bytes.uncompressed()[self.message.clone()];
if m.is_empty() {
None
} else {
std::str::from_utf8(m).map(ToString::to_string).ok()
}
}
pub fn decode(&self) -> amp::Change {
amp::Change {
start_op: self.start_op,
seq: self.seq,
time: self.time,
hash: Some(self.hash),
message: self.message(),
actor_id: self.actors[0].clone(),
deps: self.deps.clone(),
operations: self
.iter_ops()
.map(|op| amp::Op {
action: op.action.clone(),
obj: op.obj.clone(),
key: op.key.clone(),
pred: op.pred.clone(),
insert: op.insert,
})
.collect(),
extra_bytes: self.extra_bytes().into(),
}
}
pub(crate) fn iter_ops(&self) -> OperationIterator<'_> {
OperationIterator::new(self.bytes.uncompressed(), self.actors.as_slice(), &self.ops)
}
pub fn extra_bytes(&self) -> &[u8] {
&self.bytes.uncompressed()[self.extra_bytes.clone()]
}
pub fn compress(&mut self) {
self.bytes.compress(self.body_start);
}
pub fn raw_bytes(&self) -> &[u8] {
self.bytes.raw()
}
}
fn read_leb128(bytes: &mut &[u8]) -> Result<(usize, usize), decoding::Error> {
let mut buf = &bytes[..];
let val = leb128::read::unsigned(&mut buf)? as usize;
let leb128_bytes = bytes.len() - buf.len();
Ok((val, leb128_bytes))
}
fn read_slice<T: Decodable + Debug>(
bytes: &[u8],
cursor: &mut Range<usize>,
) -> Result<T, decoding::Error> {
let mut view = &bytes[cursor.clone()];
let init_len = view.len();
let val = T::decode::<&[u8]>(&mut view).ok_or(decoding::Error::NoDecodedValue);
let bytes_read = init_len - view.len();
*cursor = (cursor.start + bytes_read)..cursor.end;
val
}
fn slice_bytes(bytes: &[u8], cursor: &mut Range<usize>) -> Result<Range<usize>, decoding::Error> {
let (val, len) = read_leb128(&mut &bytes[cursor.clone()])?;
let start = cursor.start + len;
let end = start + val;
*cursor = end..cursor.end;
Ok(start..end)
}
fn increment_range(range: &mut Range<usize>, len: usize) {
range.end += len;
range.start += len;
}
fn increment_range_map(ranges: &mut HashMap<u32, Range<usize>>, len: usize) {
for range in ranges.values_mut() {
increment_range(range, len);
}
}
fn export_objid(id: &ObjId, actors: &IndexedCache<ActorId>) -> amp::ObjectId {
if id == &ObjId::root() {
amp::ObjectId::Root
} else {
export_opid(&id.0, actors).into()
}
}
fn export_elemid(id: &ElemId, actors: &IndexedCache<ActorId>) -> amp::ElementId {
if id == &types::HEAD {
amp::ElementId::Head
} else {
export_opid(&id.0, actors).into()
}
}
fn export_opid(id: &OpId, actors: &IndexedCache<ActorId>) -> amp::OpId {
amp::OpId(id.0, actors.get(id.1).clone())
}
fn export_op(
op: &Op,
obj: &ObjId,
actors: &IndexedCache<ActorId>,
props: &IndexedCache<String>,
) -> amp::Op {
let action = op.action.clone();
let key = match &op.key {
Key::Map(n) => amp::Key::Map(props.get(*n).clone().into()),
Key::Seq(id) => amp::Key::Seq(export_elemid(id, actors)),
};
let obj = export_objid(obj, actors);
let pred = op.pred.iter().map(|id| export_opid(id, actors)).collect();
amp::Op {
action,
obj,
insert: op.insert,
pred,
key,
}
}
pub(crate) fn export_change(
change: TransactionInner,
actors: &IndexedCache<ActorId>,
props: &IndexedCache<String>,
) -> Change {
amp::Change {
actor_id: actors.get(change.actor).clone(),
seq: change.seq,
start_op: change.start_op,
time: change.time,
deps: change.deps,
message: change.message,
hash: change.hash,
operations: change
.operations
.iter()
.map(|(obj, _, op)| export_op(op, obj, actors, props))
.collect(),
extra_bytes: change.extra_bytes,
}
.into()
}
impl TryFrom<Vec<u8>> for Change {
type Error = decoding::Error;
fn try_from(bytes: Vec<u8>) -> Result<Self, Self::Error> {
let (chunktype, body) = decode_header_without_hash(&bytes)?;
let bytes = if chunktype == BLOCK_TYPE_DEFLATE {
decompress_chunk(0..PREAMBLE_BYTES, body, bytes)?
} else {
ChangeBytes::Uncompressed(bytes)
};
let (chunktype, hash, body) = decode_header(bytes.uncompressed())?;
if chunktype != BLOCK_TYPE_CHANGE {
return Err(decoding::Error::WrongType {
expected_one_of: vec![BLOCK_TYPE_CHANGE],
found: chunktype,
});
}
let body_start = body.start;
let mut cursor = body;
let deps = decode_hashes(bytes.uncompressed(), &mut cursor)?;
let actor =
ActorId::from(&bytes.uncompressed()[slice_bytes(bytes.uncompressed(), &mut cursor)?]);
let seq = read_slice(bytes.uncompressed(), &mut cursor)?;
let start_op = read_slice(bytes.uncompressed(), &mut cursor)?;
let time = read_slice(bytes.uncompressed(), &mut cursor)?;
let message = slice_bytes(bytes.uncompressed(), &mut cursor)?;
let actors = decode_actors(bytes.uncompressed(), &mut cursor, Some(actor))?;
let ops_info = decode_column_info(bytes.uncompressed(), &mut cursor, false)?;
let ops = decode_columns(&mut cursor, &ops_info);
Ok(Change {
bytes,
body_start,
hash,
seq,
start_op,
time,
actors,
message,
deps,
ops,
extra_bytes: cursor,
})
}
}
fn decompress_chunk(
preamble: Range<usize>,
body: Range<usize>,
compressed: Vec<u8>,
) -> Result<ChangeBytes, decoding::Error> {
let mut decoder = DeflateDecoder::new(&compressed[body]);
let mut decompressed = Vec::new();
decoder.read_to_end(&mut decompressed)?;
let mut result = Vec::with_capacity(decompressed.len() + preamble.len());
result.extend(&compressed[preamble]);
result.push(BLOCK_TYPE_CHANGE);
leb128::write::unsigned::<Vec<u8>>(&mut result, decompressed.len() as u64).unwrap();
result.extend(decompressed);
Ok(ChangeBytes::Compressed {
uncompressed: result,
compressed,
})
}
fn decode_hashes(
bytes: &[u8],
cursor: &mut Range<usize>,
) -> Result<Vec<amp::ChangeHash>, decoding::Error> {
let num_hashes = read_slice(bytes, cursor)?;
let mut hashes = Vec::with_capacity(num_hashes);
for _ in 0..num_hashes {
let hash = cursor.start..(cursor.start + HASH_BYTES);
*cursor = hash.end..cursor.end;
hashes.push(
bytes
.get(hash)
.ok_or(decoding::Error::NotEnoughBytes)?
.try_into()
.map_err(InvalidChangeError::from)?,
);
}
Ok(hashes)
}
fn decode_actors(
bytes: &[u8],
cursor: &mut Range<usize>,
first: Option<ActorId>,
) -> Result<Vec<ActorId>, decoding::Error> {
let num_actors: usize = read_slice(bytes, cursor)?;
let mut actors = Vec::with_capacity(num_actors + 1);
if let Some(actor) = first {
actors.push(actor);
}
for _ in 0..num_actors {
actors.push(ActorId::from(
bytes
.get(slice_bytes(bytes, cursor)?)
.ok_or(decoding::Error::NotEnoughBytes)?,
));
}
Ok(actors)
}
fn decode_column_info(
bytes: &[u8],
cursor: &mut Range<usize>,
allow_compressed_column: bool,
) -> Result<Vec<(u32, usize)>, decoding::Error> {
let num_columns = read_slice(bytes, cursor)?;
let mut columns = Vec::with_capacity(num_columns);
let mut last_id = 0;
for _ in 0..num_columns {
let id: u32 = read_slice(bytes, cursor)?;
if (id & !COLUMN_TYPE_DEFLATE) <= (last_id & !COLUMN_TYPE_DEFLATE) {
return Err(decoding::Error::ColumnsNotInAscendingOrder {
last: last_id,
found: id,
});
}
if id & COLUMN_TYPE_DEFLATE != 0 && !allow_compressed_column {
return Err(decoding::Error::ChangeContainedCompressedColumns);
}
last_id = id;
let length = read_slice(bytes, cursor)?;
columns.push((id, length));
}
Ok(columns)
}
fn decode_columns(
cursor: &mut Range<usize>,
columns: &[(u32, usize)],
) -> HashMap<u32, Range<usize>> {
let mut ops = HashMap::new();
for (id, length) in columns {
let start = cursor.start;
let end = start + length;
*cursor = end..cursor.end;
ops.insert(*id, start..end);
}
ops
}
fn decode_header(bytes: &[u8]) -> Result<(u8, amp::ChangeHash, Range<usize>), decoding::Error> {
let (chunktype, body) = decode_header_without_hash(bytes)?;
let calculated_hash = Sha256::digest(&bytes[PREAMBLE_BYTES..]);
let checksum = &bytes[4..8];
if checksum != &calculated_hash[0..4] {
return Err(decoding::Error::InvalidChecksum {
found: checksum.try_into().unwrap(),
calculated: calculated_hash[0..4].try_into().unwrap(),
});
}
let hash = calculated_hash[..]
.try_into()
.map_err(InvalidChangeError::from)?;
Ok((chunktype, hash, body))
}
fn decode_header_without_hash(bytes: &[u8]) -> Result<(u8, Range<usize>), decoding::Error> {
if bytes.len() <= HEADER_BYTES {
return Err(decoding::Error::NotEnoughBytes);
}
if bytes[0..4] != MAGIC_BYTES {
return Err(decoding::Error::WrongMagicBytes);
}
let (val, len) = read_leb128(&mut &bytes[HEADER_BYTES..])?;
let body = (HEADER_BYTES + len)..(HEADER_BYTES + len + val);
if bytes.len() != body.end {
return Err(decoding::Error::WrongByteLength {
expected: body.end,
found: bytes.len(),
});
}
let chunktype = bytes[PREAMBLE_BYTES];
Ok((chunktype, body))
}
fn load_blocks(bytes: &[u8]) -> Result<Vec<Change>, AutomergeError> {
let mut changes = Vec::new();
for slice in split_blocks(bytes)? {
decode_block(slice, &mut changes)?;
}
Ok(changes)
}
fn split_blocks(bytes: &[u8]) -> Result<Vec<&[u8]>, decoding::Error> {
// split off all valid blocks - ignore the rest if its corrupted or truncated
let mut blocks = Vec::new();
let mut cursor = bytes;
while let Some(block) = pop_block(cursor)? {
blocks.push(&cursor[block.clone()]);
if cursor.len() <= block.end {
break;
}
cursor = &cursor[block.end..];
}
Ok(blocks)
}
fn pop_block(bytes: &[u8]) -> Result<Option<Range<usize>>, decoding::Error> {
if bytes.len() < 4 || bytes[0..4] != MAGIC_BYTES {
// not reporting error here - file got corrupted?
return Ok(None);
}
let (val, len) = read_leb128(
&mut bytes
.get(HEADER_BYTES..)
.ok_or(decoding::Error::NotEnoughBytes)?,
)?;
// val is arbitrary so it could overflow
let end = (HEADER_BYTES + len)
.checked_add(val)
.ok_or(decoding::Error::Overflow)?;
if end > bytes.len() {
// not reporting error here - file got truncated?
return Ok(None);
}
Ok(Some(0..end))
}
fn decode_block(bytes: &[u8], changes: &mut Vec<Change>) -> Result<(), decoding::Error> {
match bytes[PREAMBLE_BYTES] {
BLOCK_TYPE_DOC => {
changes.extend(decode_document(bytes)?);
Ok(())
}
BLOCK_TYPE_CHANGE | BLOCK_TYPE_DEFLATE => {
changes.push(Change::try_from(bytes.to_vec())?);
Ok(())
}
found => Err(decoding::Error::WrongType {
expected_one_of: vec![BLOCK_TYPE_DOC, BLOCK_TYPE_CHANGE, BLOCK_TYPE_DEFLATE],
found,
}),
}
}
fn decode_document(bytes: &[u8]) -> Result<Vec<Change>, decoding::Error> {
let (chunktype, _hash, mut cursor) = decode_header(bytes)?;
// chunktype == 0 is a document, chunktype = 1 is a change
if chunktype > 0 {
return Err(decoding::Error::WrongType {
expected_one_of: vec![0],
found: chunktype,
});
}
let actors = decode_actors(bytes, &mut cursor, None)?;
let heads = decode_hashes(bytes, &mut cursor)?;
let changes_info = decode_column_info(bytes, &mut cursor, true)?;
let ops_info = decode_column_info(bytes, &mut cursor, true)?;
let changes_data = decode_columns(&mut cursor, &changes_info);
let mut doc_changes = ChangeIterator::new(bytes, &changes_data).collect::<Vec<_>>();
let doc_changes_deps = DepsIterator::new(bytes, &changes_data);
let doc_changes_len = doc_changes.len();
let ops_data = decode_columns(&mut cursor, &ops_info);
let doc_ops: Vec<_> = DocOpIterator::new(bytes, &actors, &ops_data).collect();
group_doc_change_and_doc_ops(&mut doc_changes, doc_ops, &actors)?;
let uncompressed_changes =
doc_changes_to_uncompressed_changes(doc_changes.into_iter(), &actors);
let changes = compress_doc_changes(uncompressed_changes, doc_changes_deps, doc_changes_len)
.ok_or(decoding::Error::NoDocChanges)?;
let mut calculated_heads = HashSet::new();
for change in &changes {
for dep in &change.deps {
calculated_heads.remove(dep);
}
calculated_heads.insert(change.hash);
}
if calculated_heads != heads.into_iter().collect::<HashSet<_>>() {
return Err(decoding::Error::MismatchedHeads);
}
Ok(changes)
}
fn compress_doc_changes(
uncompressed_changes: impl Iterator<Item = amp::Change>,
doc_changes_deps: impl Iterator<Item = Vec<usize>>,
num_changes: usize,
) -> Option<Vec<Change>> {
let mut changes: Vec<Change> = Vec::with_capacity(num_changes);
// fill out the hashes as we go
for (deps, mut uncompressed_change) in doc_changes_deps.zip_eq(uncompressed_changes) {
for idx in deps {
uncompressed_change.deps.push(changes.get(idx)?.hash);
}
changes.push(uncompressed_change.into());
}
Some(changes)
}
fn group_doc_change_and_doc_ops(
changes: &mut [DocChange],
mut ops: Vec<DocOp>,
actors: &[ActorId],
) -> Result<(), decoding::Error> {
let mut changes_by_actor: HashMap<usize, Vec<usize>> = HashMap::new();
for (i, change) in changes.iter().enumerate() {
let actor_change_index = changes_by_actor.entry(change.actor).or_default();
if change.seq != (actor_change_index.len() + 1) as u64 {
return Err(decoding::Error::ChangeDecompressFailed(
"Doc Seq Invalid".into(),
));
}
if change.actor >= actors.len() {
return Err(decoding::Error::ChangeDecompressFailed(
"Doc Actor Invalid".into(),
));
}
actor_change_index.push(i);
}
let mut op_by_id = HashMap::new();
ops.iter().enumerate().for_each(|(i, op)| {
op_by_id.insert((op.ctr, op.actor), i);
});
for i in 0..ops.len() {
let op = ops[i].clone(); // this is safe - avoid borrow checker issues
//let id = (op.ctr, op.actor);
//op_by_id.insert(id, i);
for succ in &op.succ {
if let Some(index) = op_by_id.get(succ) {
ops[*index].pred.push((op.ctr, op.actor));
} else {
let key = if op.insert {
amp::OpId(op.ctr, actors[op.actor].clone()).into()
} else {
op.key.clone()
};
let del = DocOp {
actor: succ.1,
ctr: succ.0,
action: OpType::Delete,
obj: op.obj.clone(),
key,
succ: Vec::new(),
pred: vec![(op.ctr, op.actor)],
insert: false,
};
op_by_id.insert(*succ, ops.len());
ops.push(del);
}
}
}
for op in ops {
// binary search for our change
let actor_change_index = changes_by_actor.entry(op.actor).or_default();
let mut left = 0;
let mut right = actor_change_index.len();
while left < right {
let seq = (left + right) / 2;
if changes[actor_change_index[seq]].max_op < op.ctr {
left = seq + 1;
} else {
right = seq;
}
}
if left >= actor_change_index.len() {
return Err(decoding::Error::ChangeDecompressFailed(
"Doc MaxOp Invalid".into(),
));
}
changes[actor_change_index[left]].ops.push(op);
}
changes
.iter_mut()
.for_each(|change| change.ops.sort_unstable());
Ok(())
}
fn doc_changes_to_uncompressed_changes<'a>(
changes: impl Iterator<Item = DocChange> + 'a,
actors: &'a [ActorId],
) -> impl Iterator<Item = amp::Change> + 'a {
changes.map(move |change| amp::Change {
// we've already confirmed that all change.actor's are valid
actor_id: actors[change.actor].clone(),
seq: change.seq,
time: change.time,
// SAFETY: this unwrap is safe as we always add 1
start_op: NonZeroU64::new(change.max_op - change.ops.len() as u64 + 1).unwrap(),
hash: None,
message: change.message,
operations: change
.ops
.into_iter()
.map(|op| amp::Op {
action: op.action.clone(),
insert: op.insert,
key: op.key,
obj: op.obj,
// we've already confirmed that all op.actor's are valid
pred: pred_into(op.pred.into_iter(), actors),
})
.collect(),
deps: Vec::new(),
extra_bytes: change.extra_bytes,
})
}
fn pred_into(
pred: impl Iterator<Item = (u64, usize)>,
actors: &[ActorId],
) -> amp::SortedVec<amp::OpId> {
pred.map(|(ctr, actor)| amp::OpId(ctr, actors[actor].clone()))
.collect()
}
#[cfg(test)]
mod tests {
use crate::legacy as amp;
#[test]
fn mismatched_head_repro_one() {
let op_json = serde_json::json!({
"ops": [
{
"action": "del",
"obj": "1@1485eebc689d47efbf8b892e81653eb3",
"elemId": "3164@0dcdf83d9594477199f80ccd25e87053",
"pred": [
"3164@0dcdf83d9594477199f80ccd25e87053"
],
"insert": false
},
],
"actor": "e63cf5ed1f0a4fb28b2c5bc6793b9272",
"hash": "e7fd5c02c8fdd2cdc3071ce898a5839bf36229678af3b940f347da541d147ae2",
"seq": 1,
"startOp": 3179,
"time": 1634146652,
"message": null,
"deps": [
"2603cded00f91e525507fc9e030e77f9253b239d90264ee343753efa99e3fec1"
]
});
let change: amp::Change = serde_json::from_value(op_json).unwrap();
let expected_hash: super::amp::ChangeHash =
"4dff4665d658a28bb6dcace8764eb35fa8e48e0a255e70b6b8cbf8e8456e5c50"
.parse()
.unwrap();
let encoded: super::Change = change.into();
assert_eq!(encoded.hash, expected_hash);
}
}

View file

@ -1,52 +0,0 @@
use crate::types::OpId;
use fxhash::FxBuildHasher;
use std::cmp;
use std::collections::HashMap;
#[derive(Debug, Clone, PartialEq)]
pub(crate) struct Clock(HashMap<usize, u64, FxBuildHasher>);
impl Clock {
pub(crate) fn new() -> Self {
Clock(Default::default())
}
pub(crate) fn include(&mut self, key: usize, n: u64) {
self.0
.entry(key)
.and_modify(|m| *m = cmp::max(n, *m))
.or_insert(n);
}
pub(crate) fn covers(&self, id: &OpId) -> bool {
if let Some(val) = self.0.get(&id.1) {
val >= &id.0
} else {
false
}
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn covers() {
let mut clock = Clock::new();
clock.include(1, 20);
clock.include(2, 10);
assert!(clock.covers(&OpId(10, 1)));
assert!(clock.covers(&OpId(20, 1)));
assert!(!clock.covers(&OpId(30, 1)));
assert!(clock.covers(&OpId(5, 2)));
assert!(clock.covers(&OpId(10, 2)));
assert!(!clock.covers(&OpId(15, 2)));
assert!(!clock.covers(&OpId(1, 3)));
assert!(!clock.covers(&OpId(100, 3)));
}
}

File diff suppressed because it is too large Load diff

View file

@ -1,391 +0,0 @@
use core::fmt::Debug;
use std::{
io,
io::{Read, Write},
mem,
num::NonZeroU64,
};
use flate2::{bufread::DeflateEncoder, Compression};
use smol_str::SmolStr;
use crate::columnar::COLUMN_TYPE_DEFLATE;
use crate::ActorId;
pub(crate) const DEFLATE_MIN_SIZE: usize = 256;
/// The error type for encoding operations.
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error(transparent)]
Io(#[from] io::Error),
}
impl PartialEq<Error> for Error {
fn eq(&self, other: &Error) -> bool {
match (self, other) {
(Self::Io(error1), Self::Io(error2)) => error1.kind() == error2.kind(),
}
}
}
/// Encodes booleans by storing the count of the same value.
///
/// The sequence of numbers describes the count of false values on even indices (0-indexed) and the
/// count of true values on odd indices (0-indexed).
///
/// Counts are encoded as usize.
pub(crate) struct BooleanEncoder {
buf: Vec<u8>,
last: bool,
count: usize,
}
impl BooleanEncoder {
pub(crate) fn new() -> BooleanEncoder {
BooleanEncoder {
buf: Vec::new(),
last: false,
count: 0,
}
}
pub(crate) fn append(&mut self, value: bool) {
if value == self.last {
self.count += 1;
} else {
self.count.encode(&mut self.buf).ok();
self.last = value;
self.count = 1;
}
}
pub(crate) fn finish(mut self, col: u32) -> ColData {
if self.count > 0 {
self.count.encode(&mut self.buf).ok();
}
ColData::new(col, self.buf)
}
}
/// Encodes integers as the change since the previous value.
///
/// The initial value is 0 encoded as u64. Deltas are encoded as i64.
///
/// Run length encoding is then applied to the resulting sequence.
pub(crate) struct DeltaEncoder {
rle: RleEncoder<i64>,
absolute_value: u64,
}
impl DeltaEncoder {
pub(crate) fn new() -> DeltaEncoder {
DeltaEncoder {
rle: RleEncoder::new(),
absolute_value: 0,
}
}
pub(crate) fn append_value(&mut self, value: u64) {
self.rle
.append_value(value as i64 - self.absolute_value as i64);
self.absolute_value = value;
}
pub(crate) fn append_null(&mut self) {
self.rle.append_null();
}
pub(crate) fn finish(self, col: u32) -> ColData {
self.rle.finish(col)
}
}
enum RleState<T> {
Empty,
NullRun(usize),
LiteralRun(T, Vec<T>),
LoneVal(T),
Run(T, usize),
}
/// Encodes data in run lengh encoding format. This is very efficient for long repeats of data
///
/// There are 3 types of 'run' in this encoder:
/// - a normal run (compresses repeated values)
/// - a null run (compresses repeated nulls)
/// - a literal run (no compression)
///
/// A normal run consists of the length of the run (encoded as an i64) followed by the encoded value that this run contains.
///
/// A null run consists of a zero value (encoded as an i64) followed by the length of the null run (encoded as a usize).
///
/// A literal run consists of the **negative** length of the run (encoded as an i64) followed by the values in the run.
///
/// Therefore all the types start with an encoded i64, the value of which determines the type of the following data.
pub(crate) struct RleEncoder<T>
where
T: Encodable + PartialEq + Clone,
{
buf: Vec<u8>,
state: RleState<T>,
}
impl<T> RleEncoder<T>
where
T: Encodable + PartialEq + Clone,
{
pub(crate) fn new() -> RleEncoder<T> {
RleEncoder {
buf: Vec::new(),
state: RleState::Empty,
}
}
pub(crate) fn finish(mut self, col: u32) -> ColData {
match self.take_state() {
// this covers `only_nulls`
RleState::NullRun(size) => {
if !self.buf.is_empty() {
self.flush_null_run(size);
}
}
RleState::LoneVal(value) => self.flush_lit_run(vec![value]),
RleState::Run(value, len) => self.flush_run(&value, len),
RleState::LiteralRun(last, mut run) => {
run.push(last);
self.flush_lit_run(run);
}
RleState::Empty => {}
}
ColData::new(col, self.buf)
}
fn flush_run(&mut self, val: &T, len: usize) {
self.encode(&(len as i64));
self.encode(val);
}
fn flush_null_run(&mut self, len: usize) {
self.encode::<i64>(&0);
self.encode(&len);
}
fn flush_lit_run(&mut self, run: Vec<T>) {
self.encode(&-(run.len() as i64));
for val in run {
self.encode(&val);
}
}
fn take_state(&mut self) -> RleState<T> {
let mut state = RleState::Empty;
mem::swap(&mut self.state, &mut state);
state
}
pub(crate) fn append_null(&mut self) {
self.state = match self.take_state() {
RleState::Empty => RleState::NullRun(1),
RleState::NullRun(size) => RleState::NullRun(size + 1),
RleState::LoneVal(other) => {
self.flush_lit_run(vec![other]);
RleState::NullRun(1)
}
RleState::Run(other, len) => {
self.flush_run(&other, len);
RleState::NullRun(1)
}
RleState::LiteralRun(last, mut run) => {
run.push(last);
self.flush_lit_run(run);
RleState::NullRun(1)
}
}
}
pub(crate) fn append_value(&mut self, value: T) {
self.state = match self.take_state() {
RleState::Empty => RleState::LoneVal(value),
RleState::LoneVal(other) => {
if other == value {
RleState::Run(value, 2)
} else {
let mut v = Vec::with_capacity(2);
v.push(other);
RleState::LiteralRun(value, v)
}
}
RleState::Run(other, len) => {
if other == value {
RleState::Run(other, len + 1)
} else {
self.flush_run(&other, len);
RleState::LoneVal(value)
}
}
RleState::LiteralRun(last, mut run) => {
if last == value {
self.flush_lit_run(run);
RleState::Run(value, 2)
} else {
run.push(last);
RleState::LiteralRun(value, run)
}
}
RleState::NullRun(size) => {
self.flush_null_run(size);
RleState::LoneVal(value)
}
}
}
fn encode<V>(&mut self, val: &V)
where
V: Encodable,
{
val.encode(&mut self.buf).ok();
}
}
pub(crate) trait Encodable {
fn encode_with_actors_to_vec(&self, actors: &mut [ActorId]) -> io::Result<Vec<u8>> {
let mut buf = Vec::new();
self.encode_with_actors(&mut buf, actors)?;
Ok(buf)
}
fn encode_with_actors<R: Write>(&self, buf: &mut R, _actors: &[ActorId]) -> io::Result<usize> {
self.encode(buf)
}
fn encode<R: Write>(&self, buf: &mut R) -> io::Result<usize>;
fn encode_vec(&self, buf: &mut Vec<u8>) -> usize {
self.encode(buf).unwrap()
}
}
impl Encodable for SmolStr {
fn encode<R: Write>(&self, buf: &mut R) -> io::Result<usize> {
let bytes = self.as_bytes();
let head = bytes.len().encode(buf)?;
buf.write_all(bytes)?;
Ok(head + bytes.len())
}
}
impl Encodable for String {
fn encode<R: Write>(&self, buf: &mut R) -> io::Result<usize> {
let bytes = self.as_bytes();
let head = bytes.len().encode(buf)?;
buf.write_all(bytes)?;
Ok(head + bytes.len())
}
}
impl Encodable for Option<String> {
fn encode<R: Write>(&self, buf: &mut R) -> io::Result<usize> {
if let Some(s) = self {
s.encode(buf)
} else {
0.encode(buf)
}
}
}
impl Encodable for u64 {
fn encode<R: Write>(&self, buf: &mut R) -> io::Result<usize> {
leb128::write::unsigned(buf, *self)
}
}
impl Encodable for NonZeroU64 {
fn encode<R: Write>(&self, buf: &mut R) -> io::Result<usize> {
leb128::write::unsigned(buf, self.get())
}
}
impl Encodable for f64 {
fn encode<R: Write>(&self, buf: &mut R) -> io::Result<usize> {
let bytes = self.to_le_bytes();
buf.write_all(&bytes)?;
Ok(bytes.len())
}
}
impl Encodable for f32 {
fn encode<R: Write>(&self, buf: &mut R) -> io::Result<usize> {
let bytes = self.to_le_bytes();
buf.write_all(&bytes)?;
Ok(bytes.len())
}
}
impl Encodable for i64 {
fn encode<R: Write>(&self, buf: &mut R) -> io::Result<usize> {
leb128::write::signed(buf, *self)
}
}
impl Encodable for usize {
fn encode<R: Write>(&self, buf: &mut R) -> io::Result<usize> {
(*self as u64).encode(buf)
}
}
impl Encodable for u32 {
fn encode<R: Write>(&self, buf: &mut R) -> io::Result<usize> {
u64::from(*self).encode(buf)
}
}
impl Encodable for i32 {
fn encode<R: Write>(&self, buf: &mut R) -> io::Result<usize> {
i64::from(*self).encode(buf)
}
}
#[derive(Debug)]
pub(crate) struct ColData {
pub(crate) col: u32,
pub(crate) data: Vec<u8>,
#[cfg(debug_assertions)]
has_been_deflated: bool,
}
impl ColData {
pub(crate) fn new(col_id: u32, data: Vec<u8>) -> ColData {
ColData {
col: col_id,
data,
#[cfg(debug_assertions)]
has_been_deflated: false,
}
}
pub(crate) fn encode_col_len<R: Write>(&self, buf: &mut R) -> io::Result<usize> {
let mut len = 0;
if !self.data.is_empty() {
len += self.col.encode(buf)?;
len += self.data.len().encode(buf)?;
}
Ok(len)
}
pub(crate) fn deflate(&mut self) {
#[cfg(debug_assertions)]
{
debug_assert!(!self.has_been_deflated);
self.has_been_deflated = true;
}
if self.data.len() > DEFLATE_MIN_SIZE {
let mut deflated = Vec::new();
let mut deflater = DeflateEncoder::new(&self.data[..], Compression::default());
//This unwrap should be okay as we're reading and writing to in memory buffers
deflater.read_to_end(&mut deflated).unwrap();
self.col |= COLUMN_TYPE_DEFLATE;
self.data = deflated;
}
}
}

View file

@ -1,68 +0,0 @@
use crate::types::{ActorId, ScalarValue};
use crate::value::DataType;
use crate::{decoding, encoding, ChangeHash};
use thiserror::Error;
#[derive(Error, Debug, PartialEq)]
pub enum AutomergeError {
#[error("invalid obj id format `{0}`")]
InvalidObjIdFormat(String),
#[error("invalid obj id `{0}`")]
InvalidObjId(String),
#[error("there was an encoding problem: {0}")]
Encoding(#[from] encoding::Error),
#[error("there was a decoding problem: {0}")]
Decoding(#[from] decoding::Error),
#[error("key must not be an empty string")]
EmptyStringKey,
#[error("invalid seq {0}")]
InvalidSeq(u64),
#[error("index {0} is out of bounds")]
InvalidIndex(usize),
#[error("duplicate seq {0} found for actor {1}")]
DuplicateSeqNumber(u64, ActorId),
#[error("invalid hash {0}")]
InvalidHash(ChangeHash),
#[error("increment operations must be against a counter value")]
MissingCounter,
#[error("general failure")]
Fail,
#[error(transparent)]
HexDecode(#[from] hex::FromHexError),
}
#[cfg(feature = "wasm")]
impl From<AutomergeError> for wasm_bindgen::JsValue {
fn from(err: AutomergeError) -> Self {
js_sys::Error::new(&std::format!("{}", err)).into()
}
}
#[derive(Error, Debug)]
#[error("Invalid actor ID: {0}")]
pub struct InvalidActorId(pub String);
#[derive(Error, Debug, PartialEq)]
#[error("Invalid scalar value, expected {expected} but received {unexpected}")]
pub(crate) struct InvalidScalarValue {
pub(crate) raw_value: ScalarValue,
pub(crate) datatype: DataType,
pub(crate) unexpected: String,
pub(crate) expected: String,
}
#[derive(Error, Debug, PartialEq)]
#[error("Invalid change hash slice: {0:?}")]
pub struct InvalidChangeHashSlice(pub Vec<u8>);
#[derive(Error, Debug, PartialEq)]
#[error("Invalid object ID: {0}")]
pub struct InvalidObjectId(pub String);
#[derive(Error, Debug)]
#[error("Invalid element ID: {0}")]
pub struct InvalidElementId(pub String);
#[derive(Error, Debug)]
#[error("Invalid OpID: {0}")]
pub struct InvalidOpId(pub String);

View file

@ -1,82 +0,0 @@
use crate::ActorId;
use serde::Serialize;
use serde::Serializer;
use std::cmp::{Ord, Ordering};
use std::fmt;
use std::hash::{Hash, Hasher};
#[derive(Debug, Clone)]
pub enum ExId {
Root,
Id(u64, ActorId, usize),
}
impl PartialEq for ExId {
fn eq(&self, other: &Self) -> bool {
match (self, other) {
(ExId::Root, ExId::Root) => true,
(ExId::Id(ctr1, actor1, _), ExId::Id(ctr2, actor2, _))
if ctr1 == ctr2 && actor1 == actor2 =>
{
true
}
_ => false,
}
}
}
impl Eq for ExId {}
impl fmt::Display for ExId {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
ExId::Root => write!(f, "_root"),
ExId::Id(ctr, actor, _) => write!(f, "{}@{}", ctr, actor),
}
}
}
impl Hash for ExId {
fn hash<H: Hasher>(&self, state: &mut H) {
match self {
ExId::Root => 0.hash(state),
ExId::Id(ctr, actor, _) => {
ctr.hash(state);
actor.hash(state);
}
}
}
}
impl Ord for ExId {
fn cmp(&self, other: &Self) -> Ordering {
match (self, other) {
(ExId::Root, ExId::Root) => Ordering::Equal,
(ExId::Root, _) => Ordering::Less,
(_, ExId::Root) => Ordering::Greater,
(ExId::Id(c1, a1, _), ExId::Id(c2, a2, _)) if c1 == c2 => a2.cmp(a1),
(ExId::Id(c1, _, _), ExId::Id(c2, _, _)) => c1.cmp(c2),
}
}
}
impl PartialOrd for ExId {
fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
Some(self.cmp(other))
}
}
impl Serialize for ExId {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
serializer.serialize_str(self.to_string().as_str())
}
}
impl AsRef<ExId> for ExId {
fn as_ref(&self) -> &ExId {
self
}
}

View file

@ -1,111 +0,0 @@
#![doc(
html_logo_url = "https://raw.githubusercontent.com/automerge/automerge-rs/main/img/brandmark.svg",
html_favicon_url = "https:///raw.githubusercontent.com/automerge/automerge-rs/main/img/favicon.ico"
)]
#![warn(
missing_debug_implementations,
// missing_docs, // TODO: add documentation!
rust_2018_idioms,
unreachable_pub,
bad_style,
const_err,
dead_code,
improper_ctypes,
non_shorthand_field_patterns,
no_mangle_generic_items,
overflowing_literals,
path_statements,
patterns_in_fns_without_body,
private_in_public,
unconditional_recursion,
unused,
unused_allocation,
unused_comparisons,
unused_parens,
while_true
)]
#[doc(hidden)]
#[macro_export]
macro_rules! log {
( $( $t:tt )* ) => {
{
use $crate::__log;
__log!( $( $t )* );
}
}
}
#[cfg(all(feature = "wasm", target_family = "wasm"))]
#[doc(hidden)]
#[macro_export]
macro_rules! __log {
( $( $t:tt )* ) => {
web_sys::console::log_1(&format!( $( $t )* ).into());
}
}
#[cfg(not(all(feature = "wasm", target_family = "wasm")))]
#[doc(hidden)]
#[macro_export]
macro_rules! __log {
( $( $t:tt )* ) => {
println!( $( $t )* );
}
}
mod autocommit;
mod automerge;
mod change;
mod clock;
mod columnar;
mod decoding;
mod encoding;
mod error;
mod exid;
mod indexed_cache;
mod keys;
mod keys_at;
mod legacy;
mod object_data;
mod op_observer;
mod op_set;
mod op_tree;
mod options;
mod parents;
mod query;
mod range;
mod range_at;
pub mod sync;
pub mod transaction;
mod types;
mod value;
mod values;
mod values_at;
#[cfg(feature = "optree-visualisation")]
mod visualisation;
pub use crate::automerge::Automerge;
pub use autocommit::AutoCommit;
pub use change::Change;
pub use decoding::Error as DecodingError;
pub use decoding::InvalidChangeError;
pub use encoding::Error as EncodingError;
pub use error::AutomergeError;
pub use exid::ExId as ObjId;
pub use keys::Keys;
pub use keys_at::KeysAt;
pub use legacy::Change as ExpandedChange;
pub use op_observer::OpObserver;
pub use op_observer::Patch;
pub use op_observer::VecOpObserver;
pub use options::ApplyOptions;
pub use parents::Parents;
pub use range::Range;
pub use range_at::RangeAt;
pub use types::{ActorId, ChangeHash, ObjType, OpType, Prop};
pub use value::{ScalarValue, Value};
pub use values::Values;
pub use values_at::ValuesAt;
pub const ROOT: ObjId = ObjId::Root;

View file

@ -1,176 +0,0 @@
use std::ops::RangeBounds;
use std::sync::{Arc, Mutex};
use crate::clock::Clock;
use crate::op_tree::{OpSetMetadata, OpTreeInternal};
use crate::query::{self, TreeQuery};
use crate::types::{Key, ObjId};
use crate::types::{Op, OpId};
use crate::{query::Keys, query::KeysAt, ObjType};
#[derive(Debug, Default, Clone, PartialEq)]
pub(crate) struct MapOpsCache {
pub(crate) last: Option<(Key, usize)>,
}
impl MapOpsCache {
fn lookup<'a, Q: TreeQuery<'a>>(&self, query: &mut Q) -> bool {
query.cache_lookup_map(self)
}
fn update<'a, Q: TreeQuery<'a>>(&mut self, query: &Q) {
query.cache_update_map(self);
// TODO: fixup the cache (reordering etc.)
}
}
#[derive(Debug, Default, Clone, PartialEq)]
pub(crate) struct SeqOpsCache {
// last insertion (list index, tree index, whether the last op was an insert, opid to be inserted)
// TODO: invalidation
pub(crate) last: Option<(usize, usize, bool, OpId)>,
}
impl SeqOpsCache {
fn lookup<'a, Q: TreeQuery<'a>>(&self, query: &mut Q) -> bool {
query.cache_lookup_seq(self)
}
fn update<'a, Q: TreeQuery<'a>>(&mut self, query: &Q) {
query.cache_update_seq(self);
// TODO: fixup the cache (reordering etc.)
}
}
/// Stores the data for an object.
#[derive(Debug, Clone, PartialEq)]
pub(crate) struct ObjectData {
cache: ObjectDataCache,
/// The type of this object.
typ: ObjType,
/// The operations pertaining to this object.
pub(crate) ops: OpTreeInternal,
/// The id of the parent object, root has no parent.
pub(crate) parent: Option<ObjId>,
}
#[derive(Debug, Clone)]
pub(crate) enum ObjectDataCache {
Map(Arc<Mutex<MapOpsCache>>),
Seq(Arc<Mutex<SeqOpsCache>>),
}
impl PartialEq for ObjectDataCache {
fn eq(&self, other: &ObjectDataCache) -> bool {
match (self, other) {
(ObjectDataCache::Map(_), ObjectDataCache::Map(_)) => true,
(ObjectDataCache::Map(_), ObjectDataCache::Seq(_)) => false,
(ObjectDataCache::Seq(_), ObjectDataCache::Map(_)) => false,
(ObjectDataCache::Seq(_), ObjectDataCache::Seq(_)) => true,
}
}
}
impl ObjectData {
pub(crate) fn root() -> Self {
ObjectData {
cache: ObjectDataCache::Map(Default::default()),
typ: ObjType::Map,
ops: Default::default(),
parent: None,
}
}
pub(crate) fn new(typ: ObjType, parent: Option<ObjId>) -> Self {
let internal = match typ {
ObjType::Map | ObjType::Table => ObjectDataCache::Map(Default::default()),
ObjType::List | ObjType::Text => ObjectDataCache::Seq(Default::default()),
};
ObjectData {
cache: internal,
typ,
ops: Default::default(),
parent,
}
}
pub(crate) fn keys(&self) -> Option<Keys<'_>> {
self.ops.keys()
}
pub(crate) fn keys_at(&self, clock: Clock) -> Option<KeysAt<'_>> {
self.ops.keys_at(clock)
}
pub(crate) fn range<'a, R: RangeBounds<String>>(
&'a self,
range: R,
meta: &'a OpSetMetadata,
) -> Option<query::Range<'a, R>> {
self.ops.range(range, meta)
}
pub(crate) fn range_at<'a, R: RangeBounds<String>>(
&'a self,
range: R,
meta: &'a OpSetMetadata,
clock: Clock,
) -> Option<query::RangeAt<'a, R>> {
self.ops.range_at(range, meta, clock)
}
pub(crate) fn search<'a, 'b: 'a, Q>(&'b self, mut query: Q, metadata: &OpSetMetadata) -> Q
where
Q: TreeQuery<'a>,
{
match self {
ObjectData {
ops,
cache: ObjectDataCache::Map(cache),
..
} => {
let mut cache = cache.lock().unwrap();
if !cache.lookup(&mut query) {
query = ops.search(query, metadata);
}
cache.update(&query);
query
}
ObjectData {
ops,
cache: ObjectDataCache::Seq(cache),
..
} => {
let mut cache = cache.lock().unwrap();
if !cache.lookup(&mut query) {
query = ops.search(query, metadata);
}
cache.update(&query);
query
}
}
}
pub(crate) fn update<F>(&mut self, index: usize, f: F)
where
F: FnOnce(&mut Op),
{
self.ops.update(index, f)
}
pub(crate) fn remove(&mut self, index: usize) -> Op {
self.ops.remove(index)
}
pub(crate) fn insert(&mut self, index: usize, op: Op) {
self.ops.insert(index, op)
}
pub(crate) fn typ(&self) -> ObjType {
self.typ
}
pub(crate) fn get(&self, index: usize) -> Option<&Op> {
self.ops.get(index)
}
}

View file

@ -1,135 +0,0 @@
use crate::exid::ExId;
use crate::Prop;
use crate::Value;
/// An observer of operations applied to the document.
pub trait OpObserver {
/// A new value has been inserted into the given object.
///
/// - `objid`: the object that has been inserted into.
/// - `index`: the index the new value has been inserted at.
/// - `tagged_value`: the value that has been inserted and the id of the operation that did the
/// insert.
fn insert(&mut self, objid: ExId, index: usize, tagged_value: (Value<'_>, ExId));
/// A new value has been put into the given object.
///
/// - `objid`: the object that has been put into.
/// - `key`: the key that the value as been put at.
/// - `tagged_value`: the value that has been put into the object and the id of the operation
/// that did the put.
/// - `conflict`: whether this put conflicts with other operations.
fn put(&mut self, objid: ExId, key: Prop, tagged_value: (Value<'_>, ExId), conflict: bool);
/// A counter has been incremented.
///
/// - `objid`: the object that contains the counter.
/// - `key`: they key that the chounter is at.
/// - `tagged_value`: the amount the counter has been incremented by, and the the id of the
/// increment operation.
fn increment(&mut self, objid: ExId, key: Prop, tagged_value: (i64, ExId));
/// A value has beeen deleted.
///
/// - `objid`: the object that has been deleted in.
/// - `key`: the key of the value that has been deleted.
fn delete(&mut self, objid: ExId, key: Prop);
}
impl OpObserver for () {
fn insert(&mut self, _objid: ExId, _index: usize, _tagged_value: (Value<'_>, ExId)) {}
fn put(&mut self, _objid: ExId, _key: Prop, _tagged_value: (Value<'_>, ExId), _conflict: bool) {
}
fn increment(&mut self, _objid: ExId, _key: Prop, _tagged_value: (i64, ExId)) {}
fn delete(&mut self, _objid: ExId, _key: Prop) {}
}
/// Capture operations into a [`Vec`] and store them as patches.
#[derive(Default, Debug, Clone)]
pub struct VecOpObserver {
patches: Vec<Patch>,
}
impl VecOpObserver {
/// Take the current list of patches, leaving the internal list empty and ready for new
/// patches.
pub fn take_patches(&mut self) -> Vec<Patch> {
std::mem::take(&mut self.patches)
}
}
impl OpObserver for VecOpObserver {
fn insert(&mut self, obj_id: ExId, index: usize, (value, id): (Value<'_>, ExId)) {
self.patches.push(Patch::Insert {
obj: obj_id,
index,
value: (value.into_owned(), id),
});
}
fn put(&mut self, objid: ExId, key: Prop, (value, id): (Value<'_>, ExId), conflict: bool) {
self.patches.push(Patch::Put {
obj: objid,
key,
value: (value.into_owned(), id),
conflict,
});
}
fn increment(&mut self, objid: ExId, key: Prop, tagged_value: (i64, ExId)) {
self.patches.push(Patch::Increment {
obj: objid,
key,
value: tagged_value,
});
}
fn delete(&mut self, objid: ExId, key: Prop) {
self.patches.push(Patch::Delete { obj: objid, key })
}
}
/// A notification to the application that something has changed in a document.
#[derive(Debug, Clone, PartialEq)]
pub enum Patch {
/// Associating a new value with a key in a map, or with an existing list element
Put {
/// The object that was put into.
obj: ExId,
/// The key that the new value was put at.
key: Prop,
/// The value that was put, and the id of the operation that put it there.
value: (Value<'static>, ExId),
/// Whether this put conflicts with another.
conflict: bool,
},
/// Inserting a new element into a list/text
Insert {
/// The object that was inserted into.
obj: ExId,
/// The index that the new value was inserted at.
index: usize,
/// The value that was inserted, and the id of the operation that inserted it there.
value: (Value<'static>, ExId),
},
/// Incrementing a counter.
Increment {
/// The object that was incremented in.
obj: ExId,
/// The key that was incremented.
key: Prop,
/// The amount that the counter was incremented by, and the id of the operation that
/// did the increment.
value: (i64, ExId),
},
/// Deleting an element from a list/text
Delete {
/// The object that was deleted from.
obj: ExId,
/// The key that was deleted.
key: Prop,
},
}
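A `VecOpObserver` is the simplest consumer of this API: attach it while applying changes, then drain the accumulated patches. A minimal sketch, assuming the crate re-exports `Automerge`, `ApplyOptions`, `Change`, `Patch`, and `VecOpObserver` at the top level (the re-export paths are assumptions; `apply_changes_with` is the entry point used by the sync code later in this diff):

```rust
use automerge::{ApplyOptions, Automerge, Change, Patch, VecOpObserver};

// Hedged sketch: apply remote changes and log what happened.
fn apply_and_log(doc: &mut Automerge, changes: Vec<Change>) {
    let mut observer = VecOpObserver::default();
    let options = ApplyOptions::default().with_op_observer(&mut observer);
    doc.apply_changes_with(changes, options).unwrap();
    for patch in observer.take_patches() {
        match patch {
            Patch::Put { obj, key, conflict, .. } => {
                println!("put {:?}/{:?} (conflict: {})", obj, key, conflict);
            }
            Patch::Insert { obj, index, .. } => {
                println!("insert {:?}[{}]", obj, index);
            }
            Patch::Increment { obj, key, value } => {
                println!("increment {:?}/{:?} by {}", obj, key, value.0);
            }
            Patch::Delete { obj, key } => {
                println!("delete {:?}/{:?}", obj, key);
            }
        }
    }
}
```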

@@ -1,318 +0,0 @@
use crate::clock::Clock;
use crate::exid::ExId;
use crate::indexed_cache::IndexedCache;
use crate::object_data::ObjectData;
use crate::query::{self, OpIdSearch, TreeQuery};
use crate::types::{self, ActorId, Key, ObjId, Op, OpId, OpType};
use crate::{ObjType, OpObserver};
use fxhash::FxBuildHasher;
use std::cmp::Ordering;
use std::collections::HashMap;
use std::ops::RangeBounds;
pub(crate) type OpSet = OpSetInternal;
#[derive(Debug, Clone, PartialEq)]
pub(crate) struct OpSetInternal {
/// The map of objects to their data.
objects: HashMap<ObjId, ObjectData, FxBuildHasher>,
/// The number of operations in the opset.
length: usize,
/// Metadata about the operations in this opset.
pub(crate) m: OpSetMetadata,
}
impl OpSetInternal {
pub(crate) fn new() -> Self {
let mut objects: HashMap<_, _, _> = Default::default();
objects.insert(ObjId::root(), ObjectData::root());
OpSetInternal {
objects,
length: 0,
m: OpSetMetadata {
actors: IndexedCache::new(),
props: IndexedCache::new(),
},
}
}
pub(crate) fn id_to_exid(&self, id: OpId) -> ExId {
if id == types::ROOT {
ExId::Root
} else {
ExId::Id(id.0, self.m.actors.cache[id.1].clone(), id.1)
}
}
pub(crate) fn iter(&self) -> Iter<'_> {
let mut objs: Vec<_> = self.objects.keys().collect();
objs.sort_by(|a, b| self.m.lamport_cmp(a.0, b.0));
Iter {
inner: self,
index: 0,
sub_index: 0,
objs,
}
}
pub(crate) fn parent_object(&self, obj: &ObjId) -> Option<(ObjId, Key)> {
let parent = self.objects.get(obj)?.parent?;
let key = self.search(&parent, OpIdSearch::new(obj.0)).key().unwrap();
Some((parent, key))
}
pub(crate) fn keys(&self, obj: ObjId) -> Option<query::Keys<'_>> {
if let Some(object) = self.objects.get(&obj) {
object.keys()
} else {
None
}
}
pub(crate) fn keys_at(&self, obj: ObjId, clock: Clock) -> Option<query::KeysAt<'_>> {
if let Some(object) = self.objects.get(&obj) {
object.keys_at(clock)
} else {
None
}
}
pub(crate) fn range<R: RangeBounds<String>>(
&self,
obj: ObjId,
range: R,
) -> Option<query::Range<'_, R>> {
if let Some(tree) = self.objects.get(&obj) {
tree.range(range, &self.m)
} else {
None
}
}
pub(crate) fn range_at<R: RangeBounds<String>>(
&self,
obj: ObjId,
range: R,
clock: Clock,
) -> Option<query::RangeAt<'_, R>> {
if let Some(tree) = self.objects.get(&obj) {
tree.range_at(range, &self.m, clock)
} else {
None
}
}
pub(crate) fn search<'a, 'b: 'a, Q>(&'b self, obj: &ObjId, query: Q) -> Q
where
Q: TreeQuery<'a>,
{
if let Some(object) = self.objects.get(obj) {
object.search(query, &self.m)
} else {
query
}
}
pub(crate) fn replace<F>(&mut self, obj: &ObjId, index: usize, f: F)
where
F: FnOnce(&mut Op),
{
if let Some(object) = self.objects.get_mut(obj) {
object.update(index, f)
}
}
pub(crate) fn remove(&mut self, obj: &ObjId, index: usize) -> Op {
// this happens on rollback - be sure to go back to the old state
let object = self.objects.get_mut(obj).unwrap();
self.length -= 1;
let op = object.remove(index);
if let OpType::Make(_) = &op.action {
self.objects.remove(&op.id.into());
}
op
}
pub(crate) fn len(&self) -> usize {
self.length
}
pub(crate) fn insert(&mut self, index: usize, obj: &ObjId, element: Op) {
if let OpType::Make(typ) = element.action {
self.objects
.insert(element.id.into(), ObjectData::new(typ, Some(*obj)));
}
if let Some(object) = self.objects.get_mut(obj) {
object.insert(index, element);
self.length += 1;
}
}
pub(crate) fn insert_op(&mut self, obj: &ObjId, op: Op) -> Op {
let q = self.search(obj, query::SeekOp::new(&op));
let succ = q.succ;
let pos = q.pos;
for i in succ {
self.replace(obj, i, |old_op| old_op.add_succ(&op));
}
if !op.is_delete() {
self.insert(pos, obj, op.clone());
}
op
}
pub(crate) fn insert_op_with_observer<Obs: OpObserver>(
&mut self,
obj: &ObjId,
op: Op,
observer: &mut Obs,
) -> Op {
let q = self.search(obj, query::SeekOpWithPatch::new(&op));
let query::SeekOpWithPatch {
pos,
succ,
seen,
values,
had_value_before,
..
} = q;
let ex_obj = self.id_to_exid(obj.0);
let key = match op.key {
Key::Map(index) => self.m.props[index].clone().into(),
Key::Seq(_) => seen.into(),
};
if op.insert {
let value = (op.value(), self.id_to_exid(op.id));
observer.insert(ex_obj, seen, value);
} else if op.is_delete() {
if let Some(winner) = values.last() {
let value = (winner.value(), self.id_to_exid(winner.id));
let conflict = values.len() > 1;
observer.put(ex_obj, key, value, conflict);
} else {
observer.delete(ex_obj, key);
}
} else if let Some(value) = op.get_increment_value() {
// only observe this increment if the counter is visible, i.e. the counter's
// create op is in the values
if values.iter().any(|value| op.pred.contains(&value.id)) {
// we have observed the value
observer.increment(ex_obj, key, (value, self.id_to_exid(op.id)));
}
} else {
let winner = if let Some(last_value) = values.last() {
if self.m.lamport_cmp(op.id, last_value.id) == Ordering::Greater {
&op
} else {
last_value
}
} else {
&op
};
let value = (winner.value(), self.id_to_exid(winner.id));
if op.is_list_op() && !had_value_before {
observer.insert(ex_obj, seen, value);
} else {
let conflict = !values.is_empty();
observer.put(ex_obj, key, value, conflict);
}
}
for i in succ {
self.replace(obj, i, |old_op| old_op.add_succ(&op));
}
if !op.is_delete() {
self.insert(pos, obj, op.clone());
}
op
}
pub(crate) fn object_type(&self, id: &ObjId) -> Option<ObjType> {
self.objects.get(id).map(|object| object.typ())
}
#[cfg(feature = "optree-visualisation")]
pub(crate) fn visualise(&self) -> String {
let mut out = Vec::new();
let graph = super::visualisation::GraphVisualisation::construct(&self.objects, &self.m);
dot::render(&graph, &mut out).unwrap();
String::from_utf8_lossy(&out[..]).to_string()
}
}
impl Default for OpSetInternal {
fn default() -> Self {
Self::new()
}
}
impl<'a> IntoIterator for &'a OpSetInternal {
type Item = (&'a ObjId, &'a Op);
type IntoIter = Iter<'a>;
fn into_iter(self) -> Self::IntoIter {
self.iter()
}
}
pub(crate) struct Iter<'a> {
inner: &'a OpSetInternal,
index: usize,
objs: Vec<&'a ObjId>,
sub_index: usize,
}
impl<'a> Iterator for Iter<'a> {
type Item = (&'a ObjId, &'a Op);
fn next(&mut self) -> Option<Self::Item> {
let mut result = None;
for obj in self.objs.iter().skip(self.index) {
let object = self.inner.objects.get(obj)?;
result = object.get(self.sub_index).map(|op| (*obj, op));
if result.is_some() {
self.sub_index += 1;
break;
} else {
self.index += 1;
self.sub_index = 0;
}
}
result
}
}
#[derive(Clone, Debug, PartialEq)]
pub(crate) struct OpSetMetadata {
pub(crate) actors: IndexedCache<ActorId>,
pub(crate) props: IndexedCache<String>,
}
impl OpSetMetadata {
pub(crate) fn key_cmp(&self, left: &Key, right: &Key) -> Ordering {
match (left, right) {
(Key::Map(a), Key::Map(b)) => self.props[*a].cmp(&self.props[*b]),
_ => panic!("can only compare map keys"),
}
}
pub(crate) fn lamport_cmp(&self, left: OpId, right: OpId) -> Ordering {
match (left, right) {
(OpId(0, _), OpId(0, _)) => Ordering::Equal,
(OpId(0, _), OpId(_, _)) => Ordering::Less,
(OpId(_, _), OpId(0, _)) => Ordering::Greater,
(OpId(a, x), OpId(b, y)) if a == b => self.actors[x].cmp(&self.actors[y]),
(OpId(a, _), OpId(b, _)) => a.cmp(&b),
}
}
}
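`lamport_cmp` defines the total order the rest of the op set relies on: compare counters first, break ties by the actors' sort order, and treat counter 0 (the root id) as smallest. The same logic over plain `(counter, actor)` tuples, purely as a self-contained illustration (the real code compares actors through the `IndexedCache`, not as strings):

```rust
use std::cmp::Ordering;

// Illustrative analog of OpSetMetadata::lamport_cmp over plain tuples.
fn lamport_cmp(a: (u64, &str), b: (u64, &str)) -> Ordering {
    match (a.0, b.0) {
        (0, 0) => Ordering::Equal,
        (0, _) => Ordering::Less,    // the root id sorts below everything
        (_, 0) => Ordering::Greater,
        (x, y) if x == y => a.1.cmp(&b.1), // tie on counter: fall back to actors
        (x, y) => x.cmp(&y),
    }
}

fn main() {
    assert_eq!(lamport_cmp((3, "actorA"), (2, "actorB")), Ordering::Greater);
    assert_eq!(lamport_cmp((2, "actorA"), (2, "actorB")), Ordering::Less);
}
```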

@@ -1,16 +0,0 @@
#[derive(Debug, Default)]
pub struct ApplyOptions<'a, Obs> {
pub op_observer: Option<&'a mut Obs>,
}
impl<'a, Obs> ApplyOptions<'a, Obs> {
pub fn with_op_observer(mut self, op_observer: &'a mut Obs) -> Self {
self.op_observer = Some(op_observer);
self
}
pub fn set_op_observer(&mut self, op_observer: &'a mut Obs) -> &mut Self {
self.op_observer = Some(op_observer);
self
}
}

@@ -1,20 +0,0 @@
use crate::{exid::ExId, Automerge, Prop};
#[derive(Debug)]
pub struct Parents<'a> {
pub(crate) obj: ExId,
pub(crate) doc: &'a Automerge,
}
impl<'a> Iterator for Parents<'a> {
type Item = (ExId, Prop);
fn next(&mut self) -> Option<Self::Item> {
if let Some((obj, prop)) = self.doc.parent_object(&self.obj) {
self.obj = obj.clone();
Some((obj, prop))
} else {
None
}
}
}
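Each `next` call hops one level up the containment hierarchy, so collecting the iterator and reversing it yields the root-first path, which is exactly what `path_to_object` in the `Transactable` trait later in this diff does. A minimal sketch, assuming `Automerge` exposes a `parents(ExId)` method mirroring that trait and re-exports these types at the top level (both assumptions):

```rust
use automerge::{Automerge, ExId, Prop}; // re-export paths are assumptions

// Hypothetical helper: the chain of (parent object, prop) pairs, root first.
fn path_from_root(doc: &Automerge, obj: ExId) -> Vec<(ExId, Prop)> {
    let mut path: Vec<(ExId, Prop)> = doc.parents(obj).collect();
    path.reverse(); // the iterator yields the innermost parent first
    path
}
```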

@@ -1,276 +0,0 @@
use crate::object_data::{MapOpsCache, SeqOpsCache};
use crate::op_tree::{OpSetMetadata, OpTreeNode};
use crate::types::{Clock, Counter, ElemId, Op, OpId, OpType, ScalarValue};
use fxhash::FxBuildHasher;
use std::cmp::Ordering;
use std::collections::{HashMap, HashSet};
use std::fmt::Debug;
mod elem_id_pos;
mod insert;
mod insert_prop;
mod keys;
mod keys_at;
mod len;
mod len_at;
mod list_vals;
mod list_vals_at;
mod nth;
mod nth_at;
mod opid;
mod prop;
mod prop_at;
mod range;
mod range_at;
mod seek_op;
mod seek_op_with_patch;
pub(crate) use elem_id_pos::ElemIdPos;
pub(crate) use insert::InsertNth;
pub(crate) use insert_prop::InsertProp;
pub(crate) use keys::Keys;
pub(crate) use keys_at::KeysAt;
pub(crate) use len::Len;
pub(crate) use len_at::LenAt;
pub(crate) use list_vals::ListVals;
pub(crate) use list_vals_at::ListValsAt;
pub(crate) use nth::Nth;
pub(crate) use nth_at::NthAt;
pub(crate) use opid::OpIdSearch;
pub(crate) use prop::Prop;
pub(crate) use prop_at::PropAt;
pub(crate) use range::Range;
pub(crate) use range_at::RangeAt;
pub(crate) use seek_op::SeekOp;
pub(crate) use seek_op_with_patch::SeekOpWithPatch;
#[derive(Debug, Clone, PartialEq)]
pub(crate) struct CounterData {
pos: usize,
val: i64,
succ: HashSet<OpId>,
op: Op,
}
pub(crate) trait TreeQuery<'a> {
fn cache_lookup_map(&mut self, _cache: &MapOpsCache) -> bool {
// by default we haven't found anything in the cache
false
}
fn cache_update_map(&self, _cache: &mut MapOpsCache) {
// by default we don't have anything to update in the cache
}
fn cache_lookup_seq(&mut self, _cache: &SeqOpsCache) -> bool {
// by default we haven't found anything in the cache
false
}
fn cache_update_seq(&self, _cache: &mut SeqOpsCache) {
// by default we don't have anything to update in the cache
}
#[inline(always)]
fn query_node_with_metadata(
&mut self,
child: &'a OpTreeNode,
_m: &OpSetMetadata,
) -> QueryResult {
self.query_node(child)
}
fn query_node(&mut self, _child: &'a OpTreeNode) -> QueryResult {
QueryResult::Descend
}
#[inline(always)]
fn query_element_with_metadata(&mut self, element: &'a Op, _m: &OpSetMetadata) -> QueryResult {
self.query_element(element)
}
fn query_element(&mut self, _element: &'a Op) -> QueryResult {
panic!("invalid element query")
}
}
#[derive(Debug, Clone, PartialEq)]
pub(crate) enum QueryResult {
Next,
Descend,
Finish,
}
#[derive(Clone, Debug, PartialEq)]
pub(crate) struct Index {
/// The map of visible elements to the number of operations targeting them.
pub(crate) visible: HashMap<ElemId, usize, FxBuildHasher>,
/// Set of opids found in this node and below.
pub(crate) ops: HashSet<OpId, FxBuildHasher>,
}
impl Index {
pub(crate) fn new() -> Self {
Index {
visible: Default::default(),
ops: Default::default(),
}
}
/// Get the number of visible elements in this index.
pub(crate) fn visible_len(&self) -> usize {
self.visible.len()
}
pub(crate) fn has_visible(&self, e: &Option<ElemId>) -> bool {
if let Some(seen) = e {
self.visible.contains_key(seen)
} else {
false
}
}
pub(crate) fn replace(&mut self, old: &Op, new: &Op) {
if old.id != new.id {
self.ops.remove(&old.id);
self.ops.insert(new.id);
}
assert!(new.key == old.key);
match (new.visible(), old.visible(), new.elemid()) {
(false, true, Some(elem)) => match self.visible.get(&elem).copied() {
Some(n) if n == 1 => {
self.visible.remove(&elem);
}
Some(n) => {
self.visible.insert(elem, n - 1);
}
None => panic!("remove overun in index"),
},
(true, false, Some(elem)) => *self.visible.entry(elem).or_default() += 1,
_ => {}
}
}
pub(crate) fn insert(&mut self, op: &Op) {
self.ops.insert(op.id);
if op.visible() {
if let Some(elem) = op.elemid() {
*self.visible.entry(elem).or_default() += 1;
}
}
}
pub(crate) fn remove(&mut self, op: &Op) {
self.ops.remove(&op.id);
if op.visible() {
if let Some(elem) = op.elemid() {
match self.visible.get(&elem).copied() {
Some(n) if n == 1 => {
self.visible.remove(&elem);
}
Some(n) => {
self.visible.insert(elem, n - 1);
}
None => panic!("remove overun in index"),
}
}
}
}
pub(crate) fn merge(&mut self, other: &Index) {
for id in &other.ops {
self.ops.insert(*id);
}
for (elem, n) in other.visible.iter() {
*self.visible.entry(*elem).or_default() += n;
}
}
}
impl Default for Index {
fn default() -> Self {
Self::new()
}
}
#[derive(Debug, Clone, PartialEq, Default)]
pub(crate) struct VisWindow {
counters: HashMap<OpId, CounterData>,
}
impl VisWindow {
fn visible_at(&mut self, op: &Op, pos: usize, clock: &Clock) -> bool {
if !clock.covers(&op.id) {
return false;
}
let mut visible = false;
match op.action {
OpType::Put(ScalarValue::Counter(Counter { start, .. })) => {
self.counters.insert(
op.id,
CounterData {
pos,
val: start,
succ: op.succ.iter().cloned().collect(),
op: op.clone(),
},
);
if !op.succ.iter().any(|i| clock.covers(i)) {
visible = true;
}
}
OpType::Increment(inc_val) => {
for id in &op.pred {
// pred ids always precede op.id, so the clock covering op.id also covers them
if let Some(entry) = self.counters.get_mut(id) {
entry.succ.remove(&op.id);
entry.val += inc_val;
entry.op.action = OpType::Put(ScalarValue::counter(entry.val));
if !entry.succ.iter().any(|i| clock.covers(i)) {
visible = true;
}
}
}
}
_ => {
if !op.succ.iter().any(|i| clock.covers(i)) {
visible = true;
}
}
};
visible
}
pub(crate) fn seen_op(&self, op: &Op, pos: usize) -> Vec<(usize, Op)> {
let mut result = vec![];
for pred in &op.pred {
if let Some(entry) = self.counters.get(pred) {
result.push((entry.pos, entry.op.clone()));
}
}
if result.is_empty() {
result.push((pos, op.clone()));
}
result
}
}
pub(crate) fn binary_search_by<F>(node: &OpTreeNode, f: F) -> usize
where
F: Fn(&Op) -> Ordering,
{
let mut right = node.len();
let mut left = 0;
while left < right {
let seq = (left + right) / 2;
if f(node.get(seq).unwrap()) == Ordering::Less {
left = seq + 1;
} else {
right = seq;
}
}
left
}
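`binary_search_by` returns the lower bound: the first position whose op does not compare `Less`, which is also the leftmost valid insertion point when no op matches. The same contract on a plain slice, as a self-contained illustration (the helper below is not part of the crate):

```rust
use std::cmp::Ordering;

// Illustrative lower-bound search on a slice, mirroring binary_search_by above.
fn lower_bound<T>(xs: &[T], f: impl Fn(&T) -> Ordering) -> usize {
    let (mut left, mut right) = (0, xs.len());
    while left < right {
        let mid = (left + right) / 2;
        if f(&xs[mid]) == Ordering::Less {
            left = mid + 1; // everything up to mid compares Less
        } else {
            right = mid; // mid is a candidate; keep searching left
        }
    }
    left
}

fn main() {
    let xs = [1, 2, 2, 3];
    assert_eq!(lower_bound(&xs, |x| x.cmp(&2)), 1); // first of the equal run
    assert_eq!(lower_bound(&xs, |x| x.cmp(&4)), 4); // insertion point at the end
}
```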

@@ -1,53 +0,0 @@
use crate::{op_tree::OpTreeNode, types::ElemId};
use super::{QueryResult, TreeQuery};
/// Lookup the index in the list that this elemid occupies.
pub(crate) struct ElemIdPos {
elemid: ElemId,
pos: usize,
found: bool,
}
impl ElemIdPos {
pub(crate) fn new(elemid: ElemId) -> Self {
Self {
elemid,
pos: 0,
found: false,
}
}
pub(crate) fn index(&self) -> Option<usize> {
if self.found {
Some(self.pos)
} else {
None
}
}
}
impl<'a> TreeQuery<'a> for ElemIdPos {
fn query_node(&mut self, child: &OpTreeNode) -> QueryResult {
// if index has our element then we can continue
if child.index.has_visible(&Some(self.elemid)) {
// element is in this node somewhere
QueryResult::Descend
} else {
// not in this node, try the next one
self.pos += child.index.visible_len();
QueryResult::Next
}
}
fn query_element(&mut self, element: &crate::types::Op) -> QueryResult {
if element.elemid() == Some(self.elemid) {
// this is it
self.found = true;
return QueryResult::Finish;
} else if element.visible() {
self.pos += 1;
}
QueryResult::Next
}
}

@@ -1,68 +0,0 @@
use crate::op_tree::{OpSetMetadata, OpTreeNode};
use crate::query::{binary_search_by, QueryResult, TreeQuery};
use crate::types::{Key, Op};
use std::fmt::Debug;
#[derive(Debug, Clone, PartialEq)]
pub(crate) struct InsertProp<'a> {
key: Key,
pub(crate) ops: Vec<&'a Op>,
pub(crate) ops_pos: Vec<usize>,
pub(crate) pos: usize,
start: Option<usize>,
}
impl<'a> InsertProp<'a> {
pub(crate) fn new(prop: usize) -> Self {
InsertProp {
key: Key::Map(prop),
ops: vec![],
ops_pos: vec![],
pos: 0,
start: None,
}
}
}
impl<'a> TreeQuery<'a> for InsertProp<'a> {
fn cache_lookup_map(&mut self, cache: &crate::object_data::MapOpsCache) -> bool {
if let Some((last_key, last_pos)) = cache.last {
if last_key == self.key {
self.start = Some(last_pos);
}
}
// don't have all of the result yet
false
}
fn cache_update_map(&self, cache: &mut crate::object_data::MapOpsCache) {
cache.last = None
}
fn query_node_with_metadata(
&mut self,
child: &'a OpTreeNode,
m: &OpSetMetadata,
) -> QueryResult {
let start = if let Some(start) = self.start {
debug_assert!(binary_search_by(child, |op| m.key_cmp(&op.key, &self.key)) >= start);
start
} else {
binary_search_by(child, |op| m.key_cmp(&op.key, &self.key))
};
self.start = Some(start);
self.pos = start;
for pos in start..child.len() {
let op = child.get(pos).unwrap();
if op.key != self.key {
break;
}
if op.visible() {
self.ops.push(op);
self.ops_pos.push(pos);
}
self.pos += 1;
}
QueryResult::Finish
}
}

@@ -1,21 +0,0 @@
use crate::op_tree::OpTreeNode;
use crate::query::{QueryResult, TreeQuery};
use std::fmt::Debug;
#[derive(Debug, Clone, PartialEq)]
pub(crate) struct Len {
pub(crate) len: usize,
}
impl Len {
pub(crate) fn new() -> Self {
Len { len: 0 }
}
}
impl<'a> TreeQuery<'a> for Len {
fn query_node(&mut self, child: &OpTreeNode) -> QueryResult {
self.len = child.index.visible_len();
QueryResult::Finish
}
}

@@ -1,67 +0,0 @@
use crate::op_tree::{OpSetMetadata, OpTreeNode};
use crate::query::{binary_search_by, QueryResult, TreeQuery};
use crate::types::{Key, Op};
#[derive(Debug, Clone, PartialEq)]
pub(crate) struct Prop<'a> {
key: Key,
pub(crate) ops: Vec<&'a Op>,
pub(crate) ops_pos: Vec<usize>,
pub(crate) pos: usize,
start: Option<usize>,
}
impl<'a> Prop<'a> {
pub(crate) fn new(prop: usize) -> Self {
Prop {
key: Key::Map(prop),
ops: vec![],
ops_pos: vec![],
pos: 0,
start: None,
}
}
}
impl<'a> TreeQuery<'a> for Prop<'a> {
fn cache_lookup_map(&mut self, cache: &crate::object_data::MapOpsCache) -> bool {
if let Some((last_key, last_pos)) = cache.last {
if last_key == self.key {
self.start = Some(last_pos);
}
}
// don't have all of the result yet
false
}
fn cache_update_map(&self, cache: &mut crate::object_data::MapOpsCache) {
cache.last = self.start.map(|start| (self.key, start));
}
fn query_node_with_metadata(
&mut self,
child: &'a OpTreeNode,
m: &OpSetMetadata,
) -> QueryResult {
let start = if let Some(start) = self.start {
debug_assert!(binary_search_by(child, |op| m.key_cmp(&op.key, &self.key)) >= start);
start
} else {
binary_search_by(child, |op| m.key_cmp(&op.key, &self.key))
};
self.start = Some(start);
self.pos = start;
for pos in start..child.len() {
let op = child.get(pos).unwrap();
if op.key != self.key {
break;
}
if op.visible() {
self.ops.push(op);
self.ops_pos.push(pos);
}
self.pos += 1;
}
QueryResult::Finish
}
}
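A design note recoverable from the two map queries: `Prop` writes its computed start position back to the cache (`cache.last = self.start.map(...)`), so a repeated read of the same key can skip the binary search, while `InsertProp` above clears the cache, presumably because the insert it is preparing will shift positions and make any cached offset stale.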

@@ -1,72 +0,0 @@
use crate::op_tree::{OpSetMetadata, OpTreeNode};
use crate::types::{Key, OpId};
use crate::Value;
use std::fmt::Debug;
use std::ops::RangeBounds;
#[derive(Debug)]
pub(crate) struct Range<'a, R: RangeBounds<String>> {
range: R,
index: usize,
last_key: Option<Key>,
index_back: usize,
last_key_back: Option<Key>,
root_child: &'a OpTreeNode,
meta: &'a OpSetMetadata,
}
impl<'a, R: RangeBounds<String>> Range<'a, R> {
pub(crate) fn new(range: R, root_child: &'a OpTreeNode, meta: &'a OpSetMetadata) -> Self {
Self {
range,
index: 0,
last_key: None,
index_back: root_child.len(),
last_key_back: None,
root_child,
meta,
}
}
}
impl<'a, R: RangeBounds<String>> Iterator for Range<'a, R> {
type Item = (&'a str, Value<'a>, OpId);
fn next(&mut self) -> Option<Self::Item> {
for i in self.index..self.index_back {
let op = self.root_child.get(i)?;
self.index += 1;
if Some(op.key) != self.last_key && op.visible() {
self.last_key = Some(op.key);
let prop = match op.key {
Key::Map(m) => self.meta.props.get(m),
Key::Seq(_) => panic!("found list op in range query"),
};
if self.range.contains(prop) {
return Some((prop, op.value(), op.id));
}
}
}
None
}
}
impl<'a, R: RangeBounds<String>> DoubleEndedIterator for Range<'a, R> {
fn next_back(&mut self) -> Option<Self::Item> {
for i in (self.index..self.index_back).rev() {
let op = self.root_child.get(i)?;
self.index_back -= 1;
if Some(op.key) != self.last_key_back && op.visible() {
self.last_key_back = Some(op.key);
let prop = match op.key {
Key::Map(m) => self.meta.props.get(m),
Key::Seq(_) => panic!("can't iterate through lists backwards"),
};
if self.range.contains(prop) {
return Some((prop, op.value(), op.id));
}
}
}
None
}
}

@@ -1,88 +0,0 @@
use crate::clock::Clock;
use crate::op_tree::{OpSetMetadata, OpTreeNode};
use crate::types::{Key, OpId};
use crate::Value;
use std::fmt::Debug;
use std::ops::RangeBounds;
use super::VisWindow;
#[derive(Debug)]
pub(crate) struct RangeAt<'a, R: RangeBounds<String>> {
clock: Clock,
window: VisWindow,
range: R,
index: usize,
last_key: Option<Key>,
index_back: usize,
last_key_back: Option<Key>,
root_child: &'a OpTreeNode,
meta: &'a OpSetMetadata,
}
impl<'a, R: RangeBounds<String>> RangeAt<'a, R> {
pub(crate) fn new(
range: R,
root_child: &'a OpTreeNode,
meta: &'a OpSetMetadata,
clock: Clock,
) -> Self {
Self {
clock,
window: VisWindow::default(),
range,
index: 0,
last_key: None,
index_back: root_child.len(),
last_key_back: None,
root_child,
meta,
}
}
}
impl<'a, R: RangeBounds<String>> Iterator for RangeAt<'a, R> {
type Item = (&'a str, Value<'a>, OpId);
fn next(&mut self) -> Option<Self::Item> {
for i in self.index..self.index_back {
let op = self.root_child.get(i)?;
let visible = self.window.visible_at(op, i, &self.clock);
self.index += 1;
if Some(op.key) != self.last_key && visible {
self.last_key = Some(op.key);
let prop = match op.key {
Key::Map(m) => self.meta.props.get(m),
Key::Seq(_) => panic!("found list op in range query"),
};
if self.range.contains(prop) {
return Some((prop, op.value(), op.id));
}
}
}
None
}
}
impl<'a, R: RangeBounds<String>> DoubleEndedIterator for RangeAt<'a, R> {
fn next_back(&mut self) -> Option<Self::Item> {
for i in (self.index..self.index_back).rev() {
let op = self.root_child.get(i)?;
self.index_back -= 1;
if Some(op.key) != self.last_key_back && op.visible() {
self.last_key_back = Some(op.key);
let prop = match op.key {
Key::Map(m) => self.meta.props.get(m),
Key::Seq(_) => panic!("can't iterate through lists backwards"),
};
if self.range.contains(prop) {
return Some((prop, op.value(), op.id));
}
}
}
None
}
}

@@ -1,116 +0,0 @@
use crate::op_tree::{OpSetMetadata, OpTreeNode};
use crate::query::{binary_search_by, QueryResult, TreeQuery};
use crate::types::{Key, Op, HEAD};
use std::cmp::Ordering;
use std::fmt::Debug;
#[derive(Debug, Clone, PartialEq)]
pub(crate) struct SeekOp<'a> {
/// the op we are looking for
op: &'a Op,
/// The position to insert at
pub(crate) pos: usize,
/// The indices of ops that this op overwrites
pub(crate) succ: Vec<usize>,
/// whether a position has been found
found: bool,
}
impl<'a> SeekOp<'a> {
pub(crate) fn new(op: &'a Op) -> Self {
SeekOp {
op,
succ: vec![],
pos: 0,
found: false,
}
}
fn lesser_insert(&self, op: &Op, m: &OpSetMetadata) -> bool {
op.insert && m.lamport_cmp(op.id, self.op.id) == Ordering::Less
}
fn greater_opid(&self, op: &Op, m: &OpSetMetadata) -> bool {
m.lamport_cmp(op.id, self.op.id) == Ordering::Greater
}
fn is_target_insert(&self, op: &Op) -> bool {
op.insert && op.elemid() == self.op.key.elemid()
}
}
impl<'a> TreeQuery<'a> for SeekOp<'a> {
fn query_node_with_metadata(&mut self, child: &OpTreeNode, m: &OpSetMetadata) -> QueryResult {
if self.found {
return QueryResult::Descend;
}
match self.op.key {
Key::Seq(HEAD) => {
while self.pos < child.len() {
let op = child.get(self.pos).unwrap();
if op.insert && m.lamport_cmp(op.id, self.op.id) == Ordering::Less {
break;
}
self.pos += 1;
}
QueryResult::Finish
}
Key::Seq(e) => {
if child.index.ops.contains(&e.0) {
QueryResult::Descend
} else {
self.pos += child.len();
QueryResult::Next
}
}
Key::Map(_) => {
self.pos = binary_search_by(child, |op| m.key_cmp(&op.key, &self.op.key));
while self.pos < child.len() {
let op = child.get(self.pos).unwrap();
if op.key != self.op.key {
break;
}
if self.op.overwrites(op) {
self.succ.push(self.pos);
}
if m.lamport_cmp(op.id, self.op.id) == Ordering::Greater {
break;
}
self.pos += 1;
}
QueryResult::Finish
}
}
}
fn query_element_with_metadata(&mut self, e: &Op, m: &OpSetMetadata) -> QueryResult {
if !self.found {
if self.is_target_insert(e) {
self.found = true;
if self.op.overwrites(e) {
self.succ.push(self.pos);
}
}
self.pos += 1;
QueryResult::Next
} else {
// we have already found the target
if self.op.overwrites(e) {
self.succ.push(self.pos);
}
if self.op.insert {
if self.lesser_insert(e, m) {
QueryResult::Finish
} else {
self.pos += 1;
QueryResult::Next
}
} else if e.insert || self.greater_opid(e, m) {
QueryResult::Finish
} else {
self.pos += 1;
QueryResult::Next
}
}
}
}

@@ -1,255 +0,0 @@
use crate::op_tree::{OpSetMetadata, OpTreeNode};
use crate::query::{binary_search_by, QueryResult, TreeQuery};
use crate::types::{ElemId, Key, Op, HEAD};
use std::cmp::Ordering;
use std::fmt::Debug;
#[derive(Debug, Clone, PartialEq)]
pub(crate) struct SeekOpWithPatch<'a> {
op: Op,
pub(crate) pos: usize,
pub(crate) succ: Vec<usize>,
found: bool,
pub(crate) seen: usize,
last_seen: Option<ElemId>,
pub(crate) values: Vec<&'a Op>,
pub(crate) had_value_before: bool,
}
impl<'a> SeekOpWithPatch<'a> {
pub(crate) fn new(op: &Op) -> Self {
SeekOpWithPatch {
op: op.clone(),
succ: vec![],
pos: 0,
found: false,
seen: 0,
last_seen: None,
values: vec![],
had_value_before: false,
}
}
fn lesser_insert(&self, op: &Op, m: &OpSetMetadata) -> bool {
op.insert && m.lamport_cmp(op.id, self.op.id) == Ordering::Less
}
fn greater_opid(&self, op: &Op, m: &OpSetMetadata) -> bool {
m.lamport_cmp(op.id, self.op.id) == Ordering::Greater
}
fn is_target_insert(&self, op: &Op) -> bool {
op.insert && op.elemid() == self.op.key.elemid()
}
/// Keeps track of the number of visible list elements we have seen. Increments `self.seen` if
/// operation `e` associates a visible value with a list element, and if we have not already
/// counted that list element (this ensures that if a list element has several values, i.e.
/// a conflict, then it is still only counted once).
fn count_visible(&mut self, e: &Op) {
if e.elemid() == self.op.elemid() {
return;
}
if e.insert {
self.last_seen = None
}
if e.visible() && self.last_seen.is_none() {
self.seen += 1;
self.last_seen = e.elemid()
}
}
}
impl<'a> TreeQuery<'a> for SeekOpWithPatch<'a> {
fn query_node_with_metadata(
&mut self,
child: &'a OpTreeNode,
m: &OpSetMetadata,
) -> QueryResult {
if self.found {
return QueryResult::Descend;
}
match self.op.key {
// Special case for insertion at the head of the list (`e == HEAD` is only possible for
// an insertion operation). Skip over any list elements whose elemId is greater than
// the opId of the operation being inserted.
Key::Seq(e) if e == HEAD => {
while self.pos < child.len() {
let op = child.get(self.pos).unwrap();
if op.insert && m.lamport_cmp(op.id, self.op.id) == Ordering::Less {
break;
}
self.count_visible(op);
self.pos += 1;
}
QueryResult::Finish
}
// Updating a list: search for the tree node that contains the new operation's
// reference element (i.e. the element we're updating or inserting after)
Key::Seq(e) => {
if self.found || child.index.ops.contains(&e.0) {
QueryResult::Descend
} else {
self.pos += child.len();
// When we skip over a subtree, we need to count the number of visible list
// elements we're skipping over. Each node stores the number of visible
// elements it contains. However, it could happen that a visible element is
// split across two tree nodes. To avoid double-counting in this situation, we
// subtract one if the last visible element also appears in this tree node.
let mut num_vis = child.index.visible_len();
if num_vis > 0 {
// FIXME: I think this is wrong: we should subtract one only if this
// subtree contains a *visible* (i.e. empty succs) operation for the list
// element with elemId `last_seen`; this will subtract one even if all
// values for this list element have been deleted in this subtree.
if child.index.has_visible(&self.last_seen) {
num_vis -= 1;
}
self.seen += num_vis;
// FIXME: this is also wrong: `last_seen` needs to be the elemId of the
// last *visible* list element in this subtree, but I think this returns
// the last operation's elemId regardless of whether it's visible or not.
// This will lead to incorrect counting if `last_seen` is not visible: it's
// not counted towards `num_vis`, so we shouldn't be subtracting 1.
self.last_seen = child.last().elemid();
}
QueryResult::Next
}
}
// Updating a map: operations appear in sorted order by key
Key::Map(_) => {
// Search for the place where we need to insert the new operation. First find the
// first op with a key >= the key we're updating
self.pos = binary_search_by(child, |op| m.key_cmp(&op.key, &self.op.key));
while self.pos < child.len() {
// Iterate over any existing operations for the same key; stop when we reach an
// operation with a different key
let op = child.get(self.pos).unwrap();
if op.key != self.op.key {
break;
}
// Keep track of any ops we're overwriting and any conflicts on this key
if self.op.overwrites(op) {
// when we encounter an increment op we also want to find the counter for
// it.
if self.op.is_inc() && op.is_counter() && op.visible() {
self.values.push(op);
}
self.succ.push(self.pos);
} else if op.visible() {
self.values.push(op);
}
// Ops for the same key should be in ascending order of opId, so we break when
// we reach an op with an opId greater than that of the new operation
if m.lamport_cmp(op.id, self.op.id) == Ordering::Greater {
break;
}
self.pos += 1;
}
// For the purpose of reporting conflicts, we also need to take into account any
// ops for the same key that appear after the new operation
let mut later_pos = self.pos;
while later_pos < child.len() {
let op = child.get(later_pos).unwrap();
if op.key != self.op.key {
break;
}
// No need to check if `self.op.overwrites(op)` because an operation's `preds`
// must always have lower Lamport timestamps than that op itself, and the ops
// here all have greater opIds than the new op
if op.visible() {
self.values.push(op);
}
later_pos += 1;
}
QueryResult::Finish
}
}
}
// Only called when operating on a sequence (list/text) object, since updates of a map are
// handled in `query_node_with_metadata`.
fn query_element_with_metadata(&mut self, e: &'a Op, m: &OpSetMetadata) -> QueryResult {
let result = if !self.found {
// First search for the referenced list element (i.e. the element we're updating, or
// after which we're inserting)
if self.is_target_insert(e) {
self.found = true;
if self.op.overwrites(e) {
// when we encounter an increment op we also want to find the counter for
// it.
if self.op.is_inc() && e.is_counter() && e.visible() {
self.values.push(e);
}
self.succ.push(self.pos);
}
if e.visible() {
self.had_value_before = true;
}
}
self.pos += 1;
QueryResult::Next
} else {
// Once we've found the reference element, keep track of any ops that we're overwriting
let overwritten = self.op.overwrites(e);
if overwritten {
// when we encounter an increment op we also want to find the counter for
// it.
if self.op.is_inc() && e.is_counter() && e.visible() {
self.values.push(e);
}
self.succ.push(self.pos);
}
// If the new op is an insertion, skip over any existing list elements whose elemId is
// greater than the ID of the new insertion
if self.op.insert {
if self.lesser_insert(e, m) {
// Insert before the first existing list element whose elemId is less than that
// of the new insertion
QueryResult::Finish
} else {
self.pos += 1;
QueryResult::Next
}
} else if e.insert {
// If the new op is an update of an existing list element, the first insertion op
// we encounter after the reference element indicates the end of the reference elem
QueryResult::Finish
} else {
// When updating an existing list element, keep track of any conflicts on this list
// element. We also need to remember if the list element had any visible elements
// prior to applying the new operation: if not, the new operation is resurrecting
// a deleted list element, so it looks like an insertion in the patch.
if e.visible() {
self.had_value_before = true;
if !overwritten {
self.values.push(e);
}
}
// We now need to put the ops for the same list element into ascending order, so we
// skip over any ops whose ID is less than that of the new operation.
if !self.greater_opid(e, m) {
self.pos += 1;
}
QueryResult::Next
}
};
// The patch needs to know the list index of each operation, so we count the number of
// visible list elements up to the insertion position of the new operation
if result == QueryResult::Next {
self.count_visible(e);
}
result
}
}

@@ -1,378 +0,0 @@
use itertools::Itertools;
use std::{
borrow::Cow,
collections::{HashMap, HashSet},
io,
io::Write,
};
use crate::{
decoding, decoding::Decoder, encoding::Encodable, ApplyOptions, Automerge, AutomergeError,
Change, ChangeHash, OpObserver,
};
mod bloom;
mod state;
pub use bloom::BloomFilter;
pub use state::{Have, State};
const HASH_SIZE: usize = 32; // 256 bits = 32 bytes
const MESSAGE_TYPE_SYNC: u8 = 0x42; // first byte of a sync message, for identification
impl Automerge {
pub fn generate_sync_message(&self, sync_state: &mut State) -> Option<Message> {
let our_heads = self.get_heads();
let our_need = self.get_missing_deps(sync_state.their_heads.as_ref().unwrap_or(&vec![]));
let their_heads_set = if let Some(ref heads) = sync_state.their_heads {
heads.iter().collect::<HashSet<_>>()
} else {
HashSet::new()
};
let our_have = if our_need.iter().all(|hash| their_heads_set.contains(hash)) {
vec![self.make_bloom_filter(sync_state.shared_heads.clone())]
} else {
Vec::new()
};
if let Some(ref their_have) = sync_state.their_have {
if let Some(first_have) = their_have.first().as_ref() {
if !first_have
.last_sync
.iter()
.all(|hash| self.get_change_by_hash(hash).is_some())
{
let reset_msg = Message {
heads: our_heads,
need: Vec::new(),
have: vec![Have::default()],
changes: Vec::new(),
};
return Some(reset_msg);
}
}
}
let mut changes_to_send = if let (Some(their_have), Some(their_need)) = (
sync_state.their_have.as_ref(),
sync_state.their_need.as_ref(),
) {
self.get_changes_to_send(their_have.clone(), their_need)
} else {
Vec::new()
};
let heads_unchanged = sync_state.last_sent_heads == our_heads;
let heads_equal = if let Some(their_heads) = sync_state.their_heads.as_ref() {
their_heads == &our_heads
} else {
false
};
if heads_unchanged && heads_equal && changes_to_send.is_empty() {
return None;
}
// deduplicate the changes to send with those we have already sent
changes_to_send.retain(|change| !sync_state.sent_hashes.contains(&change.hash));
sync_state.last_sent_heads = our_heads.clone();
sync_state
.sent_hashes
.extend(changes_to_send.iter().map(|c| c.hash));
let sync_message = Message {
heads: our_heads,
have: our_have,
need: our_need,
changes: changes_to_send.into_iter().cloned().collect(),
};
Some(sync_message)
}
pub fn receive_sync_message(
&mut self,
sync_state: &mut State,
message: Message,
) -> Result<(), AutomergeError> {
self.receive_sync_message_with::<()>(sync_state, message, ApplyOptions::default())
}
pub fn receive_sync_message_with<'a, Obs: OpObserver>(
&mut self,
sync_state: &mut State,
message: Message,
options: ApplyOptions<'a, Obs>,
) -> Result<(), AutomergeError> {
let before_heads = self.get_heads();
let Message {
heads: message_heads,
changes: message_changes,
need: message_need,
have: message_have,
} = message;
let changes_is_empty = message_changes.is_empty();
if !changes_is_empty {
self.apply_changes_with(message_changes, options)?;
sync_state.shared_heads = advance_heads(
&before_heads.iter().collect(),
&self.get_heads().into_iter().collect(),
&sync_state.shared_heads,
);
}
// trim down the sent hashes to those that we know they haven't seen
self.filter_changes(&message_heads, &mut sync_state.sent_hashes);
if changes_is_empty && message_heads == before_heads {
sync_state.last_sent_heads = message_heads.clone();
}
let known_heads = message_heads
.iter()
.filter(|head| self.get_change_by_hash(head).is_some())
.collect::<Vec<_>>();
if known_heads.len() == message_heads.len() {
sync_state.shared_heads = message_heads.clone();
// If the remote peer has lost all its data, reset our state to perform a full resync
if message_heads.is_empty() {
sync_state.last_sent_heads = Default::default();
sync_state.sent_hashes = Default::default();
}
} else {
sync_state.shared_heads = sync_state
.shared_heads
.iter()
.chain(known_heads)
.copied()
.unique()
.sorted()
.collect::<Vec<_>>();
}
sync_state.their_have = Some(message_have);
sync_state.their_heads = Some(message_heads);
sync_state.their_need = Some(message_need);
Ok(())
}
fn make_bloom_filter(&self, last_sync: Vec<ChangeHash>) -> Have {
let new_changes = self.get_changes(&last_sync);
let hashes = new_changes
.into_iter()
.map(|change| change.hash)
.collect::<Vec<_>>();
Have {
last_sync,
bloom: BloomFilter::from(&hashes[..]),
}
}
fn get_changes_to_send(&self, have: Vec<Have>, need: &[ChangeHash]) -> Vec<&Change> {
if have.is_empty() {
need.iter()
.filter_map(|hash| self.get_change_by_hash(hash))
.collect()
} else {
let mut last_sync_hashes = HashSet::new();
let mut bloom_filters = Vec::with_capacity(have.len());
for h in have {
let Have { last_sync, bloom } = h;
for hash in last_sync {
last_sync_hashes.insert(hash);
}
bloom_filters.push(bloom);
}
let last_sync_hashes = last_sync_hashes.into_iter().collect::<Vec<_>>();
let changes = self.get_changes(&last_sync_hashes);
let mut change_hashes = HashSet::with_capacity(changes.len());
let mut dependents: HashMap<ChangeHash, Vec<ChangeHash>> = HashMap::new();
let mut hashes_to_send = HashSet::new();
for change in &changes {
change_hashes.insert(change.hash);
for dep in &change.deps {
dependents.entry(*dep).or_default().push(change.hash);
}
if bloom_filters
.iter()
.all(|bloom| !bloom.contains_hash(&change.hash))
{
hashes_to_send.insert(change.hash);
}
}
let mut stack = hashes_to_send.iter().copied().collect::<Vec<_>>();
while let Some(hash) = stack.pop() {
if let Some(deps) = dependents.get(&hash) {
for dep in deps {
if hashes_to_send.insert(*dep) {
stack.push(*dep);
}
}
}
}
let mut changes_to_send = Vec::new();
for hash in need {
hashes_to_send.insert(*hash);
if !change_hashes.contains(hash) {
let change = self.get_change_by_hash(hash);
if let Some(change) = change {
changes_to_send.push(change);
}
}
}
for change in changes {
if hashes_to_send.contains(&change.hash) {
changes_to_send.push(change);
}
}
changes_to_send
}
}
}
/// The sync message to be sent.
#[derive(Debug, Clone)]
pub struct Message {
/// The heads of the sender.
pub heads: Vec<ChangeHash>,
/// The hashes of any changes that are being explicitly requested from the recipient.
pub need: Vec<ChangeHash>,
/// A summary of the changes that the sender already has.
pub have: Vec<Have>,
/// The changes for the recipient to apply.
pub changes: Vec<Change>,
}
impl Message {
pub fn encode(self) -> Vec<u8> {
let mut buf = vec![MESSAGE_TYPE_SYNC];
encode_hashes(&mut buf, &self.heads);
encode_hashes(&mut buf, &self.need);
(self.have.len() as u32).encode_vec(&mut buf);
for have in self.have {
encode_hashes(&mut buf, &have.last_sync);
have.bloom.to_bytes().encode_vec(&mut buf);
}
(self.changes.len() as u32).encode_vec(&mut buf);
for mut change in self.changes {
change.compress();
change.raw_bytes().encode_vec(&mut buf);
}
buf
}
pub fn decode(bytes: &[u8]) -> Result<Message, decoding::Error> {
let mut decoder = Decoder::new(Cow::Borrowed(bytes));
let message_type = decoder.read::<u8>()?;
if message_type != MESSAGE_TYPE_SYNC {
return Err(decoding::Error::WrongType {
expected_one_of: vec![MESSAGE_TYPE_SYNC],
found: message_type,
});
}
let heads = decode_hashes(&mut decoder)?;
let need = decode_hashes(&mut decoder)?;
let have_count = decoder.read::<u32>()?;
let mut have = Vec::with_capacity(have_count as usize);
for _ in 0..have_count {
let last_sync = decode_hashes(&mut decoder)?;
let bloom_bytes: Vec<u8> = decoder.read()?;
let bloom = BloomFilter::try_from(bloom_bytes.as_slice())?;
have.push(Have { last_sync, bloom });
}
let change_count = decoder.read::<u32>()?;
let mut changes = Vec::with_capacity(change_count as usize);
for _ in 0..change_count {
let change = decoder.read()?;
changes.push(Change::from_bytes(change)?);
}
Ok(Message {
heads,
need,
have,
changes,
})
}
}
fn encode_hashes(buf: &mut Vec<u8>, hashes: &[ChangeHash]) {
debug_assert!(
hashes.windows(2).all(|h| h[0] <= h[1]),
"hashes were not sorted"
);
hashes.encode_vec(buf);
}
impl Encodable for &[ChangeHash] {
fn encode<W: Write>(&self, buf: &mut W) -> io::Result<usize> {
let head = self.len().encode(buf)?;
let mut body = 0;
for hash in self.iter() {
buf.write_all(&hash.0)?;
body += hash.0.len();
}
Ok(head + body)
}
}
fn decode_hashes(decoder: &mut Decoder<'_>) -> Result<Vec<ChangeHash>, decoding::Error> {
let length = decoder.read::<u32>()?;
let mut hashes = Vec::with_capacity(length as usize);
for _ in 0..length {
let hash_bytes = decoder.read_bytes(HASH_SIZE)?;
let hash = ChangeHash::try_from(hash_bytes).map_err(decoding::Error::BadChangeFormat)?;
hashes.push(hash);
}
Ok(hashes)
}
fn advance_heads(
my_old_heads: &HashSet<&ChangeHash>,
my_new_heads: &HashSet<ChangeHash>,
our_old_shared_heads: &[ChangeHash],
) -> Vec<ChangeHash> {
let new_heads = my_new_heads
.iter()
.filter(|head| !my_old_heads.contains(head))
.copied()
.collect::<Vec<_>>();
let common_heads = our_old_shared_heads
.iter()
.filter(|head| my_new_heads.contains(head))
.copied()
.collect::<Vec<_>>();
let mut advanced_heads = HashSet::with_capacity(new_heads.len() + common_heads.len());
for head in new_heads.into_iter().chain(common_heads) {
advanced_heads.insert(head);
}
let mut advanced_heads = advanced_heads.into_iter().collect::<Vec<_>>();
advanced_heads.sort();
advanced_heads
}

@@ -1,63 +0,0 @@
use std::{borrow::Cow, collections::HashSet};
use super::{decode_hashes, encode_hashes, BloomFilter};
use crate::{decoding, decoding::Decoder, ChangeHash};
const SYNC_STATE_TYPE: u8 = 0x43; // first byte of an encoded sync state, for identification
/// The state of synchronisation with a peer.
#[derive(Debug, Clone, Default)]
pub struct State {
pub shared_heads: Vec<ChangeHash>,
pub last_sent_heads: Vec<ChangeHash>,
pub their_heads: Option<Vec<ChangeHash>>,
pub their_need: Option<Vec<ChangeHash>>,
pub their_have: Option<Vec<Have>>,
pub sent_hashes: HashSet<ChangeHash>,
}
/// A summary of the changes that the sender of the message already has.
/// This is implicitly a request to the recipient to send all changes that the
/// sender does not already have.
#[derive(Debug, Clone, Default)]
pub struct Have {
/// The heads at the time of the last successful sync with this recipient.
pub last_sync: Vec<ChangeHash>,
/// A bloom filter summarising all of the changes that the sender of the message has added
/// since the last sync.
pub bloom: BloomFilter,
}
impl State {
pub fn new() -> Self {
Default::default()
}
pub fn encode(&self) -> Vec<u8> {
let mut buf = vec![SYNC_STATE_TYPE];
encode_hashes(&mut buf, &self.shared_heads);
buf
}
pub fn decode(bytes: &[u8]) -> Result<Self, decoding::Error> {
let mut decoder = Decoder::new(Cow::Borrowed(bytes));
let record_type = decoder.read::<u8>()?;
if record_type != SYNC_STATE_TYPE {
return Err(decoding::Error::WrongType {
expected_one_of: vec![SYNC_STATE_TYPE],
found: record_type,
});
}
let shared_heads = decode_hashes(&mut decoder)?;
Ok(Self {
shared_heads,
last_sent_heads: Vec::new(),
their_heads: None,
their_need: None,
their_have: Some(Vec::new()),
sent_hashes: HashSet::new(),
})
}
}
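Note the asymmetry in `decode`: only `shared_heads` survives a round trip, and the per-session fields are deliberately reset, so a persisted `State` restarts the bloom-filter bookkeeping cleanly. A small round-trip sketch (the `automerge::sync` path is an assumption; the asserted field values follow the `decode` body above):

```rust
use automerge::sync::State;

fn save_and_restore(state: &State) {
    let bytes = state.encode();                  // serialises shared_heads only
    let restored = State::decode(&bytes).unwrap();
    assert_eq!(restored.shared_heads, state.shared_heads);
    assert!(restored.their_heads.is_none());     // ephemeral fields start over
}
```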

@@ -1,395 +0,0 @@
use std::num::NonZeroU64;
use crate::automerge::Actor;
use crate::exid::ExId;
use crate::query::{self, OpIdSearch};
use crate::types::{Key, ObjId, OpId};
use crate::{change::export_change, types::Op, Automerge, ChangeHash, Prop};
use crate::{AutomergeError, ObjType, OpObserver, OpType, ScalarValue};
#[derive(Debug, Clone)]
pub(crate) struct TransactionInner {
pub(crate) actor: usize,
pub(crate) seq: u64,
pub(crate) start_op: NonZeroU64,
pub(crate) time: i64,
pub(crate) message: Option<String>,
pub(crate) extra_bytes: Vec<u8>,
pub(crate) hash: Option<ChangeHash>,
pub(crate) deps: Vec<ChangeHash>,
pub(crate) operations: Vec<(ObjId, Prop, Op)>,
}
impl TransactionInner {
pub(crate) fn pending_ops(&self) -> usize {
self.operations.len()
}
/// Commit the operations performed in this transaction, returning the hashes corresponding to
/// the new heads.
pub(crate) fn commit<Obs: OpObserver>(
mut self,
doc: &mut Automerge,
message: Option<String>,
time: Option<i64>,
op_observer: Option<&mut Obs>,
) -> ChangeHash {
if message.is_some() {
self.message = message;
}
if let Some(t) = time {
self.time = t;
}
if let Some(observer) = op_observer {
for (obj, prop, op) in &self.operations {
let ex_obj = doc.ops.id_to_exid(obj.0);
if op.insert {
let value = (op.value(), doc.id_to_exid(op.id));
match prop {
Prop::Map(_) => panic!("insert into a map"),
Prop::Seq(index) => observer.insert(ex_obj, *index, value),
}
} else if op.is_delete() {
observer.delete(ex_obj, prop.clone());
} else if let Some(value) = op.get_increment_value() {
observer.increment(ex_obj, prop.clone(), (value, doc.id_to_exid(op.id)));
} else {
let value = (op.value(), doc.ops.id_to_exid(op.id));
observer.put(ex_obj, prop.clone(), value, false);
}
}
}
let num_ops = self.pending_ops();
let change = export_change(self, &doc.ops.m.actors, &doc.ops.m.props);
let hash = change.hash;
doc.update_history(change, num_ops);
debug_assert_eq!(doc.get_heads(), vec![hash]);
hash
}
/// Undo the operations added in this transaction, returning the number of cancelled
/// operations.
pub(crate) fn rollback(self, doc: &mut Automerge) -> usize {
let num = self.pending_ops();
// remove in reverse order so sets are removed before makes etc...
for (obj, _prop, op) in self.operations.into_iter().rev() {
for pred_id in &op.pred {
if let Some(p) = doc.ops.search(&obj, OpIdSearch::new(*pred_id)).index() {
doc.ops.replace(&obj, p, |o| o.remove_succ(&op));
}
}
if let Some(pos) = doc.ops.search(&obj, OpIdSearch::new(op.id)).index() {
doc.ops.remove(&obj, pos);
}
}
// remove the actor from the cache so that it doesn't end up in the saved document
if doc.states.get(&self.actor).is_none() {
let actor = doc.ops.m.actors.remove_last();
doc.actor = Actor::Unused(actor);
}
num
}
/// Set the value of property `P` to value `V` in object `obj`.
///
/// # Errors
///
/// This will return an error if
/// - The object does not exist
/// - The key is the wrong type for the object
/// - The key does not exist in the object
pub(crate) fn put<P: Into<Prop>, V: Into<ScalarValue>>(
&mut self,
doc: &mut Automerge,
ex_obj: &ExId,
prop: P,
value: V,
) -> Result<(), AutomergeError> {
let obj = doc.exid_to_obj(ex_obj)?;
let value = value.into();
let prop = prop.into();
self.local_op(doc, obj, prop, value.into())?;
Ok(())
}
/// Set the value of property `P` to the new object `V` in object `obj`.
///
/// # Returns
///
/// The id of the object which was created.
///
/// # Errors
///
/// This will return an error if
/// - The object does not exist
/// - The key is the wrong type for the object
/// - The key does not exist in the object
pub(crate) fn put_object<P: Into<Prop>>(
&mut self,
doc: &mut Automerge,
ex_obj: &ExId,
prop: P,
value: ObjType,
) -> Result<ExId, AutomergeError> {
let obj = doc.exid_to_obj(ex_obj)?;
let prop = prop.into();
let id = self.local_op(doc, obj, prop, value.into())?.unwrap();
let id = doc.id_to_exid(id);
Ok(id)
}
fn next_id(&mut self) -> OpId {
OpId(self.start_op.get() + self.pending_ops() as u64, self.actor)
}
fn insert_local_op(
&mut self,
doc: &mut Automerge,
prop: Prop,
op: Op,
pos: usize,
obj: ObjId,
succ_pos: &[usize],
) {
for succ in succ_pos {
doc.ops.replace(&obj, *succ, |old_op| {
old_op.add_succ(&op);
});
}
if !op.is_delete() {
doc.ops.insert(pos, &obj, op.clone());
}
self.operations.push((obj, prop, op));
}
pub(crate) fn insert<V: Into<ScalarValue>>(
&mut self,
doc: &mut Automerge,
ex_obj: &ExId,
index: usize,
value: V,
) -> Result<(), AutomergeError> {
let obj = doc.exid_to_obj(ex_obj)?;
let value = value.into();
self.do_insert(doc, obj, index, value.into())?;
Ok(())
}
pub(crate) fn insert_object(
&mut self,
doc: &mut Automerge,
ex_obj: &ExId,
index: usize,
value: ObjType,
) -> Result<ExId, AutomergeError> {
let obj = doc.exid_to_obj(ex_obj)?;
let id = self.do_insert(doc, obj, index, value.into())?;
let id = doc.id_to_exid(id);
Ok(id)
}
fn do_insert(
&mut self,
doc: &mut Automerge,
obj: ObjId,
index: usize,
action: OpType,
) -> Result<OpId, AutomergeError> {
let id = self.next_id();
let query = doc.ops.search(&obj, query::InsertNth::new(index, id));
let key = query.key()?;
let op = Op {
id,
action,
key,
succ: Default::default(),
pred: Default::default(),
insert: true,
};
doc.ops.insert(query.pos(), &obj, op.clone());
self.operations.push((obj, Prop::Seq(index), op));
Ok(id)
}
pub(crate) fn local_op(
&mut self,
doc: &mut Automerge,
obj: ObjId,
prop: Prop,
action: OpType,
) -> Result<Option<OpId>, AutomergeError> {
match prop {
Prop::Map(s) => self.local_map_op(doc, obj, s, action),
Prop::Seq(n) => self.local_list_op(doc, obj, n, action),
}
}
fn local_map_op(
&mut self,
doc: &mut Automerge,
obj: ObjId,
prop: String,
action: OpType,
) -> Result<Option<OpId>, AutomergeError> {
if prop.is_empty() {
return Err(AutomergeError::EmptyStringKey);
}
let id = self.next_id();
let prop_index = doc.ops.m.props.cache(prop.clone());
let query = doc.ops.search(&obj, query::InsertProp::new(prop_index));
// no key present to delete
if query.ops.is_empty() && action == OpType::Delete {
return Ok(None);
}
if query.ops.len() == 1 && query.ops[0].is_noop(&action) {
return Ok(None);
}
// increment operations are only valid against counter values.
// if there are multiple values (from conflicts) then we just need one of them to be a counter.
if matches!(action, OpType::Increment(_)) && query.ops.iter().all(|op| !op.is_counter()) {
return Err(AutomergeError::MissingCounter);
}
let pred = query.ops.iter().map(|op| op.id).collect();
let op = Op {
id,
action,
key: Key::Map(prop_index),
succ: Default::default(),
pred,
insert: false,
};
let pos = query.pos;
let ops_pos = query.ops_pos;
self.insert_local_op(doc, Prop::Map(prop), op, pos, obj, &ops_pos);
Ok(Some(id))
}
fn local_list_op(
&mut self,
doc: &mut Automerge,
obj: ObjId,
index: usize,
action: OpType,
) -> Result<Option<OpId>, AutomergeError> {
let query = doc.ops.search(&obj, query::Nth::new(index));
let id = self.next_id();
let pred = query.ops.iter().map(|op| op.id).collect();
let key = query.key()?;
if query.ops.len() == 1 && query.ops[0].is_noop(&action) {
return Ok(None);
}
// increment operations are only valid against counter values.
// if there are multiple values (from conflicts) then we just need one of them to be a counter.
if matches!(action, OpType::Increment(_)) && query.ops.iter().all(|op| !op.is_counter()) {
return Err(AutomergeError::MissingCounter);
}
let op = Op {
id,
action,
key,
succ: Default::default(),
pred,
insert: false,
};
let pos = query.pos;
let ops_pos = query.ops_pos;
self.insert_local_op(doc, Prop::Seq(index), op, pos, obj, &ops_pos);
Ok(Some(id))
}
pub(crate) fn increment<P: Into<Prop>>(
&mut self,
doc: &mut Automerge,
obj: &ExId,
prop: P,
value: i64,
) -> Result<(), AutomergeError> {
let obj = doc.exid_to_obj(obj)?;
self.local_op(doc, obj, prop.into(), OpType::Increment(value))?;
Ok(())
}
pub(crate) fn delete<P: Into<Prop>>(
&mut self,
doc: &mut Automerge,
ex_obj: &ExId,
prop: P,
) -> Result<(), AutomergeError> {
let obj = doc.exid_to_obj(ex_obj)?;
let prop = prop.into();
self.local_op(doc, obj, prop, OpType::Delete)?;
Ok(())
}
/// Splice new elements into the given sequence: delete `del` elements starting at `pos`,
/// then insert the values from `vals` in order at `pos`.
pub(crate) fn splice(
&mut self,
doc: &mut Automerge,
ex_obj: &ExId,
mut pos: usize,
del: usize,
vals: impl IntoIterator<Item = ScalarValue>,
) -> Result<(), AutomergeError> {
let obj = doc.exid_to_obj(ex_obj)?;
for _ in 0..del {
// repeatedly delete the element at `pos`; later elements shift down each time
self.local_op(doc, obj, pos.into(), OpType::Delete)?;
}
for v in vals {
// insert each value, advancing `pos` past it
self.do_insert(doc, obj, pos, v.clone().into())?;
pos += 1;
}
Ok(())
}
}
#[cfg(test)]
mod tests {
use crate::{transaction::Transactable, ROOT};
use super::*;
#[test]
fn map_rollback_doesnt_panic() {
let mut doc = Automerge::new();
let mut tx = doc.transaction();
let a = tx.put_object(ROOT, "a", ObjType::Map).unwrap();
tx.put(&a, "b", 1).unwrap();
assert!(tx.get(&a, "b").unwrap().is_some());
}
}
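Beyond the rollback test above, `splice` deserves a usage sketch, since each deletion re-resolves index `pos` and each insertion advances it. A hedged example against the public transaction API (re-export paths, the `ScalarValue::Int` variant, and `commit()` on the concrete transaction type are assumptions; the trait methods mirror `Transactable` later in this diff):

```rust
use automerge::{transaction::Transactable, Automerge, ObjType, ScalarValue, ROOT};

fn splice_demo() {
    let mut doc = Automerge::new();
    let mut tx = doc.transaction();
    let list = tx.put_object(ROOT, "list", ObjType::List).unwrap();
    for (i, v) in [1i64, 2, 3, 4].into_iter().enumerate() {
        tx.insert(&list, i, v).unwrap();
    }
    // Delete the elements at indices 1 and 2, then insert a single 9: [1, 9, 4]
    tx.splice(&list, 1, 2, [ScalarValue::Int(9)]).unwrap();
    assert_eq!(tx.length(&list), 3);
    tx.commit(); // assumed to live on the concrete Transaction type
}
```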

@@ -1,176 +0,0 @@
use std::ops::RangeBounds;
use crate::exid::ExId;
use crate::{
AutomergeError, ChangeHash, Keys, KeysAt, ObjType, Parents, Prop, Range, RangeAt, ScalarValue,
Value, Values, ValuesAt,
};
/// A way of mutating a document within a single change.
pub trait Transactable {
/// Get the number of pending operations in this transaction.
fn pending_ops(&self) -> usize;
/// Set the value of property `P` to value `V` in object `obj`.
///
/// # Errors
///
/// This will return an error if
/// - The object does not exist
/// - The key is the wrong type for the object
/// - The key does not exist in the object
fn put<O: AsRef<ExId>, P: Into<Prop>, V: Into<ScalarValue>>(
&mut self,
obj: O,
prop: P,
value: V,
) -> Result<(), AutomergeError>;
/// Set the value of property `P` to the new object `V` in object `obj`.
///
/// # Returns
///
/// The id of the object which was created.
///
/// # Errors
///
/// This will return an error if
/// - The object does not exist
/// - The key is the wrong type for the object
/// - The key does not exist in the object
fn put_object<O: AsRef<ExId>, P: Into<Prop>>(
&mut self,
obj: O,
prop: P,
object: ObjType,
) -> Result<ExId, AutomergeError>;
/// Insert a value into a list at the given index.
fn insert<O: AsRef<ExId>, V: Into<ScalarValue>>(
&mut self,
obj: O,
index: usize,
value: V,
) -> Result<(), AutomergeError>;
/// Insert an object into a list at the given index.
fn insert_object<O: AsRef<ExId>>(
&mut self,
obj: O,
index: usize,
object: ObjType,
) -> Result<ExId, AutomergeError>;
/// Increment the counter at the prop in the object by `value`.
fn increment<O: AsRef<ExId>, P: Into<Prop>>(
&mut self,
obj: O,
prop: P,
value: i64,
) -> Result<(), AutomergeError>;
/// Delete the value at prop in the object.
fn delete<O: AsRef<ExId>, P: Into<Prop>>(
&mut self,
obj: O,
prop: P,
) -> Result<(), AutomergeError>;
fn splice<O: AsRef<ExId>, V: IntoIterator<Item = ScalarValue>>(
&mut self,
obj: O,
pos: usize,
del: usize,
vals: V,
) -> Result<(), AutomergeError>;
/// Like [`Self::splice`] but for text.
fn splice_text<O: AsRef<ExId>>(
&mut self,
obj: O,
pos: usize,
del: usize,
text: &str,
) -> Result<(), AutomergeError> {
let vals = text.chars().map(|c| c.into());
self.splice(obj, pos, del, vals)
}
/// Get the keys of the given object (the object should be a map).
fn keys<O: AsRef<ExId>>(&self, obj: O) -> Keys<'_, '_>;
/// Get the keys of the given object at a point in history.
fn keys_at<O: AsRef<ExId>>(&self, obj: O, heads: &[ChangeHash]) -> KeysAt<'_, '_>;
fn range<O: AsRef<ExId>, R: RangeBounds<String>>(&self, obj: O, range: R) -> Range<'_, R>;
fn range_at<O: AsRef<ExId>, R: RangeBounds<String>>(
&self,
obj: O,
range: R,
heads: &[ChangeHash],
) -> RangeAt<'_, R>;
fn values<O: AsRef<ExId>>(&self, obj: O) -> Values<'_>;
fn values_at<O: AsRef<ExId>>(&self, obj: O, heads: &[ChangeHash]) -> ValuesAt<'_>;
/// Get the length of the given object.
fn length<O: AsRef<ExId>>(&self, obj: O) -> usize;
/// Get the length of the given object at a point in history.
fn length_at<O: AsRef<ExId>>(&self, obj: O, heads: &[ChangeHash]) -> usize;
/// Get type for object
fn object_type<O: AsRef<ExId>>(&self, obj: O) -> Option<ObjType>;
/// Get the string that this text object represents.
fn text<O: AsRef<ExId>>(&self, obj: O) -> Result<String, AutomergeError>;
/// Get the string that this text object represents at a point in history.
fn text_at<O: AsRef<ExId>>(
&self,
obj: O,
heads: &[ChangeHash],
) -> Result<String, AutomergeError>;
/// Get the value at this prop in the object.
fn get<O: AsRef<ExId>, P: Into<Prop>>(
&self,
obj: O,
prop: P,
) -> Result<Option<(Value<'_>, ExId)>, AutomergeError>;
/// Get the value at this prop in the object at a point in history.
fn get_at<O: AsRef<ExId>, P: Into<Prop>>(
&self,
obj: O,
prop: P,
heads: &[ChangeHash],
) -> Result<Option<(Value<'_>, ExId)>, AutomergeError>;
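/// Get all of the possibly conflicting values at this prop in the object.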
fn get_all<O: AsRef<ExId>, P: Into<Prop>>(
&self,
obj: O,
prop: P,
) -> Result<Vec<(Value<'_>, ExId)>, AutomergeError>;
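/// Like [`Self::get_all`] but at a point in history.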
fn get_all_at<O: AsRef<ExId>, P: Into<Prop>>(
&self,
obj: O,
prop: P,
heads: &[ChangeHash],
) -> Result<Vec<(Value<'_>, ExId)>, AutomergeError>;
/// Get the object id of the object that contains this object and the prop that this object is
/// at in that object.
fn parent_object<O: AsRef<ExId>>(&self, obj: O) -> Option<(ExId, Prop)>;
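/// Get an iterator over the parents of the given object.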
fn parents(&self, obj: ExId) -> Parents<'_>;
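/// Get the path from the root of the document to this object, as a list of `(object, prop)` pairs.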
fn path_to_object<O: AsRef<ExId>>(&self, obj: O) -> Vec<(ExId, Prop)> {
let mut path = self.parents(obj.as_ref().clone()).collect::<Vec<_>>();
path.reverse();
path
}
}

View file

@ -1,36 +0,0 @@
use crate::{exid::ExId, Value};
use std::ops::RangeFull;
use crate::{query, Automerge};
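/// An iterator over the keys and values of a map object.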
#[derive(Debug)]
pub struct Values<'a> {
range: Option<query::Range<'a, RangeFull>>,
doc: &'a Automerge,
}
impl<'a> Values<'a> {
pub(crate) fn new(doc: &'a Automerge, range: Option<query::Range<'a, RangeFull>>) -> Self {
Self { range, doc }
}
}
impl<'a> Iterator for Values<'a> {
type Item = (&'a str, Value<'a>, ExId);
fn next(&mut self) -> Option<Self::Item> {
self.range
.as_mut()?
.next()
.map(|(key, value, id)| (key, value, self.doc.id_to_exid(id)))
}
}
impl<'a> DoubleEndedIterator for Values<'a> {
fn next_back(&mut self) -> Option<Self::Item> {
self.range
.as_mut()?
.next_back()
.map(|(key, value, id)| (key, value, self.doc.id_to_exid(id)))
}
}

View file

@ -1,36 +0,0 @@
use crate::{exid::ExId, Value};
use std::ops::RangeFull;
use crate::{query, Automerge};
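/// An iterator over the keys and values of a map object at a point in history.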
#[derive(Debug)]
pub struct ValuesAt<'a> {
range: Option<query::RangeAt<'a, RangeFull>>,
doc: &'a Automerge,
}
impl<'a> ValuesAt<'a> {
pub(crate) fn new(doc: &'a Automerge, range: Option<query::RangeAt<'a, RangeFull>>) -> Self {
Self { range, doc }
}
}
impl<'a> Iterator for ValuesAt<'a> {
type Item = (&'a str, Value<'a>, ExId);
fn next(&mut self) -> Option<Self::Item> {
self.range
.as_mut()?
.next()
.map(|(key, value, id)| (key, value, self.doc.id_to_exid(id)))
}
}
impl<'a> DoubleEndedIterator for ValuesAt<'a> {
fn next_back(&mut self) -> Option<Self::Item> {
self.range
.as_mut()?
.next_back()
.map(|(key, value, id)| (key, value, self.doc.id_to_exid(id)))
}
}

View file

@ -1,52 +0,0 @@
Try the different editing traces on different automerge implementations
### Automerge Experiment - pure rust
```code
# cargo run --release
```
#### Benchmarks
There are some criterion benchmarks in the `benches` folder which can be run with `cargo bench` or `cargo criterion`.
For flamegraphing, `cargo flamegraph --bench main -- --bench "save" # or "load" or "replay" or nothing` can be useful.
### Automerge Experiment - wasm api
```code
# node automerge-wasm.js
```
### Automerge Experiment - JS wrapper
```code
# node automerge-js.js
```
### Automerge 1.0 pure javascript - new fast backend
This assumes automerge has been checked out in a directory alongside this repo.
```code
# node automerge-1.0.js
```
### Automerge 1.0 with rust backend
This assumes automerge has been checked out in a directory alongside this repo.
```code
# node automerge-rs.js
```
### Baseline Test. Javascript Array with no CRDT info
```code
# node baseline.js
```

View file

@ -1,31 +0,0 @@
// this assumes that the automerge-rs folder is checked out along side this repo
// and someone has run
// # cd automerge-rs/automerge-backend-wasm
// # yarn release
const { edits, finalText } = require('./editing-trace')
const Automerge = require('../../automerge')
const path = require('path')
const wasmBackend = require(path.resolve("../../automerge-rs/automerge-backend-wasm"))
Automerge.setDefaultBackend(wasmBackend)
const start = new Date()
let state = Automerge.from({text: new Automerge.Text()})
state = Automerge.change(state, doc => {
for (let i = 0; i < edits.length; i++) {
if (i % 10000 === 0) {
console.log(`Processed ${i} edits in ${new Date() - start} ms`)
}
if (edits[i][1] > 0) doc.text.deleteAt(edits[i][0], edits[i][1])
if (edits[i].length > 2) doc.text.insertAt(edits[i][0], ...edits[i].slice(2))
}
})
console.log(`Done in ${new Date() - start} ms`)
if (state.text.join('') !== finalText) {
throw new RangeError('ERROR: final text did not match expectation')
}

View file

@ -2,11 +2,11 @@
   "nodes": {
     "flake-utils": {
       "locked": {
-        "lastModified": 1642700792,
-        "narHash": "sha256-XqHrk7hFb+zBvRg6Ghl+AZDq03ov6OshJLiSWOoX5es=",
+        "lastModified": 1667395993,
+        "narHash": "sha256-nuEHfE/LcWyuSWnS8t12N1wc105Qtau+/OdUAjtQ0rA=",
         "owner": "numtide",
         "repo": "flake-utils",
-        "rev": "846b2ae0fc4cc943637d3d1def4454213e203cba",
+        "rev": "5aed5285a952e0b949eb3ba02c12fa4fcfef535f",
         "type": "github"
       },
       "original": {
@ -17,11 +17,11 @@
     },
     "flake-utils_2": {
       "locked": {
-        "lastModified": 1637014545,
-        "narHash": "sha256-26IZAc5yzlD9FlDT54io1oqG/bBoyka+FJk5guaX4x4=",
+        "lastModified": 1659877975,
+        "narHash": "sha256-zllb8aq3YO3h8B/U0/J1WBgAL8EX5yWf5pMj3G0NAmc=",
         "owner": "numtide",
         "repo": "flake-utils",
-        "rev": "bba5dcc8e0b20ab664967ad83d24d64cb64ec4f4",
+        "rev": "c0e246b9b83f637f4681389ecabcb2681b4f3af0",
         "type": "github"
       },
       "original": {
@ -32,11 +32,11 @@
     },
     "nixpkgs": {
       "locked": {
-        "lastModified": 1643805626,
-        "narHash": "sha256-AXLDVMG+UaAGsGSpOtQHPIKB+IZ0KSd9WS77aanGzgc=",
+        "lastModified": 1669542132,
+        "narHash": "sha256-DRlg++NJAwPh8io3ExBJdNW7Djs3plVI5jgYQ+iXAZQ=",
         "owner": "nixos",
         "repo": "nixpkgs",
-        "rev": "554d2d8aa25b6e583575459c297ec23750adb6cb",
+        "rev": "a115bb9bd56831941be3776c8a94005867f316a7",
         "type": "github"
       },
       "original": {
@ -48,11 +48,11 @@
     },
     "nixpkgs_2": {
       "locked": {
-        "lastModified": 1637453606,
-        "narHash": "sha256-Gy6cwUswft9xqsjWxFYEnx/63/qzaFUwatcbV5GF/GQ=",
+        "lastModified": 1665296151,
+        "narHash": "sha256-uOB0oxqxN9K7XGF1hcnY+PQnlQJ+3bP2vCn/+Ru/bbc=",
         "owner": "NixOS",
         "repo": "nixpkgs",
-        "rev": "8afc4e543663ca0a6a4f496262cd05233737e732",
+        "rev": "14ccaaedd95a488dd7ae142757884d8e125b3363",
         "type": "github"
       },
       "original": {
@ -75,11 +75,11 @@
       "nixpkgs": "nixpkgs_2"
     },
     "locked": {
-      "lastModified": 1643941258,
-      "narHash": "sha256-uHyEuICSu8qQp6adPTqV33ajiwoF0sCh+Iazaz5r7fo=",
+      "lastModified": 1669775522,
+      "narHash": "sha256-6xxGArBqssX38DdHpDoPcPvB/e79uXyQBwpBcaO/BwY=",
       "owner": "oxalica",
       "repo": "rust-overlay",
-      "rev": "674156c4c2f46dd6a6846466cb8f9fee84c211ca",
+      "rev": "3158e47f6b85a288d12948aeb9a048e0ed4434d6",
       "type": "github"
     },
     "original": {

View file

@ -3,36 +3,39 @@
   inputs = {
     nixpkgs.url = "github:nixos/nixpkgs/nixos-unstable";
-    flake-utils = {
-      url = "github:numtide/flake-utils";
-      inputs.nixpkgs.follows = "nixpkgs";
-    };
+    flake-utils.url = "github:numtide/flake-utils";
     rust-overlay.url = "github:oxalica/rust-overlay";
   };
-  outputs = { self, nixpkgs, flake-utils, rust-overlay }:
+  outputs = {
+    self,
+    nixpkgs,
+    flake-utils,
+    rust-overlay,
+  }:
     flake-utils.lib.eachDefaultSystem
-      (system:
-        let
-          pkgs = import nixpkgs {
-            overlays = [ rust-overlay.overlay ];
-            inherit system;
-          };
-          lib = pkgs.lib;
-          rust = pkgs.rust-bin.stable.latest.default;
-          cargoNix = pkgs.callPackage ./Cargo.nix {
-            inherit pkgs;
-            release = true;
-          };
-          debugCargoNix = pkgs.callPackage ./Cargo.nix {
-            inherit pkgs;
-            release = false;
-          };
-        in
-        {
-          devShell = pkgs.mkShell {
-            buildInputs = with pkgs;
-              [
+    (system: let
+      pkgs = import nixpkgs {
+        overlays = [rust-overlay.overlays.default];
+        inherit system;
+      };
+      rust = pkgs.rust-bin.stable.latest.default;
+    in {
+      formatter = pkgs.alejandra;
+
+      packages = {
+        deadnix = pkgs.runCommand "deadnix" {} ''
+          ${pkgs.deadnix}/bin/deadnix --fail ${./.}
+          mkdir $out
+        '';
+      };
+
+      checks = {
+        inherit (self.packages.${system}) deadnix;
+      };
+
+      devShells.default = pkgs.mkShell {
+        buildInputs = with pkgs; [
         (rust.override {
           extensions = ["rust-src"];
           targets = ["wasm32-unknown-unknown"];
@ -51,9 +54,12 @@
           nodejs
           yarn
+          deno
+
+          # c deps
           cmake
           cmocka
+          doxygen
           rnix-lsp
           nixpkgs-fmt

View file

@ -0,0 +1,3 @@
{
"replacer": "scripts/denoify-replacer.mjs"
}

2
javascript/.eslintignore Normal file
View file

@ -0,0 +1,2 @@
dist
examples

15
javascript/.eslintrc.cjs Normal file
View file

@ -0,0 +1,15 @@
module.exports = {
root: true,
parser: "@typescript-eslint/parser",
plugins: ["@typescript-eslint"],
extends: ["eslint:recommended", "plugin:@typescript-eslint/recommended"],
rules: {
"@typescript-eslint/no-unused-vars": [
"error",
{
argsIgnorePattern: "^_",
varsIgnorePattern: "^_",
},
],
},
}

6
javascript/.gitignore vendored Normal file
View file

@ -0,0 +1,6 @@
/node_modules
/yarn.lock
dist
docs/
.vim
deno_dist/

View file

@ -0,0 +1,4 @@
e2e/verdacciodb
dist
docs
deno_dist

4
javascript/.prettierrc Normal file
View file

@ -0,0 +1,4 @@
{
"semi": false,
"arrowParens": "avoid"
}

39
javascript/HACKING.md Normal file
View file

@ -0,0 +1,39 @@
## Architecture
The `@automerge/automerge` package is a set of
[`Proxy`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Proxy)
objects which provide an idiomatic javascript interface built on top of the
lower level `@automerge/automerge-wasm` package (which is in turn built from the
Rust codebase and can be found in `~/automerge-wasm`). I.e. the responsibility
of this codebase is
- To map from the javascript data model to the underlying `set`, `make`,
`insert`, and `delete` operations of Automerge.
- To expose a more convenient interface to functions in `automerge-wasm` which
generate messages to send over the network or compressed file formats to store
on disk
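To make the first of these responsibilities concrete, here is a deliberately
simplified sketch of the proxy idea. It is not the real implementation, and
`wasmDoc` with its `get`/`set` methods is a hypothetical stand-in for the
lower-level `@automerge/automerge-wasm` document API:

```javascript
// A simplified sketch, not the real implementation: property reads are
// delegated to the wasm document, and property writes become low-level
// `set` operations on it. `wasmDoc` is a hypothetical stand-in.
function mapProxy(wasmDoc, objId) {
  return new Proxy(
    {},
    {
      // Reads go straight to the underlying wasm document
      get: (_target, prop) => wasmDoc.get(objId, String(prop)),
      // Assignments are translated into low-level `set` operations
      set: (_target, prop, value) => {
        wasmDoc.set(objId, String(prop), value)
        return true
      },
    }
  )
}
```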
## Building and testing
Much of the functionality of this package depends on the
`@automerge/automerge-wasm` package and frequently you will be working on both
of them at the same time. It would be frustrating to have to push
`automerge-wasm` to NPM every time you want to test a change but I (Alex) also
don't trust `yarn link` to do the right thing here. Therefore, the `./e2e`
folder contains a little yarn package which spins up a local NPM registry. See
`./e2e/README` for details. In brief though:
To build `automerge-wasm` and install it in the local `node_modules`
```bash
cd e2e && yarn install && yarn run e2e buildjs
```
Now that you've done this you can run the tests
```bash
yarn test
```
If you make changes to the `automerge-wasm` package you will need to re-run
`yarn e2e buildjs`.

109
javascript/README.md Normal file
View file

@ -0,0 +1,109 @@
## Automerge
Automerge is a library of data structures for building collaborative
applications; this package is the javascript implementation.
Detailed documentation is available at [automerge.org](http://automerge.org/)
but see the following for a short getting started guide.
## Quickstart
First, install the library.
```
yarn add @automerge/automerge
```
If you're writing a `node` application, you can skip straight to [Make some
data](#make-some-data). If you're in a browser you need a bundler.
### Bundler setup
`@automerge/automerge` is a wrapper around a core library which is written in
rust, compiled to WebAssembly and distributed as a separate package called
`@automerge/automerge-wasm`. Browsers don't currently support WebAssembly
modules taking part in ESM module imports, so you must use a bundler to import
`@automerge/automerge` in the browser. There are a lot of bundlers out there, we
have examples for common bundlers in the `examples` folder. Here is a short
example using Webpack 5.
Assuming a standard setup of a new webpack project, you'll need to enable the
`asyncWebAssembly` experiment. In a typical webpack project that means adding
something like this to `webpack.config.js`
```javascript
module.exports = {
...
experiments: { asyncWebAssembly: true },
performance: { // we don't want the wasm blob to generate warnings
hints: false,
maxEntrypointSize: 512000,
maxAssetSize: 512000
}
};
```
### Make some data
Automerge allows separate threads of execution to make changes to some data
and always be able to merge their changes later.
```javascript
import * as automerge from "@automerge/automerge"
import * as assert from "assert"
let doc1 = automerge.from({
tasks: [
{ description: "feed fish", done: false },
{ description: "water plants", done: false },
],
})
// Create a new thread of execution
let doc2 = automerge.clone(doc1)
// Now we concurrently make changes to doc1 and doc2
// Complete a task in doc2
doc2 = automerge.change(doc2, d => {
d.tasks[0].done = true
})
// Add a task in doc1
doc1 = automerge.change(doc1, d => {
d.tasks.push({
description: "water fish",
done: false,
})
})
// Merge changes from both docs
doc1 = automerge.merge(doc1, doc2)
doc2 = automerge.merge(doc2, doc1)
// Both docs are merged and identical
assert.deepEqual(doc1, {
tasks: [
{ description: "feed fish", done: true },
{ description: "water plants", done: false },
{ description: "water fish", done: false },
],
})
assert.deepEqual(doc2, {
tasks: [
{ description: "feed fish", done: true },
{ description: "water plants", done: false },
{ description: "water fish", done: false },
],
})
```
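### Save and load
Documents carry their editing history with them and can be serialized to a
compact binary format for storage or transmission. A minimal sketch reusing
`doc1` from the example above (see [automerge.org](http://automerge.org/) for
the full `save`/`load` API):
```javascript
import * as automerge from "@automerge/automerge"

// Serialize the document, including its editing history, to a Uint8Array
const bytes = automerge.save(doc1)

// The bytes can be stored on disk or sent over the network, then loaded
// again to reconstruct an equivalent document
const restored = automerge.load(bytes)
```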
## Development
See [HACKING.md](./HACKING.md)
## Meta
Copyright 2017-present, the Automerge contributors. Released under the terms of the
MIT license (see `LICENSE`).

View file

@ -0,0 +1,12 @@
{
"extends": "../tsconfig.json",
"exclude": [
"../dist/**/*",
"../node_modules",
"../test/**/*",
"../src/**/*.deno.ts"
],
"compilerOptions": {
"outDir": "../dist/cjs"
}
}

View file

@ -0,0 +1,13 @@
{
"extends": "../tsconfig.json",
"exclude": [
"../dist/**/*",
"../node_modules",
"../test/**/*",
"../src/**/*.deno.ts"
],
"emitDeclarationOnly": true,
"compilerOptions": {
"outDir": "../dist"
}
}

View file

@ -0,0 +1,14 @@
{
"extends": "../tsconfig.json",
"exclude": [
"../dist/**/*",
"../node_modules",
"../test/**/*",
"../src/**/*.deno.ts"
],
"compilerOptions": {
"target": "es6",
"module": "es6",
"outDir": "../dist/mjs"
}
}

View file

@ -0,0 +1,10 @@
import * as Automerge from "../deno_dist/index.ts"
Deno.test("It should create, clone and free", () => {
let doc1 = Automerge.init()
let doc2 = Automerge.clone(doc1)
// this is only needed if weakrefs are not supported
Automerge.free(doc1)
Automerge.free(doc2)
})

3
javascript/e2e/.gitignore vendored Normal file
View file

@ -0,0 +1,3 @@
node_modules/
verdacciodb/
htpasswd

70
javascript/e2e/README.md Normal file
View file

@ -0,0 +1,70 @@
# End to end testing for javascript packaging
The network of packages and bundlers we rely on to get the `automerge` package
working is a little complex. We have the `automerge-wasm` package, which the
`automerge` package depends upon, which means that anyone who depends on
`automerge` needs to either a) be using node or b) use a bundler in order to
load the underlying WASM module which is packaged in `automerge-wasm`.
The various bundlers involved are complicated and capricious and so we need an
easy way of testing that everything is in fact working as expected. To do this
we run a custom NPM registry (namely [Verdaccio](https://verdaccio.org/)) and
build the `automerge-wasm` and `automerge` packages and publish them to this
registry. Once we have this registry running we are able to build the example
projects which depend on these packages and check that everything works as
expected.
## Usage
First, install everything:
```
yarn install
```
### Build `automerge-js`
This builds the `automerge-wasm` package and then runs `yarn build` in the
`automerge-js` project with the `--registry` set to the verdaccio registry. The
end result is that you can run `yarn test` in the resulting `automerge-js`
directory in order to run tests against the current `automerge-wasm`.
```
yarn e2e buildjs
```
### Build examples
This builds either all the examples in `automerge-js/examples` or just a subset
of them. Once this is complete you can run the relevant scripts (e.g. `vite dev`
for the Vite example) to check everything works.
```
yarn e2e buildexamples
```
Or, to just build the webpack example
```
yarn e2e buildexamples -e webpack
```
### Run Registry
If you're experimenting with a project which is not in the `examples` folder
you'll need a running registry. `run-registry` builds and publishes
`automerge-js` and `automerge-wasm` and then runs the registry at
`localhost:4873`.
```
yarn e2e run-registry
```
You can now run `yarn install --registry http://localhost:4873` to experiment
with the built packages.
## Using the `dev` build of `automerge-wasm`
All the commands above take a `-p` flag which can be either `release` or
`debug`. The `debug` builds with additional debug symbols which makes errors
less cryptic.

Some files were not shown because too many files have changed in this diff.